A model neuron turns inputs into an output by weighting them, summing them, and applying a simple rule. Use the sliders to see each step.
Plain language version. Inputs (x1, x2) are the information you feed into the neuron. Weights (w1, w2) are importance settings: a large positive weight makes that input matter more; a negative weight makes it push the output down. The bias (b) shifts the overall tendency to fire. The neuron adds everything up and passes the sum through an activation function that squashes or thresholds the value.
The computation, step by step:
| Step | Expression | Value |
|---|---|---|
| 1 | w1 × x1 | 0 |
| 2 | w2 × x2 | 0 |
| 3 | s = w1 × x1 + w2 × x2 + b | 0 |
| 4 | y = f(s) | 0 |
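The four steps in the table can be sketched in a few lines of Python. The sigmoid is used here as the activation function f, purely as an illustrative choice; the page's slider demo may use a different one.

```python
import math

def neuron(x1, x2, w1, w2, b, f=lambda s: 1 / (1 + math.exp(-s))):
    """One model neuron: weighted sum plus bias, passed through activation f."""
    s = w1 * x1 + w2 * x2 + b  # steps 1-3: weight each input, sum, add bias
    return f(s)                # step 4: squash the sum with the activation

# With both weights and the bias at 0, the sum s is 0 and sigmoid(0) = 0.5.
print(neuron(x1=1.0, x2=1.0, w1=0.0, w2=0.0, b=0.0))  # → 0.5
```

Changing a weight from the default scales that input's contribution to s before the activation is applied, which is exactly what moving a slider does.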
From one neuron to a network. A single layer means inputs connect directly to the output. A two-layer network has one hidden layer between inputs and outputs. A three-layer network has two hidden layers. More layers allow the network to build more complex intermediate features.
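Stacking layers amounts to feeding each layer's outputs into the next layer as inputs. A minimal sketch, again using sigmoid and made-up weights for illustration:

```python
import math

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

def layer(inputs, weights, biases):
    """One layer: each neuron takes a weighted sum of all inputs plus its bias."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A two-layer network: inputs -> hidden layer (2 neurons) -> output (1 neuron).
x = [1.0, 0.5]
hidden = layer(x, weights=[[0.4, -0.6], [0.9, 0.1]], biases=[0.0, -0.2])
y = layer(hidden, weights=[[1.0, -1.0]], biases=[0.1])
print(y)
```

The hidden layer's outputs act as intermediate features; adding a third layer would simply call `layer` once more on those outputs.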