Neural Network Basics: Weights, Sum, Activation

A model neuron turns inputs into an output in three steps: it weights them, sums them, and applies a simple rule. The sections below walk through each step.

Plain language version. Inputs (x1, x2) are the information you feed into the neuron. Weights (w1, w2) are importance settings: a large positive weight makes that input matter more; a negative weight makes it push the output down. The bias (b) shifts the overall tendency to fire. The neuron adds everything up and passes the sum through an activation function that squashes or thresholds the value.

Notation: s = Σ_{i=1}^{n} w_i·x_i + b,  y = f(s)

Example with all inputs, weights, and bias at 0 (sigmoid activation): sum s = 0.00, output y = 0.50.
Step turns the sum into 0 or 1. Sigmoid turns it into a probability-like number between 0 and 1. ReLU keeps positives and clips negatives to 0.
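The three activations described above can be sketched in a few lines of Python (the function names are mine, not from any library):

```python
import math

def step(s):
    # Hard threshold: output 1 if the sum is non-negative, else 0.
    return 1.0 if s >= 0 else 0.0

def sigmoid(s):
    # Smooth squash into (0, 1); sigmoid(0) = 0.5.
    return 1.0 / (1.0 + math.exp(-s))

def relu(s):
    # Keep positive sums, clip negatives to 0.
    return max(0.0, s)

print(step(0.0), sigmoid(0.0), relu(-2.0))  # → 1.0 0.5 0.0
```

Note that the step function needs a convention at exactly s = 0; here it fires, which is the common choice.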
[Diagram: inputs x1 and x2 are scaled by weights w1 and w2, summed at Σ together with bias b, and passed through f(s) to produce output y.]
Computation: the neuron sums weighted inputs, adds bias, then applies f(s).
Example values below assume all inputs, weights, and the bias are 0, with ReLU activation (so f(0) = 0):

Step  Expression                  Value
1     w1 × x1                     0
2     w2 × x2                     0
3     s = (w1×x1 + w2×x2) + b     0
4     y = f(s)                    0
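The four steps in the table can be written as one small Python function (variable names follow the notation above; sigmoid is chosen here just as an example activation):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def neuron(x1, x2, w1, w2, b, f):
    p1 = w1 * x1        # step 1: w1 × x1
    p2 = w2 * x2        # step 2: w2 × x2
    s = p1 + p2 + b     # step 3: weighted sum plus bias
    return f(s)         # step 4: apply the activation

# With everything at 0, the sum is 0 and sigmoid gives 0.5.
print(neuron(0, 0, 0, 0, 0, sigmoid))  # → 0.5
```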

From one neuron to a network. A single layer means inputs connect directly to the output. A two-layer network has one hidden layer between inputs and outputs. A three-layer network has two hidden layers. More layers allow the network to build more complex intermediate features.

Single layer (inputs to output)
Two layer (one hidden layer)
Three layer (two hidden layers)
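One way to sketch the layered forward pass described above (a minimal illustration, not a library implementation; the weight and bias values are arbitrary examples):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weights, biases):
    # Each output neuron: weighted sum of all inputs, plus its bias,
    # passed through the activation.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]

# Single layer: inputs connect directly to the output neuron.
y_single = layer(x, [[0.8, -0.2]], [0.1])

# Two layers: a hidden layer of three neurons builds intermediate
# features, and the output neuron combines them.
hidden = layer(x, [[1.0, 0.5], [-0.3, 0.7], [0.2, 0.2]], [0.0, 0.1, -0.1])
y_two = layer(hidden, [[0.6, -0.4, 0.9]], [0.0])

print(y_single, y_two)
```

Stacking another `layer` call on top of `hidden` would give the three-layer case; each added layer composes the features computed by the one before it.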