Gradient descent

Gradient descent is an optimization technique used in machine learning to minimize a model’s error by iteratively adjusting its parameters. The loss function measures how wrong the model is, and the gradients of that loss indicate the direction in which each parameter should be changed to reduce it.
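The idea can be sketched on a single parameter. This is a minimal illustration, not from the text: the function f(w) = (w − 3)², its gradient 2(w − 3), and the learning rate 0.1 are all arbitrary choices made for the example.

```python
def grad(w):
    # Gradient of f(w) = (w - 3)**2, which has its minimum at w = 3.
    return 2.0 * (w - 3.0)

w = 0.0   # initial parameter guess
lr = 0.1  # learning rate (step size), chosen for illustration
for _ in range(100):
    w -= lr * grad(w)  # step opposite the gradient to reduce the loss

print(round(w, 4))  # approaches the minimum at w = 3
```

Each iteration moves the parameter a small step against the gradient, so the loss shrinks until the parameter settles near the minimum.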

Types of gradient descent include:

  • batch gradient descent: Updates parameters using the gradient calculated from the entire dataset;
  • stochastic gradient descent (SGD): Updates parameters after each individual training example (one data point); and
  • mini-batch gradient descent: Updates parameters using small subsets of the dataset, balancing computational efficiency and stability.
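The three variants differ only in how much data feeds each gradient estimate. The sketch below, a hedged illustration on least-squares linear regression, makes that concrete; the synthetic data, learning rate, epoch count, and batch size of 16 are assumptions invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))       # toy dataset (64 examples, 2 features)
true_w = np.array([2.0, -1.0])     # parameters we hope to recover
y = X @ true_w

def gradient(w, Xb, yb):
    # Gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2).
    return Xb.T @ (Xb @ w - yb) / len(yb)

def train(w, lr, batch_size, epochs=200):
    for _ in range(epochs):
        idx = rng.permutation(len(X))          # shuffle each epoch
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]  # current batch of examples
            w = w - lr * gradient(w, X[b], y[b])
    return w

w0 = np.zeros(2)
w_batch = train(w0, lr=0.1, batch_size=len(X))  # batch: whole dataset per step
w_sgd   = train(w0, lr=0.1, batch_size=1)       # SGD: one example per step
w_mini  = train(w0, lr=0.1, batch_size=16)      # mini-batch: small subsets
```

All three reach roughly the same parameters on this noise-free problem; the trade-off is that batch steps are stable but expensive, SGD steps are cheap but noisy, and mini-batches sit in between.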

Gradient descent is widely applied in linear regression, logistic regression, and many other machine learning models. In neural networks, it works hand in hand with backpropagation: forward propagation computes the network’s predictions, backpropagation computes the gradients of the loss with respect to each weight and bias, and gradient descent then uses those gradients to update the weights and biases.
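The forward-pass / backward-pass / update cycle can be sketched on a tiny one-hidden-layer network. Everything here is an illustrative assumption, not from the text: the toy data, the layer sizes (3 → 4 → 1), the tanh and sigmoid activations, the binary cross-entropy loss, and the learning rate of 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 3))                        # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy binary targets

W1, b1 = 0.5 * rng.normal(size=(3, 4)), np.zeros(4)  # hidden layer
W2, b2 = 0.5 * rng.normal(size=(4, 1)), np.zeros(1)  # output layer
lr = 0.5

def forward(X):
    # Forward propagation: compute hidden activations and predictions.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return h, p

def loss(p):
    # Mean binary cross-entropy between predictions p and targets y.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

_, p = forward(X)
initial_loss = loss(p)

for _ in range(500):
    h, p = forward(X)
    # Backpropagation: gradients of the loss w.r.t. each parameter.
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)**2
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
    # Gradient descent: step each parameter against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward(X)
final_loss = loss(p)
```

The key point is the division of labor: backpropagation only produces the gradients (dW1, db1, dW2, db2); the last two lines of the loop are where gradient descent actually changes the weights and biases.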
