
Sigmoid Function

Last Updated : 02 Feb, 2025

Sigmoid is a mathematical function that maps any real-valued number into a value between 0 and 1. Its characteristic "S"-shaped curve makes it particularly useful in scenarios where we need to convert outputs into probabilities. This function is often called the logistic function.

Mathematically, sigmoid is represented as:

\sigma(x) = \frac{1}{1 + e^{-x}}

where,

  • x is the input value,
  • e is Euler's number (\approx 2.718 )

The sigmoid function is used as an activation function in machine learning and neural networks to model binary classification problems, smooth outputs, and introduce non-linearity into models.
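
As a quick illustration, here is a minimal NumPy sketch of the formula above (the function name sigmoid is just a convenient choice here, not part of any particular library):

```python
import numpy as np

def sigmoid(x):
    """Map any real-valued input to a value in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Large negative inputs map close to 0, zero maps to exactly 0.5,
# and large positive inputs map close to 1.
print(sigmoid(np.array([-6.0, 0.0, 6.0])))   # [0.00247262 0.5 0.99752738]
```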

[Figure: Graph of the sigmoid activation function]

In this graph, the x-axis represents the input values, which range from -\infty to +\infty, and the y-axis represents the output values, which always lie in (0, 1).

In machine learning, x could be a weighted sum of inputs in a neural network neuron or a raw score in logistic regression. If the output value is close to 1, it indicates high confidence in one class; if the value is close to 0, it indicates high confidence in the other class.
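
For a concrete example, the snippet below turns a weighted sum of inputs into a class probability; the weights, bias, and feature values are made-up assumptions chosen only for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical weights, bias, and feature values for one sample.
w = np.array([0.8, -0.4, 1.2])
b = -0.5
features = np.array([1.0, 2.0, 0.5])

score = np.dot(w, features) + b      # weighted sum of inputs (raw score)
probability = sigmoid(score)         # squashed into (0, 1)
predicted_class = int(probability >= 0.5)

print(f"score={score:.2f}, probability={probability:.3f}, class={predicted_class}")
```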

Properties of the Sigmoid Function

The sigmoid function has several key properties that make it a popular choice in machine learning and neural networks:

  1. Domain: The domain of the sigmoid function is all real numbers. This means that you can input any real number into the sigmoid function, and it will produce a valid output.
  2. Asymptotes: As x approaches positive infinity, σ(x) approaches 1. Conversely, as x approaches negative infinity, σ(x) approaches 0. This property ensures that the function never actually reaches 0 or 1, but gets arbitrarily close.
  3. Monotonicity: The sigmoid function is monotonically increasing, meaning that as the input increases, the output also increases.
  4. Differentiability: The sigmoid function is differentiable, which allows for the calculation of gradients during the training of machine learning models.
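
The asymptote and monotonicity properties above can be verified numerically; a minimal sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

xs = np.array([-20.0, -2.0, 0.0, 2.0, 20.0])
ys = sigmoid(xs)

print(ys)                              # outputs approach, but never reach, 0 and 1
print(bool(np.all(np.diff(ys) > 0)))   # True: the output is strictly increasing
```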

Sigmoid Function in Backpropagation

If we use a linear activation function in a neural network, the model will only be able to separate data linearly, which results in poor performance on non-linear datasets. However, by adding a hidden layer with a sigmoid activation function, the model gains the ability to handle non-linearity, thereby improving performance.
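
As a rough sketch of this idea (the layer sizes and random weights below are purely illustrative, not a trained model), a forward pass through one sigmoid hidden layer looks like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical 2 -> 4 -> 1 network on a small batch of 5 samples.
X = rng.standard_normal((5, 2))
W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)

hidden = sigmoid(X @ W1 + b1)        # sigmoid hidden layer adds non-linearity
output = sigmoid(hidden @ W2 + b2)   # output probabilities for binary classification

print(output.ravel())
```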

During backpropagation, the model updates weights and biases by computing the derivative of the activation function. The sigmoid function is useful because:

  • Its derivative can be expressed in terms of the function itself, \sigma'(x) = \sigma(x)(1 - \sigma(x)), so the gradient is cheap to compute from the forward-pass output.
  • It is differentiable at every point, which helps in the effective computation of gradients during backpropagation.

Derivative of Sigmoid Function

The derivative of the sigmoid function, denoted \sigma'(x), is given by \sigma'(x) = \sigma(x) \cdot (1 - \sigma(x)).

Let's see how the derivative of the sigmoid function is computed.

We know that the sigmoid function is defined as:

y = \sigma(x) = \frac{1}{1 + e^{-x}}

Define:

u = 1 + e^{-x}

Rewriting the sigmoid function:

y = \frac{1}{u}

Differentiating u with respect to x:

\frac{du}{dx} = -e^{-x}

Differentiating y with respect to u:

\frac{dy}{du} = -\frac{1}{u^2}

Using the chain rule:

\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}

\frac{dy}{dx} = \left(-\frac{1}{u^2}\right) \cdot \left(-e^{-x}\right)

\frac{dy}{dx} = \frac{e^{-x}}{u^2}

Since u = 1 + e^{-x}, substituting:

\frac{dy}{dx} = \frac{e^{-x}}{(1 + e^{-x})^2}

Since:

\sigma(x) = \frac{1}{1 + e^{-x}}

Rewriting:

1 - \sigma(x) = \frac{e^{-x}}{1 + e^{-x}}

Splitting the fraction into two factors:

\frac{dy}{dx} = \frac{1}{1 + e^{-x}} \cdot \frac{e^{-x}}{1 + e^{-x}}

Substituting:

\frac{dy}{dx} = \sigma(x) \cdot (1 - \sigma(x))

Final Result

\sigma'(x) = \sigma(x) \cdot (1 - \sigma(x))

This is the standard closed-form expression for the derivative of the sigmoid function. The image below shows the sigmoid function and its derivative graphically.
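
The closed form is easy to check in code; the sketch below (helper names are our own) compares it with a central finite-difference approximation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# Compare the closed form with a central finite-difference approximation.
x, h = 1.3, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(sigmoid_derivative(x), numeric)   # the two values agree to several decimals
```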

[Figure: Sigmoid function and its derivative]

Issue with Sigmoid Function in Backpropagation

One key issue with the sigmoid function is the vanishing gradient problem. When weights and biases are updated with gradient descent, very small gradients produce insignificant updates, slowing down or even stopping learning.

[Figure: Sigmoid derivative with its near-zero (saturated) regions highlighted]

The shaded red regions highlight the areas where the derivative \sigma'(x) is very small (close to 0). In these regions, the gradients used to update weights and biases during backpropagation become extremely small. As a result, the model learns very slowly or stops learning altogether, which is a major issue in deep neural networks.
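
A short numerical sketch makes this saturation visible:

```python
import numpy as np

def sigmoid_derivative(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

for x in [0.0, 2.0, 5.0, 10.0]:
    print(f"x = {x:5.1f}   sigmoid'(x) = {sigmoid_derivative(x):.6f}")

# The derivative peaks at 0.25 (at x = 0) and shrinks towards 0 as |x| grows,
# which is exactly the vanishing gradient behaviour described above.
```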

Practice Problems

Problem 1: Calculate the derivative of the sigmoid function at x = 0.

\sigma(0) = \frac{1}{1 + e^0} = \frac{1}{2}

\sigma'(0) = \sigma(0) \cdot (1 - \sigma(0))

= \frac{1}{2} \times \left(1 - \frac{1}{2}\right) = \frac{1}{4}

Problem 2: Find the Value of \sigma'(2)

\sigma(2) = \frac{1}{1 + e^{-2}} \approx 0.88

\sigma'(2) = \sigma(2) \cdot (1 - \sigma(2))

\approx 0.88 \times (1 - 0.88) \approx 0.1056

Problem 3: Compute \sigma'(-1).

\sigma(-1) = \frac{1}{1 + e^1} \approx 0.2689

\sigma'(-1) = \sigma(-1) \cdot (1 - \sigma(-1))

\approx 0.2689 \times (1 - 0.2689) \approx 0.1966
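
The three problems can be checked with a short script. Note that it uses exact values rather than the rounded 0.88, so \sigma'(2) comes out as about 0.1050 instead of 0.1056:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)

for x in [0, 2, -1]:
    print(f"sigma'({x}) = {sigmoid_derivative(x):.4f}")
# sigma'(0) = 0.2500, sigma'(2) = 0.1050, sigma'(-1) = 0.1966
```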

