Sigmoid is a mathematical function that maps any real-valued number into a value between 0 and 1. Its characteristic "S"-shaped curve makes it particularly useful in scenarios where we need to convert outputs into probabilities. This function is often called the logistic function.
Mathematically, sigmoid is represented as:
\sigma(x) = \frac{1}{1 + e^{-x}}
where,
- x is the input value,
- e is Euler's number (\approx 2.718)
The sigmoid function is used as an activation function in machine learning and neural networks for modeling binary classification problems, smoothing outputs, and introducing non-linearity into models.
Graph of the Sigmoid Activation Function: the x-axis represents input values ranging from -\infty \ to \ +\infty, and the y-axis represents output values, which always lie in (0, 1).
In machine learning, x could be a weighted sum of inputs in a neural network neuron or a raw score in logistic regression. If the output value is close to 1, it indicates high confidence in one class and if the value is close to 0, it indicates high confidence in the other class.
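To make this concrete, here is a minimal NumPy sketch of the sigmoid function applied to a few raw scores; the function name and the sample values are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x)), computed element-wise
    return 1.0 / (1.0 + np.exp(-x))

# Raw scores (e.g., weighted sums from a neuron or logistic-regression logits)
scores = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(sigmoid(scores))
# [0.01798621 0.26894142 0.5        0.73105858 0.98201379]
```

Scores far below 0 map close to 0, scores far above 0 map close to 1, and a score of exactly 0 maps to 0.5, which matches the probabilistic reading described above.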
Properties of the Sigmoid Function
The sigmoid function has several key properties that make it a popular choice in machine learning and neural networks:
- Domain: The domain of the sigmoid function is all real numbers. This means that you can input any real number into the sigmoid function, and it will produce a valid output.
- Asymptotes: As x approaches positive infinity, σ(x) approaches 1. Conversely, as x approaches negative infinity, σ(x) approaches 0. This property ensures that the function never actually reaches 0 or 1, but gets arbitrarily close.
- Monotonicity: The sigmoid function is monotonically increasing, meaning that as the input increases, the output also increases.
- Differentiability: The sigmoid function is differentiable, which allows for the calculation of gradients during the training of machine learning models.
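These properties are easy to confirm numerically. The short sketch below (reusing the illustrative sigmoid helper from the earlier snippet) checks the asymptotes and the monotonicity on a grid of inputs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-10, 10, 1001)
y = sigmoid(x)

print(y[0], y[-1])             # ~4.5e-05 and ~0.99995: close to, but never exactly, 0 or 1
print(np.all(np.diff(y) > 0))  # True: the output strictly increases with the input
```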
Sigmoid Function in Backpropagation
If we use a linear activation function in a neural network, the model will only be able to separate data linearly, which results in poor performance on non-linear datasets. However, by adding a hidden layer with a sigmoid activation function, the model gains the ability to handle non-linearity, thereby improving performance.
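As a hedged illustration of this point, the sketch below builds a tiny network with one sigmoid hidden layer that reproduces XOR, a dataset no purely linear model can separate. The weights are hand-picked for clarity rather than learned.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR inputs: no single linear boundary separates the two classes
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Hand-picked weights (illustrative, not trained):
# hidden unit 1 behaves like logical OR, hidden unit 2 like logical AND
W1 = np.array([[20.0, 20.0], [20.0, 20.0]])
b1 = np.array([-10.0, -30.0])
W2 = np.array([20.0, -20.0])
b2 = -10.0

hidden = sigmoid(X @ W1 + b1)        # non-linear hidden layer
output = sigmoid(hidden @ W2 + b2)   # "OR and not AND" = XOR
print(np.round(output, 3))           # approximately [0. 1. 1. 0.]
```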
During the backpropagation, the model calculates and updates weights and biases by computing the derivative of the activation function. The sigmoid function is useful because:
- Its derivative can be expressed in terms of the function itself, \sigma'(x) = \sigma(x)(1 - \sigma(x)), so the gradient is cheap to compute once the forward pass has produced \sigma(x).
- It is differentiable at every point, which helps in the effective computation of gradients during backpropagation.
Derivative of Sigmoid Function
The derivative of the sigmoid function, denoted as σ'(x), is given by σ'(x)=σ(x)⋅(1−σ(x)).
Let's see how the derivative of the sigmoid function is computed.
We know that the sigmoid function is defined as:
y = \sigma(x) = \frac{1}{1 + e^{-x}}
Define:
u = 1 + e^{-x}
Rewriting the sigmoid function:
y = \frac{1}{u}
Differentiating u with respect to x:
\frac{du}{dx} = -e^{-x}
Differentiating y with respect to u:
\frac{dy}{du} = -\frac{1}{u^2}
Using the chain rule:
\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}
\frac{dy}{dx} = \left(-\frac{1}{u^2}\right) \cdot \left(-e^{-x}\right)
\frac{dy}{dx} = \frac{e^{-x}}{u^2}
Since u = 1 + e^{-x}, substituting:
\frac{dy}{dx} = \frac{e^{-x}}{(1 + e^{-x})^2}
Since:
\sigma(x) = \frac{1}{1 + e^{-x}}
Rewriting:
1 - \sigma(x) = \frac{e^{-x}}{1 + e^{-x}}
Substituting:
\frac{dy}{dx} = \sigma(x) \cdot (1 - \sigma(x))
Final Result
\sigma'(x) = \sigma(x) \cdot (1 - \sigma(x))
The above equation is the standard closed form of the derivative of the sigmoid function. Plotted, \sigma'(x) is a bell-shaped curve that peaks at 0.25 when x = 0 and falls toward 0 as |x| grows.
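As a sanity check, the sketch below compares this closed-form derivative against a central finite-difference approximation; the step size h and the sample points are illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)        # sigma'(x) = sigma(x) * (1 - sigma(x))

x = np.array([-2.0, 0.0, 2.0])
h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)    # central difference
print(sigmoid_grad(x))                                   # [0.10499359 0.25       0.10499359]
print(np.allclose(sigmoid_grad(x), numeric, atol=1e-8))  # True
```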
Issue with Sigmoid Function in Backpropagation
One key issue with using the sigmoid function is the vanishing gradient problem. When updating weights and biases using gradient descent, if the gradients are too small, the updates to weights and biases become insignificant, slowing down or even stopping learning.
The regions where the derivative \sigma^{'}(x) is very small (close to 0), i.e., where the input is large in magnitude, are exactly where training stalls: the gradients used to update weights and biases during backpropagation become extremely small. As a result, the model learns very slowly or stops learning altogether, which is a major issue in deep neural networks.
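A quick way to see the vanishing gradient effect numerically is to evaluate \sigma'(x) at a few large-magnitude inputs; the specific values below are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for x in [0.0, 2.0, 5.0, 10.0, 20.0]:
    s = sigmoid(x)
    grad = s * (1.0 - s)
    print(f"x = {x:5.1f}   sigma'(x) = {grad:.2e}")
# The gradient peaks at 0.25 for x = 0 and shrinks rapidly:
# roughly 6.6e-03 at x = 5 and about 4.5e-05 at x = 10,
# so weight updates that depend on it become negligible.
```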
Practice Problems
Problem 1: Calculate the derivative of the sigmoid function at x = 0.
\sigma(0) = \frac{1}{1 + e^0} = \frac{1}{2}
\sigma'(0) = \sigma(0) \cdot (1 - \sigma(0))
= \frac{1}{2} \times \left(1 - \frac{1}{2}\right) = \frac{1}{4}
Problem 2: Find the value of \sigma'(2).
\sigma(2) = \frac{1}{1 + e^{-2}} \approx 0.88
\sigma'(2) = \sigma(2) \cdot (1 - \sigma(2))
\approx 0.88 \times (1 - 0.88) \approx 0.1056
Problem 3: Compute \sigma'(-1).
\sigma(-1) = \frac{1}{1 + e^1} \approx 0.2689
\sigma'(-1) = \sigma(-1) \cdot (1 - \sigma(-1))
\approx 0.2689 \times (1 - 0.2689) \approx 0.1966
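These worked answers can be double-checked with a few lines of NumPy; the helper names mirror the earlier sketches.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

for x in [0.0, 2.0, -1.0]:
    print(f"sigma'({x}) = {sigmoid_grad(x):.4f}")
# sigma'(0.0) = 0.2500
# sigma'(2.0) = 0.1050  (the 0.1056 above comes from rounding sigma(2) to 0.88 first)
# sigma'(-1.0) = 0.1966
```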