| Copyright | 2007--2022 wren romano |
|---|---|
| License | BSD-3-Clause |
| Maintainer | [email protected] |
| Stability | provisional |
| Portability | portable (with CPP, FFI) |
| Safe Haskell | None |
| Language | Haskell2010 |
Data.Number.LogFloat.Raw
Description
This module provides implementations for computing various logarithmic and exponential functions without losing precision (as the naive implementations do). These are the "raw" implementations; i.e., sans newtypes and other conveniences. Since the lack of newtypes means we can't rely on types to clarify things, we use the traditional baroque names for things. The design considerations behind (most of) these implementations are documented at: https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf
In base-4.9.0.0 GHC added some of these to the Floating class exported from Numeric. Alas, they provide default definitions using the naive implementations, so one can't really rely on the Floating methods being precision preserving. Overall, the specific instance for Double looks fine (though it uses different cutoffs for log1pexp, for some reason); but it's easy enough to reimplement here, to make absolutely sure we're getting the right thing.
Since: 0.14.0
Synopsis
- expm1 :: Double -> Double
- log1p :: Double -> Double
- log1mexp :: Double -> Double
- log1pexp :: Double -> Double
- logSumExp :: [Double] -> Double
- kahanSum :: [Double] -> Double
- logSoftmax :: [Double] -> [Double]
- softmax :: [Double] -> [Double]
- sigmoid :: Double -> Double
- logit :: Double -> Double
- logitExp :: Double -> Double
Logarithmic/exponential basics
expm1 :: Double -> Double Source #
Compute exp x - 1 without losing precision.
Standard C libraries provide a special definition for expm1 which is more accurate than doing the naive thing, especially for very small arguments.
This installation was compiled to use the FFI version.
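As a quick illustration (naiveExpm1 is a hypothetical name, not part of this module): for tiny arguments the naive subtraction returns 0, because exp x rounds to 1 before the 1 is subtracted.

> -- Illustrative only; not from this module.
> naiveExpm1 :: Double -> Double
> naiveExpm1 x = exp x - 1
>
> -- naiveExpm1 1e-18 ==> 0.0       (exp 1e-18 rounds to 1.0 in Double)
> -- expm1      1e-18 ==> 1.0e-18   (correct to full precision)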
log1p :: Double -> Double Source #
Compute log (1 + x) without losing precision.
Standard C libraries provide a special definition for this function, which is more accurate than doing the naive thing, especially for very small arguments. For example, the naive version underflows around 2 ** -53, whereas the specialized version underflows around 2 ** -1074.
N.B. The statistics:Statistics.Math module provides a pure Haskell implementation of log1p for those who are interested. We do not copy it here because it relies on the vector package, which is non-portable. If there is sufficient interest, a portable variant of that implementation could be made. Contact the maintainer if the FFI and naive implementations are insufficient for your needs.
This installation was compiled to use the FFI version.
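For a rough sense of what a portable fallback can look like, here is the classic error-compensation trick (a sketch for illustration only; this is not the implementation used here, and log1pFallback is a hypothetical name):

> -- A classic portable approximation of log1p, for illustration only.
> log1pFallback :: Double -> Double
> log1pFallback x
>   | u == 1    = x                    -- 1 + x rounded to exactly 1, so x is the answer
>   | otherwise = log u * x / (u - 1)  -- compensates for the rounding error in 1 + x
>   where u = 1 + x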
log1mexp :: Double -> Double Source #
Compute log (1 - exp x) without losing precision.
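The note linked above evaluates this by switching formulations at x = -log 2. A minimal sketch of that strategy, assuming x <= 0 and using the expm1/log1p primitives from Numeric (log1mexpSketch is a hypothetical name):

> import Numeric (expm1, log1p)
>
> -- Sketch of the cutoff strategy from the Rmpfr note; assumes x <= 0.
> log1mexpSketch :: Double -> Double
> log1mexpSketch x
>   | x > negate (log 2) = log (negate (expm1 x))  -- 1 - exp x is tiny: expm1 keeps it
>   | otherwise          = log1p (negate (exp x))  -- exp x is tiny: log1p keeps it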
log1pexp :: Double -> Double Source #
Compute log (1 + exp x) without losing precision. Algebraically this is 0 ⊔ x, which is the log-domain's analogue of 1 + x.
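The note linked above evaluates this in four regimes; a sketch with the note's cutoffs (as mentioned in the module description, GHC's Double instance uses somewhat different constants, and this module's may differ as well):

> import Numeric (log1p)
>
> -- Sketch of the four regimes from the Rmpfr note.
> log1pexpSketch :: Double -> Double
> log1pexpSketch x
>   | x <= -37  = exp x                -- 1 + exp x rounds to 1; log1p would return exp x anyway
>   | x <= 18   = log1p (exp x)        -- the straightforward middle range
>   | x <= 33.3 = x + exp (negate x)   -- log (1 + exp x) ~ x + exp (-x)
>   | otherwise = x                    -- exp (-x) vanishes next to x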
Summation
logSumExp :: [Double] -> Double Source #
O(n). Log-domain summation, aka: (log . sum . fmap exp). Algebraically this is ⨆ xs, which is the log-domain equivalent of ∑ xs.
N.B., this function requires two passes over the input. Thus, it is not amenable to list fusion, and hence will use a lot of memory when summing long lists.
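The standard way to make this stable is the max-shift trick, which accounts for one of the two passes; a minimal sketch (logSumExpSketch is a hypothetical name, and it glosses over the care the real implementation may take with infinite inputs):

> -- Minimal max-shift sketch; pass 1 finds the maximum, pass 2 sums.
> logSumExpSketch :: [Double] -> Double
> logSumExpSketch [] = negate (1 / 0)  -- log 0, the empty log-domain sum
> logSumExpSketch xs = m + log (sum [ exp (x - m) | x <- xs ])
>   where m = maximum xs               -- shifting by m prevents overflow in exp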
kahanSum :: [Double] -> Double Source #
O(n). Floating-point summation, via Kahan's algorithm. This is nominally equivalent to sum, but greatly mitigates the problem of losing precision.
N.B., this only requires a single pass over the data; but we use a strict left fold for performance, so it's still not amenable to list fusion.
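A minimal sketch of the algorithm as a strict left fold (generic Kahan summation; kahanSumSketch is a hypothetical name and not necessarily what this module compiles to):

> {-# LANGUAGE BangPatterns #-}
> import Data.List (foldl')
>
> kahanSumSketch :: [Double] -> Double
> kahanSumSketch = fst . foldl' step (0, 0)
>   where
>     step (!total, !c) x =
>       let y  = x - c             -- re-apply the compensation from the last step
>           t  = total + y         -- low-order bits of y are lost in this addition...
>           c' = (t - total) - y   -- ...and recovered here as the next compensation
>       in (t, c')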
Softmax
logSoftmax :: [Double] -> [Double] Source #
O(n). Log-domain softmax, aka: (fmap log . softmax).
N.B., this requires three passes over the data: two for the logSumExp, and a third for the normalization itself. Thus, it is not amenable to list fusion, and hence will use a lot of memory when summing long lists.
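Spelled out, the three passes look roughly like this (a sketch; logSoftmaxSketch is a hypothetical name):

> -- Sketch: pass 1 is maximum, pass 2 the shifted sum, pass 3 the normalization.
> logSoftmaxSketch :: [Double] -> [Double]
> logSoftmaxSketch xs = map (subtract z) xs
>   where
>     m = maximum xs
>     z = m + log (sum [ exp (x - m) | x <- xs ])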
softmax :: [Double] -> [Double] Source #
O(n). Normal-domain softmax:
> softmax xs = [ exp x / sum [ exp y | y <- xs ] | x <- xs ]
N.B., this requires three passes over the data: same as logSoftmax.
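One numerically safe way to realize this is to exponentiate the max-shifted log-softmax, which keeps every intermediate value bounded; a sketch under that assumption (softmaxSketch is a hypothetical name):

> -- Sketch: exp of the shifted log-softmax avoids overflow in exp x.
> softmaxSketch :: [Double] -> [Double]
> softmaxSketch xs = [ exp (x - z) | x <- xs ]
>   where
>     m = maximum xs
>     z = m + log (sum [ exp (x - m) | x <- xs ])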
Sigmoid and related functions
sigmoid :: Double -> Double Source #
The logistic function; aka, the inverse of logit.
> sigmoid x = 1 / (1 + exp (-x))
> sigmoid x = exp x / (exp x + 1)
> sigmoid x = (1 + tanh (x/2)) / 2
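Evaluated naively, the first form can overflow in exp (-x) for large negative x. Routing through log1pexp avoids that intermediate overflow; a sketch using base's Numeric.log1pexp (sigmoidSketch is a hypothetical name; see the module description for the caveat about base's version):

> import Numeric (log1pexp)
>
> -- Sketch: 1 / (1 + exp (-x)) = exp (negate (log1pexp (negate x))).
> sigmoidSketch :: Double -> Double
> sigmoidSketch x = exp (negate (log1pexp (negate x)))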
logit :: Double -> Double Source #
The quantile function; aka, the inverse of sigmoid.
> logit x = log (x / (1 - x))
> logit x = 2 * atanh (2*x - 1)
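For reference, the first identity can also be phrased with log1p, keeping both logarithms explicit (a sketch; logitSketch is a hypothetical name, and this is not necessarily how the module defines logit):

> import Numeric (log1p)
>
> -- Same function: log (x / (1 - x)) = log x - log1p (negate x).
> logitSketch :: Double -> Double
> logitSketch x = log x - log1p (negate x)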