
[deleted]


Well, it's exact until you evaluate it at a float — because it's the floats that are inexact, not the technique.

And that limitation is just as true of symbolic algebra: it is "exact" only if you can evaluate exactly, or if your application doesn't require you to evaluate at all. Automatic differentiation is the same; it's just that the unevaluated forms in symbolic algebra are "nicer".
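A two-line illustration of where the inexactness actually lives (a minimal Haskell sketch, nothing beyond the standard library — the number type is inexact, not the technique):

    import Data.Ratio ((%))

    main :: IO ()
    main = do
      print (0.1 + 0.2 == (0.3 :: Double))            -- False: binary floats can't represent 0.1
      print (1 % 10 + 2 % 10 == (3 % 10 :: Rational)) -- True: exact rationals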


[deleted]


> Dual numbers cannot, because they use plain old floating-point math in the derivative equations.

There's nothing stopping you from swapping out the floating point for rationals or some other exact number type.

For example, the Haskell `ad` package has its functions parameterized over all instances of Num: http://hackage.haskell.org/package/ad-3.4/docs/Numeric-AD.ht...
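To make that concrete, here's a minimal dual-number sketch in the same spirit (`Dual` and `diff` are illustrative names, not the `ad` package's actual internals). Because it only requires Num, you can evaluate at a Rational and the derivative comes out exact:

    import Data.Ratio ((%))

    -- A dual number a + b*eps with eps^2 = 0; b carries the derivative.
    data Dual a = Dual a a deriving Show

    instance Num a => Num (Dual a) where
      Dual a b + Dual c d = Dual (a + c) (b + d)
      Dual a b * Dual c d = Dual (a * c) (a * d + b * c)  -- product rule
      negate (Dual a b)   = Dual (negate a) (negate b)
      abs    (Dual a b)   = Dual (abs a) (signum a * b)   -- chain rule for abs
      signum (Dual a _)   = Dual (signum a) 0
      fromInteger n       = Dual (fromInteger n) 0

    -- Derivative of f at x, for any Num instance: Double, Rational, ...
    diff :: Num a => (Dual a -> Dual a) -> a -> a
    diff f x = d where Dual _ d = f (Dual x 1)

    -- f(x) = x^2 + 3x, f'(1/2) = 4, computed exactly: prints 4 % 1
    main :: IO ()
    main = print (diff (\x -> x * x + 3 * x) (1 % 2) :: Rational)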


> they are almost never used in any real automatic differentiation system

They're efficient enough for first-order derivatives. For example, they're used in Ceres, Google's library for non-linear least-squares optimization:

https://ceres-solver.googlesource.com/ceres-solver/+/master/...
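For intuition: Ceres's forward-mode type (ceres::Jet) pairs a value with one derivative slot per parameter, so a single evaluation of the residual yields the whole gradient. A rough Haskell analogue of that idea (`Jet` and `grad` are hypothetical names here, not the Ceres API):

    -- A "jet": a value plus its partial derivatives w.r.t. each input.
    data Jet a = Jet a [a] deriving Show

    instance Num a => Num (Jet a) where
      Jet a da + Jet b db = Jet (a + b) (zipWith (+) da db)
      Jet a da * Jet b db = Jet (a * b) (zipWith (+) (map (b *) da) (map (a *) db))
      negate (Jet a da)   = Jet (negate a) (map negate da)
      abs    (Jet a da)   = Jet (abs a) (map (signum a *) da)
      signum (Jet a da)   = Jet (signum a) (map (const 0) da)
      fromInteger n       = Jet (fromInteger n) (repeat 0)

    -- Gradient of f at xs: seed input i with a 1 in derivative slot i.
    grad :: Num a => ([Jet a] -> Jet a) -> [a] -> [a]
    grad f xs = take n ds
      where
        n        = length xs
        seed i x = Jet x [if j == i then 1 else 0 | j <- [0 .. n - 1]]
        Jet _ ds = f (zipWith seed [0 ..] xs)

    -- f(x,y) = x^2*y + y at (3,4): prints [24 % 1,10 % 1]
    main :: IO ()
    main = print (grad (\[x, y] -> x * x * y + y) [3, 4 :: Rational])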


The exact part struck me the most. I was expecting some technique that bounds the maximal error during the calculations and reserves enough space in advance for an exact floating-point representation, avoiding the continually increasing memory demands you usually get with exact numbers. Such techniques are used when computing Delaunay triangulations, and maybe they are applicable here as well.
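As a quick illustration of that memory growth (a minimal sketch, unrelated to the triangulation techniques): iterating Newton's method for sqrt 2 over exact Rationals keeps every step exact, but the digit counts roughly double per iteration.

    import Data.Ratio (numerator, denominator)

    -- One Newton step for x^2 = 2, kept exact over Rational.
    step :: Rational -> Rational
    step x = (x + 2 / x) / 2

    -- Print (numerator digits, denominator digits) for the first iterates:
    -- (1,1) (1,1) (2,2) (3,3) (6,6) (12,12) ...
    main :: IO ()
    main = mapM_ (print . size) (take 6 (iterate step 1))
      where size r = (length (show (numerator r)), length (show (denominator r)))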



