I think you're hitting up against something that is very subtle and it speaks to the fundamental nature of what it means to compute something. I think you can gain a bit of insight into what you're trying to get at if you expand the discussion to the interplay between mathematics and physics.
Let's look at your example of baking. Indeed it seems like an imperative (physical) process. When we mix eggs and flour to form a batter we can't get the eggs and flour back! Indeed even cracking the eggs seems like destructive update to me. Think about why that is. We've performed a non-reversible transformation on our ingredients AND we can't go back in time! In this case time is implicit in our understanding.
Now consider trying to model baking mathematically. We want to build an equation that will tell us the state of our cake. This will be a function that depends on time - time is now explicit in our model. Now we can say at t0 our eggs are whole. At some future time t1 our eggs are cracked. This doesn't negate the fact that at t0 the eggs are whole. We can't go back in time and update the state of the eggs then - that makes no sense! So here we see that when time is an explicit parameter we have time symmetry. The equations don't care whether time goes backwards or forwards.
And we can always do this. If you have some process whose state changes through time, you can model that process with a static model that includes time as an explicit parameter. This means that our model is a higher-dimensional view of the real thing, and our actual computation (or experiment, or baking, etc.) is a degenerate case of the more general model in which the time parameter is actual time.
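The contrast can be sketched in a few lines of Python (the names here are purely illustrative, not anyone's actual code): the imperative version destructively updates one egg, so the old state is gone; the explicit-time version is a static function from time to state, so no state is ever lost.

```python
# Imperative view: one mutable binding, destructively updated in place.
egg = "whole"
egg = "cracked"  # the fact that the egg was whole at t0 is now lost

# Explicit-time view: state is a pure function of a time parameter.
# Nothing is ever overwritten; t0's state remains queryable forever.
def egg_state(t):
    return "whole" if t < 1 else "cracked"

assert egg_state(0) == "whole"    # at t0 the eggs are whole
assert egg_state(2) == "cracked"  # at t1 they are cracked
assert egg_state(0) == "whole"    # and t0 is STILL whole; no update happened
```

Running the program (stepping `t` forward through real time) recovers the imperative behaviour as the degenerate case, which is exactly the point above.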
How does this relate to functional programming versus imperative programming? Well, it's the difference between theory and experiment. But in this case we also have to deal with the fact that our program is computed (a physical process happening in time), so we're using a physical computation, which follows the rules of mathematics, to model mathematical rules using a degenerate case of those rules. So if it seems like there's an abstraction leaking, it's because there kind of is.
I like to think about it in this way because it highlights the very tight knit nature of physics and computation. Indeed a general theory of what it means to compute something could lead to a unified theory of physics and vice-versa.
That's an insightful comment and an important point. Because the underlying computation is physical, there's no such thing as side-effect-free computation. Any attempt to treat programming as purely mathematical, i.e. pure FP, is going to run up against this physical substrate. You can't abstract it away completely, and it gets complicated if you try. So I don't think pure FP is likely to be an ultimate solution to the core problems of minimizing complexity and building scalable abstractions. It would be, if programming were math, but math is not executable.
I believe one can even see this in the OP. Some of those examples seem more complicated than the side-effecty, loopy code that they're replacing. Maybe it is all habit and my mind has been warped by years of exposure to side effects and loops. But what if those things are more "native" to programming than pure FP can easily allow, and we need a more physical model of computation?
I've often thought it would be interesting to try teaching programming as a kind of mechanics in which values are physical commodities that get moved around and modified—to see whether it would be more accessible to beginners that way. I also wonder what sort of programming languages one might come up with out of such a model. It would be an interesting experiment. One would use mathematical abstractions to reason about such a system, but would not think of the system itself as a mathematical one.
> That's an insightful comment and an important point. Because the underlying computation is physical, there's no such thing as side-effect-free computation. Any attempt to treat programming as purely mathematical, i.e. pure FP, is going to run up against this physical substrate. You can't abstract it away completely, and it gets complicated if you try. So I don't think pure FP is likely to be an ultimate solution to the core problems of minimizing complexity and building scalable abstractions. It would be, if programming were math, but math is not executable.
Oh come on. Is the fact that your processor emits slightly more heat from computing your pure functions, and that once in a while a cosmic ray may flip a bit or two, really such a big hindrance? You sound like a die-hard purist, ironically. We might as well ditch the whole idea of applied mathematics, since there is always going to be a mismatch between the real world and mathematics. Triangulate? You can't do that, because the angles you measure aren't going to be precise to infinitely many decimal places. The whole theory behind geometry is based on practically unattainable ideals; surely Plato would be turning in his grave if he knew that cartographers had been using geometry to make those (fairly inaccurate) maps. shudder
I think that is an uncharitable interpretation of gruseom's comment. I assumed he was referring to the fact that some programming tasks are inherently stateful, such as writing OS kernels and device drivers.
However, I think that we should be able to, in principle, provide abstractions on top of inherently stateful things that hides their nature. In the same vein, we should be able to architect programs so that we can separate the inherently stateful stuff from the functionally pure stuff. For some applications, this will help. For others, not so much.
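In the same spirit, here is a minimal Python sketch of that separation (the sensor and function names are made up for illustration): the pure core does all the logic, and the inherently stateful I/O is confined to a thin shell that receives its effects as parameters.

```python
# Pure core: same input, same output, no side effects.
def parse_temps(text):
    """Turn raw sensor text like '20.0 21.5' into a list of floats."""
    return [float(x) for x in text.split()]

def average(xs):
    """Arithmetic mean of a non-empty list."""
    return sum(xs) / len(xs)

# Stateful shell: all I/O (reading a device, writing output) is injected,
# so the pure core never touches the outside world directly.
def report(read_sensor, write_line):
    temps = parse_temps(read_sensor())
    write_line(f"mean temperature: {average(temps):.1f}")

# Because the shell takes its effects as parameters, we can exercise the
# whole pipeline with fake I/O and no real device:
out = []
report(lambda: "20.0 21.0 22.0", out.append)
assert out == ["mean temperature: 21.0"]
```

Whether this helps depends on how much of the program is genuinely pure logic versus plumbing, which is the "for some applications, not so much" caveat above.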
> I think that is an uncharitable interpretation of gruseom's comment. I assumed he was referring to the fact that some programming tasks are inherently stateful, such as writing OS kernels and device drivers.
No. "The underlying computation is physical" is just as true for any computer program, OS kernel or not. What was referred to was clearly the fact that it's silicon at the bottom level of any computation, just as his parent was talking about ("leaky abstraction" and all that).