So, to be clear: if I understand correctly, you are saying that training a neural net to approximate a function is formulating a law, for example a natural law? Is that right?
For instance, if I train a neural net to predict the motions of the planets, is the trained model a law of planetary motion, like Kepler's laws of planetary motion? Is that correct?
I would say it's essentially equivalent, especially if you choose a neural network architecture with a very low-dimensional layer in the middle with only a handful of variables.
Then the first half of the network (before the low-dimensional layer) will learn how to "encode" the state of the system in the video in as few variables as possible, such as the orientations and angular momenta of the double pendulum. This is equivalent to what humans do when we look at a messy physical system like the Solar System and model it with a few quantitative parameters.
The bottleneck layer will represent the handful of state variables, and then finally the other half of the network will learn the mathematical function that predicts the system's evolution. This is equivalent to what humans do when we work out physical laws and equations of motion.
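To make this concrete, here's a rough sketch of the kind of network I have in
mind (a PyTorch sketch; the flattened-frame input, the layer sizes, and the
four-variable bottleneck are illustrative choices on my part, not anything
canonical):

    import torch
    import torch.nn as nn

    class BottleneckPredictor(nn.Module):
        """Encoder -> tiny bottleneck -> dynamics predictor.

        The bottleneck is deliberately small so the network is forced
        to compress each observation into a handful of variables
        (e.g. the angles and angular momenta of a double pendulum).
        """
        def __init__(self, obs_dim=4096, state_dim=4):
            super().__init__()
            # First half: "encode" the raw observation (a flattened
            # video frame) into a few state variables.
            self.encoder = nn.Sequential(
                nn.Linear(obs_dim, 256), nn.ReLU(),
                nn.Linear(256, state_dim),      # the bottleneck layer
            )
            # Second half: learn the mapping from the current state
            # variables to the next observation, i.e. the "equation
            # of motion".
            self.predictor = nn.Sequential(
                nn.Linear(state_dim, 256), nn.ReLU(),
                nn.Linear(256, obs_dim),
            )

        def forward(self, frame):
            state = self.encoder(frame)    # handful of learned variables
            return self.predictor(state)   # predicted next frame

    # Training minimises the error between predicted and actual next
    # frames (dummy tensors here, just to show the shapes):
    model = BottleneckPredictor()
    frame_t = torch.randn(8, 4096)
    frame_t1 = torch.randn(8, 4096)
    loss = nn.functional.mse_loss(model(frame_t), frame_t1)
    loss.backward()

After training, reading off the bottleneck activations would, in principle,
give you the learned state variables directly.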
OK, thanks for clarifying. I feel that your description of neural nets' inner
workings is a bit idealised and I'm not convinced that we have seen any evidence
that they are as powerful in representing real-world phenomena as you suggest.
But that's a big discussion so let's leave this aside for a moment.
I can agree that a neural net can learn a model that can predict the behaviour
of a system, to some extent, within some margin of error.
That's not enough for me to see neural net models as (scientific) "laws". For
the sake of having a common definition of what a scientific law is, I'm going
with what Wikipedia describes as a scientific law: a statement that describes or
predicts some set of natural phenomena, according to some observations
(paraphrasing from: https://en.wikipedia.org/wiki/Scientific_law). Sorry for not
introducing this definition earlier on. If you disagree with it, then that's my
bad for not establishing common terminology beforehand.
In that sense, neural net models are not scientific laws because, while they
can predict (though not describe), they are not "statements". Rather, they are
systems. They have behaviour, and their behaviour may match that of some
target system, like the weather, say. But just as a simulation of the economy
or an armillary sphere is not itself a "law", even though it may be based on
"laws", a neural net's model can't be said to be a "law", even if it's based
on observations and even if it has an internal structure that makes its
behaviour consistent with some (known or unknown) law.
There is also the matter of usability: neural net models are, as we know, "black
boxes" that can't be inspected or queried, except by asking them to analyse some
data. While useful, such a model is not a "law", because it does not help us
understand the system it models. If this sounds like a semantic quibble, it
isn't. To me
anyway it doesn't make sense to base scientific knowledge on a bunch of
inscrutable black boxes. Scientific laws and scientific theories are not black
boxes.
As an aside, neural nets fall short of what Donald Michie (father of AI in the
UK) called "ultra-strong machine learning" [1]. That's the property of a
machine learning system that improves not only its own performance but also
that of its user. Current techniques aren't even close to that.
____________________
[1] Machine Learning: the next five years, Donald Michie, 1988