Another sequence that's about as simple to define as Goodstein's is the following:
Start with any binary tree, which is either 0, or a pair [s,t] of binary trees.
Then while it's not 0, repeatedly apply the following predecessor operation P on binary trees:
P([0,t]) = t
P([s,t]) = [P(s),t], but with every instance of t inside it replaced by [P(s),t] (this rule applies when s ≠ 0)
For example, starting from [[0,0],0], we get the sequence of predecessor trees [[0,0],0] → [[0,0],[0,0]] → [0,[0,[0,0]]] → [0,[0,0]] → [0,0] → 0.
This sequence grows unbelievably faster than Goodstein's, and even faster than the infamous TREE() function [1], while having an almost trivial definition. The number of predecessors to reach 0 is sequence A367433 in the Online Encyclopedia of Integer Sequences [2].
The last 4 terms are trivial, but I have trouble following even the first step. The SE answer is also fairly comprehensible, but it reads as if there were some default assumption I'm
not aware of. Do we make a one-time substitution, or a recursive one? Stopping rules feel arbitrary in all the enumeration combinatorics I try.
The first step proceeds as follows. We want the predecessor of [s,t] with s=[0,0] and t=0.
We first compute s' = P(s) = P([0,0]) = 0. Then in [s',t] = [0,0] we must replace all occurrences of 0 with [0,0], which results in [[0,0],[0,0]]. This is a one-time substitution (else it would never end).
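A minimal Python sketch of the rules above may help. The encoding is my own assumption (0 is `None`, [s,t] is the tuple `(s, t)`), as are the function names:

```python
def subst(tree, old, new):
    """One-time substitution: replace each occurrence of `old` in `tree`
    by `new`, without descending into the replacement (else it would never end)."""
    if tree == old:
        return new
    if tree is None:
        return None
    a, b = tree
    return (subst(a, old, new), subst(b, old, new))

def P(tree):
    """Predecessor of a nonzero tree [s, t]."""
    s, t = tree
    if s is None:                 # P([0, t]) = t
        return t
    step = (P(s), t)              # [P(s), t] ...
    return subst(step, t, step)   # ... with every occurrence of t replaced by it

def steps_to_zero(tree):
    """Count applications of P needed to reach 0 (the statistic discussed above)."""
    n = 0
    while tree is not None:
        tree = P(tree)
        n += 1
    return n

# First step of the worked example: [[0,0],0] -> [[0,0],[0,0]]
assert P(((None, None), None)) == ((None, None), (None, None))
# The full chain: [[0,0],0] -> [[0,0],[0,0]] -> [0,[0,[0,0]]] -> [0,[0,0]] -> [0,0] -> 0
assert steps_to_zero(((None, None), None)) == 5
```

Note that `subst` returns `new` without recursing into it, which is exactly the one-time (non-recursive) substitution described above.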
A theorem that is true in every model is provable, by Gödel's completeness theorem. Since this theorem is true in the standard model of the natural numbers but not provable, it follows that there are nonstandard models of the natural numbers in which it is false.
That is, there are models of Peano arithmetic which contain all of the natural numbers we know and love, plus some other ones on top of that, and there are Goodstein sequences using those extra "non-standard" natural numbers which do not terminate at zero.
>Since this theorem is true for the standard model of the natural numbers but not provable
I've always found the provable vs true comparison confusing.
How can we say the statement is true under the standard model if we cannot prove it? I understand that it could be true, but how do we know it?
If it's proven with second order arithmetic, then this implies it is true under the standard model too?
Or are there statements true independently of the axiomatic system you use to prove them?
(Apologies if this is too off-topic)
The provability of a statement depends upon which system you are in. For instance, within PA one can’t prove that PA is consistent but within ZFC one can prove that PA is consistent. We can say of a statement: Statement A can’t be proven in a given axiomatic system but it can be proven in a different system.
Let’s assume the Natural Numbers form a consistent system. Let’s collect all true statements in this system and use that collection as our axioms. It is now the case that every true statement about the Natural Numbers can be proven in this system. The problem with this system of axioms is that there is no effective procedure for determining whether a statement is an axiom or not. It is not a useful system.
Every true statement can be proven in some system. The incompleteness theorems show that we can’t have a relatively simple set of axioms that is powerful enough to prove all true statements about the Natural Numbers. Every simple enough set of axioms for the Natural Numbers will have nonstandard implementations (models) in which some statements are false, even though they are true in the Natural Numbers.
> The existence of non-standard models of arithmetic can be demonstrated by an application of the compactness theorem. To do this, a set of axioms P* is defined in a language including the language of Peano arithmetic together with a new constant symbol x. The axioms consist of the axioms of Peano arithmetic P together with another infinite set of axioms: for each numeral n, the axiom x > n is included. Any finite subset of these axioms is satisfied by a model that is the standard model of arithmetic plus the constant x interpreted as some number larger than any numeral mentioned in the finite subset of P*. Thus by the compactness theorem there is a model satisfying all the axioms P*. Since any model of P* is a model of P (since a model of a set of axioms is obviously also a model of any subset of that set of axioms), we have that our extended model is also a model of the Peano axioms. The element of this model corresponding to x cannot be a standard number, because as indicated it is larger than any standard number.
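The quoted compactness argument can be written out in symbols. This is just a sketch of what the quote describes, with $\underline{n}$ denoting the numeral for $n$:

```latex
P^{*} \;=\; P \,\cup\, \{\, x > \underline{n} \;:\; n \in \mathbb{N} \,\}
```

Any finite subset of $P^{*}$ mentions only finitely many of the new axioms, say up to $x > \underline{N}$, so the standard model with $x$ interpreted as $N+1$ satisfies it. Compactness then yields a model of all of $P^{*}$, in which the interpretation of $x$ exceeds every numeral and is therefore non-standard.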
So basically take Peano arithmetic and say: "Hey Peano Arithmetic, what's the largest number you have? Oh, n, you say? Well, there exists x > n. Haha." Seems like a childish game.
It's philosophical. It's either turtles all the way down or axiomatic systems. With axiomatic systems you'll always get things like this, and this is what keeps mathematicians awake at night.
The shocking part is just how much you can't get away from stupid problems like this. Hilbert believed we could settle these kinds of issues once and for all; then Gödel proved that we can't. So in a sense there isn't anything like "the actual natural numbers without shenanigans"; there always are shenanigans.
There always are shenanigans if you pass a certain threshold of expressive power. Can we do most or all useful/interesting stuff below that threshold, though?
Yes: the threshold is having addition and multiplication over the infinite naturals. I don't think it's obvious that all three of these are needed together for all interesting applications. For instance, various types of finitism eliminate the infinities.
Not at all. There is no largest natural number to begin with, even in the standard model. One way to conceptualize non-standard natural numbers would be to consider numbers with an infinite number of digits. Any such number would be greater than any ordinary natural number, and no first-order axiomatization of arithmetic can exclude every possible way of building such numbers.
The main issue is that first-order logic can't define the concept of "finite". There is no way for a first-order system to express a statement like "There are only finitely many x such that P(x) holds." Introducing such a finiteness quantifier or predicate takes you outside first-order logic.
If it were possible, then one could introduce an axiom along the lines of "For all x, x has a finite number of predecessors," and then we could eliminate all non-standard natural numbers.
I think I'm philosophically a finitist / constructivist, which seems to be very 19th century and out of vogue with modern Mathematicians AFAICT.
> There is no largest natural number to begin with even in the standard model.
I'm aware of that. I get Peano Arithmetic more or less.
> One way to conceptualize non-standard natural numbers would be to consider natural numbers with an infinite number of digits. Any such number would be greater than any natural number, and no first order model of arithmetic can exclude every possible way to express such numbers.
I'm not sure you can argue such a number is "greater" than any natural number? They seem incomparable.
>Laurence Kirby and Jeff Paris showed that it is unprovable in Peano arithmetic (but it can be proven in stronger systems, such as second-order arithmetic). This was the third example of a true statement about natural numbers that is unprovable in Peano arithmetic, after the examples provided by Gödel's incompleteness theorem and Gerhard Gentzen's 1943 direct proof of the unprovability of ε0-induction in Peano arithmetic.
It seems math is never perfect but always perfectible. A perfect system wouldn't have paradoxes. One common example is Russell's paradox.
We arrive at different conclusions by choosing a different set of axioms and constructing everything else based on that set. We can have geometries where parallel lines intersect and geometries where they don't.
We don't want perfect systems but useful ones. A perfect axiom system wouldn't have true but unprovable statements either, yet, as we learned a while ago, any such "perfect" system would be unable to express even basic arithmetic.
During the past century, it has been demonstrated again and again that the "ultimate truth" either doesn't exist or cannot be attained (Incompleteness Theorem, Uncertainty Principle, Observable Universe, not to mention a million philosophical and psychological problems).
If that is what science strives for, it's a lost cause. Fortunately, lots of valuable things can be achieved without chasing such lofty, unattainable goals.
If "ultimate truth" cannot be attained, but can be asymptotically approached, is striving for it still a lost cause if the knowledge gained in the approach might be useful?
I'll never be able to run a marathon as fast as Eliud Kipchoge. That doesn't mean it's a lost cause for me to try to get my marathon time as close to his as possible - I can still achieve valuable things despite the goal being lofty and unattainable. Further, I might achieve more through chasing an unattainable goal, than I would if I'd set my sights lower.
It's also worth remembering the aphorism that people who say "it can't be done" are always being interrupted by somebody doing it.
Actually… reaching the very limits of knowability is the crowning achievement of science, and your first two examples have very important engineering implications.
Given those payoffs, perhaps it makes sense to say that wherever in science we aren't yet at the boundary of the knowable, there's still something worth discovering.
I disagree strongly with your reading of those results. Each says that the "ultimate truth" is complicated: not every question has an easy answer. That doesn't mean there isn't an ultimate truth; rather, "that question cannot be answered" is itself part of the ultimate truth. Exploring the bounds of knowability is incredibly important to the dual side of science: exploring the bounds of knowledge. In my lofty opinion, the very purpose of humanity is to bring these bounds together.
Kelsey has a new channel on YouTube called Chalk Talk. It got some traction with a few lovely videos, but it's been some time since she made one. I suspect there is a funding issue.
The author of this article writes that the theorem cannot be proven in "Peano arithmetic". But that's only true if by that he means "first-order Peano arithmetic", a system which allows for absurd "non-standard numbers". When ordinary mathematicians talk about "Peano arithmetic", they arguably have the second-order induction axiom in mind, not the first-order infinite induction axiom scheme. And they most certainly have the natural numbers in mind, not some possibly absurd "numbers" with infinitely many predecessors. And in this normal version of Peano arithmetic, the theorem can be proven.
The link to "Peano arithmetic" at the top of the Goodstein page takes you to the Peano axioms page. That page says the Peano axioms are "close to" second-order arithmetic, and it also draws an informal distinction between the Peano axioms and Peano arithmetic. But there's no Wikipedia page for Peano arithmetic.
So I'm curious if this theorem is unprovable in Peano axioms, or just Peano arithmetic. If the latter, then the link at the top of the Goodstein page is rather misleading, unless you're paying close enough attention to notice the blurb about the distinction between Peano axioms and Peano arithmetic.
Logicians and set theorists use this terminology. But everyone else just uses the second-order induction axiom when talking about arithmetic, without explicitly invoking first- or second-order logic. Stewart Shapiro made this point in his book "Thinking About Mathematics".
According to a different page on Wikipedia, the Peano axioms are second-order but "Peano arithmetic" is first-order.
> The ninth, final axiom is a second-order statement of the principle of mathematical induction over the natural numbers, which makes this formulation close to second-order arithmetic. A weaker first-order system called Peano arithmetic is obtained by explicitly adding the addition and multiplication operation symbols and replacing the second-order induction axiom with a first-order axiom schema.
No, it's pretty standard. Peano's axioms from the 19th century had an induction axiom in second-order logic, since it quantified over predicates. Peano arithmetic (PA), also called first-order arithmetic, came later: it is a first-order theory whose induction axioms form an infinite schema. To confuse things further, second-order arithmetic (SOA) is also a first-order theory, whose objects are naturals and sets of naturals.
It's also pretty standard to talk about "first-order Peano arithmetic" and "second-order Peano arithmetic". This is much clearer, but inconsistent with the other usage you describe.
Moreover, non-logicians don't talk about "first-order" or "second-order" logic at all. They just express the induction axiom in plain English, and in this case it is (as Stewart Shapiro argued) equivalent to the second-order axiom.
I think the article mentions this clearly in the introduction:
“it is unprovable in Peano arithmetic (but it can be proven in stronger systems, such as second-order arithmetic).”
[1] https://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem#TREE_...
[2] https://oeis.org/A367433