abetusk's comments | Hacker News

I asked OpenAI.

Take an $n$, chosen from $[N,2N]$. Take its prime factorization $n = \prod_{j=1}^{k} q_j^{a_j}$. Take the logarithm $\log(n) = \sum_{j=1}^{k} a_j \log(q_j)$.

Divide by $\log(n)$ to get the sum equal to $1$ and then define a weight term $w_j = a_j \log(q_j)/\log(n)$.

Think of $w_j$ as "probabilities". We can define an entropy of sorts as $H_{factor}(n) = - \sum_j w_j \log(w_j)$.

The mean entropy is, apparently:

$$ E_{n \in [N,2N]}[ H_{factor}(n) ] = E_{n \in [N,2N]}\left[ - \sum_j w_j(n) \log(w_j(n)) \right] $$

Heuristics (such as Poisson-Dirichlet) suggest this converges to 1 as $N \to \infty$.

OpenAI tells me that the reason this might be interesting is that it gives information on whether a typical integer is built from one or a few dominant primes or from many smaller ones. A mean entropy of 1 is saying (apparently) that there is a dominant prime factor but not an overwhelming one. (I guess) a mean tending to 0 means a single dominant prime, a mean tending to infinity means many comparably small factors (?) and oscillations mean no stable structure.
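
For anyone who wants to poke at this numerically, here's a minimal Python sketch of the quantity above (the naive trial-division factorization and the choice of $N$ are mine, purely for illustration):

    import math
    from collections import Counter

    def factorize(n):
        """Naive trial division; fine for small illustrative N."""
        factors = Counter()
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    def h_factor(n):
        """H_factor(n) with weights w_j = a_j * log(q_j) / log(n)."""
        logn = math.log(n)
        ws = [a * math.log(q) / logn for q, a in factorize(n).items()]
        return -sum(w * math.log(w) for w in ws if w > 0)

    N = 10**5
    print(sum(h_factor(n) for n in range(N, 2 * N)) / N)  # empirical mean of H_factor over [N, 2N)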


Wolfram has failed to live up to his promise of providing tools to make progress on fundamental questions of science.

From my understanding, there are two ideas that Wolfram has championed: Rule 110 is Turing machine equivalent (TME) and the principle of computational equivalence (PCE).

Rule 110 was shown to be TME by Cook (hired by Wolfram) [0] and was used by Wolfram as, in my opinion, empirical evidence to support the claim that Turing machine equivalence is the norm, not the exception (PCE).
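
For anyone who hasn't played with it, the rule itself is tiny. Here's a minimal sketch of iterating Rule 110 on a wrapped finite tape (the width, step count, and seed are arbitrary choices of mine):

    RULE = 110  # bit i of 110 is the new cell value for neighborhood pattern i

    def step(cells):
        """One synchronous update; the tape wraps around at the edges."""
        n = len(cells)
        return [
            (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    row = [0] * 63 + [1]  # single seed cell at the right edge
    for _ in range(32):
        print("".join("#" if c else "." for c in row))
        row = step(row)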

At the time of writing of ANKOS (A New Kind of Science), there was a popular idea that "complexity happens at the edge of chaos". PCE pushes back against that, effectively saying the opposite: that you need a conspiracy to prevent Turing machine equivalence. I don't want to overstate the idea but, in my opinion, PCE is important and provides some, potentially deep, insight.

But, as far as I can tell, it stops there. What results has Wolfram proved, or paid others to prove? What physical phenomena has Wolfram explained? Entanglement still remains a mystery, the MOND vs. dark matter debate rages on, others have made progress on busy beaver numbers, topology, Turing machine lower bounds and relations between run-time and space, etc. etc. The world of physics, computer science, mathematics, chemistry, biology, and most other fields, continues on using classical tools, and newly developed ones independent of Wolfram, that have absolutely nothing to do with cellular automata.

Wolfram is building a "new kind of science" tool but has failed to provide any use cases where the tool would actually help advance science.

[0] https://en.wikipedia.org/wiki/Rule_110



> I have four reasons ...

> First, for independent programmers, I think it's incredibly simple and straightforward to move your personal open source projects off of GitHub.

> Second, although you likely don't pay GitHub to host your open-source projects, they still make money from them!

> Third, GitHub's web interface has been in a steepening decline since the Microsoft acquisition ...

> Finally, I think open source communities, with roots in hacker culture from the 80s and 90s, form a particularly fertile soil for this sort of action.

I'm a programmer. I've set up Gogs, run various Git repos remotely and locally. I understand how simple it is. Simplicity is not the issue.

I host many open source projects on Github, gratis, care of Microsoft. They make money from them? Excuse me while I clutch my pearls.

The web interface is nice enough so that it sets the standard by which I judge other front end GUI wrappers around Git. Is it in decline? I don't know, maybe, but it's still good enough from my perspective. Using Gitlab or Sourcehut is painful. I'm glad they both exist but the UI, in my opinion, is not as good as Github.

Github is, for me, about sociability. I'll go where the people are. I can host my open projects, repos, blog posts, etc. on a server I control but that's not the point. I want people to see my projects, be able to participate in a meaningful way and interact socially with other projects. In theory, all of this can happen on a private server. In practice, the people are what make the platform attractive.

There are decentralized suggestions in the post, which I appreciate, and I'd like to see more information on how to use them and build a community around those, as that's the only real alternative to centralized platforms that I can envision.


I think you have it wrong. Wolfram's claim is that for a wide array of small (s,k) (including s <= 4, k <= 3), there's complex behavior and a profusion of (provably?) Turing machine equivalent (TME) machines. At the end of the article, Wolfram talks about awarding a prize in 2007 for a proof that (s=2,k=3) was TME.

The `s` stands for states and `k` for colors, without talking at all about tape length. One way to say "principle of computational equivalence" is that "if it looks complex, it probably is". That is, TME is the norm, rather than the exception.
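
To make the (s,k) bookkeeping concrete, here's a minimal sketch of stepping an s-state, k-color machine on an unbounded blank tape. The transition table is a made-up (s=2, k=2) example of mine, not Wolfram's (2,3) machine:

    from collections import defaultdict

    # (state, color) -> (color to write, head move, next state)
    DELTA = {
        (0, 0): (1, +1, 1),
        (0, 1): (1, -1, 0),
        (1, 0): (1, -1, 0),
        (1, 1): (0, +1, 1),
    }

    def run(steps):
        tape = defaultdict(int)  # color 0 is the blank symbol
        state, head = 0, 0
        for _ in range(steps):
            write, move, state = DELTA[(state, tape[head])]
            tape[head] = write
            head += move
        return tape

    print(sorted(run(20).items()))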

If PCE holds, this probably means that you can make up for the clunky computational power of small (s,k) machines by conditioning large swathes of the input tape to overcome the limitation. That is, you have unfettered access to the input tape and, with just a sprinkle of TME, you can eke out computation by fiddling with the input tape to get the (s,k) machine to run how you want.

So, if finite-size scaling effects were actually in play, they would only work in Wolfram's favor. If there's a profusion of small TME (s,k) machines, one would probably expect computation to only get easier as (s,k) increases.

I think you also have the random k-SAT business wrong. There's this idea that "complexity happens at the edge of chaos" and I think this is pretty much clearly wrong.

Random k-SAT is, from what I understand, effectively almost surely polynomial-time solvable. Below the critical threshold, where instances are almost surely satisfiable, I think something as simple as WalkSAT will find a solution. Above the threshold, where instances are almost surely unsatisfiable, it's easy to determine in the negative that the instance is unsolvable (I'm not sure if DPLL works, but I think something does?). Near, or even "at", the threshold, my understanding is that something like survey propagation effectively solves this [0].
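
For reference, WalkSAT itself is simple enough to sketch in a few lines. This is just the generic local-search idea with the simplest "fewest unsatisfied clauses after the flip" scoring, nowhere near a tuned solver:

    import random

    def walksat(clauses, n_vars, max_flips=100_000, p=0.5):
        """clauses: list of lists of non-zero ints, DIMACS-style (-3 means 'not x3')."""
        assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}

        def satisfied(lit):
            return assign[abs(lit)] == (lit > 0)

        def num_unsat():
            return sum(not any(satisfied(l) for l in c) for c in clauses)

        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(satisfied(l) for l in c)]
            if not unsat:
                return assign
            clause = random.choice(unsat)
            if random.random() < p:
                var = abs(random.choice(clause))  # random-walk move
            else:
                # greedy move: flip the variable leaving the fewest clauses unsatisfied
                def score(v):
                    assign[v] = not assign[v]
                    s = num_unsat()
                    assign[v] = not assign[v]
                    return s
                var = min({abs(l) for l in clause}, key=score)
            assign[var] = not assign[var]
        return None  # gave up; says nothing about unsatisfiability

    # e.g. (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    print(walksat([[1, -2], [2, 3], [-1, -3]], 3))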

k-SAT is a little clunky to work with, so you might take issue with my take on it being solvable, but if you take something like Hamiltonian cycle on (Erdos-Renyi) random graphs, it has a phase transition, just like k-SAT (and a host of other NP-Complete problems), yet it provably has an almost-sure polynomial-time algorithm to determine Hamiltonicity, even at the critical threshold [1].

There's some recent work on choosing "random" k-SAT instances from different distributions, and I think that's more promising for finding genuinely difficult random instances, but I'm not sure there's actually been a lot of work in that area [2].

[0] https://arxiv.org/abs/cs/0212002

[1] https://www.math.cmu.edu/~af1p/Texfiles/AFFHCIRG.pdf

[2] https://arxiv.org/abs/1706.08431


To me, this reads like a profusion of empirical experiments without any cohesive direction or desire towards deeper understanding.

Yeah, Stephen Wolfram is too often grandiose, thereby missing the hard edges.

But in this case, given how hard P=NP is, it might create wiggle room for progress.

Ideally it would have gone on to say that, in view of lemma/proof/conjecture X, sampling enumerated programs might shine light on ... no doubt that'd be better.

But here I'm inclined to let it slide if it's a new attack vector.


`xdotool` is awesome and this is the first I'm hearing of it. Thanks.

Do you have any other command line tool recommendations?


I think your approach is pretty much fundamentally flawed.

Put it this way: let's say someone recorded typing in the paragraph that you presented, saving the keystrokes, pauses, etc. Now they replay it back, with all the pauses and keystrokes, maybe with `xdotool` as above. How could you possibly know the difference?

Your method is playing a statistical game of key presses, pauses, etc. Anyone who understands your method will probably not only be able to create a distribution that matches what you expect but could, in theory, create something that looks completely inhuman but will sneak past your statistical tests.
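
To make the replay point concrete: a recorded session is really just a list of (key, delay) pairs, and something like the following plays it back with the original timing. This assumes an X11 session with `xdotool` installed; the `recording` list is a made-up stand-in for whatever was captured:

    import subprocess
    import time

    # made-up stand-in for a captured session: (character, seconds since previous keystroke)
    recording = [("H", 0.00), ("e", 0.14), ("l", 0.08), ("l", 0.19), ("o", 0.07)]

    for char, delay in recording:
        time.sleep(delay)  # reproduce the original human inter-keystroke timing
        # `xdotool type` sends the text to the currently focused window
        subprocess.run(["xdotool", "type", "--delay", "0", char], check=True)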


I'm no expert but, from what I understand, the idea is that they found two 3D shapes (maybe 2D skins in 3D space?) that have the same mean curvature and metric but are topologically different (and aren't mirror images of each other). This is the first (non-trivial) pair of finite (compact) shapes that have been found.

In other words, if you're an ant on one of these surfaces and are using mean curvature and the metric to determine what the shape is, you won't be able to differentiate between them.

The paper has some more pictures of the surfaces [0]. Wikipedia has already been updated even though the result is only from Oct 2025 [1].

[0] https://link.springer.com/article/10.1007/s10240-025-00159-z

[1] https://en.wikipedia.org/wiki/Bonnet_theorem


To be precise, the mean curvature and metric are the same but the immersions are different (they're not related by an isometry of the ambient space).

Topologically they're the same (the example found was different immersions of a torus).
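
In symbols (my paraphrase, not the authors' exact statement): they found two immersions $f_1, f_2 : T^2 \to \mathbb{R}^3$ with the same induced metric and the same mean curvature function,

$$ f_1^{*}\langle \cdot, \cdot \rangle = f_2^{*}\langle \cdot, \cdot \rangle, \qquad H_{f_1} = H_{f_2}, $$

yet with no isometry $A$ of $\mathbb{R}^3$ satisfying $f_2 = A \circ f_1$.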


Is it the case that 'they' are simply two ways of immersing the same two tori in R^3 such that the complements in R^3 of the two identical tori are topologically different?

If so, isn't this just a new flavor of higher-dimensional knot theory?


They don't appear to care about the images of the immersions or their complements, aside from them not being related by an isometry of R^3. They're not doing any topology with the image.

In other words, they have two immersions from the torus to R^3, whose induced metric and mean curvature are the same, and whose images are not related by an isometry of R^3. I didn't see anything about the topology of the images per se; that doesn't seem to be the point here.


As others mentioned, tool use wasn't restricted to Homo sapiens. I think this makes sense, no? We didn't spontaneously start using tools; it must have evolved incrementally in some way.

I think we see shades of this today. Bearded Capuchin monkeys chain together a complex series of tasks and use tools to break nuts. From a brief documentary clip I saw [0], they first take the nut and strip away the outer layer of skin, leave it to dry out in the sun for a week, then find a large, soft-ish rock as the anvil and a heavy but smaller rock as the hammer to break open the nut. So they had to not only figure out that nuts need to be pre-shelled and dried, but that they needed a softer rock for the anvil and a harder rock for the hammer. They also need at least some type of bipedal ability to carry the rock in the first place and use it as a hammer.

Apparently some white-faced Capuchins have figured out that they can soak nuts in water to soften them before hammering them open [1].

[0] https://www.youtube.com/watch?v=fFWTXU2jE14

[1] https://www.youtube.com/watch?v=N7sJq2XUiy8


No, we could have had something that other, previous species didn't, something that unlocked the use of tools. Otherwise, if no species could ever be the first without it being deemed spontaneous, no new skills could ever be unlocked.


This process also displays coordination within a group and memory. Quite impressive.

