x86 has decades of knowhow and a zillion transistors to spend on making the memory pipeline, TLB caching & prefetching etc. etc. really really good. They work as well as they do despite the 4k base page size, not because of it.
If you'd start from a clean sheet today you'd probably end up with a somewhat bigger base page size. Not hugely larger though, as that wastes a lot of memory for most applications. Maybe 16k like some ARM chips use?
Not sure about this argument, do you have any references?
In a LWR, if the coolant/moderator boils away, sure, the reactivity goes down. But there is plenty enough decay heat left to melt all the fuel that can then flow into a puddle of suitable geometry and go boom. Hypothetically speaking, at least.
I suppose in practice most LWRs use lightly enriched fuel, so it's very hard to get enough material close enough together to make it critical, let alone supercritical, without a moderator of some sort. Of course, plenty of research reactors, naval reactors etc. have operated with very highly enriched fuel (90+%?), but even these have AFAIU so far managed without accidentally turning themselves into nuclear bombs.
Seems most contemporary civilian fast reactor designs are designed to operate with HALEU fuel, where the limit is (somewhat arbitrarily) set at 20%. A lot higher enrichment than your typical LWR, but still much lower than you see in weapons, and you still need quite a lot of it before it can go boom.
It's straightforward. Consider what would happen (for example) if all the fuel in a reactor is compressed into a more compact configuration.
In a thermal reactor, there's no problem: compacting the fuel squeezes out the moderator, and without moderation the lightly enriched fuel can't sustain a chain reaction. There was massive rearrangement and compaction of melted fuel in the TMI accident, but criticality was never going to be a serious issue, for the fundamental reasons I gave above.
In a fast reactor? It can only become more reactive. Anything else there was only absorbing neutrons, not helping, and the geometric change reduces neutron leakage.
Edward Teller somewhat famously warned about the issue in 1967, in a trade magazine named "Nuclear News":
“For the fast breeder to work in its steady state breeding condition, you probably need half a ton of plutonium. In order that it should work economically in a sufficiently big power producing unit, it probably needs more than one ton of plutonium. I do not like the hazard involved. I suggested that nuclear reactors are a blessing because they are clean. They are clean as long as they function as planned, but if they malfunction in a massive manner, which can happen in principle, they can release enough fission products to kill a tremendous number of people.
… But if you put together two tons of plutonium in a breeder, one tenth of one percent of this material could become critical. I have listened to hundreds of analyses of what course a nuclear accident could take. Although I believe it is possible to analyze the immediate consequences of an accident, I do not believe it is possible to analyze and foresee the secondary consequences. In an accident involving plutonium, a couple of tons of plutonium can melt. I don’t think anyone can foresee where one or two or five percent of this plutonium will find itself and how it will get mixed with other material. A small fraction of the original charge can become a great hazard.”
(Natrium is not a breeder but the same argument holds.)
That no fast reactors have yet exploded is of course no great argument. How many fast reactors have been built, particularly large ones? Not many. And we've already seen a commercial fast reactor suffer fuel melting (Fermi 1).
Some sodium cooled designs have used a closed-cycle gas turbine with nitrogen as the working fluid in the secondary circuit, in order to avoid the sodium-water reaction issues of a traditional steam Rankine secondary circuit.
There are also fast reactor designs using lead as the coolant rather than sodium. These are interesting, but less mature than sodium cooling. Sodium is better from a cooling and pumping perspective though.
A eutectic is an alloy that has a lower melting point than any of its components.
Lead-bismuth eutectic or LBE is a eutectic alloy of lead (44.5 at%) and bismuth (55.5 at%) used as a coolant in some nuclear reactors, and is a proposed coolant for the lead-cooled fast reactor, part of the Generation IV reactor initiative. It has a melting point of 123.5 °C/254.3 °F (pure lead melts at 327 °C/621 °F, pure bismuth at 271 °C/520 °F) and a boiling point of 1,670 °C/3,038 °F.
Yes, some lead cooled reactor designs have used LBE, others pure lead. Though AFAIU so far the only lead cooled reactors that have actually been built and operated in production have used LBE. There is a pure lead cooled reactor under construction that should be started up in a few years if the current schedule holds.
I have a small benchmark program doing tight binding calculations of carbon nanostructures that I have implemented in C++ with Eigen, C++ with Armadillo, Fortran, Python/numpy, and Julia. It's been a while since I've tested it but IIRC all the other implementations were about on par, except for python which was about half the speed of the others. Haven't tried with numba.
To bring Julia performance on par with the compiled languages I had to do a little bit of profiling and tweaking using @views.
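For what it's worth, numpy has a related pitfall to the one `@views` addresses in Julia: basic slices are views into the parent array, while fancy indexing silently allocates a copy. A toy illustration of the difference (not the benchmark code itself):

```python
import numpy as np

a = np.arange(10)
v = a[2:5]        # basic slicing returns a view: no data is copied
c = a[[2, 3, 4]]  # fancy indexing returns a fresh copy

a[2] = 99
print(v[0])         # 99 -- the view sees the mutation
print(c[0])         # 2  -- the copy does not
print(v.base is a)  # True -- the view's memory belongs to a
```

In hot loops, accidental copies like `c` (or temporaries from whole-array expressions) are a common source of exactly the kind of 2x gap described above.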
I don't think the situation is that comparable to python, since in python the library has to be present at runtime. And with the dysfunctional python packaging there are potentially a lot of grey hairs saved by not requiring anything beyond the stdlib.
With Rust, it's an issue at compile-time only. You can then copy the binary around without having to worry about which crates were needed to build it.
Of course, there is the question of trust and discoverability. Maybe Rust would be well served by a larger stdlib, or some other mechanism of saying "this is a collection of high-quality, well maintained libraries, prefer these if applicable". Perhaps the thing the blog post author hints at would be a solution without having to bundle everything into the stdlib; we'll see.
But I'd be somewhat wary of shoveling a lot of stuff into stdlib, it's very hard to get rid of deprecated functionality. E.g. how many command-line argument parsers are there in the python stdlib? 3?
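The answer is indeed three, and all of them still ship with CPython, which illustrates the point:

```python
# Three generations of stdlib argument parsing, oldest to newest.
# optparse has been deprecated in favor of argparse since Python 3.2,
# yet it can't realistically be removed.
import getopt
import optparse
import argparse

# 1. getopt (C-style, since forever)
opts, rest = getopt.getopt(["-v", "file.txt"], "v")
print(opts, rest)  # [('-v', '')] ['file.txt']

# 2. optparse (deprecated)
op = optparse.OptionParser()
op.add_option("-v", action="store_true", dest="verbose")
values, _ = op.parse_args(["-v"])
print(values.verbose)  # True

# 3. argparse (current recommendation)
p = argparse.ArgumentParser()
p.add_argument("-v", action="store_true")
print(p.parse_args(["-v"]).v)  # True
```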
Since Rust cares very much about zero-overhead abstractions and performance, I would guess if something like this were to be implemented, it would have to be via some optional (crate/module/function?) attributes, and the default would remain the existing monomorphization style of code generation.
Swift’s approach still monomorphizes within a binary, and only has runtime costs when calling code across a dylib boundary. I think rust could do something like this as well.
If Rust and static linking were to become much more popular, Linux distros could adopt some rsync/zsync like binary diff protocol for updates instead of pulling entire packages from scratch.
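The core idea of such a delta protocol can be sketched in a few lines. This is a toy fixed-offset version (real rsync/zsync use rolling checksums to find matches at arbitrary offsets); all names here are made up for illustration:

```python
# Toy rsync-style delta: the server knows hashes of the old file's
# fixed-size blocks and the client only downloads blocks it lacks.
import hashlib

BLOCK = 4  # absurdly small block size, just for the demo

def block_hashes(data: bytes) -> list[str]:
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def make_delta(old: bytes, new: bytes) -> list[tuple]:
    known = {h: i for i, h in enumerate(block_hashes(old))}
    delta = []
    for j in range(0, len(new), BLOCK):
        chunk = new[j:j + BLOCK]
        h = hashlib.sha256(chunk).hexdigest()
        if h in known:
            delta.append(("copy", known[h]))  # client already has this block
        else:
            delta.append(("data", chunk))     # must actually be transferred
    return delta

def apply_delta(old: bytes, delta: list[tuple]) -> bytes:
    out = b""
    for op, arg in delta:
        out += old[arg * BLOCK:(arg + 1) * BLOCK] if op == "copy" else arg
    return out

old = b"aaaabbbbccccdddd"
new = b"aaaaXXXXccccdddd"
d = make_delta(old, new)
assert apply_delta(old, d) == new
print(sum(1 for op, _ in d if op == "data"))  # 1 -- only one block resent
```

For a statically linked binary where one dependency changed, most blocks would match and only the patched regions would go over the wire.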
Even then, they would still need to rebuild massive amounts on updates. That is nice in theory, but see the number of bugs reported in Debian because upstream projects fail to rebuild as expected. "I don't have the exact micro version of this dependency I'm expecting" is one common reason, but there are many others. It's a pretty regular thing, and therefore would be burdensome to distro maintainers.
Static linking used to be popular, as it was the only way of linking in most computer systems, outside expensive hardware like Xerox workstations, Lisp machines, ETHZ, or what have you.
One of the first consumer systems to support dynamic linking was the Amiga, with its Libraries and DataTypes.
We moved away from building full-blown OSes with static linking, with the exception of embedded deployments and firmware, for many reasons.
Servo has a distinct design goal that sets it apart from its predecessor within Mozilla, and has already produced offshoots that have made their way directly into Firefox.
Its purpose is not to reinvent everything. It’s not a hype project.
Servo's original purpose was to reinvent everything for Firefox to modernize the codebase, and make it secure and more performant (e.g. CSS styling engine, HTML parser, etc.) So it actually fits that purpose pretty well.
Doesn't FreeIPA work with EntraID? I used to use it with Exchange and it worked pretty well (or, as well as any non-Microsoft product that has to integrate with Microsoft products, at least).
Yes, you can get very close to that API with this extension + existing Vulkan extensions. The main difference is that you still kind of need opaque buffer and texture objects instead of raw pointers, but you can get GPU pointers for them and still work with those. In theory I think you could do the malloc API design there but it's fairly unintuitive in Vulkan and you'd still need VkBuffers internally even if you didn't expose them in a wrapper layer.
I've got a (not yet ready for public) wrapper on Vulkan that mostly matches this blog post, and so far it's been a really lovely way to do graphics programming.
The main thing that's not possible at all on top of Vulkan is his signals API, which I would enjoy seeing - it could be done if timeline semaphores could be waited on/signalled inside a command buffer, rather than just on submission boundaries. Not sure how feasible that is with existing hardware though.
It's a baby step in this direction, e.g. from Seb's article:
> Vulkan’s VK_EXT_descriptor_buffer (https://www.khronos.org/blog/vk-ext-descriptor-buffer) extension (2022) is similar to my proposal, allowing direct CPU and GPU write. It is supported by most vendors, but unfortunately is not part of the Vulkan 1.4 core spec.
The new `VK_EXT_descriptor_heap` extension described in the Khronos post is a replacement for `VK_EXT_descriptor_buffer` which fixes some problems but otherwise is the same basic idea (e.g. "descriptors are just memory").