So today I found out the cheapest E27 smart bulb that is 1500 lm bright and >90 CRI is actually an Ikea Kajplats Matter over Thread bulb that costs 9.99€
And it has one trick up its sleeve:
▶ If you turn it off and on 12 times it switches to a Zigbee mode, so I was able to integrate it into my existing HomeAssistant setup even more easily
Validating a core to server standards takes significantly longer.
V4 cores should be out this year using X925 and C1 Ultra-based V5 will probably be 2027-2028.
I suspect that X4 is already fast enough to beat EPYC in per-core performance when using the whole chip. ARM caught up/passed x86 in IPC all the way back around A77/78 in 2019-2020. They are now much faster per clock and hitting about the same all-core clockspeeds as standard EPYC (let alone zen5c EPYC).
The big issue is that Graviton5 is already starting to hit the market and uses the same v3 cores. A lot of marketshare for this chip will probably come from taking Ampere customers.
Cortex-X4 a.k.a. Neoverse V3 has significantly lower performance per core than Zen 5.
However, Neoverse V3 has a smaller die area per core, so you could implement more cores per socket than with Zen 5. That has not been done yet, though: these new CPUs have only 136 cores per socket versus 192 cores per socket for Zen 5.
For programs that do not use array operations, i.e. which do not use AVX/AVX-512 instructions, Neoverse V3 has better performance per watt than Zen 5. But that changes for programs that benefit from AVX/AVX-512, where Zen 5 has better performance per watt.
Moreover, Zen 5 is already old. By the end of the year there will be Zen 6, which will be the real competitor for these new Arm CPUs, and Zen 6 will have better performance per watt, even more cores per socket and even more performance per core.
>Cortex-X4 a.k.a. Neoverse V3 has significantly lower performance per core than Zen 5.
I don't quite believe that, especially per core. In SPECint2017 from David Huang [1], Zen5 (HX 370) @ 5.1 GHz boost = 9.9 points, so Zen5 is approximately 1.94 points per GHz. But
Neoverse V3 (Cortex-X4) @ 3.2 GHz = 8.2 points, so V3 is approximately 2.56 points per GHz.
Arm 64C Neoverse V3 boosts to 3.7 GHz. AMD 64C Zen5 (9575F) boosts to 5 GHz. So this rough napkin math would show that at maximum boost, Neoverse V3 lands right around maximum-boost Zen5.
Zen5 fares much worse at base clocks: Arm's 64C CPU offers roughly +40% more SPECint perf per core than Zen5, because AMD downclocks to 3.3 GHz while Arm is still up at 3.5 GHz and has a large IPC advantage on top.
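For reference, here is the napkin math spelled out. All inputs are the figures quoted in this thread (Huang's SPECint2017 scores and the clock speeds cited above), taken as assumptions rather than fresh measurements:

```python
# Per-core SPECint2017 napkin math using the numbers cited in the thread.

def score_per_ghz(spec_score: float, clock_ghz: float) -> float:
    """SPECint2017 points per GHz -- a crude IPC proxy."""
    return spec_score / clock_ghz

zen5_per_ghz = score_per_ghz(9.9, 5.1)  # HX 370 @ 5.1 GHz boost -> ~1.94
v3_per_ghz = score_per_ghz(8.2, 3.2)    # Cortex-X4 @ 3.2 GHz    -> ~2.56

# Max-boost comparison: 64C V3 @ 3.7 GHz vs 64C Zen5 (9575F) @ 5.0 GHz
v3_boost = v3_per_ghz * 3.7      # ~9.5 points
zen5_boost = zen5_per_ghz * 5.0  # ~9.7 points -- roughly a tie

# Base-clock comparison: Arm @ 3.5 GHz vs AMD downclocked to 3.3 GHz
v3_base = v3_per_ghz * 3.5       # ~9.0 points
zen5_base = zen5_per_ghz * 3.3   # ~6.4 points
advantage = v3_base / zen5_base - 1  # ~+40% for the Arm part

print(f"per GHz:  Zen5 {zen5_per_ghz:.2f}, V3 {v3_per_ghz:.2f}")
print(f"at boost: Zen5 {zen5_boost:.1f}, V3 {v3_boost:.1f}")
print(f"at base:  Zen5 {zen5_base:.1f}, V3 {v3_base:.1f} ({advantage:+.0%})")
```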
Softbank still owns 90% of ARM and they finished their acquisition of Ampere only a few months ago in November 2025.
I'm a chip designer and a chip this complicated takes about 3 years from start to actual silicon so it would have started well before Softbank started their acquisition process of Ampere.
The press release says it was co-developed with Meta who has a growing custom chip team. Normally these chips like Amazon's Graviton or Google's Axion are designed for their own data center use only and rented to customers. This ARM chip sounds like Meta and other companies will all be able to buy chips for their own data centers.
I'm guessing Softbank will get ARM and Ampere to align on future chips or just merge Ampere completely into ARM.
Agreed. The ARM AGI CPU supports a newer version of the vector instructions and has matrix math extensions that the AmpereOne M doesn't, and it also has almost twice the memory bandwidth. On paper at least, the AGI CPU seems like a better choice for AI workloads. Ampere is really pushing the AI workload use cases for the AmpereOne M, so this makes their lives a lot harder.
Neoverse V3 is better than any core used or designed by Ampere, but it does not have matrix math extensions.
Neoverse V3 is the server version of Cortex-X4 and it is an Armv9.2-A CPU, with SVE2, but without SME/SME2.
The matrix math extensions, i.e. SME/SME2, are present only in the latest generation of Arm cores (the C1 cores), which implement the Armv9.3-A ISA, and also in recent Apple cores.
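On Linux/aarch64 you can see which of these extensions a core actually exposes in the `Features` line of `/proc/cpuinfo`. The flag names `sve2` and `sme` are the kernel's real hwcap strings, but the sample line below is only illustrative, not copied from a Neoverse V3 machine:

```python
# Check an aarch64 /proc/cpuinfo "Features" line for the vector/matrix
# extensions discussed above. Flag names follow Linux's aarch64 hwcaps.

def vector_extensions(features_line: str) -> dict:
    flags = set(features_line.lower().split())
    return {
        "SVE2": "sve2" in flags,  # Armv9 scalable vectors (Neoverse V3: yes)
        "SME": "sme" in flags,    # matrix extension (V3: no; C1 cores: yes)
    }

# Illustrative Features line resembling a Neoverse-V3-class core:
sample = "fp asimd sve sve2 i8mm bf16 flagm2"
print(vector_extensions(sample))  # {'SVE2': True, 'SME': False}
```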
Neoverse V3 is also used in AWS Graviton5 and in a few NVIDIA products, e.g. in Thor, and it is pretty much comparable to the Skymont and Darkmont Intel E-cores, which are used in the Lunar Lake, Arrow Lake, Panther Lake and Clearwater Forest CPUs.
In the past you had to run the benchmark through an x86 translation layer like Fex / Box64 to get Vulkan GPU results. That was unreliable and led to crashes on some platforms. Thankfully we can run it natively now.