It should be the opposite; I am a bit confused by LineageOS' statement here. The quarterly releases represent solid milestones on the way to the final Android release.
GrapheneOS claims that this made their rebasing much more efficient: instead of receiving a massive dump of all Android 15 at the end, developers receive incremental changes (the QPRs) to help them anticipate major changes in the code.
GrapheneOS only supports devices that are still supported by the OEM, and they generally seem to have very few modifications that touch frequently-changed parts of AOSP. In short, they can be relatively certain that nothing will break when they rebase; Google does the work for them.
On the other hand, LineageOS supports a lot of devices at the lower edge of compatibility, which means that (with Google pushing large changes quarterly instead of yearly) the build roster has to be reevaluated quarterly instead of yearly as well. This was not anticipated properly for the Android 14 (LineageOS 21) cycle, which resulted in 19 devices that could no longer be built on a previously supported major version (and therefore dropped from the roster completely).
In addition, the components that have been causing rebase conflicts each year now have the opportunity to cause rebase conflicts multiple times a year.
The OP makes an interesting point but doesn't address the main problem with high-level hardware languages: these kinds of languages don't allow you to describe exactly the hardware you want, they only allow you to describe its functionality, and then they generate hardware for said functionality. The problem is that you end up with hardware that is less optimized than if you had designed it in Verilog.
I work at a very big semiconductor company, and we ran some trials reimplementing the exact same hardware we had in Verilog in a high-level HDL. While development could be faster, we ended up with worse PPA (Power, Performance and Area). If you try to claw back that PPA, you just end up bypassing the advantages of high-level HDLs.
On top of that, it raises a lot of questions about verification: do you write your testbenches against the Chisel code or against the Verilog generated from it? If you do it in Chisel, how do you prove that Chisel didn't introduce bugs into the generated Verilog (which is what you will actually ship to the foundry for tape-out after synthesis and place & route)? If you do it in the generated Verilog, how do you trace bugs back to the Chisel code?
I do think that we need a new language, but not for design. Verilog/SystemVerilog is fine for hardware design; we don't need to reinvent the wheel here. Synthesis will land us in Verilog anyway, and quite frankly, we don't spend that much time writing Verilog for design. Hardware design is 5 lines of code and that's it. The real cost of hardware development is the other side of the coin: hardware verification.
If hardware design is 5 lines of code, hardware verification is 500. Writing testbenches and developing verification environments and flows is essentially normal programming, and we are stuck with SystemVerilog for that, which is a very bad programming language. Using SystemVerilog as a general-purpose programming language invites unintended bugs and poor programming constructs in your testbenches.
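To illustrate the point that testbench logic is ordinary software, here is a toy pure-Python sketch (not any real framework; all names are invented) of the reference-model-plus-random-stimulus pattern that most testbenches are built around:

```python
import random

# Toy sketch (names invented): a reference model driven by random
# stimulus, the kind of testbench logic that is plain programming
# and needs nothing SystemVerilog-specific.

class CounterModel:
    """Reference model of a saturating 8-bit counter."""

    def __init__(self):
        self.value = 0

    def tick(self, enable):
        # Increment on enable, saturating at 255, as the RTL should.
        if enable and self.value < 255:
            self.value += 1


def run_random_test(cycles=1000, seed=42):
    """Drive random enable pulses and return the model's final state.

    In a real flow the same stimulus would also be driven into the DUT
    and the two compared cycle by cycle; here we only run the model.
    """
    random.seed(seed)
    model = CounterModel()
    for _ in range(cycles):
        model.tick(random.random() < 0.5)
    return model.value
```

With roughly 500 enables over 1000 cycles the model saturates at 255; the point is only that stimulus generation, reference modeling, and checking are general-purpose programming tasks.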
This is what we should be trying to improve: verification, not design. We spend far too much time on hardware verification, and a lot of that time is spent dealing with the pitfalls of SystemVerilog as a programming language.
I wish people would invest more thought here rather than trying to make hardware design friendlier for programmers.
... these kinds of languages don't allow you to describe exactly the hardware you want, they only allow you to describe its functionality, and then they generate hardware for said functionality.
It is a common misconception that Chisel is yet another high-level synthesis language with these drawbacks, but it is not. It simply allows you to write your own high-level abstractions on top of exact low-level synthesis primitives.
You may find ROHD to be interesting, since it takes a very serious approach to verification and even has a simulator built-in!
https://intel.github.io/rohd-website/
You’re preaching to the choir on improving validation. Like you mentioned, in commercial designs that’s where the majority of the man-hours actually go, and coincidentally where basically none go in academia, where Chisel is popular.
I will mention, though, that not all IP needs maximum PPA or specific control of the netlist, and there could be initial opportunities there. Especially with the cost per area on many nodes going down significantly, it can be more economical, at some volumes, to save man-hours with HLS even if it leads to larger area, which in theory means more production cost but in reality might not even change the floorplan.
I worked at a startup that created a DSL to generate Verilog. We had the designers do unit testing in the DSL, but “sign-off” verification was still UVM with the generated Verilog. When a hidden bug costs millions of dollars, you can’t take the risk.
Can’t one write tests in Python? Python seems like a much nicer language to use. Sadly, I found no library that would allow me to create an instance of a Verilog model from Python and control it (cocotb doesn’t allow this; it controls a model running in an external simulator).
I verified Verilog using the Vera language back in the day. The largest testbenches would take 5 seconds to compile to interpreted bytecode. Verification engineers rarely change the design, so keeping that part compiled is great for productivity. It’s really too bad that monolithic compilation is so dang slow.
It reveals that marcan is glossing over some things and forcefully pushing his way of doing things. No wonder the conversation with the maintainer isn't going forward.
He is talking as if these were private conversations, forgetting that all of this is completely public.
I admire his efforts in bringing up Linux on Apple Silicon platforms, but this attitude of throwing a tantrum on Twitter is just sad and not very professional. And I bet a lot of junior developers use him as a role model, which is even sadder, as this kind of attitude and snarkiness just keeps propagating.
He's always been a little like this. Fits the hot-blooded Spaniard stereotype quite well. He'll get pissed off about something, fly off the handle, and then calm down eventually. In the meantime he may churn out some very cool stuff, because he actually is very technically talented.
I wouldn't call it the "hot-blooded Spaniard stereotype"; it's more like the general snarkiness and arrogance that emanates from this type of social network.
If you feel the need to escalate your view to Twitter and turn it into a one-sided story, then maybe there is a chance you are missing something.
Since you already know about Nitter, I'd like to point out that you can simply replace "twitter.com" with "nitter.ca" in the Twitter link and it will open the thread just fine.
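If you do this often, the rewrite is just a string swap; for example (a throwaway Python one-liner, the URL is made up):

```python
# Trivial helper: rewrite a Twitter status link to point at Nitter.
def to_nitter(url: str) -> str:
    # Replace only the first occurrence, i.e. the hostname.
    return url.replace("twitter.com", "nitter.ca", 1)

print(to_nitter("https://twitter.com/example/status/123"))
# → https://nitter.ca/example/status/123
```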
The Portuguese implementation of this is quite poor and largely unused. They have started to improve it recently, but nothing beats Sweden's BankID implementation; it is definitely a killer feature.
I agree that HN goes overboard about Google Reader, but I can also share the sentiment.
You are right that there are many alternatives to Google Reader, even better ones, you could say. But I am fond not just of Google Reader itself; I am also fond of the better way we consumed news back then.
When Google Reader disappeared, it left a hole in news consumption that got filled by Google+, Twitter, and Facebook. Media outlets became obsessed with sharing news articles on social media, fighting for "likes", "+1s", and "retweets".
Google Reader provided a simple way of having your news centralized in one snappy service, with a good UI, without any ads or "smart suggestions", and without your entire social graph embedded in it. It was the way of consuming news for people who actually wanted to be informed.
And the best part: you could actually subscribe to other people's favorite feeds. It was kind of hidden, there was no dedicated "find friends" button or anything like that; you had to go out of your way and ask someone "Can I have the link to your RSS feed for your saved items?" in order to "add them" on Google Reader. And you could actually comment on their saved items.
I miss those times; I was an actual news junkie back then because of Google Reader. I was shown what I wanted to be shown, with no social crap or "hot articles" thrown in my face. I slowly lost interest in consuming news after that.
I have this experience more with HN than with Reddit. Granted, the popular subreddits are quite overloaded with "normies", but I still visit lesser-known communities that keep their spirit and a solid user base.
HN, on the other hand, suffers from having just "one subreddit", and it has become quite bad over the past few years.
I consistently have intelligent discussions on HN. Disagreement is tolerated and hashed out. This does NOT happen on Reddit now. Reddit is unambiguously worse in that aspect.
While Element.io still has some UX points to improve on, I wish people would be a bit more honest about the state of Matrix clients and less fatalistic.
There are alternative Matrix clients that offer an experience closer to WhatsApp than Element does, like FluffyChat.
I have been running my own Matrix instance with federation on my $20/month Linode VPS for five years now.
I use it with some friends and family, and I'm also joined to massive channels like the official "Element Android" and "Element Web/Desktop" rooms.
And it runs... just fine. The server can be a bit performance-hungry, but I also host plenty of other services on that VPS (email, an HTTP server, Seafile, a VPN, etc.) and I have never noticed any degradation in any of them.
They have been making a lot of progress on Synapse's performance, and it is very usable now. It is not difficult to configure at all; it's easier than configuring Prosody.
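For a sense of scale, a bare-bones Synapse homeserver.yaml is short. This is only a sketch: the domain, ports, and database credentials are made up, and a real deployment still needs TLS termination (e.g. a reverse proxy in front):

```yaml
# Illustrative minimal homeserver.yaml fragment (values are made up).
server_name: "example.org"            # the domain in your Matrix IDs
pid_file: /var/run/matrix-synapse.pid
listeners:
  - port: 8008
    type: http
    tls: false
    x_forwarded: true                 # sitting behind a reverse proxy
    resources:
      - names: [client, federation]
database:
  name: psycopg2                      # PostgreSQL; SQLite works for tiny setups
  args:
    user: synapse
    password: changeme
    database: synapse
    host: localhost
```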
I would say the bottleneck now is improving the UX and VoIP capabilities of the clients.