
> Your laptop should be fully functional with a working power supply and either an ethernet port or USB port for connectivity. Age isn't a factor. We might modify your laptop to remove or power down the battery, wireless radios, etc. to ensure it can be used safely in the data center.

So they're going to open the laptop up and make hardware modifications to random laptops sent in? May as well have a VPS at that point.

A far better offering would have been pre-selected physical devices whose hardware quirks are already well known.


I had a similar problem with a ThinkPad, except it has more than one cell, at different capacities, that are switched between. The existing battery manager would not tell me which battery was actually in use, or whether there was still a secondary battery waiting to be discharged.

I wrote a quick, hacky X11 program that stays always visible and displays the information for any number of batteries: https://gitlab.com/danbarry16/bat_mon

It ends up being 50kB with minimal optimization and sports a lightweight X11 library (for the GUI) and a JSON parser (for configuration).
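
For anyone curious where the numbers come from: on Linux the per-battery data is exposed under /sys/class/power_supply/. A minimal sketch of reading it (illustrative only, not bat_mon's actual code):

    /* Sketch: read charge and status for each battery via sysfs. */
    #include <stdio.h>
    #include <string.h>

    static void read_field(const char *bat, const char *field,
                           char *out, size_t len) {
      char path[128];
      snprintf(path, sizeof(path), "/sys/class/power_supply/%s/%s", bat, field);
      out[0] = '\0';
      FILE *f = fopen(path, "r");
      if (f) {
        if (fgets(out, (int)len, f))
          out[strcspn(out, "\n")] = '\0'; /* strip trailing newline */
        fclose(f);
      }
    }

    int main(void) {
      const char *bats[] = {"BAT0", "BAT1"}; /* many ThinkPads expose two */
      for (int i = 0; i < 2; i++) {
        char cap[16], status[32];
        read_field(bats[i], "capacity", cap, sizeof(cap));
        read_field(bats[i], "status", status, sizeof(status));
        if (cap[0])
          printf("%s: %s%% (%s)\n", bats[i], cap, status);
      }
      return 0;
    }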


What I'm looking for is a differential signal tester, where you can break out any arbitrary cable or traces and test the properties of the wires at different frequencies. It should be able to measure interesting properties such as resistance, capacitance, inductance, phase/length difference, wire length, etc.

One of these devices for approximately $100 would sell all day long.
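
For the wire length measurement specifically, the usual trick is time-domain reflectometry: time a pulse's reflection and halve it. A sketch of the arithmetic, with an assumed (not measured) velocity factor:

    /* TDR length estimate: length = (velocity * round-trip time) / 2.
     * The velocity factor (~0.66 for solid-PE coax) is cable-dependent. */
    #include <stdio.h>

    int main(void) {
      const double c = 299792458.0; /* speed of light, m/s */
      const double vf = 0.66;       /* assumed velocity factor */
      const double t_rt = 100e-9;   /* example round-trip time, s */
      printf("Estimated length: %.2f m\n", (c * vf * t_rt) / 2.0);
      return 0;
    }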


You can do that with a nanoVNA, except for the differential part. Less than $100.


> Those with a bit of silicon savvy would note that it’s not cheap to produce such a chip, yet, I have not raised a dollar of venture capital. I’m also not independently wealthy. So how is this possible?

What kind of order of magnitude of cost are we talking about?

What are the next steps - is there some service to cut the wafer and put it into a package for you?


The masks alone are single-digit millions, but with all the design tools and staff costs included, tens of millions is the typical benchmark for a tape-out in this node.

After coming out of the fab, the chips go through probing, packaging and reeling.


> The masks alone are single digit millions,

Ah, another reason why hardware errata get fixed so rarely (I assume, along with retesting, of course).


Yes, exactly. A lot depends on your expected volume. Essentially, masks are your fixed tooling cost for chips; you then amortize that over your full volume. It's easier to justify another mask set to fix bugs if you are going to be selling oodles of chips, where the cost ends up being negligible, and much harder to justify if the volume is low.

Years ago, I was CTO at a startup when our chips came back from the fab. Everything looked good except for a silly error that our chief architect had made. He felt horrible for a couple of weeks. He was a great architect (meticulous and precise) and I kept telling him that it was no use crying over spilled milk. Engineering is hard. But there went another few million dollars of precious venture capital up in smoke for the replacement mask sets.
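
To put hypothetical numbers on the amortization point (the $5M mask set below is an assumption for illustration, not a figure from this thread):

    /* Hypothetical: amortizing a $5M mask set over different volumes. */
    #include <stdio.h>

    int main(void) {
      const double mask_cost = 5e6;              /* assumed mask set cost */
      const long volumes[] = {100000, 10000000}; /* low vs. high volume */
      for (int i = 0; i < 2; i++)
        printf("%ld chips -> $%.2f of mask cost per chip\n",
               volumes[i], mask_cost / (double)volumes[i]);
      return 0;
    }

At fifty cents per chip a respin is noise; at fifty dollars per chip it can sink the product.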


I knew the masks were expensive, but not that expensive. Of course, it's all a question of the total quantity you produce with that mask set, but still...


It all depends on the node. Masks at 130nm are maybe in the tens to hundreds of thousands of dollars; masks for the latest TSMC nodes might cost you $30-40 million per set. The masks are pretty much a modern marvel in their own right - I'd wager they are among the most precisely manufactured human objects in existence.


Most chips have basically one revision after first tapeout, because it's hard to get everything right the first time. Small revisions can sometimes be done in the metal layers only, which is cheaper.


Can you share something about the subsequent per-chip manufacturing costs?


Rule of thumb is that a processed wafer on 28nm and older nodes is around $3k, and the cost goes up kind of exponentially towards the smaller nodes. Also, in general, the fab wants you to order a FOUP of wafers at a time - that's 25 wafers at a go.
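
For a feel of the per-die cost at that wafer price, the standard dies-per-wafer approximation can be used; the die size below is an arbitrary example, and yield is ignored:

    /* Rough per-die cost, using the common approximation:
     * dies = pi*(d/2)^2/A - pi*d/sqrt(2*A). Yield and test cost ignored. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
      const double pi = 3.14159265358979;
      const double wafer_cost = 3000.0; /* USD, per the rule of thumb above */
      const double d = 300.0;           /* wafer diameter, mm */
      const double A = 25.0;            /* assumed die area, mm^2 (5x5mm) */
      double dies = pi * (d / 2) * (d / 2) / A - pi * d / sqrt(2 * A);
      printf("~%.0f dies/wafer -> ~$%.2f per die\n", dies, wafer_cost / dies);
      return 0;
    }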


So a little out of the budget of a hobbyist!

Is there a service to get on a FOUP with a group of people? I know of Tiny Tapeout [1], for example, but I'm wondering where you might explore for larger designs.

[1] https://tinytapeout.com/


https://web.archive.org/web/20260312130613/https://www.marke...

^ In case the link also responds with this for you:

    Access Denied

    You don't have permission to access "http://www.marketscreener.com/news/us-private-credit-defaults-hit-record-9-2-in-2025-fitch-says-ce7e5fd8df8fff2d" on this server.


Building a gigawatt AI data center in 2025 is reported to cost $35bn [1]. If you're not going to build it to top spec, why even bother? Given that this is the UK, once all the bureaucracy is done, it'll be at least 50% more expensive.

A large business is estimated to use 50MWh a year at a cost of £14,706 [2], i.e. roughly £294/MWh. Electricity alone will cost well in excess of £300k per year, not that the grid has that in spare capacity [3]. It's completely at odds with their green energy campaign.
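
Naively scaling the cited business tariff to a full-load gigawatt facility gives a sense of the order of magnitude (real contracts and load factors will differ):

    /* Back-of-envelope: annual electricity for a 1GW facility at the
     * cited business tariff (14,706 GBP per 50MWh, i.e. ~294 GBP/MWh). */
    #include <stdio.h>

    int main(void) {
      const double tariff = 14706.0 / 50.0; /* GBP per MWh, from [2] */
      const double power_mw = 1000.0;       /* 1GW facility */
      const double hours = 24.0 * 365.0;    /* full-load year */
      printf("~GBP %.1f billion per year\n", tariff * power_mw * hours / 1e9);
      return 0;
    }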

Then, they don't even have any kind of contract actually in place:

> Asked about the terms of the contract that Nscale had signed to build the supercomputer by the end of this year, the government did not reply directly. Instead, it said that Nscale’s entire $2.5bn investment was “not a formal contract, rather an intention to commit capital”, and “may well include equipment and capital funding”.

There's not enough serious capital invested to get this off the ground (or even to break ground, seemingly). And then there are basic questions, like:

1. Why would you build a data center that is supposed to create tonnes of jobs in a location where it costs a lot to employ people?

2. Why would you outsource your data center if you live in the US or EU, when there are better options available locally? These data centers sure as hell won't be used by British companies because the government are crushing them with tax.

3. The energy cost is far too high compared to locations with nuclear or hydroelectric generation.

This whole thing stinks. I think it's a complete and utter lie.

[1] https://uk.investing.com/news/stock-market-news/how-much-doe...

[2] https://www.moneysupermarket.com/gas-and-electricity/busines...

[3] https://watt-logic.com/2025/01/09/blackouts-near-miss-in-tig...


Yesterday - The start (rendering) of a basic voxel editor for generating OBJ and STL files with just the keyboard. It turns out that to solve 95% of my 3D modelling needs, I likely just need cubes.

Today - Parsing a website's HTML (lots of pages, lots of links) to update an RSS feed that accepts filters. Rather than manually checking a website and losing track of what I have or haven't reviewed, the idea is to feed it into an RSS aggregator.


> A Bayesian decision-theoretic agent needs explicit utility functions, cost models, prior distributions, and a formal description of the action space. Every assumption must be stated. Every trade-off must be quantified. This is intellectually honest and practically gruelling. Getting the utility function wrong doesn’t just give you a bad answer; it gives you a confidently optimal answer to the wrong question.

I was talking somebody through Bayesian updates the other day. The problem is that if you mess up any part of it, in any way, the result can be complete garbage. Meanwhile, if you throw a neural network at the problem, it handles noise much better.
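
A toy example of that fragility (numbers made up): a 1% prior and a 90%-sensitive test, where the false-positive rate is either correctly specified as 9% or mis-specified as 1%:

    /* Toy Bayes update: P(H|E) = P(E|H)P(H) / P(E).
     * A small absolute error in one input swings the answer wildly. */
    #include <stdio.h>

    static double posterior(double prior, double tpr, double fpr) {
      double evidence = tpr * prior + fpr * (1.0 - prior);
      return tpr * prior / evidence;
    }

    int main(void) {
      double prior = 0.01, tpr = 0.90;
      printf("fpr=0.09 -> posterior %.3f\n", posterior(prior, tpr, 0.09));
      printf("fpr=0.01 -> posterior %.3f\n", posterior(prior, tpr, 0.01));
      return 0;
    }

Getting the false-positive rate wrong by eight points moves the posterior from ~9% to ~48%.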

> Deep learning’s convenience advantage is the same phenomenon at larger scale. Why specify a prior when you can train on a million examples? Why model uncertainty when you can just make the network bigger? The answers to these questions are good answers, but they require you to care about things the market doesn’t always reward.

The answer seems simple to me - sometimes getting an answer is not enough; you need to understand how the answer was reached. In the age of hallucinations, one can appreciate approaches where hallucinations are impossible.


It was either PCBWay or JLCPCB, but they had a "review window" in which it was possible to make changes or cancel an order. They recently switched this to an automated review, so there was no longer an opportunity for corrections. It could be that the card companies blacklisted them after people started cancelling orders through their credit cards instead, because the UI stopped supporting the feature.


Ordered from JLC a few weeks ago; their "review" is still manual. You can select a "confirm production file" option to get a second chance too.


Don't they have a rather exhaustive self-service review UI on submission? Allowing people to cancel after they already verified exactly what they were getting seems a bit excessive, no?


I will review a board a dozen times, find no issues, submit it for production, then 5 minutes later discover an obvious bug. Characterize it how you will, the cancellation window has saved my bacon more than once.


Exactly this. You calm down, take a cup of coffee, marvel at your beautiful design, and then spot something out of place. The actual review has saved me a few times too, for example: "are you sure there are no copper layers in your PCB design?" - Doh! A few times they have raised issues regarding the limitations of their manufacturing capabilities, and this too has saved time and cost.


> In the last week we received ~470000 crash reports, these do not represent all crashes because it's an opt-in system, the real number of crashes will be several times larger.

470k crash reports in a single week, and that's under-reported! I bet the real number of crashes is far higher. My snap-installed Firefox on Ubuntu would lock up, forcing me to kill it from the system monitor, and that was never reported as a crash.

Once upon a time I wrote software for safety-critical systems in C/C++, where the code was deployed and expected to work for 10 years (or more) and to interact with systems not yet built. Our system could lose power at any time (no battery), with at best 1ms of warning.

Even if Firefox moves to Rust, that will not resolve these issues. Around 5% of their crashes could be coming from resource exhaustion, likely mostly RAM - why is availability not being checked prior to allocation? Those crashes could be resolved tomorrow by simply checking how much RAM is available before trying to allocate it. That accounts for ~23k crashes a week. Madness.
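
In C terms, the cheap version of this is handling allocation failure instead of assuming success; on Linux you can also ask the kernel up front. A sketch (the availability check is Linux-specific and inherently racy):

    /* Sketch: degrade gracefully instead of crashing on allocation failure. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/sysinfo.h>

    void *careful_alloc(size_t bytes) {
      struct sysinfo si;
      if (sysinfo(&si) == 0 &&
          (unsigned long long)si.freeram * si.mem_unit < bytes)
        return NULL; /* caller can shed caches or fall back, not crash */
      return malloc(bytes); /* still must be checked: can return NULL */
    }

    int main(void) {
      void *p = careful_alloc((size_t)8 << 30); /* try 8GB (64-bit assumed) */
      if (!p)
        fprintf(stderr, "allocation refused, degrading gracefully\n");
      else
        free(p);
      return 0;
    }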

With the RAM shortage, and 8GB looking like it will remain the entry-level laptop norm, we need to start thinking more carefully about how software is developed.

