It's also what you get from a reactor with no containment dome, unlike every modern reactor.

On top of that, the Chernobyl reactor had a strong positive feedback: as the temperature went up, the reaction sped up. Modern reactors do the opposite.



Only up to a point. Modern designs tend to be short-term passively safe, but shut off the pumps and radioactive decay alone can be enough to eventually cause a meltdown in many modern designs. That's the core issue: there's a huge cost trade-off in protecting against every possible issue, no matter how remote.

Spent fuel pools are probably the greatest example of this. They haven’t caused a major issue yet but they’re potentially a much larger risk than the actual reactor.


The Soviets were the only ones that built reactors that way. As in, having the ability to blow up.

> Spent fuel pools are probably the greatest example of this. They haven’t caused a major issue yet but they’re potentially a much larger risk than the actual reactor.

I think you're also overestimating the danger here. We've been operating over 500 reactors for over 70 years. That's some pretty good statistical power.


Edit2: Blowing up isn't the only risk from a meltdown; a major meltdown on a river could contaminate millions of people's drinking water without any boom.

First, pools are shared between reactors, so there have probably only been around 200 ever built.

Also, the risk isn't simply in year X for pool Y; it's for every pool and every year. At best we can estimate that the risk of a pool over its lifetime is probably under 2%, and that the risk from all pools is also under 2% in any given year. The risk of any pool failing over the next 50 years is something we have very little hard data on. In terms of real-world data it could be 0.05% or 50%; we just don't have enough to validate it.

Edit: You can scroll through this list, but it looks like on average there are around 3 reactors per location. https://en.wikipedia.org/wiki/List_of_commercial_nuclear_rea...


> At best we can estimate that the risk of a pool over its lifetime is probably under 2%, and that the risk from all pools is also under 2% in any given year

How are you getting this number?

> The risk of any pool failing over the next 50 years is something we have very little hard data on

What are you talking about? We have plenty of data. We literally have decades of physical testing and hundreds of millions (if not billions) of dollars' worth of simulation testing. What do you think Sierra[0] (#3 supercomputer in the world) is doing all day? It is a classified machine at a DOE lab. Even a substantial portion of Summit does this same research (it also does a lot of climate research). But come on, 3 of the top 10 supercomputers are at US DOE labs, and the 3 exascale machines being built are also targeted for DOE labs (Aurora, Frontier, El Capitan (classified)). LLNL and ORNL spend significant resources on nuclear research.

[0] https://top500.org/system/179398/


“We've been operating over 500 reactors for over 70 years. That's some pretty good statistical power.”

I was correcting the statistical power comment. People estimate they're quite safe, but essentially we ran the test once without problems which doesn't say anything. 70 years ago there weren't 500 reactors, and even today there are fewer than 450 at probably under 200 sites. Thus the odds of any specific one having a problem next year are low, but the odds of any one of them having a problem next year are ~200x as high, and the odds of any of them having a problem in the next 50 years are ~50 * 200 times that. Or to put it another way, they could be more dangerous than nuclear reactors and we just wouldn't have enough data to separate the signal from random noise.
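To make the compounding explicit, here's a rough sketch (plain Python; the per-pool probability is made up purely for illustration, not a real estimate) of how a small per-pool annual risk multiplies across ~200 sites and ~50 years:

    # Hypothetical numbers, for illustration only.
    p_year = 0.0001   # assumed chance a given pool fails in a given year (0.01%)
    pools = 200       # rough count of spent fuel pool sites
    years = 50

    p_any_this_year = 1 - (1 - p_year) ** pools           # any pool, this year
    p_any_50_years = 1 - (1 - p_year) ** (pools * years)   # any pool, over 50 years

    print(p_any_this_year)   # ~0.02, about 200x the single-pool annual risk
    print(p_any_50_years)    # ~0.63, the per-year risk compounded over 50 years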

As to why these estimates might not be meaningful: they ignore things like active sabotage, which is why real-world data is more meaningful.


> but essentially we ran the test once without problems which doesn’t say anything.

No, we ran the test over a thousand times (including research reactors). We have been running tests for over 70 years.

Honestly, it just sounds like you don't know very much about nuclear physics, let alone nuclear reactor physics. Nor does it sound like you know much about statistics. You are being extremely confident while demonstrating a lack of knowledge.

Honestly, why talk with such confidence about a field you haven't worked in (even adjacent to) nor studied?


I am amused you think that.

Suppose you want to know how soon a CPU will fail. You can't just test 1 CPU for 1 month and say, well, each of its 1 billion transistors lasted 1 month, so the CPU will also last a long time. That logic obviously doesn't work because the part fails when any component fails.

You're trying to apply that same logic to say all spent fuel pools are safe over their lifetime because each individual spent fuel pool is unlikely to fail in a given year.

It's the same thing with nuclear reactors: each individual reactor is low risk, but build 1,000 of them and some will likely fail over their collective 50-year lifespan.


No, we are sampling a thousand different CPUs of about 30 varieties over 70 years. That's enough data to know on average how long a random CPU will last, to know how long any particular CPU will last, and to find trends in the longevity of newer CPUs over time.

You are using bad statistics. You are using bad science. And it is clear you don't know anything about nuclear fuel pools, reactors, nor nuclear waste management. Get off your high horse. If you want some books on these subjects I'm happy to recommend some but you got a lot of catching up to do before you have the right to act so cocky.


What are the odds that any spent nuclear fuel pool will fail in the next 50 years? That is exactly the number I referred to here, and you seem to think we know it: https://news.ycombinator.com/item?id=28898610

Now, you might want to use estimates for existing designs, but those don't cover the new designs that will be used over the next 50 years. So what exactly are your hard numbers based on? Effectively one test of roughly that length.


Eh, Fukushima had a western reactor design. And it sort of blew up.


After getting hit by a tsunami. And it was a 1970s design; modern designs would not have blown up, and in fact some other reactors in the area built ten years later faced the same challenges and did fine.

And even the one that blew up released very little radiation to the surrounding area. You'd get more of a dose living in Denver than in Fukushima.


Fukushima didn't blow up. There was no explosion. Taking a bulldozer to a building is very different than using explosives (especially nuclear explosives).


The reactor didn't blow up. The building around it, outside of the containment structure, did blow up because of a hydrogen leak. It wasn't actually a big problem by itself, but it looked pretty dramatic on TV, so it probably contributed to people's overreaction afterwards.


Fukushima weathered the worst earthquake AND tsunami in decades at the same time.

And we only had one direct fatality from radiation.


> Fukushima weathered the worst earthquake AND tsunami in decades at the same time.

To elaborate further, all the reactors at Fukushima were designed to handle any earthquake and tsunami that was thought possible in the region. The problem is that there was an earthquake larger than any in Japan's recorded history, and larger than they thought the fault was capable of producing. It surprises some people that we've learned a lot about earthquakes and faults within the last 40 years.



I'm not sure what you're trying to say. That bigger earthquakes can happen? Yeah. That's known. But there's also a maximum quake that a particular fault can generate. The Liquiñe-Ofqui fault is not the same fault line that caused the Tōhoku earthquake. You're comparing apples and oranges. It's like saying that Pompeii could happen in Arizona.


(me vaping some good shit)

If Yellowstone burps,

it maybe disturbs,

the Aspen Anomaly,

down to Lee's Ferry,

wiping downstream with the rubble of Glen Canyon Dam and Lake Powell,

it could be hell.

I'm just a stoner, you're bright and smart.

No reason to be so uptight, and judging hard.

(Where are my Mango and Passionfruit drops? Nothing compares! Munchy Munchy Yum Yum (Vanishing madly giggling into the off.)(Don't scoff!))


> shut off the pumps and radioactive decay alone can be enough to eventually cause a meltdown in many modern designs

The newest designs being built worldwide use natural circulation cooling and do not need cooling pumps in emergencies. Eventually the cooling pool needs to be refilled, but it's external to the containment pressure boundary, so you could refill it with a fire truck.


> so you could refill it with a fire truck.

Note you might need a lot of fire trucks, though. If decay heat is 0.2% of nominal power after a week, we need 0.8 mL per nameplate MW (thermal) per second... which doesn't sound bad, but a 2 GW reactor needs 5000 L fire trucks showing up every ~50 minutes, 24/7, after a week. This could be challenging depending upon the underlying emergency.

    ((0.002 megawatt) / ((2260 + 250) * (kilojoules / kg))) / (997 (kg / (m^3))) = 0.00079921038 liters per second
Assumption is that you're raising the temperature of the water 60C and then evaporating it, and that all the heat energy goes into the water. In the real world it can be expected to be slightly better than this, but not much...
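For anyone who wants to check, here's the same estimate as a small Python sketch under the same assumptions (60 C of sensible heating, full evaporation, and all decay heat going into the water):

    # Boil-off rate per nameplate MW(thermal) at 0.2% decay heat.
    decay_heat_w = 0.002e6          # 0.2% of 1 MW(thermal), in watts
    latent_heat_j_per_kg = 2260e3   # heat of vaporization of water
    sensible_heat_j_per_kg = 250e3  # ~60 C temperature rise before boiling
    density_kg_per_l = 0.997

    kg_per_s = decay_heat_w / (latent_heat_j_per_kg + sensible_heat_j_per_kg)
    litres_per_s = kg_per_s / density_kg_per_l
    print(litres_per_s)             # ~0.0008 L/s per nameplate MW(thermal)

    # Scale to a 2 GW(thermal) plant:
    print(litres_per_s * 2000 * 3600)  # ~5750 L/hour, i.e. a 5000 L truck every ~50 min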


You're off in your calculation by 3 orders of magnitude, you meant to write 0.002 gigawatt, not megawatt. Then yes, you get ~0.7 liters/second, and ~5000 L/hour for a 2 GW plant. However, in actual reality, the fire truck would just park there, put one end of the hose into the plentiful source of water that surely must be available next to a power plant and the other into the coolant inlet, and start its pump. Pumping 1 liter/second is well within the range of fire truck pump capabilities.


> You're off in your calculation by 3 orders of magnitude, you meant to write 0.002 gigawatt, not megawatt.

No... decay heat is about 0.2% of nameplate power after a week. So for each megawatt of nameplate thermal power, you need to get rid of 2 kilowatts of heat, or 0.8 mL/second of water boiling off. "If decay heat is 0.2% of nominal power after a week, we need 0.8 mL per nameplate MW (thermal) per second..." was quite clearly said.

> put one end of the hose into the plentiful source of water that surely must be available next to a power plant, and the other one into the entrance for coolant, and start its pump. Pumping 1 liter/second is quite in range of firetruck pumps abilities.

This is pretty optimistic in many disaster scenarios and doesn't apply to all plants.


That's a great assumption, unless loss of water is the underlying emergency. And to really mess things up, suppose you lose the water and the scram fails.

That risk can be minimized by placing them near large natural bodies of water, but not all of them are, e.g.: https://ejatlas.org/conflict/metsamor-nuclear-power-plant


Natural circulation gets heat from a reactor to X, but now you're dependent on X. This often seems like a trivial detail, but Fukushima failed 3 days after the earthquake.

The issue is you want several things from a passive system at the same time: don't lose heat in normal operation, quickly lose multiple GW of heat in an emergency and as much as 200+MW of heat for days after a shutdown. The obvious solution is to have a tank of water that boils if the reactor temperature gets too high, but now you need to keep that tank full.

Thus many designs result in a reactor that is passively safe for some number of hours and at risk after that. They describe this as a passively safe reactor even if it’s got external dependencies.


Nope.

Decay heat is below half a percent of operating power after about a day. So 200 MW of decay heat days later would mean a 40 GW (thermal) reactor.

That's about ten times larger than the largest reactors in existence today.

Also, watts measure power, not heat.


A watt is a joule per second, be that electricity, horsepower, or heat.

Passive systems can’t assume a successful shutdown.


I can't figure out what you're saying here. You're assuming that the control rods aren't in, days later, for the purpose of calculating heat days after shutdown?


I am saying passive safety can’t assume anything else worked.


If you're going to apply "passive safety" globally, sure.

But what we're talking about here is passive cooling system safety, not that the entire reactor is passively safe. The multiply-redundant shutdown systems suffice to end the chain reaction.

If the chain reaction ends, you're pretty much immediately down to decay heat of about 7% of full power -- so sure, a 1.5 GW reactor will still put out ~100 MW of decay heat. But this will rapidly fall off. After about an hour, it's more like 15 MW; after a day, 6 MW.

Your statement of "200MW of decay heat" days later assumes either a ridiculous initial condition (an implausibly large reactor) or assumes you still have an operating reactor, which... isn't decay heat anymore.
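For a rough picture of how fast decay heat falls off, the old Way-Wigner approximation gives numbers in this ballpark (sketch in Python; it's an empirical fit good to maybe a few tens of percent, and the one year of prior operation is an assumption):

    # Way-Wigner approximation for decay heat as a fraction of operating power.
    # t = seconds since shutdown, t_op = seconds of prior operation (assumed ~1 year).
    def decay_fraction(t, t_op=3.15e7):
        return 0.0622 * (t ** -0.2 - (t + t_op) ** -0.2)

    p0 = 1.5e9  # 1.5 GW(thermal), matching the example above
    for label, t in [("1 minute", 60), ("1 hour", 3600),
                     ("1 day", 86400), ("1 week", 604800)]:
        print(label, round(decay_fraction(t) * p0 / 1e6, 1), "MW")
    # Roughly: ~40 MW at 1 minute, ~15 MW at 1 hour, ~7 MW at 1 day, ~3.5 MW at 1 week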


There have been multiple cases where reactors haven’t fully shut down safely. Assuming a scram will 100% work every time in an emergency simply isn’t appropriate or realistic.

Also, Palo Verde Nuclear Generating Station has 3 different 4000MW thermal reactors, 7% of that is 280MW, though sure if everything shuts down properly it should hit ~15MW. Mitsubishi APWR is aiming for 4.5GW thermal in normal operation though some safety margin needs to be considered on top of that.


> There have been multiple cases where reactors haven’t fully shut down safely

There's Chernobyl, and a few cases where a scram was delayed by 15 minutes or less. How can this produce hundreds of megawatts days later?

> There have been multiple cases where reactors haven’t fully shut down safely. Assuming a scram will 100% work every time in an emergency simply isn’t appropriate or realistic.

Assuming that every system fully fails is unrealistic, too.

> 7% of that is 280MW,

You said hundreds of megawatts days later.

> Mitsubishi APWR is aiming for 4.5GW thermal in normal operation though some safety margin needs to be considered on top of that.

What, they're going to run it over nameplate for days straight? A small excursion over 4.5GW won't appreciably change the amount of power output days later. Now you're just being silly.


You're boiling a fixed pool of water. 15 minutes at nameplate capacity boils off more than 5 days of reserve at the expected 0.2% thermal output. The margins are often measured in minutes, not days, which is a long way from anything that could be called passively safe.
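Quick arithmetic on that, as a sketch (using the 0.2% decay heat figure from upthread):

    # Minutes of 0.2% decay heat that release the same energy as 15 minutes at full power.
    equivalent_minutes = 15 / 0.002
    print(equivalent_minutes / 60 / 24)   # ~5.2 days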

> Assuming that every system fully fails is unrealistic, too.

Not if you want to say your system is passively safe. I fully believe nuclear can be operated safely, but a huge part of that is acknowledging every possible failure mode rather than just saying unlikely means impossible.


I think you're trying really hard to salvage a point talking about hundreds of megawatts of "decay heat" days later.

An operating reactor isn't making "decay heat".

The claim made is that the cooling system is passively safe in shutdown. Fudging the amount of decay heat by a couple orders of magnitude, and then arguing about "what if it doesn't shut down" is a bogus argument.

Obviously if you cannot reduce a reactor below nameplate power indefinitely, you have a big problem. Thankfully, we have multiply-redundant protections against this in modern designs: redundant control rod assemblies, neutron poisoning, positive stability, etc. Other than Chernobyl (a clearly bad design), all cases of delayed shutdown experienced so far have been innocuous and we've learned a lot from them.


I can only assume my original point wasn't clear. The normal amount of decay heat is the best-case possibility and should be handled just fine by any reasonable design; I don't think there's any reason to assume a design has that kind of fatal flaw. "Quickly lose multiple GW of heat in an emergency and as much as 200+MW of heat for days after a shutdown" was in reference to something compounding the issue, of which there are two main possibilities: either it didn't shut down quickly, or it didn't shut down completely.

What I am objecting to is the assumption that safety systems can assume things are fine in an emergency. Chernobyl had multiple compounding issues, and many other accidents were less serious only because X and Y happened but Z didn't. Depending on such trends continuing results in a false sense of security.

A passively safe system doesn't mean there isn't damage. It's perfectly reasonable for a design to say that in the event of X, Y, and Z, stuff is going to break. Causing a billion dollars in damage is a perfectly reasonable trade-off; losing containment isn't.

PS: Part of that is acknowledging that bad designs are going to happen; we engineers are going to make mistakes, which means not all assumptions hold.


> it didn't shut down completely.

This would be the only possible explanation, and it is directly contradicted by calling it "decay heat".

It's pretty tricky to think of a scenario where you'd have 5-10% of nameplate days after attempted shutdown.

The worst incident where there was a failed shutdown -- other than Chernobyl -- that I'm aware of was a 1980 BWR incident.

* The reactor was at nearly no power except decay power for the entire duration of the incident: half the rods fully inserted.

* Manual remediation got all the rods in within 15 minutes.

* Last-ditch shutdown procedures, e.g. SLCS, were unnecessary because there was still sufficient control and rapid rampdown of reactor output.

* This is an old BWR design and...

* Procedures were updated and improved, and even with these old BWR designs we've had no subsequent incidents in 40 years.

Failure to shut down is indeed something really, really bad -- but insisting that cooling be designed to withstand this is a bit silly. Instead, we'd best design to avoid failures to shut down, excursions in power far over nameplate, etc., rather than insist cooling systems survive fundamentally unsurvivable events without any intervention. E.g. we don't criticize SL-1's cooling design for not surviving the excursion to 10,000x nameplate.


The watt is a measure of power: energy (including heat) per unit time.


One could use pumps for increased efficiency during normal operation, but the idea is that natural circulation should be able to remove all the heat if the reactor is scrammed. NuScale's design, for instance, only uses pumps for the steam generator; the rest is handled by natural circulation, and the reactor sits in a water pool that needs to be replenished after two weeks in case of a major accident.


No. Modern reactors use their pumps to keep the reactor going. Shut them off and the reactor shuts itself off.

They now all have passive safety systems that do not require power.


I know that molten salt reactors have a "salt plug" at the bottom of the tank that will melt if the temperature is too high, dumping the liquid fuel into a boron bath.

I think this kind of reactor is safe in a way that no modern reactor is: operators can remove all power and walk away in this shutdown state. This isn't possible with modern reactors, where roughly 6% of the heat they produce comes from daughter nuclei, and this decay heat requires cooling for months after a controlled shutdown.

I do agree, we have to build these safely, with every conceivable scenario considered, such that walking away is possible.

Converting to thorium fuel would also be far better, as it has only one naturally occurring isotope, so no isotopic enrichment is necessary (beyond high-purity smelting), and no centrifuges.



