Friday, July 5, 2019

The dangerous use of Machine Learning, Neural Networks & Artificial Intelligence in Nuclear Power Plants and Power Grids

July 5th, 2019

Dear Readers,

Here are some things you might keep in mind while reading the following newsletter:

1) Any debate about Artificial Intelligence (AI) vs. human judgement underscores the vital importance of including the public in all key decisions of existential significance. Neither computers nor robots, benevolent kings nor cruel dictators, government toadies nor gods and goddesses should make our decisions for us.

2) The continued existence of nuclear power and weapons raises moral and ethical issues that require the public's full consent. The public must know and understand what the benefits (if any) might be, what alternatives exist, and most importantly, what the risks and responsibilities are for themselves and for future generations.

3) Already, self-driving cars are generally far safer per mile than the average driver. But AI software is notoriously tricky to perfect. The software implicated in the two Boeing 737 Max 8 crashes (known as MCAS) was essentially "flying blind" when the crashes occurred. Its input came from an angle-of-attack sensor on the outside of the plane, which was either broken off (perhaps by a bird strike) or non-functional for some other reason. The software was not designed to handle that situation. In both crashes the human pilots were trying to override the faulty software, but what they actually needed to do was shut off the automated sub-system entirely. Unfortunately the pilots had not been trained to do that (all it took was flipping a switch in the cockpit), and apparently Boeing had not even tested that situation with its own test pilots on its own simulators!

4) Nuclear's threat to the public is the direct result of excluding the public from virtually all key decisions regarding the industry. The nuclear industry and the government have a tradition of lying to the public (and often to themselves) about nuclear issues, on everything from the severity and frequency of past accidents, to the likelihood that a safe solution to the problem of nuclear waste will ever be found, to the toxicity of radioactive exposures, and to whatever suits their fancy at the moment. The nuclear regulators are particularly at fault because they literally ignore reality. For example, they assume that all construction work is actually done "to code", that all buildings are as earthquake-resistant as designed (or even more so), that all earthquakes will NOT exceed the construction design basis, that all reactor pressure vessels have no weak spots even after 60 or 80 years of operation at 2200 pounds per square inch of pressure and 600 degrees Fahrenheit, through dozens of hot-cold cycles, in a lethally irradiated environment.

5) Theoretically (that is, probably in some people's minds) AI could be used to help justify Small Modular Reactors (SMRs), because AI and SMRs both rely on a repetitive design/learn/test cycle -- yet nothing is more terrifying for the health of the planet than tens of thousands (let alone millions) of "small" nuclear reactors scattered everywhere across the earth, all subject to the industry's most infamous potential event: the Beyond Design Basis Accident -- the event they know is possible (however unlikely; the odds are always just a guess) but know they cannot possibly protect against.

6) Some of the suicidal attacks/mass murders mentioned in the newsletter below might actually be traced back to a class of antidepressants (SSRIs) that includes fluoxetine (Prozac), paroxetine (Paxil) and sertraline (Zoloft). I've seen firsthand the suicidal/homicidal behavior linked to these drugs. So have many others.

Ace Hoffman
Carlsbad, CA

============================================
The dangerous use of Machine Learning, Neural Networks & Artificial Intelligence in Nuclear Power Plants and Power Grids:
============================================

Did you know that if a nuclear reactor control room operator were to intentionally and suddenly flood the reactor core with cold water, the thermal shock would be very likely to crack the reactor pressure vessel, making it impossible to prevent a meltdown? Interesting, huh? There are, in fact, a wide variety of ways reactor operators can intentionally cause a catastrophic meltdown. Talk about "playing with fire": that's exactly what they do all day. And it's boring, most of the time.

So they could liven it up a little, like they did at Chernobyl -- not the exact same unauthorized "experiment" of course -- who would do THAT? -- besides, most reactor types are very different -- but other things.

Sure, they *could* do that -- but would they?

History does not bode well.

After all, everyone assumed -- the Federal Aviation Administration, its counterparts around the world, the airlines and the passengers -- everyone assumed that a commercial airline pilot would never lock the other pilot out of the cockpit, taking advantage of the impenetrable reinforced doors all commercial jets have had since 9/11, and then intentionally fly into a mountain because he was despondent over a relationship.

Nor, in another instance, would we have thought a commercial airline pilot would intentionally depressurize the cabin and then keep the aircraft at high altitude until the oxygen ran out and everyone was asphyxiated, and the plane ran out of fuel and crashed into the sea without a trace -- because he was despondent over a relationship. (Details on that one remain sketchy, but that's probably more or less what happened.)

And who would have thought a depressed army veteran would steal a 57-ton tank from a National Guard armory and drive it around for more than 20 minutes, crushing cars and campers and knocking over fire hydrants (miraculously not killing anyone in the process), but forget to lock the hatch -- until finally a police officer was able to climb on top, open the hatch, and end it?

Nor would we have thought a military pilot would fly a multi-million-dollar jet into the mountains of Colorado while carrying live ammunition for the first time, including four 500-pound Mk-82 bombs (which are still missing...).

No, we wouldn't have thought any of that would happen, but it all did. And America loses nearly 40,000 men, women and children every year to gun violence, and 35,000 more to automobile accidents -- mostly preventable if people would pay more attention or, better yet, leave most of the task of driving to the new -- and constantly improving -- autopilot systems. And some day soon the autopilot systems will talk to each other, adding yet another level of safety humans can't match.

Wondering what the operators might do wrong -- on purpose or, far more likely, by accident -- in the control room of a nuclear reactor is the main reason reactor utility companies have endeavored, over the past few decades, to increase automation in reactor control rooms. Because reactor operators, like drivers, make mistakes.

In the airline industry, there's talk of making pilots merely overseers, with no real actions required at all during the flight -- and then, after that, getting rid of them altogether.

Fighter pilots are on their way out too, because remote-controlled drones don't have to carry the pilot and his equipment, and don't have to limit their g-forces to what a human can tolerate. All other things being equal, drones can turn tighter, fly faster, and stay aloft longer than human-piloted aircraft. They can carry bigger guns, more fuel, and more weapons.

Autopilots for automobiles can, will, and already ARE cutting down on car accidents and the annual death toll, not to mention the hundreds of thousands of horrific (but not life-threatening) injuries. Mile for mile, destination for destination, autopilots in cars are already proving to be as much as an order of magnitude safer than human drivers. Safer for the occupants. Safer for bicyclists. Safer for pedestrians.

In fact, there will come a time when having a fatal accident while driving completely sober but without autopilot engaged will be considered a serious criminal offense (barring some extraordinary extenuating circumstances): involuntary manslaughter.

But there's a huge difference between building an autopilot for a car, or even an airplane, and building one for a nuclear power plant. It's all about statistical sampling, machine learning, neural networks and AI, but it comes down to this:

When a large auto company builds an autopilot, it gets installed in millions of cars, which are then driven billions of miles all over the world by millions of people in all sorts of conditions. All this happens before full autopilot capabilities are ever turned on in a single car -- before the software algorithms even fully exist, because the algorithms are revised as the data comes in.

At first, any new "autopilot" system mainly just watches what humans do, and notes which external events cause which responses in those humans.

Early on, drivers with new autopilot systems might only get some form of "driver assist," using a more limited version of the software. Meanwhile the autopilot in each car learns from each event and, more importantly, sends new event data to a master program, which analyzes it, learns from it, and distributes the new information to all the cars. (A rough sketch of that loop follows.)
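To make that loop concrete, here is a minimal sketch in Python. Every name in it (CarClient, FleetServer, train_model) is my own hypothetical illustration; real fleet-learning systems are proprietary and vastly more complex.

    # A toy sketch of fleet learning: each car records "shadow mode"
    # disagreements between the human driver and the model, a central
    # server aggregates them, retrains, and pushes the update back out.
    # All names and logic here are hypothetical illustrations.

    def train_model(events):
        # Stand-in for a real training job; returns a trivial "model".
        return {"trained_on_events": len(events)}

    class CarClient:
        """Runs in each car: watches the human, compares to the model."""
        def __init__(self):
            self.events = []
            self.model = None

        def observe(self, sensor_frame, human_action, model_action):
            # Record only the interesting cases: where the model would
            # have acted differently than the human actually did.
            if human_action != model_action:
                self.events.append((sensor_frame, human_action))

    class FleetServer:
        """Central trainer: aggregates events, retrains, redistributes."""
        def __init__(self):
            self.training_data = []

        def collect(self, car):
            self.training_data.extend(car.events)
            car.events.clear()

        def retrain_and_broadcast(self, fleet):
            model = train_model(self.training_data)
            for car in fleet:
                car.model = model  # every car gets what any one car learned

    # Example: one car uploads one disagreement; the whole fleet updates.
    fleet = [CarClient(), CarClient()]
    fleet[0].observe("frame-001", human_action="brake", model_action="coast")
    server = FleetServer()
    for car in fleet:
        server.collect(car)
    server.retrain_and_broadcast(fleet)
    print(fleet[1].model)  # -> {'trained_on_events': 1}

The key design point is in the last line: a mistake observed by one car improves every car in the fleet, which is exactly the kind of mass experience a nuclear plant can never accumulate.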

Aircraft autopilot systems (where the term originated, of course) are similarly tested over millions of miles, by thousands of pilots in thousands of airplanes. Modern aircraft autopilots are the descendants of several decades of experience, over hundreds of billions of miles. To gather even more data, modern airlines contract with third-party companies to collect data about pilot behavior across airlines (known as "the spy in the cockpit"). This information is a treasure trove for AI-based autopilot systems.

Accidents are also used to improve autopilots. Consider the Boeing Max 8 disasters: two aircraft experiencing the exact same catastrophic failure of an automated system that was designed merely to make the plane handle like an earlier model of the same aircraft, so that pilots would not have to be retrained. Of all the dumb reasons to lose an airplane with all its passengers and crew -- not once, but twice! The automated system didn't have nearly enough real-world testing to verify that it would work in all possible situations, including bird strikes or anything else preventing correct signals from reaching the controller from the sensors. And it wasn't even true "AI": if it were, the lessons from the first crash would have been analyzed by the software itself and pushed out to every plane, which would have prevented the second crash.
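As an illustration of the kind of safeguard that was missing, here is a minimal sketch of cross-checking two redundant angle-of-attack sensors before letting an automated system act on them. The function name and the thresholds are my own illustrative assumptions, not Boeing's actual design.

    # A toy sketch of sensor cross-checking: only trust the
    # angle-of-attack (AoA) input if both redundant vanes give
    # physically plausible values that roughly agree. Names and
    # thresholds are hypothetical illustrations.

    def aoa_input_valid(left_deg, right_deg, max_disagreement=5.5):
        """Return True only if both AoA readings can be trusted."""
        for reading in (left_deg, right_deg):
            if not -20.0 <= reading <= 40.0:
                return False  # outside any plausible flight attitude
        return abs(left_deg - right_deg) <= max_disagreement

    left, right = 4.2, 74.5  # one vane damaged: a wildly high reading
    if not aoa_input_valid(left, right):
        print("AoA sensors disagree: disengage automation, alert pilots")

A check this simple -- refuse to act on a single implausible sensor, hand control back to the humans, and say so loudly -- is the difference between automation that fails safe and automation that flies a plane into the ground.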

Artificial-intelligence-based automated systems developed using machine learning and neural networks require mountains of real-world data for the system to "train" itself to respond properly.

The results can be dramatic, though. Neural-network-based cancer detection software can significantly outperform highly trained human professionals, and it is getting better all the time. In fact, it is only a matter of time -- and hopefully not much at that -- before AI review is a legal requirement for every X-ray, CT, PET and the many other tests to which AI analysis can be applied. The AI review will happen virtually instantly, at the moment the image is scanned. And it will mean better health care for all.

Which brings us back to nuclear power plants, and crazed or erroneous human operators. And to automating nuclear reactor control rooms.

Let's start with automating nuclear reactor control rooms.

There are fewer than 100 operating reactors in the United States, a number that is certain to continue to drop significantly over the next few years. There are two main types (about two-thirds are Pressurized Water Reactors; the rest are Boiling Water Reactors). There are numerous different models within these two types, and each reactor differs significantly even from one that might be considered its "twin." As a result, the kind of learning from mass experience that is rapidly improving automotive autopilots is impossible.

Most of the time, reactors aren't supposed to require much attention. That's the whole idea, really: no one has to actually "control" the reactor under normal circumstances, because reactors have a variety of self-limiting features. For example, water is a neutron moderator in both PWRs and BWRs (both types are called "light water reactors"). Water serves two functions in light water reactors: it carries away heat, and at the same time it slows neutrons down so that other uranium atoms can capture them. As water heats up it expands, which reduces its density, so there are fewer water molecules between the fuel rods inside the reactor. Fewer neutrons are slowed down enough to be captured by other uranium atoms, so the chain reaction decreases and the water temperature drops back down.

Of course, it doesn't always work that way (if, for example, leaks or pump failures are also involved), but that's the general idea of one of the "self-limiting" controls of a typical nuclear reactor.
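For readers who like to see the feedback loop spelled out, here is a toy numerical sketch of that self-limiting behavior. Every coefficient is invented purely for illustration; this is nothing like a real neutron-kinetics model of any actual reactor.

    # Toy model of the "self-limiting" behavior described above.
    # All numbers are invented for illustration only.

    ALPHA = -0.002   # reactivity change per deg C above T_REF (negative!)
    T_REF = 300.0    # coolant temperature at which reactivity is zero

    power = 1.3      # start with power 30% above its steady state
    temp = 300.0     # coolant temperature, deg C

    for step in range(500):
        # Hotter water is less dense and moderates fewer neutrons,
        # so reactivity falls as the coolant heats up above T_REF.
        reactivity = ALPHA * (temp - T_REF)
        power *= 1.0 + reactivity
        # Coolant heats with power; the secondary loop carries heat away.
        temp += 2.0 * power - 0.2 * (temp - 290.0)

    print(f"power settled near {power:.3f}, coolant near {temp:.1f} C")
    # The initial excess power damps itself out: no operator action needed.

Because the feedback coefficient is negative, the power excursion dies away on its own -- which is the general idea of the paragraph above. (Chernobyl's RBMK design, infamously, had a regime where the equivalent coefficient was positive.)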

That and many other self-limiting features are all well and good when the reactor is running properly, which is most of the time. But sometimes things go wrong. Sometimes problems are handled by the control software, which might, for instance, automatically drive the control rods rapidly into the reactor core to stop the reaction. Sometimes a human decides to do this. If a "SCRAM" (as it's called) happens more than a couple of times a year, the Nuclear Regulatory Commission comes down on the reactor operator with... well, with increased scrutiny for a while. The number of SCRAMs a reactor experiences in its entire history is one of the considerations when the NRC grants a license renewal (although, admittedly, they've never turned one down yet, for any reason).
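Here is a minimal sketch of what that kind of automatic trip logic looks like in software. The parameter names and setpoints below are hypothetical illustrations, not any actual plant's values.

    # A toy sketch of automatic trip (SCRAM) logic: if any monitored
    # parameter crosses its setpoint, the rods go in, no debate.
    # All setpoints here are hypothetical illustrations.

    def should_scram(power_pct, pressure_psi, coolant_flow_pct):
        """Return True if any monitored parameter crosses a trip setpoint."""
        if power_pct > 109.0:          # overpower trip
            return True
        if pressure_psi > 2385.0:      # high-pressure trip
            return True
        if pressure_psi < 1865.0:      # low-pressure trip (possible leak)
            return True
        if coolant_flow_pct < 90.0:    # loss-of-flow trip
            return True
        return False

    # Example: pressure has sagged below the low-pressure setpoint.
    if should_scram(power_pct=102.0, pressure_psi=1790.0, coolant_flow_pct=98.0):
        print("SCRAM: drive all control rods fully into the core")

Note what this logic does not do: it does not wait to see whether the trend improves, and it does not weigh lost revenue. That distinction matters in the San Onofre story that follows.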

Sometimes reactor control room operators wait a little too long to SCRAM the reactor, when they should have known better -- which is to say, when an AI system should have known better, too. It took San Onofre's reactor operators about 20 minutes to decide to shut the plant down after it sprang the leak from which it never reopened.

Why did the operators wait 20 minutes to shut down the reactor? Because regulations didn't require shutting down until certain thresholds were reached. The leak wasn't growing fast enough, and wasn't large enough, to FORCE a shutdown -- until, after about 20 minutes, it was. So they waited, even though the trend was clear and the nearly-new replacement steam generators shouldn't have been leaking at all.

The actual size of the leak was only on the order of the eye of a needle. Sometimes tiny particles of "crud" circulating in the reactor coolant will plug up such a leak. This leak wasn't getting smaller; it was getting worse.

But time is money: every minute the reactor operates, the utility makes money. Every minute it doesn't, the utility loses money. So the operators waited as long as they could.

The wait could have been catastrophic. Yes, catastrophic, as in: the worst nuclear accident in the world, and certainly the worst in America. Because, unbeknownst to the reactor operators, the thin metal tubes inside the steam generators were vibrating furiously, in a cascading fashion that was liable to run away at any moment. That would have created so large a hole between the primary coolant loop and the secondary coolant loop that a meltdown would have been virtually inevitable.

A meltdown was narrowly avoided, but if the reactor operators hadn't been so reckless, southern California wouldn't have come so close to disaster. Several of the tubes inside the steam generator had vibrated so much that their dime-thin walls were nearly completely (>90%) worn away. Experts have calculated that if as few as two -- and possibly only one -- of the more than 9,000 tubes had completely broken apart, a "Loss of Coolant Accident" would have occurred. Besides the tube that leaked, numerous nearby tubes were also severely damaged.

The area where this writer lives, about 15 miles away, and everywhere closer, would have had to be permanently evacuated -- and probably far beyond as well.

Were those reactor operators as reckless as the ones at Chernobyl? I suppose not, but the leak might not have been the only related incident that could have caused a meltdown. There was also a failure to properly test the replacement steam generator system after installation. The utility skipped a whole round of "hot testing," which would have required bringing in outside heating elements to simulate reactor behavior. The vibration problems that eventually doomed the reactor might have shown up then.

But perhaps not, because it's possible the reactor operators were actually doing something they weren't supposed to do. The records of their precise behavior, from the day the new replacement steam generators were put in to the day they sprang a leak less than a year later, are incomplete: the utility claims the data are proprietary. But since they no longer own an operating reactor, and they nearly caused a meltdown among millions of people, I don't have the slightest idea what could possibly be "proprietary." I suspect instead that the information is damning and indicates actual criminal behavior:

Running unauthorized tests. Where have I heard that before?

Oh, nothing that might appear obviously crazy, but pushing the system a little bit here and there, to see how it would respond -- and apparently, it seemed to respond very well. They were getting more heat, and therefore more steam, and therefore more electricity, and therefore more money out of the new steam generators than they expected -- and possibly more than they were authorized to get. This is impossible to prove without the full record from the control room (and numerous experts to look at the data), but some of the data seems to indicate this might have been happening.

And other reports indicate that the vibration was actually detected and ignored: it's been rumored that the vibration of the new steam generators could be heard outside the containment dome, and that many workers had heard it. Again, nothing was done. They just kept running the reactor until it leaked -- and they would have restarted the other reactor, which also had new replacement steam generators, if they hadn't found unusual tube wear there too. The utility wanted to restart at least one of the reactors anyway, at about three-quarters power for six months, to see how many new leaks might form! All this without ever admitting that when they called the new steam generators "like for like" with the old ones, they were no more "like for like" than Boeing's new Max 8 jetliner was just like the previous model!

So yeah, a typical nuclear reactor operator might not do anything as crazy as what the Chernobyl reactor operators tried to do, or as crazy as flying a plane full of passengers or a military jet into a mountain, or stealing a tank...but crazy enough to cause a meltdown? Yeah, they might do that.

All this in order to make tens of thousands of tons of toxic nuclear waste that nobody knows how to take care of -- yet another example of the nuclear industry "flying blind".

Wind, solar and battery storage now offer reliable replacements at prices no other energy form can compete with. Nuclear power is as ancient as the Inquisition and the Salem Witch Trials, and just as useful.

Ace Hoffman
Carlsbad, California
Author, The Code Killers (free download from acehoffman.org)

The six biggest lies of the nuclear industry have been these:
1) Atoms for Peace
2) Too Cheap to Meter
3) A Little Radiation is Good For You
4) Carbon-Free Energy Source
5) A Solution to the Waste Problem is Just Around the Corner
6) Nuclear is a Reliable Baseload Energy Source
-----------------------------------------------------------------------