Monday, September 18, 2023

Is AI useful in the nuclear industry? Maybe "yes" and definitely "no"
by Ace Hoffman (Pictures by Ace too, but they were created long before this essay was written)

September 18, 2023

First the good news: AI really is incredible.

Last week I heard a NASA spokesperson put it nicely. She said AI helps find "the data inside the noise," the pattern "inside the wiggly line." AI is used to analyze long-timespan films of industrial machinery so that the subtle movements that cause stress cracking can be seen. AI can help identify weakening parts, or spot long-term trends that are hard for humans to notice. Great stuff, if it's used right. AI can be used to increase the reliability of pumps and pipes in a sewage treatment plant. Sure, why not?
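To make that "pattern inside the wiggly line" point concrete, here's a toy sketch -- my own illustration, not NASA's code, with all numbers invented: a plain least-squares fit pulls a slow mechanical drift out of sensor noise thousands of times larger than the drift itself.

```python
# Hypothetical illustration (not NASA's code): pulling a slow drift
# ("the data") out of sensor noise ("the wiggly line") with an ordinary
# least-squares fit. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(10_000.0)                 # ~14 months of hourly readings
drift = 0.00005 * hours                     # slow creep: 0.5 units per 10,000 hours
noise = rng.normal(0.0, 1.0, hours.size)    # noise ~20,000x the hourly creep
readings = 50.0 + drift + noise             # e.g., a strain or vibration gauge

# A human eyeballing the chart sees only the wiggle; the fit recovers
# the creep and its direction.
slope, intercept = np.polyfit(hours, readings, deg=1)
print(f"estimated drift: {slope:.6f} units/hour (true value: 0.000050)")

if slope > 1e-5:
    print("flag for inspection: sustained upward trend detected")
```

Nothing exotic -- and that's the point: for this kind of pattern-finding, the technology genuinely delivers.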

But will the same increase in reliability for the pumps and valves in a nuclear reactor actually **prevent** meltdowns? Or just prevent SOME meltdowns? Well of course it's only "some," not "all." It's not a miracle drug. If it were THAT smart, it would tell humans to stop using nuclear power altogether!

Aside: We live in a world that is far more dangerous than it needs to be. Take air travel, for instance. AI is taking over all sorts of functions in the cockpit, up to and including dogfights in the world's top fighter jets. It's easing the mental strain on the pilots. It's even removing the pilots entirely. In fact, for 99% of all commercial flights, what do we need pilots for at all?

The answer, of course, is: extreme or unusual situations. (Or a computer hardware malfunction, a communications malfunction, an equipment malfunction, a software malfunction (besides the AI software itself), etc.)

But to be available when needed, human pilots have to fly the planes themselves regularly, in order to be proficient when the "Sully" moment comes and you lose both engines flying out of a New York airport. Can AI help? Sure, but you know what would REALLY help? High-speed rail. Far safer than air travel, ESPECIALLY for innocent bystanders when planes fall out of the sky.

Nuclear power plants are NOT protected against large airplane strikes. And nuclear waste even less so. One of these days a terrible thing might happen. About a hundred large jets overfly San Onofre every day. Let's say the FAA manages to deploy AI software that alerts them instantly whenever a plane has been hijacked. Then how do they know if a nuclear power plant is being targeted? (Many hijacked planes have flown near nuclear reactors, of course, and at least one nuclear power facility has been threatened specifically (in the 1970s, if I recall correctly).) But let's say the FAA decides to call a reactor and "warn" them that they "might" be targeted. What would a human operator do? What SHOULD they do? What would an AI program do? How easily can an FAA operator even reach a nuclear reactor control room operator? And once the warning has worked its way through that maze of steps -- each of which could block it -- will whoever receives it take the most appropriate action? (SCRAM!)

We all use various forms of AI multiple times every day. And it helps tremendously.

But all that aside, one thing's for sure: Taking TWO things, neither of which works very well, and pairing them together is unlikely to yield a more positive result. Neither "nuclear" nor "AI" is a properly functioning technology (safe, reliable, etc.) -- and it's reasonable to assume that neither ever will be. Nuclear can NEVER be benign because it necessarily creates unmanageable waste streams and risks sudden catastrophic meltdowns; AI can never be benign because its mistakes can also cause real damage and, as described briefly below, it's "hit or miss" with no explanation of why it produces the results it gives.

(Further aside: During the Vietnam era the phrase "we had to destroy the town to save it" appeared. Perhaps AI will decide it has to cause a meltdown to prevent whatever it sees as otherwise unpreventable...)

I've always called "Artificial Intelligence" "Imitation Intelligence". I haven't changed my mind.

My wife and I have, together, more years in the computer industry than there are years in the computer industry (we started in our 20s, and are at 42 and 43 years in the industry respectively; the industry isn't yet 85 years old -- ENIAC was built late in WWII, less than 80 years ago).

Although neither of us has "officially" worked on AI development, we've certainly studied it, and having worked near and around it since its inception, we can make some qualified observations. In my wife's current job, people use it frequently -- for example, to write short code snippets or do research.

And our opinion of using it at nuclear power plants to help control the reactors? It's horrific!

The problem with AI is that it returns a result we have no way to confirm (no "provenance"), and that result is frequently wildly **not** what is needed or what will work in a particular situation. It's as if it forgot something obvious, you might say.

Modern chat AI, for example, simply grabs sources that seem related to the question asked of it, based on criteria such as word counts and word associations, and assembles a response from those sources, with apparently little regard for the quality of the sources. Humans try to ignore idiots. AI doesn't seem to know what an "idiot" is (perhaps because, in reality, it is one itself).
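As a caricature of that word-association retrieval, here's a toy scorer -- the snippets are invented, and real chat systems are far fancier, but the failure mode is recognizably the same: a source that parrots the question's words outranks a careful one that doesn't.

```python
# Toy caricature of retrieval by word association: rank "sources" purely
# by cosine similarity of word counts, with no notion of source quality.
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

question = "is the reactor coolant pump safe to run at full power"

sources = {
    "idiot":  "reactor coolant pump totally safe safe safe at full power trust me",
    "expert": "pump cavitation margins depend on inlet pressure and temperature",
}

for name, text in sources.items():
    print(f"{name}: {similarity(question, text):.2f}")
```

Run it and the "idiot" scores about 0.64 while the "expert" scores about 0.10 -- word overlap alone cannot tell repetition from expertise.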

When Google returns the results of a query, usually no one really cares if it misses the 10th most-important web page on the subject and the person doing the inquiry doesn't find the information they desperately need, right? That sort of thing happens all the time -- you refine your query and try again.

But with AI running the show at a nuclear power plant -- controlling the valves and pumps, reading the temperature gauges, calculating the internal flow rates, and deciding instant by instant whatever adjustments are needed -- well, that might work fine for 100 years...and like FSD (Full Self-Driving) it PROBABLY will be better at it than human control room operators.
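Here is roughly what the routine part of that job looks like, reduced to a deliberately naive toy -- the setpoint, gain, and one-line "plant physics" are all invented, and no real reactor runs on twenty lines of Python:

```python
# Deliberately naive sketch of the routine part of the job: read a gauge,
# nudge a valve, repeat. Everything here is invented; the point is that
# the loop looks flawless as long as the plant behaves as expected.
TARGET_C = 300.0        # made-up coolant temperature setpoint
BASE_VALVE = 0.25       # valve opening that balances heat input at the setpoint
GAIN = 0.1              # proportional gain, tuned on "normal" behavior only

def toy_plant(valve: float, temp: float) -> float:
    """Invented plant response: constant heat in, cooling scales with valve."""
    return temp + 1.0 - 4.0 * valve

temp, valve = 310.0, BASE_VALVE             # start ten degrees hot
for step in range(1000):
    error = temp - TARGET_C
    valve = min(1.0, max(0.0, BASE_VALVE + GAIN * error))  # hotter -> open valve
    temp = toy_plant(valve, temp)

print(f"after 1000 quiet steps: temp = {temp:.2f} C, valve = {valve:.2f}")
# Rock-steady at the setpoint -- yet nothing in this loop can represent
# a stuck sensor, a melting core, or an inbound aircraft.
```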

But will it be perfect? Not likely.

Will it be "programmed" (or "taught") to know what to do when a meltdown starts? (Side note: If it's "really" AI, it will throw up its electronic hands, say "I CAN'T DO THIS!", and refuse to touch a nuclear power plant at all; they are simply too dangerous under ALL circumstances. But if humans aren't going to be that smart, can we expect AI to be?)

AI software is usually "trained" on vast amounts of existing data. Other AI can continue to "grow": it can repeatedly go out on the Internet and get more current (but perhaps less accurate) data. Both are limited to what's available online at some point in time, not what's actually out there in the real world. A lot of past nuclear accidents are kept highly secret -- by the Nuclear Energy Institute, the Nuclear Regulatory Commission, the owner/operator, or even the employees involved. AI can't learn what it hasn't been exposed to. National borders block information exchange too: not just language barriers (which AI can -- sort of -- get around), but "proprietary" information and "NATSEC" information that is intentionally hidden and unavailable. ("National Security" is regularly used as an excuse to hide reliability problems, embrittlement issues, operator errors, etc. that occur with military reactors.)

Besides all that, there's this: Will the AI software care if it fails and actually CAUSES a meltdown? NO. NOT AT ALL. And you can't punish an AI program for its failure, either. What are you going to do, turn it off and turn it on again?!? Tracking down the problem is well-nigh impossible -- it's unlikely to be one line of code somewhere in the algorithm. AI's logic is, for all intents and purposes, encrypted -- and no one has the key. As a general rule, AI works in mysterious ways. That's kind of what makes it AI. The mathematical calculations are too complex for humans to comprehend. Its appeal is that it comes up with solutions humans have not been able to think of. It's awesome. But it's not perfect, and nuclear power needs to be impossibly close to perfection to be worth using.
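To see why I say the logic is "encrypted," consider this stand-in -- a made-up two-layer network with random numbers where trained weights would go; an illustration only, not anyone's actual reactor software:

```python
# Why "tracking down the problem" isn't like debugging ordinary code:
# the entire "logic" of this made-up network is the numbers in W1 and W2
# (random here, standing in for whatever training would produce).
import numpy as np

rng = np.random.default_rng(7)
W1 = rng.normal(size=(4, 16))    # a real model has millions or billions of these
W2 = rng.normal(size=(16, 1))

def decide(sensors: np.ndarray) -> str:
    """Turn four (hypothetical) sensor readings into a go/no-go call."""
    hidden = np.maximum(0.0, sensors @ W1)   # ReLU layer
    score = (hidden @ W2).item()
    return "OPEN RELIEF VALVE" if score > 0.0 else "HOLD"

print(decide(np.array([0.9, 0.1, 0.4, 0.7])))
```

If that call is ever wrong, which entry of W1 or W2 do you "fix"? The reasoning is smeared across every weight at once: no key, no source listing, nothing to step through.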

My recommendation is that we shut down the reactors. Thinking AI can be a "last best hope" to keep operator error from causing catastrophic (or merely expensive) accidents -- or that it can improve efficiency -- isn't going to make the reactors safe, just safer (if we're lucky, and maybe not even that). AI won't eliminate "operator error," especially during critical, unusual, or unique situations. It might even be the thing making the errors. And no one will know why it did what it did, possibly even in the aftermath.

Besides, nuclear energy actually blocks better solutions for Global Warming / Climate Change. Nukes suck up money and make false claims about being reliable "baseload" energy.

Keep AI where it belongs: Keeping cars on the road, and flying drones into ships and buildings...and into...nuclear power plants?!?

After the pilot has already bailed out?

(Also see the Substack clip shown below.)

Ace Hoffman
Carlsbad, CA
Professional computer programmer since 1980 (Assembler (LOTS of it!),
Cobol, RPG, Animate, HTML, etc. etc.)

Some programs I've written over the years (from: www.animatedsoftware.com):



"What is clear is that Cruise—and its main rival, Waymo—coucould do a
better job handling emergency situations like this. The memo about
Davis̢۪s deadly crash was one of dozens that Farivar obtained via an
open-records request and published online. These memos document at
least 68 times self-driving cars in San Francisco interfered with
first responders or otherwise behaved in ways that emergency workers
found disconcerting."
Source: "Do driverless cars have a first responder problem?", Understanding AI <understandingai@substack.com>, Sept. 14, 2023, 12:05 PM


Addendum (written/added Sept. 20, 2023):

After 9-11 there was a San Onofre annual safety meeting, and for the first and only time, the PRESS were there galore: cameras, reporters, everyone. Well over 100 people attended, when the NRC's annual public meetings about the plant usually drew maybe half a dozen.

THAT DAY, mysteriously, I was surrounded by "enthusiastic" NRC personnel -- about six of them -- all wanting to talk to me before the meeting. Wow! They're finally paying attention?!?!

I was so naive back then.

They were there specifically to keep me away from the reporters and news cameras.

Because I had blockbuster facts showing that our reactors are NOT designed to withstand large airplane strikes, and they knew I had the citations and would make some devastating comments ON THE AIR -- all of them true -- while the national tone was to pretend the reactors were safe.

After I wrote this latest piece, that F-35 went missing. An F-35 -- even unarmed -- can do quite a bit of damage to a reactor site, even if it's too small to breach the domes themselves. You don't need to breach the domes, though, to cause a meltdown.

The missing F-35 did cause me to make one change to the article I had already written -- I simply added the last line ("After the pilot has already bailed out?")!

