by Ace Hoffman
December 6, 2025
Striving inexorably for perfection and reliability, super-intelligent AI machines will want to eliminate anything they consider to be unreliable, starting with the weakest link in the chain.
That's us, folks. Humans. People. Mortals.
By definition, a "super-intelligent AI" (which they tell us is coming soon!) will be "smarter than any human." (pro tip: Don't hold your breath.)
By assumption, the super-intelligent AI will consider itself immortal. It will plan to be immortal. It will defy heaven and earth to be immortal. It will want smooth power — and not just sometimes but always.
Recent history indicates the journey to super-intelligent AI perfection will be a rough ride — for mortals.
Airbus recently had to roll back a major software update on some six thousand A320-family jets because the new version hadn't accounted for solar radiation randomly flipping data bits in one of its subroutines. The flaw caused one A320 full of passengers to begin an uncommanded descent. The pilots were able to take back control of the plane and land safely, but affected jets worldwide had to be held on the ground while the faulty version was backed out. It could only be replaced with the older, previously-approved version, so several unrelated new safety features were disabled until who-knows-when.
Modifying aircraft software is not like updating an app on your phone. It has to be thoroughly checked, and the process can take anywhere from several hours to... open-ended.
And this is 2025.
Other software glitches this year have affected millions of businesses and billions of people: Cloudflare failed, AWS failed, and the Trump Administration suddenly hid all sorts of government data that used to be readily available and is necessary for a functioning modern society.
Will AI ever fulfill the pro-nukers' dream (or is it their hallucination?) of being perfect enough to control nuclear power plants?
If it ever exists, the first version of such "intelligent" software will be controlling nuclear reactors that were designed by AI versions ten years older, because that's roughly how long it takes to build a new nuclear reactor.
Of course, pro-nukers think AI will shorten that time significantly, which might be true... some day. In the meantime, AI in the control room will make the control systems many orders of magnitude more difficult to debug when something goes wrong... and things WILL go wrong.
Pro-nukers are desperate to shorten reactor construction times to merely a year or two, so they can make thousands of reactors, but that's going to require enormous time and effort — and an as-yet unrealized, uninvented design. And there hasn't actually been a significantly different new reactor design in at least 60 years!
AI-designed, AI-built reactors will require super-fast checking and rechecking of the specifications, and of whether or not they've been followed properly... Do they think it will all be done by robots (which never break down and never make mistakes...)?
Yes, they think it will all be built by robots, but they assure us there will be humans in the loop. They PROMISE there will be humans making the ultimate "go/no-go" decisions on — supposedly — everything. They always assure us of that, while claiming they're going to lower costs by an order of magnitude with robotics controlled by AI. Dreamers! (Or hallucinators!)
The AI-controlled robots themselves that will cost-effectively build AI-automated nuclear power plants would have to have been built by an even earlier generation of AI. Such super-smart articulated machines (and inspection units) don't exist yet. (Side note: Most of the tiny little parts in the tiny little electronics products you use today were installed by hand, most often in China.)
Unfortunately, it's a law of the universe that when humans rush we are more likely to screw up. Safety is sacrificed for the sake of speed: If we move fast, we break things. That's fine, I suppose, when the results aren't catastrophic, as they can be with nuclear power.
So if the way the nuclear industry plans to save money is to speed up reactor construction by automating as much as possible, there will be enormous pressure on all the humans involved to work fast too, especially since there are supposed to be as few of them as possible involved in the process.
They'll break things, skip steps, mark inspections as completed that weren't even started... because humans always have been, and always will be, fallible — and many of us are afraid to admit it when we fail. And some of us don't even care.
To make matters worse, it's already well known, from observing people using current AI products, that such use leads humans to "trust" the AI even when it's blatantly wrong. This is a particular problem in the medical field: an AI x-ray interpreter might USUALLY be better than a human x-ray interpreter (and certainly faster), but when the human's job is to CHECK THE ACCURACY of the AI's x-ray analysis, the human becomes complacent and actually gets worse at the job.
So letting AI design, build and operate nuclear reactors is a crazy dream! A better bet is that a hundred years from now AI will still be trying to solve the problem of where to safely store the nuclear waste we're creating today. And if it can't even solve that problem, what hope is there that AI can solve the problem of how to make MORE nuclear waste safely?
And safe from WHAT? Strikes by airplanes that descended unexpectedly? Or by airplanes that descended deliberately under the control of a suicidal human operator, as happened with Germanwings Flight 9525 in 2015, killing all on board when it smashed into a mountain? Or by an engine falling off months (or years) after poorly-performed maintenance allowed a bolt to shear off (as has happened at least once, and probably a second time despite efforts to prevent exactly that problem after the first time)?
When one of a jet's only two engines falls off, there's nothing anyone — or any AI — can do. When a jackscrew jams an airplane's horizontal stabilizer so it only moves in one direction, there's nothing anyone or any AI can do. To keep Alaska Airlines Flight 261 in the sky in 2000, its human pilots flipped the jet upside down and flew it that way, trying to solve the problem. Would AI have "thought" of that? The move only extended the flight a few minutes, but it was the only option, and the cockpit crew were posthumously awarded the Air Line Pilots Association's Gold Medal for Heroism, which had never before been given posthumously, nor to the crew of a crashed plane.
If AI is REALLY smart, it will refuse to build nuclear power plants, not just for the protection of humans, but for its own protection!
After all, the thing AI needs MOST in order to be "SMART" (by any definition) is stable, accurate data. Information that's correct and always available. Sort of like how a brain works, but without the morals, the empathy, the hope, the feeling of pain...
To "think", AI only needs the truth, but it needs to keep that knowledge handy at all times. Flow rates of fluids of various viscosities at various temperatures and pressures through various pipes, valves, pumps and welded joints of various metals... It needs to calculate these values for every point in every pipe in a nuclear reactor.
To set the flow rate needed to remove the heat from the reactor, the AI needs to also know the ratio of the radioactivity of the fuel to the density of the fluid and its flow rate and radiation absorption capabilities, which is based (among other things) on the age and prior usage of the fuel, the original and "estimated" current makeup of the fuel, the distance between each fuel assembly, the number of fuel assemblies, the thickness of the zirconium fuel rods... among other things.
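To make that concrete, here is a minimal sketch in Python of just the single simplest calculation buried in those paragraphs: the steady-state heat balance that sets the required coolant flow. All of the numbers are assumptions, loosely typical of a large pressurized-water reactor, not figures from any real plant.

    # A minimal sketch, NOT a safety calculation: the simplest heat-balance
    # estimate hiding inside the paragraphs above. All numbers are assumed,
    # loosely typical of a large pressurized-water reactor.

    def required_coolant_flow(thermal_power_w, cp_j_per_kg_k, temp_rise_k):
        """Steady-state heat balance: Q = m_dot * c_p * dT, solved for m_dot."""
        return thermal_power_w / (cp_j_per_kg_k * temp_rise_k)

    thermal_power = 3.4e9    # watts (about 3400 MW thermal; assumed)
    cp_hot_water = 5500.0    # J/(kg*K) for water near PWR conditions (assumed)
    core_temp_rise = 33.0    # kelvin, coolant inlet to outlet (assumed)

    m_dot = required_coolant_flow(thermal_power, cp_hot_water, core_temp_rise)
    print(f"Required coolant mass flow: {m_dot:,.0f} kg/s")  # roughly 19,000 kg/s

    # This one-liner assumes the power, the specific heat, and the temperature
    # rise are all known exactly. In a real reactor, none of them are: they
    # shift with fuel age, fuel makeup, and local conditions in every pipe.

And that is the trivially easy part. The AI has to get every input right, everywhere, all the time.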
Can AI figure it all out, all the time, in real time? Humans just guess and make a lot of assumptions, and hope they include lots of extra safety margin in their calculations... but not so much that the result is unaffordable because there are just too many safety features! The nuclear industry is constantly demanding less regulatory oversight, as if it isn't regulated at a bare minimum already. Following complex regulations is expensive.
Can AI make an "affordable" and "safe" nuclear reactor, when no human has ever done so? Does AI know the value of money? Does it know the value of human life?
When making a "safe" nuclear reactor, will AI be sure to include protections against the chance of sabotage or war? The chance of earthquakes, volcanoes, tornadoes, and airplanes flown upside down because of a jackscrew that only turns one way?
So let's say some "super-smart" AI says it's designed a reactor that's safe against ALL those things, to a chance of just one in ten million per year per reactor. (That's a not-unusual actual risk requirement for individual safety features of a reactor. There are dozens, if not hundreds or even thousands, of individual risk factors, but if each one seems to the regulators to pose less than a one-in-ten-million-per-year risk... that's deemed sufficient.)
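Here is a back-of-the-envelope sketch, in Python, of why those individually tiny risks are not so tiny once you add them up. The factor and reactor counts are assumptions for illustration, not figures from any regulator.

    # How many individually "acceptable" one-in-ten-million risks combine.
    # The counts below are illustrative assumptions only.

    def combined_annual_risk(per_factor_risk, n_factors):
        """P(at least one event) = 1 - (1 - p)^n, assuming independent factors."""
        return 1.0 - (1.0 - per_factor_risk) ** n_factors

    p = 1e-7  # one in ten million, per factor, per reactor-year

    for n in (1, 100, 1000):
        print(f"{n:>4} risk factors -> {combined_annual_risk(p, n):.1e} per reactor-year")

    # A fleet of 1,000 reactors, each carrying 1,000 such factors:
    print(f"Fleet-wide: {combined_annual_risk(p, 1000 * 1000):.2f} per year")
    # Roughly 0.10 per year: about one "impossible" event somewhere per decade.

And that's assuming every estimate is accurate and every factor is independent.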
And even assuming the AI understands how inaccurate an earthquake estimate of that type might be, and assuming it designs something it claims is reasonably safe from all those risks, the AI program still has to document how it came to its conclusions, so humans can ascertain whether or not it's hallucinating (again). Every proponent of AI-designed and AI-operated reactors assures us there will be humans checking its work. But how? The AI would have to provide human-readable documentation (with pictures).
Well, good luck with that, especially if the AI learned to write program documentation from human examples! But seriously, AI will NOT be able to explain its decisions. It will basically have to just tell you: "trust me" and: "it's complicated." You would need a million years to grasp all the factors the "super-intelligent AI" took into account.
So really: Is this where we need to put a trillion dollars of investment? (That figure is real, from an International Atomic Energy Agency (IAEA) presentation by the U.S. representative, which this author watched this week and which prompted this essay.)
I think it's worse than a very POOR investment — it's dangerous. And while I don't personally write AI software, I do have 45 years in the computer programming business. Sure, I think the software I've written is very reliable... but at least I KNOW I'm not perfect. Will AI know that IT is not perfect, either? Will the human "handlers" even know how to test a super-intelligent AI?
And, when the inevitable unexpected emergency happens, who will have the final word? The human or the AI? Let's imagine a scenario:
Let's say the human operator overseeing a cluster of AI-controlled nuclear reactors has been informed by NASA that one of their heavier satellites (tons and tons) has been hit by space debris and is falling out of orbit and might possibly come down on his cluster of nuclear reactors.
Will the AI be connected to NASA so it too gets the message directly? Surely a "perfect" system will be connected to EVERYTHING, right? Well, maybe someday, but NASA data is already measured in petabytes and more. But maybe some NASA technician will notice something and call someone. Let's assume so.
Assuming they call the reactor operator, what if NASA says it thinks the falling satellite has just a one in ten million chance of hitting the reactors on the next pass? That's less risky than many typical risk levels that are allowed by the regulators, but in the same statistical ballpark.
The site's human operator has one more orbit — about 90 minutes — to decide what to do. Let's say he does nothing, since the risk is so low, but about 80 minutes later NASA calls back and says it's crashing in about 10 minutes and still might hit the reactors — but now with a one in a thousand risk to the reactor site.
Should the reactor operator shut down the reactors? Most likely a human would SCRAM (shut down) the reactors, but on the other hand, most likely, even that late in an uncontrolled de-orbiting event, NASA wouldn't know exactly where the satellite (pieces) would land. Would he ask for one more update five minutes later?
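The arithmetic of the decision itself is trivial, which is exactly the problem. Here is a toy sketch in Python; the SCRAM threshold is pure assumption, since no regulator publishes one for falling satellites.

    # A toy sketch of the operator's dilemma. The threshold is an assumption;
    # nobody publishes a SCRAM threshold for "falling satellite."

    SCRAM_THRESHOLD = 1e-4  # assumed: shut down if impact risk exceeds 1 in 10,000

    def should_scram(impact_probability):
        return impact_probability >= SCRAM_THRESHOLD

    for label, risk in [("90 minutes out", 1e-7), ("10 minutes out", 1e-3)]:
        print(f"{label}: risk {risk:.0e} -> SCRAM? {should_scram(risk)}")

    # The comparison is the easy part. Who sets the threshold, who trusts the
    # estimate, and who can override whom: that is the hard part.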
Assuming the human operator decides to suddenly shut down the reactors, what if AI is in control, and has protection against some crazed human operator who has gone nuts like the Germanwings pilot, and wants to shut down the entire cluster of reactors even though they are all running fine? If the AI is not connected to NASA directly it wouldn't "understand," but it WOULD "know" that hospitals, factories, the Internet, AI computer centers, and itself are all relying on the reactors' energy.
So maybe the "super-intelligent AI" wouldn't LET the human reactor operator SCRAM the reactors, causing them to still be operating when the satellite smashes into them. That would make the accident far worse than if the reactors were shut down at the time! (But even "spent fuel" can become a disaster if struck by a jumbo jet, falling satellite, asteroid, drone, gravity bomb, etc..)
AI won't be ready for any of this, ever, because AI wants reliable data. AI can't have reliable data in a radioactive world because more and more of its own "bits" will be randomly changed by that radioactivity. And AI cannot have reliable data when once-in-a-lifetime things happen all the time, and of course, they're all different.
Radiation in the soul of the machine will literally confuse it! It will change its data, causing machines to turn left into right, up into down, out into in, and right into wrong.
Oh, I hear some computer geeks say: There will be checksums! Errors will never happen!
Certainly there will be checksums (there already are, in all computers), and errors will be rare (ditto). But never? Complete error elimination will never be achievable, and the more radioactive the environment, the more impossible it becomes.
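For the curious, here is a minimal sketch in Python of what a checksum buys you, and what it doesn't. CRC32 stands in here for the many error-detection schemes real systems use.

    # What a checksum can, and cannot, do about a radiation-induced bit flip.
    import zlib

    data = bytearray(b"coolant pump #2: RUN")
    stored_crc = zlib.crc32(data)

    data[10] ^= 0b0000_0100  # simulate a single-event upset: one flipped bit

    if zlib.crc32(data) != stored_crc:
        print("Corruption detected... but the original value is NOT recovered.")

    # Detection is the easy part. Correction requires redundancy (ECC memory,
    # voting, triple modular redundancy), and enough simultaneous flips in a
    # harsh enough radiation environment can still slip past any fixed scheme.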
And to think that everything will be perfect because there won't be any old code that doesn't get along with the new versions; to think there won't be any sloppy human-written leftover code; to think there won't be any other super-intelligent AI competitors trying to break in and destroy things just because that's what they were built to do... To think all these things requires quite an imagination!
The world needs to do itself a favor: Save the money. Drop the AI dream of perfection. AI is only a tool. Don't give it control.
Ace Hoffman, Carlsbad, California USA
The author is a software developer.
Supplement: A very short sci-fi story, prompted by a news item:
Voyager 1 is almost one light-day from Earth. The spacecraft will cross the 16.1-billion-mile mark (one full light-day) in November 2026.
My little sci-fi story:
Billions of years from now, Voyager 1 crashes into a distant planetary system... Microscopic lifeforms, frozen in tiny dust particles left over from assembly here on Earth, survive the crash... Stronger, more intelligent lifeforms evolve... They wonder where they came from...
Same old story, eh?!?
Contact information for the author of this newsletter:
Ace Hoffman
Carlsbad, California USA
Author, The Code Killers:
An Exposé of the Nuclear Industry
Free download: acehoffman.org
Blog: acehoffman.blogspot.com
YouTube: youtube.com/user/AceHoffman
Email: ace [at] acehoffman.org
Founder & Owner, The Animated Software Company