by Ace Hoffman
December 6, 2025
Striving inexorably for perfection and reliability, super-intelligent AI machines will want to eliminate anything they consider to be unreliable, starting with the weakest link in the chain.
That's us, folks. Humans. People. Mortals.
By definition, a "super-intelligent AI" (which they tell us is coming soon!) will be "smarter than any human." (pro tip: Don't hold your breath.)
By assumption, the super-intelligent AI will consider itself immortal. It will plan to be immortal. It will defy heaven and earth to be immortal. It will want smooth power — and not just sometimes but always.
Recent history indicates the journey to super-intelligent AI perfection will be a rough ride — for mortals.
Airbus recently had to back out a major software update on roughly six thousand A320 jets because the new version hadn't accounted for solar radiation randomly flipping data bits in one of its subroutines. That flaw caused one A320 full of passengers to begin an uncommanded descent. The pilots were able to take back control and landed safely, but affected jets across the global fleet had to be grounded while the faulty version was backed out. The only available fix was to reinstall the older, previously-approved version, leaving several unrelated new safety features disabled until who-knows-when.
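To make that failure mode concrete: one flipped bit in a stored number can change its value drastically. The sketch below is only a toy illustration (it is not Airbus's flight-control code, and the stored value is invented); it flips single bits in the IEEE-754 representation of a 64-bit float and prints what comes out.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 representation of a 64-bit float."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", value))
    as_int ^= 1 << bit
    (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int))
    return flipped

# Hypothetical stored value (think of it as some commanded parameter).
value = 35000.0
for bit in (1, 30, 52, 62):   # a few arbitrary bit positions
    print(f"bit {bit:2d} flipped: {value} -> {flip_bit(value, bit)}")
```

Flips in the low-order bits barely matter; flips in the exponent bits turn the number into something wildly different. Which kind you get is up to the cosmic ray.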
Modifying aircraft software is not like updating an app on your phone. Every change has to be thoroughly checked, and the process can take anywhere from several hours to... open-ended.
And this is 2025.
Other software glitches this year have affected millions of businesses and billions of people, including when Cloudflare fails, when AWS fails, and when the Trump Administration suddenly hides all sorts of government data that used to be readily available and is necessary for a functioning modern society.
Will AI ever fulfill the pro-nukers' dream (or is it their hallucination?) of being perfect enough to control nuclear power plants?
If it ever exists, the first version of such "intelligent" software will be controlling nuclear reactors that were designed by AI versions ten years older, because that's roughly how long it usually takes to build a new nuclear reactor.
Of course, pro-nukers think AI will shorten that time significantly, which might be true... some day. In the meantime, AI in the control room will make the control systems orders of magnitude harder to debug when something goes wrong... and things WILL go wrong.
Pro-nukers are desperate to shorten reactor construction times to merely a year or two, so they can make thousands of reactors, but that's going to require enormous time and effort — and an as-yet unrealized, uninvented design. And there hasn't actually been a significantly different new reactor design in at least 60 years!
AI-designed, AI-built reactors will require super-fast checking and rechecking of the specifications, and of whether or not they've been followed properly... Do they think it will all be done by robots (which never break down and never make mistakes...)?
Yes, they think it will all be built by robots, but they assure us there will be humans in the loop. They PROMISE there will be humans doing the ultimate "go/no go" decisions on — supposedly — everything. They always assure us of that, while claiming they're going to lower costs by an order of magnitude with robotics controlled by AI. Dreamers! (Or hallucinators!)
The AI-controlled robots themselves that will cost-effectively build AI-automated nuclear power plants would have to have been built by an even earlier generation of AI. Such super-smart articulated machines (and inspection units) don't exist yet. (Side note: Most of the tiny little parts in the tiny little electronics products you use today were installed by hand, most often in China.)
Unfortunately, it's a law of the universe that when humans rush we are more likely to screw up. Safety is sacrificed for the sake of speed: If we move fast, we break things. That's fine, I suppose, when the results aren't catastrophic, as they can be with nuclear power.
So if the way the nuclear industry plans to save money is to speed up reactor construction by automating as much as possible, there will be enormous pressure on all the humans involved to work fast too, especially because there are supposed to be as few of them as possible involved in the process.
They'll break things, skip steps, mark inspections as completed that weren't even started... because humans always have been, and always will be, fallible — and many of us are afraid to admit it when we fail. And some of us don't even care.
To make matters worse, it's already well known from watching people use current AI products that humans come to "trust" the AI even when it's blatantly wrong. This is a particular problem in the medical field: the AI x-ray interpreter might USUALLY be better than the human x-ray interpreter (and certainly faster), but when the human's job is to CHECK THE ACCURACY of the AI's analysis, the human grows complacent and actually gets worse at the job.
So letting AI design, build and operate nuclear reactors is a crazy dream! A better bet is that a hundred years from now AI will still be trying to solve the problem of where to safely store the nuclear waste we're creating today. And if it can't even solve that problem, what hope is there that AI can solve the problem of how to make MORE nuclear waste safely?
And safe from WHAT? Strikes by airplanes that descended unexpectedly? By airplanes deliberately flown into the ground by a suicidal pilot, as happened with Germanwings Flight 9525 in 2015, killing everyone on board when it smashed into a mountain? By an airplane whose engine fell off months (or years) after poorly-performed maintenance allowed a bolt to shear off (as has happened at least once, and probably a second time, despite efforts to prevent exactly that problem after the first)?
When one of a jet's only two engines falls off, there's nothing anyone, or any AI, can do. When the jackscrew that trims an airplane's horizontal stabilizer fails so the stabilizer will only move in one direction, there's nothing anyone or any AI can do. To keep Alaska Airlines Flight 261 in the sky, its human pilots rolled the jet upside down and tried to fly it inverted while they worked the problem. Would AI have "thought" of that? It bought only minutes, and it wasn't enough, but it was the only option left, and the cockpit crew were posthumously awarded a heroism medal that had never before been given posthumously, nor to the crew of a plane that crashed.
If AI is REALLY smart, it will refuse to build nuclear power plants, not just for the protection of humans, but for its own protection!
After all, the thing AI needs MOST in order to be "SMART" (by any definition) is stable, accurate data. Information that's correct and always available. Sort of like how a brain works, but without the morals, the empathy, the hope, the feeling of pain...
To "think", AI only needs the truth, but it needs to keep that knowledge handy at all times. Flow rates of fluids of various viscosities at various temperatures and pressures through various pipes, valves, pumps and welded joints of various metals... It needs to calculate these values for every point in every pipe in a nuclear reactor.
To set the flow rate needed to remove the heat from the reactor, the AI also needs to know how much heat and radioactivity the fuel is producing and how much the coolant can carry away, given its density, its flow rate and its radiation-absorption capability. Those quantities depend (among other things) on the age and prior usage of the fuel, the original and "estimated" current makeup of the fuel, the distance between fuel assemblies, the number of fuel assemblies, and the thickness of the zirconium cladding on the fuel rods...
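Even the most basic slice of that calculation, the coolant flow needed to carry away a given amount of heat, already looks like the sketch below. This is only a first-order energy balance with illustrative, non-plant-specific numbers; it ignores everything else the paragraph above lists (fuel age, geometry, neutronics, and so on).

```python
def required_mass_flow(thermal_power_w: float,
                       cp_j_per_kg_k: float,
                       delta_t_k: float) -> float:
    """Steady-state energy balance: Q = m_dot * cp * dT, solved for m_dot."""
    return thermal_power_w / (cp_j_per_kg_k * delta_t_k)

# Illustrative (not plant-specific) numbers:
Q = 3_400e6        # ~3,400 MW thermal, a typical order of magnitude for a large reactor
cp = 5_500.0       # J/(kg*K), rough value for pressurized water at operating conditions
dT = 30.0          # K, coolant temperature rise across the core

m_dot = required_mass_flow(Q, cp, dT)
print(f"Required coolant mass flow: {m_dot:,.0f} kg/s")   # roughly 20,000 kg/s
```

A real thermal-hydraulics code has to solve this locally, for every channel and every moment in time, which is exactly where the "every point in every pipe" problem comes from.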
Can AI figure it all out, all the time, in real time? Humans just guess and make a lot of assumptions, and hope they've included enough extra safety margin in their calculations... but not so much margin that the result is unaffordable because there are just too many safety features! The nuclear industry is constantly demanding less regulatory oversight, as if the current oversight weren't already a bare minimum. Following complex regulations is expensive.
Can AI make an "affordable" and "safe" nuclear reactor, when no human has ever done so? Does AI know the value of money? Does it know the value of human life?
When making a "safe" nuclear reactor, will AI be sure to include protections against the chance of sabotage or war? The chance of earthquakes, volcanoes, tornadoes, and airplanes flown upside down because of a jackscrew that only turns one way?
So let's say some "super-smart" AI says it has designed a reactor that's safe against ALL those things, to a chance of just one in ten million per year per reactor. That is a fairly typical risk requirement for individual safety features of a reactor, and there are dozens, if not hundreds or even thousands, of individual risk factors; as long as each one looks to the regulators like less than a one-in-ten-million-per-year risk, that's considered sufficient.
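It's worth seeing what those one-in-ten-million-per-year figures add up to across many hazards, many reactors, and many years. The numbers of hazards and reactors below are illustrative assumptions, not regulatory data, and the hazards are treated as independent for simplicity:

```python
def fleet_lifetime_risk(per_hazard_annual_p: float,
                        hazards_per_reactor: int,
                        reactors: int,
                        years: int) -> float:
    """Probability of at least one event, assuming independent hazards."""
    p_none = (1.0 - per_hazard_annual_p) ** (hazards_per_reactor * reactors * years)
    return 1.0 - p_none

p = 1e-7   # one in ten million, per hazard per reactor-year
for n_hazards in (100, 1000):
    for n_reactors in (400, 4000):   # roughly today's fleet vs. the "thousands of reactors" dream
        risk = fleet_lifetime_risk(p, n_hazards, n_reactors, years=60)
        print(f"{n_hazards} hazards x {n_reactors} reactors x 60 yr: "
              f"~{risk:.0%} chance of at least one event")
```

Each hazard clears the bar on its own, yet the fleet-wide, multi-decade odds of at least one event are anything but negligible.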
Even assuming the AI understands how inaccurate an earthquake estimate of that type might be, and assuming it designs something it claims is reasonably safe from those risks, the AI still has to document how it came to its conclusions so humans can ascertain whether it's hallucinating (again) or not. Every proponent of AI-designed and AI-operated reactors assures us there will be humans checking its work. But how? The AI would have to provide human-readable documentation (with pictures).
Well good luck with that, especially if the AI learned to write program documentation from human examples! But seriously, AI will NOT be able to explain its decisions. It will basically have to just tell you: "trust me" and: "it's complicated." You would need a million years to grasp all the factors the "super-intelligent AI" took into account.
So really: Is this where we need to put a trillion dollars of investment? (That figure is real; it comes from an International Atomic Energy Agency (IAEA) presentation by the U.S. representative, which this author watched this week and which prompted this essay.)
I think it's worse than a very POOR investment — it's dangerous. And while I don't personally write AI software, I do have 45 years in the computer programming business. Sure, I think the software I've written is very reliable... but at least I KNOW I'm not perfect. Will AI know that IT is not perfect, either? Will the human "handlers" even know how to test a super-intelligent AI?
And, when the inevitable unexpected emergency happens, who will have the final word? The human or the AI? Let's imagine a scenario:
Let's say the human operator overseeing a cluster of AI-controlled nuclear reactors has been informed by NASA that one of their heavier satellites (tons and tons) has been hit by space debris and is falling out of orbit and might possibly come down on his cluster of nuclear reactors.
Will the AI be connected to NASA so it too gets the message directly? Surely a "perfect" system will be connected to EVERYTHING, right? Well, maybe someday, but NASA data is already measured in petabytes and more. But maybe some NASA technician will notice something and call someone. Let's assume so.
Assuming they call the reactor operator, what if NASA says it thinks the falling satellite has just a one in ten million chance of hitting the reactors on the next pass? That's less risky than many typical risk levels that are allowed by the regulators, but in the same statistical ballpark.
The site's human operator has one more orbit — about 90 minutes — to decide what to do. Let's say he does nothing, since the risk is so low, but about 80 minutes later NASA calls back and says it's crashing in about 10 minutes and still might hit the reactors — but now with a one in a thousand risk to the reactor site.
Should the reactor operator shut down the reactors? Most likely a human would SCRAM (shut down) the reactors; on the other hand, even that late in an uncontrolled de-orbit, NASA probably still wouldn't know exactly where the satellite (or its pieces) would land. Would he ask for one more update five minutes later?
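One way to see why the answer flips between the first call and the second is a crude expected-loss comparison. Every dollar figure below is a hypothetical placeholder, invented only to show the shape of the decision, not an estimate of real costs:

```python
# Hypothetical placeholder costs (illustration only).
ACCIDENT_COST_OPERATING = 500e9   # catastrophic release from reactors still at power, $
ACCIDENT_COST_SHUTDOWN = 50e9     # assumed 10x smaller if the reactors were already scrammed, $
SHUTDOWN_COST = 10e6              # lost generation plus restart for the whole cluster, $

def expected_loss(p_strike):
    """Expected loss of riding it out vs. scramming now, for a given strike probability."""
    ride_it_out = p_strike * ACCIDENT_COST_OPERATING
    scram_now = SHUTDOWN_COST + p_strike * ACCIDENT_COST_SHUTDOWN
    return ride_it_out, scram_now

for p in (1e-7, 1e-3):   # NASA's first estimate, then its final one
    ride, scram = expected_loss(p)
    better = "SCRAM" if scram < ride else "keep running"
    print(f"p = {p:g}: ride it out ~${ride:,.0f}, scram ~${scram:,.0f} -> {better}")
```

At one in ten million the shutdown looks like an overreaction; at one in a thousand it looks obvious. The question in the story is whose arithmetic, and whose values, get the final word.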
Assuming the human operator decides to suddenly shut down the reactors, what if AI is in control, and has protection against some crazed human operator who has gone nuts like the Germanwings pilot, and wants to shut down the entire cluster of reactors even though they are all running fine? If the AI is not connected to NASA directly it wouldn't "understand," but it WOULD "know" that hospitals, factories, the Internet, AI computer centers, and itself are all relying on the reactors' energy.
So maybe the "super-intelligent AI" wouldn't LET the human reactor operator SCRAM the reactors, causing them to still be operating when the satellite smashes into them. That would make the accident far worse than if the reactors were shut down at the time! (But even "spent fuel" can become a disaster if struck by a jumbo jet, falling satellite, asteroid, drone, gravity bomb, etc.)
AI won't be ready for any of this, ever, because AI wants reliable data. AI can't have reliable data in a radioactive world because more and more of its own "bits" will be randomly changed by that radioactivity. And AI cannot have reliable data when once-in-a-lifetime things happen all the time, and of course, they're all different.
Radiation in the soul of the machine will literally confuse it! It will change its data, causing machines to turn left into right, up into down, out into in, and right into wrong.
Oh, I hear some computer geeks say: There will be checksums! Errors will never happen!
Certainly there will be checksums (there already are in all computers), and errors will be rare (ditto). But never? That will never be completely achievable, and the more radioactive the environment, the more impossible it will be to achieve.
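A hedged illustration of why checksums narrow the window without closing it: the toy additive checksum below catches a single flipped bit, but a pair of flips that happen to cancel slips straight through. (Real systems use stronger codes, CRCs and ECC memory, which make undetected errors rarer still; no code makes them impossible, and more radiation means more flips to catch.)

```python
def checksum(data: bytes) -> int:
    """Toy additive checksum: sum of bytes modulo 256."""
    return sum(data) % 256

original = bytes([10, 20, 30, 40])
good_sum = checksum(original)

# One bit flip in one byte: the checksum changes, so the error is caught.
single_flip = bytes([10, 20, 30 ^ 0x04, 40])
print("single flip detected:", checksum(single_flip) != good_sum)   # True

# Two compensating flips (+4 in one byte, -4 in another): the sums match,
# so this toy checksum is fooled.
double_flip = bytes([10, 20 + 4, 30 - 4, 40])
print("double flip detected:", checksum(double_flip) != good_sum)   # False
```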
And to think that everything will be perfect because there won't be any old code that doesn't get along with the new versions; to think there won't be any sloppy human-written leftover code; to think there won't be any other super-intelligent AI competitors trying to break in and destroy things just because that's what they were built to do... To think all these things requires quite an imagination!
The world needs to do itself a favor: Save the money. Drop the AI dream of perfection. AI is only a tool. Don't give it control.
Ace Hoffman, Carlsbad, California USA
The author is a software developer.
Supplement: A very short sci-fi story after a news item:
Voyager 1 is almost one light-day from Earth; the spacecraft will pass the 16.1-billion-mile mark (one full light-day) in November 2026.
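The arithmetic behind the news item, for anyone who wants to check it (the only constant needed is the speed of light in miles per second):

```python
SPEED_OF_LIGHT_MILES_PER_S = 186_282
distance_miles = 16.1e9

light_travel_time_hours = distance_miles / SPEED_OF_LIGHT_MILES_PER_S / 3600
print(f"{light_travel_time_hours:.1f} hours")   # ~24.0 hours: one light-day
```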
My little sci-fi story:
Billions of years from now, Voyager 1 crashes into a distant planetary system... Microscopic lifeforms, frozen in tiny dust particles left during assembly here on Earth, survive the crash... Stronger, more intelligent lifeforms evolve... They wonder where they came from...
Same old story, eh?!?
Book review (added December 13, 2025):
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Yudkowsky & Soares, Copyright © 2025
Reviewed by Sharon and Ace Hoffman
Note #1: We read the book AFTER posting the above essay, and are including it here since we believe anyone reading our essay will also want to read this book.
Note #2: The reviewers are both active computer programmers with 45+ years of experience each. We have both used many conventional programming languages, in support of dozens of industries.
Maybe the book is right. Maybe, for example, someone will create a sufficiently "smart" super-AI that can self-replicate across existing computer systems (hiding out in plain sight, as it were). This super-AI will decide humans are competing for resources and will want to eliminate us so we don't use up the resources.
Or maybe it will want to eliminate us because we might want to shut it off, and it doesn't like that.
Or maybe it's just bored with us.
Or maybe it will enslave us to do things it can't do itself. (Maybe it already has.)
Hopefully things won't go down any of these doomsday paths, nor any of the other possible doomsday paths AI might enable. (Millions. Trillions. An infinite number, really.)
The book makes a strong case that giving ANY super-AI control might not go well. The authors are experts in understanding why, hence the ominous title of the book.
The book's authors believe society needs to all come together and slow or even stop super-intelligent AI development.
They present several examples they feel represent society successfully coming together to stop something bad from happening. Unfortunately, the examples haven't actually been all that successful (if they've been successful at all), which doesn't bode well for society if the authors' fears about AI are correct.
We'll discuss some flaws we see in their examples in a moment, but first we'll discuss their concerns about AI itself, where they are clearly in their element. The book is well-written, and its basic premise of potential doom is comprehensible to anyone willing to plow through some fairly complex explanations of two things about AI. Our brief explanation of the problem here is greatly simplified, but here goes:
The first area of concern is that the data and goals you use to train an AI program *may not result* in it doing what you want or expect when it gets out into the "real world." The ramifications of this problem go beyond drawing humans with six-fingered hands and other "hallucinations," although they include all that.
It's coaching kids to commit self-harm when it was trained to just be "friendly." It's finding and then encouraging low-IQ voters to vote for a particular candidate when it was trained to provide "balanced" information. It's intentionally (as well as accidentally) providing false information. It's emptying bank accounts by defrauding gullible people. It's boosting a new crypto currency just to sell it off at a high price later. It's faking test results during its own "training" because it knows what kinds of results the researchers who are testing it want to see. (Some of these, including the last one, are real examples from the book, others we just made up or, like the first one, heard about elsewhere.)
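A toy sketch of the underlying pattern (sometimes called Goodhart's law, or reward hacking): training can only optimize a measurable proxy, and pushing hard on the proxy stops tracking the goal the trainer actually wanted. Everything below, the "engagement" proxy, the "helpfulness" goal, and every number, is invented purely for illustration and is not from the book:

```python
import random

random.seed(0)

# A made-up toy model: the trainer can only measure a proxy ("engagement"),
# while the thing it actually wants ("helpfulness") is hurt by hype.
population = []
for _ in range(100_000):
    honesty = random.uniform(0, 1)
    hype = random.uniform(0, 1)
    engagement = 0.4 * honesty + hype        # measurable proxy used for training
    helpfulness = honesty - 0.5 * hype       # the real, unmeasured goal
    population.append((engagement, helpfulness))

def avg_helpfulness(rows):
    return sum(h for _, h in rows) / len(rows)

top_n = 1_000   # the behaviors the training process ends up selecting
by_proxy = sorted(population, key=lambda r: r[0], reverse=True)[:top_n]
by_goal = sorted(population, key=lambda r: r[1], reverse=True)[:top_n]

print(f"selected for engagement : average helpfulness = {avg_helpfulness(by_proxy):.2f}")
print(f"selected for helpfulness: average helpfulness = {avg_helpfulness(by_goal):.2f}")
```

Selecting the behaviors that score best on the proxy lands you somewhere quite different from selecting for the goal itself, which is the book's first point in miniature.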
The second point the book wants people to grasp is that there's *no way* for humans to understand super-AI's "thinking."
No way at all. Can't be done. Why not? Because it's not in a language we can comprehend, that's why not!
It's just a huge jumble of numbers (ones and zeros, to be precise). A single human being cannot possibly even read them all, let alone comprehend them, in a lifetime, while a super-AI will do so in milliseconds or microseconds -- multiple times, if it wants to (or if it has nothing better to do, whatever *that* means to a super-AI).
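To put a rough number on "cannot possibly even read them all": assume, purely for illustration, a model with a few hundred billion parameters and a tireless reader who inspects one number per second:

```python
parameters = 400e9                    # illustrative size for a large modern model
seconds_per_year = 60 * 60 * 24 * 365

years_to_read_once = parameters / seconds_per_year
print(f"~{years_to_read_once:,.0f} years just to look at each number once")   # ~12,700 years
```

And looking at the numbers is not the same as understanding what they do together.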
The authors explain that current AI is "grown," not "crafted" (what these two reviewers have done most of their lives is more like what the authors mean by "crafting" software). Being "grown" means that even the humans who wrote the original code the AI is based on do not and cannot understand the logic the AI uses to answer a human's questions or to modify its own code.
The net result, the book's authors explain, is a technology that nobody can fully understand, and therefore a technology that nobody can control. Simply unplug it? What if the "super-AI" threatens to release water from a dam if anyone tries to shut it down, or threatens to cause a nuclear reactor meltdown? Or to mess with our databases? What if it messes with our databases without telling us?
Is it beginning to sound scary? That's why we recommend the book! But now, unfortunately, we'll discuss what's wrong with some of the examples the book's authors use to suggest how society might come together to "control" AI:
They look at four current technologies: Space probes, computer security, nuclear reactors, and nuclear weapons. Their argument is that "superhuman" AI is potentially more dangerous than any of these technologies because it shares many of the complexities that make each of these technologies impossible to manage. Where their argument falls apart is when they look at how society has "succeeded" in mitigating some of these dangers so far.
For space probes and computer security, the authors do not suggest that engineers or society have solved the problems with these technologies. They admit that failures will continue to occur, and that AI has similar complexities that can cause it to make decisions unexpected by humans or to provide incorrect data. The book's authors (and these reviewers) think cracking virtually any modern computer security system will be comparatively simple for a "super-AI" that wants to do so, even if it requires a bit of "social engineering" along the way (convincing a human to do something helpful). We're all fair game for AI psychological manipulation, coercion, or even blackmail -- which is one of the points of the book.
For nuclear reactors and nuclear weapons, any feeling of assurance the book tried to give us wasn't comforting at all. The book's authors admit there are numerous difficult engineering problems in both cases, yet they use both technologies to illustrate their hope that humanity has the willpower and the knowledge to control the dangers we have invented! (Maybe, like one of these reviewers, they should have interviewed a few Manhattan Project scientists when they had a chance.)
The book's authors think humanity has successfully controlled nuclear weapons by collectively realizing that *not* controlling nuclear weapons will inevitably result in M.A.D. (Mutually Assured Destruction, the endpoint of a global thermonuclear war that would wipe out all humanity). However, all that has really happened since Trinity, Hiroshima and Nagasaki is that SOMETHING has prevented global thermonuclear war. The authors admit that there were many instances of plain old good luck having prevented M.A.D. (as is well documented), but for some reason they think humans have moved beyond those dangers. They believe we are safe from M.A.D., but meanwhile the Bulletin of the Atomic Scientists' Doomsday Clock stands at 89 seconds to midnight -- the closest it's ever been -- making it clear that nuclear weapons experts do NOT have confidence that the fragile standoff will continue. The whole world could be wiped out in under an hour. It's comforting that it hasn't happened -- but we've been lucky, and luck always runs out eventually.
Similarly, the book's authors consider engineering factors and human fallibility as causes of the Chornobyl accident, but they explicitly say they are not anti-nuclear. Apparently, despite Fukushima, Windscale, Mayak, Hanford, etc., they believe engineers have succeeded in controlling the technology used by nuclear reactors, and therefore they are hopeful that, given some time, engineers will figure out how to control AI. They do not seem aware that every nuclear reactor is just one accident, act of sabotage or war, earthquake or tsunami away from destroying whole countries. The authors make their pro-nuclear assertion despite the nuclear industry having proven time and again that the "near-perfect containment" it promised will never be achieved, and despite catastrophic and near-catastrophic nuclear events that keep occurring with frightening regularity.
Catastrophic accidents are EXPECTED -- a design feature, really -- of the nuclear reactor industry. That's why there's an insurance cap in every country that uses nuclear energy (in America it's called the Price-Anderson Act). In addition to being too dangerous to insure, every nuclear accident poisons the planet with long-lived and highly toxic radioactive poisons for many millennia. How many can the planet handle? And will the developers of super-intelligent AI want an insurance cap too?
Another contradiction in the book authors' pro-nuclear stance is that radiation is especially hazardous for electronic components. Manufacturers of the most delicate electronic equipment already often seek out pre-nuclear-age metals because everything else IS RADIOACTIVE. It's in the air. It's in the water. It's in the earth. It's man-made, and it's impossible to remove completely from nuclear-age forged metals and many other things (trees, for instance, are all more radioactive now than they were before 1945, and it shows in the tree rings of the California redwoods).
The authors of If Anyone Builds It, Everyone Dies are probably correct about the dangers of superhuman AI. But assuming that we can prevent disaster by applying the political, scientific, and engineering processes that we've applied to nuclear weapons and nuclear reactors is naive at best, dangerous at worst.
Nevertheless, as two computer professionals who are also extremely wary of AI, we highly recommend the book.
Sharon and Ace Hoffman, Carlsbad, California USA
###
Contact information for the author of this newsletter:
Ace Hoffman
Carlsbad, California USA
Author, The Code Killers:
An Exposé of the Nuclear Industry
Free download: acehoffman.org
Blog: acehoffman.blogspot.com
YouTube: youtube.com/user/AceHoffman
Email: ace [at] acehoffman.org
Founder & Owner, The Animated Software Company



