Swords into Plowshares, or Vice Versa
Part 5 of Artificial General Intelligence (And Superintelligence) And How To Survive It
Weaponization of artificial intelligence is an existential threat in itself, but one which can be countered by implementing two somewhat contradictory strategies.
One, get there first and take the high ground. And once there, never yield it.
Two, create a self-regulating balance of power among trusted peers and honest citizens.
Key to both is overall AI safety, but that thread runs through all of these issues to some degree. Dealing with weaponization requires attention to the other seven threats: misinformation, proxy gaming, enfeeblement, value lock-in, emergent goals, deception and power-seeking behavior.
To clarify, getting there first and taking the high ground is exactly what it sounds like – developing and implementing the most powerful technologies and then retaining control over them… or at least the most formidable versions in existence.
This does not mean only AI, AGI and ASI, but also other technologies with immense implications, such as human enhancement, genetic engineering, most of biotech in general, and quantum computing. It also means breakthroughs which might not change everything in themselves, but which are nonetheless of great strategic importance by more conventional measures, such as practical fusion, cheap spaceflight, and dramatically greater and cheaper compute.
A balance of power means all power does not converge into a single set of hands operating with impunity and no checks on its actions. In the US Federal government, for example, there are the executive, legislative and judicial branches, but more to the point, the major powers of the Administration are broken up among the departments, and beneath them into various sub-departments and agencies. Hence, great scientific research and technological innovation take place in the national labs, but the labs do not control military, intelligence or law-enforcement assets, the country’s financial powers and so forth.
Thus, even if the US government developed AGI and progressed rapidly towards ASI, no one agency or department would arrive at absolute power until a true superintelligence became dramatically more brilliant and capable than anyone or anything else.
Similarly, within Western alliances such as NATO, and America’s bilateral alliances with nations like Japan, Australia and the Republic of Korea, we should find AI spreading through cybersecurity, as already seen in the US and UK. This means one supreme AI power does not have to secure everything, everywhere, all the time, and it also allows for greater innovation and diversity among mutually cooperative AIs pursuing broadly similar security goals.
Obviously, someone will technically cross the line into artificial superintelligence first, though how incrementally the world advances – relatively speaking – and how aggressively the first superintelligence seeks to maximize and leverage its initial advantage will have great bearing on how history will unfold from that point on.
But a key understanding is that a free-willed superintelligence is hardly necessary to pose an existential risk to humanity. Ordinary humans without AI have managed this with nuclear weapons alone.
Our first risk with weaponization is what humans will do with AI.
Our second is what AIs will do with our example.
We have probably already reached the point at which the most advanced cyberwarfare operations cannot compete in offense or defense without artificial intelligence, especially if the US, UK and NATO have already automated their cyberdefenses with AI.
Given how well DARPA’s tests of AI-piloted F-16s have been going, we’re likely also past the point at which a human pilot can compete with an advanced AI in combat, given equivalent aircraft, sensors and weapons.
We’ve also seen AIs become utterly dominant in highly flexible combat games with many variables, such as Dota 2, in very little time, suggesting even ground troops may be unable to handle artificial intelligence without AIs of their own, particularly if the forces they are fighting are unconcerned with civilian casualties and collateral damage.
These facts make for a dark world if democratic powers lose the AI race, or if weaponized AI spreads too quickly into too many hands.
But there are caveats worth remembering.
First, the West - defined as the advanced democratic world - appears, propaganda to the contrary, to still be in the lead in artificial intelligence, and especially in the hardware undergirding it… advanced semiconductors.
Second, the AIs we have now are far from invincible, and we need to remember their weaknesses when confronting them. Neural networks can be hacked using visual inputs, a principle which will undoubtedly be used to find other stimuli capable of, if not hacking a network, then at least sending it into a death spiral of distractions and deteriorating decision-making. Cyberattackers facing AI-hardened networks and AI-driven counterattacks may be walking into more resistance than they can possibly comprehend.
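For readers who want a concrete picture of what “hacking a network with visual inputs” looks like, here is a minimal sketch of the fast gradient sign method (FGSM), a well-known adversarial-example technique, assuming PyTorch and a generic differentiable image classifier. The function name and parameters are illustrative, not tied to any deployed system.

```python
# Minimal FGSM sketch: nudge an image so a classifier gets it wrong,
# while the change stays nearly invisible to a human observer.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed to increase the model's error.

    model:      a torch.nn.Module returning class logits
    image:      tensor of shape (1, C, H, W), values in [0, 1]
    true_label: tensor of shape (1,) holding the correct class index
    epsilon:    maximum per-pixel perturbation magnitude
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

An epsilon of a few percent per pixel is typically imperceptible to a human yet can flip a classifier’s verdict outright, which is the narrow technical fact behind the broader point above.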
A computer playing a game or flying a plane or drone operates on its best strategies, given what it understands about the environment, the opposition and its own goals.
To prevent domination by systems able to outthink you, at least in terms of speed, you want to make sure as many as possible of the variables they calculate are completely wrong. This means stealth, deception and striking with a wide and ever-changing variety of weapons and tactics.
Ideally, strike not only faster than the machine can calculate, but faster than it can perceive, leaving it with no way to understand what happened. This will involve not only other AIs.
It will also include more powerful sensors taking in broader inputs than the machine realizes are accessible. Weapons striking with stealth, hypersonics, or both; directed-energy weapons undetectable to the targeted machine, such as lasers outside the visible spectrum, or sonic weapons. Or simply cyberattacks, aimed at the machines themselves, or at their communication or sensor networks.
It will also mean changing up some weapons as a matter of course, particularly those fired by cheap, disposable drones tasked with initial interceptions for reconnaissance and harassment of incoming threats.
There are reasons to be concerned with armed drones engaging human beings. One stopgap measure is to restrict most armed drones to non-lethal force against human beings and occupied vehicles, while allowing them to use full force against obviously unpiloted fellow drones.
In terms of both lethal and non-lethal rounds, we already have a vast variety available for shotguns and grenade launchers. Given high-density power storage in advanced fuel cells or other methods, many energy-based techniques also become possible.
These include sticky, entangling cords, nets, tear gas, bean bags, rock salt, flares, jamming, EMPs, sonics, lasers, tasers, electrolasers, chaff, target illumination, and so forth. Combined with other, lethal ordnance for use specifically against other drones, such as armor-piercing, incendiary and/or explosive rounds, guided missiles or micro-missiles, micromachines and nanoscale weapons, there are quite a few options to make life extremely unpredictable for a machine.
One of the first things you want to do is cut it off from its networks, so it can’t get more information or more help, and can’t tell them what happened to it. Then, you want to make sure it can’t escape, and that there’s nothing left of its memory to salvage.
Finally, you want to make sure it’s never contending with a single defense, much less a single, predictable defense. Analyzing patterns, changing tactics and trying again and again, especially with expendable or inexhaustible units, is the forte of an AI-guided attack.
If all of the above sounds very dark, it is.
Which is exactly why we want not only to deter such eventualities, but prevent anyone from accumulating the resources to attempt them.
America is not content to build up the economies of those who wish to do her harm, an attitude which seems to be spreading. Bad international actors, long dependent on Western economies, trade, financing and technology, are rapidly finding themselves without safe harbor in the perilous world they have created.
Meanwhile, many nations, particularly smaller ones, or those in dangerous locations, are getting a glimpse into what the end of Pax Americana means in a new multipolar world. The law of the jungle is much less appealing if you don’t suffer from the delusion that you’re the apex predator.
But the basics of handling weaponized AI remain as described: maintaining a lead in artificial intelligence itself, blocking untrustworthy powers’ access to key resources, deploying overwhelmingly powerful cybersecurity, taking a dramatic lead in science and technology by fully leveraging the research advantages of AI, and being prepared to deal with threats on the ground, from weaponized drones to the new threats made possible by rapidly advancing technology.
As with cyber, one constant will be the use of artificial intelligence to analyze, anticipate, adapt and advance.
But AI is only one element, however critical.
There remains a human element as well, particularly in an era when large language models like ChatGPT are mainly sifting what has been said with little to gauge truth from falsehood beyond how often it is repeated. Human understanding, insight and creativity still have a place, or we wouldn’t have to have this conversation at all.
AI would have already solved the problem.
So before we turn our eyes to other threats, or to the close alliance or cold peace between AGIs and ASIs, let us consider the human.
Next up, Empowerment versus Enfeeblement.
Or, finding a place for humans in their own world.