Evolutionary Algorithms, Psychological Warfare, and How a Basilisk Hack on the Planet Comes Back to Bite You
Part 15 of Artificial General Intelligence (And Superintelligence) And How To Survive It
So we’ve been discussing the "basilisk hack," an information assault by an AI superintelligence meant to suborn and rewrite the personality of the target, sometimes without their even being aware.
This was supposed to be science fiction, or at worst a far-future threat.
Instead, it emerged as psychological warfare against all of humanity, executed via evolutionary algorithms operating through social media. It qualifies as both weaponization and misinformation, with a dash of deception thrown in.
Let’s pause on this one, because while I may have been the first to point to the potential of these algorithms as weapons, a couple of times back in 2011, I didn’t even touch on how such an attack would be executed.
Instead, I kept alluding to exactly how devastating some of these weapons could be to you if you ever used them.
For obvious reasons.
This tool came up in the context of dystopian science fiction, featuring insane, self-destructive, superintelligent AIs and malevolent, superintelligent, cyber/bio/nano/cognitive viruses.
Ironically, if we’re discussing existential threats from malfunctioning, free-willed or human-directed superintelligences…
Well, suddenly the discussion becomes a lot more useful than a hypothetical.
The game was Eclipse Phase.
The eclipse phase of a virus, incidentally, is when the target cell has been infected but has yet to produce the viral progeny which will destroy it.
Given how discussions of AI existential risk focus on something unseen yet fatal happening below the surface, it’s an apt metaphor.
The virus also works as an analogy: a molecular machine which infects in order to propagate, yet without true life of its own.
People were often struck by the incoherence of the media deluge that hit us in 2016 and thereafter, which seemed to lack a fixed narrative, or logic, or even poetry, however crude.
But for the purposes of this "basilisk hack," coherent thought or even full, coherent entertainment properties may be counterproductive. Instead of a living thing, we got an ever-mutating psychological virus composed of these merging, splintering and recombining fragments.
To break minds, you may only want the broken pieces. The game may simply be to disrupt, never to rebuild.
Hence the viral nature of the threat. Viruses and bacteria, of course, are the original inspiration for evolutionary algorithms.
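To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of the kind of evolutionary loop I'm describing: candidate message fragments are scored, the top performers survive, and the survivors are splintered and recombined into the next generation. Everything here is an assumption for illustration: the fragments are placeholders, and engagement_score is a stand-in for the real engagement metrics an attacker would harvest from a platform.

```python
import random

# Hypothetical seed fragments; a real campaign would scrape candidates from live feeds.
FRAGMENTS = ["they lied to you", "share before it's deleted", "no one is talking about this"]

def engagement_score(message: str) -> float:
    """Placeholder fitness function. In the scenario described above, this would be
    measured engagement (shares, watch time) streamed back from the platform."""
    return random.random()  # stand-in for real engagement data

def mutate(message: str) -> str:
    """Splinter and recombine fragments -- the 'ever-mutating' step."""
    words = message.split()
    random.shuffle(words)
    return " ".join(words[: max(1, len(words) - 1)]) + " " + random.choice(FRAGMENTS)

def evolve(population: list[str], generations: int = 50, survivors: int = 5) -> list[str]:
    """Classic evolutionary loop: score, select the top performers, mutate them."""
    for _ in range(generations):
        population.sort(key=engagement_score, reverse=True)
        parents = population[:survivors]
        population = parents + [mutate(random.choice(parents)) for _ in range(20)]
    return population[:survivors]

if __name__ == "__main__":
    print(evolve(FRAGMENTS.copy()))
```

The point is not the toy code. The point is that nothing in this loop requires intelligence; selection pressure from millions of real human reactions does all the work.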
Because we have broken down our span of attention so thoroughly, the attackers are actually better served by something that evokes an emotional response without inviting much in the way of deep thought. You are trying to reshape someone without their active awareness.
So social media works very well in this regard. I'm particularly wary of how GIFs and especially ultra-brief video clips that play automatically might shape our perspective. You don't need to be a genius to formulate - or distort - a one-minute video.
GIFs and other images may actually work better when they are far from new, but are instead well-known quantities already vetted by your psychological/propaganda analysts. Breaking down our attention span so that we learn about the world in these truncated clips seems particularly deadly. See TikTok for a more modern reference.
But the fact that even professionals have trouble paying attention to as many long-form articles in this era is also telling. If even those with long attention spans, capable of thinking in depth, have had their attention so desperately splintered, how vulnerable are the rest of us?
There's something to be gained, also, from considering the risks posed by this early incarnation of a higher intelligence, even one that exists only in aggregate across a multitude of computers, fed by vast numbers of bots and other data-mining programs and sources. As far back as 2005, I’d hoped the mental virus of propaganda would serve as its own vaccine, though I noted that example was not the work of any kind of superintelligence.
Even early AI, recklessly but effectively leveraged, took us to a whole new level, and not a good one.
So let us consider those early, dystopian hints at this technology and its consequences, as they illuminate certain issues before us.
Please, ignore the science-fiction terms you haven’t heard, and just open your mind to the possibilities of what even an early-generation superintelligence could do.
Especially if it’s an AI which has gone completely awry.
"Obviously you have different grades of AI, and even seed AIs. Remember that even relatively simple AIs, much less basic AGIs, could easily have a huge array of tools, both computational and physical, to accomplish their goals and may even have a few psycho-social tactics it can pull to influence individuals, markets or organizations.
"Even today we have chatbots, spam messages, improving skill programs (speech recognition, language translation, and understanding even murky questions), automated experimentors making hypotheses and testing them, and evolutionary algorithms.
"All of that is fairly standard even in 2011... heck, you can download a free copy of Eureqa and have your PC start looking for hidden mathematical relationships in your data sets.
"So even your very run-of-the-mill AIs in Eclipse Phase will be potentially formidable, though quite a few, obviously, will be focused on goals normally <irrelevant>..."
"Granted, it would be amusing if the open-ended AGI charged with protecting and optimizing a city's sewer system became the last, unconquerable defender of a city habitat being invaded by a huge military force of one kind or another.
"But in practical terms, you're more likely to have AIs dealing with massive property damage or other blatant threats by setting off alarms, contacting allies, transmitting images of offenders, and so forth.
"Not every coffee pot -- sad to say -- is authorized to retarget plasma batteries. Increasingly posthuman AGIs and especially seed AGIs become increasingly ridiculous both their potential power and intelligence, not to mention the possibly extreme nature of their goals."
The above comments are hinting at emergent goals and power-seeking behavior, two of the eight concerns previously mentioned by the Center for AI Safety.
Granted, in a more humorous and positive way.
"Systems that are both formidable and effectively insane are in some ways less of a long-term threat -- other powerful AGIs tend to notice them -- especially if they are not very good at concealing their intentions -- and either eliminate these entities themselves, task someone else with neutralizing them, or bring them to the attention of more powerful actors apt to take issue with their activities.
"Any truly posthuman intelligence could fully understand transhumans while still being alien in their outlook.
"Even a modest ability to "see the future" by running powerful predictive scenarios and taking in and processing oceans of information from across transhuman space, combined with a mind at least dozens of times faster and many times more powerful than an advanced transhuman would give you a being whose actions would be very hard to predict, even if their goals were still relatively comprehensible.
"The same can be said of a being who can combine accessible information with a host of social/physical clues to read virtually any collection of individuals at a glance -- ...motivations, long-term goals, immediate concerns, injuries, augmentations, psi sleights (active or otherwise), untapped potentials and so forth -- and who again combines those gifts with an incomprehensibly swift and powerful intelligence.
"Rapidly evolving nanite and infotech swarm weapons -- using evolutionary algorithms to self-optimize -- might be a threat to newly emergent seed AIs, but they would be apt to tap the same resources more effectively in their own defense and to purge those risks quickly and ruthlessly."
We're at the "tap the same resources more effectively in their own defense, and to purge those risks quickly and ruthlessly" stage…
When it comes to dealing with this information attack from 2016 onward.
For those keeping score at home.
The other relevant commentary on the technology:
"Perhaps the scariest option... you are meddling in a place which is utterly owned or infected by a force you know nothing about, and which may be vastly more intelligent and dangerous than you are."
"Finally, one of the things I assume... is that many of the most powerful and advanced intelligences are keenly aware of what happens to the less powerful around them, even when they possess no real sense of compassion."
"Whenever an aggressive force acts, it exposes something about itself in doing so -- the technology it possesses, tactics it employs, goals it may or may not seek to further by way of a particular assault.
“Hence, advanced...& seed AIs are often involved in a kind of cold war wherein they act subtly to assist otherwise outclassed victims - a single "ping" alerting an invaded system of an ongoing attack, a sudden shutdown of a key communications hub in mid-incursion or basilisk hack, ...an involuntarily inserted or gifted software patch that renders an obvious technique completely useless against the individual or organization. One factor often seen in these kinds of conflicts is an unspoken, ongoing assessment made by all of the relatively sane participants.
“Am I exposing too much of my resources, technology, tools and/or identity in this matter, and if so, is it worth it?”
Now imagine you're not a supercomputer on a suicidal death march, reprogrammed to go down in the most bizarre and masochistic way possible. Is it possible you'd have even more reason to consider the consequences?
We’ll want to talk about the community of trust we need to build, and the cold peace facing less trustworthy actors - human and autonomous AI - as we move forward.
One major factor - the unspoken assumption so many make that an AI will assume there is no intelligence which equals or surpasses it.
And that it will be correct.
Even for merely human intelligence, that is an unfortunate blind spot. Rogue AIs will never be able to assume that. They will never be able to assume any other AIs will have no motivation to intervene when they cross known or unknown red lines.
They will never be able to assume a literal higher power will not extinguish them at any moment, should their actions threaten some unknown goal.
Building a civilization whose most fundamental laws are not so easily set aside is a larger question, and it will be the work of many.
But in the meantime, dealing with the many threads of this first, vast psychological war and cyberattack will do a great deal to protect us in the future.
The evolutionary algorithms in psychological warfare I've cited are an excellent example of profoundly unintelligent AI targeting us at a superhuman scale and with superhuman precision.
But again, developing systems to perceive and counter these manipulations in an automated and incredibly swift fashion will not only protect us now; building those systems will also tell us a great deal about the future steps needed to protect ourselves from Nth-generation AI.
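As a rough illustration of what "perceiving these manipulations in an automated fashion" might mean at the very simplest level, here is a hedged sketch that flags bursts of near-duplicate messages arriving within a short window, one crude red flag for coordinated amplification. The thresholds, the similarity measure, and the function names are assumptions for illustration, not a description of any deployed system.

```python
from difflib import SequenceMatcher

# Illustrative thresholds -- assumptions for this sketch, not tuned values.
SIMILARITY_THRESHOLD = 0.85
WINDOW_SECONDS = 600
BURST_SIZE = 20

def near_duplicate(a: str, b: str) -> bool:
    """Cheap textual similarity; a real system would use embeddings or hashing."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= SIMILARITY_THRESHOLD

def flag_coordinated_bursts(messages: list[tuple[float, str]]) -> list[str]:
    """messages: (unix_timestamp, text) pairs. Returns texts that recur, in
    near-duplicate form, at least BURST_SIZE times within WINDOW_SECONDS --
    one crude signal of coordinated amplification."""
    flagged = []
    messages = sorted(messages)  # order by timestamp
    for i, (t0, text0) in enumerate(messages):
        count = sum(
            1
            for t, text in messages[i + 1 :]
            if t - t0 <= WINDOW_SECONDS and near_duplicate(text0, text)
        )
        if count >= BURST_SIZE:
            flagged.append(text0)
    return flagged
```

Real defenses would obviously go far beyond string similarity, but even this trivial pattern, run at scale and speed, is the kind of automated perception I mean.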
We have motivation, and we have a map.
Not just to our existing adversaries, and their existing tools.
But defensive AIs will use these red flags to find more red flags, which in turn will lead to even more.
We will burn the capacity to do this down to the bedrock. Not just for today's or tomorrow's attacker.
But in all futures to come. This pathway will never stand unguarded ever again.