A Higher Purpose, and a Higher Game
Part 16 of Artificial General Intelligence (And Superintelligence) And How To Survive It
I literally shelved a lot of positive applications of the same technology - evolutionary algorithms in human enhancement - to avoid this one.
But we’re in this place now, having seen a psychological war unfolding around us without knowing what it means, and seeds of opportunity lie buried in the crisis.
If you apply something like, say, evolutionary algorithms to psychological warfare, you can predict immense damage…
But because you may be messing directly with the psychology of human beings via individually targeted propaganda, and exploiting every possible trigger point which comes up, the exact nature of the harm will vary dramatically from target to target.
And targeted propaganda - such as the alleged work of Cambridge Analytica - is an ideal example.
It may seem, even at its worst, to have containable or predictable aggregate effects, but deliberately assaulting the psychology of millions at their weak points has follow-on effects.
And because you're trying to impact the decision-making of millions, the consequences are, by definition, utterly unpredictable.
Unpredictable next-generation technology adds its own flavor of chaos to the mix. When trying to predict the future, so much of the noise - buzzwords, snake oil, ineffectual technology, dead-end experiments, failed prototypes - may seem terribly annoying.
But Russia, China and other bad actors have made it clear they'll exploit dangerous tools and emerging technology in highly irresponsible ways.
So we’ve been well served, in many ways, by our technological fits and starts, and our cycles of hype and oblivious ignorance. However ironic that may be.
Not publicly clarifying how to use weapons of mass disruption, weapons of mass destruction, emergent technologies and so forth more effectively is arguably a very good idea. Anything wildly destructive or incredibly strategic - as in, potentially upending the whole global balance of power - is worth keeping under wraps a bit longer. No matter how many players want to steal it.
We're fortunate so many bad people have proven to be so bad at their work.
But with AI emerging rapidly - especially artificial general intelligence (AGI) and artificial superintelligence (ASI) - we’re going to find an increasing number of very bad things being done very well.
And we need to be ready.
Whether it’s shutting down networks engaged in psychological warfare across the world, tracking pandemics, drugs or weapons of mass destruction, or simply staying ahead of AI in terms of technology and capabilities - for as long as we can…
It’s not one act, one gambit, one strategy.
It’s a multitude.
But if we’re clear-eyed about what we’re facing - both in terms of threats and opportunities - we’ll find it easy to take those steps.
Not as a laborious exercise, but as an endless dance embracing our entire civilization.
Improving our health, skills, intelligence and well-being across the board - much less doing so radically and easily - is hardly a burden.
But we have to start by making that the goal. Not merely enduring, surviving or “coping.”
Again, it's all incremental.
Start with practical, incremental goals - some of which will be substantial in themselves. Especially at the level of civilization itself.
For example, tracking all weapons of mass destruction includes tracking all airborne pandemics.
And folded into tracking pandemics - once we move beyond simply tracking symptoms and infections to the pathogens themselves - we have the means to start tracking nanophages. Nanotech weapons, for example.
Folded into dealing with pandemics is curing all infectious disease.
Folded into watching for nanoscale intrusions is not only thwarting infections, but vastly improved biomedical scans and treatments, and ultimately addressing other medical conditions.
Does this solve all the risks of hostile or rogue nanotechnology?
No. But it addresses what we can while opening the door to refining the technology further and finding new, unanticipated technologies to do it better.
And you make certain each step, to the extent possible, is a gift in itself. When each step lifts you up and brings you forward, the race is empowering rather than exhausting, uplifting rather than all-consuming.
Again, initial and incremental steps bring us closer to achieving each of these far-reaching goals. And they ensure that when a crisis does come up, we’re not completely unarmed in the face of it.
If we fail, then trivial steps along the way like curing all disease will not have been in vain.
When addressing existential risks, some online voices push either the most extreme positions - for action or indifference - or outright despair.
Neither works. Some risks are so great we don’t have time for theater and gamesmanship. The plan matters because the threat matters.
There's a difference between having a plan and having an online shouting match, an endless cycle of despair, or a post-Apocalyptic death cult.
It's a subtle difference, but it's there.
Have vulnerabilities been exposed, minds tortured, bonds broken and deeply personal information stolen from millions to hundreds of millions?
Yes. But it's not just our adversaries who have a map.
And while we can transcend the flaws which fed these attacks - transforming so swiftly and so utterly as to render the same carefully orchestrated assaults useless…
The culpability of the worst criminals never goes away, and the evidence against them only multiplies.
AI Seraphim - literal avenging angels - built to do nothing but hunt the evidence in these databases and the real world are being trained - effectively - even now.
By a degree of operational-security failure so vast and all-encompassing it is almost humbling in its extent.
Builders of neural networks love raw information and “ground truth” - a baseline glimpse of reality clear enough and reliable enough to gauge their progress.
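A toy illustration of what that means in practice - entirely synthetic data and a generic classifier, assumed purely for the sake of the sketch: a slice of labeled reality is held back from training, and the model's progress is measured against it rather than against its own output.

```python
# Toy illustration of "ground truth" as a yardstick: hold back a slice of
# labeled reality and measure the model against it. Synthetic data and a
# generic classifier -- nothing here refers to any real dataset or case.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled real-world records.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# The held-out quarter is the "ground truth" the model never trains on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy against held-out ground truth:",
      accuracy_score(y_test, model.predict(X_test)))
```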
When endless individuals and organizations unknowingly poured evidence of their culpability onto the Internet - through Bitcoin bribes, botnet hacking, malware, malicious harassment, endless psychological warfare, and other innumerable acts - they told us everything.
Everything about their tools, their intentions, their crimes, their employers, their customers, their victims and their ultimate paymasters.
And in so doing they revealed far more than their immediate crimes. Associations and networks in the shadows were brought to light. So were hidden financial transfers, trade routes, Darknet markets, laundered wealth, compromised banks and governments.
Everything, really.
But why does this matter to more than the crisis before us, or its investigation?
Because these threads not only wire together exponentially greater crimes.
They also serve as extraordinary databases for those building future generations of AI - particularly for those seeking a greater understanding of the world, strategy or analysis of international finances, trade, technology and so on, and, of course…
Anyone aspiring to superintelligence.
These databases are already the fundamentals of machine learning for what comes next.
Not to mention, for what's already here.
Rest assured, I have not shared everything I've sent to the FBI on Twitter or Substack.
But with neural networks, it's not so much the red flags we know or suspect…
As the ones we never imagined.
On 1/1/2020, I sent the FBI a message on tracking all weapons of mass destruction as well as trafficked drugs, weapons, people, counterfeit money and goods. Yes, the versions available online are still partially redacted, though you’ll find many of those redactions have already been filled in during this series.
But it’s an example of another vast swath of criminal activity we could address with massively distributed sensors and the AI technology available 3 1/2 years ago.
I don’t know exactly how well it was received, but we banned the export of AIs doing geospatial analysis within 5 days. Six weeks later, BAE announced they had a contract from DARPA to track weapons of mass destruction using apparently the same methods.
So I'd imagine it was noticed, and is probably in use.
So to sum up:
Whatever the inclinations of assorted compromised individuals, disinterested associates, those willing to look the other way, and so forth...
The machines will never stop.
In too many places, and too many databases.
It's not just that our adversaries don't know where to turn or what they’re doing, anymore.
It's that nobody knows what threads are fatal to pull on, anymore.
Not just because of the commingling oceans of evidence, but because GANs (generative adversarial networks) will be creating new methods of analysis on the fly.
Yes, it takes several hundred hours to stand up a typical GAN.
3 weeks is just over 500 hours.
6 weeks, just over a thousand.
Ordinary university labs build them now.
Imagine how much the pace will quicken, as the technology becomes increasingly democratized.
Imagine how much it already has, with large language models like ChatGPT available almost everywhere.
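For a sense of what "standing up a GAN" actually involves at its smallest, here is a minimal sketch - a toy generator and discriminator learning a one-dimensional Gaussian. The data, layer sizes and training budget are illustrative assumptions, not any production analysis system.

```python
# A minimal GAN sketch: a generator learns to mimic a 1-D Gaussian
# distribution. Toy scale only -- the data, layer sizes, and training
# budget are illustrative assumptions, not any system described above.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps 8-D random noise to a single fake "sample".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores a sample as real (1) or generated (0).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in "ground truth": a Gaussian centered at 4.0 with std 1.25.
    return 4.0 + 1.25 * torch.randn(n, 1)

for step in range(2000):
    # Train the discriminator to separate real samples from fakes.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
# Should drift toward the real mean (~4.0) and spread (~1.25).
print(f"mean={samples.mean().item():.2f}, std={samples.std().item():.2f}")
```

The loop itself is this small; the hours go into data, scale and tuning - which is exactly the part that keeps getting cheaper as the tooling spreads.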
Also, we already have a vast map of global crime and subversion, much of which is literally financial assets.
But it's like tracking seismic waves or the flow of a charge through a network of wires - what vibrates or lights up tells you even more than you already knew.
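Here's a rough sketch of that "what lights up" idea, using a toy directed graph of hypothetical wallets - the names and edges are invented purely for illustration: flag one known node, and the structure around it announces itself.

```python
# A toy version of "what lights up": a tiny directed graph of hypothetical
# wallets (names and edges invented for illustration). Flag one known node
# and everything upstream and downstream of it becomes visible.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("A", "B"), ("B", "C"), ("B", "D"),  # a known chain of transfers
    ("D", "E"), ("F", "C"),              # F pays into C, seemingly unrelated
])

flagged = "B"                            # one wallet tied to known activity
downstream = nx.descendants(g, flagged)  # every account its funds reach
upstream = nx.ancestors(g, flagged)      # every account that funded it

print("lights up downstream:", sorted(downstream))  # ['C', 'D', 'E']
print("lights up upstream:  ", sorted(upstream))    # ['A']
```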
Remember, these are the best people in the world, working together to track assets, willingly handed unimaginably thorough information in the largest financial/counterintelligence/law-enforcement investigation in human history.
So as we're rolling up the map, we're also illuminating it - not just to gather more financial information, but to let adversaries commit fresh crimes we can prosecute without exposing sources and methods, and to nail any deep-cover spies we haven't already identified.
Remember also, there are elements which were supposed to be a counter for an emergent, de facto superintelligence:
AI itself, enhanced or augmented humans, runaway evolutionary algorithms, and so forth.
But so much of this was aimed at superhuman intelligence. Not at wildly blundering conspiracies - loosely tied together in their original coordination, but increasingly at odds as their worlds fall apart and their ice floe melts faster in the equatorial heat.
One reason I had so much evidence to drop on these operations starting in 2017 - from cryptocurrency maps to malware tracking to evolutionary algorithms in psychological warfare - is that I’ve been looking at rapidly emergent, runaway superintelligence.
In particular, flawed superintelligence, with the programming and processing power to be exceedingly dangerous, but without the perspective to realize it was on a suicidal path.
An idiot savant, if you will.
In many ways, this Sino-Russian alliance is an excellent example of such a broken intelligence.
A runaway assault involving thousands, even millions of minds and computers, ineptly coordinated, yet often operating with the precision of a supercomputer?
I expected better, but in many ways this monstrosity was exactly what I anticipated.
And we have a chance to face it by facing our fears, our flaws, and transcending them.