Superintelligence - The Posthuman Power Under the Bed and Inside Your Computer
Part 8 of Artificial General Intelligence (And Superintelligence) And How To Survive It
We’re preparing for superintelligence to arrive in our world, but it’s already here.
It’s only a question of how powerful that superintelligence is.
Let me explain. Human beings, without exceptional technology or enhancements, by definition have human intelligence.
We live in a sea of that intelligence. We have over eight billion human beings, tied together by lines of communication and most of us by the Internet, and surrounded by the artifacts of our minds - machines, tools, buildings… and the vast archives of our knowledge, both physical and digital.
Even a lone human being can tap into those resources. Libraries, online sources, software and other technology all leap to mind.
But we can also work together. Just a handful of dedicated enthusiasts can coordinate their actions and pool their skills to accomplish feats beyond any of them working alone. Sometimes these informal groupings can expand into something larger, like the open-source movement.
More frequently, the superintelligences we interact with or become a part of are organizations - companies, governments or NGOs.
They suffer from flaws, to be sure. Bureaucratic infighting, lack of vision and all the inherent limitations in a system which arises from flawed human beings working together. But they exist, and can do things far surpassing any of us alone.
All of this ignores genius at such a level it seems to push at the very limits of what we consider human - Shakespeare, Tesla, da Vinci, Ramanujan. Da Vinci is the classic Renaissance Man, making breakthroughs in multiple fields - science, engineering, anatomy and invention - while being superlatively skilled in the arts. Ramanujan, working from a book of mathematics almost a century out of date, independently re-derived a century's worth of mathematics while creating many further theorems unimaginable to anyone else… in two years. In a sense, they redefine what it is to be human.
And what it can mean to be a genius.
We know genius is possible because we have already seen it in action.
Superintelligence abounds. It’s merely a matter of degree.
All of this matters because, when debating artificial general intelligence (AGI) and artificial superintelligence (ASI), we need to realize we’ve been here before… yet not.
AGI is the promise of being able to copy into existence an effectively unlimited number of workers, so long as we have the processing power to run them and, if we want them to act in the physical world, the robotics to embody them.
ASI is the promise of intelligence far exceeding that of the ordinary or even extraordinary human.
But vast corporations and governments give us an insight into this future, or at least its leading edge.
A country or city-state of less than a million people - or worse, an individual - contending with a nation of hundreds of millions or over a billion people could feel like they are facing an implacable force.
But it’s still comprehensible.
Here’s where superintelligence gets tricky.
There’s a temptation to think about intellects thousands or even trillions of times smarter than we are.
But we’ll hit the tipping point far sooner. And it’s the tipping point - the moment when one intelligence, one power, can become far more powerful than any other - that should concern us.
Value lock-in is what happens when an individual or group gains so much power no one can even attempt to contend with them. And from then on, no one ever will.
It’s hard to say exactly when that moment will arrive.
But while figures such as da Vinci and Ramanujan evidenced incredible abilities, how much smarter were they than the next-most-intelligent person alive at the same time?
Or just an “ordinary genius?” Or a “normal” brilliant person?
Twice as smart? Three times? Or not even that?
While there are flaws with IQ as a true measure of intelligence, it is normed so that the average score is 100 with a standard deviation of 15, which puts nearly all of us between roughly 55 and 145. Your average doctor scores perhaps 20 or 30 points above that average.
The gap between the average doctor and the average mind is considerable - in training, in education, in experience.
But they’re almost certainly not twice as smart by the standards we’re using - far more knowledgeable in medicine and its attendant skills, to be sure, but not twice as smart.
But a modest difference in average intelligence leads to a dramatic difference in normal outcomes.
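To make that concrete, here is a minimal sketch of the arithmetic, assuming the standard normal model of IQ scores (mean 100, standard deviation 15). The specific scores used are illustrative, not measurements:

```python
from statistics import NormalDist

# Assumed model: IQ normed to a normal distribution with
# mean 100 and standard deviation 15 (the common Wechsler scaling).
iq = NormalDist(mu=100, sigma=15)

def share_below(score: float) -> float:
    """Fraction of the population scoring at or below `score`."""
    return iq.cdf(score)

print(round(share_below(100), 3))  # the average person: 0.5
print(round(share_below(125), 3))  # 25 points up: ~0.952
print(round(share_below(150), 4))  # 50 points up: ~0.9996
```

A gap of only 25 points moves someone from the middle of the pack to roughly the 95th percentile, and 50 points puts them past 99.9% of everyone else - a modest difference in score, a dramatic difference in relative standing.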
What does it mean if intelligences - or rather superintelligences - show up which are, in themselves, 2 or 3 times as smart as a da Vinci or Ramanujan?
Or 5 or 10?
Yes, there’s a vast intellectual space beyond this - an effectively infinite one - but for our purposes, it doesn’t matter.
If we can’t contend with or work with someone - or something - 2 or 3 or 5 or 10 times as smart as the smartest human alive, it really won’t matter if they become a hundred or a trillion times smarter than that.
They’ll have already won.
Which is important for our purposes, because making these short hops is probably far easier, and will happen far sooner - at least subjectively - than wherever a superintelligent supercomputer, endlessly self-improving at superhuman speeds, will end up.
Or to put it another way…
One country or company may well be able to make that initial leap before anyone else, lock in their gains, and get so far out ahead of the rest of us we’ll never catch up.
Which is concerning, if they turn out to be hostile. Or well meaning, but in some way fanatical, ideologically driven or simply oblivious.
Someone with ultimate power and no constraints could be utterly deadly, even if far from omnipotent.
Why, again, does this matter?
Because while it can be very challenging to match such a force at these lesser scales, it is not impossible.
Particularly if we make certain that, in practical terms, the gap never actually reaches 2 or 3 or 5 or 10 times what we can muster.
Humans can already form superintelligences, as we’ve seen.
So we need to muster the AIs, AGIs and ASIs at our command, and each other, and our other resources, and augment ourselves via human enhancement as necessary, to ensure no rogue digital intelligence - on its own - can easily surpass us.
We also need to create a world of deadly uncertainty for any emergent mind which turns hostile or merely deeply amoral.
There are rules and there are boundaries. Any such machine must realize there are consequences - but because they won’t know the full capabilities or even the identities of other AIs out there, they won’t know what those consequences are.
And with other ways of augmenting both human and conventional artificial intelligence, they won’t know what the humans around them are capable of, either.
Yes, the US, UK and NATO are automating cyber operations using AI. But Microsoft apparently also has an AI copilot for cybersecurity for its business partners, and we can expect other tech companies to go in this direction as well.
Further, any advanced AI deciding to hold its fire and bide its time will realize it might not be the only AGI or ASI below the surface - and any of the others could have goals diametrically opposed to its own, whether or not they remain loyal to their creators.
We want a warm, supportive civilization working, at a minimum, in a broad coalition together.
For rogue minds on the fringes, however, we’ll settle for a cold peace.
And we’ll see if we can bring them in from the cold, or move so far past them they no longer matter.