The Quickening Pace, or… A Question of Speed
Part 3 of Artificial General Intelligence (And Superintelligence) And How To Survive It
Regulating weaponized or dangerously out-of-control AI is critical. It is hard to exaggerate the downsides if AGI and ASI are done badly.
But where you see profound benefits and no practical downside to artificial intelligence, lean in. There are many clearly positive uses for AI, and many of those will increase our ability to oversee AGI and to protect ourselves from reckless or weaponized applications. You can still scrutinize them for flaws and unexpected dangers, but the more-benign uses are the main openings where you should rush forward.
Cybersecurity, including detecting, stopping, analyzing and tracking the sources of intrusions and malware, is one area that is beneficial on its own, while also offering greater, automated capacity to perceive and thwart hostile or disruptive AI activities.
Biosecurity, and the larger field of biotech, is another. We are all keenly aware of the threat posed by pandemics, and of how critical it can be to trace, contain, treat and vaccinate against lethal viruses, and to address other dangerous pathogens, whether natural or artificial. Multi-layered biosecurity using a wide variety of sensors and software will be much harder to evade if someone or something releases a contagion, whether knowingly or unknowingly.
With the refinement of methods, instruments and neural networks, these resources could be applied to more exotic threats, from nanotech surveillance and weapons to nanoscale pollutants or unforeseen, dangerous interactions between various chemicals and nanoscale structures near cities or the larger environment. Likewise, environmental monitoring can expand to detect chemical and nuclear weapons of mass destruction, as well as overlapping with the detection of bioterrorism and nanotech.
Again, you don’t have to start with some vast, multi-trillion-dollar project to sense and destroy all possible threats. Beginning with key areas that have both immediate benefits and follow-on opportunities means you can start with far less funding and even incorporate existing tools and technologies into relatively new research. That way, as your work expands, it becomes self-funding.
The seeds we plant will flower. The question is what fruit they will bear and what thorns, envenomed or otherwise, they may grow.
But the choices we make now, in so many ways, will have profound effects upon the future.
This leads to a larger question: “If you can invent anything, what do you invent?”
There are many possible answers, depending on your goals and needs. But in general, strategically critical and emerging technologies will win out over other innovations, and inventions which amplify your ability to develop radically more advanced science and technology will win out over other strategic innovations.
This cost/benefit consideration does not preclude work in clearly promising sectors, such as medicine, renewables, and fusion, or even minor or seemingly trivial concerns as simple as better recipes or more ergonomic chairs.
But in terms of prioritization, seed money and ultimately, massive budgets of money, “manpower” and machines, weigh what is most critical.
Then consider what will amplify your scientific and technological prowess.
Finally, in technologies which enhance sci/tech capacity, emphasize the “reinvestment of methods.”
Reinvestment of methods asks a simple question:
If you have a powerful method for finding better answers, what is the best question to use it on?
The answer? Finding even more powerful methods for finding better answers.
Then take those new methods, and find better methods still, in an endless, virtuous cycle.
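The compounding at the heart of this cycle can be made concrete with a toy model. The sketch below is purely illustrative: the function name, the per-cycle gain, and the units of "power" are all hypothetical, chosen only to show how modest improvements snowball when each method's best output is a better method.

```python
# Toy model of the "reinvestment of methods" cycle: each generation of a
# research method is applied, first and foremost, to building a stronger
# successor method. All quantities here are illustrative, not empirical.

def reinvest(initial_power: float, gain_per_cycle: float, cycles: int) -> list[float]:
    """Trace the capability of each successive method.

    initial_power: capability of the starting method (arbitrary units).
    gain_per_cycle: fraction by which a method improves its successor.
    cycles: how many times the cycle is iterated.
    """
    powers = [initial_power]
    for _ in range(cycles):
        # The "best question" for the current method: producing a better one.
        powers.append(powers[-1] * (1 + gain_per_cycle))
    return powers

trajectory = reinvest(initial_power=1.0, gain_per_cycle=0.5, cycles=10)
print(f"Start: {trajectory[0]:.1f}, after 10 cycles: {trajectory[-1]:.1f}")
# A 50% gain per cycle compounds to roughly 58x after ten cycles --
# and the cycle length, not the per-step gain, is what runaway scenarios change.
```

The worry about runaway superintelligence, discussed next, is less about the per-cycle gain than about how short each cycle becomes.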
To be clear, the possibility of AGI or ASI finding a virtuous cycle and iterating it endlessly at superhuman, ever-accelerating speeds, is one of the key concerns regarding “runaway superintelligence.” A runaway emergence of superintelligence means an ASI not only smarter than we are and beyond our control, but growing so rapidly its technology moves beyond our comprehension.
Indeed, it’s hypothetically a concern when any formidable intelligence is using the technique effectively, from a lone genius to a lab to a university to a tech company to an intelligence agency to an actual enhanced human to human-AI teaming or any combination thereof. But the opportunity to invent, test, apply breakthroughs, and move onto the next radical-problem-solving tool in a continuous cycle, while having the multitasking capacity to apply other spinoff technological insights simultaneously, is particularly inherent to an artificial superintelligence (ASI).
The idea of applying the insights of computers to research, then plowing the benefits back into the next iteration of research, is nothing new; it’s only a question of speed.
Semiconductors always had this. Moore’s Law, the steady doubling of computer chip performance every 18 months, with prices halving in the same time frame, was fed by continuous investment of money, technology and labor into these critical products, and the computers assisting us were perpetually among the first beneficiaries of each iterative improvement.
But again, the time frame meant we were doubling every 18 months (or 12, or 24, depending on when prognosticators offered their predictions), not in a matter of months, weeks, or even overnight.
Much less minutes or seconds.
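The difference a shorter doubling period makes can be seen with simple compound-growth arithmetic. This is a sketch; the function names and the one-month scenario are illustrative assumptions, not a forecast.

```python
# Doubling-period arithmetic for Moore's-Law-style growth (illustrative).

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """Total performance multiplier after months_elapsed, given a fixed
    doubling period: 2 raised to the number of doublings that fit."""
    return 2 ** (months_elapsed / doubling_period_months)

# A decade at the classic 18-month doubling period:
decade_18 = growth_factor(120, 18)   # about 100x performance
# The same decade if the cycle shrank to one month (hypothetical):
decade_1 = growth_factor(120, 1)     # 2^120 -- astronomically larger

print(f"18-month cycle over 10 years: {decade_18:.0f}x")
print(f"1-month cycle over 10 years: {decade_1:.1e}x")
```

Shrinking the doubling period does not just speed things up linearly; it moves the exponent, which is why "months, weeks, or overnight" describes a qualitatively different regime.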
But acceleration in research and development, while profound, could blind us to the larger changes taking place which feed directly back into this cycle. All the funds and labor time saved, and all the benefits reaped, from automating laborious tasks into something accomplished in a fraction of the time, or even in an instant…
Can be refocused on your priorities, including further accelerating your technological progress.
Are you having your AI code new systems? Prototyping and manufacturing at new speeds via automation, CNCs or 3D printing? Is an AI running your social-media strategy, generating ads or handling much of your email correspondence?
The money and personnel required to maintain these activities using only humans can now be saved for more-critical priorities.
Such as improving your fundamental technological advantages themselves. Keep the cycle going, and these efficiencies can keep feeding back into your primary goals, turning waste into reinvested resources, even as the sci/tech edges enabling you grow sharper all the time.
So speed is a primary threat, along with processing power. But it is also an opportunity.
Embrace it, and direct it well, and you improve your chances of outpacing any threat.
Do so collectively, with trusted partners, and you increase the odds that someone friendly will be beyond the cutting edge in a critical technology when it goes from being critical to existential.
If you’re alone, there is no one to watch your back.
So go swiftly, but go together.