Artificial General Intelligence (And Superintelligence) And How To Survive It
Part 2 - Necessary Questions
Shaping this environment before threats can emerge is a question of taking our first-mover advantage and leveraging it – not only through regulation, but by actively engaging industry, academia and other stakeholders to encourage both their assistance and their creativity in attaining these goals.
To this end, while judicious regulation would be a welcome initiative, we can start by asking questions in many quarters.
A classic technique in community building is to go to someone with power or resources and present a problem along with a proposal to tackle it. If they say yes, your well-considered proposal moves forward. If they say no, you ask what else they might be willing to do instead. And you keep asking the question until you arrive at a compromise position which moves your larger goals forward, or they make it clear they refuse to cooperate in any way.
The twist is that when a government contemplating serious regulation, with broad public and stakeholder support, shows up asking for input, many of those put to the question will become that much more creative in their answers.
In this case, you’re not only looking for their constructive feedback, but for their best brainstorming on what could be productive in achieving your safety goals, and their independent initiative in taking positive action before regulation even comes into effect. Even if their intent is to dodge or delay more draconian oversight, you stimulate positive activity across a multitude of companies and organizations, before you even have to set rules or legislate new laws.
This does not eliminate the need for oversight, but preemptive self-regulation can save time and encourage positive evolution with minimal effort.
These inquiries should be aimed at AI developers and researchers, cloud providers, manufacturers of advanced semiconductor chips, private and public cybersecurity teams and key decision makers in defense, intelligence, law enforcement and the development of other critical and emerging technologies.
Exactly what we ask will vary by field.
For those involved with AI, cloud computing, advanced chips and cyber, let’s start with the basics.
What are you doing to ensure safety?
What are you doing to ensure transparency?
What are you doing to prevent abuse?
What are you doing to notify authorities of threats?
Yes, these are incredibly basic, but we want to hear the answers so we know they’re thinking about them, and because we want to be sure everyone is already considering the bare minimum levels of responsibility and security.
If the answers we get also provide best practices which can be shared throughout the industry, so much the better.
But to sum up, these simple questions of safety, transparency, prevention of abuse and notification of threats cover immense ground, which will vary based on what your company does.
Nvidia will be answering questions about the disposition of GPUs for AI research and development. Amazon may face inquiries about their cloud computing. OpenAI may be discussing what nefarious ends ChatGPT might be turned to, how they are preventing it from assisting, and what responsibility - legal and ethical - they have to inform the government of criminal plans in the making.
Hence even fundamental questions raise many issues. Does Amazon watch for a rogue AI, hackers or hostile governments purchasing or seizing processing time on their servers? What “know your customer” requirements do they have for vetting either a single purchaser or a multitude of cloud computing contracts coming in through sock-puppet accounts or hundreds or thousands of shell corporations? If it is possible to assemble enough compute to build an advanced LLM on their servers, or do other next-generation work, do they owe it to anyone to watch and report?
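To make that last question concrete, here is a minimal sketch of what such monitoring could look like, assuming the provider can link accounts to a common beneficial owner through its know-your-customer vetting (the hard part in practice). It simply aggregates GPU-hour requests per owner across any number of accounts and flags totals that cross a review threshold; every name and number here is hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

# Threshold above which a human review is triggered (illustrative only).
REVIEW_THRESHOLD_GPU_HOURS = 100_000


@dataclass
class ComputeRequest:
    account_id: str
    beneficial_owner: str  # assumed to come from know-your-customer vetting
    gpu_hours: int


def flag_aggregated_demand(requests: list[ComputeRequest]) -> list[str]:
    """Sum GPU-hour requests per beneficial owner, across however many
    shell accounts they control, and return the owners worth a closer look."""
    totals = defaultdict(int)
    for req in requests:
        totals[req.beneficial_owner] += req.gpu_hours
    return [owner for owner, hours in totals.items()
            if hours >= REVIEW_THRESHOLD_GPU_HOURS]


requests = [
    ComputeRequest("acct-001", "Shell Holdings LLC", 40_000),
    ComputeRequest("acct-002", "Shell Holdings LLC", 35_000),
    ComputeRequest("acct-003", "Shell Holdings LLC", 30_000),
    ComputeRequest("acct-004", "Ordinary Startup", 5_000),
]
print(flag_aggregated_demand(requests))  # ['Shell Holdings LLC']
```

The detection logic is trivial; the open question is whether a provider is obliged to build the ownership linkage that makes it possible, and who gets the report when something trips the threshold.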
Should Nvidia GPUs and other advanced chips be registered, their owners licensed, and the chips digitally tagged and required to “ping” their location on a regular basis? Bad actors could silence them, but it becomes harder to hide their disappearance from the locations where they are supposed to be, especially if you need their ostensible owners to collude with you by transmitting fake signals. If each ping is a rotating coded pattern, you would need to hack each chip and work out each upcoming signal in advance, which quickly becomes prohibitively difficult.
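For those wondering what a “rotating coded pattern” might look like, here is a minimal sketch modeled on the HMAC-based rolling codes used in one-time-password schemes. The chip ID, secret key and reporting interval are all assumptions for illustration; a real attestation scheme would keep the secret in tamper-resistant hardware rather than in a Python dictionary.

```python
import hashlib
import hmac
import time

# Hypothetical per-chip secret, provisioned at manufacture and also held
# by the registry that tracks where each chip is supposed to be.
CHIP_SECRETS = {"GPU-0001": b"example-secret-do-not-reuse"}

PING_INTERVAL = 3600  # seconds between expected pings (assumed value)


def current_code(chip_id: str, secret: bytes, now: float | None = None) -> str:
    """Rolling code for the current interval: HMAC(secret, chip_id || interval)."""
    interval = int((now if now is not None else time.time()) // PING_INTERVAL)
    message = f"{chip_id}:{interval}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()[:16]


def verify_ping(chip_id: str, reported_code: str, now: float | None = None) -> bool:
    """Registry-side check: does the reported code match what this chip should
    emit right now? Forging it requires the chip's secret."""
    secret = CHIP_SECRETS.get(chip_id)
    if secret is None:
        return False  # unregistered chip
    expected = current_code(chip_id, secret, now)
    return hmac.compare_digest(expected, reported_code)


# Chip side emits a ping; registry side verifies it.
ping = current_code("GPU-0001", CHIP_SECRETS["GPU-0001"])
print(verify_ping("GPU-0001", ping))      # True
print(verify_ping("GPU-0001", "0" * 16))  # False: a faked signal fails
```

The last line is the point of the rolling code: an invented or replayed signal fails verification, so keeping a missing chip “alive” on paper requires either the owner’s active collusion or compromise of each chip’s secret.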
What happens when someone asks ChatGPT or Bard to help them hack a computer or plant malware? To plan a crime? To commit a crime? To build a weapon of mass destruction, or a weapon of mass disruption? To build an AI of comparable capabilities - or just lesser, but still substantial ones - lacking whatever alignment, constraints or moral strictures they may possess now?
What can OpenAI and Google tell us about their work in AI alignment - getting artificial intelligence to embrace the core values we want it to serve - and what can they tell us about how well it is working? Is having the system work through its logic and share it via a “tree of thought”, with explanations for each step, sufficient? Or is that apparent understanding merely shallow, or even entirely illusory?
How can we tell, and how can we test what we know?
And how much can they share with the industry on this issue and other elements of AI safety?
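For readers who have not encountered the term, here is a minimal sketch of the “tree of thought” idea mentioned above: the system proposes several candidate next steps of reasoning, scores them, expands only the promising branches, and keeps every intermediate step available for inspection. The step generator and scorer below are stand-ins; in a real system both would be calls to the model itself.

```python
# Minimal tree-of-thought sketch: branch, score, expand, keep the best chain.

def propose_steps(chain: list[str]) -> list[str]:
    """Return a few candidate next reasoning steps for a partial chain (stub)."""
    depth = len(chain)
    return [f"step {depth + 1}, option {i}" for i in range(2)]


def score_chain(chain: list[str]) -> float:
    """Rate how promising a partial chain looks (stub heuristic)."""
    return float(sum(len(step) for step in chain))


def tree_of_thought(max_depth: int = 3, beam_width: int = 2) -> list[str]:
    """Expand the most promising chains breadth-first and return the best one,
    with every intermediate step preserved so a reviewer can inspect the logic."""
    frontier: list[list[str]] = [[]]
    for _ in range(max_depth):
        candidates = [chain + [step]
                      for chain in frontier
                      for step in propose_steps(chain)]
        candidates.sort(key=score_chain, reverse=True)
        frontier = candidates[:beam_width]
    return frontier[0]


for step in tree_of_thought():
    print(step)
```

Even with a real model behind it, the hard question above remains: the printed steps may be a faithful account of the system’s reasoning, or a plausible story generated after the fact, and telling the two apart is an open research problem.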
Should all AI instances be licensed and automatically, constantly and transparently monitored for dangerous or illegal behavior, in part by other AI systems? Multiple, independent neural networks trained to look for criminal, deviant or disquieting behavior in the actions and internal operations of systems could constantly assess other AIs in real time, generating internal alerts and reports while leaving certain interventions up to their human supervisors.
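A minimal sketch of that monitoring architecture follows, with the independent monitor networks stubbed out as simple keyword-based scorers: every monitor scores every logged action, any score over a threshold generates an alert, and nothing here intervenes on its own. The monitor names, threshold and log format are all assumptions for illustration.

```python
from typing import Callable

ALERT_THRESHOLD = 0.8  # illustrative risk score above which an alert fires


# Stand-ins for independently trained monitor networks; each maps a logged
# action to a risk score in [0, 1]. Real monitors would be separate models.
def cyber_misuse_monitor(action: str) -> float:
    return 0.9 if "exploit" in action.lower() else 0.1


def deception_monitor(action: str) -> float:
    return 0.9 if "conceal" in action.lower() else 0.1


MONITORS: dict[str, Callable[[str], float]] = {
    "cyber_misuse": cyber_misuse_monitor,
    "deception": deception_monitor,
}


def review_actions(system_id: str, actions: list[str]) -> list[dict]:
    """Run every monitor over every logged action and collect alerts for
    human supervisors; this code only reports, it never intervenes."""
    alerts = []
    for action in actions:
        for name, monitor in MONITORS.items():
            score = monitor(action)
            if score >= ALERT_THRESHOLD:
                alerts.append({"system": system_id, "monitor": name,
                               "action": action, "score": score})
    return alerts


log = ["summarize the quarterly report", "write an exploit for the unpatched server"]
for alert in review_actions("model-under-watch", log):
    print(alert)
```

The independence of the monitors matters as much as their number: if they share training data, blind spots or a single operator, they can all be fooled, or silenced, together.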
The NSA famously collects as much of the world’s accessible data as it can, cuts out the transmissions from the Five Eyes – the US, UK, Canada, Australia and New Zealand – and parses the rest for threats.
What happens when they assign one or more advanced AIs to sift not only for conventional dangers, but AI and AGI activity?
What happens when some power uses malware to take over computers not to act as a conventional botnet – and thus reveal its actions and its secrets – but simply to listen… say, for AI and AGI operations?
AI companies, key tech providers and national governments aren’t the only ones who need to be thinking.
What plans does your small business - or medium-sized or large business - have for the hyper-competitive environment created by AI?
What impact does it have on the work you do? Will it replace your staff, your market, your company?
If it does take away your existing job, customer base or business, how will you adapt in place, or move to another job, career or enterprise?
Can you continue your work with human-AI teaming? Even the relatively crude method of using AI to replace the work it’s most obviously suited for, and which it can do without drawbacks?
Are there more sophisticated strategies or tools which could help you stay afloat, or even surge ahead?
If not, do you need to be the ones who invent them?
Does AI-driven hacking already increase the level of risk for you? And if not now, then how soon?
What can you do, whether as a large government or multinational, or a very small business or individual, to protect yourself?
If you’re a cloud provider or otherwise provide critical tech services, to what degree is extraordinary security built into your work product?
How soon do you expect governments to require exceptional security as the bare minimum for being allowed to continue to do business?
As a government, how many cyber vulnerabilities will you allow to persist in your economy, your own systems, and your contractors?
Given allegations of questionable ties on the part of certain tech providers - such as Kaspersky’s alleged ties to Russia’s FSB intelligence service - how long will you let companies and organizations under your authority use those products or services?
Given allegations that hardware built by Supermicro has been used for espionage, should we be restricting computer-hardware purchases to trusted suppliers? And should we assess which components in that supply chain cannot be trusted, even if the final seller otherwise is?
The US government uses procurement to drive upgrades, simply by telling the market it won’t be buying anything which doesn’t meet its security standards. Not everyone has that weight, but it does illustrate how major purchasers can shape outcomes directly.
On the other hand, per Anne Neuberger, the Federal government requires companies which own critical infrastructure to inform it of cyber breaches, a direct regulatory tool.
As a government, how do you harden systems against enhanced cyberthreats in an AI era?
Obviously, plugging the most obvious loopholes would help, but that is only the first step on a very long journey, one to be completed very quickly.
And all of these questions and others lead to a final query facing anyone pushing at the boundaries of AI:
How do you reconcile the drive to remain competitive, or even to extend your lead, with the conflicting need for safety, especially in dangerous applications?
This isn't an exhaustive list, by any means.
If you have more and better questions, please start asking them now, of everyone who can answer.