“Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”
Job Automation
Advanced AI models are expensive to train but cheap to run — often far cheaper than employing humans.
For a long time, experts believed that some jobs (such as driving trucks or taxis) would clearly be automated, while other jobs (such as making art or writing code) could only ever be done by humans.
Today, it’s clear that almost all jobs are under threat of automation. OpenAI has been explicit about this; just read their website. They expect to build AGI, which they define as “highly autonomous systems that outperform humans at most economically valuable work.”
Data Breaches
Data is at the heart of every frontier AI model today — and lots of it.
To train an advanced AI system, you need to feed it massive amounts of data: practically the entire internet.
For these systems to be maximally useful to individuals, companies, and governments, they will often be granted access to even more data, including personal information, communications records, and proprietary knowledge.
In fact, in a certain sense, the AI model itself is just data: a collection of information that can be stolen.
Data breaches are a common occurrence on the internet. Thousands of events involving compromised data take place every year. The development and proliferation of frontier AI models may well worsen this problem, leaking confidential information or the personal data of millions of people. Even the models themselves could end up in the hands of malicious actors if we aren’t extremely careful.
Algorithmic Bias
Many AI models are fundamentally the product of human thought — human engineers, human data, and human use cases. So, it’s not all that shocking that these systems have absorbed our biases.
We are already seeing machine learning algorithms driving criminal sentencing, evaluating loans, assisting law enforcement, and more. We are digging ourselves deeper and deeper into a hole, reinforcing and spreading stereotypes and biases.
As AI models continue to grow, we need to find new ways to ensure that they don’t perpetuate harmful stereotypes and worsen systemic inequality. So far, the major AI labs are failing at this. While their models try to avoid biased and toxic outputs, they are easily “jailbroken”: with little effort, they can be cajoled into producing exactly the content they were trained to refuse.
AI has no inherent prejudice. But unless we are extremely careful, it inherits and reflects our own. If AI labs keep building new systems at a breakneck pace with little regulation, how can we expect to prevent this?
Biological Weapons
In November 2020, one of the biggest problems in biology was finally solved after fifty years. The so-called “protein-folding problem” wasn’t solved by a team of human biologists, however. It was solved by a machine: DeepMind’s AlphaFold.
Now that AI has demonstrated superhuman performance in the field of biology, some experts warn that the technology could soon churn out novel pathogens. Advanced AI may dramatically accelerate the design and engineering of bioweapons.
Advanced AI is a dual-use technology. As advanced models become cheaper and more accessible, sovereign states or extremist groups could begin to use them to develop biological weapons that threaten the future of our species.
Misinformation
Generative AI makes the creation of new content both extremely easy and extremely quick. A large language model can produce thousands of articles in the time it takes for a human to write a single one. However, large language models are also notorious for producing text that is better at “sounding truthy” than being true.
But the risks go beyond just written content. Generative AI can also synthesize photorealistic media, enabling the creation of deepfake photos and videos. Even voices are now being deepfaked with astonishing accuracy, leading to the proliferation of scams and internet fraud.
Philosopher Daniel Dennett has argued that one of the most significant threats we face is the total breakdown of social trust that could result from a world in which any content, from the written word to photos, videos, and live conversations, could be AI-generated.
Concentration of Power
We already live in a highly unequal society — one in which certain governments, companies, and individuals wield vastly more control than their peers. As AI proliferates, some worry that its benefits might be captured asymmetrically by the wealthy and powerful — worsening this concentration of power.
Just imagine: tech companies amassing and leveraging huge amounts of personal data to build models specialized to interact with, and persuade, individuals; governments adopting AI for surveillance and social control, and using their asymmetry of power to make demands of other countries; workers losing their livelihoods to automation while the profits flow upward instead of being distributed across the population.
These are all possibilities that will need to be addressed in order to ensure that AI empowers everybody, rather than worsening the disempowerment already spread throughout so much of the modern world.
Superintelligence
Many of the risks on this page are either already present or soon will be, as we inch closer to the first human-level artificial general intelligence.
But there’s no reason to expect that AI labs will stop developing their models once we merely reach human-level intelligence. Instead, many forecasters predict that in the years or decades following the first human-level AI, we will see the emergence of superintelligence: a system that is smarter and more capable than humans across virtually all relevant domains.
This might sound like science fiction, but there are no obvious signs that it couldn’t happen in our lifetime — and lots of reasons to suspect that we are well on our way to superintelligence.
It will be hard to predict the risks we’ll face when we can no longer keep up with, or even comprehend, the actions of the AI systems we’ve built. Until we know that we can avoid this future, or at least the dangers that it could bring, we should proceed with caution.