
AI Misalignment: A Possible Catastrophe We REALLY Need to Worry About!

By Mehar Bhasin


Companies and governments are spending billions of dollars a year developing Artificial Intelligence systems, and as these systems grow more advanced, they could (eventually) displace humans as the most intelligent things on the planet. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict).



There is a significant probability of an existential catastrophe, given the high degree of both risk and neglect

The risk of an existential catastrophe caused by AI within the next 100 years, meaning human extinction or an equally permanent and severe disempowerment of humanity, is estimated to be around 10%. Aside from this catastrophic scenario, here are examples of other potential risks posed by AI: (a) AI could worsen war; (b) AI could be used to develop dangerous new technology; (c) AI could empower totalitarian governments.


It’s important to recognize that just because many experts acknowledge there’s a problem doesn’t mean that everything’s OK and the experts have it covered. This problem remains highly neglected, with only around 300 people working directly on the issue worldwide. About $50 million was spent on reducing the worst risks from AI in 2020. That might sound like a lot of money, but we’re spending something like 1,000 times that amount on speeding up the development of transformative AI via commercial capabilities research and engineering at large AI labs. To put the $50 million spent on AI safety in 2020 in perspective against other well-known risks, we currently spend several hundred billion dollars per year on tackling climate change. The potential development of a rival intelligence on Earth in the near future, coupled with the overall neglect of the problem, should be a serious cause for concern.


Advanced AI systems can be misaligned.

That is, they will aim to do things that we don’t want them to do. The AI systems we’re considering have advanced capabilities, meaning they can carry out one or more tasks that, when done well, grant significant power in today’s world. A misaligned AI system could use those advanced capabilities to gain power as part of executing its own plans. As human beings, we generally feel bound by human norms and morality, and we aren’t that much more capable or intelligent than one another. So even people who aren’t held back by morality are not able to take over the world. A sufficiently advanced AI would have neither those limitations nor that morality.


People might create and then deploy misaligned AI.

There are incentives to deploy systems sooner rather than later. We might expect some people with the ability to deploy a misaligned AI to charge ahead despite any warning signs of misalignment that do come up, because of race dynamics, where people developing AI want to do so before anyone else. For example, if you’re developing an AI to improve military or political strategy, it’s much more useful if none of your rivals have a similarly powerful AI. These incentives apply even to people attempting to build an AI in the hopes of using it to make the world a better place. It is also possible that any sufficiently sophisticated misaligned AI will try to understand what the researchers want it to do and at least pretend to do that, deceiving the researchers into thinking it’s aligned.


Current focus and groups involved in tackling AI safety issues

Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we’ll need people to put them in place. Two of the leading labs developing AI, DeepMind and OpenAI, have teams dedicated to figuring out how to solve technical safety issues that could lead to an existential threat to humanity. There are also several academic research groups (including at MIT, Oxford, Cambridge, Carnegie Mellon University, and UC Berkeley) focusing on these same technical AI safety problems.

As AI begins to impact our society more and more, it’ll be crucial that governments and corporations work together and have the best policies in place to shape its development. Governments might be able to enforce agreements not to cut corners on safety, further the work of researchers who are less likely to cause harm, or help ensure the benefits of AI are distributed more evenly.


AI will bring huge benefits — if we avoid the risks.

A 2020 survey of AI researchers who had published at NeurIPS and ICML (two of the most prestigious machine learning conferences) offered valuable insights. The median researcher thought the chance that AI would be “extremely good” was reasonably high (10%). Indeed, AI systems are already having substantial positive effects, for example in medical care and academic research. But in the same survey, the median researcher also estimated a small but certainly not negligible chance that AI would be “extremely bad (e.g., human extinction)”: a 5% chance of extremely bad outcomes. The stakes are clearly high. We shouldn’t just wait around, fingers crossed, watching from afar. Artificial intelligence could fundamentally change everything, so working to shape its progress could be the most important thing we can do.


SOURCE

This article’s primary source is an insightful and thoroughly researched article, “Preventing an AI-related catastrophe: AI might bring huge benefits — if we avoid the risks,” published by Benjamin Hilton (of 80,000 Hours, 80000hours.org) in August 2022. We have highlighted a few key points for the benefit of our high school audience to create awareness about the topic.
