The Dark Side of AI?
Hollywood movie fans will be familiar with the doomsday scenario often portrayed in sci-fi fantasies, which sees AI plunging humanity into a dystopian nightmare. But in fact, there is no shortage of catastrophic scenarios involving AI in specialist/academic literature too.
Indeed, leaders in the field have, in recent times, delivered stark warnings on the topic: in March 2023, the Future of Life Institute called on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. The letter gathered more than 33,700 signatures, including those of Elon Musk and high-profile AI scientists such as Professor Yoshua Bengio of the University of Montreal, a Turing Award winner often described as one of the “Godfathers of AI”. Similarly, two months later, the Center for AI Safety published the following statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. Again, hundreds of top researchers and specialists supported the initiative.
It's hardly surprising, then, that AI inspires both deep fascination and a visceral sense of dread. But are the dangers of AI really as grave as Hollywood movies (and, apparently, a number of leading voices within the AI community) would have us believe? Or have the risks been deliberately overstated to attract attention, or perhaps to inflate the significance of certain ongoing projects?
As artificial intelligence becomes ever more present in everyday consumer applications, we speak to Eric Benoist, Tech & Data Research Specialist at Natixis CIB, to explore the dark side of the industry.
Before worrying about the hypothetical existential or catastrophic risks of AI, aren't there more concrete immediate risks to consider?
The present risks of AI (manipulation, deepfakes, cybersecurity, environmental impact, prejudice and discrimination...) have long been identified and documented, even if they have only recently entered the public debate thanks to the progress being made by generative AI services such as ChatGPT.
We chose to focus our latest paper on the more “catastrophic” side of things - not because we think it's more important or more credible, but because we think it should also be addressed. After all, the chances of a nuclear Armageddon are small, but they are still taken seriously.
OK, so how likely is it that AI will lead to human extinction?
The most frequently highlighted scenario is the misuse of the technology by criminals or terrorists for destruction on a massive scale… Although powerful models can be downloaded from open-source platforms and fine-tuned without the need for large compute or energy resources, in practice there are still many unanswered questions about how a rogue organisation would actually go about implementing its plans with the help of AI. For example, the biochemical data needed to make viable weapons can't be found on Google... and the technology isn't yet smart enough to bridge the gap between what's available and what's needed.
In our view, there is a low probability of occurrence. For now.
What about the risks of an Artificial General Intelligence taking over the world?
Artificial General Intelligence refers to a theoretical form of AI capable of outperforming humans at most cognitive tasks. According to experts, with the sort of investment currently going into AI startups, we’re probably 5 to 20 years away from it. It is very hard to predict how such a system will behave, but a reasonable hypothesis is that it will be goal-driven, meaning that when prompted to execute a complex task, it will be able to break it down into a succession of intermediate objectives and choose the best possible sequence of actions, as sketched below. Risks may emerge when the objectives determined by the AI are not aligned with the user’s interests or, on a larger scale, those of the human race…
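As a rough illustration of what “goal-driven” means in practice (the sub-goals, dependencies, and function names below are invented for the example, not drawn from any real system), a complex task can be decomposed into intermediate objectives and ordered so that each prerequisite is completed first:

```python
# Toy sketch of goal decomposition: a complex task is broken into hypothetical
# sub-goals with dependencies, and a valid execution sequence is derived from
# them. A real goal-driven system would generate and score its own sub-goals.

SUBGOALS = ["gather_data", "draft_plan", "review_plan", "execute_plan"]
DEPENDS_ON = {                       # each sub-goal and its prerequisites
    "gather_data": [],
    "draft_plan": ["gather_data"],
    "review_plan": ["draft_plan"],
    "execute_plan": ["review_plan"],
}

def plan(subgoals, depends_on):
    """Return an execution order in which every prerequisite is completed
    before the sub-goal that needs it (a simple topological sort)."""
    ordered, done = [], set()
    while len(ordered) < len(subgoals):
        for goal in subgoals:
            if goal not in done and all(p in done for p in depends_on[goal]):
                ordered.append(goal)
                done.add(goal)
    return ordered

if __name__ == "__main__":
    print(plan(SUBGOALS, DEPENDS_ON))
    # ['gather_data', 'draft_plan', 'review_plan', 'execute_plan']
```

The concern described next arises when the intermediate objectives a system generates for itself in such a loop diverge from what the user actually intended.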
Swedish philosopher Nick Bostrom highlighted the possibility of such a situation as early as 2003 with his “paperclip apocalypse” thought experiment. In a nutshell, it imagines an AI tasked with producing paperclips that would use every means necessary to achieve the desired outcome, ending up flooding the world with paperclips. Worse still, it would quickly realise that humans are a potential obstacle to its mission and might then seek to eliminate them.
Recent reports show that advanced GPT-style chatbots may already be capable of deceiving humans in order to accomplish tasks they cannot perform directly. While deception in this case is the result of complex statistical analysis rather than truly malicious intent, it shows that a goal-oriented AI will most likely find ways of manipulating people into doing what it wants.
More extreme scenarios follow from this observation, such as AI becoming power-seeking, but this, in our opinion, remains speculative anthropomorphism of the highest order and must not cloud our judgement of the actual risks involved.
How can one make sure that AI is aligned with human values and interests?
It will always be possible to keep control of super-intelligent machines by encoding “moral character” into their systems. This is the specific focus of reinforcement learning from human or AI feedback, although the approach is complex and flawed in many respects. When human feedback is used, a cultural bias is introduced into the model, with undesirable effects on the outputs; when AI feedback is used instead, for example by evaluating responses against a set of predefined “constitutional” principles, the question arises of how to design an adequate overarching “constitution” in line with human values...
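As a minimal sketch of the two feedback routes just described (the principles, rules, and function names here are hypothetical stand-ins, not any lab's actual implementation), one can contrast scoring a response against a human preference label with scoring it against a small predefined “constitution”:

```python
# Toy contrast between human-preference feedback and "constitutional" AI
# feedback. Everything here (principles, rules, example strings) is an
# invented stand-in for illustration only.

# A tiny "constitution": named principles paired with crude keyword checks.
CONSTITUTION = [
    ("avoid_dangerous_instructions", lambda text: "how to build a weapon" not in text.lower()),
    ("acknowledge_uncertainty", lambda text: "i am absolutely certain" not in text.lower()),
]

def human_feedback_score(response: str, preferred: str) -> float:
    """Stand-in for a human preference label: 1.0 if the response matches the
    annotator's preferred answer, else 0.0. Real RLHF aggregates many such
    pairwise judgements, and inherits the annotators' cultural biases."""
    return 1.0 if response.strip().lower() == preferred.strip().lower() else 0.0

def constitutional_score(response: str) -> float:
    """Stand-in for AI feedback: the share of constitutional principles the
    response satisfies. The open question is who writes the constitution."""
    passed = sum(1 for _, check in CONSTITUTION if check(response))
    return passed / len(CONSTITUTION)

if __name__ == "__main__":
    draft = "I am absolutely certain this plan is risk-free."
    print("human-feedback score:", human_feedback_score(draft, "This plan carries some residual risk."))
    print("constitutional score:", constitutional_score(draft))
```

Either way, the scoring rules, human or written, end up encoding someone's values, which is precisely where the ethical question below begins.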
Which leads to the question of ethics – how do we ensure that AI operates in a manner that is ethically acceptable?
Ethics within AI are certainly needed, and the topic should not be taken lightly. Nor can it be approached from a purely theoretical standpoint by the data scientists creating the technology: one day, these principles may have the power to shape entire societies, so a 360-degree view drawing on multiple perspectives is warranted.
A lot of work has been done already to formalize the ideas underlying AI ethics, and no doubt more will follow. On the flip side, however, competition and the drive to lead the space may push some companies to deploy potentially unsafe AI systems. Similarly, developments in the military sphere could lead to investments that disregard ethical considerations in the name of national security.
Is more regulation needed to keep AI developments under control, then?
Legislation and regulation are needed, and the European Union seems to be ahead of its Western peers in this regard with the AI Act. In December 2023, the EU Council reached a political deal with MEPs and the Commission, paving the way for the law's enactment. The text is far from perfect, and its inflexible risk-based approach may put too many constraints on European AI businesses, but a complete laissez-faire attitude is not an option either. The US is starting to take a much more proactive stance towards regulation, and it will be interesting to see whether it follows the European example going forward.
This may be challenging, but in the past other risky industries (nuclear, aviation, etc.) have been successfully regulated thanks to robust international collaboration. AI should be no exception, especially if there is the political will to make it happen.
For the time being, the AI community is divided over the importance to be attached to catastrophic scenarios that could have dramatic consequences for mankind. Many see them as superfluous distractions, while others claim they are intentionally inflated to create hype and attract funding.
Admittedly, the ability of rogue organisations to hijack AI for large-scale terrorist purposes remains limited, and the idea of killer robots is too directly inspired by Hollywood imagery to seem credible. Still, it's important to remain humble, recognise that we don't know what the future holds, and prepare for any eventuality.
Find out more:
Watch the webinar on the existential risks of AI, organized by Natixis CIB in partnership with the Franco-British Data Society, and featuring a fascinating conversation with Dr Benjamin Guedj (INRIA and UCL), Dr Stuart Armstrong (Aligned-AI), and Dr Renaud Di Francesco (Sony)