
Navigating the Artificial Intelligence (AI) Minefield: Palaeolithic Emotions, Medieval Institutions, and God-like Technology  


Introduction

At the Centre for Humane Technology's 2023 conference, E. O. Wilson's observation that humanity has ‘Palaeolithic emotions, medieval institutions, and god-like technology’ was revived as an allegory for the risks and opportunities of AI. 

In 2022, a survey of over 700 AI experts found that roughly half believed there was at least a 10% chance of human extinction or severe disempowerment from future AI systems. The speakers posed a compelling question: if half of aerospace engineers warned of a 10% risk of a plane crash, would you board? 

In this article, we examine the recommendations from the Centre for Humane Technology. We argue that whilst this technology has great potential, business leaders should take care when implementing AI strategies, finding the middle ground between ungoverned use and extensive control over AI capabilities. AI regulation should be approached with the same due diligence applied in the Financial Services industry. 

‘First Contact’ & The 3 Rules for Technology

The Centre for Humane Technology says humanity’s “First Contact” with AI was social media – a contact, they claim, that humanity lost. They use this as motivation for “The 3 Rules of Humane Tech”: 

1. ‘When you invent a new technology, you uncover a new class of responsibilities’: Firms need to consider the ethics of AI and ensure responsible implementation to avoid unfair outcomes for customers due to biased and discriminatory AI models. 

2. ‘If the technology confers power, it starts a race’: Business leaders are keen to adopt AI, with 74% of UK CEOs believing they must act now to gain a strategic advantage, according to EY. Whilst the race has started, businesses should not charge into it blindly without considering the full picture. 

3. ‘If you do not coordinate, the race ends in tragedy’: The Centre points to the failure of legislators to effectively regulate social media as the result of our collective inability to coordinate a response to powerful innovative technologies. 

They go on to state that “No one company or actor can solve these systemic problems alone”, and industry players are already working together to find solutions. Altus Consultancy has contributed to an industry-wide collaboration defining an AI Voluntary Code of Conduct for insurance claims, alongside broader industry oversight from organisations such as The Institute for Ethics in AI.  

‘Second Contact’ 

In our ‘Second Contact’ with AI, there is an opportunity to avoid another disaster. The AlphaPersuade Large Language Model (LLM) crafts messages designed to influence human responses. Similarly, AlphaZero mastered chess within nine hours of self-play, reportedly surpassing human skill within two. Automated code exploitation allows users to prompt LLMs to detect vulnerabilities and generate malicious code. The consequences of ‘First Contact’ were manageable; the next phase demands vigilance.  

Before 2017, AI research operated in largely separate fields. The ‘Great Consolidation’ that followed birthed LLMs, enabling AI to model and generate content from diverse types of data – leading to unintended capabilities and exponential growth.

Exponential Growth & Safety Research 

Experts struggle to keep up with the pace of exponential growth. The conference illustrated this with an example: a group of professional forecasters, well acquainted with the intricacies of exponentials, was asked to predict AI’s trajectory. Their task was to anticipate when AI would achieve an 80% accuracy rate in solving competition-level mathematics problems. Last year, they estimated AI would reach 52% accuracy within four years; AI surpassed that mark in less than one, outpacing every prediction. With roughly 30 times fewer researchers working on AI safety than on capabilities, the full extent of AI risk remains unknown.  

AI capabilities consistently outpace predictions. Jack Clark, former policy director at OpenAI, emphasises that such advancements unlock new angles that rivals may exploit. 

Conclusion 

We find ourselves at a crossroads between continual catastrophes and the looming spectre of a forever dystopia. Picture the gutters of a bowling alley: one side symbolising unwavering trust in humanity (ungoverned, open AI), the other a highly regulated feudal dystopia in which the technology is limited to a few ‘Elites’. Navigating this terrain calls for balance – an approach that preserves democratic values while leveraging the capabilities of 21st-century AI.

Head of Data and AI, Sarah Bateman, states: “In Financial Services we are seeing the increased use of AI, with pockets of innovation transforming business operations and improved customer services. Despite its potential, as an industry we must recognise the need for all to navigate AI ethically and effectively; to ensure the emergence of AI as a benign technology.”

The question remains: How do we board this AI plane safely?  
