“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

signed by Geoffrey Hinton, Emeritus Professor of Computer Science, University of Toronto.

Experts like Hinton are deeply concerned about where AI could veer without expert controls and accountability policies firmly in place. Two open letters have been published online. The first is the quote above, a succinct statement issued by the Centre for AI Safety (2023) and signed, at the time of writing, by 452 scientists and notable figures working in AI and computer science. The second, from the Future of Life Institute (2023), calls for a pause on all large-scale AI development for at least six months; around 33,700 people worldwide have so far signed in agreement.

Voicing fears alongside Hinton is Eliezer Yudkowsky, a founder of the field of AI alignment, who refused to sign the pause letter, feeling it trivialises the issue. In his TIME article (Yudkowsky, 2023), he argues that AI development should be shut down altogether, immediately.

The EU Artificial Intelligence Act (2024) builds AI safety directly into European Union law, with the stated aim of ensuring that AI remains human-centric and serves society and its wellbeing.

Fig 5.1, titled "Recent AI model training runs have required orders of magnitude more compute", charts the total compute used to train notable AI models from 1945 to 2025, measured in petaFLOP (one petaFLOP is 10^15 floating-point operations) on a logarithmic axis spanning 10^-14 to 10^11. Early systems such as Theseus (1950s), ELIZA (1960s), and Neocognitron and NetTalk (1980s) required minimal compute; from the 2010s onward, models such as AlexNet, AlphaGo Zero, GPT-3 (175B parameters), Stable Diffusion, PaLM, Minerva and GPT-4 demand vastly more, and a dotted trend line highlights the exponential growth in training compute over the decades.
Fig 5.1 Scaling chart, 1945-2025 (Future of Life Institute, 2023).
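
To make the scale concrete, the short Python/matplotlib sketch below reproduces the shape of Fig 5.1. It is a minimal illustration only: the compute figures are rough order-of-magnitude placeholders, not the chart's actual dataset. On a logarithmic axis, exponential growth in training compute appears as a straight upward trend.

import matplotlib.pyplot as plt

# Rough, illustrative training-compute estimates in petaFLOP
# (1 petaFLOP = 1e15 floating-point operations). These values are
# order-of-magnitude placeholders, not the figure's source data.
models = {
    "Theseus":      (1952, 1e-11),
    "ELIZA":        (1966, 1e-8),
    "NetTalk":      (1987, 1e-2),
    "AlexNet":      (2012, 5e2),
    "AlphaGo Zero": (2017, 3e8),
    "GPT-3":        (2020, 3e8),
    "PaLM":         (2022, 3e9),
}

years = [year for year, _ in models.values()]
compute = [c for _, c in models.values()]

plt.scatter(years, compute)
for name, (year, c) in models.items():
    plt.annotate(name, (year, c))
plt.yscale("log")  # exponential growth shows as a straight-line trend
plt.xlabel("Year")
plt.ylabel("Training compute (petaFLOP)")
plt.title("AI training compute, 1950s-2020s (illustrative)")
plt.show()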

The UN, UNESCO, the G7, the UK's pro-innovation approach to AI regulation (Digitalregulation.org, 2024) and the USA's Blueprint for an AI Bill of Rights are all taking steps to control AI's misuse, but is it too late (Fig 5.1)?

Humanity must work together to form stringent regulations focusing on ethical standards, human welfare, safety and data protection.



[250 words]

References

Centre for AI Safety (2023) Statement on AI Risk. Available at: https://www.safe.ai/work/statement-on-ai-risk (Accessed: 2 June 2024).

Digitalregulation.org (2024) Digital Regulation Platform. Available at: https://digitalregulation.org/3004297-2/ (Accessed: 2 June 2024).

Yudkowsky, E. (2023) Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, TIME. Available at: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ (Accessed: 2 June 2024).

EU Artificial Intelligence Act (2024) The Act Texts. Available at: https://artificialintelligenceact.eu/the-act/ (Accessed: 2 June 2024).

Future of Life Institute (2023) Policymaking in the Pause. Available at: https://futureoflife.org/document/policymaking-in-the-pause/ (Accessed: 2 June 2024).

Further Research

Miles, R. (2024) ‘AI Ruined My Year’, YouTube. Available at: https://www.youtube.com/watch?v=2ziuPUeewK0 (Accessed: 2 June 2024).