Artificial Intelligence (AI) is a field of computer science focused on developing intelligent computer systems and machines capable of performing tasks that require reasoning, learning, problem-solving, and natural language processing (NLP). John McCarthy coined the term at the Dartmouth conference in 1956, and since then AI has evolved from early expert-fed models, primarily used for applications such as fraud detection, aviation systems, and medical diagnosis, into the data-driven systems of today.

AI advanced in 1958 with Frank Rosenblatt’s Perceptron, an early type of Artificial Neural Network (ANN). Inspired by the biology of the brain, ANNs were notably conceptualised by Warren McCulloch and Walter Pitts in the 1940s (McCulloch and Pitts, 1943). The analogy to biological systems explains the complex system of interconnected nodes characterising ANNs. The connections between these nodes, or artificial neurons, carry weights that are adjusted during training, allowing the network to model non-linear relationships and spot patterns in large datasets. In 1982, John Hopfield introduced a recurrent network acting as a content-addressable memory, which led to further specialist innovations (Hopfield, 1982).
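The weight-adjustment idea described above can be sketched with a minimal Perceptron in plain Python. This is an illustrative reconstruction of Rosenblatt's learning rule, not his historical implementation; the function names and the AND-gate example are assumptions chosen for clarity.

```python
def predict(weights, bias, x):
    """Step activation: fire (1) if the weighted sum clears the threshold."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, epochs=10, lr=0.1):
    """Nudge each weight whenever the prediction disagrees with the label."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)  # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the linearly separable AND function.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # → [0, 0, 0, 1]
```

Because a single Perceptron draws one linear boundary, it learns AND but famously cannot learn XOR, which is what later motivated multi-layer networks.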

In 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams popularised the Backpropagation algorithm, laying the groundwork for Deep Learning (DL). The algorithm lets a network propagate its prediction errors backwards through the layers and adjust its weights accordingly, but it would take another twenty years for DL to fully emerge. In 2001, Google’s search engine began utilising machine learning models to handle spelling errors in user queries (Hespell, 2023), and in 2006 the company released ‘Translate’. DL thrived on big data, with deeper stacks of ANN layers running faster than ever before.
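The "propagate errors backwards" idea can be shown on the smallest possible deep network: one hidden neuron feeding one output neuron. This is a toy sketch assuming sigmoid activations, a squared-error loss, and made-up starting weights, not the 1986 authors' code; the chain rule carries the output error back to the hidden layer's weight.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, w2, x):
    h = sigmoid(w1 * x)   # hidden activation
    y = sigmoid(w2 * h)   # output activation
    return h, y

def backprop_step(w1, w2, x, target, lr=0.5):
    """One gradient step on loss = (y - target)^2 / 2 via the chain rule."""
    h, y = forward(w1, w2, x)
    delta_out = (y - target) * y * (1 - y)    # error signal at the output
    delta_hid = delta_out * w2 * h * (1 - h)  # error propagated backwards
    return w1 - lr * delta_hid * x, w2 - lr * delta_out * h

w1, w2 = 0.5, -0.5        # arbitrary illustrative starting weights
x, target = 1.0, 1.0
_, y_before = forward(w1, w2, x)
for _ in range(200):
    w1, w2 = backprop_step(w1, w2, x, target)
_, y_after = forward(w1, w2, x)
print(abs(target - y_before), "->", abs(target - y_after))  # error shrinks
```

Each step is cheap, but repeating it across millions of weights and examples is what demanded the big data and hardware that arrived decades later.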

This journey has led us to the creation of transformers and the current model of Generative AI.




Hespell, R. (2023) ‘Our 10 biggest AI moments so far’, Google. Available at: (Accessed: 29 May 2024).

Hopfield, J.J. (1982) ‘Neural networks and physical systems with emergent collective computational abilities’, Proceedings of the National Academy of Sciences, 79(8), pp. 2554–2558. Available at:

McCulloch, W. and Pitts, W. (1943) ‘A logical calculus of the ideas immanent in nervous activity’, Bulletin of Mathematical Biophysics, 5(4), pp. 115–133. Available at:

Further Reading

Macukow, B. (2016) ‘Neural Networks – State of Art, Brief History, Basic Models and Architecture’, Lecture Notes in Computer Science, 9842, pp. 3–14. Available at:

Aurora University (2014) Library Guides: ChatGPT, AI, and Implications for Higher Education: History of AI and Neural Networks. Available at: (Accessed: 29 May 2024).