Diagram showing the two halves of the transformer model: on the left, the encoder used in Google's BERT; on the right, the decoder used in models such as ChatGPT.
Fig 3.1. The Transformer model  (Vaswani et al., 2017).

Generative AI learns and creates in a fundamentally different way from the systems we are used to. As mentioned previously, the LLM transformer architecture requires gigantic amounts of pre-processed text to be trained effectively (see Fig 3.1). Words are embedded as vectors in a high-dimensional space, and attention weights learn which words in a sequence are most relevant to one another. The model then predicts the next word from these learned correlations, scoring candidate words by probability. For example, ‘sail’ is more likely to appear in a marine context and ‘sale’ in a retail one, so the surrounding words steer the prediction.
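The final step of that prediction can be sketched with a toy softmax over candidate words. The context sentence and the scores below are invented for illustration; in a real transformer the scores (logits) come from attention-weighted computation over the whole context, not a hand-written table.

```python
import math

# Invented, hand-picked scores for illustration only: in a real transformer
# these logits are produced by attention over the full context.
context = ["the", "boat", "is", "ready", "to"]
candidate_scores = {"sail": 4.2, "sale": 1.1, "sing": 0.3}

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(candidate_scores)
prediction = max(probs, key=probs.get)
# The marine context gives 'sail' by far the highest probability.
```

Softmax itself is genuinely the last step in transformer next-word prediction; only the scores here are made up.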

Various types of learning are used in AI. Supervised learning trains on labelled datasets, learning from worked examples. Unsupervised learning works with unlabelled data, identifying patterns and structures on its own. Reinforcement learning uses trial and error, guided by feedback in the form of rewards and penalties. Each type plays a crucial role in AI development.
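A minimal sketch, using invented one-dimensional data, of the difference between the first two paradigms: supervised learning exploits the label attached to each example, while unsupervised learning must find structure in the raw values alone.

```python
# Invented toy data: each supervised example is a (measurement, label) pair.
labelled = [(1.0, "small"), (1.2, "small"), (8.9, "large"), (9.4, "large")]
unlabelled = [1.1, 1.3, 9.0, 9.2]

# Supervised: classify a new point by its nearest labelled example
# (a one-nearest-neighbour classifier).
def classify(x):
    return min(labelled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: no labels, so split the points into two groups around
# the overall mean (a crude one-step clustering).
mean = sum(unlabelled) / len(unlabelled)
clusters = {"low": [x for x in unlabelled if x < mean],
            "high": [x for x in unlabelled if x >= mean]}
```

Real systems use far richer models, but the distinction is the same: the classifier needs the labels, the clustering does not.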

Another common type of GenAI is the Generative Adversarial Network (GAN), which consists of two ANNs, a generator and a discriminator, that ‘compete’ to produce the most realistic output. The generator creates synthetic data and the discriminator evaluates it, and the two iterate until the output is judged realistic (Arora and Arora, 2022).
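The adversarial loop can be illustrated with a deliberately simplified sketch (no neural networks involved): a "generator" that draws numbers from a movable distribution, and a "discriminator" that scores how close each sample looks to the real data. All names and numbers here are invented for illustration.

```python
import random

random.seed(0)
REAL_MEAN = 5.0  # the 'real' data distribution the generator tries to imitate

def discriminator(sample):
    # Scores realism: samples near REAL_MEAN look more 'real' (score near 1).
    return 1.0 / (1.0 + abs(sample - REAL_MEAN))

gen_mean = 0.0  # the generator starts far from the real distribution
for _ in range(500):
    fake = random.gauss(gen_mean, 1.0)             # generator makes synthetic data
    score = discriminator(fake)                    # discriminator evaluates it
    gen_mean += 0.05 * (REAL_MEAN - fake) * score  # generator adjusts to score better

# After the loop, generated samples cluster near the real distribution.
```

In a true GAN both parts are neural networks trained by gradient descent, and the discriminator also learns; this sketch only conveys the feedback loop between the two.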

Both models are susceptible to bias and errors because the training corpus is curated by humans (Zhou et al., 2024). Without careful regulation, bias or disinformation could be introduced either accidentally or deliberately. Algorithmic errors can then propagate, leading to poor outputs, such as the recent AI-generated images that misapplied racial diversity rules and portrayed the founders of the United States inaccurately.




Abdullahi, A. (2024) Generative AI Models: A Complete Guide, eWEEK. Available at: https://www.eweek.com/artificial-intelligence/generative-ai-model/ (Accessed: 2 June 2024).

Arora, A. and Arora, A. (2022) ‘Generative adversarial networks and synthetic patient data: current challenges and future perspectives’, Future Healthcare Journal, 9(2), pp. 190–193. Available at: https://doi.org/10.7861/fhj.2022-0013.

Vaswani, A. et al. (2017) ‘Attention is all you need’, Advances in Neural Information Processing Systems 30. Available at: https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf (Accessed: 22 May 2024).

Zhou, M. et al. (2024) ‘Bias in generative AI’, arXiv. Available at: https://arxiv.org/abs/2403.02726 (Accessed: 1 June 2024).

Further Reading

Brownlee, J. (2019) A Gentle Introduction to Generative Adversarial Networks (GANs) – MachineLearningMastery.com, MachineLearningMastery.com. Available at: https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/ (Accessed: 2 June 2024).