
Lost in a jungle of jargon? This glossary offers clear, concise definitions for the core concepts, ideas, and methods shaping the field of generative AI. Designed to support both newcomers and experienced practitioners, it demystifies technical language and makes this area of exploration feel less intimidating without oversimplifying it.
For those looking to explore further, we’ve also included links to comprehensive A–Z glossaries from established institutions like the New York Times, MIT Sloan, and others.
Hallucination
This refers to the phenomenon where a model produces content that is factually incorrect, misleading, or entirely fabricated, despite being fluent and plausible-sounding. These errors are not due to random noise but arise from the model’s statistical training: it generates responses based on patterns in the data it has seen, without true understanding or access to real-time facts. As a result, hallucinations can include made-up citations, nonexistent events, inaccurate data, or statements presented with unwarranted confidence.
Machine Learning (ML)
A subset of artificial intelligence (AI) that enables computers to learn patterns from data and make predictions or decisions without being explicitly programmed.
Deep Learning
A branch of machine learning that uses artificial neural networks with multiple layers to analyze complex patterns in data, often used in image recognition, speech processing, and natural language understanding.
Neural Network
A computational model inspired by the structure of the human brain, consisting of layers of interconnected nodes (neurons) that process information and learn patterns from data.
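A single neuron can be sketched in a few lines of plain Python: it computes a weighted sum of its inputs, adds a bias, and passes the result through an activation function. The weights below are arbitrary illustrative values, not a trained model.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    squashed into (0, 1) by a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def tiny_network(inputs):
    """A minimal two-layer network: two hidden neurons feed one output neuron.
    All weights here are made up for illustration; real networks learn them."""
    h1 = neuron(inputs, weights=[0.5, -0.6], bias=0.1)
    h2 = neuron(inputs, weights=[-0.3, 0.8], bias=0.0)
    return neuron([h1, h2], weights=[1.2, -0.7], bias=0.05)

output = tiny_network([1.0, 2.0])
```

In practice networks have millions or billions of such weights, and "learning patterns from data" means adjusting them automatically so the output matches training examples.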
Large Language Model (LLM)
A deep learning model trained on vast amounts of text data to understand, process, and generate human-like language, often used in chatbots, translations, and content generation.
Prompt Engineering
Prompt engineering is the practice of strategically designing and refining the input given to a generative AI model to guide it toward producing more accurate, relevant, or creative outputs. It involves choosing the appropriate wording, structure, and context in a prompt to elicit the desired behaviour from the model. This technique is especially important because the same model can yield vastly different results depending on how it is prompted.
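As a small illustration, here are two prompts for the same summarization task. The review text and the "product analyst" role are invented for the example; the point is only that the refined prompt pins down role, constraints, and output format, which a vague prompt leaves to chance.

```python
task = "Summarize the following customer review."
review = "The battery lasts two days, but the screen scratches easily."

# A vague prompt leaves the model to guess the format and focus.
vague_prompt = f"{task}\n\n{review}"

# A refined prompt adds a role, constraints, and an output structure.
refined_prompt = (
    "You are a product analyst.\n"
    f"{task}\n"
    "List exactly one pro and one con, each under ten words.\n\n"
    f"{review}"
)
```

Sent to the same model, the two prompts will typically produce very different answers, which is exactly what prompt engineering exploits.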
Chain-of-Thought Prompting
A prompting technique used to improve the reasoning capabilities of LLMs by encouraging them to break down their thought process into intermediate steps before arriving at a final answer. Instead of prompting the model to respond with just an answer, the input includes cues (or examples) that guide it to “think aloud,” mimicking human step-by-step reasoning. This approach enhances performance on complex tasks such as multi-step math problems, logic puzzles, and contextual decision-making.
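The two common flavours can be shown as prompt templates. The questions and the worked example below are invented for illustration; what matters is the structure: a single "think step by step" cue, or a solved example that demonstrates intermediate reasoning.

```python
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Zero-shot chain-of-thought: one cue asks the model to reason stepwise.
cot_prompt = f"{question}\nLet's think step by step."

# Few-shot chain-of-thought: a worked example models the reasoning style.
few_shot_prompt = (
    "Q: A train travels 60 km in 1 hour. How far does it go in 3 hours?\n"
    "A: It covers 60 km each hour. 60 * 3 = 180. The answer is 180 km.\n\n"
    f"Q: {question}\nA:"
)
```

With either prompt, the model is nudged to emit its intermediate steps before the final answer, which measurably improves accuracy on multi-step problems.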
Natural Language Processing (NLP)
A field of AI focused on enabling machines to understand, interpret, and generate human language, powering applications like chatbots, sentiment analysis, and machine translation.
Zero-Shot Learning
A learning approach in which an AI model makes predictions or classifies data it was never given specific examples of, relying instead on general knowledge acquired during training on related tasks.
Tokenization
The process of breaking text into smaller units, such as words, subwords, or characters, to help AI models process language efficiently.
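A minimal sketch of word-level and character-level tokenization in plain Python (real LLM tokenizers use learned subword schemes such as byte-pair encoding, which this does not implement):

```python
import re

text = "Tokenization helps models read text."

# Word-level tokenization: split into words and punctuation marks.
word_tokens = re.findall(r"\w+|[^\w\s]", text)

# Character-level tokenization: every character becomes a token.
char_tokens = list(text)

# Subword tokenizers sit in between: a rare word is split into smaller
# known pieces, e.g. "Tokenization" -> "Token" + "ization" (illustrative
# split, not an actual BPE merge).
subword_tokens = ["Token", "ization"]
```

Each token is then mapped to a numeric ID, which is what the model actually processes.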
Bias in AI
Systematic errors in AI predictions or decisions caused by imbalanced or unrepresentative training data, leading to unfair or inaccurate outcomes.
GAN (Generative Adversarial Network)
A type of AI model consisting of two competing neural networks: a generator that creates content and a discriminator that judges whether that content looks real. Their competition steadily improves the generated output. GANs are often used for deepfake images, synthetic media, and artistic image generation.
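The adversarial idea can be caricatured in pure Python. This is a toy, not a trainable GAN: real samples cluster around 5.0, the "generator" has a single parameter, the "discriminator" is a fixed hand-written scoring rule, and the update is a crude nudge rather than gradient descent.

```python
import random

random.seed(0)  # make the toy run reproducible

def real_sample():
    """Samples from the 'real' data distribution, centred at 5.0."""
    return 5.0 + random.uniform(-0.5, 0.5)

def generator(theta):
    """Produces a fake sample; theta is the single learnable parameter."""
    return theta + random.uniform(-0.5, 0.5)

def discriminator(x, center=5.0):
    """Scores how 'real' a sample looks: 1.0 near the real cluster, 0.0 far away.
    (Hand-written here; in a real GAN this network is trained too.)"""
    return max(0.0, 1.0 - abs(x - center))

# Crude training loop: nudge theta whichever way fools the discriminator more.
theta = 0.0
for _ in range(200):
    score_up = discriminator(generator(theta + 0.1))
    score_down = discriminator(generator(theta - 0.1))
    theta += 0.1 if score_up >= score_down else -0.1
```

After the loop, theta has drifted toward 5.0: the generator's fakes have become hard to tell from real samples, which is the core dynamic of a GAN.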
RAG (Retrieval-Augmented Generation) Model
A model that combines two parts: retrieval and generation. First, it retrieves relevant information from an external source (such as a specific document database) based on the input question. Then, it passes that information to a language model, which uses it to generate a response. This setup helps the AI give more accurate, fact-based answers, especially when the needed information is missing from the model's training data. RAG is useful in tasks like question answering, chatbots, and summarization, where up-to-date or domain-specific knowledge matters.
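The retrieve-then-generate pipeline can be sketched with a toy keyword retriever. The documents and question are invented for the example, and a real system would use embeddings and a vector database rather than word overlap; the sketch only shows how retrieved context is stitched into the prompt before generation.

```python
# Toy document store (a real RAG system would use a vector database).
documents = [
    "The Eiffel Tower is 330 metres tall.",
    "Photosynthesis converts sunlight into chemical energy.",
    "The Great Wall of China is visible from low Earth orbit.",
]

def retrieve(question, docs):
    """Naive retrieval: pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, context):
    """Augment the question with the retrieved passage before generation."""
    return f"Context: {context}\nQuestion: {question}\nAnswer using only the context."

question = "How tall is the Eiffel Tower?"
context = retrieve(question, documents)
prompt = build_prompt(question, context)
```

The final `prompt` is what gets sent to the language model, so its answer is grounded in the retrieved passage rather than in whatever its training data happened to contain.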
Looking for more words you may not be sure of? We've got you covered!