Speak AI: A Plain-English Glossary for Dynamics Pros, Hiring Managers, and Anyone Tired of Nodding Along

By Bryan Ray, Dynamics Focus


Credit where it’s due. This article was inspired by — and draws directly from — a terrific glossary published by Natasha Lomas, Romain Dillet, Kyle Wiggers, and Lucas Ropek at TechCrunch on May 9, 2026: “So you’ve heard these AI terms and nodded along; let’s fix that.” Their definitions form the backbone of the explanations below. I’ve added context for the audience I work with every day — Microsoft Dynamics professionals, the leaders hiring them, and the candidates trying to position themselves for what’s coming next. You can (and should) read the full original glossary here: TechCrunch’s AI glossary.


Why I’m writing this

Spend five minutes in any Dynamics, ERP, or enterprise tech conversation right now and you’ll hear half a dozen AI terms thrown around like everyone in the room already knows what they mean. Sometimes everyone does. More often, a few people are quietly nodding along.

That’s a real problem in our world — because how you talk about AI in an interview, on a sales call, or in a steering committee meeting is starting to influence who gets hired, who gets promoted, and which projects get funded. The TechCrunch team did the heavy lifting of defining these terms clearly. What follows is a curated subset, with my recruiter’s lens added on top.


The terms worth knowing

AGI (Artificial General Intelligence)

TechCrunch describes AGI as a “nebulous term” generally referring to AI that’s more capable than the average human at most tasks. Sam Altman has called it the “equivalent of a median human you could hire as a co-worker.” Google DeepMind frames it as “AI at least as capable as humans at most cognitive tasks.”

Why it matters for our space: Nobody actually agrees on what AGI is — not even the people building it. So when a vendor, candidate, or client tells you their solution is “almost AGI,” that’s marketing, not a spec sheet. Ask follow-up questions.

AI Agent

Per TechCrunch, an AI agent is a tool that performs a series of tasks on your behalf — filing expenses, booking travel, even writing code — beyond what a basic chatbot can do. The concept implies an autonomous system that may chain multiple AI systems together to complete multi-step work.

Why it matters for our space: This is the term reshaping job descriptions right now. When a Dynamics implementation lead says they want someone with “agent experience,” they usually mean someone who has built or deployed autonomous workflows — not someone who has chatted with Copilot.

Coding Agents

A specialized AI agent applied to software development. As TechCrunch puts it, rather than suggesting code for a human to paste in, a coding agent can write, test, and debug autonomously — “like hiring a very fast intern who never sleeps and never loses focus.”

Why it matters for our space: This is changing what a junior developer role looks like. Hiring managers should be asking how candidates work with coding agents, not whether they’ll be replaced by them.

Large Language Model (LLM)

The TechCrunch glossary explains LLMs as the AI models behind ChatGPT, Claude, Gemini, Copilot, and others. They’re deep neural networks made of billions of numerical parameters that learn relationships between words and generate the most likely pattern that fits a prompt.

Why it matters for our space: If a candidate can’t explain what an LLM is in one sentence, that’s a flag in 2026 — regardless of whether they’re a developer or a functional consultant. Everyone in the Dynamics ecosystem needs at least a working definition.

Hallucination

TechCrunch’s definition is direct: the AI industry’s preferred term for AI models making stuff up. The cause is generally gaps in training data, and it’s pushing the industry toward more specialized, vertical models.

Why it matters for our space: This is the single biggest risk in deploying AI on top of business systems. If you’re putting an LLM in front of your ERP data, the entire conversation needs to be about how hallucinations are detected, contained, and reviewed.

Fine-tuning

Further training of an existing AI model to optimize it for a specific task or domain, typically by feeding in specialized data. Many startups take a general LLM and fine-tune it for their target sector.

Why it matters for our space: This is how AI gets useful for vertical work — manufacturing, finance, distribution, healthcare. When you hear “we fine-tuned our model on industry-specific data,” that’s a feature, not jargon.

Inference vs. Training

TechCrunch separates these clearly. Training is the process of teaching a model from data. Inference is the process of running that model to make predictions. Training is expensive and infrequent; inference happens every time someone uses the tool.

Why it matters for our space: Most enterprise AI conversations are really about inference costs. When a CFO asks “why is this so expensive at scale?” — they’re asking about inference economics, even if they don’t use the word.

Open Source

TechCrunch frames the open-vs-closed debate cleanly: open-source AI means the underlying code (and often the model weights) is publicly available. Meta’s Llama is the prominent example. OpenAI’s GPT models are the closed counterpart.

Why it matters for our space: This is becoming a procurement and compliance question, not just a technical one. Open-source AI lets you run things in your own environment with your own data — increasingly relevant for regulated industries.

Chain of Thought

A reasoning technique where an LLM breaks a problem into intermediate steps before answering — TechCrunch’s analogy is doing math with a pen and paper instead of in your head. It takes longer but is more accurate, especially for logic and coding tasks.

Why it matters for our space: This is why “reasoning models” cost more and respond slower — they’re showing their work. For complex Dynamics or financial logic, that’s usually worth it.

Reinforcement Learning (and RLHF)

TechCrunch’s analogy is perfect: it’s like training your pet with treats, except the pet is a neural network and the treat is a mathematical signal. Reinforcement learning from human feedback (RLHF) is central to how modern AI labs make their models more helpful and safe.

Why it matters for our space: When a vendor says their AI has been “aligned” or “tuned for enterprise use,” RLHF is usually what they mean.

Tokens and Token Throughput

Tokens are the small chunks of text — often parts of words — that LLMs process. Token throughput is how many of them a system can handle per second. Per TechCrunch, most AI services charge by the token, so this is also how you get billed.

Why it matters for our space: When IT leaders ask “what will this cost at scale?” — the honest answer is “it depends on tokens.” Anyone signing an enterprise AI contract should understand this.
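To make the "it depends on tokens" answer concrete, here is a minimal back-of-envelope sketch of how token-based billing adds up. The per-1,000-token prices, user counts, and tokens-per-query figures below are placeholder assumptions for illustration only, not any vendor's actual rates:

```python
def monthly_cost(users, queries_per_user_per_day,
                 input_tokens_per_query, output_tokens_per_query,
                 price_in_per_1k=0.003, price_out_per_1k=0.015,
                 workdays=22):
    """Rough monthly LLM spend in USD. All rates are illustrative placeholders."""
    queries = users * queries_per_user_per_day * workdays
    # Input (prompt) tokens and output (response) tokens are typically
    # billed at different rates, with output usually costing more.
    cost_in = queries * input_tokens_per_query / 1000 * price_in_per_1k
    cost_out = queries * output_tokens_per_query / 1000 * price_out_per_1k
    return cost_in + cost_out

# Hypothetical scenario: 500 users, 20 queries per workday,
# ~1,500 tokens of prompt/context in and ~500 tokens out per query.
estimate = monthly_cost(500, 20, 1500, 500)
print(f"${estimate:,.2f} per month")
```

The point of the exercise isn't the specific dollar figure; it's that per-query token counts (especially the context you stuff into each prompt) multiply across users and days, which is why seat counts alone don't predict the bill.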

RAMageddon

TechCrunch’s name for the ongoing shortage of RAM chips driven by AI data center buildouts. It’s pushing up the price of everything from gaming consoles to enterprise servers.

Why it matters for our space: If your 2026 IT budget feels tighter than expected, this is part of why. Hardware refresh cycles are getting longer and more expensive.


What I’d tell candidates and hiring managers

Three takeaways from spending time with this glossary:

A baseline vocabulary is now table stakes. You don’t need to be a researcher, but you do need to be able to follow the conversation. The terms above are the minimum.

The honest experts are humble. If someone explains all of this with absolute confidence and no caveats, be skeptical. The TechCrunch team explicitly notes that even the people building this stuff disagree on definitions.

Specialization is winning. Hallucinations, cost pressure, and compliance are all pushing toward narrower, domain-specific AI. In the Dynamics world, that’s good news — it means the people who understand both the business and the technology are about to be more valuable, not less.


Read the original

The full glossary, which covers several more terms I didn’t include here, is at TechCrunch:

“So you’ve heard these AI terms and nodded along; let’s fix that” by Natasha Lomas, Romain Dillet, Kyle Wiggers, and Lucas Ropek — TechCrunch, May 9, 2026.

All credit for the underlying definitions goes to them. The commentary, framing, and recruiter’s-eye-view above are mine.


Bryan Ray is the founder of Dynamics Focus, a recruiting firm specializing in the Microsoft Dynamics ecosystem. Connect with him on LinkedIn or at dynamicsfocus.com.