Learn

The glossary

Canonical definitions for the autonomous business era.

What's an AI employee? What's a one-person company? What's an autonomy slider? Clear, citable definitions we want ChatGPT to quote.

Free to start · No credit card required

Agentic AI Explained: Definition, Examples & Business Use

Agentic AI refers to AI systems that can plan, execute, and self-correct across multi-step tasks by using tools, memory, and reasoning loops. Unlike a traditional chatbot that produces text in response to a single prompt, an agentic system takes actions — browsing the web, sending emails, writing code, updating databases — and adjusts its plan based on the results.

AI Agent vs AI Employee: What's the Difference? (2026)

An AI agent is software that completes tasks autonomously using tools and reasoning. An AI employee is an AI agent plus persistent memory, a defined role, a scoped portfolio of responsibilities, and an ongoing relationship with a business. Every AI employee is an AI agent, but not every AI agent is an AI employee — the employee framing adds role, memory, and continuity.

What is an AI Agent Benchmark? Definition & List (2026)

An AI agent benchmark is a standardized test suite with defined tasks, inputs, and automated scoring for measuring AI agent performance — particularly on end-to-end real-world tasks involving tools, reasoning, and multi-step planning. Major benchmarks include SWE-bench Verified (Princeton, 2024) for code, WebArena (CMU, 2023) for web browsing, GAIA (Meta AI, 2023) for general assistants, and TauBench (Sierra, 2024) for conversational tool use. Benchmarks let you compare agents apples-to-apples.

What is Agent Evaluation? Definition & Methods (2026)

Agent evaluation is the practice of systematically measuring AI agent quality against defined tasks and rubrics — accuracy, groundedness, tool-use correctness, end-to-end task success, and safety. It combines automated metrics (exact match, F1, BLEU), LLM-as-judge scoring, golden-set regression tests, and human review. Evals are the difference between 'the agent feels good' and 'the agent actually works in production at a known quality bar.'
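A golden-set regression test can be sketched in a few lines. This is an illustrative harness, not a real eval framework: `evaluate`, `toy_agent`, and the case format are made up for the example, and the toy agent stands in for an LLM-backed system.

```python
# Minimal golden-set evaluation harness (illustrative names, not a real framework).
# Each case pairs an input with an expected answer; the agent is any callable.

def evaluate(agent, golden_set):
    """Run the agent over a golden set and return exact-match accuracy."""
    passed = 0
    for case in golden_set:
        output = agent(case["input"])
        if output.strip().lower() == case["expected"].strip().lower():
            passed += 1
    return passed / len(golden_set)

# A toy "agent" standing in for a real LLM-backed system.
def toy_agent(prompt):
    return {"capital of France?": "Paris"}.get(prompt, "unknown")

accuracy = evaluate(toy_agent, [
    {"input": "capital of France?", "expected": "Paris"},
    {"input": "capital of Spain?", "expected": "Madrid"},
])
```

Run the same golden set on every agent change; a drop in accuracy is a regression caught before production.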

What is AI Agent Memory? Definition & Types (2026)

AI agent memory is the persistent state an agent carries across sessions — facts about the user, past decisions, preferences, prior outputs — beyond what fits in a single LLM context window. It typically combines a short-term working buffer, a long-term store in a vector database or structured DB, and a retrieval policy that decides what to surface on each turn. Memory is what separates an AI employee from a chatbot.

What is an Agent Orchestrator? Definition & Examples (2026)

An agent orchestrator is the coordinator in a multi-agent AI system — the component that decides which agent runs next, passes context between agents, resolves conflicts, aggregates results, and handles failures. It is typically either an LLM-based 'supervisor agent' or a deterministic router, and is what turns a collection of specialist agents into a working team capable of tackling complex tasks end-to-end.
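The deterministic-router variant can be sketched as follows. All names here are hypothetical; a production orchestrator would also handle retries, shared context, and escalation.

```python
# Sketch of a deterministic router-style orchestrator (illustrative names).
# Specialist agents register for task types; the orchestrator routes work
# to the right agent, records failures, and aggregates the results.

class Orchestrator:
    def __init__(self):
        self.agents = {}  # task_type -> handler callable

    def register(self, task_type, handler):
        self.agents[task_type] = handler

    def run(self, tasks):
        results = []
        for task in tasks:
            handler = self.agents.get(task["type"])
            if handler is None:
                results.append({"task": task, "error": "no agent for task type"})
            else:
                results.append({"task": task, "output": handler(task["payload"])})
        return results

orc = Orchestrator()
orc.register("summarize", lambda text: text[:10] + "...")
orc.register("count", lambda items: len(items))

results = orc.run([
    {"type": "count", "payload": [1, 2, 3]},
    {"type": "translate", "payload": "hola"},  # no registered agent -> failure path
])
```

An LLM-based supervisor replaces the dictionary lookup with a model call that decides the route, at the cost of determinism.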

What Is Agentic Commerce? Definition & Examples (2026)

Agentic commerce is a transaction where an AI agent — not a human — browses options, compares terms, and completes the purchase on behalf of its principal. In 2026 this includes consumer agents booking travel, B2B agents renewing software, and one-person companies whose AI COO procures infrastructure autonomously.

What is an AI CEO? Definition, Role & How It Works

An AI CEO is the coordinator AI agent that runs a company's AI workforce. It translates the human founder's strategy into tasks, delegates to specialist AI employees (CMO, CTO, COO, etc.), tracks progress, handles cross-functional coordination, and surfaces decisions that need founder input. It is the single chat interface between a founder and the AI team.

What is an AI Employee? Definition & How It Works (2026)

An AI employee is an autonomous AI agent assigned to a specific functional role in a company — such as CMO, customer support lead, or content manager — with persistent memory of the business, a defined scope of work, configurable autonomy levels, and the ability to execute multi-step tasks across tools. Unlike a chatbot, an AI employee owns outcomes across time.

What is AI Governance? Definition & Frameworks (2026)

AI governance is the framework of policies, roles, controls, and audit processes an organization uses to manage AI systems responsibly — covering safety, bias, privacy, accuracy, security, and regulatory compliance. It spans the full lifecycle: data sourcing, model selection, deployment approval, monitoring, and decommissioning. Governance became an active requirement in 2024-2026 as the EU AI Act, NIST AI RMF, and ISO 42001 moved from guidance to enforced standards.

What Is AI Orchestration? Definition & Examples (2026)

AI orchestration is the coordination of multiple AI agents — each with a role, scope, and skills — into a team that executes real work. A single chatbot answers questions; orchestration handles handoffs, shared memory, escalation, and governance across agents so outputs compound rather than conflict.

What is AI Red Teaming? Definition & How It Works (2026)

AI red teaming is the practice of adversarially testing an AI system — prompting it with attacks, edge cases, and creative misuse scenarios — to surface harmful, biased, insecure, or incorrect behavior before deployment. It borrows from security red teaming but focuses on model-specific risks: jailbreaks, prompt injection, data exfiltration, bias amplification, and agent misuse. Red teaming is now standard practice at frontier labs and required under the EU AI Act for high-risk systems.

What is an AI Workforce? Definition, Roles & Examples

An AI workforce is a coordinated team of AI employees covering multiple business functions — typically marketing, sales, customer support, content, ops, and finance — working under human direction. Unlike a single AI tool or chatbot, an AI workforce functions as an org chart where specialized agents handle different roles and coordinate with each other.

What is an Autonomous Business? Definition & Examples

An autonomous business is a company in which most operational work — marketing, sales, support, content, ops — is performed by AI agents working within human-set strategy and guardrails. The human role shifts from execution to direction, approval, and exception handling, while AI agents run continuous workflows across functions.

What is an Autonomy Slider? AI Trust Control Explained

An autonomy slider is a control that sets how much independence an AI employee has to act without human approval. At the low end, every action requires human sign-off; at the high end, the AI employee runs fully autonomously, only escalating exceptions. The slider is per-agent and per-action-type, so a founder can tune trust as it is earned.
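The per-action-type part can be made concrete with a toy policy table. The action names, levels, and function below are invented for illustration, not a real product API.

```python
# Toy per-action-type autonomy policy (illustrative, not a real API).
# Each action type has a minimum trust level; an agent whose slider is
# below that level must route the action to a human for approval.

AUTONOMY_LEVELS = {
    "send_internal_note": 1,   # low-risk: auto-approved from level 1 up
    "publish_blog_post": 2,
    "spend_money": 3,          # high-risk: needs the highest trust setting
}

def needs_human_approval(action_type, agent_autonomy_level):
    """An action runs unattended only if the agent's slider meets its threshold."""
    required = AUTONOMY_LEVELS.get(action_type, float("inf"))  # unknown -> always ask
    return agent_autonomy_level < required
```

Raising an agent's level as it earns trust widens the set of actions it can take without sign-off, while unknown action types always escalate.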

What is Chain-of-Thought Prompting? Definition (2026)

Chain-of-thought (CoT) prompting is a technique where an LLM is instructed to write out its intermediate reasoning steps before producing a final answer. Introduced by Wei et al. at Google Research in 2022, CoT dramatically improves accuracy on arithmetic, commonsense, and symbolic reasoning tasks by forcing the model to decompose problems instead of jumping to an answer.
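In its zero-shot form, CoT is just a prompt template. The trigger phrase "Let's think step by step" comes from Kojima et al. (2022); the wrapper function is our own illustration.

```python
# Minimal zero-shot chain-of-thought prompt builder. The trigger phrase is from
# Kojima et al., 2022 ("Large Language Models are Zero-Shot Reasoners");
# the function and answer-line convention are illustrative.

def cot_prompt(question):
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a line starting with 'Answer:'."
    )

prompt = cot_prompt("A train leaves at 3pm and arrives at 5:30pm. How long is the trip?")
```

The few-shot variant from Wei et al. instead prepends worked examples whose answers include the reasoning steps.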

What is Computer Use? AI Agents Controlling Your PC (2026)

Computer use is an AI capability where a vision-language model controls a computer by taking screenshots, reasoning about what's on screen, and issuing mouse, keyboard, and scrolling commands — letting it operate any application a human can. Anthropic shipped the first public computer-use API in October 2024 with Claude 3.5 Sonnet; OpenAI released Operator in January 2025 with a similar capability.

What is a Context Window? LLM Long Context Explained (2026)

A context window is the maximum number of tokens an LLM can process in a single inference request — including the system prompt, conversation history, retrieved documents, tool outputs, and the generated response. It is the hard ceiling on how much information the model can 'see' at once, and ranges from 8K tokens (GPT-3.5 era) to 1M+ tokens (Gemini 2.5 Pro, 2025).

What are Vector Embeddings? Definition & Examples (2026)

A vector embedding is a dense numerical representation of a piece of content — typically a 384- to 3072-dimensional float vector — produced by a neural network trained so that semantically similar inputs yield geometrically close vectors. Embeddings turn text, code, images, or audio into a shared numerical space where cosine similarity approximates semantic similarity, enabling retrieval, clustering, classification, and recommendation without rule-based features.
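The "geometrically close" part reduces to cosine similarity. The toy 4-dimensional vectors below are hand-written for illustration; real embeddings have hundreds to thousands of dimensions and come from an embedding model.

```python
# Cosine similarity over toy 4-dimensional vectors. Real embeddings are
# 384-3072 dims and produced by a model; these hand-written vectors just
# illustrate that related concepts should score higher than unrelated ones.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

dog     = [0.9, 0.1, 0.0, 0.2]
puppy   = [0.8, 0.2, 0.1, 0.3]  # close in meaning to "dog"
invoice = [0.0, 0.1, 0.9, 0.7]  # unrelated concept
```

With a real embedding model, `cosine_similarity(dog, puppy)` would likewise exceed `cosine_similarity(dog, invoice)`, which is exactly what retrieval exploits.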

What is LLM Fine-Tuning? Definition & When to Use It (2026)

Fine-tuning is the process of continuing to train a pretrained LLM on a smaller, task-specific dataset so the model internalizes a style, domain, or output format. It updates model weights, unlike prompting which leaves the model unchanged. Modern fine-tuning uses parameter-efficient methods like LoRA to update a small fraction of weights, making it affordable at $10-$500 per run rather than the millions required for pretraining.

What is Function Calling? LLM Tool Use Explained (2026)

Function calling is an LLM capability where the model, given a set of function schemas, outputs structured JSON indicating which function to call and with what arguments — letting your application invoke real code, APIs, or database queries. OpenAI introduced the feature in June 2023; it is now standard across GPT, Claude, Gemini, and open-source models like Llama 3.1 and Mistral.
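The loop looks roughly like this. The `get_weather` tool, its schema, and the model output string are all made up for the example; the schema shape follows the common JSON Schema convention used by major providers.

```python
# Sketch of the function-calling loop: define a schema, take the tool call the
# model emitted as JSON, and dispatch it to real code. get_weather is a made-up
# tool; the model_output string is hand-written to stand in for a model response.
import json

WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city):
    return {"city": city, "temp_c": 18}  # stub; a real tool would call an API

TOOLS = {"get_weather": get_weather}

# What the model might emit given the schema above (hand-written for the example):
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
```

The result is then fed back into the conversation so the model can compose its final answer — that feedback step is what turns one-shot generation into a tool-use loop.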

What are AI Guardrails? Definition & Types (2026)

AI guardrails are runtime policies and filters that constrain what an LLM or AI agent can output or do — blocking unsafe content, PII leaks, off-topic responses, prompt injections, and out-of-scope tool calls. Implemented as input validation, output classification, and action-level policies, they sit between the raw model and the user to enforce business rules, safety requirements, and regulatory compliance that alignment training alone cannot guarantee.
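An output-level guardrail can be as simple as pattern checks before text reaches the user. This is a deliberately toy sketch — production guardrails layer classifiers, policy engines, and action-level checks on top of regexes like these.

```python
# Toy output guardrail: redact email addresses and hard-block SSN-like strings
# before a response reaches the user. Patterns and policy are illustrative only;
# real guardrails combine classifiers and policies, not just regexes.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def apply_output_guardrail(text):
    """Return (allowed, text): block SSN-like data outright, redact emails."""
    if SSN_RE.search(text):
        return False, "Response blocked: possible sensitive data."
    return True, EMAIL_RE.sub("[redacted email]", text)

ok, safe = apply_output_guardrail("Contact alice@example.com for details.")
```

Input guardrails run the same way in the other direction, screening prompts for injection attempts before they reach the model.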

What is LLM Hallucination? Definition & Mitigations (2026)

LLM hallucination is when a language model generates false or fabricated information with high confidence — invented citations, non-existent APIs, wrong dates, bogus quotes. The root cause is that LLMs are trained to predict plausible next tokens, not to be truthful; they have no built-in distinction between 'I know this' and 'I'm guessing.' Hallucination is the single biggest failure mode of LLMs in production and the main reason AI systems need RAG, guardrails, and evals.

What Is a Heartbeat Agent? The 2026 AI Pattern Explained

A heartbeat agent is an AI agent that wakes on a schedule — hourly, daily, or event-driven — and runs a defined task without a human prompt. Heartbeats turn agents from reactive chatbots into proactive employees. They are the primitive that makes autonomous businesses possible, because most business value lives in recurring work, not one-off questions.
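Mechanically, a heartbeat is just a due-task check run on a timer. The task shape and field names below are invented for illustration.

```python
# Sketch of a heartbeat scheduler: each task records when it last ran and how
# often it should wake. A real runner calls due_tasks() on a timer and executes
# each returned task. Task names and the interval field are illustrative.

def due_tasks(tasks, now):
    """Return the tasks whose interval has elapsed since their last run."""
    return [t for t in tasks if now - t["last_run"] >= t["interval_s"]]

tasks = [
    {"name": "daily_metrics_report", "interval_s": 86400, "last_run": 0},
    {"name": "hourly_inbox_triage", "interval_s": 3600, "last_run": 90000},
]

# At t=90100s the daily report (last run at t=0) is overdue; triage ran 100s ago.
ready = due_tasks(tasks, now=90100)
```

Event-driven heartbeats swap the clock check for a webhook or queue message, but the shape is the same: wake, run the defined task, go back to sleep.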

What is Human-in-the-Loop (HITL)? Definition (2026)

Human-in-the-loop (HITL) is a design pattern where humans review, approve, or correct AI system outputs at designated decision points — combining AI scale and speed with human judgment and accountability. It is the dominant pattern for deploying AI in high-stakes domains like medicine, law, finance, and customer communication, and is the standard way to safely ramp up agent autonomy over time.

What is Hyperautomation? Definition & 2026 Landscape

Hyperautomation is a Gartner-coined term (2020) for the disciplined, business-driven use of multiple technologies — AI, machine learning, RPA, process mining, workflow automation, and event-driven architecture — to automate as many business and IT processes as possible. It is less a single technology than a program: identify every manual process, decide which to automate, and orchestrate the right tools to do it at scale.

What is Inference Cost? LLM Economics Explained (2026)

Inference cost is the per-token price of running a trained LLM to generate outputs, typically billed separately for input (prompt) tokens and output (response) tokens. In 2026 it ranges from ~$0.10 per million tokens for small open-source models up to $75 per million output tokens for frontier proprietary models. Inference cost — not training cost — dominates the economics of production AI applications at scale.
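The arithmetic is simple but worth internalizing. The per-million rates below are placeholders within the range quoted above; always check your provider's current pricing.

```python
# Token-level cost arithmetic. The $3-in / $15-out rates are placeholder values
# within the range quoted above, not any specific provider's pricing.

def request_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    return (input_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# e.g. a 10k-token prompt with a 1k-token reply:
cost = request_cost(10_000, 1_000, in_price_per_m=3.0, out_price_per_m=15.0)
```

Note the asymmetry: output tokens often cost several times more than input tokens, so verbose responses, not long prompts, frequently dominate the bill.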

What is LLM Streaming? Definition & How It Works (2026)

LLM streaming is the technique of returning tokens from an LLM API as they are generated rather than waiting for the complete response. Implemented via server-sent events (SSE) or WebSocket, streaming reduces perceived latency from seconds to under 500ms by letting the UI show text as the model types. It is the default mode for chat interfaces like ChatGPT, Claude, and agentic platforms including Tycoon.
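The consumer side reduces to accumulating chunks as they arrive. The generator below stands in for a real SSE event stream; a production client would parse each `data:` event instead.

```python
# Sketch of consuming a streamed response: tokens arrive one at a time and the
# UI renders incrementally. fake_token_stream stands in for an SSE stream; a
# real client would parse each 'data:' event from the HTTP response.

def fake_token_stream():
    for token in ["The", " answer", " is", " 42", "."]:
        yield token

rendered = ""
for token in fake_token_stream():
    rendered += token  # the UI repaints after each chunk instead of waiting
```

The total generation time is unchanged; what streaming buys is time-to-first-token, which is what users actually perceive as speed.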

What is LLM Temperature? Definition & How to Set It (2026)

Temperature is a sampling parameter that controls how random an LLM's token selection is. At temperature 0 the model picks the single most probable next token every time (greedy decoding, fully deterministic); at higher values it samples from the probability distribution, yielding more varied output. Typical ranges are 0-0.3 for factual tasks, 0.5-0.8 for general chat, and 0.8-1.2 for creative writing. Temperature does not change what the model knows — only how it chooses words.
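Under the hood, temperature divides the logits before the softmax. The toy logits below are hand-assigned for illustration; real models produce one logit per vocabulary token.

```python
# Temperature-scaled sampling over toy next-token logits. Dividing logits by
# the temperature before softmax sharpens (low T) or flattens (high T) the
# distribution; T=0 is greedy decoding. Logit values are illustrative.
import math
import random

def sample_token(logits, temperature, rng=random.Random(0)):
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: always the top token
    scaled = {t: l / temperature for t, l in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    return rng.choices(list(probs), weights=list(probs.values()))[0]

logits = {"Paris": 5.0, "London": 2.0, "banana": -3.0}
greedy = sample_token(logits, temperature=0)
```

At T=0.2 "Paris" dominates almost completely; at T=1.2 "London" starts appearing regularly — same knowledge, different willingness to gamble on lower-probability words.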

What is an LLM Token? Definition & Why It Matters (2026)

A token is the unit of text an LLM processes — typically a subword produced by the model's tokenizer. For English, one token averages about 0.75 words or 4 characters. A 1000-token prompt is roughly 750 words. Everything about LLMs is measured in tokens: context window size, API pricing, throughput, latency. Understanding tokens is the difference between reasoning about LLMs correctly and burning money unnecessarily.
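The 0.75-words-per-token rule of thumb gives a quick budget estimate. This is an approximation only; exact counts require the model's actual tokenizer (e.g. the `tiktoken` library for OpenAI models).

```python
# Back-of-envelope token estimate from the ~0.75 words-per-token rule of thumb.
# Exact counts need the model's real tokenizer (e.g. tiktoken for OpenAI models).

def estimate_tokens(text):
    words = len(text.split())
    return round(words / 0.75)

# ~750 words should come out near 1000 tokens:
est = estimate_tokens(" ".join(["word"] * 750))
```

The same estimate, multiplied by per-token pricing, turns a draft prompt into an expected cost before you ever call the API.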

What is MCP? Model Context Protocol Explained (2026)

The Model Context Protocol (MCP) is an open standard from Anthropic, published November 2024, that defines how LLM applications connect to external data sources and tools. It specifies a JSON-RPC-based protocol where MCP servers expose tools, resources, and prompts, and MCP clients (Claude Desktop, Claude Code, Cursor, and others) consume them, so any LLM app can plug into any MCP server without custom integration code.

What is a Multi-Agent System? Definition & Examples (2026)

A multi-agent system is an architecture where multiple autonomous AI agents — each with its own role, memory, and tools — communicate and coordinate to accomplish tasks that no single agent could solve alone. Each agent specializes in one domain, and a coordinator or orchestrator routes work, resolves conflicts, and aggregates results.

What is a One-Person Company? Definition & Examples (2026)

A one-person company is a business operated by a single human founder who uses AI employees instead of traditional staff to run marketing, sales, engineering, operations, and support. The human focuses on strategy and taste while AI agents execute the work that previously required a team of 10 to 50 employees.

What is Prompt Engineering? Definition, Role & Future (2026)

Prompt engineering is the practice of designing inputs to large language models — including instructions, examples, structural hints, and tool schemas — to reliably produce desired outputs. It emerged as a distinct skill in 2022-2023 with the rise of ChatGPT and GPT-4, and has evolved from a job title into an embedded competency expected of anyone building with LLMs.

What is RAG? Retrieval-Augmented Generation Explained (2026)

Retrieval-Augmented Generation (RAG) is a pattern where an AI system retrieves relevant documents from a vector database and includes them in the prompt to an LLM, allowing it to answer questions using information outside its training data. Introduced by Meta AI researchers in 2020, RAG is the standard way to give LLMs access to private data, current data, or data too large to fit in a single prompt.
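The retrieve-then-prompt shape can be sketched end to end. Real systems score documents with vector embeddings; the word-overlap scorer below is a stand-in so the example stays self-contained, and all names are illustrative.

```python
# Minimal RAG sketch: score documents against the query, stuff the best ones
# into the prompt. Real systems score with embeddings + a vector database; the
# word-overlap scorer here is a self-contained stand-in for illustration.
import re

def tokenize(s):
    return set(re.findall(r"\w+", s.lower()))

def retrieve(query, documents, k=2):
    q = tokenize(query)
    return sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def build_prompt(query, documents):
    context = "\n".join(retrieve(query, documents))
    return f"Use only this context to answer.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "The refund policy allows returns within 30 days.",
    "Our office dog, Biscuit, loves treats.",
    "Refunds are issued to the original payment method.",
]
prompt = build_prompt("What is the refund policy?", docs)
```

The LLM then answers from the supplied context rather than its training data, which is what keeps responses grounded in your private, current documents.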

What is RLHF? Reinforcement Learning from Human Feedback

RLHF is a training technique where human annotators rank or compare multiple LLM outputs, those rankings train a reward model that predicts human preferences, and the base LLM is then fine-tuned via reinforcement learning to maximize that reward model's score. Building on earlier preference-learning work (Christiano et al., 2017), it was popularized by OpenAI in 2022 with InstructGPT and ChatGPT, and is the core method that made raw language models helpful and harmless rather than just statistical next-token predictors.

What is RPA? Robotic Process Automation Explained (2026)

Robotic Process Automation (RPA) is software that automates repetitive digital tasks by controlling user interfaces — clicking buttons, typing into forms, copying data between applications — exactly as a human would. It was pioneered in the mid-2000s by vendors like Blue Prism and UiPath, and solves the integration problem for legacy systems that have no API by driving their UIs instead.

What is Semantic Search? Definition & How It Works (2026)

Semantic search is a retrieval technique that ranks documents by meaning similarity rather than keyword overlap. It converts both query and documents into vector embeddings and returns the closest matches by cosine similarity or dot product. Unlike traditional lexical search (BM25, tf-idf), it finds relevant results that share no literal words with the query, which is why it powers modern RAG, AI search, and agent memory systems.
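Ranking by meaning rather than words looks like this in miniature. The 3-dimensional vectors are hand-assigned to mimic what an embedding model would produce; note the top matches share no literal words with the query.

```python
# Semantic ranking sketch: sort documents by cosine similarity to the query
# vector. The 3-d vectors are hand-assigned stand-ins for model embeddings,
# chosen so meaning-related titles land near the query despite no shared words.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

index = {
    "How to reset your password":  [0.9, 0.1, 0.1],
    "Quarterly revenue report":    [0.1, 0.9, 0.2],
    "Recovering a locked account": [0.8, 0.2, 0.2],
}
query_vec = [0.85, 0.1, 0.15]  # pretend embedding of "I can't log in"

ranked = sorted(index, key=lambda doc: cosine(index[doc], query_vec), reverse=True)
```

A BM25 search for "I can't log in" would miss all three titles; the embedding space puts the two account-access documents on top anyway.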

What Is a Skill Marketplace? AI Skills Explained (2026)

A skill marketplace is a catalog of pluggable capabilities that AI employees install to gain new abilities — SEO, financial modeling, customer research, compliance audits. A skill packages the prompts, tool access, and decision logic for a function, so an AI CMO can pick up 'AEO Optimization' the same way a human would pick up a certification.

What is Tool Use in AI? Definition & Examples (2026)

Tool use is the umbrella term for an AI model invoking external tools — APIs, code execution environments, file systems, web browsers, databases — to accomplish tasks beyond generating text. It encompasses function calling (the API primitive), computer use (clicking/typing in a GUI), code execution, and web browsing, and is the foundational capability that separates a chatbot from an agent.

What is a Vector Database? Definition & Examples (2026)

A vector database is a specialized data store for high-dimensional embedding vectors that supports fast approximate nearest neighbor (ANN) search. It lets you store millions or billions of vectors (typically 384-3072 dimensions each) and retrieve the closest ones to a query vector in milliseconds using indexes like HNSW or IVF. Vector databases are the storage layer underneath RAG, semantic search, and AI agent memory.

What is Workflow Automation? vs AI Agents (2026)

Workflow automation is the use of software to execute predefined sequences of business tasks — 'if A then B then C' — without human intervention at each step. Tools like Zapier, Make, n8n, and Workato let non-developers connect apps via triggers and actions, turning repetitive manual work (copying data between systems, sending routine emails, updating records) into scheduled or event-driven pipelines.
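The 'if A then B then C' shape reduces to a trigger event passed through an ordered list of actions. Step names and payload fields below are invented for illustration; tools like Zapier or n8n wire the same shape up visually.

```python
# Toy trigger -> actions pipeline in the 'if A then B then C' shape described
# above. Step and field names are illustrative; real tools configure these
# steps visually rather than in code.

def run_workflow(event, steps):
    """Pass the trigger event through each step in order; steps enrich the payload."""
    payload = dict(event)
    for step in steps:
        payload = step(payload)
    return payload

steps = [
    lambda p: {**p, "greeting": f"Welcome, {p['name']}!"},  # B: draft welcome email
    lambda p: {**p, "crm_status": "contact_created"},       # C: update the CRM
]

result = run_workflow({"trigger": "new_signup", "name": "Ada"}, steps)
```

The contrast with an AI agent is that every step here is predefined; an agent decides at runtime which steps to take and in what order.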

Run your one-person company.

Hire your AI team in 30 seconds. Start for free.

Free to start · No credit card required · Set up in 30 seconds