Glossary
59 essential terms for understanding AI agents in law and finance
A comprehensive reference derived from Agentic AI in Law and Finance. These definitions span computer science, law, economics, and philosophy to provide a rigorous foundation for discussing AI agents.
Action
The Six Properties: The third foundational property (A in GPA). The ability to effect change in the environment through actuators, API calls, or tool use. Actions can be reversible or irreversible, and governance requires appropriate approval gates.
Adaptation
The Six Properties: The second operational property (A in IAT). The ability to modify behavior based on experience, feedback, or changing conditions. Adaptation can occur within a session or across sessions, and requires change control and revalidation.
Adverse Selection
Legal & Economic: A principal-agent problem where principals cannot accurately assess agent quality before engagement due to information asymmetry. In AI contexts, relates to difficulty evaluating AI system capabilities and limitations before deployment.
Agency Costs
Legal & Economic: Economic costs arising from divergent interests between principals and agents, including monitoring costs (oversight), bonding costs (agent commitments), and residual losses (imperfect alignment). AI governance represents a form of monitoring cost.
Agency Relationship
Legal & Economic: A legal arrangement where one party (agent) acts on behalf of another (principal) with the principal's consent and subject to the principal's control. Creates fiduciary obligations of loyalty and care. The Restatement of Agency provides authoritative treatment in U.S. law.
Agent
Core Framework: A system exhibiting the three foundational properties of Goal, Perception, and Action (GPA). An agent pursues objectives, observes its environment, and takes actions to achieve its goals. This represents Level 1 in the three-level hierarchy.
Agent-Based Modeling (ABM)
Specialized Technical: A computational methodology where autonomous agents with simple rules interact to produce emergent macro-level patterns. Widely used in economics, finance, and social science to model markets, organizational behavior, and policy effects.
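A toy sketch of the idea: agents follow a simple "imitate a random peer" rule (a voter-model dynamic), and the population-level distribution of states emerges from those local interactions. All names and parameters here are illustrative, not from the book.

```python
import random

def run_abm(n_agents: int = 100, steps: int = 200, seed: int = 0) -> float:
    """Run a minimal imitation dynamic; return the fraction of agents in state 1."""
    random.seed(seed)
    # Each agent holds a binary state; no agent encodes the macro outcome.
    states = [random.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(steps):
        i = random.randrange(n_agents)
        j = random.randrange(n_agents)
        states[i] = states[j]  # local rule: copy a randomly chosen peer
    return sum(states) / n_agents
```

Over many steps the imitation rule drifts the population toward consensus, a macro pattern no individual rule specifies.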
Agentic System
Core Framework: A system exhibiting all six operational properties: Goal, Perception, Action, Iteration, Adaptation, and Termination (GPA+IAT). Agentic systems are production-ready and can operate across multiple cycles with learning and graceful stopping. This represents Level 2 in the three-level hierarchy.
AI Agent
Core Framework: An agentic system (Level 2) whose capabilities are powered by artificial intelligence or machine learning, particularly large language models (LLMs). This represents Level 3 in the three-level hierarchy.
Autonomy Spectrum
Analytical Dimensions: The degree to which an agent sets its own agenda versus following explicit instructions. Ranges from delegated proxies (executing specific commands) to self-directed entities (independently identifying and pursuing objectives). Higher autonomy requires stronger governance controls.
Causal Theory of Action
Philosophy: Davidson's theory that intentional actions are explained by an agent's beliefs and desires that causally produce the behavior. Provides philosophical grounding for understanding how mental states (or their computational analogues) drive agent behavior.
Chain-of-Thought
AI & Machine Learning: A prompting technique where AI models generate intermediate reasoning steps before producing a final answer. Chain-of-thought improves accuracy on complex tasks and provides transparency into the agent's reasoning process, supporting audit and verification.
Confidence Thresholds
Specialized Technical: Predetermined certainty levels below which an agent stops autonomous action and escalates to human oversight. Setting appropriate thresholds balances efficiency (avoiding unnecessary escalation) with safety (ensuring human review of uncertain situations).
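A minimal sketch of this routing logic. The 0.85 threshold and the function name are illustrative assumptions, not values from the book:

```python
# Hypothetical confidence gate: execute autonomously only above the threshold.
CONFIDENCE_THRESHOLD = 0.85  # illustrative value; tuned per risk profile

def route_decision(action: str, confidence: float) -> str:
    """Return 'execute' for confident actions, 'escalate' for uncertain ones."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return "execute"
    return "escalate"
```

Raising the threshold trades throughput for safety: more actions go to human review.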
Delegation
Architecture: The assignment of subtasks from one agent to another in multi-agent systems. Delegation patterns include hierarchical orchestration, peer coordination, and specialist routing.
Dimensional Calibration
Governance: The process of matching governance control intensity to system risk characteristics. The four key dimensions are autonomy level, entity frame, goal dynamics, and persistence.
Emergent Behavior
Specialized Technical: Properties or behaviors exhibited by multi-agent systems that no individual agent possesses, arising from agent interactions. Emergent behavior can be beneficial (collective intelligence) or problematic (unexpected system dynamics), requiring system-level governance.
Entity Frame
Analytical Dimensions: The category of entity being analyzed for agency: human-centered (individual decision-makers), institutional (organizations acting through representatives), or machine-centered (AI systems). Different frames emphasize different aspects of agency and require different governance approaches.
Episodic Memory
Architecture: The history of actions and outcomes for a specific engagement—analogous to a matter file. Captures what the agent did, found, and observed, enabling continuity across sessions.
Escalation
Architecture: The process of transferring control from an agent to a human when the agent encounters situations beyond its competence, authority, or confidence threshold. Escalation is a safety mechanism distinct from termination.
Goal
The Six Properties: The first foundational property (G in GPA). An agent's objective or purpose that guides its behavior. Goals can be explicit instructions, implicit preferences, or emergent from training. Governance requires goal authorization, alignment verification, and monitoring.
Goal Dynamics
Analytical Dimensions: How an agent relates to its objectives over time: accepting fixed goals, negotiating modifications, or autonomously setting new objectives. Dynamic goals require governance mechanisms for goal authorization, drift detection, and alignment verification.
Governance Surface
Governance: The set of technical capabilities that enable oversight of agent behavior, including structured logging, override mechanisms, state snapshots, privilege management, and escalation hooks.
GPA (Goal, Perception, Action)
Core Framework: The three foundational properties that define minimal agency. Goal provides direction, Perception enables environmental awareness, and Action allows the system to effect change. Together, they form the basis for all agentic behavior.
Hallucination
AI & Machine Learning: The generation of plausible-sounding but false or fabricated information by an AI system. In legal contexts, this includes invented case citations or nonexistent statutes; in finance, fabricated data or regulations. Hallucination risk requires verification controls and human oversight.
Human-in-Command (HIC)
Governance: A governance model where humans set policies and boundaries but agents operate with significant autonomy within those constraints. HIC is appropriate for well-understood, lower-risk tasks.
Human-in-the-Loop (HITL)
Governance: A governance model where humans approve each significant agent action before execution. HITL provides maximum oversight but limits throughput and is appropriate for high-stakes, irreversible actions.
Human-on-the-Loop (HOTL)
Governance: A governance model where agents operate autonomously but humans monitor dashboards and can intervene when needed. HOTL balances efficiency with oversight for medium-risk operations.
IAT (Iteration, Adaptation, Termination)
Core Framework: The three operational properties that distinguish production-ready agentic systems from basic agents. Iteration enables multi-step execution, Adaptation allows learning from experience, and Termination ensures graceful stopping.
In-Context Learning
Technical Patterns: The ability of language models to adapt behavior based on examples or instructions provided in the prompt, without updating model weights. This enables few-shot learning and dynamic capability extension.
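A minimal sketch of the few-shot flavor of this pattern: labeled examples are placed in the prompt itself, and the model is expected to continue the pattern. The example texts, labels, and format are hypothetical:

```python
# Illustrative few-shot prompt builder; the model never sees a weight update,
# only these in-prompt examples.
EXAMPLES = [
    ("The court granted the motion to dismiss.", "procedural"),
    ("Revenue rose 12% year over year.", "financial"),
]

def build_few_shot_prompt(query: str) -> str:
    """Assemble labeled examples followed by the unlabeled query."""
    blocks = [f"Text: {text}\nLabel: {label}" for text, label in EXAMPLES]
    blocks.append(f"Text: {query}\nLabel:")
    return "\n\n".join(blocks)
```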
Information Asymmetry
Legal & Economic: A condition where principals and agents have unequal access to relevant information, enabling agents to act in ways principals cannot fully observe or evaluate. AI systems often possess knowledge or reasoning that humans cannot directly inspect.
Intent
Architecture: The interpreted meaning behind a user's request that guides agent behavior. Intent extraction transforms ambiguous natural language into actionable goals, often requiring clarification or constraint validation.
Intentional Action
Philosophy: Anscombe's concept that actions are intentional "under a description"—the same physical movement can be intentional under one description and unintentional under another. Relevant for analyzing AI agent behavior and attributing responsibility.
Intentional Stance
Philosophy: Dennett's pragmatic framework for understanding agency: treating entities as rational goal-pursuers when doing so yields reliable behavioral predictions, regardless of their internal mechanisms. Useful for analyzing AI systems without resolving metaphysical questions about machine consciousness.
Iteration
The Six Properties: The first operational property (I in IAT). The ability to execute multiple perceive-act cycles, building on prior state and environmental feedback. Iteration enables complex, multi-step tasks and requires audit trails for reproducibility.
Large Language Model (LLM)
AI & Machine Learning: A type of artificial intelligence trained on vast text corpora to understand and generate human language. LLMs power most modern AI agents, enabling natural language interaction, reasoning, and task execution. Examples include GPT-4, Claude, and Gemini.
LLM-as-Agent Pattern
AI & Machine Learning: The contemporary architectural approach where a large language model iteratively orchestrates tool calls, observes results, and adapts its strategy to achieve goals. This pattern underlies most modern AI agents in professional applications.
MCP (Model Context Protocol)
Technical Patterns: An open protocol developed by Anthropic for connecting AI models to external tools and data sources. MCP standardizes how agents access capabilities like file systems, databases, and APIs.
Memory
Architecture: The mechanism by which agents retain information across interactions. Memory types include working memory (within session), episodic memory (past events), semantic memory (facts), and procedural memory (skills).
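One way the four memory types above might be organized in code. The class layout and method names are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Hypothetical container mirroring the four memory types."""
    working: list = field(default_factory=list)     # within-session context
    episodic: list = field(default_factory=list)    # past events and outcomes
    semantic: dict = field(default_factory=dict)    # facts and principles
    procedural: dict = field(default_factory=dict)  # skills and routines

    def end_session(self) -> None:
        """Archive the working context as an episode, then clear it."""
        if self.working:
            self.episodic.append(list(self.working))
            self.working.clear()
```

Separating the stores lets governance apply different retention and review policies to each.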
Moral Hazard
Legal & Economic: A principal-agent problem where agents take excessive risks or act against principal interests because they do not bear the full consequences. In AI contexts, relates to agents taking actions that benefit short-term metrics while creating long-term risks.
Multi-Agent System (MAS)
AI & Machine Learning: A system where multiple autonomous agents interact, cooperate, or compete to achieve individual or collective goals. Examples include trading systems with multiple algorithms, distributed due diligence teams, or coordinated compliance monitoring.
Perception
The Six Properties: The second foundational property (P in GPA). The ability to observe and interpret the environment through sensors, APIs, or data sources. Perception determines what information an agent can access and use for decision-making.
Perception-Action Loop
Specialized Technical: The iterative cycle of sensing the environment, processing observations, taking actions, and observing consequences. This continuous loop distinguishes agents from systems that process input once and produce output without feedback.
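The cycle can be sketched over a toy environment (a counter the agent increments toward a target); the environment and policy are illustrative stand-ins:

```python
def run_loop(target: int, max_cycles: int = 10) -> int:
    """Perceive-act loop: observe state, check the goal, act, repeat."""
    state = 0
    for _ in range(max_cycles):
        observation = state        # perceive the environment
        if observation >= target:  # goal check against the observation
            break
        state += 1                 # act: effect a change, observed next cycle
    return state
```

The `max_cycles` cap is the loop's built-in stopping condition, preventing runaway execution when the goal is unreachable.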
Persistence
Analytical Dimensions: The characteristic of maintaining state and pursuing objectives over extended periods, distinguishing agents from one-shot reactive systems. Persistent agents accumulate context, learn from experience, and require governance for long-running operations.
Planning
Architecture: The process of decomposing goals into sequences of actions. Planning patterns include reactive (ReAct), hierarchical, and multi-agent orchestration. Planning determines how iteration cycles are structured.
Principal-Agent Relationship
Legal & Economic: An economic framework analyzing relationships where principals engage agents with delegated decision-making authority. Focuses on incentive alignment, information asymmetry, and agency costs. Foundational for understanding AI alignment challenges.
RAG (Retrieval-Augmented Generation)
Technical Patterns: A pattern that enhances language model responses by retrieving relevant documents from a knowledge base before generation. RAG improves accuracy and enables grounding in authoritative sources.
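A toy version of the retrieve-then-generate flow. Word-overlap scoring stands in for the vector-embedding retrieval real systems use, and the "generation" step is a placeholder string:

```python
def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query (toy scorer)."""
    q_words = set(query.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def answer(query: str, docs: list[str]) -> str:
    """Ground the response in the retrieved document before 'generating'."""
    context = retrieve(query, docs)
    return f"Based on: {context}"
```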
ReAct (Reasoning + Acting)
Technical Patterns: An agent architecture pattern that interleaves reasoning traces with action execution. The agent thinks about what to do, takes an action, observes the result, and continues the cycle until completion.
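A skeleton of the cycle, with a scripted plan standing in for the LLM's reasoning. The trace format, tool table, and `finish` convention are illustrative assumptions:

```python
def react_loop(tools: dict, plan: list, max_steps: int = 5) -> list:
    """Interleave thought, action, and observation until a 'finish' step.

    plan: list of (thought, tool_name, argument) tuples, a scripted
    stand-in for model-generated reasoning.
    """
    trace = []
    for thought, tool_name, arg in plan[:max_steps]:
        trace.append(f"Thought: {thought}")        # reasoning step
        if tool_name == "finish":
            trace.append(f"Answer: {arg}")          # terminate with an answer
            break
        observation = tools[tool_name](arg)         # acting step
        trace.append(f"Observation: {observation}") # feed result back in
    return trace
```

The appended trace doubles as an audit trail, which is why ReAct-style runs are comparatively easy to review.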
Reinforcement Learning (RL)
AI & Machine Learning: A machine learning approach where agents learn optimal behavior through trial and error, receiving rewards or penalties for their actions. RL agents discover effective strategies without explicit programming, raising governance questions about learned behaviors.
Semantic Memory
Architecture: General principles and institutional knowledge available for retrieval—analogous to a precedent archive. Represents accumulated expertise that applies across engagements.
Stopping Conditions
Specialized Technical: Criteria that determine when an agent terminates operation, including goal satisfaction, resource limits, time constraints, error thresholds, or confidence levels requiring human review. Well-defined stopping conditions prevent runaway execution.
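A minimal sketch evaluating several such conditions in priority order; the condition names, limits, and return values are illustrative:

```python
def should_stop(goal_met: bool, steps: int, errors: int,
                max_steps: int = 50, max_errors: int = 3):
    """Return the reason to stop, or None to keep running.

    Checked in priority order: success first, then failure modes.
    """
    if goal_met:
        return "goal_satisfied"
    if errors >= max_errors:
        return "error_threshold"
    if steps >= max_steps:
        return "resource_limit"
    return None
```

Returning a named reason rather than a bare boolean makes termination auditable: logs record why the agent stopped.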
Termination
The Six Properties: The third operational property (T in IAT). The ability to recognize when to stop executing, whether due to goal completion, resource limits, errors, or the need for human escalation. Proper termination prevents runaway execution.
Three-Level Hierarchy
Core Framework: The conceptual framework distinguishing three levels of agency: Level 1 (Agent) with GPA properties, Level 2 (Agentic System) with all six GPA+IAT properties, and Level 3 (AI Agent) where capabilities are AI-powered.
Tool Orchestration
Specialized Technical: The capability of an agent to independently select, invoke, and coordinate external tools (APIs, databases, services) based on task requirements. Tool orchestration represents high autonomy and requires governance of tool access permissions.
Tools
Architecture: External capabilities that extend an agent's perception and action abilities. Tools include APIs, databases, file systems, and specialized functions. Tool access must be governed through least-privilege principles.
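One way least-privilege tool access might look in code. The role names, tool names, and permission table are hypothetical:

```python
# Illustrative permission table: each role may invoke only its granted tools.
PERMISSIONS = {
    "research_agent": {"search_cases", "read_file"},
    "filing_agent": {"read_file", "submit_filing"},
}

def invoke(role: str, tool: str, registry: dict):
    """Invoke a tool only if the role's grant includes it."""
    if tool not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not use {tool}")
    return registry[tool]()
```

Denying by default (an empty set for unknown roles) keeps newly added agents from inheriting any access.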
Trigger
Architecture: The event or condition that initiates agent execution. Triggers can be explicit (user command), scheduled (time-based), reactive (event-driven), or chained (from another agent). Understanding triggers is essential for governance.
Browse by Category
Terms are organized into ten categories spanning technical architecture, governance frameworks, and foundational concepts from law, economics, and philosophy.
Go Deeper
These definitions are drawn from Agentic AI in Law and Finance, which provides comprehensive treatment of each concept with examples, governance implications, and practical guidance.