That's because there are dozens of slightly (or significantly) different definitions floating around, and everyone who uses the term likes to pretend that their definition is the only one out there and should be obvious to everyone else.
I collect agent definitions. I think the two most important at the moment are Anthropic's and OpenAI's.
The Anthropic one boils down to this: "Agents are models using tools in a loop". It's a good technical definition that makes sense to software developers. https://simonwillison.net/2025/May/22/tools-in-a-loop/
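The "models using tools in a loop" definition is concrete enough to sketch in code. Here's a minimal, hypothetical illustration: `call_model` is a stand-in for a real LLM API call, and `lookup_population` is a made-up tool; neither name comes from Anthropic or any real library. The point is the shape of the loop: call the model, execute whatever tool it requests, feed the result back, and repeat until it produces a final answer.

```python
def lookup_population(city):
    # Hypothetical tool: canned data standing in for a real API.
    return {"Paris": 2_100_000}.get(city, 0)

TOOLS = {"lookup_population": lookup_population}

def call_model(messages):
    # Stand-in for a real LLM call. On the first turn it requests a
    # tool; once a tool result is in the conversation, it answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_population", "args": {"city": "Paris"}}
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"answer": f"Population: {result}"}

def run_agent(task):
    messages = [{"role": "user", "content": task}]
    while True:  # the loop: keep calling the model until it answers
        response = call_model(messages)
        if "answer" in response:
            return response["answer"]
        # Execute the requested tool and append its result.
        tool_result = TOOLS[response["tool"]](**response["args"])
        messages.append({"role": "tool", "content": tool_result})

print(run_agent("What is the population of Paris?"))
# prints: Population: 2100000
```

Real implementations swap in an actual model API and a richer tool schema, but the control flow is exactly this while-loop.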
The OpenAI one is a lot more vague: "AI agents are AI systems that can do work for you independently. You give them a task and they go off and do it." https://simonwillison.net/2025/Jan/23/introducing-operator/
I've collected a bunch more here: https://simonwillison.net/tags/agent-definitions/ — but I think the above two are the most widely used, at least in the LLM space right now.