
Why is everyone talking about agents?

By Dr. Burak ÇIVITCIOĞLU, Associate Professor in Artificial Intelligence, Machine Learning & Deep Learning at aivancity

Since the start of the AI revolution, which we can date to the release of ChatGPT, the capabilities of large language models (LLMs) have been accelerating rapidly.

Let’s put this into perspective:

Just a few months ago, Veo 3 didn’t exist — and now, it’s setting new benchmarks in video generation.
A little further back, we were just beginning to see improved reasoning with Claude 3.5 Sonnet and GPT-4o.
Today, GPT-4o is the baseline. We use it for quick responses, not necessarily deep reasoning.

Since December 2024, we’ve entered what is now called the era of reasoning models. It began with o1, which has already been replaced. Now, we have access to o3, OpenAI’s most advanced reasoning model — as of today.

And yet, somehow, it already feels like these tools have always been with us. It’s hard to believe ChatGPT launched just 2.5 years ago.

It’s not enough to talk about smarter models without talking about how much cheaper they’ve become. Let’s get specific.

LLM pricing is typically measured in price per one million tokens.
A token is basically a chunk of a word.
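
To make that concrete, here is a minimal sketch of token counting with the open-source tiktoken library. The tokenizer name is the one used by GPT-4-era models, and the price variable is a hypothetical placeholder, not any provider's actual rate.

```python
# Count tokens the way GPT-4-era models do, using the tiktoken library.
# price_per_million is a placeholder: plug in whatever rate your
# provider charges per one million tokens.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer for GPT-4-era models

text = "A token is basically a chunk of a word."
tokens = enc.encode(text)
print(len(tokens), "tokens:", [enc.decode([t]) for t in tokens])

price_per_million = 2.0  # hypothetical $ per 1M tokens
print("estimated cost:", len(tokens) / 1_000_000 * price_per_million)
```

A short sentence like this is only around ten tokens, which is why pricing is quoted per million rather than per token.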

Now let us pause for a second. Can you guess what 1 million tokens might cost today, and what they cost when ChatGPT was first released?

Here’s the reality:

Worth noting: the cheapest model today is still more capable than GPT-3 was in 2020.

This massive price drop changed everything. It unlocked access for researchers, developers, and hobbyists. Some models, like those from Mistral, are free for educational or personal use under the right conditions.

But how does this relate to AI Agents?

OpenAI defines it simply:

“Agents are systems that independently accomplish tasks on your behalf.”

Here’s how that works in practice:

  1. Task: The agent receives a natural language task.
  2. Plan: It breaks that task down into subtasks using LLMs.
  3. Tools: It executes the plan using tools — like browsing the web, running a script, or querying a database.
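
To make these three steps concrete, here is a minimal Python sketch of the loop. The call_llm, web_search, and summarize helpers are hypothetical stubs standing in for a real LLM API and real tools, so the example runs on its own.

```python
# Minimal sketch of the Task -> Plan -> Tools loop described above.
# call_llm, web_search, and summarize are stubs standing in for a real
# LLM API call and real tools.
import json

def call_llm(prompt: str) -> str:
    # A real agent would send the prompt to an LLM here; we return a
    # canned plan so the sketch runs on its own.
    return json.dumps([
        {"tool": "web_search", "args": {"query": "aivancity Eduniversal ranking"}},
        {"tool": "summarize", "args": {"text": "search results go here"}},
    ])

def web_search(query: str) -> str:
    return f"(stub) top results for: {query}"

def summarize(text: str) -> str:
    return f"(stub) summary of: {text}"

TOOLS = {"web_search": web_search, "summarize": summarize}

def run_agent(task: str) -> list[str]:
    # 1. Task: the agent receives a natural-language task.
    # 2. Plan: the LLM breaks it into tool calls.
    plan = json.loads(call_llm(f"Break this task into tool calls: {task}"))
    # 3. Tools: each step is executed with the matching tool.
    return [TOOLS[step["tool"]](**step["args"]) for step in plan]

for result in run_agent("Find information about aivancity and its ranking"):
    print(result)
```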

Let’s say you ask an agent to “find information about Aivancity School of AI and Data for Business and Society, its ranking and educational quality.”
Here’s what happens: the agent plans a few web searches, runs them with its tools, and compiles what it finds.

As a result, the agent will find that aivancity is ranked 1st in France in the Eduniversal ranking of AI and Data Science schools.

So you can think of an AI agent as having three main pillars: language understanding, planning, and tool execution.

Behind the scenes, all of this is made possible by LLMs: understanding what you say, deciding what to do, and executing with an awareness of each tool’s limitations and capabilities.
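
That awareness of tool limitations and capabilities usually comes from the descriptions the agent is given for each tool. Here is an illustrative Python sketch of such a description, loosely following the JSON-schema style used by function-calling APIs; the exact field names vary by provider, so treat it as a pattern rather than a specific API.

```python
# Sketch of how a tool's capabilities and limits can be described to an
# LLM. The field names follow a common function-calling convention and
# are illustrative, not a specific provider's format.
search_tool = {
    "name": "web_search",
    "description": "Search the web and return the top results. "
                   "Cannot access pages behind a login.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query"},
            "max_results": {"type": "integer", "description": "1-10"},
        },
        "required": ["query"],
    },
}
```

The model reads the description and the parameter schema when deciding whether to call the tool and with which arguments.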

AI Agents have existed as a concept for a while. What has changed is the economics and the rapid performance gains of LLMs.

Thanks to lower costs and better models, we are no longer just generating text. We’re generating outcomes.

That means we can generate slides from a lecture video, turn raw transcripts into organized notes, book flights, and sort emails by urgency or importance, all using AI Agents.

In other words, LLMs are now becoming doers.

And that is why everyone is talking about Agentic AI.
