Agentic AI - Series 4


In this blog, we will see how to create a simple agent using an AI framework like LangChain with a simple LLM model, gpt-4o-mini.

from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
import os  # use os.environ to set OPENAI_API_KEY if it is not already set

# Creating a tool
@tool
def add_numbers(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y

@tool is a LangChain decorator used to define a tool. This is a simple tool that adds two numbers.
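As a quick sanity check (my own addition, outside the LLM flow), a tool created with @tool can also be invoked directly with a dictionary of arguments:

# Testing the tool directly, without the LLM
print(add_numbers.invoke({"x": 2, "y": 3}))   # prints 5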

The below line creates a LangChain LLM wrapper around the OpenAI chat model 'gpt-4o-mini' and binds the add_numbers tool to it.

# Binding the tool to a model
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([add_numbers])
response = llm.invoke("What is 5 + 7?")

When we print the response, is it the expected outcome? Is it 12? NOOOOO.....

Our expectation is that the LLM would invoke the add_numbers tool automatically.

LangChain does NOT automatically execute tools unless you do it explicitly.

So, what does the response contain?

The response is a message object. In it, pay attention to the attribute called "tool_calls".

tool_calls=[{'name': 'add_numbers',
             'args': {'x': 5, 'y': 7},
             'id': 'call_82oUzryF6JwTy6xacplh894c',
             'type': 'tool_call'}]

This response is received as part of the LLM invoke() call.

The LLM has parsed the given input "What is 5 + 7?", looked at the tools bound to it, and identified that the tool to be used is "add_numbers".
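For illustration, the selected tool's name and arguments can be read straight off the response, a small sketch I added here:

# Inspecting the tool call chosen by the LLM
tool_call = response.tool_calls[0]
print(tool_call['name'])   # add_numbers
print(tool_call['args'])   # {'x': 5, 'y': 7}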

In the below example, I added 2 tools.

# Creating multiple tools
@tool
def add_numbers(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y

@tool
def subtract_numbers(x: int, y: int) -> int:
    """Subtract two numbers."""
    return x - y

Let's bind both tools and invoke the LLM with the prompt "What is 5 - 7?".
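A minimal sketch of that call, assuming the two tool definitions above:

# Binding both tools and invoking the model
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([add_numbers, subtract_numbers])
response = llm.invoke("What is 5 - 7?")
print(response.tool_calls)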

tool_calls=[{'name': 'subtract_numbers',
             'args': {'x': 5, 'y': 7},
             'id': 'call_cAnd3D63Ycf20ks7KlMdWiRo',
             'type': 'tool_call'}]

From the response, we can see that the tool "subtract_numbers" is selected.

That is the smartness of the LLM, which can select the appropriate tool for the given prompt.

In the below section, I will show how to invoke the tool manually.

# Picking the first tool call from the LLM response
# (here, response is assumed to be the reply to the earlier "What is 5 + 7?" prompt)
tool_call = response.tool_calls[0]

# Executing the tool with the arguments the LLM extracted
result = add_numbers.invoke(tool_call['args'])
print(result)

This is the method of invoking the tool manually.
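In practice, instead of hard-coding which tool to run, you could look up the tool by the name the LLM returned. A rough sketch of that idea (my own addition, with tools_by_name as a hypothetical helper dictionary):

# Dispatching to the right tool based on the name in the tool call
tools_by_name = {"add_numbers": add_numbers, "subtract_numbers": subtract_numbers}

tool_call = response.tool_calls[0]
selected_tool = tools_by_name[tool_call['name']]
result = selected_tool.invoke(tool_call['args'])
print(result)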

So, what happens if a prompt is given and no suitable tool is found? For example, "What is 5 % 7?"

tool_calls=[]

We got an empty tool_calls list because NO TOOL bound to the LLM can perform the modulo operation.
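One way to handle this case, sketched here as my own suggestion, is to fall back to the model's plain text answer whenever tool_calls is empty:

# Falling back to the model's plain answer when no tool was selected
response = llm.invoke("What is 5 % 7?")
if response.tool_calls:
    tool_call = response.tool_calls[0]
    print(tool_call['name'], tool_call['args'])
else:
    print(response.content)   # the LLM answers directly, without any tool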

In the next section, we will see how to configure the LLM agent to invoke the tools dynamically based on the given prompt.
