Agentic AI - Series 2
Now we know that the brains behind Agents are LLMs. So, how do we access and use these models? That's where we rely on organizations like OpenAI, Google, Meta, Microsoft, Hugging Face, Nvidia, Groq and others, which build LLMs that serve various purposes such as text generation, image recognition, and more.
I am sure everyone has used ChatGPT, the service provided by OpenAI for interactive, chat-based conversation in our daily activities, from asking a riddle to solving math problems and other tasks.
Here is my simple interaction with ChatGPT, asking for the "Weather in California?"
Let's imagine building an Agent that needs to perform the same action as above; it then has to be done programmatically.
To do this programmatically, you need API credentials. I have generated OpenAI API credentials via API keys - OpenAI API
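As a minimal sketch, the .env file referenced in the code below simply holds the key under the variable name the code expects (the key value shown here is a placeholder, not a real key):
# openai_api.env
OPENAI_API_KEY=sk-...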
Let's ask the same question to ChatGPT programmatically:
from openai import OpenAI
from dotenv import load_dotenv
import os

# Load the OpenAI API key from the .env file
env_path = r'C:\Users\rajas\PycharmProjects\Agentic_AI\Crew_AI\openai_api.env'
load_dotenv(env_path)

# Initialize the OpenAI client with the key
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# Send the input to OpenAI using the LLM model gpt-4o-mini
response = client.responses.create(
    model="gpt-4o-mini",
    input="Weather in California."
)
print(response.output_text)
Here is the output I received.
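As a side note, the same question can also be asked through OpenAI's Chat Completions endpoint. Here is a short, equivalent sketch, assuming the same client initialized above:
# Equivalent call using the Chat Completions endpoint
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Weather in California."}],
)
print(completion.choices[0].message.content)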
Now that we know how to use ChatGPT programmatically, let's continue the series by building a simple AI Agent.
