How to Build Your First AI Agent and Deploy it to Sevalla | HackerNoon

News Room · Published 6 January 2026 (last updated 8:28 AM)

Artificial intelligence is changing how we build software.

Just a few years ago, writing code that could talk, decide, or use external data felt hard.

Today, thanks to new tools, developers can build smart agents that read messages, reason about them, and call functions on their own.

One such platform that makes this easy is LangChain. With LangChain, you can link language models, tools, and apps together. You can also wrap your agent inside a FastAPI server, then push it to a cloud platform for deployment.

This article will walk you through building your first AI agent. You will learn what LangChain is, how to build an agent, how to serve it through FastAPI, and how to deploy it on Sevalla.

What is LangChain?

LangChain is a framework for working with large language models. It helps you build apps that think, reason, and act.

A model on its own only gives text replies, but LangChain lets it do more. It lets a model call functions, use tools, connect with databases, and follow workflows.

Think of LangChain as a bridge. On one side is the language model. On the other side are your tools, data sources, and business logic. LangChain tells the model what tools exist, when to use them, and how to reply. This makes it ideal for building agents that answer questions, automate tasks, or handle complex flows.

Many developers use LangChain because it is flexible: it supports many AI models and fits well with Python.

LangChain also makes it easier to move from prototype to production. Once you learn how to create an agent, you can reuse the pattern for more advanced use cases.

I have recently published a detailed LangChain tutorial here.

Building Your First Agent with LangChain

Let us make our first agent. It will respond to user questions and call a tool when needed.

We will give it a simple weather tool, then ask it about the weather in a city. Before that, create a file called .env and add your OpenAI API key. LangChain will automatically use it when making requests to OpenAI.

OPENAI_API_KEY=<key>
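Under the hood, load_dotenv (from the python-dotenv package) reads KEY=VALUE lines from this file into the process environment. A minimal stand-in sketch of that mechanism, just to make the behaviour concrete (the real library handles quoting and more edge cases):

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv: copy KEY=VALUE
    lines from a file into os.environ, skipping blanks and comments."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Once the variable is in the environment, os.getenv("OPENAI_API_KEY") returns it, which is how the OpenAI client picks it up.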

Here is the code for our agent:

from langchain.agents import create_agent
from dotenv import load_dotenv

# load environment variables
load_dotenv()

# defining the tool that LLM can call
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

# Creating an agent
agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}]}
)
# print the agent's final reply
print(result["messages"][-1].content)

This small program shows the power of LangChain agents.

First, we import create_agent, which builds the agent for us. Then we write a function called get_weather. It takes a city name and returns a friendly sentence.

The function acts as our tool. A tool is something the agent can use. In real projects, tools might fetch prices, store notes, or call APIs.
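As an illustration, here is what a second tool could look like. The get_price function and its numbers are made up for this example; the point is that the docstring and type hints are what the model reads when deciding whether and how to call it:

```python
# Illustrative second tool -- the ticker prices here are stand-in data,
# not real quotes. The docstring and type hints describe the tool to
# the agent, so write them carefully.
def get_price(ticker: str) -> str:
    """Look up the latest price for a stock ticker."""
    prices = {"AAPL": 189.50, "GOOG": 141.20}
    price = prices.get(ticker.upper())
    if price is None:
        return f"No price found for {ticker}"
    return f"{ticker.upper()} is trading at ${price:.2f}"
```

You would register it alongside the weather tool by passing tools=[get_weather, get_price] to create_agent.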

Next, we call create_agent. We give it three things. We pass the model we want to use. We list the tools we want it to call. And we give a system prompt. The system prompt tells the agent who it is and how it should behave.

Finally, we run the agent. We call invoke with a message.

The user asks for the weather in San Francisco. The agent reads this message. It sees that the question needs the weather function. So it calls our tool get_weather, passes the city, and returns an answer.

Even though this example is tiny, it captures the main idea. The agent reads natural language, figures out what tool to use, and sends a reply.

From here, you can add more tools or replace the weather function with one that connects to a real API. But this is enough for us to wrap and deploy.
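As one sketch of that upgrade, here is a version of the weather tool backed by wttr.in, a free public weather service. The endpoint and response shape belong to wttr.in, not LangChain, and the fetch parameter exists only so the function can be exercised without a network call:

```python
import json
from urllib.request import urlopen

def get_weather(city: str, fetch=None) -> str:
    """Get current weather for a given city via the public wttr.in API."""
    url = f"https://wttr.in/{city}?format=j1"  # format=j1 requests JSON
    # fetch is injectable for testing; by default it performs a real HTTP GET
    fetch = fetch or (lambda u: json.load(urlopen(u)))
    current = fetch(url)["current_condition"][0]
    desc = current["weatherDesc"][0]["value"]
    return f"Weather in {city}: {desc}, {current['temp_C']}°C"
```

Because the signature still takes a city and returns a string, the agent code does not change at all.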

Wrapping Your Agent with FastAPI

The next step is to serve our agent. FastAPI helps us expose our agent through an HTTP endpoint. That way, users and systems can call it through a URL, send messages, and get replies.

To begin, you install FastAPI and write a simple file like main.py. Inside it, you import FastAPI, load the agent, and write a route.

When someone posts a question, the API forwards it to the agent and returns the answer. The flow is simple.

The user talks to FastAPI. FastAPI talks to your agent. The agent thinks and replies.

Here is the FastAPI wrapper for your agent:

from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn
from langchain.agents import create_agent
from dotenv import load_dotenv
import os

load_dotenv()

# defining the tool that LLM can call
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

# Creating an agent
agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.get("/")
def root():
    return {"message": "Welcome to your first agent"}

@app.post("/chat")
def chat(request: ChatRequest):
    result = agent.invoke({"messages":[{"role":"user","content":request.message}]})
    return {"reply": result["messages"][-1].content}

def main():
    port = int(os.getenv("PORT", 8000))
    uvicorn.run(app, host="0.0.0.0", port=port)

if __name__ == "__main__":
    main()

Here, FastAPI defines a /chat endpoint. When someone sends a message, the server calls our agent. The agent processes it as before. Then FastAPI returns a clean JSON reply. The API layer hides the complexity inside a simple interface.

At this point, you have a working agent server. You can run it on your machine, call it with Postman or cURL, and check responses. When this works, you are ready to deploy.
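If you prefer code over Postman, a short Python client works too. The base URL and message below are placeholders; the request and response shapes match the ChatRequest model and the {"reply": ...} JSON the route returns:

```python
import json
from urllib.request import Request, urlopen

def build_chat_request(base_url: str, message: str) -> Request:
    """Build the POST request the /chat endpoint expects."""
    return Request(
        f"{base_url}/chat",
        data=json.dumps({"message": message}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send(req: Request) -> str:
    """Send the request and extract the 'reply' field from the JSON body."""
    with urlopen(req) as resp:
        return json.loads(resp.read())["reply"]

req = build_chat_request("http://localhost:8000", "What is the weather in Paris?")
# With the server from main.py running locally:
# print(send(req))
```

After deployment, the same client works by swapping the base URL for your hosted one.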

Postman Result

Deployment to Sevalla

You can choose any cloud provider, like AWS, DigitalOcean, or others to host your agent. I will be using Sevalla for this example.

Sevalla is a developer-friendly PaaS provider. It offers application hosting, databases, object storage, and static site hosting for your projects.

Most platforms charge you for creating cloud resources, but Sevalla comes with $50 of free credit, so we won't incur any costs for this example.

Let’s push this project to GitHub so that we can connect our repository to Sevalla. We can also enable auto-deployments so that any new change to the repository is automatically deployed.

You can also fork my repository from here.

Log in to Sevalla and click on Applications -> Create new application. You will see the option to link your GitHub repository to create a new application.

Create application

Use the default settings and click "Create application". Next, add your OpenAI API key to the environment variables: once the application is created, open the "Environment variables" section and save the OPENAI_API_KEY value as an environment variable.

Sevalla Environment Variables

Now we are ready to deploy our application. Click on “Deployments” and click “Deploy now”. It will take 2–3 minutes for the deployment to complete.

Sevalla Deployment

Once done, click on "Visit app". You will see the application served from a URL ending in sevalla.app. This is your new root URL: replace localhost:8000 with it and test again in Postman.

Postman Response

Congrats! Your first AI agent with tool calling is now live. You can extend it by adding more tools and other capabilities; push your code to GitHub, and Sevalla will automatically deploy your changes to production.

Conclusion

Building AI agents is no longer a task for experts. With LangChain, you can write a few lines and create reasoning tools that respond to users and call functions on their own.

By wrapping the agent with FastAPI, you give it a doorway that apps and users can access. Finally, Sevalla makes it easy to push your agent live, monitor it, and run it in production.

This journey from agent idea to deployed service shows what modern AI development looks like. You start small. You explore tools. You wrap them and deploy them.

Then you iterate, add more capability, improve logic, and plug in real tools. Before long, you have a smart, living agent online. That is the power of this new wave of technology.

Hope you enjoyed this article. Sign up for my free newsletter TuringTalks.ai for more hands-on tutorials on AI. You can also visit my website.
