
Why Golang Belongs in Your AI Stack—Especially for Preprocessing Pipelines | HackerNoon

News Room
Published 10 July 2025 · Last updated 10 July 2025 at 5:34 PM

A few years back, whenever I heard of AI/ML engineering, I assumed that Python was essential for these programs. That seems to be changing with the exponential growth in the usage and deployment of AI applications. While Python continues to be the most commonly used language for training ML models and for AI engineering experimentation, many components of AI engineering today could benefit from other languages. One such powerful language (and my favorite) is Golang. We can use Golang to develop production-ready AI programs where high performance and low latency are crucial, and to build the infrastructure that AI applications run on.

Python for experiments, Golang for operations

Python remains the go-to choice for training models and quick experimentation. But when it's time to serve those models in applications running at scale, Golang is a strong choice because it offers:

  • Fast startup and low memory footprint
  • Built-in concurrency with goroutines
  • Single binary builds (perfect for Docker and k8s)

That’s why many companies use Go in their production stacks.

Golang's built-in concurrency support and low memory footprint compared to other languages make it ideal for many use cases. In this post I specifically want to show how beneficial Golang can be in preprocessing pipelines for LLMs.

High throughput pre-processing of prompts

LLM pipelines often involve multiple steps: preprocessing, prompt augmentation, vector search, and post-processing. Go's concurrency model makes these steps easy to parallelize with goroutines and channels.

Let's consider building an application that pre-processes prompts before sending them to an LLM. Pre-processing could involve many steps: enriching the prompt with additional context, running safety filters, or removing personally identifiable information (PII). If we want to do this in a high-throughput, low-latency setup, Golang's concurrency constructs such as goroutines and wait groups are a great fit. Each pre-processing step can be CPU-bound or I/O-bound, and some tasks take longer than others, so performing them all sequentially can lead to high latency.

Parallel processing of prompts, on the other hand, reduces the total processing time and keeps the overall latency of the system low. It also improves utilization of the available CPU cores.

This approach can be used either for applications that stream prompts as they come, or process and send them in batches.
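For the streaming case, the fan-out can be sketched with a fixed pool of workers reading from a channel. This is a minimal illustration (the function and variable names here are my own, and `preprocess` is a stand-in for the real pipeline steps):

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// preprocess stands in for the real pipeline steps (lowercasing, PII redaction, etc.).
func preprocess(prompt string) string {
	return strings.ToLower(strings.TrimSpace(prompt))
}

// streamPrompts fans prompts arriving on `in` out to nWorkers goroutines and
// streams processed results on the returned channel (order is not preserved).
func streamPrompts(in <-chan string, nWorkers int) <-chan string {
	out := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < nWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range in {
				out <- preprocess(p)
			}
		}()
	}
	// Close the output channel once all workers have drained the input.
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	in := make(chan string, 2)
	in <- "  Hello "
	in <- "WORLD"
	close(in)
	for r := range streamPrompts(in, 4) {
		fmt.Println(r)
	}
}
```

Because workers run concurrently, results arrive in whatever order they finish; the batch version below instead preserves order by writing into an indexed slice.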

The example below covers a very simple pre-processing use case: redacting email addresses (a form of PII) and converting the prompt to lowercase. Using Golang, we can pre-process several prompts in parallel through goroutines.

func processBatch(prompts []string) []string {
	var wg sync.WaitGroup
	responses := make([]string, len(prompts))

	for i, prompt := range prompts {
		wg.Add(1)
		go func(i int, prompt string) {
			defer wg.Done()
			processed := processPrompt(prompt)
			response, err := sendToOllama(processed)
			if err != nil {
				// Record the failure instead of crashing the whole batch.
				responses[i] = "error: " + err.Error()
				return
			}
			responses[i] = response
		}(i, prompt)
	}

	wg.Wait()
	return responses
}

func processPrompt(prompt string) string {
	prompt = lowercase(prompt)
	prompt = redactPII(prompt)
	return prompt
}

func lowercase(prompt string) string {
	return strings.ToLower(prompt)
}

// Compile the regex once at package level instead of on every call.
// Note the escaped dot before the top-level domain.
var emailRegex = regexp.MustCompile(`[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}`)

func redactPII(prompt string) string {
	return emailRegex.ReplaceAllString(prompt, "email")
}
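To sanity-check the redaction step in isolation, here is a minimal self-contained program using the same pattern, with the dot before the top-level domain escaped so it isn't treated as a wildcard (the names `emailPattern` and `redact`, and the sample address, are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Note the `\.` before the TLD: an unescaped `.` would match any character.
var emailPattern = regexp.MustCompile(`[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}`)

func redact(prompt string) string {
	return emailPattern.ReplaceAllString(prompt, "email")
}

func main() {
	p := strings.ToLower(redact("My email is jane.doe@example.com. Can you summarize this?"))
	fmt.Println(p) // my email is email. can you summarize this?
}
```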

For this example we're going to run a local LLM server through Ollama. It is open source and easy to set up.

LLM server installation

brew install ollama
ollama serve

In a new terminal tab

ollama run mistral

Sending pre-processed prompts to LLM

The LLM server running locally through Ollama accepts JSON requests at http://localhost:11434/api/generate.

We can send the pre-processed prompts in POST request bodies to that endpoint:
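Assuming the server from the previous section is running, the endpoint can also be exercised directly from the command line to see the request shape (the prompt here is arbitrary):

```shell
curl http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "what is the weather today?", "stream": false}'
```

The Go structs below mirror this request body and the `response` field of the reply.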

type OllamaRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type OllamaResponse struct {
	Response string `json:"response"`
}

func sendToOllama(prompt string) (string, error) {
	reqBody := OllamaRequest{
		Model:  "mistral",
		Prompt: prompt,
		Stream: false,
	}
	data, err := json.Marshal(reqBody)
	if err != nil {
		return "", err
	}

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewBuffer(data))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var result OllamaResponse
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return "", err
	}

	return result.Response, nil
}

Verification

Here’s what a sample input of prompts can look like:

	var prompts = []string{
		"What's the weather today?",
		"My email is [email protected]. Can you summarize this?",
	}

And the corresponding output:

2025/07/05 23:57:20  I don't have real-time capabilities to check the weather for you, but I can help you find out if you tell me your location! For example, you could ask "What's the weather like in San Francisco?" or specify a more specific location.
2025/07/05 23:57:20  I'm sorry for the confusion, but it seems there's no text provided for me to summarize. If you have a specific email or message that you'd like me to summarize, please paste it here or provide a link if it's accessible online. I'll do my best to help!

Real-world applications

The example above is a very simplified version of what a pre-processing pipeline could look like. Data processing pipelines matter for a variety of reasons. For instance, when developing an AI chatbot, a pre-processing pipeline is required for cleaning, standardizing, and tokenizing input. Redacting PII before prompts are passed to LLMs is essential for user privacy, and preprocessing pipelines are a natural place to do it. Golang's lightweight concurrency model makes it ideal for scaling these pipelines without the overhead of managing thread pools or async callbacks.

What’s next?

Python will continue to be the most widely used language in the AI/ML space, thanks to its vast library ecosystem and widespread adoption. But we should keep exploring other languages where the use case fits, applying Golang's strengths in high-throughput services, low-latency preprocessing, and infrastructure that needs to scale. The preprocessing pipeline we built here is just one example: you could extend this pattern for rate limiting, caching, or building API gateways for your AI services.

What’s your experience been with Go in AI infrastructure? I’d love to hear about other use cases in the comments.
