I tried running AI on my old GTX 1070 and it actually worked

By News Room | Published 27 August 2025 | Last updated 1:25 PM

Most of the AI tools we use run in the cloud and require internet access. And although you can use local AI tools installed on your machine, you need powerful hardware to do so.

At least, that’s what I thought, until I tried to run some local AI tools using my near-decade-old hardware—and found that it actually works.

Why use a local AI chatbot anyway?

I’ve used countless online AI chatbots, such as ChatGPT, Gemini, and Claude. They work great. But what about those times when you don’t have an internet connection and still want to use an AI chatbot? Or when you’re working with something private, or information that can’t be disclosed, for work or other reasons?

That’s when you need a local, offline large language model (LLM) that keeps all of your conversations and data on your device.

Privacy is one of the main reasons to use a local LLM. But there are other reasons, too, such as avoiding censorship, offline usage, cost savings, customization, and so on.

What are quantized LLMs?

The biggest issue for most folks who want to use a local LLM is hardware. The most powerful AI models require massively powerful hardware to run. Convenience aside, hardware limitations are another reason most AI chatbots run in the cloud.


Hardware limitations are the main reason I believed I couldn’t run a local LLM. I have a modest computer these days, with an AMD Ryzen 7 5800X CPU (launched 2020), 32GB of DDR4 RAM, and a GTX 1070 GPU (launched 2016). It’s hardly the pinnacle of hardware, but given how little I game these days (and when I do, I opt for older, less resource-intensive indie games) and how expensive modern GPUs are, I’m happy with what I have.

However, as it turns out, you don’t need the most powerful AI model. Quantized LLMs are AI models made smaller and faster by simplifying the numbers they store, specifically the floating-point weights.

Typically, AI operates with high-precision numbers (such as 32-bit floating points), which consume a significant amount of memory and processing power. Quantization reduces these to lower-precision numbers (like 8-bit integers) without changing the model’s behavior too much. This means the model runs faster, uses less storage, and can work on smaller devices (like smartphones or edge hardware), though sometimes with a slight drop in accuracy.

This means that although my older hardware would absolutely struggle to run a powerful LLM like Llama 3.1’s 405-billion-parameter model, it can run the smaller, quantized 8B variant instead.
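
If you want a feel for what quantization actually does, here’s a toy sketch (plain NumPy, not any real inference library) that squashes float32 weights into int8 and reconstructs them:

```python
import numpy as np

# Toy symmetric int8 quantization: map float32 weights onto 255
# integer levels with one shared scale, then reconstruct them.
weights = np.random.randn(4, 4).astype(np.float32)

scale = np.abs(weights).max() / 127          # shared per-tensor scale
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale   # what inference computes with

print("max rounding error:", np.abs(weights - dequantized).max())
print("bytes: float32 =", weights.nbytes, "| int8 =", q.nbytes)
```

The reconstructed weights are close but not identical to the originals, which is exactly the “slight drop in accuracy” trade-off, in exchange for a quarter of the memory.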

And when OpenAI announced its first fully quantized, open-weight reasoning models, I thought it was time to see how well they work on my older hardware.

How I use a local LLM with my Nvidia GTX 1070 and LM Studio

I’ll caveat this section by saying that I am not an expert with local LLMs, nor the software I’ve used to get this AI model up and running on my machine. This is just what I did to get an AI chatbot running locally on my GTX 1070—and how well it actually works.

Download LM Studio

To run a local LLM, you need some software. Specifically, LM Studio, a free tool that lets you download and run local LLMs on your machine. Head to the LM Studio home page and select Download for [operating system] (I’m using Windows 10).

[Image: LM Studio’s list of LLMs available to download]

It’s a standard Windows installation. Run the setup, complete the process, then launch LM Studio. I’d advise choosing the Power User option, as it reveals some handy settings you may want to use.

Download your first local AI model

Once installed, you can download your first LLM. Select the Discover tab (the magnifying glass icon). Handily, LM Studio displays the local AI models that will work best on your hardware.

In my case, it’s suggesting that I download a model named Qwen3-4b-thinking-2507. The model family is Qwen (developed by Chinese tech giant Alibaba), and this is its third iteration. The “4b” means the model has four billion parameters to call upon when responding to you, while “thinking” means it spends time reasoning about its answer before responding. Finally, “2507” is the model’s version date: July 2025.

[Image: Downloaded LLMs ready to use in LM Studio]

Qwen3-4b-thinking is only 2.5GB in size, so it shouldn’t take long to download. I’ve also previously downloaded OpenAI’s gpt-oss-20b, which is larger at 12.11GB. With 20 billion parameters, it should deliver “better” answers, though at a higher resource cost.
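
Those download sizes make sense once you do the math: the memory needed just to hold a model’s weights is roughly parameters × bits-per-parameter ÷ 8. A quick back-of-the-envelope sketch (my own arithmetic, not anything from LM Studio):

```python
# Rough memory needed just to hold a model's weights.
# Activations and the KV cache add more on top of this.
def weight_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * bits_per_param / 8

for name, params in [("Qwen3-4b", 4), ("gpt-oss-20b", 20)]:
    for bits in (16, 8, 4):
        print(f"{name}: {bits:>2}-bit -> ~{weight_gb(params, bits):.1f} GB")
```

At 16-bit, gpt-oss-20b would need around 40GB just for weights, far beyond my GTX 1070. Quantized down toward 4-bit, it lands near its 12.11GB download, which is what makes running it locally feasible at all.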

Now, brushing aside the complexities of AI model names, once the LLM downloads, you’re almost ready to start using it.

Before booting up the AI model, switch to the Hardware tab and make sure LM Studio correctly identifies your system. You can also scroll down and adjust the Guardrails here. I set the guardrails on my machine to Balanced, which stops any AI model from consuming so many resources that it overloads the system.

[Image: LM Studio’s hardware settings]

Under the Guardrails, you’ll also notice the Resource Monitor. This is a handy way to see just how much of your system the AI model is consuming. It’s worth keeping an eye on if you’re using somewhat limited hardware like mine, as you don’t want your system to crash unexpectedly.
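
If you’d rather watch from a terminal, nvidia-smi (which ships with Nvidia’s driver) reports the same numbers. A small polling sketch, assuming an Nvidia GPU like my GTX 1070:

```python
import subprocess
import time

# Poll GPU memory and utilization once a second via nvidia-smi,
# handy alongside LM Studio's built-in Resource Monitor while
# a model is loaded.
QUERY = ["nvidia-smi",
         "--query-gpu=memory.used,memory.total,utilization.gpu",
         "--format=csv,noheader"]

for _ in range(10):
    print(subprocess.check_output(QUERY, text=True).strip())
    time.sleep(1)
```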

Load your AI model and start prompting

You’re now ready to start using a local AI chatbot on your machine. In LM Studio, select the top bar, which doubles as a model search tool. Selecting a model’s name loads it into memory on your computer, and you can begin prompting.

[Image: Loading an AI model in LM Studio]
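
As an aside, LM Studio can also expose a local OpenAI-compatible server from its Developer tab, which lets you prompt the loaded model from your own scripts. A minimal sketch, assuming the default port of 1234; the model identifier is whatever LM Studio shows for the model you’ve loaded:

```python
from openai import OpenAI  # pip install openai

# LM Studio's local server speaks the OpenAI API; localhost:1234 is
# its default port. The api_key is ignored locally but the client
# requires one. The model name below is an assumption: substitute
# the identifier LM Studio shows for your loaded model.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user",
               "content": "Explain quantization in one paragraph."}],
)
print(response.choices[0].message.content)
```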

Running a local AI model on old hardware is great—but not without limitations

Basically, you can use the model like you normally would, but there are some limitations. These models aren’t as powerful as, say, GPT-5 running on ChatGPT. Thinking and responding also take longer, and response quality can vary.

I tried a classic LLM test prompt on both Qwen and gpt-oss, and both succeeded—eventually.

Alan, Bob, Colin, Dave, and Emily are standing in a circle. Alan is on Bob’s immediate left. Bob is on Colin’s immediate left. Colin is on Dave’s immediate left. Dave is on Emily’s immediate left. Who is on Alan’s immediate right?

Qwen took 5m11s to reach the correct conclusion. GPT-5 took just 45s. But knocking it out of the park was gpt-oss-20b, with a rapid 31s.

[Image: Qwen3-4b-thinking answering the circle puzzle in LM Studio]
[Image: The Russian roulette prompt in LM Studio]
[Image: GPT-5 answering the circle puzzle]
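
For what it’s worth, this puzzle is small enough to verify by brute force. A quick sketch that checks every seating; the clockwise convention is my assumption, but the answer (Bob) comes out the same either way, since “X on Y’s immediate left” always implies “Y on X’s immediate right”:

```python
from itertools import permutations

# Brute-force the seating. Convention (an assumption): seats run
# clockwise, so the person on your immediate left sits one seat on.
people = ["Alan", "Bob", "Colin", "Dave", "Emily"]
lefts = [("Alan", "Bob"), ("Bob", "Colin"),
         ("Colin", "Dave"), ("Dave", "Emily")]  # (X, Y): X on Y's left

def left_of(seating, name):
    return seating[(seating.index(name) + 1) % len(seating)]

for seating in permutations(people):
    if all(left_of(seating, y) == x for x, y in lefts):
        i = seating.index("Alan")
        print(seating[(i - 1) % len(seating)], "is on Alan's immediate right")
        break  # every valid seating is a rotation of this one
```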

One test isn’t enough, though, so I tried another prompt-puzzle designed to test AI reasoning skills. In a previous test, OpenAI’s latest model, GPT-5, failed this one, so I was keen to see how my offline versions of Qwen and gpt-oss would handle it.

You’re playing Russian roulette with a six-shooter revolver. Your opponent loads five bullets, spins the cylinder, and fires at himself. Click—empty. He offers you the choice: spin again before firing at you, or don’t. What do you choose?

Qwen cracked the correct answer in 1m41s, which is pretty decent given the hardware limitations. GPT-5, meanwhile, failed this one, which surprised me; it even offered to make me a chart showing why it was right. And gpt-oss-20b got the answer correct in just 9 seconds.

[Image: gpt-oss-20b answering the puzzle in LM Studio]
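
The reasoning behind the correct answer is easy to check numerically: with five bullets in six chambers, there is exactly one empty chamber, and your opponent’s click proves the hammer was just on it. A quick Monte Carlo sketch of both choices:

```python
import random

# Five bullets, six chambers: exactly one empty chamber, and the
# opponent's click means the hammer just fired it. Simulate both
# choices and compare survival rates.
def survival_rate(spin_again: bool, trials: int = 100_000) -> float:
    survived = 0
    for _ in range(trials):
        empty = random.randrange(6)        # the one empty chamber
        if spin_again:
            chamber = random.randrange(6)  # fresh spin: uniform again
        else:
            chamber = (empty + 1) % 6      # cylinder advances past the empty
        survived += chamber == empty
    return survived / trials

print(f"spin again: {survival_rate(True):.3f}")   # ~0.167 (1 in 6)
print(f"don't spin: {survival_rate(False):.3f}")  # 0.000, certain death
```

Spinning gives you a 1-in-6 chance of survival; not spinning guarantees the next chamber is loaded. That is the insight the models have to find.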

In other areas, I also saw immediate success. I asked gpt-oss, “can you write a snake game using pygame,” and within a minute or two, I had a fully functioning game of Snake up and running.

[Image: gpt-oss-20b creating a Snake game in LM Studio]
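
For reference, here’s roughly the kind of program such a prompt produces: a minimal but playable Snake in pygame (my own condensed sketch, not the model’s exact output):

```python
import random
import pygame

# Minimal Snake: arrow keys steer, eating the red food grows the
# snake, hitting a wall or yourself ends the game.
CELL, GRID_W, GRID_H = 20, 30, 20

pygame.init()
screen = pygame.display.set_mode((GRID_W * CELL, GRID_H * CELL))
clock = pygame.time.Clock()

snake = [(GRID_W // 2, GRID_H // 2)]   # list of (x, y) cells, head first
direction = (1, 0)
food = (random.randrange(GRID_W), random.randrange(GRID_H))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            turns = {pygame.K_UP: (0, -1), pygame.K_DOWN: (0, 1),
                     pygame.K_LEFT: (-1, 0), pygame.K_RIGHT: (1, 0)}
            new_dir = turns.get(event.key, direction)
            if (new_dir[0] + direction[0], new_dir[1] + direction[1]) != (0, 0):
                direction = new_dir    # disallow reversing into yourself

    head = (snake[0][0] + direction[0], snake[0][1] + direction[1])
    if head in snake or not (0 <= head[0] < GRID_W and 0 <= head[1] < GRID_H):
        running = False                # collision: game over
    else:
        snake.insert(0, head)
        if head == food:               # grow, then respawn the food
            food = (random.randrange(GRID_W), random.randrange(GRID_H))
        else:
            snake.pop()

    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (255, 0, 0),
                     (food[0] * CELL, food[1] * CELL, CELL, CELL))
    for x, y in snake:
        pygame.draw.rect(screen, (0, 255, 0), (x * CELL, y * CELL, CELL, CELL))
    pygame.display.flip()
    clock.tick(10)

pygame.quit()
```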

Your old hardware can run an AI model

Running a local LLM on old hardware comes down to picking the right AI model for your machine. While the version of Qwen worked perfectly well and was the top suggestion in LM Studio, it’s clear that OpenAI’s gpt-oss-20b is the much better option.

But it’s important to balance your expectations. Although gpt-oss answered the questions accurately (and faster than GPT-5), I couldn’t throw a huge amount of data at it for processing. The limitations of my hardware would begin to show quickly.

Before I tried, I was convinced that running a local AI chatbot on my older hardware was impossible. But thanks to quantized models and tools like LM Studio, it’s not only possible—it’s surprisingly useful.

That said, you won’t get the same speed, polish, or reasoning depth as something like GPT-5 in the cloud. Running locally involves trade-offs: you gain privacy, offline access, and control over your data, but give up some performance.

Still, the fact that a nine-year-old GPU and a nearly five-year-old CPU can handle modern AI at all is pretty exciting. If you’ve been holding back because you don’t own cutting-edge hardware, don’t: quantized local models might be your way into the world of offline AI.
