I Saw the AI Future of Video Games: It Starts With a Character Hopping Over a Box

By News Room | Published 30 March 2025 | Last updated 30 March 2025 at 10:36 AM

At its own GTC AI show in San Jose, California, earlier this month, graphics-chip maker Nvidia unveiled a plethora of partnerships and announcements for its generative AI products and platforms. At the same time, in San Francisco, Nvidia held behind-closed-doors showcases alongside the Game Developers Conference to show game-makers and media how its generative AI technology could augment the video games of the future. 

Last year, Nvidia’s GDC 2024 showcase had hands-on demonstrations where I was able to speak with AI-powered nonplayer characters, or NPCs, in pseudo-conversations. They replied to things I typed out with reasonably contextual responses (though not quite as natural as scripted ones). AI also radically modernized old games, giving them a contemporary graphics look.

This year, at GDC 2025, Nvidia once again invited industry members and press into a hotel room near the Moscone Center, where the convention was held. In a large room ringed with computer rigs packed with its latest GeForce RTX 5070, 5080 and 5090 GPUs, the company showed off more ways gamers could see generative AI remastering old games, offering new options for animators, and evolving NPC interactions.

Nvidia also demonstrated how DLSS 4, the latest AI graphics rendering tech for its GPU line, improves image quality, lighting and framerates in modern games. These features affect gamers every day, though they’re more conventional than Nvidia’s other experiments. While some of these advancements rely on studios to implement new tech into their games, others are available right now for gamers to try.

A computer screen showing the animation software Maya and a character being led through timing and motion.

David Lumb

Making animations from text prompts

Nvidia detailed a new tool that generates character model animations based on text prompts — sort of like if you could use ChatGPT in iMovie to make your game’s characters move around in scripted action. The goal? Save developers time. Using the tool could turn programming a several-hour sequence into a several-minute task.

Body Motion, as the tool is called, can be plugged into many digital content creation platforms; Nvidia Senior Product Manager John Malaska, who ran my demo, used Autodesk Maya. To start the demonstration, Malaska set up a sample situation in which he wanted one character to hop over a box, land and move forward. On the timeline for the scene, he selected the moment for each of those three actions and wrote text prompts to have the software generate the animation. Then it was time to tinker.

To refine his animation, he used Body Motion to generate four different variations of the character hopping and chose the one he wanted. (All animations are generated from licensed motion capture data, Malaska said.) Then he specified where exactly he wanted the character to land, and then selected where he wanted them to end up. Body Motion simulated all the frames in between those carefully selected motion pivot points, and boom: animation segment achieved.
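
To make the shape of that workflow concrete, here is a minimal Python sketch of a prompt-driven animation pipeline like the one Malaska demonstrated. Everything in it (the class, the function names, the coordinates) is hypothetical rather than Nvidia's published Body Motion API; it only mirrors the steps above: mark moments on the timeline, prompt each one, pick from generated variations, pin the landing and end positions, and fill in the frames between the pivot points.

from dataclasses import dataclass

@dataclass
class PromptedMoment:
    frame: int   # position on the scene timeline
    prompt: str  # text description of the action at that moment

# 1. Mark the three moments from the demo and describe each with a text prompt.
timeline = [
    PromptedMoment(frame=0,  prompt="character hops over the box"),
    PromptedMoment(frame=45, prompt="character lands"),
    PromptedMoment(frame=90, prompt="character moves forward"),
]

def generate_candidates(moment, n=4):
    """Stand-in for the motion model: returns n candidate clips, each
    synthesized from licensed motion-capture data."""
    return [f"clip_frame{moment.frame}_take{i}" for i in range(n)]

# 2. Generate several variations per moment and let the animator pick one.
chosen_clips = [generate_candidates(m, n=4)[0] for m in timeline]

# 3. Pin exact positions for the landing and the final destination
#    (hypothetical world coordinates), then fill in every frame between
#    the selected motion pivot points.
pinned_targets = {45: (2.0, 0.0, 1.5), 90: (4.0, 0.0, 3.0)}

def fill_in_between(clips, targets):
    """Stand-in for the interpolation step that blends the chosen clips and
    solves the frames between pivot points."""
    return {"clips": clips, "targets": targets, "status": "animation segment achieved"}

print(fill_in_between(chosen_clips, pinned_targets)["status"])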

In the next section of the demo, Malaska had the same character walking through a fountain to get to a set of stairs. He could edit with text prompts and timeline markers to have the character sneak around and circumvent the courtyard fixtures. 

“We’re excited about this,” Malaska said. “It’s really going to help people speed up and accelerate workflows.”

He pointed to situations where a developer may get an animation but want it to run slightly differently and send it back to the animators for edits. A far more time-consuming scenario would be if the animations had been based on actual motion capture: if the game required that fidelity, getting mocap actors back to record could take days, weeks or months. Tweaking animations with Body Motion, drawing on a library of motion capture data, can circumvent all that.

I’d be remiss not to worry for motion capture artists and whether Body Motion could be used to circumvent their work in part or in whole. Generously, this tool could be put to good use making animatics and virtually storyboarding sequences before bringing in professional artists to motion capture finalized scenes. But like any tool, it all depends on who’s using it.

Body Motion is scheduled to be released later in 2025 under the Nvidia Enterprise License.

Another stab at remastering Half-Life 2 using RTX Remix

At last year’s GDC, I’d seen some remastering of Half-Life 2 with Nvidia’s platform for modders, RTX Remix, which is meant to breathe new life into old games. Nvidia’s latest stab at reviving Valve’s classic game was released to the public as a free demo, which gamers can download on Steam to check out for themselves. What I saw of it in Nvidia’s press room was ultimately a tech demo (and not the full game), but it still shows off what RTX Remix can do to update old games to meet modern graphics expectations.

Last year’s RTX Remix Half-Life 2 demonstration was about seeing how old, flat wall textures could be updated with depth effects to, say, make them look like grouted cobblestone, and that’s present here too. When looking at a wall, “the bricks seem to jut out because they use parallax occlusion mapping,” said Nyle Usmani, senior product manager of RTX Remix, who led the demo. But this year’s demo was more about lighting interaction — even to the point of simulating the shadow passing through the glass covering the dial of a gas meter.

Two displays showing the original Half-Life 2 on the left and RTX Remastered version on the right.

David Lumb

Usmani walked me through all the lighting and fire effects, which modernized some of the more iconically haunting parts of Half-Life 2’s fallen Ravenholm area. But the most striking application was in an area where the iconic headcrab enemies attack, when Usmani paused and pointed out how backlight was filtering through the fleshy parts of the grotesque pseudo-zombies, which made them glow a translucent red, much like what happens when you put a finger in front of a flashlight. Coinciding with GDC, Nvidia released this effect, called subsurface scattering, in a software development kit so game developers can start using it.

Two displays showing old Half-Life 2 (left) and new RTX Remastered (right) footage, with the latter showing a headcrab with translucent reddish limbs where the light shines through.

David Lumb

RTX Remix has other tricks that Usmani pointed out, like a new neural shader for the latest version of the platform, the one in the Half-Life 2 demo. Essentially, he explained, a bunch of neural networks train live on the game data as you play and tailor the indirect lighting to what the player sees, so areas are lit more like they would be in real life. In an example, he swapped between old and new RTX Remix versions, showing, in the new version, light properly filtering through the broken rafters of a garage. Better still, it bumped the frames per second to 100, up from 87.

“Traditionally, we would trace a ray and bounce it many times to illuminate a room,” Usmani said. “Now we trace a ray and bounce it only two to three times and then we terminate it, and the AI infers a multitude of bounces after. Over enough frames, it’s almost like it’s calculating an infinite amount of bounces, so we’re able to get more accuracy because it’s tracing less rays [and getting] more performance.”
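
As a rough picture of what he is describing (not Nvidia's actual neural shader, whose internals are not public), the sketch below shows the general idea in Python: trace only a few bounces per ray, then ask a learned estimator for the light the untraced bounces would have contributed. Both helper functions are placeholders.

import random

def trace_one_bounce(ray):
    """Placeholder for real ray tracing: returns the light picked up at this
    bounce plus the ray continuing in a new direction."""
    return random.uniform(0.0, 0.2), ray

def learned_radiance_estimate(ray):
    """Placeholder for the neural network that trains on the scene as you play;
    it infers the light from the many bounces that were never traced."""
    return 0.35

def shade_pixel(ray, traced_bounces=3):
    radiance = 0.0
    for _ in range(traced_bounces):              # trace a ray and bounce it only two to three times...
        contribution, ray = trace_one_bounce(ray)
        radiance += contribution
    radiance += learned_radiance_estimate(ray)   # ...then let the model infer the rest
    return radiance

print(shade_pixel(ray="camera_ray"))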

Still, I was seeing the demo on an RTX 5070 GPU, which retails for $550, and the demo requires at least an RTX 3060 Ti, so owners of graphics cards older than that are out of luck. “That’s purely because path tracing is very expensive — I mean, it’s the future, basically the cutting edge, and it’s the most advanced path tracing,” Usmani said.

A computer screen showing a game, InZOI, where players can change the thoughts of their semi-autonomous characters.

David Lumb

Nvidia ACE uses AI to help NPCs think

Last year’s NPC AI station demonstrated how nonplayer characters can uniquely respond to the player, but this year’s Nvidia ACE tech showed how players can suggest new thoughts for NPCs, changing their behavior and the lives of those around them.

The GPU maker demonstrated the tech as plugged into InZoi, a Sims-like game where players care for NPCs with their own behaviors. But with an upcoming update, players can toggle on Smart Zoi, which uses Nvidia ACE to insert thoughts directly into the minds of the Zois (characters) they oversee… and then watch them react accordingly. These thoughts can’t go against their own traits, explained Nvidia GeForce Tech Marketing Analyst Wynne Riawan, so they’ll send the Zoi in directions that make sense.

“So, by encouraging them, for example, ‘I want to make people’s day feel better,’ it’ll encourage them to talk to more Zois around them,” Riawan said. “Try is the key word: They do still fail. They’re just like humans.”

Riawan inserted a thought into the Zoi’s head: “What if I’m just an AI in a simulation?” The poor Zoi freaked out but still ran to the public bathroom to brush her teeth, which fit her traits of, apparently, being really into dental hygiene. 

Those NPC actions following up on player-inserted thoughts are powered by a small language model with half a billion parameters (large language models can range from 1 billion to over 30 billion parameters, with higher counts giving more opportunity for nuanced responses). The one used in-game is based on the 8-billion-parameter Mistral NeMo Minitron model, shrunk down so it can run on older and less powerful GPUs.

“We do purposely squish down the model to a smaller model so that it’s accessible to more people,” Riawan said. 

The Nvidia ACE tech runs on-device on the player’s GPU. Krafton, the publisher behind InZoi, recommends a minimum spec of an Nvidia RTX 3060 with 8GB of video memory to use this feature, Riawan said. Krafton gave Nvidia a “budget” of one gigabyte of VRAM in order to ensure the graphics card has enough resources to render, well, the graphics. Hence the need to minimize the parameters.
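
The arithmetic behind that budget is straightforward, and the sketch below walks through it. The numbers and the commented-out loading code are illustrative assumptions (the checkpoint name is a placeholder, not the model Krafton actually ships), but they show why a roughly half-billion-parameter model fits in about a gigabyte of VRAM while an 8-billion-parameter one would not.

# Rough VRAM math for the scenario described above; all figures are approximate.
params = 0.5e9            # ~half a billion parameters
fp16_bytes = 2            # 16-bit (half-precision) weights
print(f"~{params * fp16_bytes / 1e9:.2f} GB for fp16 weights")    # ~1.00 GB: right at the budget
print(f"~{params * 0.5 / 1e9:.2f} GB at 4 bits per weight")       # ~0.25 GB: leaves room for activations
print(f"~{8e9 * fp16_bytes / 1e9:.0f} GB for an 8B-parameter model in fp16")  # far over a 1 GB budget

# One common way to load a small distilled model on a consumer GPU
# (hypothetical checkpoint name; requires the transformers and bitsandbytes packages):
# from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     "example-org/minitron-distilled-0.5b",
#     quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#     device_map="auto",
# )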

Nvidia is still internally discussing how or whether to unlock the ability to use larger-parameter language models if players have more powerful GPUs. Players may be able to see the difference, as the NPCs “do react more dynamically as they react better to your surroundings with a bigger model,” Riawan said. “Right now, with this, the emphasis is mostly on their thoughts and feelings.”

An early access version of the Smart Zoi feature will go out to all users for free, starting March 28. Nvidia sees it and the Nvidia ACE technology as a stepping stone that could one day lead to truly dynamic NPCs.

“If you have MMORPGs with Nvidia ACE in it, NPCs will not be stagnant and just keep repeating the same dialogue — they can just be more dynamic and generate their own responses based on your reputation or something. Like, Hey, you’re a bad person, I don’t want to sell my goods to you,” Riawan said.

