I detest clothes shopping. All that talk of patterns and cut and sizing, all the trying things on and evaluating what works and what doesn’t, in the hope of finding the one thing that will. Some people may love it, but that’s definitely not for me. I realize that this is cliché, but I’m a man who would rather get a root canal than spend the day shopping.
But late last year, I learned I would be moderating a panel at CES. Which meant hauling myself up on stage alongside some sharply dressed executives and AI experts and leading a conversation about the ins and outs of AI in the enterprise. I quickly realized I didn’t have the wardrobe for this gig.
Sure, I had my usual event uniform of a basic buttoned, collared shirt and some nice-looking jeans. If needed, I could rustle up some polished shoes and a tie. But working from home, my daily uniform is jogging pants and a hoodie. Even putting on jeans feels like dressing up. But that wasn’t going to cut it in front of an audience of business leaders at CES.
Here’s how that might look, as visualized by AI:
(Credit: Google/Brian Westover/Consumer Technology Association (CTA))
I needed a professional upgrade, without the shopping-mall fatigue.
Gemini Pro: My AI Stylist
To solve my wardrobe crisis, I turned to my go-to AI chatbot, Google Gemini 3 Pro. With its multimodal capabilities, it can analyze images, generate new ones, and offer pretty well-reasoned analysis using Gemini’s “Thinking” model.
But the real star of the show is Google’s Nano Banana image model, which can not only view and create images but also perform precise edits, changing just one thing about an image, such as adding an article of clothing or changing an item’s color. And because it has good character consistency, I pretty much look like me in all of the images it generates.
It’s even smart enough to match the lighting and composition of the shot I’m editing, so that generating an image of me in a blazer or a sweater looks like a realistic image of me trying something on, rather than just a digital paper doll with clothing pasted on.
That meant that with nothing more than a mirror selfie snapped in my messy bedroom, I could virtually try on different looks before ever setting foot in a store.
Dressing Up: From Base Layers to Business Professional
I gave the AI a couple of pictures of my “event uniform” and asked it to generate images of that same outfit with different dress-up options. My goal was to see how my existing wardrobe would look with various layers: blazers, vests, and sweaters.
I started with a targeted prompt:
"Let me see this outfit with three different blazer options to help me find something appropriate for moderating an industry panel about AI at CES."
With my original photo attached, the prompt requested changes to the image, set the context, and implied that I wanted some analysis as well.
(Credit: Google/Brian Westover)
I did these in batches, starting with some blazer options and then moving on to other choices, like a vest, a sweater, or something in tweed. Because I didn’t need to reupload my original image, I could iterate quickly; all I had to do was ask for additional variations in a simple follow-up prompt:
"Let's try a few different options, like a vest, a zip sweater, and another blazer in tweed."
(Credit: Google/Brian Westover)
In short order, I had six different options to choose from.
(Credit: Google/Brian Westover)
Along with the generated images, it provided analysis of what looked best and what was most appropriate for a business event. And this is the real magic of the multimodal reasoning model.
(Credit: Google/Brian Westover)
Since I was using Gemini’s Thinking model, it had no problem adding real analysis to the image generation. Most importantly, the process is pretty seamless. Just a couple of years ago, switching back and forth between text chat and image generation would have required two different tools, with no shared context between them. Now I can virtually try on an entire wardrobe and get advice about the different looks, all in the time it would normally take to find a parking spot at the mall.
With this palette of options to choose from, I was able to narrow down what I wanted and what looked best without trying on a half-dozen different items at several stores. Instead, I made a single trip to Men’s Wearhouse, told the sales guy exactly what I was after, and found the closest option from what they had on hand.
Now, in all fairness, there are some things that an AI image can’t tell you. It can’t exactly match the fabrics and styles that a real store will have on hand. It doesn’t know whether the shoulders are too tight, whether the fabric is itchy, or whether the shirt and blazer coordinate in terms of color and texture.
Sure, there was still some trying on and checking the fit, but far less than if I’d gone in unprepared. After trying on a few options, I settled on a blazer.
A (Clothes) Horse of a Different Color
Once I had the blazer, I still needed to do some fine-tuning. CES is a whirlwind of meetings, events, and the occasional business dinner, which means knowing what to wear to each. As we’ve established, I’m not the most sartorially inclined man, so I turned to Gemini again.
(Credit: Google/Brian Westover)
This time I provided a photo of myself in the new blazer, but I wanted to see how it looked with different shirt colors. Thanks to Nano Banana’s photo editing capabilities, that color swap was pretty easy to request, and I soon had the same image of myself in the outfit with three shirt colors: blue, burgundy, and black. And thanks to the model’s character consistency, it always looked like me, and the blazer always had the correct color and pattern.
And the best part? I didn’t have to try on a single thing.
(Credit: Google/Brian Westover)
Not only did I have the visuals for decision-making before I packed my bags, but I was even able to ask which look would be better for the different situations I knew I’d face.
(Credit: Google/Brian Westover)
Having my wardrobe sorted before packing made my preparations for the trip a lot easier. And getting on stage was a little less nerve-racking dressed in an outfit I felt good about. (The panel went well, too.) I even got some compliments from coworkers and panelists on the jacket I bought.
(Credit: Brian Westover/Consumer Technology Association (CTA))
From Virtual Try-On to Real-World Confidence
Using Gemini as my AI stylist isn’t just about saving myself a trip to the mall. It’s also a great example of how AI can bridge the gap between imagination and reality. Thanks to Gemini’s image rendering, I was able to go from a simple bedroom selfie to a more polished presence on stage, all while skipping some real-world drudgery. And with Gemini as a source of advice and a partner in thinking things through, I knew I was making better decisions before I even got on the plane to Las Vegas.
This kind of virtual try-on isn’t limited to business casual workwear. In fact, you can use it for almost anything: haircuts, facial hair, sunglasses, home decor, closet organization, and so on. It’s the same flexibility we’ve used for picking paint colors and speeding up art projects. Whether I want to see how a standing desk will look in that corner of the room, how that watch will look on my wrist, or how my yard will look with a water feature, there are plenty of ways to combine Gemini’s image editing and generation capability with some dead-simple snapshots. Visualizing something new is as simple as asking for it.
But here’s the crucial thing: Because the process was based on my images and my personal needs and context, and because the decision-making stayed firmly in my hands, there was never any worry that the AI’s advice would be too generic. I didn’t worry about losing my own sense of style. Instead of making the final call, Gemini acted as a mirror for my own intent, reflecting back options that I then curated based on what felt right for me, personally and professionally.
About Our Expert
Brian Westover
Principal Writer, Hardware
Experience
From the laptops on your desk to satellites in space and AI that seems to be everywhere, I cover many topics at PCMag. I’ve covered PCs and technology products for over 15 years at PCMag and other publications, among them Tom’s Guide, Laptop Mag, and TWICE. As a hardware reviewer, I’ve handled dozens of MacBooks, 2-in-1 laptops, Chromebooks, and the latest AI PCs. As the resident Starlink expert, I’ve done years of hands-on testing with the satellite service. I also explore the most valuable ways to use the latest AI tools and features in our Try AI column.
