Let’s be honest: AI is stunningly cool, right up until it’s stunningly predictable.
By now, you’ve likely seen some headline-stealing examples of generative AI conjuring up surreal art, dazzling visuals, or impossibly creative designs. Ask it to imagine alien cities bathed in neon light or forests where trees grow bioluminescent flowers, and—boom!—you’re presented with imagery that pushes the boundaries of what humans would normally conceive of.
But then, you ask an AI to draw a watch. And all the magic screeches to a halt. What do you get? A watch stubbornly stuck at 10:10.
It’s almost laughable: no matter how you prompt the AI—“draw a vintage wristwatch!” “a futuristic watch!” or even “a melted Dali-like clock!”—those watch hands somehow find their way to that oddly cheery 10:10 position. If AI is supposed to understand nuance, randomness, and creativity, why is it so stuck on this?
The answer isn’t just an amusing artifact of model training; it’s a microcosm of the bigger challenges AI faces in understanding creativity, bias, and breaking free of well-worn conventions. So strap on your watch, and let’s delve deeper into this surprisingly philosophical (and deeply technical) mystery.
The 10:10 Phenomenon: A Human Legacy
Before we start wagging fingers at AI, let’s talk about us. The reason for AI’s predilection toward 10:10 doesn’t come from the algorithm deciding, “Yes, this is where time feels perfect.” Nope—it’s simply regurgitating a behavior we humans have baked into watch design for decades.
Virtually every watch advertisement you’ve ever seen uses the same iconic 10:10 timestamp. And no, this isn’t because every product photographer in the world collectively joined a “10:10 cult.” Here’s why this time choice is so dominant:
- Symmetry Looks Good: At 10:10, the hands of the clock create a nice sense of visual harmony. It’s symmetrical, but not overly rigid. It also perfectly frames the brand logo, which usually sits smack dab at the 12 o’clock position.
- The ‘Smiling Watch’ Effect: Look closely: at 10:10, the upward-angled hands mimic the shape of a smile. Whether consciously or subliminally, brands understand that happy, welcoming design cues sell more products.
- Marketing Overload: Once this convention became dominant, it snowballed. From ads to stock images to catalog photos, everywhere a watch appeared, 10:10 was the standard. It became a self-perpetuating design rule.
For decades, we’ve consistently fed the world this visual, making it so omnipresent that even our brains default to it when imagining a watch face. We don’t even think about it—we just expect it.
And now, AI does too.
AI’s Mirror Problem
To understand why AI, sometimes called “the great imitator,” can’t break free from 10:10, let’s quickly unpack how these models learn.
Every generative AI model, including powerhouses like Stable Diffusion, DALL-E 2, and Midjourney, relies on massive datasets for its training. These datasets are enormous collections of images (often billions) scraped from the internet: stock photography, online repositories, user-generated content, you name it.
When an AI learns the concept of “watch” from these images, it’s not merely analyzing the aesthetics or function of a watch. It’s looking for patterns of repetition.
Guess what dominates the internet’s imagery of watches? Yup, 10:10.
To the AI’s uncritical “mind,” the most statistically salient fact about watches is not that they tell time. It’s that they almost always look like this:
- Symmetrical hands pointing at 10 and 2.
- A logo sitting proudly at the 12 o’clock mark.
- And, sometimes, bonus complications like chronograph subdials arranged like window dressing.
If, say, 95% of the “watch” images the algorithm sees are essentially identical, guess what happens when you ask it to create a watch? The AI doesn’t know any better. It assumes you want whatever version of a watch is most familiar to it: 10:10.
But Wait—AI Isn’t Just Following Data… Right?
You might be thinking: “Hold on, AI is supposed to be creative! Why doesn’t it rebel?”
That’s where things get tricky. AI might seem creative—as if it’s pulling ideas out of thin air—but it’s not. Instead, it works probabilistically, pulling from patterns it’s learned during training. Let me demystify that.
Think of AI’s brain as a gigantic game of “autocomplete.” Imagine typing “dog breeds” into Google—autocomplete suggestions like “Labrador” or “German Shepherd” pop up because they’re the most common. Similarly, when an AI generates an image of “a wristwatch,” it samples what it thinks the average wristwatch looks like based on patterns it has already seen.
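To make that concrete, here is a minimal Python sketch of the “autocomplete” behavior. The frequency counts are invented for illustration (nobody publishes the exact makeup of these training sets), but the shape of the distribution, with 10:10 towering over everything else, is the point:

```python
import random

# Invented frequencies of watch-hand positions in a hypothetical
# training set. The real numbers are unknown; what matters is
# that 10:10 dominates by a wide margin.
watch_time_counts = {
    "10:10": 9_500,
    "1:50": 150,
    "3:25": 130,
    "7:13": 120,
    "4:47": 100,
}

times = list(watch_time_counts)
weights = list(watch_time_counts.values())

# "Autocomplete" behavior: sample in proportion to how often each
# pattern appeared, exactly like picking the most common suggestion.
print(random.choices(times, weights=weights, k=10))
# -> almost always ['10:10', '10:10', '10:10', ...]
```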
Here’s a key technical detail:
Generative models create images by exploring their “latent space,” a high-dimensional mathematical representation of everything they’ve learned. Imagine this latent space as a dense galaxy made up of patterns, ideas, and shapes. Objects like “watch faces” form clusters in this galaxy, and in the case of watches… the densest, most easily accessible part of that cluster is—you guessed it—10:10.
When the model begins generating an image, these dense areas act like gravitational wells. It’s more likely to pick something nearby rather than wander off into “creative randomness.”
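Under some obviously simplified assumptions, you can watch that gravitational pull in a few lines of NumPy. Here the “latent space” is just a two-dimensional mixture of Gaussians with a dense 10:10 cluster and a sparse cluster standing in for every other hand position; the 95/5 split is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent space: two clusters of watch-face concepts.
# Weights and centers are invented for illustration.
mode_weights = np.array([0.95, 0.05])   # dense 10:10 basin vs. everything else
mode_centers = np.array([[0.0, 0.0],    # the 10:10 cluster
                         [5.0, 5.0]])   # all other hand positions

# Sampling the mixture: pick a cluster, then a point near its center.
which = rng.choice(2, size=1_000, p=mode_weights)
latents = mode_centers[which] + rng.normal(scale=0.5, size=(1_000, 2))

print(f"samples landing in the 10:10 basin: {np.mean(which == 0):.0%}")
# -> roughly 95%; the sparse cluster is rarely visited
```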
Mode Collapse: The Trap AI Can’t Escape
There’s also something else at play here: mode collapse.
Mode collapse is a pitfall best known from generative adversarial networks (GANs), in which a model learns to produce only a narrow subset of the outputs its training data contains, ignoring less frequently seen options. It’s like a spotlight shining on only the most common examples while the rest fade into darkness. Because watches at 10:10 are dramatically overrepresented in AI training datasets, they become the “default.” Every time you prompt the AI, it falls back on this safe and familiar choice.
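One rough way to spot this kind of collapse, without peeking inside the model at all, is to measure how varied a batch of outputs is. The sketch below (with made-up labels, not real model outputs) uses Shannon entropy: a healthy generator spreads its probability mass around, while a collapsed one drives entropy toward zero.

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (in bits) of a batch of generated labels.
    Values near zero suggest the generator has collapsed onto one mode."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

collapsed_batch = ["10:10"] * 98 + ["7:13", "4:47"]
diverse_batch = ["10:10", "7:13", "4:47", "1:50", "3:25"] * 20

print(f"collapsed: {label_entropy(collapsed_batch):.2f} bits")  # ~0.16
print(f"diverse:   {label_entropy(diverse_batch):.2f} bits")    # ~2.32
```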
Here’s the thing: this isn’t just about watches. The same bias creeps into all kinds of generative outputs. Ask AI to generate, say, a generic image of “a businessman,” and you’ll often get a stereotypical Western male wearing a suit and tie—because that’s what dominates stock images. AI is only as unbiased as its data—and datasets, as we know, are laden with decades, even centuries, of human bias.
Wait… Can’t We Just Fix It?
Theoretically, yes. In practice? It’s a much tougher nut to crack.
For AI to break out of its 10:10 rut—or any other deeply ingrained cultural bias—it needs data and algorithms that actively resist the safety net of the average. Here’s what that might look like:
- Diversifying Datasets: First, ensure that training datasets feature underrepresented alternatives. If an AI’s training data featured watches at random times as often as 10:10, we could soften this bias (a toy resampling sketch follows this list). But scaling this to massive datasets is no small feat, and cleaning them takes significant computational and human resources.
- Reweighting Probabilities: Engineers could tweak a model’s training objective or reward signal to actively promote more unusual outputs, for example by adding penalties for gravitating too strongly toward defaults like 10:10.
- Injecting Noise into Prompts: Advanced systems could introduce “prompt noise,” explicitly randomizing subtle aspects of outputs, like the position of hands on a watch, or, more broadly, steering generation toward underexplored areas of the latent space (see the prompt-noise sketch after this list).
- Custom Fine-Tuning: Models can also be fine-tuned to nudge creations toward greater creativity. By training smaller, specialized models on more diverse or niche data (like a dataset of watches at 7:13 or 4:47), creators can bias certain outputs toward breaking the mold.
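To make the dataset-side ideas concrete, here is a toy resampling sketch. Everything in it is hypothetical: the dataset is a list of fake records, and the `rebalance` helper simply oversamples rare hand positions until every time appears as often as 10:10. Real curation pipelines operate at a vastly larger scale, but the principle is the same.

```python
import random
from collections import Counter

def rebalance(dataset, key, rng=random):
    """Oversample rare values of `key` until every value is equally common.
    `dataset` is a list of dicts, e.g. {"image": ..., "time": "10:10"}."""
    buckets = {}
    for item in dataset:
        buckets.setdefault(item[key], []).append(item)
    target = max(len(bucket) for bucket in buckets.values())
    balanced = []
    for bucket in buckets.values():
        balanced.extend(bucket)
        balanced.extend(rng.choices(bucket, k=target - len(bucket)))
    rng.shuffle(balanced)
    return balanced

# Toy dataset in which 10:10 drowns out every other time.
data = ([{"time": "10:10"}] * 950
        + [{"time": "7:13"}] * 30
        + [{"time": "4:47"}] * 20)
print(Counter(item["time"] for item in rebalance(data, "time")))
# -> 950 of each time, instead of 950 / 30 / 20
```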
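And here is an equally minimal take on prompt noise. The `add_prompt_noise` helper and its phrasing are hypothetical rather than a documented feature of any image generator; the point is simply that explicitly specifying the detail the model would otherwise default on takes that decision out of the densest part of the latent space.

```python
import random

# Hypothetical pool of "unusual" hand positions.
ODD_TIMES = ["7:13", "4:47", "1:36", "11:23", "2:58", "8:41"]

def add_prompt_noise(prompt: str, rng=random) -> str:
    """Append a randomized hand position so the model can't silently
    fall back to its 10:10 default."""
    return f"{prompt}, hands pointing exactly to {rng.choice(ODD_TIMES)}"

print(add_prompt_noise("a vintage wristwatch on a leather strap"))
# e.g. "a vintage wristwatch on a leather strap, hands pointing exactly to 4:47"
```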
That said, there’s a slippery slope here. Encouraging too much randomness means AI could lose its grounding altogether, creating outputs that feel disjointed or nonsensical rather than “creative.” Finding the sweet spot between default patterns and true innovation remains one of the biggest dilemmas in AI development today.
So, What’s the Big Takeaway?
The reason AI keeps drawing watches stuck at 10:10 isn’t just about its training data or coding quirks—it’s a microcosm of how generative AI reflects the limits of our creativity, our biases, and our data. When we expect AI to “think outside the box,” we forget that it was built inside our box to begin with.
What fascinates me about this isn’t the technical nitty-gritty of how latent spaces or training distributions work (though I’ll admit, that’s wildly cool in its own right). What’s striking here is how AI forces us to reckon with our own patterns. We made 10:10 the universal symbol of timepieces. And until we change our conventions, or teach AI to value diversity over familiarity, it will continue to echo those choices back to us.
So, the next time you ask an AI to create a watch stuck in the past, consider it a gentle reminder: creativity isn’t always about algorithms. It’s about intention.
And for now, AI’s watch face still smiles at you, forever frozen at 10 past 10.