It’s the last week of Black History Month (BHM) and it’s clear Americans are over performative values. Trite BHM-inspired merchandise sits untouched on retailer shelves while the media is abuzz over the artistry, activism, and symbolism of Bad Bunny’s Super Bowl halftime show. The signal is clear: consumers are looking to brands for real solutions to real problems, not products that commodify culture.
Most companies build everything from advertising to AI for the “average user,” but in doing so, they react to rather than lead markets. Strategic leaders look to growth audiences—underserved groups who are the fastest-growing demographics—as lead users. They are the “canaries in the coal mine” because they navigate the highest levels of systemic friction, making them the first to experience “average” design failures.
What does championing these lead users look like at a communications, product, or systems level? It looks like Elijah McCoy automating engine lubrication—an innovation bred from the friction between his engineering degree and the menial labor he was forced to perform—and creating the “real McCoy” quality standard in the process. It looks like Jerry Lawson changing the economics of the gaming industry by inventing the video game cartridge, which decoupled a game’s software from its hardware. And it looks like emergency medicine becoming a global standard after being pioneered by Pittsburgh’s Freedom House Ambulance Service, which, in the face of medical bias and systemic unemployment, also redefined emergency care as a public right.
Drawing from their lived experiences in underserved groups, these pioneers didn’t just solve problems; they mastered environmental friction. Today, that friction also manifests in algorithms. Championing growth audiences as lead users means ensuring they are critical AI system “stress testers.” When we fail to design for them, we allow AI data, development, and deployment to default to obtuse “averages” that can frustrate or drive away valuable customers. Three recent examples highlight issues and opportunities.
Relying on “Data Infallibility” versus Lived Realities
In this Infallibility Loop, a brand’s AI treats a data source—like a flawed GPS coordinate or an outdated government map—as absolute truth, even when customers provide contrary evidence. It is a digital echo of historical redlining: a systemic refusal to see humans over faulty data.
The Experience: A Black homeowner in an affluent area is penalized by an AI that confuses her address with a property in a different town, automatically forcing unnecessary flood insurance onto her mortgage and raising her monthly payments. Despite providing human-verified deeds and flagging known GPS errors, the AI blocks her “incomplete” payments and triggers automated credit hits. Resolution came only months later, after the homeowner filed state-level servicer complaints.
The Fix: Prioritize Dynamic Qualitative Data Collection. Design should allow real-time, contextual evidence to override static, biased datasets. True brand innovation requires systems to yield to the experts: their customers.
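As a minimal sketch of the override principle, the rule below lets human-verified, contextual evidence outrank a static dataset rather than the reverse. All names and the decision rule here are illustrative assumptions, not any lender’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str       # e.g. "static_flood_map", "county_deed" (hypothetical labels)
    verified: bool    # True if a human has confirmed this evidence in context
    flood_zone: bool  # does this source place the property in a flood zone?

def requires_flood_insurance(evidence: list[Evidence]) -> bool:
    """Hypothetical decision rule: verified, contextual evidence
    overrides static datasets instead of being overridden by them."""
    verified = [e for e in evidence if e.verified]
    if verified:
        # Human-verified records win over the static map.
        return any(e.flood_zone for e in verified)
    # Fall back to static data only when no verified evidence exists.
    return any(e.flood_zone for e in evidence)

# The homeowner's case: the static map is wrong, the deed is verified.
case = [
    Evidence("static_flood_map", verified=False, flood_zone=True),
    Evidence("county_deed", verified=True, flood_zone=False),
]
print(requires_flood_insurance(case))  # → False: no forced flood insurance
```

The design choice is the ordering of trust: the static map is a default, not a veto, so a customer’s verified evidence can correct it in real time instead of months later.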
