Computing

I Gave 5 Teams the Same Dashboard – Only 1 Made a Decision With It | HackerNoon

By News Room · Published 5 March 2026 (last updated 1:23 PM)

We spent four months building self-serve analytics. Then I watched people actually try to use it.

I’ll be honest about what prompted this. It wasn’t intellectual curiosity. It was a Jira ticket.

The ticket said: “Dashboard not working.” I opened it expecting a broken filter or a missing data refresh. Instead, I found a three-paragraph message from a product manager explaining that she had been staring at the dashboard for twenty minutes and still couldn’t figure out whether her feature launch had been successful.

The dashboard was working perfectly. Every chart loaded. Every filter responded. The data was fresh, accurate, and well-modeled. I had built it myself, and I was proud of it.

She still couldn’t answer her question.

That ticket sat in my head for weeks. Eventually, I decided to do something I’d never done before. I took one of our most-used dashboards and watched five different teams try to use it for a real decision. Not a demo. Not a training session. A real question they actually needed answered that week.

The Setup

The dashboard was our product engagement summary. It had daily active users, feature adoption rates, retention curves, and a handful of revenue-adjacent metrics. It was built in Looker, connected to a well-tested dbt model, refreshed every four hours. By every engineering standard, it was solid work.

I picked five teams that regularly accessed this dashboard according to our usage logs. Each team had a question they were actively trying to answer.

Product wanted to know if a recent onboarding redesign had improved activation.

Marketing wanted to know which channels were driving the highest-value users.

Customer Success wanted to know which accounts were showing early churn signals.

Sales wanted to know if usage patterns correlated with upsell readiness.

Finance wanted to know if the per-user cost trend was sustainable at the current growth.

Same dashboard. Same data. Five real questions from five real teams, all due that week.

I asked each team if I could sit with them (or join their video call) while they worked through their question using the dashboard. No coaching, no hints. I just wanted to watch.

I’ll admit my bias upfront. I expected at least three of the five to get what they needed. I had built this dashboard with cross-functional use in mind. I had added context descriptions to every chart. I had even written a short how-to doc that nobody, as it turned out, had read.

What Actually Happened

Product

The product team spent the first eight minutes trying to isolate their onboarding cohort. The dashboard had a date filter and a feature filter, but no cohort filter. They needed to compare users who signed up before the redesign with users who signed up after it. The dashboard couldn’t do this without a workaround.

The PM opened a SQL editor in a separate tab. She wrote a query to pull user IDs by signup date, then tried to paste them into the dashboard filter. The filter had a 100-value limit. Her cohort had 4,200 users.

She gave up and Slacked me, asking for a custom cut of the data. Total time on the dashboard: fourteen minutes. Decision made from the dashboard: none.
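What the PM actually needed was a cohort comparison, not a filter paste: split users by signup date relative to the redesign and compare activation rates. A minimal sketch of that logic (the data, the redesign date, and the field layout are all illustrative assumptions):

```python
from datetime import date

# Hypothetical rows: (user_id, signup_date, activated)
users = [
    (1, date(2026, 1, 10), True),
    (2, date(2026, 1, 20), False),
    (3, date(2026, 2, 5), True),
    (4, date(2026, 2, 12), True),
]

REDESIGN_DATE = date(2026, 2, 1)  # assumed launch date of the new onboarding

def activation_rate(rows):
    """Share of users in a cohort who activated."""
    return sum(activated for _, _, activated in rows) / len(rows)

# Split into pre- and post-redesign cohorts by signup date
pre = [u for u in users if u[1] < REDESIGN_DATE]
post = [u for u in users if u[1] >= REDESIGN_DATE]

delta = activation_rate(post) - activation_rate(pre)
print(f"pre: {activation_rate(pre):.0%}  post: {activation_rate(post):.0%}  delta: {delta:+.0%}")
```

In production this split would live in the SQL or semantic layer, not in a 4,200-value filter paste; the point is that the comparison is a join on signup date, which the dashboard's filter model couldn't express.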

Marketing

Marketing had the most interesting failure because they technically found what they were looking for. The dashboard showed acquisition channel breakdowns. They could see that paid social was driving the most new users. They concluded that paid social was their best-performing channel and started drafting a budget reallocation proposal.

The problem: the dashboard showed volume, not value. Paid social had the most users but the lowest retention rate (which was visible on a different tab; they didn’t click) and the lowest average contract value (which was on a different dashboard entirely). Their “best” channel was actually their most expensive per retained dollar.

They made a decision. It was the wrong one. I flagged it after the session. They were grateful but also frustrated. “Why doesn’t the dashboard just show us which channel is best?” one of them asked.

Because “best” means something different to every team. The dashboard showed metrics. It didn’t show answers. That’s a design philosophy we’ll come back to.

Customer Success

Customer Success was the team that actually succeeded, and it’s worth understanding why.

Their lead had spent three months building a mental model of what early churn looks like. She knew the specific combination of signals: declining weekly logins, no feature adoption beyond the core workflow, and support ticket frequency above a threshold. She didn’t need the dashboard to tell her the answer. She needed it to confirm what she already suspected.

She opened the dashboard, filtered to her accounts, checked three charts in sequence, and said, “Yep, these four accounts need intervention calls this week.” Total time: six minutes.

The dashboard worked for her because she already knew what to look for and how to read the signals together. She was interpreting, not just reading.
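Her mental model was effectively a rule that fires only when all three signals line up. A sketch of that rule (thresholds, field names, and sample accounts are illustrative assumptions, not her team's real values):

```python
TICKET_THRESHOLD = 3  # assumed: support tickets per month considered "heavy"

def churn_risk(account):
    """Flag an account only when all three signals line up, read together."""
    logins_declining = account["weekly_logins"][-1] < account["weekly_logins"][0]
    core_only = account["features_adopted"] <= 1  # nothing beyond the core workflow
    ticket_heavy = account["tickets_last_month"] > TICKET_THRESHOLD
    return logins_declining and core_only and ticket_heavy

accounts = [
    {"name": "Acme", "weekly_logins": [12, 9, 6, 4],
     "features_adopted": 1, "tickets_last_month": 5},
    {"name": "Globex", "weekly_logins": [8, 9, 11, 12],
     "features_adopted": 4, "tickets_last_month": 1},
]

at_risk = [a["name"] for a in accounts if churn_risk(a)]
print(at_risk)
```

The dashboard showed each of these signals on a separate chart; the conjunction lived entirely in her head.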

Sales

Sales spent twelve minutes on the dashboard and then asked me a question that I still think about: “Can you just tell me which accounts to call?”

They wanted a ranked list. The dashboard gave them dimensions and measures. They could see usage trends per account, but translating “this account’s usage went up 40% last month” into “this account is ready for an upsell conversation” required context that the dashboard didn’t have. It didn’t know the contract renewal date, the existing plan tier, or whether the account had already been contacted.

The sales rep was polite about it. She said the dashboard was “interesting”, which is the business equivalent of “I will never open this again.”

Finance

Finance didn’t use the dashboard. They exported the underlying data to a spreadsheet within the first two minutes. When I asked why, the answer was straightforward: “I need to build a model, not look at a chart.”

They needed to run scenarios. What happens to per-user cost at 20% growth versus 40% growth? What’s the breakeven point? The dashboard showed the current state. Finance needed to manipulate the inputs. A static visualization, no matter how accurate, cannot be a planning tool.

I don’t count this as a dashboard failure. It’s a use case mismatch. But it raises a question about whether we should have built this team a spreadsheet template instead of a dashboard tab. We didn’t ask. We assumed everyone wants dashboards.
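The gap is easy to see if you sketch what Finance was actually building. A toy scenario model (every number and the cost structure here are made-up assumptions; the point is that the inputs are manipulable, which a chart's aren't):

```python
# Assumed cost structure: fixed monthly cost plus a variable cost per user
FIXED_COST = 50_000      # monthly fixed infra/ops cost (assumed)
VARIABLE_COST = 2.0      # cost per user per month (assumed)
ARPU = 6.0               # average revenue per user per month (assumed)

def cost_per_user(users):
    return FIXED_COST / users + VARIABLE_COST

def users_after_growth(users_now, annual_growth, years=1):
    return users_now * (1 + annual_growth) ** years

def breakeven_users():
    # Per-user cost equals ARPU when FIXED_COST / users == ARPU - VARIABLE_COST
    return FIXED_COST / (ARPU - VARIABLE_COST)

now = 10_000
for growth in (0.20, 0.40):
    u = users_after_growth(now, growth)
    print(f"{growth:.0%} growth -> {u:,.0f} users, ${cost_per_user(u):.2f}/user")
print(f"breakeven at {breakeven_users():,.0f} users")
```

Every line of that is an input someone might want to change mid-meeting. A dashboard freezes all of them.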

The Scorecard

| Team | Found relevant data? | Answered their question? | Made a decision? | Time spent |
| --- | --- | --- | --- | --- |
| Product | Partially | No | No | 14 min |
| Marketing | Yes | Yes (incorrectly) | Yes (wrong one) | 18 min |
| Customer Success | Yes | Yes | Yes | 6 min |
| Sales | Partially | No | No | 12 min |
| Finance | Yes | N/A (wrong format) | No | 2 min |

What I Learned That I Didn’t Expect

The most dangerous outcome is a confident wrong answer. Marketing’s experience scared me more than any other. The dashboard gave them data. They interpreted the data. They concluded. The conclusion was wrong because the dashboard showed an incomplete picture, and they had no reason to suspect it was incomplete. A dashboard that gives you a wrong answer you believe is worse than a dashboard that gives you nothing at all. At least “I don’t know” doesn’t trigger a budget reallocation.

The gap between metrics and decisions is enormous. Every team came in with a question. Not one of those questions could be answered by a single chart. Decisions require combining multiple signals, applying business context, and making a judgment call. Dashboards show metrics. The translation from metrics to decisions happens entirely in the human’s head, and most dashboards give that translation process zero support.

Domain expertise is the real filter. Customer Success succeeded because of their lead’s experience, not because of the dashboard’s design. She knew what to look for. Everyone else was browsing, hoping the right insight would jump out. Self-serve analytics assumes domain expertise that most teams haven’t built yet. We ship the tool before we teach the thinking.

Nobody reads the documentation. I wrote context descriptions. I wrote a how-to guide. I added tooltip explanations to every metric. The usage data shows that zero people (not low, zero) clicked on the metric descriptions in the past quarter. People don’t read inline documentation in dashboards for the same reason they don’t read terms of service. They came to do something, not to learn something.

The teams that export to spreadsheets aren’t doing it wrong. Finance’s instinct to export was the most rational response to their actual need. We treat spreadsheet exports as a failure of our BI layer. Maybe it’s actually a signal that some users need interactive, manipulable data rather than curated visualizations. Building a dashboard for someone who needs a spreadsheet is like building a report for someone who needs a conversation.

“Self-serve” is a spectrum, not a switch. Customer Success was self-serve. Product could have been self-serve with a better cohort filter. Marketing needed guardrails. Sales needed a recommendation engine, not a dashboard. Finance needed raw data with scenario modeling. We called them all “self-serve analytics users” and gave them all the same tool. That’s a governance failure disguised as a product decision.

What I Changed Afterwards

I rebuilt the dashboard. But the rebuild wasn’t what you’d expect.

I didn’t add more charts. I added fewer. I removed three tabs and consolidated the remaining one around a single question: “Is this metric healthy, concerning, or critical?” I added conditional formatting that turns cards red, yellow, or green based on thresholds the team defined. I’m not proud of how simple it looks now. It looks like something a first-year analyst would build. But it gets used.
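The status logic behind those cards is almost embarrassingly simple, which is the point. A sketch (the threshold values here are placeholders; in the rebuild each team defined its own):

```python
def status(value, concerning, critical, higher_is_better=True):
    """Classify a metric as healthy / concerning / critical against thresholds."""
    if not higher_is_better:
        # Flip the sign so the same comparisons work for lower-is-better metrics
        value, concerning, critical = -value, -concerning, -critical
    if value <= critical:
        return "critical"    # red card
    if value <= concerning:
        return "concerning"  # yellow card
    return "healthy"         # green card

# e.g. weekly retention with assumed thresholds: 40% (yellow), 25% (red)
print(status(0.55, concerning=0.40, critical=0.25))
print(status(0.30, concerning=0.40, critical=0.25))
print(status(0.20, concerning=0.40, critical=0.25))
```

Three states, thresholds the team owns, no interpretation left to the viewer. That's what replaced three tabs of charts.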

For Product, I built a separate cohort comparison tool. It’s ugly. It’s a parameterized Looker Explore with two date pickers and a delta calculation. It does one thing.

For Marketing, I added a composite score that combines volume, retention, and contract value into a single channel effectiveness metric. They can still see the components, but the default view answers the actual question: which channel is best per dollar retained.
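One way to fold volume, retention, and contract value into a single number is "retained dollars per dollar spent." This sketch is not the team's actual model; the channel figures and field names are invented for illustration:

```python
channels = {
    "paid_social": {"new_users": 5000, "retention": 0.15, "acv": 400, "spend": 300_000},
    "organic":     {"new_users": 1200, "retention": 0.55, "acv": 900, "spend": 40_000},
    "email":       {"new_users":  800, "retention": 0.60, "acv": 850, "spend": 25_000},
}

def effectiveness(c):
    """Retained contract value generated per dollar of acquisition spend."""
    retained_value = c["new_users"] * c["retention"] * c["acv"]
    return retained_value / c["spend"]

ranked = sorted(channels, key=lambda name: effectiveness(channels[name]), reverse=True)
for name in ranked:
    print(f"{name}: {effectiveness(channels[name]):.2f} retained $ per $ spent")
```

With numbers like these, the highest-volume channel ranks last — exactly the inversion Marketing couldn't see when volume, retention, and contract value lived on three different surfaces.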

For Sales, I gave up on dashboards entirely. I built a scored account list that runs weekly, ranks accounts by upsell signals, and lands in a Slack channel. It took less time to build than the dashboard tab I replaced.
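The core of that weekly job is just a scoring function over the context the dashboard didn't have: renewal date, plan tier, contact history. A sketch with assumed signals and weights (the real job's inputs, weights, and Slack delivery are omitted):

```python
from datetime import date

TODAY = date(2026, 3, 5)  # in the real job this would be the run date

def upsell_score(acct):
    """Rank accounts by upsell readiness using assumed signals and weights."""
    score = min(acct["usage_growth"], 1.0) * 40          # capped usage growth
    days_to_renewal = (acct["renewal_date"] - TODAY).days
    if 0 < days_to_renewal <= 90:
        score += 30                                       # renewal window open
    if acct["plan_tier"] != "enterprise":
        score += 20                                       # room to upgrade
    if not acct["recently_contacted"]:
        score += 10                                       # avoid double-touching
    return score

accounts = [
    {"name": "Initech", "usage_growth": 0.4, "renewal_date": date(2026, 5, 1),
     "plan_tier": "team", "recently_contacted": False},
    {"name": "Hooli", "usage_growth": 0.1, "renewal_date": date(2026, 11, 1),
     "plan_tier": "enterprise", "recently_contacted": True},
]

ranked = sorted(accounts, key=upsell_score, reverse=True)
print([a["name"] for a in ranked])
```

The output is the thing the rep actually asked for: which accounts to call, in order.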

For Finance, I created a Google Sheet template that pulls live data via the Looker API. They can model on top of real numbers without exporting manually.

The common thread: I stopped asking “what metrics does this team need?” and started asking “what decision does this team make repeatedly, and what’s the shortest path from data to that decision?”

What I’d Tell My Past Self

Build for the decision, not the data. If you can’t name the specific decision a dashboard is supposed to support, you’re building a museum exhibit. People will visit once, nod politely, and never come back.

Watch someone use your dashboard before you ship it. Not a stakeholder demo where they nod along. An actual work session where they try to answer a question they care about. You will learn more in twenty minutes of observation than in a month of requirements gathering.

The best BI isn’t always a dashboard. Sometimes it’s a Slack alert. Sometimes it’s a spreadsheet with live data. Sometimes it’s a scored list. The medium should match the decision pattern, not the other way around.

And if your usage logs show that people open your dashboard and leave within two minutes, the dashboard isn’t slow. It’s not answering their question.

The hardest part of business intelligence isn’t building the data model. It’s understanding what people actually do when they open your work.
