Transcript
Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture podcast. Today I’m sitting down with John Heintz. John, thanks for taking the time to talk to us today.
John Heintz: Thank you, Shane. I appreciate the invitation. It’s great to be here.
Shane Hastie: My normal starting point is who’s John?
Introductions [00:49]
John Heintz: John is a technologist who learned how to do a lot of agile, test automation, and DevOps, and eventually learned more of the people side of things: understanding how to organize bigger systems. I became a co-founder of a product company building an AI system. I’ve done some consulting in my career, I’ve done some technology leadership in my career, and I have managed to co-found and sell one business.
Shane Hastie: Cool. What got us talking today was a talk you gave about “How does this data make me feel?” Why should I care how the data makes me feel?
Why we should care about emotions when conveying data [01:33]
John Heintz: Well, that’s a great question. We all as humans need to care how data makes us feel so that we can make better decisions with it. The way I got to framing the topic of the presentation you saw was this: we were trying to create an AI forecasting system that was producing a lot of really interesting data, and as a technologist, being mentored by my co-founder in some really interesting aspects of data science and data processing, I loved the data. It was all really cool.
However, the intended users of our system were not able to understand the data the same way we did. That began a journey: not just creating really cool math and really cool algorithms, but figuring out how to make this data work for our end users and help them understand what they should do and make better decisions with it. That journey took me through some psychology and into UX, and we learned some interesting lessons. The way I ended up conveying all of that is with the phrase we started with for dealing with that system: “How does this data make me feel?”
Shane Hastie: Let’s dig into some of those lessons. How should the data make me feel?
John Heintz: I suppose the simplest answer is that data that indicates good, positive things should make us feel good, and data that indicates worrying, troublesome warning signs should make us feel more cautious, more nervous, not good. The intuitive reaction we have to data can help guide our responses to what we see. If we’re looking at the gauges on a car and the tachometer is way up, with the RPMs high into the red zone, that can give us cause for concern. We might be worried about the health of the engine or the speed we’re going. Data can make us feel good or bad, positive or negative. Leaning into that and accentuating it in our data systems is a very useful tool to help convey to our users, and to ourselves, what we’re looking at.
Shane Hastie: As a technologist, building user interfaces where I’m trying to communicate something to the end user, how do I make this useful for me?
The implications for interface design [03:57]
John Heintz: The first step is understanding what’s important to your users. That sounds really boring and really mundane in many senses, but it’s actually very, very important. Really, truly understanding what good news and what bad news look like for your customers means that you understand how to build a user interface that can convey that to them. The systems we were designing were showing project and schedule forecasts. That’s not a super exciting user interface all the time, but if your project is in the green and you’re on schedule, that means very important good things.
If your project is in the red, it means important negative things that need the users to come in and take different actions, to take some corrective measures to get back on track or to communicate about the changes and the updates. So it’s about understanding what users need to know, and then framing the system to convey that information at an intuitive level.
I can stop for just a second and say that the psychology we were leaning into at that time was Thinking, Fast and Slow by Kahneman. Our brains have at least two different modes of thinking. One mode is fast thinking, where we’re having intuitive, natural reactions to things. If we think we’re about to step on something that looks like a stick versus something that looks like a snake, our brain will react very differently. If we think there’s a snake there, we’ll jump back real fast. Our brain is giving us a very strong intuitive impulse to back away from something that could be dangerous to us. Slow thinking is doing algebraic formulas by longhand on paper, engaging all the faculties of the brain. What we realized is that we needed our user interfaces to communicate to the fast-thinking part of the brain, which would have a very quick, intuitive, natural reaction.
We wanted to give that fast-thinking part of our users’ brains the right information, the right signal. If the system is indicating that there’s a risk to the project, then we want to give our users some visual indications that trigger that fast thinking, so that they have a negative or worrisome feeling about the data and it spurs them to take some actions. If the system is predicting that the project is on track and everything’s good, we want to give our users a fast-thinking positive indication that everything is good. The feeling of the data is: do I feel that the state is good or bad? Do I get to feel positive and successful right now, or do I need to take action and do something because I’m spurred to fix a problem? That was the psychology we were using: thinking fast and slow.
Shane Hastie: Is this as simple as red and green?
John Heintz: It can be that simple, but not in our situation, and we’re on a podcast so you can’t even see me waving my hands around. It would be as simple as red and green if we were talking about a very simple problem space, but we’re not. We’re talking about complex system visualization. If you imagine any kind of complex chart or interpretation system that you or your stakeholders or your customers need to look at and understand, often there’s time involved: what’s happening over this week, things that were in one state last week and are in a different state now. There’s trending: are we trending in the right direction or the wrong direction? Being able to convey things with temporal components, and to convey a broad set of things all at the same time, is much more complicated than just red or green, although at the highest level it still might boil down to something that simple.
If there’s a red warning light, you should look under the hood. But what do you see when you look there, at the next level of dashboard? In the situation we were working in, we were building a risk burn-down chart. The idea is that at the beginning of a project there’s some amount of unknowns, risks, and uncertainty, and that’s very natural. That’s fine. When you’re building something new, there are always uncertainties involved. But by the end of the project, right before delivery or finishing the effort, the unknowns should be gone and everything else should be all good. Tomorrow we’re going to release this, we’re going to give this to our customers, and everything is fine.
Over time, that risk has to go down. How far below the risk burn-down line you are is one aspect of it; or if you’re above the line, in the red zone, how far above is also an important part of this. When you’re looking at a dashboard, you want the dashboard to give you an indication of positive or negative as well as how severely positive or how severely negative it is. Do we celebrate, or do we jump into action right now? That’s the other aspect of it.
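To make the burn-down idea concrete, here is a minimal sketch in Python of how a dashboard might classify a project against a linear risk burn-down line. The function names, the linear shape of the line, and the 25% amber/red threshold are illustrative assumptions, not the actual system John’s team built.

    from datetime import date

    def burndown_target(start: date, end: date, today: date, initial_risk: float) -> float:
        """Linear burn-down line: all of the initial risk at the start,
        zero remaining risk allowed by the delivery date."""
        total_days = (end - start).days
        elapsed = min(max((today - start).days, 0), total_days)
        return initial_risk * (1 - elapsed / total_days)

    def project_status(remaining_risk: float, target: float) -> tuple[str, float]:
        """Classify the project and report how far above or below the line it is.
        A positive gap means more risk remains than the line allows (red zone)."""
        gap = remaining_risk - target
        if gap <= 0:
            return ("green", gap)  # at or below the line: on track, celebrate
        severity = gap / max(target, 1e-9)  # how severely negative is this?
        return ("red" if severity > 0.25 else "amber", gap)

    # Example: 20 units of risk at kickoff, halfway through the schedule,
    # but 14 units are still unresolved -- well above the 10-unit target.
    target = burndown_target(date(2024, 1, 1), date(2024, 7, 1), date(2024, 4, 1), 20.0)
    print(project_status(14.0, target))  # ('red', 4.0)

The status function answers both of the questions above: the color is the fast-thinking signal (celebrate or act), and the gap is the severity that the slower look under the hood can dig into.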
Shane Hastie: Going beyond that thinking fast and slow, how do we bring in elements of usability and accessibility into this?
Including usability and accessibility [09:15]
John Heintz: That’s a good question. In the situation we were in, we were three engineers. My co-founder had a PhD in math. Our lead engineer was a super seasoned software developer. I’d been a software developer for most of my career. We were the classic anti-pattern of engineers trying to build a user interface. We hilariously created some things that were not that great. We were trying to follow these principles of thinking fast and slow, and we knew we were trying to trigger the fast-thinking part of the brain, but we didn’t succeed at creating user interfaces that did that very well. So what we did was reach out and get a good recommendation for a UX group that was able to work with us. As a startup, we paid money for this. We really invested in it, literally, both our time and some of our money.
We ended up working with that group to help build this interface with our users in mind. They did a great job of taking all of the things we were trying to do and distilling them down into a type of visualization that, in hindsight, really was obvious. We could kick ourselves for not having thought of it without an extra group, but we didn’t. Like all good innovations, everything in hindsight seems pretty obvious, and this one does too, but I don’t regret it at all. We definitely needed their help. We brought in UX experts for the piece of this puzzle that we were not able to figure out ourselves. And we knew we hadn’t figured it out, because no matter how much we wrote, created documentation, or talked with our end users and explained the system, that training just didn’t work.
They just didn’t get it. They didn’t like it. They didn’t appreciate what we were trying to show them. They liked what we interpreted and told them was true about the system and what was going on; they liked the information, but the presentation, the UX we had built, was definitely not up to par. After we worked with this group, the new version of the system was adopted much more quickly; training took less than an hour and it was all said and done.
Shane Hastie: What are some of the other psychological lessons that you’ve learned that as technologists we should think about?
Psychological nudges applied to software design [11:39]
John Heintz: Well, there was one more specific area of psychology that we brought into this, and I think it’s applicable generally for all kinds of technologists. This is a book called Nudge by Thaler and Sunstein. The book is about human psychology. It’s often used in advertising: big organizations will build user interfaces and systems that give humans the nudge to buy the products they’re recommending, in the way they’re recommending them. This applies to humans in general. All of us, when we’re shopping on Amazon or any other big site, are being nudged all the time. The “best buy” label, the “recommended” box, the “most people in your network use this” box: all of those are nudges. A nudge works by looking like the default, most common option. Humans are wired so that we tend to look at those in our social group.
We look at what’s working for our neighbors and we generally assume that it’s a safe choice. If we go back to our tribal heritage as hunters and gatherers, this was very true. Everybody around you was surviving and doing things a certain way. They’d learned some things. We adopted that and began to use it as a checkbox that says, “This is a safe way to make choices. This is a safe way to live. We can use it”. Our brain psychologically assumes that the default choice is a safe choice, and we often go with it. Well, this can be used for good or bad.
When advertisers are convincing us to buy stuff we don’t need, we don’t think that’s necessarily a great thing for us or for society, but the fact is that’s how the psychology in our brains works. What we were very aware of, and what I would say all technologists should consider, is that all of the systems we build that do any descriptive or predictive analysis are giving our users something at the top of a list that says, “This is probably the thing that is the most important, the most likely”, whatever the superlative is. We’re giving our users lists, and the ordering of those lists is psychologically very important. Whatever we put at the top of the list and say, “This is the most likely answer”, carries a very strong psychological nudge, and our users will go with it.
To come back to the system we were building: it was a project schedule and risk system that was identifying what was causing risks to delivering a project on schedule. The probabilities we were assigning could be calculated as the odds that something would cause an impact, or the number of days of impact it was likely to cause. What we wanted to do was very carefully give our users a recommendation engine that says, “If your project has gone into the red, here are the top one, two, or three things most likely to be the issue and the problem”. The important aspect of our system was to provide recommendations that had the highest chance of improving things and making the project better.
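As a rough sketch of that ranking idea, again with assumed names and numbers rather than the actual product’s engine: the candidate risk drivers can be ordered by expected schedule impact, probability times days of impact, since whatever lands at the top of the list is what users will be nudged to act on.

    from dataclasses import dataclass

    @dataclass
    class RiskDriver:
        name: str
        probability: float  # odds this risk actually materializes
        impact_days: float  # schedule slip, in days, if it does

    def top_recommendations(drivers: list[RiskDriver], n: int = 3) -> list[RiskDriver]:
        """Rank risk drivers by expected schedule impact (probability x days).
        The top of this list carries the strongest nudge, so the ranking
        criterion must be the one users should actually act on."""
        return sorted(drivers, key=lambda d: d.probability * d.impact_days, reverse=True)[:n]

    drivers = [
        RiskDriver("Unresolved API dependency", probability=0.7, impact_days=10),
        RiskDriver("Key engineer on leave", probability=0.9, impact_days=3),
        RiskDriver("Unclear requirements", probability=0.4, impact_days=20),
    ]
    for d in top_recommendations(drivers):
        print(f"{d.name}: expected slip {d.probability * d.impact_days:.1f} days")

Note that the most probable risk (the engineer on leave) is not the top recommendation; the low-probability, high-impact requirements risk is, which is exactly the kind of ordering decision that deserves care when the list itself is the nudge.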
Shane Hastie: Stepping away from the psychology of the design of the products and maybe thinking a little bit about the psychology of people and teams, a common challenge for technical leaders is how much currency do I need with the technology that I work with?
Staying current as a technical leader [15:25]
John Heintz: That’s a good question. For understanding the technologies that exist today and where they’re trending, I’ve got a trick I try to use to work out how much I need to know about what’s going on with any given technology, whether that’s understanding human psychology or a new technology for integrating distributed systems. What I try to do is look at what was true in the earliest literature, where things were initially talked about and published. The benefit of this is that back when any given field is young, the number of publications is much smaller, so you can at least understand how many there are and read some of them. Later in the life cycle of a field, whether psychology or technology, the number of publications is just daunting.
What I tend to do is look back at the early stages of any given field, understand some of the pivotal, seminal publications and topics that were true at the beginning of it, and then do a quick survey of what’s still true today. Whatever’s consistent, whatever carried forward from the early days, is likely super, super important forever; that’s my assumption. So I try to stay connected with the founding principles of whatever field it is, and then I try to pick something that’s still present today that connects to those principles and practice with that. That way I feel the most connected to the original ideas of something as well as to what’s current and relevant today. In my own work, my coding, I’m not full stack. I don’t code everywhere anymore, but in my own coding now I’m doing Python: data and numerical analysis, Bayesian analysis and techniques.
I’m still building some predictive systems. I’m still coding, but I’m not trying to learn everything. I’m not trying to be everywhere at once. The other technique I’ve adopted is to have really good friends and colleagues who are experts in these other areas. When a question comes up that I know I don’t have deep experience in, I’ll ask somebody. I’ve got a number of people in different areas whose answers I really trust to put me on the right track much faster than I would be able to on my own.
Shane Hastie: We were chatting before we started about something that’s been around in our industry for a long, long time and keeps popping up, and popping up, and popping up: Conway’s Law. And you have a story of an organization where you’re dealing with that.
Applying Conway’s Law in practice [18:09]
John Heintz: Yes. I find Conway’s Law is often talked about in negative terms, as a warning, as a hammer that you’re going to get hit with. My own perception of Conway’s Law is that it’s actually an opportunity. It gives us a chance to look at the design of our software systems and the design of our human systems, our organization structures, and both of them are flexible in different ways. Conway’s Law basically says that those two designs, the human and the software systems, need to relate together in healthy, effective ways. These types of problems, where there’s a mismatch between the software and the organization, do just keep popping up in our industry again and again and again.
I think it’s a result of the struggle we always have trying to design computer systems alongside human systems without paying attention to both of them at the same time, equally. There’s a team and a group I’m working with that have a distributed system, and one of the things that just became really relevant and obvious with them is that the design of their distributed system isn’t as clear, isn’t as coherent, as they thought it was going to be at this point. The building blocks are really good.
The technology pieces, the way they do the event integration between the services: all really cool tech, all really good and effective. But the big picture is not quite as obviously present right now. That’s the piece we’re all looking at, recognizing that some of the team organization structures have started creating extra services in different areas, which was a surprise from the technological perspective.
It’s not that it’s wrong in any case; it’s just a really good opportunity to step back, understand the architecture of this distributed system, look at the design of the human system, the team structures and the communication patterns, and rethink what they should both look like. Conway’s Law is the perfect framing for that, because both of these systems have different shapes and structures, and there are a couple of places where there’s obviously going to be a bit of a mismatch and some friction. The opportunity to use Conway’s Law as a way of thinking about both of these positively at the same time is, I think, a really wonderful thing.
Shane Hastie: At a practical level, how do we do that?
John Heintz: That’s a great question. Practically, what I usually like to do is look at the nature of the system, look at the design of the products we’re trying to create, and ask what underlying software systems, components, and architectures should naturally support those products. That’s my first step. Then, after understanding the vision of the systems we want to create, the second step is to pull back and look at the human organizations that are the right shape, structure, and grouping to support creating those systems.
So practically, and this is still a little bit abstract, I start with understanding the nature of the systems first and then work back to what would be an optimal human organization structure for those systems. Even more practically than that, every organization will have existing structures, and those structures will never perfectly match the software systems. Humans are not as fungible or spin-upable as computer software is.
I think the other practical aspect is that wherever there’s a mismatch between the design of the software systems and the structure of the humans, you can create virtual teams. I like Team of Teams, that’s another book reference, because it describes much more effective horizontal communication structures.
The key takeaway from that is that if you’ve got two different teams that need to collaborate, one person from team A joins many of team B’s meetings, status updates, and communication structures, and one person from team B joins many of team A’s meetings as well. You have at least one individual from each side being very aware of and in tight communication with the other team. You don’t have one single massive team; you still have two separate teams, but you’ve cross-pollinated to a very significant degree. I like that as a way of dealing with some of the natural frictions that might occur when the team structures and the system structures don’t match exactly.
Shane Hastie: John, a lot of interesting stuff there. If people want to continue the conversation, where can they find you?
John Heintz: Well, my LinkedIn profile is a great place to find me, John Heintz, LinkedIn. I’m sure with that search you’ll be able to find me or I might be in a link somewhere below on this podcast.
Shane Hastie: You will, indeed. Thanks so much for taking the time to talk to us today.
John Heintz: Absolutely. Thank you, Shane. I appreciate the invite and it was great getting a chance to catch up.
Mentioned:
Thinking, Fast and Slow by Daniel Kahneman
Nudge by Richard Thaler and Cass Sunstein
Team of Teams by Stanley McChrystal