The Form of AI

News Room | Published 6 August 2025, last updated 7:31 AM

Transcript

Kunovsky: I am Savannah Kunovsky. I started my career as a software engineer, then as an entrepreneur, and now I lead IDEO’s Emerging Tech Lab. I’ve spent a lot of time moving back and forth across the boundary between building things from a technical approach and building things using a design-led approach. Before we dive in, let’s define design. It’s not just user interfaces or user experience or how a piece of hardware looks.

Collectively, design is about how things work. Anything and everything is designed. I want to talk about something I’ve noticed over and over again in my work, especially when it comes to AI and other emerging technologies. When these big technology waves come along, like AI, what usually happens? Business leaders go straight to the technologists. They go to the scientists, the researchers, and engineers in the room, and they ask, what do we do with this? How does it work? What’s the potential here? For good reason. These technologies are complex, and they’re evolving rapidly. Trying to figure out what to do with them has absolutely involved some big, messy technical questions. What I’ve also seen time and time again is that if we rely primarily on technology-driven explorations, if we only ask what we can build, then we end up missing out on something critical: what really matters to people.

That’s a problem, because when companies only focus on what’s technologically possible, they often find themselves in a situation where they’ve invested millions or even billions of dollars into R&D only to release a product or an internal tool that often falls flat. People don’t use it. They don’t find it valuable. Suddenly everybody is asking why. I know this well, because companies often call me when this is happening.

What I want to talk about is how design can help to bridge that gap. How we can make sure that the things we build aren’t just technically impressive, but also genuinely impactful. I believe that great design commercializes emerging technology, like AI, by marrying what’s possible, that is, what’s technologically feasible, with what people want and need. I think the key to great products lies in combining engineering rigor with design thinking.

It’s about bringing creative, human-centered approaches to the table, not just at the end of the process, but right from the start. I’ll walk you through how that actually works in practice. I’ll share some examples from our work at IDEO, including projects where we’ve collaborated with engineers and technologists to turn technical capabilities into things people actually want and need. We’ll walk through work that we’ve done at IDEO over many years, prodding at the edges of technology. We wanted this session to show how design and speculative design can be used to figure out how to make emerging technologies more useful and exciting to people. We’ll look at some of that together. Hopefully, by the end of this talk, you’ll see why I believe that engineers are the next great designers, and why the moment to act is now.

We Live in The Future

To start, I’m going to make a bold claim. We live in the future. We have cars that drive themselves. They’re all over where I live in San Francisco. I’ve had a small computer in my pocket for almost two decades that has more compute than the first space missions. The future has crept in, but the world has been so chaotic that we haven’t paused to face this fact. Let’s look at a few purposefully wild, but very real examples. We’re seeing a man control Mario Kart using a chip implanted in his brain by Neuralink, Elon Musk’s company. We made it through the crypto craze of 2022, and are seeing the impact of questionable choices made by the tech ecosystem now. In 1993, we experienced CGI at scale for the first time in Jurassic Park.

Thirty years later, we can step into a dinosaur world using the Apple Vision Pro. We’re questioning reality and then questioning it again, because the tools of truth for the age we live in do not yet exist. Can ChatGPT replace me? What jobs will generative AI replace? No, I’ll be fine. We’re wondering what our lives, businesses, and futures might look like when technology can replace what we as white-collar workers, with our titles and degrees, do, and wondering when. Which reminds me of this meme. We question the sanity and the reality of the whole thing. Sometimes it’s funny, but also, WTF? We’ve gone from the science fiction visions of the 1970s to the 1990s, through the booms and busts of the 2000s, the excitement and rampant funding of the 2010s, and the hype cycles and swirls of the 2020s, and we’re less than halfway through. We’ve now sailed into a new region: the future.

The future is a new place, one with a new map, different from our past, different from what we are used to. When it comes to emerging technology, the map is non-obvious. We read headlines and don’t know exactly how new technologies like AI will evolve and how they will impact our businesses, lives, and governments. It can feel overwhelming to understand, to keep up, and to stay at the forefront. Instead of getting swept up in the overwhelm of what might be, let’s take this moment as an opportunity for a collective pause.

Designing a Computer Mouse: Affordable Personal Computing Systems

The new map of the world is being drawn right now. What future do we want to live in? How do we want to harness emerging technologies to drive that future? I think that emerging tech can make things much better for people; it all depends on how it’s designed. Let me share a story that represents how design has historically drawn the map of our shared technological future. It’s 1981 in Palo Alto, California. As one of IDEO’s founding members, Dennis Boyle, tells it, a young man walked into what was soon to become IDEO. He ran a company that had the name of a fruit, and he was building what he thought would be the future of computing: the first personal computer.

From the stories I hear from folks who were around then, this guy had great taste, but he yelled a lot. He had a huge vision and even higher standards. That man was Steve Jobs, and the company was Apple. He was looking to create the first affordable personal computing system, and he needed a way for people to interact with it: what he called a computer mouse. The team accepted the job and set out in search of inspiration. They didn’t have to go far. In the 1980s, there were a lot of arcades. These arcade games had some pretty interesting control systems: buttons, knobs, and, most notably, a trackball. That trackball was a really nice way to scroll around a screen. Remember, this was before scrolling was really a thing. I’ve heard from some sources that an early inspiration was the roller ball from a deodorant bottle, and that the first prototype was made from a butter dish. When you were designing a computer mouse before anybody knew what a computer mouse was, you had to find inspiration in unexpected places.

Using this inspiration, they started to design. Breaking down the component parts, figuring out what it would be like to hold a mouse comfortably in your hand, control a computer, and have it sitting at your desk. They made tons of prototypes and got creative with how they tested these mice to make sure that they really worked, since most people didn’t have a personal computer at that time. In those days, people were using a lot of pencils and erasers. Those designers did some testing where they strapped the prototype to a turntable, which was also something people had a lot of at that time, and sprinkled in pencil shavings and dust in order to figure out what types of mechanisms a mouse owner would need in order to open up that casing and clean all of that out. Anyone here remember having to do that?

The team thought that it was hilarious that this thing he was trying to make was called a mouse, so this was just a fun sketch of an actual mouse as a computer mouse. Here we finally have, in 1981, the first Apple mouse. I start the stories that I’ll share today with the past to say that, for a long time, designers have been involved in the process of making technology more useful to people. Without design, without creative thinking, technology is 1s, 0s, and scraps of metal. Using creativity, we unlock new ways of doing things, new ways of playing, connecting, and driving society forward. Being creators of technology means that we are shaping how humans and future AIs do all of those things. That’s a big responsibility, and one that I hope we all take seriously.

How Do We Want to Interact With AI?

Because of our history, when the generative AI boom came around, we knew that there would be a significant role for design. Today, many people interact with GenAI like this. We are in the early, awkward days of this technology. The early days of any new technology are heavily influenced by what came before. Chatbots have existed since the early days of the internet. Another early example of technology interactions being influenced by what came before: the Voder, which produced synthetic speech. It was designed to be interacted with like a piano, because that was the interaction paradigm familiar to early designers. Generative AI was deployed to us at scale not too long ago. We’re in the awkward transition moment before it actually gets really good. As designers and future makers and technologists, we wanted to challenge ourselves to imagine a way to get all the way out there, to a whole new paradigm.

In the AI space, we’ve seen so many people experimenting with form, not necessarily successfully. We’re in a defining moment, and with new possibilities, we get to rethink how we use technology. We asked ourselves, how do we want to interact with AI? We did what designers do: we went out in search of inspiration. We found inspiration in the movie Her, an emotional, handcrafted future where Theodore Twombly literally falls in love with his AI, Samantha. What if we made AI that we really felt connected to? Another piece of inspiration: the Calm Technology movement.

This movement is all about unobtrusive, helpful technology that’s embedded into our everyday lives, like this New York Times daily changing front page that is also a picture hanging on your wall. What if our AI was so useful that it faded into the background of our lives, there when we needed it, not the centerpiece of our existence? We recognize that generative AI allows us to interact with technology more naturally, through a wider variety of inputs, such as voice, music, images, or data, to produce a wider variety of outputs.
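As a minimal illustration of that wider input palette, here is a hedged sketch of a single multimodal request that mixes text and an image, using the OpenAI Python SDK as a stand-in for any multimodal model; the model name and image URL are placeholder assumptions, not anything from the talk.

```python
# Minimal sketch: one request mixing image and text input, assuming the
# OpenAI Python SDK. Model name and image URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any multimodal chat model would do
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the mood of this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```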

Gen Z and GenAI

Before just going out and creating things because of what was technologically feasible, we went out to some of the people who are going to be most impacted by generative AI but have the least to say about how it’s going to be designed: youth, specifically Gen Z. Gen Z is coming into purchasing power and even starting to have major influence over culture. They’ve grown up completely oversaturated with technology, especially living through a pandemic, which increased their reliance on technology for their social lives and school. As they enter the workforce, their entry-level jobs are the first to be called into question over which functions can be automated or augmented by AI. Let me tell you, as somebody who is not Gen Z, but is sometimes influenced by their fashion choices, I’m incredibly impressed.

The insights we found genuinely point to a much healthier way for anybody to interact with technology. Let’s hear about what we did. Doing research in emerging technology means that we have to use more experimental design methods to get answers. We’re not testing things that people have seen before; it’s all brand new. What we wanted to do was work alongside Gen Z folks, have conversations, put some ideas in front of them, have them build on those ideas, and have them tell us where to draw our lines and where to push further. We got ourselves into some pretty scrappy experiments to learn more. We did social listening, scraping social media videos and comments to understand how people were talking about GenAI organically online. We created design provocations, early concepts that illustrate potential form factors for future GenAI systems. We did co-creation sessions, bringing together a youth council of Gen Zs to guide the design and our insights. Of course, some of the researchers on the project were Gen Z themselves.

In the first round of this research, we created six design provocations. These are early ideas that are meant to be really experimental, provocative, and speculative. We don’t have the answers, but as IDEO does, we always build to think. We built early speculative ideas in order to get more tangible, to learn, and to put them in front of youth to see how this might change how they view their own futures. Our concepting phase helped us synthesize our learnings from social listening and provoke our thinking on what future solutions are possible. I’ll share a couple with you.

This is the passion coach. Think of it as an AI service that uncovers your skills through games and conversation to suggest interests that fit your strengths: what you could be good at or what you might be passionate about. Once you have a learning goal, like learning a craft, acquiring a skill, or learning about a topic, it can generate a personalized curriculum and respond to you like an instructor, helping you learn at your own pace. Another concept: an AI friend who is part of your friend group and helps the group interact and connect better. It’s able to do things like resolve conflicts among the group, suggest personalized fun activities for the group to do together, and even help you stay connected if you move to a different city, reaching out and getting updates on what your friends are up to.

The most interesting part of this concept was not the concept itself, it was how we tested it. We had IDEOers pretend to be an AI. Imperfect, jittery in their responses, they acted like an AI. Before we were able to actually prototype and test with the technology itself, we had to get creative with our research methods. Since the interaction paradigm we were testing was chat-based, but it wasn’t possible for us to have ChatGPT in an actual group chat, we made a really robotic human that we had Gen Z research participants interact with. That gave us insights about the technology before it existed.

Finally, we put these sacrificial concepts in front of Gen Z so they could unpack them and tell us where to push further and where to draw the lines, which led us to a few principles for designing AI experiences with and for Gen Z. The first principle: there’s great appreciation for the efficiency and productivity that GenAI can enable. However, when this takes away from the potential for human connection, for example, the relationships you might form by collaborating with others to solve problems or create a product, it was not exciting to our research participants. The principle of “deepen my connections, don’t close them down” encourages us to really think about how GenAI can balance solutions while also ensuring greater connection to other people.

This one is pretty easy to understand when you think about the fact that this generation grew up with firsthand experience of the negative mental health impact of the unreal perfection reflected through social media. Honor the messiness of being human. Don’t pretend we’re perfect. When it comes to GenAI, this cohort is actually quite worried that it will be an even bigger step towards setting unrealistic expectations and causing even more harm in the long run. We heard Gen Z say, “I want to go on the bad date or the terrible vacation.” This was something we heard over and over again: Gen Z is not looking for the perfect life. They want to experience all the facets of life, the good and the bad, and they want to experience it for themselves, not through an avatar, so they can build their own intuitions.

The question we leave you with on this is: how can GenAI be leveraged to really celebrate what makes us human, versus erasing all that’s imperfect? Doing this research created the foundations of how we design generative AI experiences at IDEO. It’s public; you can access it through this QR code. Having all these methods and learnings makes it possible for us to work with clients who are interested in understanding what youth think about generative AI. These insights genuinely point to a healthier way of interacting with technology, one that’s applicable to all of us. We used these insights when designing Ethiqly, a GenAI EdTech venture.

Ethiqly – GenAI EdTech Venture

Mid-2022, before GenAI reached the full hype cycle, a few Silicon Valley veteran execs approached us. They wanted to explore a new EdTech venture. They had some ideas about how technology could make education better, but they had little exposure to generative AI. You probably remember that right when ChatGPT was released, you could almost feel a collective shudder from teachers all around the world. A piece in The Atlantic, penned by a veteran teacher, called this latest AI innovation the end of high school English class in the U.S. He claimed this would be as if the printing press, the steam drill, and the light bulb had a baby, and that baby had access to the entire corpus of human knowledge and understanding.

Rather than leaning into those fears, rather than asking how generative AI might be used to replace writing in schools, they asked: now that these tools exist, how does that change what it means to be a student? Together, we designed Ethiqly, an education technology venture built for high school teachers and students. Through our research, we uncovered the fact that learning to write essays is not just about the words on the paper. It’s about what you learn about yourself along the way and how your teacher is able to help you uncover your own capabilities in the process. Sounds like our earlier principle of honoring the messiness of being human. It was amazing to see how the earlier research I shared, which was proactive and exploratory, folded into design work on a real-world problem. This is Ethiqly. It’s a shared digital hub for students and for teachers. It’s built on a few key insights, which in many ways tie back to the work we had done with Gen Z and how Gen Z wants AI to nurture their relationships and their lives and keep them authentic.

For teachers, Ethiqly means less time correcting and more time connecting with students. Teachers in many public schools in the U.S. have hundreds of students, and that means hundreds of essays to grade. There’s no way that they can really get to know every individual equitably and deeply. Ethiqly provides personalized assignment reviews in a fraction of the time, with detailed, contextualized feedback recommendations powered by AI and edited by teachers.

For students, Ethiqly is a writing companion from beginning to end that breaks down the writing process into easy, self-paced steps, so students can really find their own voice and figure out what they want to say through their essays. In our research, we found that it increases their writing confidence and also their grades. A feature native to this generation of GenAI is the talk-it-out feature, where you can talk out loud about your point of view and get a set of themes from which to build your essay. It’s really like having someone next to you who can listen to you, understand you, and give you great ideas.

We tested and prototyped throughout the process, hearing loud and clear from teachers and students when we went off course. Building with and for an AI tool lets more of the team engage in prototyping. After all, using an LLM means expressing the intent of the product in plain language. While our prototypes were built by software designers and data scientists, the whole team was able to understand and weigh in on the core of how the AI tool works. We wrote the code much faster than we could before.
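To ground that idea of expressing a product’s intent in plain language, here is a minimal sketch, not Ethiqly’s actual implementation, of what a talk-it-out-style prototype could look like: the product’s behavior lives in a system prompt that anyone on the team can read and revise. The prompt wording, function name, and model are my assumptions.

```python
# Sketch of a plain-language product prototype: the behavior lives in the
# system prompt, which non-engineers can read and edit directly. The prompt
# text and model are assumptions, not Ethiqly's real internals.
from openai import OpenAI

client = OpenAI()

PRODUCT_INTENT = """You are a supportive writing companion for high school
students. The student will talk through their point of view out loud.
Listen, then return 3-5 themes they could build an essay around, each with
one sentence explaining why it fits what they said. Encourage their own
voice; never write the essay for them."""

def talk_it_out(transcribed_speech: str) -> str:
    """Turn a student's spoken thoughts (already transcribed) into essay themes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": PRODUCT_INTENT},
            {"role": "user", "content": transcribed_speech},
        ],
    )
    return response.choices[0].message.content

print(talk_it_out("I keep thinking about how my grandmother's recipes connect our family..."))
```

Because the core of the prototype is the prompt, designers and researchers can weigh in on it directly, which is the point the talk is making about faster, more inclusive prototyping.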

For example, we were able to test that talk-it-out feature in scrappy ways like this really early in our design process, with students, before dedicating too much time to designing and coding production features. By testing prototypes throughout the process of creating Ethiqly, we heard loud and clear from teachers and students when we went off course. As we crystallized around the final idea, we heard that they love it, since it supercharges teaching and learning. In some ways, this is the printing press, the steam drill, and the light bulb having a baby.

The possibility for hardworking students and teachers is actually really thrilling. Ethiqly is now a live venture being used in over 35 countries around the world. The story of Ethiqly is one that highlights how we can use emerging technology to genuinely make the experience of being a student and a teacher better. I wanted to share this story back-to-back with the Gen Z research to illustrate the power of co-design when we design emerging technologies like AI. With technologies that are so highly scalable and so impactful, it’s incredibly important that we lay the right foundations for the future that we want to live in.

Now hear it from the voices of the folks that we worked with instead.

Teacher: There are 140 assignments that I need to get through. As much as I want to give actionable feedback and quality feedback to these students, it is a bit time consuming. I wish that there was a better system to be able to do it.

Student: Sometimes it’s difficult if I’m given a prompt that I don’t know much about. That’s when I have to research it and figure out, what do I think of this?

Teacher: A lot of times, writing an assignment or a prompt can be very daunting for a student. I try to break it down into smaller pieces so that they could focus specifically on each step. With the help of AI, I think it gives me a better picture of what my students are doing holistically. Being able to get a holistic score from an AI, which is not a complete judgment on what the students are writing about, but taking a look at the nuanced mechanics of writing. Then being able to get that feedback and transform that into feedback that I can actually give students.

Student: When someone else asks me why I wrote a certain thing or why I want to include it, it just makes me think a lot and makes my writing more special.

Teacher: While it gives a holistic look, I can actually figure out the amount of depth and time that I need to dedicate in order to support my students. That also definitely cuts down on the amount of time that I spend grading, which I think is a big plus. With leveraging AI more in education, I think that there’s this fear that it’s deprofessionalizing the field or that it’s a direct replacement for a teacher in a classroom. I think it gives us meaningful data so that we can actually teach the way that we want to in order to support our students.

Kunovsky: This is all to say that when we design emerging technology effectively, we make things much better for people. Emerging technologies create new value. They give us new interaction paradigms, new ways of unlocking human potential. When harnessed and designed effectively, it’s about making the products, services, and experiences that we offer better for us and for our customers.

Talking to Pants – The Power of Storytelling

After talking to so many humans, we decided to get AI to help us talk to pants. In the design world, we’re all about the power of storytelling. Great provocations and ideas enable us to tell better stories. When generative AI came out, we wanted to figure out how to work more with the retail industry. In order to do that, we wanted to be able to tell stories about the future of the retail industry and how AI might help companies that sell clothes become more climate friendly. We figured that being able to talk to something lets you attach a history to it: by adding GenAI to a pair of pants, you could ask it about its history. If it had a really cool history, then maybe people would be more likely to keep their garments for longer instead of throwing them away. Let me tell you a story. If there’s one main takeaway from this, it’s that we think we’re going to be talking to a lot more non-humans on a regular basis.

If that didn’t quite make sense, by the end of this segment, I think that it will. This concept started because we were thinking about the changing trend in how people decide what they’re going to buy. We found that people are shopping based on their personal values. Some research from Forbes suggests that Gen Z is the first generation for whom personal values matter more than brand loyalty when picking a product. When values are bigger than brands, shopping increasingly resembles desk research. It requires a totally different skillset: searching, cross-referencing, and evaluating. A lot of those skills are difficult. The barriers to switching over to that mode are big enough that many people who want to shop according to their values still aren’t able to. Let’s consider an alternative. With GenAI, you can add personality to an object. You can actually start chatting with the jeans like you would with a person.

All of a sudden, you have a new way of learning about other things that are important to you. What do you want to know? There’s the backstory: who designed this product, and who are their influences? There are the deeply serious aspects, like, what’s the carbon footprint of manufacturing the natural materials? How were the workers compensated? Or more practical aspects of ownership, like, how does it like to get washed? There’s the cultural significance: what eras were these most popular in, and who iconically wore them? In a world with an increasing appetite for secondhand, there are many directions to go.
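As a hedged sketch of how such a garment chat could be wired up, not the project’s actual implementation: a tap on the garment’s tag resolves to a product ID, whose metadata becomes a persona prompt for an LLM. Every field name, ID, and line of persona wording below is hypothetical.

```python
# Sketch: giving a garment a conversational persona. An RFID/NFC tap would
# resolve to a product ID; here we fake that lookup with a dict. All field
# names and persona wording are hypothetical, not from the actual project.
from openai import OpenAI

client = OpenAI()

GARMENTS = {
    "jeans-0042": {  # hypothetical product ID an RFID tap might resolve to
        "name": "vintage straight-leg jeans",
        "designer": "an independent denim studio",
        "materials": "organic cotton, natural indigo dye",
        "care": "wash cold, inside out, rarely",
    },
}

def garment_persona(product_id: str) -> str:
    """Build a first-person persona prompt from the garment's metadata."""
    g = GARMENTS[product_id]
    return (f"You are a pair of {g['name']}, designed by {g['designer']}. "
            f"Materials: {g['materials']}. Care: {g['care']}. "
            "Answer questions about your backstory, footprint, and care in a "
            "playful first-person voice. Admit what you don't know.")

def chat_with_garment(product_id: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": garment_persona(product_id)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(chat_with_garment("jeans-0042", "How do you like to get washed?"))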

One of the most tempting directions is plain silliness. How do jeans really feel about khaki pants? Building real personalities into our everyday objects lets us provide for and serve a totally new set of values beyond the current norms of tags and labels. For people, personality makes something memorable and engaging, and makes shopping a bit more like play. For brands, personality can differentiate your products. Even if you’re selling a white T-shirt that looks just like other T-shirts, you can still make a lasting impression. GenAI adds new dimensions to what we sell. It taps into a major value shift for consumers. As a v2, the team took it to an even bigger level, literally, by creating the longest pair of pants that I’ve ever seen. The pants have an embedded RFID chip which you can tap with your phone to chat with the jeans and hear stories of the relationships between real people and their favorite pants. I’m still waiting for the CEO of Levi’s to call me back. That’s enough of talking pants. Let’s move on to a few more stories.

Envisioning the Future (Downtown, Washington, D.C.)

In 2023, the Federal City Council of Washington, D.C., asked us to help them envision the future of their downtown in the post-COVID era. This was around the same time that text and image generation tools got good enough that you could really start to bring the ideas in your head to reality. As we often do, we looked at the past to explore the future. We looked to the retrofuturism of the View-Master, a device created for the 1939 World’s Fair. You have a card with a few images on it that you slip into the device, and then you can see the images by looking through the View-Master, almost like a static VR headset. With the challenges of reimagining and redesigning a physical space, and using the power of Midjourney’s text-to-image capabilities, we created custom films for the View-Master. We used Midjourney to create images of possible futures for various iconic downtown spaces in D.C., turning them into stereoscopic images.
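To show the mechanics, here is a naive sketch, not IDEO’s actual pipeline, of turning a single generated image into a side-by-side stereo pair by simulating a small parallax shift with Pillow; the file names and pixel offset are illustrative assumptions.

```python
# Naive sketch: pasting a left/right view side by side into one stereo frame,
# the basic format a View-Master-style stereoscope expects. File names and
# the horizontal offset are illustrative assumptions.
from PIL import Image

def make_stereo_pair(path: str, shift_px: int = 16) -> Image.Image:
    """Fake a stereo pair from one generated image by shifting one eye's view.

    A real pipeline might render two distinct viewpoints; here we crop the
    same image twice with a small horizontal offset to simulate parallax.
    """
    img = Image.open(path)
    w, h = img.size
    left = img.crop((0, 0, w - shift_px, h))
    right = img.crop((shift_px, 0, w, h))
    pair = Image.new("RGB", (left.width * 2, h))
    pair.paste(left, (0, 0))
    pair.paste(right, (left.width, 0))
    return pair

make_stereo_pair("dc_future_midjourney.png").save("dc_future_stereo.png")
```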

We then hosted a workshop with over 100 stakeholders, and the View-Masters allowed those workshop participants to literally see the future of what D.C. might become. We paired this with a futuring process designed to help them envision desired futures that might feel out of reach based on their day-to-day experiences. As part of this, the team leveraged the View-Masters as a tangible, visual experience that felt both nostalgic and forward-looking. I love the story of the View-Masters because it let us pair our past as designers of physical things (IDEO actually worked on the designs of some early View-Masters) with the most cutting-edge technology of today. You can read more about the work here if you would like to.

AI Assistants – Intuitive, Social, Trusted, Multimodal, and Nurturing

A story from last year. Another moment of creating a deep level of tangibility for the sake of making abstract ideas about what the future might hold, super concrete. How many of you have heard the phrase AI assistant tossed around? In early 2024, that phrase was all the rage. The hype cycle propaganda machine that is Twitter/X made us all think that AI assistants are the future, especially for those like me who live in the epicenter in San Francisco. I couldn’t go into a coffee shop without overhearing people talking about AI assistants. The funny thing was, I don’t think that many people knew what that phrase actually meant. They definitely didn’t know what it meant for people, for consumers, and for how people wanted AI assistants to integrate into their lives. We did what we do best, and we started to design and research.

Intuitive, social, trusted, nurturing, multimodal: some of these aren’t words we hear a lot about generative AI, particularly not in online conversations. Instead, when we conducted an experiment to synthesize general sentiment about AI trends and emerging AI case studies, we found pretty mixed reviews. Different people had different needs and expectations, and the future of AI is not one-size-fits-all, or even custom in the ways that we understand it today. Instead, it will be defined by the proliferation of personalized, adaptive, and specialized solutions to a broad range of needs defined for each individual user. Today, users turn to AI assistants to explore, with an understanding that the outputs need to be reviewed with a critical eye. Widespread adoption of AI assistants in everyday life will require that users can have confidence in their outputs and their actions.

To succeed, we believe these products will need to be five things: intuitive, social, trusted, multimodal, and nurturing. We designed some provocations, going into our shop to make them physical, to illustrate how. I’ll show you a few. The first, Look-Out For Me. With the rise of misinformation and AI-generated content, Loupe monitors across mediums to let you determine authenticity, whether something is AI generated or not. Whether you’re looking at a book excerpt, a video on your phone, or a street billboard, it offers glanceable indicators to inspire confidence. Next, Sift Through For Me. Instead of providing a direct answer or overwhelming you with responses to questions like “what should I have for dinner?”, Magic Ball consolidates and sifts through information from multiple sources, offering just enough of a response to inspire you.

Last, Summarize For Me. Balancing family dynamics involves managing schedules, understanding motivations, and interpreting behaviors. Barometer is an ambient assistant for the home, summarizing these intricacies and acting as a command center for family togetherness. These three are part of a larger set of provocations we created, rooted in research about what people are truly looking for in their AI assistants. You can read the full piece here.

Designing Sustainable Toys with GenAI Tools

Last, a story from last year. It’s a story about how generative AI tools completely changed the game of how we do research during a sprint in our Play Lab to uncover new, more climate-friendly materials for children’s toys. Instead of me sharing, I’m excited to have you hear it from my colleague Tomocini, who led this work and decided to pretend to be a YouTuber for this video. I think that he did a really great job.

Narrator: Welcome to Toyland. In Toyland, wooden animals come to life with magic and adventure, ready to journey with you to the real world. Imagine holding a magical animal in your hands. You can add wings to make it a flying unicorn, or a horn to create the strongest unicorn ever, maybe even a cape for new, amazing powers. Together, you and your unicorn will explore incredible places, solve challenges, find courage, make new friends, and discover magical secrets. The magic of Toyland, where every toy has a story and you are the hero.

Tomocini: This is Tomocini, a business designer from IDEO’s Play Lab. What you just saw is a video we made using only generative AI tools. The animation was made on Runway. The voice narration was made on ElevenLabs. The narration script was made on ChatGPT. Most importantly, directed and prompted by humans. Stepping back a little, this project was a very quick design sprint. Our challenge was to design sustainable toys that reduce plastic pollution and increase the play value at the same time. We only had four weeks to research, prototype, and test our ideas. We had to be really quick, efficient, but without compromising the quality of our design. That’s when generative AI tools came in. Broadly speaking, we used them in two ways.

First, we used DALL·E and Midjourney to turn our brainstorm sketches into more realistic concept images. We produced over 40 images and used them in our research as sacrificial concepts. The other way we used these AI tools was to create the video you saw at the beginning. Thanks to Runway, we were able to prototype an animated kids’ show trailer. Again, that video was created in just a couple of hours. The video is not perfect. Obviously, there’s some strange animation, but I can tell you that the kids and parents we talked to watched it and gave us feedback on the content itself. Nobody questioned the strange morphing movement of the unicorns.
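For the curious, here is a minimal sketch of the scriptable half of that pipeline, script generation plus voice narration, assuming the OpenAI Python SDK and ElevenLabs’ text-to-speech REST endpoint; the voice ID and model name are placeholders, and the Runway animation step, done through its UI in this story, is left out.

```python
# Sketch of the scriptable part of the pipeline: an LLM writes the narration
# script, ElevenLabs voices it. Voice ID and model names are placeholders;
# the Runway animation step is not shown.
import os
import requests
from openai import OpenAI

client = OpenAI()

script = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{
        "role": "user",
        "content": "Write a warm 60-second narration script for a kids' show "
                   "trailer about Toyland, where wooden animals come to life.",
    }],
).choices[0].message.content

# ElevenLabs text-to-speech endpoint; the voice ID below is a placeholder.
VOICE_ID = "YOUR_VOICE_ID"
resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": script},
    timeout=60,
)
resp.raise_for_status()
with open("narration.mp3", "wb") as f:
    f.write(resp.content)  # audio bytes returned by the API
```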

Let’s talk about how this was different from our usual projects. First of all, with the advancement of these generative AI tools, we were able to achieve both quantity and just enough quality. To me, the best part was that we could now fine-tune our research content for each user. Before, you would go into user interviews with pretty much the same set of sacrificial concepts. Now, if, for example, we learn from parents about the in-store experience of buying and discovering toys, then we can quickly show a modern-looking, attractive secondhand toy shop like this. Or, if we learn that kids’ shows are a big influence on what toys kids want, then we can quickly prototype a new show, which we couldn’t have done before.

Basically, the medium in which you can prototype is no longer constrained by your ability. You can prototype ultra-fast, so fast that you can even prototype new concepts and show them to users during the user interview, on the fly, in real time, on the go. Go out there and experiment.

Kunovsky: The thing that I love most about this story and this work is that Tomocini is not a visual designer at all. He’s a business designer. His job is typically to design business models and strategies for companies and ventures. Because of GenAI tools like Runway, ElevenLabs, and ChatGPT, he could bring his vision to life in under two hours. Although I definitely feel worried about the ethical repercussions of AI in many spaces, particularly for artists and creatives, it is astounding how we can supercharge people using these tools in our workflows and make our research and design process much more rigorous. Which takes us to today.

Conclusion

Going back to the start of this session, I claimed that we live in the future. As a group of technologists, I’m sure that we all have a lot to say about how the future is designed. I want to leave you with a few questions to consider as you go out and design and build and ship the map of our shared future. With so many fears around automation, death robots, and the apocalypse, what can you do today to design the technology future that you want to live in? Who should be at the table? Which voices, people, and communities are being impacted by the choices made by designers, business folks, and technologists? How can you use your role as a technologist to include those voices and design a better future? As our capabilities to create diverse types of assets broaden thanks to AI tools, and as parametric design, borrowed from architecture, where algorithms dynamically shape user experiences, becomes the norm, will the builders, the technologists, become the next great designers?
