
Stanford Webinar - Forecasting Disruption: The Possibilities and Threats of Technology

Speaker: We would like to take this opportunity to officially welcome you to today's session, Forecasting Disruption: How to Think About the Future in a Rapidly Changing World, with Bill Burnett and Cynthia Benjamin. Now, we'll turn it over to our presenters for introductions. Bill Burnett: Thanks, Annette. That's great. I'm happy to see everybody here. We're going to have a really interesting conversation about the future of technology in this crazy, rapidly changing world. So we've got a couple of amazing-- well, one amazing presenter and then just me. Let me introduce Cynthia Benjamin. She is the co-founder and chief strategy and innovation officer of Together Senior Health, which is a digital health care company working in the Alzheimer's and dementia space.

She's also been a lecturer at Stanford for a bunch of years and has taught a couple of different programs: a fantastic class called ME 101, Visual Thinking, and currently a class, ME 297, called Forecasting for Innovators. And she's going to be leading us through some forecasting strategies for this particular webinar. Cynthia Benjamin: Thanks, Bill. And let me introduce Bill Burnett. Bill is an adjunct professor in the Design Group at Stanford. He's actually been the executive director of the design impact program and the undergrad program for, gosh, a long time. Bill, you're going to have to remind me how many years. Bill Burnett: About 15. Cynthia Benjamin: About 15 years. And before that, Bill had a distinguished career as a designer and a design leader at Apple, at Kenner Toys, and at several startups.

And so he has a depth of experience in design and design education. He's also a co-author of a book called Designing Your Life, which came out of Stanford but really resonated deeply across a wide swath of folks, from people thinking about what they're going to do coming out of college to what they might do in their encore career. And then a second book, Designing Your New Work Life. So Bill, really happy to share this webinar with you. Bill Burnett: Yeah, this is going to be a lot of fun. In the Designing Your Life curriculum, we think about five-year futures. And that's actually what we're going to be talking about here, but we're going to be talking about it through the lens of design thinking and also technologies that are impacting our world today. So most of you have probably heard the phrase "design thinking"-- or maybe "human-centered design" is what we used to call it. David Kelley, who started the d.school at Stanford, our institute to teach design thinking to basically the whole world, said, "Design thinking is all about unlocking potential and the creative confidence of our students." David's book, Creative Confidence, which he wrote with his brother Tom Kelley, was also a big hit. And although the d.school started in 2006, our program in design goes back to the '60s-- all the way back to even 1957, when we taught the very first class in design. And it's always been a mixture of design, art, psychology, sociology. We like to think about designing, and the designer's way of thinking, as a powerful problem-solving tool.

It's a process. We say you don't start with a problem. You start with people. You start with empathy, and you really try to understand deeply, what are the issues that people are facing? And what are the ways in which you might solve some of those problems? We're going to talk a lot about technology and technology futures today. But both Cynthia and I share the opinion that lots of startups start with a technology first and then go try to figure out who needs it. In the design thinking process, we would do exactly the opposite. First, we try to understand, what's the real deep human need-- and not just the need that people describe when you talk to them? What's really going on, past what they say and do, to what they really think and feel? So we start with empathy. We redefine the problem. We come up with lots of ideas, because we know if we have lots of ideas, we'll have better choices. And then there's the whole idea of prototyping, or designing little experiments to tweak the future and see what it's all about. Prototyping and testing is the strategy for building our way forward. So design thinking is kind of the undercurrent of everything we do in the Design Group at Stanford. It's interesting. It's a dynamic approach to problem-solving, because we use lots of techniques like brainstorming and mind mapping-- and we'll talk about some of those today-- but it's also using prototyping and rapid cycles of iteration and experimentation to figure out how to improve the innovation outcomes. But more than that, it's a really great approach to problem finding. And one of my favorite quotes from Peter Drucker, the business guru and management professor, is, "There's nothing quite so foolish as doing something very well that never needed to be done in the first place." And that describes a lot of the technology startups we see in Silicon Valley that start with some cool piece of technology and never find a need for it.

So problem finding is where we think there's the highest leverage, when we reframe problems based on human needs. And we use ethnography, empathic observation, and other techniques that we borrow from anthropology and sociology to truly understand what users need and what's the best problem to solve. And because of that, we believe it's become a new perspective on value creation, because when you can match a human-centered need to an actual technology and to a market that's large and robust, you have a really fantastic way of creating innovation and value. So today, we're going to talk a little bit about this idea of forecasting and design thinking, because the needs and design principles that we use to create startups, or new products, or new actions in the marketplace-- these change over time. And certainly, we're in a period right now, with things like AI, ChatGPT, robotics, and automation, where things are changing rapidly. And so the needs of people, consumers, organizations, everybody-- the needs are changing rapidly over time.

And if we don't have some way of putting kind of a structure around this, or at least a framework around how we think about the future, we can really fall behind. And so forecasting changes and tracking technology, we think, is really critical to any design or innovation strategy. And that's why-- oh, I don't know-- you and Paul Saffo have been teaching this class for over 15 years in the Design Group. We really support the notion that we have to not just think about what people need today, but what's going to change as technology evolves in the next 5, 10, even 50 years. So that's what we're going to be talking about. And Cynthia, you're the expert on this subject. Why should innovators and people who might want to come to the workshop we are going to be running in March-- people who might be the CTO, or the person in charge of the innovation organization at their company, or somebody who's an entrepreneur thinking about the application of new technologies-- why is it important for them to think about the future? Cynthia Benjamin: Well, it's really important for a lot of reasons. If you want to be competitive, if you want to bring real innovation out to the marketplace-- everything is connected, right? It's both obvious and not, right? A lot of times, we think about our innovations in isolation. Like, if I just do this cool thing, this better mousetrap, well, people will beat a path to my door. But in real life, everything is connected, and so it's important to think strategically about the future. And you'll have a real advantage if you understand the context of your innovation, both how we got to this place-- the history of it-- and possible futures.

One of the reasons that we apply design thinking tools here is the very first thing Bill talked about in design thinking. These are tricky problems without a lot of boundaries, and design thinking tools can be really helpful in that context. And if you want to build sustainable solutions that are around for a long time, not just for today, it's important to think about the context of your innovation. And the biggest problems are pretty complex. If you want to have real impact in the world, applying these kinds of tools to bring some boundaries to the problem, to think creatively about alternatives and not just build on the first hypothesis that pops into mind-- these are great tools for thinking that way.

And then lastly, this last bullet is kind of important: "The future can have a range of possible outcomes," right? A lot of times, we think about forecasting in the same way we think about prediction. And it's not about predicting the future, because we don't have a crystal ball. And in fact, the actions that we take today can impact the future. Actions that we take tomorrow can impact the future. So there are a range of possibilities out there, not just one. And so if we're thinking strategically about how we want to play in the future, how we want to win in the future, how we want to influence the future, it's important to understand that range of possible outcomes and that cone of uncertainty that I'll refer to again later. And then you can decide where you want to play and where you can have a real advantage. So let me just put a little bit of a framework out here that we'll talk through today, and I'll talk through an example as well.

A lot of times, when we start thinking about forecasting, we will pick-- we will define a central question. And what I mean by that is, what's the area that you want to be thinking about? A lot of times, we think, oh, well, what's the world going to look like in 20 years? Which is an interesting question, but it has so many variables that it's hard to wrap your head around, much less make some predictions that can be useful. So we want to make this useful. We want to make this something that you can take away and take some action on. If you're thinking about what the world is going to look like 20 years from now, that's an interesting conversation, but it's not anything that you can really bring into today and take action on. So we want to put some boundaries around what the question is that we want answers to in this particular exercise. The next thing that we spend a fair amount of time doing in class-- and obviously, we won't spend a lot of time on it today-- is talking about the context of the space that you're looking at: figuring out how we got here, doing some research, doing some background, and/or bringing in experts who know the space that you're playing in. It's important to understand the context so that, one, you don't reinvent the wheel. We see a lot of people coming out with some great innovation, and it turns out somebody already did that. And in fact, maybe they failed a few years ago, and you could have learned something if you had just understood what happened with them.

So kind of getting the lay of the land. Then look around for the drivers of change in the space. What are the key elements that are going to drive change in the space that you're interested in? And pull out some insight into which ones are important. Which ones are going to have the biggest impact? Where's the biggest uncertainty in the space going forward? And extract those out. So then you can think about what possible outcomes there could be. What are the different futures that could happen in my space? Which ones are most likely? And putting some structure around that can be helpful, so that then you can think about your options and take action. Why do you care? How will you participate in these futures? If the range of possibility is vast, how will you manage that? How will you learn about the future? What should you be looking for between here and there to understand which of these possible futures is likely to emerge? So we have this structure that we use in the class that Bill referred to, ME 297, Forecasting for Innovators.

And we'll take students through this week by week. And we work in teams. It's a great way both to explore a new area, if students want to explore someplace new, or to go deeper into an area of expertise. So I'm going to take you through this process, and we're going to use an example as well. So the first thing, as I mentioned, is the central question: broadly defining the space with a grand aspirational vision. And the characteristic of a good central question is, essentially, to make it interesting. You're not hypothesizing a future, but you are thinking about what's a good question to ask that's going to stimulate thinking and be thought-provoking.

It depends on this. It depends on that. It might be a hypothesis, but the point is to generate learning at this first stage. So we're going to talk about robotics for this example here today. And a question that people might ask would be, what does the future of robotics look like? Which is interesting, but that's kind of a researchy question, and it's kind of boring. How about a question instead of, "When will a human marry a robot?" I like that question because it makes you kind of stop and think for a minute.

Like, what would that mean? What would that look like? And it could stimulate a fair amount of debate. Some people might think, well, that's pretty close in. Some folks might think, well, that'll never happen. But what it does is get you thinking about what the next questions are. And the next question should be about context. Like, how did we get here? Understanding how things have changed in this space. The design tools that we use typically solve problems for today and can be super useful for need-finding, understanding users, understanding the issues. But in designing for tomorrow, some of these design tools allow us to understand the history of this space-- the technology that is in place today and the rate of change, which can actually be pretty important. When we think about innovation and technology, we're often optimistic that this is going to happen really fast. But some of these questions about robotics that we're looking at here today in this example are the same questions we were asking 10 years ago, and 10 years before that. And so the rate of change is really important. The rate of change around some of the technologies is pretty rapid. The rate of change around some of the cultural factors, some of the ways that we use robotics, can be much slower. And when you think about how those are connected or linked together, that's where you really start to understand, what are the factors that could either accelerate or potentially impede adoption of this technology that we're working on? The other reason to look at context: don't reinvent the wheel. Oftentimes, there are some things out there that you may not have been aware of that you could build off of, or leverage, or learn from.

The next piece is where we're going to spend a little bit more time here today: thinking about driving forces and the drivers of change. We like to think about three types of driving forces. We're going to start at the bottom here instead of the top. Technology is where a lot of change happens, and it can be quite rapid. That's where a lot of innovators focus heavily, but we might then overlook use cases or culture. These are the things that need to align to bring adoption or change over time. And all of these things will feed into the innovation going forward.

So let's start here, talking about technology. Let's go back to the example that I brought up of a central question: "When will a human marry a robot?" So, the technology that needs to be in place-- imagine yourself out there in the future, or one of your children, or somebody coming to the place where they would consider marrying a robot. What are the scientific breakthroughs or the technological advances that need to have happened to make a machine functional as a human partner that somebody would want to marry? So Bill, let's chat a little bit about that here. What are some of the technologies that you think might need to be in place to marry a robot? Bill Burnett:

Well, you'd probably want to have, certainly, some kind of a physical instance of the thing, so that it wasn't scary or weird. You'd probably want to have something you could talk to-- you wouldn't have to type input or something like that. You could talk and have a sort of normal conversation of some sort. And if you're thinking about a life partner, versus just a robot that cooks and cleans or something, then-- I mean, I'm kind of a hopeless romantic. I want something with a little soul, with a little bit of uniqueness, with something interesting or curious for me to learn about, because I don't want a machine. A machine is predictable. I want something that's a little more, well, human. Cynthia Benjamin: Yeah. So yeah, there are a bunch of technologies that would need to come together to even create something that we could start to think about partnering with over time, right? Some things as straightforward as sensor technology: can this thing make its way through the world? Does this thing recognize that I am its partner? So vision technologies, communication technologies, speech, language. Battery power-- like, is my partner going to get plugged into the wall, or is my partner going to be out in the world with me? Mobility. These are some basic building blocks of this thing that I could partner with. So there's a lot of technology that would need to be in place here, for sure.

So let's think about use cases then. What I mean by use cases are these kind of intermediary products, or services, or functions that would need to come together to create a desirable life partner. So Bill, you mentioned empathy, but also some kind of companionship, right? Or intimacy, right? That's a use case that would combine some of the technologies that one might expect in one's life partner. Household support service is another use case-- like, picture the robot in The Jetsons, right? Wouldn't it be awesome if we all had that kind of household support? Other use cases-- what do you think, Bill? Bill Burnett: Well-- Cynthia Benjamin: How might we use robots in this interim period? Bill Burnett: Yeah, well, you mentioned mobility.

I mean, I want to travel with my partner. I want to get on an airplane and go places. I want to rent a car, and I want them to be able to drive it-- or maybe it'll be an autonomous car, and they can just jack in and tell it where we want to go. But if you think about all the emotional aspects of companionship, it gets pretty tricky to think about creating a machine that has that kind of emotional depth. Cynthia Benjamin: Yeah, it does. It does. So let's actually start-- let's also think about some of the cultural drivers of change. What are some social, or political, or legal, or regulatory issues that might emerge as we think about people marrying machines? This is where things get really pretty tricky. Like, would it be legal to marry a machine, right? Bill Burnett: Yeah, 'cause-- if you think about the legal thing, I mean, we assume that when two people get married, they can give consent. I want to marry this person. I'm not being forced to do so. I want to marry this person. How can a machine give consent? We don't have-- I mean, doesn't that require-- Cynthia Benjamin: Free will?

Bill Burnett: --or consciousness, or intelligence. And I mean, if you look at some cultures like Japan, where robotics has been adopted for eldercare and other things, they might be more open to this. But I think in the US, people would be kind of freaked out if I was sitting at a nice restaurant having dinner with a robot. Cynthia Benjamin: And why would that-- and does the robot actually have free will to marry you, as opposed to being your servant? If they don't have free will, it starts to open up all sorts of interesting cultural, societal questions. Bill Burnett: Yeah. Cynthia Benjamin: Yeah, when-- Bill Burnett: Can a robot file for divorce? Cynthia Benjamin: Yeah, would a robot file for divorce? Or what about procreation? Are you going to have children with your robot? Whoa. What does your legacy look like? Will you even need a will, because your robot might live forever, and you, obviously, won't? You have a human body.

So all sorts of interesting questions start to emerge around the cultural side of this question, whereas if you're just thinking about the technology side, you might think, yes, we could physically build a robot that you could marry-- but wow, what's the cultural setting of this? So I think this is a great way to think about, what are some of those underlying drivers of adoption, or things that might impede adoption, of robotics in general? So even though we're focused on this question of a human marrying a robot, we're also eliciting all sorts of questions that are relevant even if I'm not looking at marriage-- if I want to understand the future of robotics in general-- because all of these things will come into play as we as a society adopt more robotics into our world.

Bill Burnett: Well, this stuff has come up in factories, right? Where robots are working side by side with human workers. And the human workers are unionized, and the robots are not. And then there are all these debates. Is it even safe to work next to a robot, because the robot has no sense that the human being is fragile? I mean, I love this forcing question because, although marrying a robot may be a little bit extreme, you can easily downselect to situations we're already in, where robots are working. And Amazon wants to have all their factories full of robots, but there are still people there. So how do we navigate these important cultural and use-case issues? Cynthia Benjamin: Right. There are also, I think, some great examples out there of places we've gotten stuck at this technology level, right? Thinking that technology is going to drive change all on its own.

You and I were talking earlier about the VR glasses, right? Google's done these glasses. Apple's done the glasses. Snap has done these glasses. And it's this really cool technology, but nobody's adopted them. I think it's because, one, there haven't been really good use cases for them. There are some very niche use cases around medical robotics and things like that, where it could be really relevant, but there aren't a lot of use cases. So people haven't started to use them and gotten comfortable with them. And from a cultural perspective, we, kind of as a group of people, have not yet decided that walking around with computers on our faces is comfortable, or useful, or societally acceptable. So no matter how great that technology might be, it's not adopted. Bill Burnett: Well, it's also a perfect example of looking backward in technology. People think Apple's Vision Pro--

Wow, it's brand new. Or the Oculus-- Meta's stuff is based on the Oculus headset. But I had friends in the '80s who started a company called Fakespace Labs. And they were doing VR with two CRTs-- literally, two televisions on a balanced boom that you could put your face in, and you could have the exact same experience you're having today. And all that's happened is the technology got faster. It got smaller. It got lighter, but still-- Cynthia Benjamin: Not enough. Bill Burnett: --and in whatever that is-- 50 years-- nobody's come up with a reason that "I want to do this VR/AR stuff," and that it would be acceptable in our culture to be wearing, like you said, a computer on my face.

Cynthia Benjamin: Yeah. Bill Burnett: So we've got a long way to go. Cynthia Benjamin: We've got a long way to go. So let's pop back to some methods here and think about what you do with these drivers, right? There are lots of ways to think about generating this list of drivers of change. One is a discussion like we're having. Another might be a tool like an idea map that we use a lot in design and design thinking. Bill, do you want to take just a minute and talk about how you might use this kind of tool? Bill Burnett: Sure. Idea maps-- sometimes they're called mind maps-- are just a great way of really loosely exploring an area, using both your intuition and your sort of creative mind to quickly map out all the ideas that are connected.

So you have some idea in the center-- in this case, it might be robotics and autonomy, or a robot that you can marry. And then off of that, you brainstorm a few options. And then off of that, you brainstorm options off of each of those. And as the mind map gets bigger and bigger, it brings in more and more of the different domains. And it's not limited to just the technology piece. A whole part of the map might just be around the use cases. A whole part of the map might be around the cultural or social issues. And it's just a great way for a team to really flesh out all of the connections and interconnections between the central idea and lots and lots of other ideas. And it's a great way to get teams brainstorming together as well. Cynthia Benjamin: Yeah, great. And here's what that looks like for the example we've been talking about, where a human marries a robot.
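The speakers work with post-its rather than code, but the idea-map structure Bill describes can be sketched as a simple tree: a central question at the root, branches for each domain, and leaves for individual ideas. The topic and branch names below are illustrative assumptions drawn from the conversation, not an official tool.

```python
# A minimal sketch of an idea map (mind map) as a nested dict.
# The central idea sits at the root; each branch spawns sub-branches.
# All topic and branch names here are illustrative assumptions.
idea_map = {
    "When will a human marry a robot?": {
        "technology": {
            "sensors": {},
            "speech and language": {},
            "battery power": {},
            "mobility": {},
        },
        "use cases": {
            "companionship": {},
            "household support": {},
        },
        "culture": {
            "legal marriage": {},
            "free will": {},
            "consent": {},
        },
    }
}

def leaves(tree, path=()):
    """Walk the map and yield (branch, leaf) pairs, so a team can
    regroup the post-its by their top-level branch."""
    for node, children in tree.items():
        if children:
            yield from leaves(children, path + (node,))
        else:
            # path[1] is the top-level branch under the central question
            yield (path[1] if len(path) > 1 else path[0], node)

for branch, leaf in leaves(idea_map):
    print(f"{branch}: {leaf}")
```

Grouping the leaves by branch mirrors the color-coding Cynthia describes next (blue for technology, orange for use cases, green for culture); the data structure just makes the regrouping step mechanical.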

So what we did was start with an idea map and generate lots of these different elements. Now, I wouldn't expect anybody to actually read all of the little tiny things on this screen, but I wanted to share this as an overview of how you might use an idea map-- to start generating things, put them up on a wall, and then you can start to group them. In this case, the blue ones-- we started thinking about what the technologies were. The orange ones: what were some of the use cases and applications of robotics? And the green ones were about society and culture. But then we started rearranging them on the wall, because the last thing you want to do in a brainstorming session is put a bunch of post-its on the wall and walk away. It feels great in the moment, and then immediately it becomes useless. So in this case, we started rearranging these and started thinking about what the linkages were, how they were connected, and what were some of the most important elements that we wanted to consider going forward as we were building a forecast in this space. You can see things were circled. There are lots of arrows. Some things were underlined. And those were the things that we took forward, because trying to build a map with 100 elements just doesn't make sense.

So you try to figure out which ones have the biggest impact and which ones have the greatest uncertainty. And the top drivers have both, because the things that we can predict in fairly reasonable ways are going to be the underlying factors in any future. But if you're trying to understand the breadth of possible futures, let's look at the ones that we think will have the biggest impact-- positively or negatively, or speed-wise-- and which ones have the greatest uncertainty, because that's going to give us the breadth of possibility in our cone of uncertainty. So let's just skim quickly through this example. We took the items that were circled or squared on that map, and we started going through them in terms of impact and uncertainty. So here are some of the things that we identified-- and you, as you look at these, might think differently. This is a tool for conversation. It's a qualitative tool, obviously, not quantitative, and there's a lot of room for discussion here. So which ones do you think would have the highest impact and highest uncertainty? We picked empathy. Like, how do you even create empathy from a technology perspective? Huge impact on whether this is going to unfold, how it's going to unfold, and a lot of uncertainty. Learning as well seemed to be high. Companionship, like we talked about earlier in terms of a use case, seems pretty high impact. How do we do that?

What does that look like? Timing-- how could that unfold? And then, interestingly, a bunch of these things under culture. Legal marriage: the legal elements around how we typically think about marriage. And somebody in the Q&A, I see, asks, what even is marriage? Should we be talking about marriage in this space? What would a legal union look like, as opposed to a legal human marriage? It might evolve to be something different or be defined differently. And this notion of free will: can we build robots that have enough free will? And frankly, if they have free will, will they want to marry us?

Right? That's a huge uncertainty and a huge impact on the answer to this question. So because I'm a consultant and a lecturer and all, I like to put a little framework on things. And for me, it's helpful to graph those. So what are the things that have the highest impact and the highest uncertainty? Those are the elements that are most likely to tip the future one way or another. Those are the things in the top right here: empathy, learning, companionship, the notion of legal marriage, free will, relationship, trust. And when I do this, it really helps me to look at things graphically. So when I look at the things in the bottom left-- low uncertainty and low impact-- it's not that they're unimportant, because they are critical.

But the creation of a physical body, and even the development of sensors-- those are, I wouldn't say predictable, but you can foresee a pretty clear future in the development of sensor technology, right? Historically, they've gotten faster, smaller, more sensitive. So I think, compared to some of these other things, that's relatively straightforward. I would call that part of the landscape going forward. I would assume sensors will continue to get smarter, smaller, better. Language: I'm assuming that the language technologies will continue to get better, faster, more useful.

Some of these other questions-- empathy? I don't know what that's going to look like, right? So oftentimes, it is the technology pieces that are in this more predictable part of the landscape, and some of the cultural stuff that is more uncertain, which I think is kind of interesting, particularly for somebody who is a technology person. It's like, oh, yeah, there are all these other things. So how we would work with this next is thinking about order. How do these things relate to each other? I grabbed the things from that top right-- the high-impact, high-uncertainty ones-- and kind of laid them out. I think we have to figure out empathy before I begin to trust a machine. I need to have trust in place before I could build an actual relationship with that machine. There needs to be some sense of relationship. And we need to figure out free will before we define what a legal marriage or a legal union could look like. So we're just generally laying these things out and starting to think about how they connect, what order they're in, and then how they connect to each other. Then start layering in some of these other factors. And you know, we've got physical body, mobility, language, sensors in here. They all need to get dealt with pretty early before we can start to do these other things.
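The impact-versus-uncertainty graphing Cynthia walks through can be sketched mechanically: score each driver on both axes and pull out the top-right quadrant. The method is qualitative, so the numeric scores below are invented for illustration only; a real team would assign them through discussion, not measurement.

```python
# Qualitative impact/uncertainty scores on a 1-5 scale.
# All scores here are illustrative assumptions, not values
# from the webinar's actual exercise.
drivers = {
    "empathy":        {"impact": 5, "uncertainty": 5},
    "learning":       {"impact": 5, "uncertainty": 4},
    "companionship":  {"impact": 4, "uncertainty": 4},
    "legal marriage": {"impact": 5, "uncertainty": 5},
    "free will":      {"impact": 5, "uncertainty": 5},
    "trust":          {"impact": 4, "uncertainty": 4},
    "sensors":        {"impact": 3, "uncertainty": 1},
    "language":       {"impact": 3, "uncertainty": 2},
    "physical body":  {"impact": 3, "uncertainty": 2},
}

def top_right(drivers, threshold=4):
    """Return the drivers in the high-impact, high-uncertainty
    quadrant -- the ones most likely to tip the future one way
    or another."""
    return sorted(
        name for name, s in drivers.items()
        if s["impact"] >= threshold and s["uncertainty"] >= threshold
    )

print(top_right(drivers))
# → ['companionship', 'empathy', 'free will', 'learning',
#    'legal marriage', 'trust']
```

The bottom-left drivers (sensors, language, physical body) fall out of the quadrant, matching the point that they are part of the landscape in any future rather than the factors that differentiate futures.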

Now, I want to stop here for a second and reference the workshop that, Bill, you're going to talk about at the end here. When we look at the expertise of the Stanford professors in the engineering department, and the students working today on their PhDs in the labs, a lot of them are making huge advances in these technology areas-- and some of them in the use-case areas as well. And going deep into these elements gives you a lot more information about how they connect to the other elements. I would just encourage people, as they're going deep into these spaces, to think about either what are the technologies that underlie them, and/or what are the cultural factors that need to be in play for adoption.

So after we lay these things out, let's build some stories. Let's get to that kind of uncertainty. So you can lay these out and start thinking about the top piece of this, which gets us to legal marriage. Let's say the first things out there are robots around sex and intimacy. I might say that some of those exist today, right? Household service bots. Maybe the next thing is starting to build some relationships with these ai friends. Skin, motion, and pain replicated in a fully mobile humanoid-- because most of us, when we picture a human marrying a robot, picture a humanoid robot. You're not marrying a box, right? It's unlikely that you could have all of these things in place with a big metal box. But then what happens when you've got a fully mobile humanoid that maybe has free will? Well, maybe robots are going to start doing things independently because they have free will. They might want to be citizens. They might want to vote, right? They might want to have choice. And probably, that's the only point where i can imagine legal marriage being redefined to include being married to some kind of artificial intelligence.

And it's a reasonable story. It's a rational story. And it's an outcome that, when you understand the elements of it, is not out of the realm of possibility. The flip side of that is when you start thinking about how some of those elements might turn out differently. In terms of the elements that we've talked about, realistic conversation has got to be in place. High capacity batteries have got to be in place. But what if we get to a place where that uncanny valley still remains and we're not able to build humanoid robots that are good enough? So we start to back away from the notion of robots as human and start moving toward robots that are embedded in our environment, which is also a future that could make a lot of sense, right? You walk into your home and you don't have a humanoid service robot. You just walk into your home and say, do my laundry, home. It's just, make this happen, world-around-me. And so the robots are not humanoid at all. Maybe they're just embedded around us, and they start to become part of our world. And so then the trust issues evolve a little bit differently. And maybe they're so embedded in our world that we don't need people anymore in a lot of places. Maybe we see the first grown adult with no other human contact. There have been some sci-fi books and movies written with this future in mind.

And it's also plausible, given what we just talked about. All of those elements, if they don't turn out one way, might turn out a different way and combine in a wholly different way, where we are here for the robots as opposed to the robots being here for us. And so building out these stories gives us a sense of the range of possibility. And you can look across these and see a baseline future, where robots start as service elements for us, then we start to build relationships with them, and then they find some kind of independence. And that range of things around independence can be kind of scary, but you also can see how the future starts much earlier to move toward one side or the other.

So as we think about the range of possibilities, why do we care? What can you do with this information, right? Where does your company fit in? Are you working in a technology company that will enable any future going forward, which would be good to know? Are you working in a space that will depend on one future or another? When i talk with my students, a lot of them come at this and say, i don't like this particular future. I don't want the world to come out where we are here for the robots. So what can i do today to influence the future? What can i do today, knowing that this is a possibility? How can i think about influencing the future, or playing along in the future, or leveraging the future? Knowing that this is the range of outcomes and that these are the things that are likely to unfold can give you a real leg up, no matter how you want to play in this particular space.

So i'm going to wrap up there on the robotics and the forecasting, and see what the questions are. bill burnett:. I wanted to jump in on a couple of things here because it's such an interesting idea. There was a comment in the chat about, oh, hey, they already have companion robots in japan, and that's true. Nothing like what we're talking about, but it is one of the things i would put on the chart. They've got little dogs that run around and bark. They've got little companions. But it's also interesting because japan is going to be a super-aging society in the next 10 years. A vast number of japanese are over 50 or 60. And in other parts of asia, the home care for your older adults, for your parents, has been solved by importing, you know, inexpensive caregivers from different parts of the world. I mean, i used to have an office in hong kong. Many people in hong kong have household helpers that they've brought in from the philippines, from indonesia, from other places.

And that's how they take care of their elders. Japan, because of cultural issues, has decided not to do that. They don't allow that kind of immigration, and so instead they've turned to robotics as a solution. But it's a really interesting, sort of extreme case, right? Where the obvious solution is to go find some really compassionate caregiver to take care of grandma. And instead, the japanese, for cultural reasons, would prefer to offload that to a robot. And there was even a fun movie made about this, where there was a robot who was sort of helping an older guy age.

And so the cultural issue there is critical. And the other one that you brought up, which i think is really interesting to think about, is trust. I'm not even sure i trust my bank with my financial information because they get hacked all the time. I'm certainly not sure that i trust some of the big companies in silicon valley with my personal data, which they seem to sell all the time. And somebody's going to make and sell this robot. And i really want to know what that maker, that robots-are-us company that sold me my companion, is going to do with the data that's generated, because probably the most personal data is the data about the person i love.

So wow. I mean, again, it's an extreme question, but i think the extreme question brings out all these interesting combinations of what's culture, what's technology, what's a use case. And that's the kind of stuff that i think really drives an interesting conversation about what the next 5 years, the next 10 years, are going to look like. And at the rate of change that we're seeing right now, with things like chatgpt 4 and other ai implementations starting to flood into the marketplace, we're at what we think is a point of incredible disruption.

And that's why we put together this webinar-- to talk a little bit about forecasting, but also about the program we're going to run in march, for 2 and 1/2 days, live on the stanford campus. Aren't you dying to get out of your office, now that covid is over, and finally go to a live thing? We're going to have amazing researchers showing us what they're doing in robotics, in autonomy, in ai, and other things, so that you can build these kinds of forecasting tools for your own organization. And you walk away with a forecast for the next five years, for what you think is going to happen in your organization, and particularly this idea of the "cone of uncertainty." What happens if everything goes great? What happens if everything goes south? I think that's exactly the kind of way we should be thinking about the future.

So i think, annette, are you going to-- can you curate a few questions? speaker:. Yeah. Well, first and foremost, thank you, bill and cynthia, for such a fascinating discussion. I think a lot of folks were very engaged. So we will open it up to questions shortly. If you haven't submitted yours, please do so in the q&a box, right on your console. Now, let's go into some questions. The first one we have here: "what are the key drivers of change in our world today, and how might they evolve in the coming years?". Cynthia benjamin: oof. That's a big question. Really big question, and i would suggest breaking it apart a little bit, right? You can see how putting some constraints around the initial question allows you to go deep and then surface back up to questions that are relevant. A lot of these questions that came up are relevant to society in general, and to understanding how this world is going to evolve. But we didn't get there by starting with that huge question-- like, how's the world going to change? It's a valid question for sure, but it's not necessarily a useful way to get to answers, right? So i would suggest coming up with some provocative question here. And i'll bet, when we started this, folks were going, yeah, pick a question, and then go for it. But then when you do this, you realize how a provocative question like that can really help you go deep into an area, and then come up with some answers that are more broad. Bill burnett:. Yeah, and just to build on that, think about the value of asking a good question-- for instance, if i'm a policy maker and i'm thinking about public policy in the city of san francisco for the next 10 years, what are my drivers?

Well, there's the economics of the region. There's certainly climate change and how that's going to impact the region-- both where it's going to be safe to live, and how many seawalls i'm going to have to build, and blah. But if i have a focus, or i have a point of view or a lens to look through, then i can figure out what the drivers are. If you say, what are the mega drivers for the world for the next 10 years, i'm not sure you can get any traction on that question, because you end up with generalities like, well, economics, and social unrest, and-- cynthia benjamin: climate change. World peace. Bill burnett: --climate change. Cynthia benjamin: you know, yeah, those aren't useful. Bill burnett: you can't solve the problem. So i think this is where diving into a particular focus and then fleshing out a cone of uncertainty can raise the next level of questions. And then, doing it again and again, you'll end up with five or six of these cones and forecasts. And then perhaps you could synthesize something down to say, well, if i am the president of the united states, my number one priorities need to be x, y, and z.

But even that is going to come through the lens of what's good for the us rather than the world. So i think asking the right question is critical to get to something that's actually actionable. Cynthia benjamin:. Yeah, and let me give you another example. Some other questions that we have posed in this class-- we wanted to look at health care, for example. A pretty big topic. And the question that we posed was, will i, or will the students in the class, live to be 120 years old? It's a somewhat provocative question-- fairly general, but specific to health care. So that really allowed us to dig into health care in an interesting way. Not just longevity technologies, but what's going on in biotech and social services, and what else is going on in society that will accelerate or impede longevity? And it got us to a lot of really interesting questions about retirement and social policy, as well as the technology behind the field of longevity and other health care issues. So asking a question with some boundaries on it, in a provocative way, really allows you to go deep and then open back up. Speaker: i love that.

Thank you. I think the next one is a great question around the applicability of this methodology. And this participant would like to know, "what is the role of a design leader to apply forecasting into corporate strategy?". Cynthia benjamin: hmm. I love that question. I think the role of a designer can be both to provoke and to put some boundaries on things. I think good design often thrives with a little bit of constraint. And focusing on that central question-- encouraging people to put some constraints on their questions instead of, "what is the future of robotics?". Encouraging people to put something down that we can all discuss and rally around. And then bringing some of these design tools into play. Encouraging people to think about alternatives and alternative futures. Encouraging people to think about the human side of these questions, not just the technology side. I think a lot of design tools are part of this conversation and really can be. Speaker: great. Next question. Let's see. "what questions or frameworks do you use to assess where one's analysis might be wrong?". Cynthia benjamin: "where one's analysis might be wrong." Bill burnett: let me take a-- cynthia benjamin: yeah, go ahead. Bill burnett: let me take a whack at that, and then you can follow up.

The whole idea of this is that we're using brainstorming, mind-mapping, design principles. Somebody asked, can you use chatgpt to come up with drivers? Sure. Why not? I use chatgpt in my classes all the time as a brainstorming tool. Chat doesn't know what it's talking about, but it's good at randomly generating interesting ideas and interesting things to think about. But the whole idea of forecasting a cone of uncertainty is that there's no right or wrong. There's a probability that the future will be this. There's another probability that the future will be wildly different than that. And a different probability that it will be the negative of that. And so when you look at that and you think about your own organization-- let's say you're a pharmaceutical company, and you're thinking about drug discovery and drug delivery systems for the future. Where is that going to go? How can ai influence speeding that up? Or you're an energy company, and you're trying to think about transitioning to the green economy. Or a tech company, and you want to make some brand new app that has something to do with ai and the financial markets. So you take these tools, and you start to think about, what's going to change in the future? How can my company respond to those changes? What's the likelihood that the worst case is going to happen? And what's the likelihood that the best case is going to happen?

So we're not talking about coming-- again, as cynthia said in the beginning, we're not predicting the future. We're coming up with a range of possibilities and probabilities that the future can sit inside of, which is what i love about the cone of uncertainty. And the nice part is, say you start with the forecast and you're 5 years out, and you've got your cone and blah. A year later, you're in a completely different point in that cone of uncertainty, right? And you can start all over again. You go, so these are the things that actually happened. This is how my assumptions change. What's the new cone? So it becomes a dynamic tool for thinking about the future in a structured way, without getting into i'm right or i'm wrong, or i'm predicting or i'm not predicting.

Speaker: all right. Cynthia benjamin:. I think that's great. Let's see if we can get another question or two. Speaker:. Yeah, let's get into the next one. The next one that i have here is, "can ai be applied to a leadership framework in the future? Will ai become part of an executive team as well?" What are your thoughts on that? Cynthia benjamin:. It certainly could be. And frankly, i'd be surprised if it doesn't somehow become part of the executive suite. "how so" is a different question, right? So using a framework like this, i think, would really help us tease out what those possibilities are. And then you've got your own context for asking that question. Why do you want to know? Because you're an executive and you want to figure that out. Or because you're a shareholder, and you want to invest in more of that or less of that. Or because you are an employee, and you don't want to be led by an ai-- or you think that's great. So the context really matters-- why you're asking that question, in terms of what you're going to do with that information. But i've got to tell you, i love that some of the questions here are being asked about the framework, and so many of them are being asked about the content. People have some really interesting thoughts about robots, and robotics, and the future of robotics, and i really love that this is stimulating that kind of thinking. And that is really the value in a framework like this-- to get you really thinking about hard questions. Because this is not-- bill burnett: --a question about ethics. cynthia benjamin: you can talk about it superficially. Oh, i heard this in the news. I saw that in the news. But to really understand it, it helps to have a little bit of structure like this.
And look at all these awesome questions that have come up, and the comments people are making about ethics, and the legal nature of robotics and ai, and how that's all involved, and all these other interesting things. So i'm really loving the variety of questions. I wish we could share these all out. Speaker:. Yeah. Bill burnett:. One thing about-- speaker: go ahead, bill. Bill burnett:. Every question has an embedded assumption in it. So, "will ai be part of the executive suite?" Well, in my cone of uncertainty, ai replaces executives, because if most of what executives do is try to make optimal decisions, ai will be better at it than they are. Now, if you're talking about leadership, that's a different thing. But this is going to force executives to separate what they do about decision-making, which isn't leadership-- it's just management-- from leadership.

And so to me, if you look at the changes that are coming, why would i pay an investment banker 2% of my fortune to invest for me when i can hire an ai that's going to outperform that human by 10%? Why would i pay a ceo millions of dollars to simply make decisions about the company when the ai will make better decisions? So i think in one version of this, there is no c-suite. Companies are run by ais-- more efficiently, and with less humanity, right? Because they'll just make rational decisions. But boy, be careful of the question.

The question has an embedded assumption that the folks who are asking it will still exist when the answer occurs. speaker: wow. Well, thank you. I mean, it's a very thought-provoking conversation. I loved it, and i hope all of you on the line loved it as well. We're at time. Thank you, bill and cynthia, for this interesting, insightful conversation. And to everyone in the audience live with us today, thank you for all your questions and super engaging participation. I want to remind you that today's session was recorded, and a link to the recording will be sent and made available to you all within a week.
