Simple Steps To Get AI On Your Side And Future-Proof Your Business, with Evan Ryan

May 28, 2024
Dan Sullivan

Everyone knows that AI is going to be an increasing factor in business success and business growth, and it’s essential that entrepreneurs are aware of the technology’s limitations as well as its potential. In this episode, business coaches Dan Sullivan and Shannon Waller speak with special guest, AI expert Evan Ryan, about what’s holding back the productive application of AI and what you can do instead to best take advantage of AI in your organization.

Here’s some of what you’ll learn in this episode:

  • How to use AI, an intangible, to achieve measurable goals.
  • Where Evan has seen the most success in companies’ use of AI.
  • The first question every executive asks Evan.
  • Ways of thinking that make AI more accessible.
  • Why many people are hesitating to adopt AI in their businesses.
  • Examples of where hesitation to use AI has prevented business growth.
  • Entrepreneurial ideas that support making the change to AI.
  • How to convince people to take a big leap using AI.
  • The way AI disrupts established thinking about budgets.
  • Why the successful use of AI requires a growth mindset.

Show Notes:

No matter how fast the technology itself moves, it's as slow as the humans that are adopting it.

If a solution works for one person, you know 50% of what it would take to work for 10 people.

Humans don't naturally think in terms of exponentials because nothing in our world really operates exponentially.

If you experience sudden growth, and it’s behind you, you can do your own exponentials going forward.

If you don't know where the leadership is, you don't know where the rest of the organization is.

It's hard for people to grasp intangibles unless they're conceptually prone, so you need tangible proof of selling an intangible.

If software is magic, AI is magic times a million.

Something that’s inherently unclear and inherently vague is inherently a little scary.

Technology doesn't become normal until it becomes boring.

We have to normalize our way into the future. And that means that you have to start small and get used to it.

San Francisco, Silicon Valley, and the media talk about AI like it's the end times.

A lot of what goes on in Silicon Valley is getting people to bet on the bet. They're not actually betting on the technology.

One new capability always introduces new capabilities. That's a feature of technology.

The problems we want to solve are the same. We just keep getting better technology with which to solve them.

To grasp future jumps, people need to grasp past jumps.

Technology is automated teamwork.

AI won't necessarily replace people, but people who know AI will replace people who don't.

Resources:

AI As Your Teammate by Evan Ryan

TeammateAI.com

The Gap And The Gain by Dan Sullivan and Dr. Benjamin Hardy

Your Life As A Strategy Circle by Dan Sullivan 

ChatGPT

Perplexity.ai

Unique Ability®

Who Not How by Dan Sullivan and Dr. Benjamin Hardy

The Self-Managing Company by Dan Sullivan

Article about The Experience Transformer®: “Transforming Experiences Into Multipliers”

Article: “What Free Days Are, And How To Know When You Need Them”

Deep D.O.S. Innovation by Dan Sullivan

Shannon Waller: Hi, Shannon Waller here and welcome to Inside Strategic Coach with Dan Sullivan and very special guest, Evan Ryan. Evan, thrilled to have you here today as an AI expert. You've got a fabulous company, Teammate AI, and you've written a great book, AI As Your Teammate. So we're kind of really interested to hear your thinking, what you found, and as we were just talking about, some human limitations to AI. So thank you very much for joining us.
 
Evan Ryan: Oh, well, thanks for having me. I'm looking forward to the time.
 
Shannon Waller: Awesome. All right. So we were just starting to embark on a conversation like, let's hit record on this. Evan, you found a couple of, I'm going to call them human limitations to technology. Can you kind of set that up for us? Because then it'd be really fun to unpack it and see where we go.
 
Evan Ryan: Yeah, so I was in a couple of situations recently. The first was where we were working with a client, and the client said, you know, we've got this task that we have to do, and we hate it, it's terrible, but it has to get done. And then we started shadowing the person who does the task. And the person who does the task said, I hate this, this is terrible, but it has to get done, so I'm doing it. And we came back to the client and said, hey, you know, we think for about $10,000 we can automate the entirety of the task. Like 100% of it, completely gone. No human ever has to touch this again. And their original response was, no, this is far more than I expected it to be, meaning far more expensive than I expected it to be. Then the second situation we ran into was, we were talking to a very, very large company. And we said, hey, we think that the total impact of the work here is going to be around two and a half million hours saved per year, but every employee needs one $30-per-month subscription. And that was a very bitter pill to swallow. And so who knows whether we'll move forward or not. But I was just really curious about this. I've been thinking about it for the last week or so, and possibly the idea here is that it doesn't matter how fast the technology itself moves, because it's only as slow as the humans that are adopting it. And so we're just really interested in unpacking it.
 
Shannon Waller: Dan, what are your thoughts on this?
 
Dan Sullivan: First of all, there's a word which I rely on a lot, and it's "plausible": something that seems kind of right. So in both situations, I think you, as the one bringing the solution, have to ask: what are the obstacles telling me about my approach to introducing AI to people? It's kind of like when I first heard of the idea of Singularity University. They have a requirement that if you go there with a product idea, you should think in terms of a billion customers. And I said, a billion's nice, but does it even work for one person? Because you could spend a lot of time on something that's supposed to serve a billion, and you got it wrong. Okay, so that's probably an expense. So I said, you know, if it works for one person, you know 50% of what it would take to work for 10 people. If it works for 10 people, you know 50% of what it would take to work for 100. So, just from that standpoint, I would always say, well, let's just take this situation. But the first thing I would do is figure out what that person is going to do if they're not doing that job. I would get clear about what that person's going to do when they're freed up, and I would include the person themselves in the plan. And I understand it was 45 hours a week that would be freed up, right?
 
Evan Ryan: Yeah, that's correct.
 
Dan Sullivan: Yeah. I would get clear about where the 45 hours are going next before I came back, because the way you're doing it now requires a convincing argument, and I would go for the compelling offer. The compelling offer is, this person very enthusiastically would be able to do this with the 45 hours.
 
Evan Ryan: Right. That's so interesting for a couple of reasons. The first is, whenever I'm doing a talk or teaching a workshop, the first question any executive ever brings me is, okay, I know AI is really interesting, but what do I do with it? And the way that we take people through the process is, we say, first and foremost, where do you want to be? I can't see out any more than three years. I can't forecast any more than three years. I'd like to say five; I can't. So I say, where do you want to be in three years? And then, who's going to help you get there? Who are the people on your team who are going to help you get there? What are the things they have to be doing in two years to be able to accomplish your three-year goal? Then immediately it rings true. Okay, great. If they're going to be doing those things to accomplish the three-year goal, they can't be doing the current things. And so it's immediately clear what their future has to be. And of course, AI is going to be the solution. In this case, we weren't following that methodology. And so it makes total sense.
 
Dan Sullivan: Yeah, you were acting like a hot shot.
 
Evan Ryan: A little bit, a little bit. Well, we got excited about it because, for us, this is such a huge increase in efficiency that they're going to get. We can't wait to free these people up. But the human element of it is really important. Another situation, one I think is very similar but on the opposite end of the timescale, was a client where we only expected to make about 15 hours a week worth of difference. So the client signs up, the people all know how they're going to be ushered into their new lifestyles at work, or their new work styles. Everything's already set. And then we showed up, got under the hood, and realized, oh no, this is going to be like a 1,000x improvement. So what we thought was going to be 15 hours a week was actually going to require a complete reorganization of how they structure their work, because of how efficient the computer was. In this particular case, it actually created the exact same hesitation, but on the opposite side, which was, you know, you can deal with the exponential aspect of it up front, or you can deal with the exponential aspect of it after it's done. But in the middle, it's just technology being built.
 
Dan Sullivan: Yeah, the other thing is that I think when you have that much change, you know, and it's a nice model, you're giving just one person 45 hours, and then a large organization is freeing up millions of hours. Humans don't naturally think in terms of exponentials, because nothing in our world really operates exponentially. I mean, compound interest is probably the closest, and that takes years. But if they experience sudden growth, and it's behind them, and it's in the bank, they can do their own exponentials going forward. You have to give them a small version of the exponential first.
 
Take 10 people who would be the most eager to jump to a higher level, and say, let's just work with them for a month and see what we can do with 10 people. A big organization would go for that. You say, this is how much we would charge to get 10 people charged up. All you need to do is get your foot in the door, Evan. But it may be a bad organization. There may be all sorts of internal problems they're having right now that they wouldn't reveal to you up front, and that amount of change may take what's already a tense internal situation and tip them over the edge, and they're worried about that.
 
Evan Ryan: Well, so that was actually something that I was thinking about. I want to get your opinion on it. Do you think that one of the big challenges with adopting this technology is that for so long, so many budgets have been justified based off of number of people? And, you know, "I need this many people to do this task to be able to achieve this outcome." And when a new sort of technology like this comes around, and it sort of causes everybody to rethink everything, do you think that that raises tensions at the senior-most levels first, when we're talking about budgets, when we're talking about head counts, we're talking about PR and other things like that, more than it does sort of at the implementation level of these discussions?
 
Dan Sullivan: Yeah, if you don't know where the leadership is, you don't know where the rest of the organization is. The other problem with AI is that it's different from a computer: a computer is a piece of equipment, but AI is invisible. And it's hard for people to tangibly grasp an intangible.
 
Evan Ryan: And one of the things that I even say, so it's funny that I keep saying this because it's like I'm like hearing you say a lot of the stuff that I say, but because you're saying it, it's like I have fresh ears. I say all the time to people that don't understand how software works, software is magic. And then AI is like magic times a million. So it's inherently unclear. It's inherently vague. And so it's inherently a little bit scary.
 
Dan Sullivan: Yeah, Shannon and I are members of a quarterly discussion group, and everybody sends in articles, which we put into an 8.5-by-11 format with a spiral binding. It's a nice book. It's actually a really good book. And there was an article that Paul Bourbonniere sent in; he's actually the longest-running of my clients. In July, he'll have been in the Program for 37 years, which is eight years longer than you've been on the planet, Evan. And it was about how Canada as a country is just clueless about IP, about intellectual property. They're producing a lot, but they don't actually own the ideas behind the machines they're producing. And it referenced the graph that Keegan Caldwell shows us of the S&P 500: in 1980, 82% of valuation was on tangibles and 18% on intangibles, and 40 years later, it's just the opposite. And it's hard for people to grasp intangibles unless they're conceptually prone. So my sense is, you're trying to sell an invisible. What you need is tangible proof of selling an intangible. You've got enough people now who have really received the benefit of what you do that you can show this person did this and this, this person did this, this company did- I think you've got to start showing cases. We had a client who was in the 10x Connector this morning. He had dealt with you last week, and they freed up three people: 6,000 hours, and you reduced it to 100 hours. Andy.
 
Evan Ryan: Oh, yeah. Awesome. I'm glad he said that out loud.
 
Dan Sullivan: I'm giving you that as your test case, and then he's going to see you with a group of 12?
 
Evan Ryan: Yeah.
 
Dan Sullivan: And then he's going to see you in a couple of weeks with a group of 120. Which sounds like an exponential to me.
 
Evan Ryan: It is quite exponential, and actually I was preparing for the event where I'll be speaking in front of 120, and a woman who works for one of the people that we're collaborating with walked in and she said, "You know, I'm somebody who filled out a survey and said 'AI is not for me,'" with like a whole big checkmark motion as she said it. And she said, "I looked at what you guys put together, and I thought, I don't know how I can get more of that, but I need it." And so, you know, I think that there's certainly a case there. Yeah.
 
Dan Sullivan: Yeah, I think what you have to do, you have to go on The Gap And The Gain concept. The reason why "The Gain" works is that it's already happened. I mean, it's going to become, you know, more widely known, more widely talked about, and people are going to be giving examples. But I think most people are very practical, and they have to see practical evidence. They can't just imagine. I mean, I can sort of see the future pretty well and work backwards from the future and see what we have to do now if we're going to get to that future. Most people can't do that. I mean, it's Strategy Circle time. You have the vision and then you say, okay, tell me what the obstacles are. I think it's Strategy Circle time.
 
Evan Ryan: That is such an interesting thing. One of the things I've been thinking about a lot lately is how the computer and the smartphone are fundamentally different from the internet. The internet is a technology that allows you to distribute ideas, but it's invisible; you can't see the internet. Whereas the computer and the smartphone are physical things you can see. And so I've been thinking about the idea that maybe this kind of technology won't take off the way the iPhone did. ChatGPT has been so successful so fast, but it doesn't have the kind of staying power that the iPhone has had. And maybe it's because of that invisibility, where the use cases or the capabilities of the technology creep in over time in ways that seem really familiar, versus ushering in a new epoch of society like the Gutenberg press or the personal computer did.
 
Dan Sullivan: Yeah. Shannon, what's your take on this?
 
Shannon Waller: It's interesting. I was just thinking as you guys were both talking that, yes, ChatGPT, big splash, all the things. But it hasn't reached most people's day-to-day life. Even, you know, Uber drivers: I was going to the airport the other day, and the driver was saying AI is so scary. So I'm like, oh no, ChatGPT is your thinking partner and Perplexity.ai is your research partner. I was sharing all the stuff I've learned from you, Evan.
 
Dan Sullivan: I wouldn't start with limousine drivers. I really wouldn't start with them.
 
Shannon Waller: No Uber drivers?
 
Dan Sullivan: Yeah. That's not who I would start with. They're just trying to learn how to game the system that they're in.
 
Shannon Waller: Totally. But I do think it is one of those things where it's going to be deceptive, like underneath, and then all of a sudden the knee of the curve will happen, and then, you know, it will be even more exponential. Even with all the hoopla and all the press, I think we're still in that phase where people aren't really sure, they're uncertain, and most people don't do it. And it's fascinating to me, because one of the questions I'd love us to talk about is, what kind of mindset do you need to take full advantage? Like for me, with the first scenario you were describing, I'm like, okay, I'm looking at that person's salary, and I'm comparing it to $10,000. $10,000 is going to be a fraction of it, because no one's working 45 hours a week for 10 grand a year. So it's like, what kind of mindset's required? To your point, Dan, a slightly more exponential mindset, if you could develop such a thing. But what will allow people to take full advantage and not make what look like kind of strange economic decisions, like the ones you've been running into?
 
Evan Ryan: So I'm 29, and this is going to get very philosophical, but I promise the question will get answered. I'm friends with a lot of people who are 29. And one of the things I'm seeing happen as people turn my age and then eventually turn 30, the whole big birthday and all this stuff, is that a lot of people are starting to ask themselves, is this all there is? Like, I thought things were going to be different, or all the problems of my life were going to be solved, or whatever it might be, but I'm starting to get this sense of, is this all there is? And then those people are sort of being divided into two camps. There's camp A that sort of accepts defeat, and then there's camp B that embraces whatever it is that they've truly been wanting or desiring, or the life that they've been wanting to live, or whatever it is, fill in the blank. What's interesting is I've been watching this in my personal life, and last week I was giving a talk, and at the very end of a very long, three- or four-hour seminar, somebody asked me, where does this all end?
 
Shannon Waller: Oh dear.
 
Dan Sullivan: Well, no, because, you know, in practical terms and practical life, things have a beginning, a middle, and an end. And here, again, you're dealing with exponentials. I have a whole series of files of quotes of mine. And I said, you know, I find that technology doesn't become normal until it becomes boring.
 
Shannon Waller: Oh, I like that.
 
Evan Ryan: That is really good.
 
Shannon Waller: Technology doesn't become normal until it becomes boring.
 
Evan Ryan: That is great.
 
Dan Sullivan: It's everyday: "Oh, you're talking about AI stuff," not, "Oh, this is brand new and it's scary." We're going to hear some more of that AI stuff; I've heard a lot of AI stuff. And what it means is, it's been normalized. I had a conversation with Peter Diamandis, and I said, you know, we're not going to get to the future until it's normal. Whatever you think the future is, you won't be there until it's normal. We have to normalize our way into the future. And that means that you have to start small and get used to it. I'm having a lot of fun with Perplexity at your recommendation, Evan, and it's a very cordial companion. It goes out of its way to help me; it's one of the more helpful things. So I asked it a question: of all the Americans born in 1944, what percentage of them are still alive today? It took a while, you know, like 20 seconds. And it came back and said, actually, there aren't very good statistics on this. The best it could come up with was a study of people born in 1947, and in 2021, 63% of them were still alive. So maybe you can sort of figure it out: death rates are pretty constant, that was three years ago, and they were three years younger, so maybe you can work backward to around 58%. But it went out of its way to try to help me think it through.
 
Evan Ryan: That is so interesting.
 
Dan Sullivan: And I'm finding it very congenial. You said on your podcast two weeks ago that you feel like it's on the same side of the table with you. For most people, AI is not on the same side of the table with them.
 
Evan Ryan: Yeah, it's really their competition, or at least they perceive it that way.
 
Dan Sullivan: Or it's the boogeyman under the bed, and you can't go to the bathroom at night. It's going to get you. It may not get you on the way to the bathroom, but when you're coming back...
 
Evan Ryan: It lets you go to the bathroom first.
 
Dan Sullivan: So one approach you might take: how do we normalize AI?
 
Evan Ryan: You know, one simple way, and it's kind of a strategy that I took a little bit from you, is to ask: what's the last piece of technology that everybody used? And what was before that?
 
Dan Sullivan: The phone.
 
Evan Ryan: Yeah. So the phone is a great example. What did you do before your phone? Well, you'd carry around all this stuff in your pockets. You'd have to pick up the phone and actually dial the numbers. You wouldn't be able to see people. You didn't have your email. You had to pay per kilobyte to access the internet, or, even better, if you didn't have an internet plan and you hit the internet button, you'd hit end, end, end, end, end to try to make sure you didn't get charged. There are all these things, but then there are also simple things. Like before Google Drive, you had to get up from your desk, walk over to the filing cabinet, find the right file, get the piece of paper out, make a copy of it, put the file back, and then walk back to your desk. And I think that strategy of showing people, hey, these kinds of technological revolutions have happened before and they'll happen again, is one that's successful, because it makes it immediately accessible to people. The other thing is, I think that San Francisco, Silicon Valley, and the media talk about AI like it's the end times. And when you get a question like, how does this end? It's like they mean the end, not an end.
 
Dan Sullivan: They're positioning it like it's a religion and it's the Messiah coming. I believe that a lot of what goes on in Silicon Valley is either a religion or it's like Las Vegas. They're getting people to bet on the bet. They're not actually betting on the technology. They want to get you to bet on the bet. And my sense is, it's just normal. I mean, the comedian Louis CK says, you know, remember the old phone, and you had an operator, and you put through a phone call and it was a hundred dollars a minute to California, and you had to talk really fast because the seconds were going by. And he says, now I see people with their iPhone, and they're going like this, you know, and it's been five seconds, and he says, come on, come on, come on. I don't know who it was, but somebody added up all the separate technologies that are in an iPhone and what each of those technologies was worth. I mean, more power than the entire Apollo project used to put a man on the moon is available to you at any time, and you're saying, ah, yeah, so what. And I think that "so what" is the point when the technology really starts getting used.
 
Evan Ryan: Right, when it's finally normal.
 
Dan Sullivan: Yeah, when it's normal. So I think an approach for you is to say, I want to tell you what AI is going to be like when you consider it normal. You're doing this, you're doing this, and just take them through a case study of what you've done. And you said, well, first of all, you identified what you wanted to do if you were freed up by 20 hours a week from this. Okay. So then you create the thing. So now you've got a real reason to make the change. Okay. I would just create a five-step normalization process for AI.
 
Evan Ryan: That is an excellent, excellent idea. I'm really going to take that back and think about that.
 
Dan Sullivan: You're 29. It takes me a little longer to adjust to the things that you do, but I do have a record of how I've adjusted to 10 technologies in my life. And I know that after a while, it was just normal. I have the success record that I do adjust, and I'm going to adjust to this one.
 
Evan Ryan: Well, I think one of the things that makes you in Strategic Coach very unique, and I'm interested to hear your opinion on this, is that your vision of the future is not one where the technology is the future. It's one where the technology enables that future to happen faster. You're excellent users of technology, but nobody in Strategic Coach has talked about taking your IP, for example, and loading it into a chatbot so people can start to tinker with it. Nobody in Strategic Coach is talking about how you can use technology to create new offshoot businesses, for example. It's always in search of a future that's not technology related but transformation related. What do you think about that? Do you think that's what allows you to think about the tech differently? Yeah.
 
Dan Sullivan: Yeah, I have a quarterly book coming, probably sometime in the next 12 months, called Timeless Technology. And I said there's a way of thinking about technology, that technology has always done this for 100,000 years. First of all, one new capability always introduces new capabilities. That's a feature of technology. It always becomes an independent factor in itself once it's created and proves itself useful. It always takes on an independent status; technology becomes autonomous. There's a natural tendency to take one technology and see if you can connect it to another technology to create a third, better technology. It always does this, right from the beginning. I would start with what you want your check writers to normalize in their life, and I think it's a Strategy Circle. Here's a little test you can do. You can say, three years from today, I think it's possible for you to have achieved this, and this, and this, and this, and this, using not only the artificial intelligence applications that are available right now, but the applications that will become available over the next three years. I think it's possible for you to do this. Now, you tell me the 10 obstacles that would stop you from doing that. And then they bring up the obstacles, and you say, well, that's an interesting obstacle, let's look at it. And it becomes a Strategy Circle. You lay out the future for them, because all the time they're featuring the obstacles, they're entertaining the future. And you've got them caught in the conversation now.
 
Shannon Waller: I think we really did, as you said earlier, start off philosophical and end up really practical. I think that's a very practical application. Like, oh, instead of using, you know, the dial-up telephone, you'll be using this, or instead of doing this kind of search, you'll be doing that. There was a great article we read for the discussion group, Dan, that talked about the utility. Like, the problems we want to solve are the same; we just keep getting better technology with which to solve them. And so we're very clear on who we are, who we're for, what we're doing, and we just want better and better tech that will help us accomplish it.
 
Dan Sullivan: Yeah. About 10 years ago, I had a client in the Program who had a steel fabrication factory in the Pittsburgh area, with 600 workers. And he says, my grandfather started this business 50 years ago. And I said, so for the amount of work you're getting done now, how many workers did he require? And he said, "He had 3,000. I have 600. I'm outproducing him." And if I talked to him today, he could probably do the same amount with 60 workers and robot-enabled machinery. Okay, and I think people need these past jumps to grasp future jumps. I think it's necessary. I worked at an ad agency in the 1970s, and today our artists in Toronto, with the computers and the software for the writing, the cartooning, everything else, can produce more in a week than that creative department of 15 could produce in a month. We can produce more just with our computers than they could. The cartoonist I have on my quarterly books, I get the equivalent of one new of him every year just because of the speed of the computer and the new software. And I think you have to give people a lot of past exponential jumps to actually make it okay for them to do a future exponential jump.
 
Evan Ryan: Right. And I think one of the things, too, as I'm kind of processing this in real time, one of the stories that I'm kind of drawing on right now, when we're thinking about future exponential jumps, I think one of the things that a lot of people think about is, well, what is my future? Like, what is my ambition when these big problems that I've been trying to solve, like beating my head against the wall for six months or 12 months or two years or however long, once these are solved? And one of the stories that I'm thinking of is that when Steve Jobs was at Pixar, he wrote an email to his employees, and he said that when they finished Toy Story, which was the first movie from Pixar, and when they were building it, their strategy was, we're gonna build Toy Story, we're gonna build all the building blocks to create an animated movie, and then we're just going to combine them in new ways, and we can just churn and burn animated movies—my language—churn and burn animated movies, and the only limitation is really gonna be our storytelling and how fast we can get these new ones out. He said the problem was that as they accomplished Toy Story, then their ambitions grew on how they wanted to push the envelope for the next animated movie. And now here we are 30 years later, and they're animating individual strands of hair on Rapunzel. And so I think that there is a lot of difficulty, and I guess this goes to AI as your competition with most employees as well, which is, well, when my current present is no longer my present, what is my future?
 
Dan Sullivan: Who am I?
 
Evan Ryan: Yeah, when the things I've been struggling with are truly overcome, or when my strategy has truly come to fruition, you know, what is the future then?
 
Shannon Waller: Well, and that goes back to the beginning when, you know, talking about that guy whose job gets completely liberated, you know, what is his future?
 
Dan Sullivan: That was a job and an employee, right?
 
Shannon Waller: True. Was it a guy? I don't even know.
 
Evan Ryan: It was a job and an employee.
 
Shannon Waller: Okay, there you go. A job and an employee. But what does that person do when they're freed up? And what is the higher level work? Admittedly, it wasn't a fun job. You know, it wasn't satisfying. So I think remembering that human element, that might be the limitation, as we're talking about this, to how fast people embrace it.
 
Evan Ryan: Well, what's interesting in my mind, too, is going back to the first year of Coach and talking about the Strategy Circle. But I also think that Unique Ability is going to play such a huge role in this, because if you're unclear about that, then I think you're inherently unclear about the future.
 
Dan Sullivan: Well, I think if you're not operating in your Unique Ability, you think you're the stuff of your job, and your identity is tied up in it. I mean, one of the things I put a lot of emphasis on: work you don't like, find somebody who loves doing it. And a lot of the Strategic Coach clients develop this in their business and do a good job with it. They constantly grow with it. But then they go home, and the person they're married to, in this case it'd be their wife, they're saying, you know, you don't have to do the housework, you don't have to do the cooking, we can get people in. We have enough money now; we can get people in to do it. But it's her identity. You're asking her to give up her identity. That's her crucial importance every day. You're saying, we can have nannies, we can have everything, but who she is is tied up with those activities. So there's a lot of complexity. Technological change seems like a simple thing. If you're only talking about the technology, it is. But humans are multifaceted. They've got a lot of different dimensions, and they all have to come forward in an agreeable manner. So I think it's a great topic that you've introduced today.
 
Evan Ryan: I think it's going to be one that spans the next, you know, decade or two is really as these tools continue to develop.
 
Dan Sullivan: You know, Moore's Law and the microchip started getting talked about right when I created Strategic Coach. Well, I created coaching; I didn't have the name for it at that point. So there was all this talk at various technology conferences about how productivity was going to go through the roof. So I asked Perplexity an interesting question. I said, Perplexity, what is the rate of increase of productivity from 1974 to 2024, a 50-year period? Now, productivity here means labor productivity. A lot of people think of productivity as machines turning out a lot of stuff. It's not; it's measured in labor, how much human time is involved. And I compared that with the 50 years before 1974, and the rates of productivity growth were much higher before 1974 than they've been in the last 50 years. What's misleading is that huge amounts of money have been made by a small number of people, but not much money has been earned by a large number of people. And that drags down productivity rates. Productivity doesn't mean anything unless everybody's gaining. Perplexity and I have wonderful conversations.
 
Evan Ryan: Yeah, I'm sitting and processing a little bit here.
 
Dan Sullivan: No, I'm just treating Perplexity as a person who's really gifted.
 
Shannon Waller: It's interesting, Dan, that there have been no increases in productivity in the last 50 years.
 
Dan Sullivan: There's been increases, but not as much as the 50 years before that. And part of the reason was industrialization has a different kind of productivity than microchip productivity.
 
Evan Ryan: You know, and something that I'm curious about is if industrialization really led to up-skilling—because it was clear you were working with your hands and you made one thing and now you're working with your hands and you're making something that's much more complex—you can see where the skills got better. But what I'm curious about is if the microchip really, in large part, led to a re-skilling where you are just kind of robbing Peter to pay Paul a little bit, which is what kept the productivity the same.
 
Dan Sullivan: Yeah, well, part of it is that it's an extreme form of creative destruction. Joseph Schumpeter, the Austrian economist, coined the term: capitalism is a process of creative destruction, where new things are created that destroy old things. Well, creating new things is productive, but destroying old things is unproductive. And when they built a steel factory in the 1920s, it was amortized over 50 years. Things had to stay exactly the same over 50 years; you couldn't be fooling around with the technology. Now, new technologies are replacing other technologies in months, in weeks, in days. So things are in a constant state of disruption that's not productive. If I look at our model at Strategic Coach, it's not very different in 2024 than it was in 1989. Basically the same target audience, basically the same workshop process. We've gotten better at it, and we've brought in technology to enhance it, but basically the model hasn't changed in 35 years. That really produces productivity, if you can stay with something for a long time.
 
Evan Ryan: Right, and it goes back to what we were saying regarding compounding and how that can continue to compound over time, but if you're in a create-destroy loop, there's very little compounding that can take place.
 
Dan Sullivan: Yeah. It's like, you really want to make money, invest in residential real estate in Toronto. That really, really makes money. I'll tell you, over the long term, it really makes money.
 
Evan Ryan: Well, Shannon, I'm curious what your opinion is, especially as somebody who's a leader of teams, in terms of the general productivity of it and how it's being successfully integrated, but also unsuccessfully integrated, which I think is the other primary fear: how do I make sure I do this and don't do it unsuccessfully?
 
Shannon Waller: And that's the fear of every leader, that they'll bring it in and spend money investing in people's training and time and licenses, and then it won't get used. That's the number one fear. It's kind of why I asked my question earlier: what mindset is required for this? It's obviously a growth mindset. And to your point about Unique Ability, when people know what their Unique Ability is, what they love to do and are best at, when they're a hero to other people and producing exponential results themselves, then anything that frees them up to do more of that is embraced. You're like, oh my God, I cannot wait to get freed up from this level of work so I can do this level of work. And to my mind, by definition, that's a creative endeavor. That's something that AI is not capable of doing. So I think you kind of need that Unique Ability mindset. You need that growth mindset. And certainly as a business owner, you're going to want to find people who have that mindset, or it's going to be really frustrating to work with them, Evan.
 
And I think the jury's out in terms of how people are receiving it. There's a lot more fear than there is enthusiasm. I know, certainly, the clientele that we work with, Dan, they're like, yes; so many people in our Program are, in fact, leaders in AI as well. You're ahead of the pack there, Evan. Well, you and Mike. It's just really fun to see how people are doing incredible things, but that's not the norm. I think a lot of people are really afraid of it. The one comment I keep coming back to, and share a lot when I'm coaching, Dan, is that technology is automated teamwork. So there's teamwork and there's automated teamwork. And when you put it like that, it just lowers the fear factor. And they're like, oh yeah, it is. Like the number of devices I have on this one is kind of incredible: I've got a phone, I've got Zoom, I've got all the things. But when you think of it that way, you're like, oh, okay, I don't really need to be afraid of it because I do know teamwork. So I think there are some ways of getting people's mindsets a little clearer, but we still have a- It's really new. And like most new things, a lot of people shy away from it.
 
Dan Sullivan: On the other hand, I use my cell phone. I keep it charged up because I get to read my Oura ring in the morning. Otherwise, it would go weeks without being charged up. I do nothing with my cell phone.
 
Shannon Waller: No, you don't. You really don't.
 
Evan Ryan: Yeah. Shannon, what do you think about teams where, if you're a team leader, you're now discovering that some members of your team don't have the growth mindset that you originally thought they had? What are your thoughts or messages there?
 
Shannon Waller: Well, I'm of the mindset, and this was expressed several times at our Free Zone Summit last year, that AI won't necessarily replace people, but people who know AI will replace people who don't. And that is, to my mind, 100% true. And it doesn't have to be AI; you can put in any technology there. So if someone's not willing to grow, adapt, and learn a more efficient, more productive way of doing things, I'm sorry, they're not an A player on your team. And if you want to grow and if you want to accelerate, I think you need to have some discernment about that. That may not sound terribly compassionate, but if you want to grow, if you want to have a Self-Managing Company, you better have people who are self-managing. If you want a growth-oriented company, you need people in it who are growth-minded. Not everyone has to jump on board at the same moment. I'm an early adopter; other people aren't. But they can't actually be resistant. I don't see a long-term sustainability plan for that.
 
Dan Sullivan: Evan, two weeks ago on your podcast, you said that there was a separation between the learners and the non-learners. I think that's a very, very useful distinction, but I think there's also a time frame: there are people who are the lead learners. In other words, we played around with the concept of tinkering. Who are the tinkerers? They're playing with new stuff all the time. And my attitude is, I always play with the players. In other words, whoever the existing players already are, I'm going to work with them, because they're going to become the leaders of the other people. And in the final analysis, all you want is a job well done. Regardless of what job people are doing, you would like to have the job well done. And I would say that we have invaluable people in our company who, from an outside observer's perspective, I wouldn't say are learners of new things, but they're always improving what they're doing and doing a great job. You couldn't interest them in doing something new, but they're always discovering new ways to improve what they're doing.
 
Shannon Waller: And that, to me, Dan, is someone who's growth-oriented, who's growth-minded, right? They're embracing efficiencies and effectiveness, and they're doing Experience Transformers and learning how to do things better. So, yes. And as my husband, Bruce, said, even slow learners are learners.
 
Dan Sullivan: Well, sometimes slow learners are better learners than fast learners.
 
Shannon Waller: True story, true story. Yeah, that's awesome.
 
Dan Sullivan: But you're a pioneer, so you have a real keen interest in not talking to people who are going to waste your time.
 
Shannon Waller: Yeah. Are you saying that to me or to Evan?
 
Dan Sullivan: I'm saying to Evan that Evan, in terms of what he's doing with AI, is a lead pioneer. You're a teacher of other people on how to turn AI into a great teammate of your company.
 
Evan Ryan: Well, I think what's interesting is one of the dynamics that we see, and this is all stuff that we're watching unfold in real time. You talked about creation and destruction happening at the same time; we're living in that a little bit right now as the technology develops. Every day, there's a new announcement of something cool that's happening. But one of the things that is really interesting as we're going about this, and maybe it's worth exploring, maybe it's not, is that sometimes the lower levels of the company, if you look at an org chart, are all in, gung-ho, ready to move on AI, and it's the senior levels that were originally very interested in the shiny object and then all of a sudden sort of back away slowly, or sometimes back away quickly. And I'm not really sure yet what makes a senior-level person do that versus not; I haven't observed enough cases of it. But the other interesting discussion there is, if you're an employee at one of those businesses, how are you identifying something like that and then starting to look at other options, or what are the other options that you decide to look at? Because clearly the leadership is a little bit slow on the uptake.
 
Shannon Waller: I like your point that sometimes the lower levels are like, yes, because they're the ones dealing with the inefficiencies, the slowdowns. They're looking to get freed up to do more interesting work. So they're like, yeah, they actually have very few barriers to entry with this. I can't put myself in the minds of those senior leaders, other than they're often used to being the middle person. And I think they get a little worried about their jobs if they're not the one controlling it.
 
Dan Sullivan: Where are they in their career? They may only be planning another five years, you know, 10 years.
 
Evan Ryan: Well, one of the things this is actually bringing up right now: it changed my life when Kristi Chambers said this. She said, a Free Day for you is a Free Day for your team from you. This seems very similar to that. It's like, you know, an automation for you is not really for you. Right? You see it on your P&L, but it's not really for you. It's actually for everybody else. Consider it like a gift.
 
Shannon Waller: I like that. So just as we wrap up, what are some action items? What are some ways that people can put some of what we've talked about in terms of productivity into play? How can we expand some people's thinking? What would you, Dan and Evan, what would you recommend?
 
Dan Sullivan: I'd find the obstacles. I'd take the Strategy Circle approach. You know what it can look like in three years, so that's your specialty, or at least a year. Let's say a year: a year from now, this is possible, this is possible, this is possible, this is possible, and I can prove from case studies that this has happened. First of all, I'd like to know if you would actually like that, and what the benefits would be of achieving that in a year. Because it's hard to create a vision out of something that's intangible. It's very, very hard. And I think you have to put it in terms of hours saved, new projects started because people were freed up from doing this, and everything like that. I think you have to work with what's possible. I don't think with AI you should go too much further than a year, because you can really visualize a year. You can really visualize that, and then say, okay, so let's talk about all the obstacles that would prevent us from doing that. And you'll learn a lot from that, because there will be categories of obstacles. If you do it with 10 different companies, there will be categories that fall out. But the goal is to normalize AI as a regular, automatic teammate, to normalize the concept that we're going to have automatic teammates going forward.
 
Shannon Waller: I love that.
 
Dan Sullivan: That's my take on it.
 
Shannon Waller: Perfect. Thank you, Dan. And Evan, from your success stories and people who have adopted it quickly and well, what are some of the things that you observe? What were some of the common elements from them?
 
Evan Ryan: The people who were the most successful, I think, are the ones who used AI to free up their best team members. It's very tempting, especially if you have underperformers on your team that you're desperate to get rid of, to use the AI there. And that's the fastest way to build the corporate immune system against the AI. But the ones who were the most successful were always the ones freeing up their best team members, because their best team members were really the ones making the one- or two-year vision happen. Freeing them up immediately up-skilled, or kind of up-leveled, them. And then everybody else was able to follow suit. So I think, leaving this, if you're somebody listening and wondering, you know, where do I start with AI? It's really: how do you want to use your best people better?
 
Shannon Waller: That is such a great question to leave with. You know, what's bogging them down? What's holding them back? What's taking their time? How can we get those irritation factors out to free them to do what they're best at? I mean, what a great question.
 
Evan Ryan: Yeah, or even looking at a combination of, you know, what are all the tasks that they do that they don't like to do? But then also, what's their D.O.S.? Like, what's their personal D.O.S.? And how can you help exemplify that personal D.O.S. for them?
 
Shannon Waller: If anyone listening is not familiar with D.O.S., it's what are their dangers—what are they worried about? What are their opportunities, which is what they're excited about? And then what are their strengths to be leveraged? And that's what they're confident about. So, yes, when you really understand their dangers, opportunities, and strengths, you can really customize a result for them. Awesome.
 
Dan Sullivan: When I'm 100 and you're approaching 50, Evan, we're still going to be talking about normalizing AI?
 
Evan Ryan: I think we still will be.
 
Shannon Waller: I think that's true.
 
Dan Sullivan: When are we going to get there? There's no there to get to.
 
Shannon Waller: When's this going to end? Never.
 
Evan Ryan: I don't think it does.
 
Dan Sullivan: It's not going to end. That's what's foolish about the attitude out of Silicon Valley and the media, that this changes everything. No, it doesn't. What humans have always done is normalize themselves to a new situation. I'm a real war history buff, and in the Second World War, both in the European war and the Pacific war, the United States was very, very smart about how they used their pilots, because these were brand-new technologies. The jump in aeronautical technology in about a five-year period during the Second World War was very profound. Especially in the Pacific, where they had to land the planes on a boat, you know, with the waves going like this and winds coming in. And they noticed something: the Japanese had great pilots, the best pilots in the world at the beginning of the Second World War. They were just phenomenal. But they had a limited number of them, they were highly skilled, and they used them until they died. And the Americans had really great pilots. They used them for 20 or 30 missions, took them out of action, and sent them back to the United States to train the new pilots. They kept taking their best pilots and turning them into teachers. I think AI is going to be the same way.
 
Shannon Waller: I like that a lot, Dan. That's a great way to think about it.
 
Dan Sullivan: Yeah. Some of the German pilots had 600 missions before they were killed. You know, Americans that have 50, 60 missions, they're out of there. And they knew at 50 missions they were gone. So they got their missions in real fast so that they could get stateside.
 
Shannon Waller: Awesome. Evan, I'm sure people will want to know more about AI and how they can make it a great teammate. So how can people find you, reach out to you? How can people connect?
 
Evan Ryan: Yeah. TeammateAI.com or check out my book, AI As Your Teammate, which is a great starting place.
 
Shannon Waller: It sure is. Awesome. Great. Well, fabulous conversation. I really appreciate the human angle on how that might be a limitation to AI. I think that was fascinating and interesting. And I think the coaching from you, Dan, about making it really practical and really getting some successes, making it normal, figuring out the specific obstacles. Everything that you've talked about, Evan, with kind of just people thinking it through in a new way is really powerful. So, thank you both. Great conversation.
 
Dan Sullivan: And Evan, thanks for helping us navigate our way into this new field.
 
Evan Ryan: Oh, thank you for giving me a framework for thinking about a lot of this stuff.
 
Shannon Waller: Awesome.
