May 05, 2025

00:48:06

POWER CEOS (Aired 05-05-25) AI and Authenticity: Why Human Connection Still Wins in Business

Show Notes

Discover why authentic human connection remains key in business, even in the age of AI. Explore the balance between tech and trust for lasting relationships.


Episode Transcript

[00:00:29] Speaker B: Welcome to Power CEOs: The Truth Behind the Business. I'm Jen Goday, your fearless host, entrepreneur, investor and business strategist. Why are we here? Because iron sharpens iron. And when we bring industry leaders, entrepreneurs, investors who are moving and shaking and doing amazing things, we all have an opportunity to learn and grow. As a result, our businesses grow, and the ripple effect impacts not only ourselves, our team and their families, but also our communities and our world. You are in for a treat. I have a very special guest here today talking about one of our favorite topics: AI. In an AI-powered economy, ethics sometimes are an afterthought, but they can't be. The question isn't if we integrate ethics, but how we hardwire it into every decision. He's here to weigh in on the topic of AI and ethics in business. Victor Cho, welcome to the show. [00:01:27] Speaker A: Oh, it's a pleasure to be here. Thank you so much. [00:01:30] Speaker B: Listen, you have really made some waves. You have a really exciting technology that's humanizing AI. And we had the opportunity to speak not too long ago, to meet not too long ago, and we talked a lot about ethics. And you really have sort of a framework for how this looks for entrepreneurs. Can you break that down for me? [00:01:53] Speaker A: Yeah, for sure. So I was lucky when I was fairly young in my career to get exposed to a bunch of companies that thought about their stakeholders in a very structured way. And Microsoft was actually the very first. Surprisingly, in the 90s, way before Google had built the Googleplex, Microsoft was actually one of the best companies in terms of treating employees. And so I walked out of that experience with this clear vision of, oh, you actually need to treat your employees well. It sounds crazy, but yes, there's an employee stakeholder.
And then I did a role at Intuit, the personal finance company and maker of TurboTax. And they really came to the fore with, okay, well, you also have your shareholders; you need to balance them and your customers. So I got my deep customer centricity from that experience, and it was clear to me this idea of you have stakeholders to manage was just obvious. That's how you run your business. But there's a missing fourth stakeholder which is not talked about, or is very difficult to talk about, which is society. Society is this big amorphous cloud. There are lots of people with lots of needs. But as a business, it's your responsibility to also serve that group as a stakeholder. And so I've published something called the Fourth Stakeholder Framework, which is just a way to operationalize that as a business leader. The other ones all have great systems. It's like you have your employee engine, your customer engine. You need a societal engine for the fourth stakeholder. [00:03:19] Speaker B: And a lot of times we talk on the show about for-purpose business, and for-purpose business attempts to think about exactly that societal impact. And one of the things that's really interesting with artificial intelligence, that we're starting to see with some people who have integrated it, is this idea of second order impact, those unintended consequences. You speak a lot about this, and you break it down in such an easy manner for people to understand. So can you kind of touch on second order impacts? What are they, and how does this tie into speaking to all of the stakeholders in your framework? [00:04:03] Speaker A: Yeah, so the idea of a second order impact in some ways is simple, but it's also very hard to solve. It's what's happening as a result of your business downstream. Your first order impacts as a business are pretty clear, because you can see them, you can feel them. Second order impacts happen further down.
So a great example of second order impacts, one that I know everyone in the audience who is on social media and has children is grappling with: what about depression? What about loneliness? These are second order impacts of social networking. Nobody built their social network to do those things, right? In fact, people find this kind of funny: even the cigarette companies didn't build cigarettes knowingly to kill people. Back then, they didn't even know cigarettes killed people. They were like, oh, this feels good, customers want it. It wasn't until later that they realized, oh, there's a downstream impact of this that we need to deal with. And so with second order impacts, it's clear when they start to occur, because you smell the smoke. The role of business leaders is now to do something about it. Because you don't have to. You don't have to, because it's far removed. Right. Facebook can say, I'm one of many social media networks, there's the phone thing, I don't need to deal with it. And I think a true balanced stakeholder leader will own up and say, no, no, I have a role to play in creating that impact, and therefore I have a role to play in solving it. [00:05:26] Speaker B: And so as we embark on this amazing age, how do we anticipate or try to put guardrails, ethical guardrails, around technology specifically? Because it's accelerating so rapidly, even the experts don't know what's coming two days from now. It really has hit that inflection point. So how do we think about that as entrepreneurs, as business owners, as executives? [00:05:49] Speaker A: Yeah. So this exact question is the reason I'm doing a startup at this stage of my career. I went through Microsoft and huge companies, and then I was the CEO of multiple businesses. And some of my friends were like, what, you're going to go do a startup? But it was because of this reason.
It's because we have created what I think will be the most society-changing technology on the planet ever, beyond anything else we've ever invented. And it won't necessarily happen in a year, but over the next 50 years, what does the world look like when we have untapped human intelligence? Which we have done; we have done that, and now we're starting to see it ripple through at this crazy innovation pace. So at this moment in time, it's incredibly important for business leaders, especially if you're working around AI, to try to get ahead of it as best as you can. And the way to do that, going back to your question, is I think you have to operate from a very firm principle set out of the gates. I'm running a new business called Emovid, which does video imaging. And you know, some of the worst things you could do with technology are done with video imaging. Deepfakes, which everybody is concerned about, are a horrible use case. But if your principle is no, no, we're going to use this imaging technology for positive goals, you can do that. It just bounds what you do, but it'll kind of provide the goalposts that you're navigating towards, if that makes sense. [00:07:15] Speaker B: You know, it's really interesting, because I do a lot of strategy for companies setting their AI strategy. Where are we going to leverage the technology? Is it going to be product differentiation? Are we going to productize it? Are we going to do operational efficiencies? And one of the things that comes up a lot of times is I ask this question and they don't understand the question. So I would love your take on it. And that's: what do you not want AI to do? What are the things that AI absolutely cannot do within your organization? And I immediately go to, I mean, I don't exactly want AI deciding when a weapon of mass destruction gets deployed. It seems like a funny example, but what do you not want it to touch?
What is it that we're not going to leverage the tool for? It gives a lot of entrepreneurs pause, and they don't know how to answer the question. [00:08:00] Speaker A: Yeah, it's a great one, because I think you can answer it so many different ways. And yes, I agree, we don't want AI controlling weapons of mass destruction. So I'll give you two separate answers. The first relates to my business, which is around authentic communication. If you take any business on the planet, you can draw this very simple line through it, which I call the relationship versus transactional line, or the relational versus transactional line. So imagine you draw a line. On the transactional side of the line are all the things you do in the course of your business where AI can actually bring great efficiencies, because they're by nature transactional. They're things where you're basically wasting time today and not adding a lot of value. So AI is great in those environments. We should absolutely deploy it. But there is the relationship side of the line. We are human beings at our deep core, and no matter how much AI there is, business success is ultimately going to ride on human connection. And so where I don't want AI to play, and you can see it wanting to creep in, is creeping past that transactional line into the relationship side. People thinking, oh, let me delegate this relationship building to my bot, my bot talking to your bot. And I'm like, no, that's not gonna work. If my bot likes your bot, it doesn't really matter. I need to look you in the eye if we're gonna do a big deal and feel comfortable, right? And have trust. So that's one way that I think about it. The other simple line that I have, and it's kind of just a personal hot button, is what replaces human beings versus what amplifies human beings.
It's very clear that with AI we can build solutions that basically decimate huge chunks of the knowledge worker industry, because it's going to do the work faster and cheaper. But just because we have technology that can do something doesn't mean we should necessarily deploy it there. So I'm really pushing for people to say, let's take this technology and just uplift people, let's give them more skills, let's help them, as opposed to right out of the gate saying, oh yeah, here's a million people, and we can do that cheaper. Because you could do that, but you could also do the other thing. And I think that's a better starting point. [00:10:05] Speaker B: Yeah. And I mean, we talked about second order impacts. If all of a sudden millions of people have no jobs, you basically, for all intents and purposes, can collapse the economic system as we know it. [00:10:14] Speaker A: That's right. [00:10:15] Speaker B: So there are huge implications to this. And as entrepreneurs and investors in this space especially, and every business is looking at this, we have to be the leaders here. [00:10:25] Speaker A: That's right. I'll give you one tactic. I just had this conversation the other day in D.C., because I was at a trust summit and someone asked me, well, what about the call centers? Maybe that is transactional and should move into AI. I thought about it and I was thinking, yeah, that's probably true. That's probably on the transactional side; it can be more efficient. But what all those companies that are building those call center solutions should do, if they're going to go that route, is also band together and mitigate the impact. So why don't those same companies say, oh yes, we may displace a gazillion workers, but here's a free set of training tools to help you move into new careers. Here is a path that you can take. They could argue that's not their impact or that it's not their responsibility.
I would argue that is clearly a big second order impact of what they're doing, and they should help alleviate it, mitigate it. [00:11:12] Speaker B: Right. And I mean, there's something for goodwill and a lot of other aspects as well. We do have to take a brief break, unfortunately, because this has been phenomenal. We are moving along. Stay tuned. You're not going to want to go anywhere. We'll be right back after these messages. [00:11:58] Speaker B: Welcome back to Power CEOs: The Truth Behind the Business. Wow, I can't even talk today. That's fantastic, Jen. We are going to pick up right where we left off. We are talking about all things AI, our favorite topic these days. And we left off before the break talking about what guardrails we should have, what choices we should make about what things we want to leverage the tool for and what things we don't. Victor Cho is here with me today, and he shared that there's a transactional versus relational line, and he prefers that AI stick to the transactional side. But when we do some of those things, there might be large second order unintended consequences. That might look like: I'm going to automate my call center, and as a result, many, many employees are now displaced, and we have choices to make. How do we ethically implement this? Some options might be to mitigate those second order impacts. And that's where we left off. But I want to really dive in deeper here, because right now we see a tale of two stories. We see the companies that are really trying to ethically upskill and amplify their teams, help their teams become better, faster, more effective, more efficient on the transactional side so that they can get back to being human. And on the other side, we have pretty aggressive business goals that are looking at it from a different standpoint; they're looking at not hiring, and it could have some of these impacts.
So I'm going to ask you a question, and you may or may not have the answer. Can ethical AI coexist seamlessly with aggressive business goals? And if so, what strategies could help maintain this balance effectively? [00:13:41] Speaker A: Yeah, that's a great question. And I think in some ways it comes down to what's going to be the engine of your growth, what's most important. So imagine you had an army and you had a choice. One option is: I'm going to take my fighting force, keep them all, and they're all going to be ten times more impactful. We're going to be able to go fight some bigger battles. That's one way you can get more aggressive, by amplifying. Or you could take that same army and say, oh yeah, I'm going to be able to fight the equivalent army that I fight today, but with two people, because I'm going to get rid of these eight, because AI can do that work. Great. Well, now you're still only able to fight the same battles, just with two people. So at a high level, there's a cost savings version, and I think that's a harder lane if you want to go really aggressive. The army that amplifies itself by ten can go take bigger swings. Now, there is a balance, because you need to be profitable as a business; of course, that all feeds into itself. So it's a balance. But you can't drive your way purely with efficiency to the next state of where business is going. [00:14:44] Speaker B: That's really a scarcity mindset. If all we're looking at is efficiency, it's a scarcity mindset. And what I've learned over many years, many years, is that if we operate from that place of abundance, and we're looking at AI as an amplifier, it's a way to magnify and open up volume for us; we're able to serve more people and have a bigger impact. That's the abundance mindset.
So I think it really is a tale of two mindsets, and I don't know that that message gets out a whole heck of a lot in the general space, because everybody's looking at either efficiency, productivity, operational efficiency, or they're looking at productization and some of these other things. So I would really love to hear some stories, because you play in this space; you've been in a lot of different companies. Can you dive deeper into an instance where AI has dramatically impacted or changed a decision making process? Because it's evolving, and we're trying to do everything we can to stay ahead of it. But do you have an example? [00:15:47] Speaker A: I have a very bad example. [00:15:48] Speaker B: That's okay. Sometimes we learn from the bad example. [00:15:50] Speaker A: And I don't know that it's been validated, but this is very timely: the whole tariff decision making. I know there's still reporting being done, but there is a fairly hot reporting thread that says the core tariff strategy that was attempted actually came from AI. I don't know that it's been definitively proven, but there's a lot of smoke that says that might be where it ended up. So that would be one example. [00:16:12] Speaker B: I heard that smoke. [00:16:15] Speaker A: If it's true, that would become maybe one of the most damaging uses of AI ever, if you think about what just happened over the last 30 days. [00:16:25] Speaker B: And so what we're seeing is a lot of people relying on artificial intelligence for faster decision making. But it's not always reliable, and there's intrinsic bias, especially if we're looking at anything that's open source. So how do we mitigate that as decision makers in our organizations?
[00:16:47] Speaker A: Yeah. The way I think about that today, and it's going to change over time, because again, the level of progression of capability is going so high. [00:16:56] Speaker B: So today. Just wait until quantum compute is commercialized. [00:16:59] Speaker A: I mean, today, one thing I think AI does very poorly is the highest level systemic strategy. Right. Looking across the entire... it just doesn't have enough data points, and I don't know that it actually is intelligent enough to make those calls. And so you can't delegate that. You should not delegate that to a system. You can delegate tasks today; it does amazing research. So it's kind of at the lower to mid level ranks, but it is starting to creep up. I have no doubt that in a couple of years there are many strategic decisions that you may be better off feeding into the AI, at least for the first thought exercise of what should we go do. But today, I wouldn't let an AI make any really big, important decisions like tariffs. [00:17:47] Speaker B: Right. Or, you know, other use cases. I think immediately, like, would you want, I don't know, a mechanic doing brain surgery? It's kind of similar, because it's not a skill set that is trained on high level impact, high level decision making and strategic thinking. When it comes to that meta level, a lot of times, and this conversation is a perfect example, we don't know the answers. We don't have enough data points, because it's happening so rapidly and escalating. So if that is indeed the case, the human in the loop conversation: where is the human in the loop? [00:18:29] Speaker A: For sure. The other one is just humans. Humans are messy. Right. Depending on your mindset, you could think of us as just very sophisticated biological neural nets, but super sophisticated. Right.
So for a non-biological AI to try to understand and think of all the craziness that's in this... at least here, I know you're probably less crazy, but... [00:18:49] Speaker B: Oh, I've got plenty of crazy. Don't worry. [00:18:52] Speaker A: If you think of the decisions of an executive managing their team, it's not just about the numbers. It's about who that person is and what they're going through in their lives, what that means given their religious beliefs and what I know of them. That's not going to be in an AI system. Right. But those things are needed for the right decision, interestingly enough. [00:19:11] Speaker B: Okay, I'm going to just say it. I'm a no-B.S., straight-shooting kind of person, and I'm a consultant. Well, I can't just say what I'm thinking; I have to mitigate that. I actually leverage AI to give me different people's perspectives and to say, hey, what is the tone of what I'm about to put out, especially in written communication and some of the other things. Because I can ask: okay, what is the perspective of someone who doesn't have this background? You know, I've always been in professional services. I was in medicine. So, like, give me the mindset of someone who has come through this particular socioeconomic upbringing, this particular kind of job, this particular kind of faith. Because I'm from New Orleans, I was raised as a Catholic, I went to all-girls school, so I have a certain level of experience in that space, but I'm not over here. [00:19:58] Speaker A: That's a cool use. That's a very cool use. [00:19:59] Speaker B: You do use the technology to learn. And it's really interesting, because everybody says bias is bad in these models, but sometimes bias is a little good if you're using it to understand the perspective of another human being. What are your thoughts on that? [00:20:16] Speaker A: I would totally agree. I also think this idea that you can have an unbiased AI system is not possible.
Right. Because there is no such thing. There are very few things in the universe that are true, true to the point where you can just say two and two is four. Yes, that's true. Right. Well, actually, there are probably some scenarios where even that's not true. But everything else, it's all shades of gray. Right? It's all shades of gray. And so a system that has a point of view is always going to be misaligned with somebody, in the same way that human beings are misaligned with each other. It's inevitable. I call it an asymptotic curve. Right. The more sophisticated it gets, the more it's going to have to deal with what we deal with as human beings. [00:21:01] Speaker B: Absolutely. I know that we have just a short period of time before we have to break again, but I'm going to ask you, based off of what we've been talking about, for everybody who's watching out there and hearing this, what is the one thing that you want them to take away and chew on for the next couple of minutes? [00:21:23] Speaker A: Related to just use of AI? [00:21:24] Speaker B: Yeah. And thinking about ethical guardrails and uses. [00:21:29] Speaker A: I think the one question, especially if you're running businesses or you're thinking about deploying, and I find this a very powerful question: what does your business, or if you're a large business, what does the world look like in the limit of success of what you're doing? Meaning not if 1,000 people are doing it, but what if it just keeps going? What if it just goes all the way to the limit? So I'll give you a tactical example. There are companies that are building digital avatars that you can have a relationship with. And the current pitch is, hey, this is going to mitigate some loneliness. And I'm like, okay, I could see that. But now, what's the limit? What's the limit of a world where everyone is interacting with a digital avatar? That's not a good world.
We've got zero birth rate, or something else going on. [00:22:15] Speaker B: I mean, population control solved might be... [00:22:19] Speaker A: ...a flip side positive to that one. But that's a great way to shake out the second order impacts. Because if you think of that, you're like, oh yeah, that's true. At some point this becomes an issue. Right. What if the social networks had thought about what happens in a world where all we're doing is interacting on social networks and we're not spending any time together? [00:22:37] Speaker B: Well, we say that right now. We're living it. [00:22:38] Speaker A: Yeah, exactly. Right. Maybe they could have done a better job at preempting some of these things. [00:22:44] Speaker B: Absolutely. What a powerful question. So you heard it right here: what if what I'm integrating or implementing goes on indefinitely, limitlessly? Where does it become a concern, and how can I mitigate that concern? We'll be right back after these messages. [00:23:35] Speaker B: Welcome back to Power CEOs. If you're just tuning in, you're going to want to go to NowMedia TV, click on shows, go to Power CEOs and watch the first half of today. It has been epic. We have dove deep into the ethics of AI. We've talked about ethical considerations of all stakeholders. We've talked about societal impact. We've talked about those unintended consequences that happen, and so much more. You're not going to want to miss a moment of it. But I am here with Victor Cho, and we are diving deep into all things AI ethics. And really, I have a burning question for you. There is a lot of ethical theater happening in the AI space. And by that I mean we're putting guardrails in place, we're doing the most ethical thing, but there's no accountability to that, because of what you see after integration. And I could give you an example in a moment. After integration, it's: well, we didn't know that could be possible.
And I'm going to give a very basic, big mistake that happened. Oh, we're doing all the right things. It's a recruiting company; they're placing people, and they trained the model. It filtered out every female applicant, everyone. And it went on for over two months before anybody noticed that something was amiss. You can imagine that's a massive impact. And one of the easiest answers is, well, you pilot before you roll out across your whole organization. And so, you know, it's kind of a no-brainer, sort of silly example. But it is exactly what we're seeing as people talk about ethics and AI but then don't put their money where their mouth is. What are you seeing, and how do we move more towards accountability in this age? [00:25:25] Speaker A: You know, I think it's so critical. So one, these systems have a tendency to drift. Right. They're not like previous software solutions where you could build it, test it, deploy it and feel pretty good. You kind of have to continually monitor what's happening. So in this instance, the funny thing is, either the monitoring wasn't happening, or maybe they were just all interviewing men before, because how would you not realize that no females were coming through the pipeline? [00:25:55] Speaker B: I'm like, I don't know. [00:25:57] Speaker A: But yes, they should have had a process there to ask, is this a balanced, representative group of candidates coming through? That idea of continual monitoring on these systems is going to be so key. [00:26:09] Speaker B: Yeah. And the idea of human in the loop and monitoring as we move forward. It actually is a high level human in the loop, if you think about it, because you're not just asking, is this doing what I want it to do, but also, is it having an adverse consequence?
So that's a skill set that we really haven't trained into our workforce. What's the answer for that? [00:26:32] Speaker A: The irony of this one is that I think a lot of that can be done by AI. Right, and companies are doing this today: you could easily have another AI agent looking at the output and asking, hey, is the output of this AI representative? And it would have flagged it. So what you will increasingly see as we move forward is the control systems being looped through other AI systems, which will then need their own controls. So it'll become a complex web of monitoring. But I think the good news is some of it will get easier because of AI. [00:27:06] Speaker B: Yeah, well, that's an easy answer. That was a softball I should have thrown. [00:27:11] Speaker A: AI is going to do everything. [00:27:14] Speaker B: So talk to me a little bit about this, because we touched on what happens when AI replaces an entire skill set or knowledge-based workers. That's a large chunk of our economy. So it can amplify our skills, or it can render them obsolete. How do we strategically choose between which ones we're going to upskill and amplify and which ones we're going to... maybe we don't need to continue to hire and really would be better served. And I'll use education. This might be a hot topic, but education has not evolved. We're still teaching the way we taught in the 90s, and with technology today, that dramatically needs to evolve. It just needs to. So in my mind, that's an example of exactly this crisis. These workers, the way things have been done needs to change, but they don't have the skill. Now that we have the systems, how do we make those decisions, and how do we implement them in a way that has minimal adverse impact but actually gets the job done? [00:28:25] Speaker A: I don't know that there's a great answer to that.
All I know is that if we do it too aggressively, we may see large chunks of unemployment, and our systems do a fairly poor job of digesting huge swaths of unemployment rapidly. And the crazy part about AI is we've never had that with the knowledge worker segment. Right. All previous displacements have really been physical workers, one, and two, it's been a little bit slower because of the capital intensity of the physical work. Meaning, if you think of the industrial revolution, it happened fast, but not fast like AI can happen, because, oh, you had to build an actual factory. Right now, you could deploy software, press a button, and five years from... [00:29:08] Speaker B: ...now the world looks very different. [00:29:10] Speaker A: Exactly. So how do we control it? I think we just need to, as lame as this sounds, monitor it very, very closely and make sure we're micro-adjusting. I think if we have the right mindset in leaders, that right balance of amplification versus just pure replacement, we'll have a better balance, at least. So we won't all just be moving towards let's-go-replace. And then hopefully, if we see examples of that ending up in business results, that will become a self-fulfilling engine, which is like: oh, I see, I can't cut my way to cost savings. That company became a 10x army and now they're kicking my butt. Maybe I should have gone and done that. And hopefully we'll kind of counterbalance it with those. [00:29:49] Speaker B: And how does Emovid fit into this equation? [00:29:53] Speaker A: Yeah, so Emovid, at a high level, is the new way to communicate in a world of AI. Today in the world of business, you have text, you have email, and you have two ways to build a deep relationship: this, which is awesome, face to face, and Zoom. If you think of that transactional versus relationship line, email as a way to build relationships is going to move into agentic systems very quickly.
So today, if you're using email to try to build a relationship, in the future this might even be happening to you: now I'm like, oh, that's a really nice note you sent, with a nice little poem, but I know you can't rhyme, so I know that's not you. So Emovid is what we call multimodal communication, but video centric and asynchronous. Imagine that I send you a message in video form, and you send me back a message. It could be video, could be audio, could be text. It's this ability to maintain our face-to-face dynamic, but asynchronously. So that's what Emovid is. Oh, sorry, your question was how does Emovid fit into this amplification. At a super high level, Emovid is like a superpower of communication in the simplest form. I have today probably 100 relationship-building threads going out highly efficiently, in the same way I used to have email going out, but now I'm actually building relationships and trust. That was impossible before. So it gives you what we call a relationship-building superpower. If your role requires building relationships in any way, shape or form, this is like a must-have tool. [00:31:26] Speaker B: And it's an amplification, not a replacement. [00:31:28] Speaker A: And it's an amplification, not a replacement. [00:31:30] Speaker B: So it's not a deepfake. [00:31:31] Speaker A: That's right. [00:31:34] Speaker B: The world that we live in. If you would have told me 10 years ago we would be where we are today, I would not have believed you. 20 years ago? Forget it. I was in medical school when the Internet was being rolled out: dial-up. Imagine being in med school on dial-up, and just when you get about halfway through the page, it goes away. If you would have told me then that we would be talking about deepfakes and some of this other stuff, I would have laughed at you. That's Terminator stuff.
[00:31:59] Speaker A: Now, you know, this decade is actually the closest to all the sci-fi that we grew up with. [00:32:04] Speaker B: It really is. [00:32:05] Speaker A: Think about it. [00:32:06] Speaker B: It really is. [00:32:07] Speaker A: It's exciting, but also can be, right. [00:32:09] Speaker B: A little bit frightening. [00:32:09] Speaker A: Could be frightening if we don't deal with it correctly. [00:32:12] Speaker B: And so we're dealing with a workforce that grew up with those frightening science fiction movies. And one of the things that I'm noticing on the implementation and integration side, because I do a lot of strategy with companies, is that employees are self-sabotaging because they're afraid that they're training their replacement. How do we fix that problem? [00:32:34] Speaker A: Yeah, it's so funny. So I sit on an advisory board for the school I went to, the University of Pennsylvania; they have an AI board at the Wharton School. And we were literally just talking about this, because they were saying how many students and workers are using AI but they don't tell anybody, because one, they're getting better output, they're having better balance, but they're like, oh, if I disclose that I'm using this, then I'm going to lose my job, or they're going to cut my hours or something. So yeah, there's this weird organizational disincentive to showcase that you're using it. I think that gets solved from the C-level down, which is you just need to be very explicit, I think, with your teams: no, we want you to find tools, efficiencies, amplify yourselves. You're not going to get penalized if you come up with a solution that takes 90% of your job and automates it. You should be rewarded for that, which is like, great, now how do you add more value in the organization? [00:33:26] Speaker B: Yeah.
And so let me ask you the counter to that, because what is happening is some of these models, and some of these agentic models especially, are looking at the frontline workers and how they're interacting, and they're documenting their processes. And how do you mitigate them intentionally leaving steps out, or self-sabotaging the AI, because they have this fear inside of them that I'm now going to be not useful anymore? [00:33:57] Speaker A: So you're talking about an AI system, like, watching retail: how fast are you managing the till? [00:34:03] Speaker B: Yeah. Or even, like, I think about this in medicine: taking the automation from intake to taking the notes and submitting to insurance and the billing process. There's a lot of these things that have been heavily manual, and when they go to train, they're leaving steps out. So it's not effectively creating what you're desiring from that automation. Fear of loss of job. [00:34:30] Speaker A: That's what it is. Yeah, yeah, yeah. I don't know that I have a great answer to that one, other than I am not a big fan of, like, AI-automated surveillance systems across your workforce. Because just at a deep psychological level, if you think of your employee as a stakeholder, that's a very bad employee experience. Like, nobody wants to live under the evil eye of AI, to know, like, am I working fast enough? [00:34:55] Speaker B: I don't see you sweating enough. [00:34:56] Speaker A: Yeah, no, it's like, again, it's a world we can build. We can use the tool to do that. But let's find a different way. I mean, you could still find the same efficiencies or amplification without the surveillance state. Right. It just takes a tiny bit of creativity. [00:35:12] Speaker B: Well, I think also surveillance comes from micromanagement, which usually comes from scarcity, which we've already kind of talked about as well.
[00:35:20] Speaker A: That's right. That's right. [00:35:22] Speaker B: This has been really engaging. I'm enjoying this. I hope you are, too. It really has been enlightening to kind of see some of the different perspectives of some of the people in the space. And I hope, if you're watching, that you're seeing, okay, here's a use case that I really could use, and this is a place where I probably don't need to be leveraging AI. Because AI is not a magic button. It's not going to be something where you just go poof and everything is done. There are places where we're going to have humans in the loop. There are places we're going to amplify our team so that we can amplify our business and our impact. We do have to take a brief break, but we'll be right back after these important messages. Welcome back to Power CEOs, the Truth Behind the Business. I am here with Victor Cho, and we are talking all things AI. We've talked about unintended consequences, ethical guardrails, how we can be leaders in this age where digital transformation is happening exponentially, how we can do things in a way that empowers and amplifies our people rather than replacing them, and so much more. And so one of the things that we were talking about in the break was authenticity and relationship building. And really, how do we get that human connection? And, you know, on this show, a lot of times I talk about how AI is all about becoming more human. How can we get back to doing the more human things? So I'm going to ask you that question. What are some of the ways that we can leverage the technology to get back to being human, or more human, with one another? [00:37:15] Speaker A: Yeah. So I love this question because it's the heart of what we're doing with the company that I'm running now, Emovid. It's going to be very tempting to chase the shiny AI pebble that seems like it's going to bring in efficiency.
But my message for the audience would be: the currency of legitimate touch, and knowing that the person is a real person, is only going to go up in value. Today, if you have executives on the show, like, I'm sure you're already getting barraged. I'm starting to see this now. Right. You get barraged with these things, and now people are starting to say, no, really, I'm not a bot. Like, I'm a human being. Because it's so clear. And it's just that the first-touch solicitation wave is all getting automated with AI. So what's going to happen is that's all going to become useless. And so if you jump on that bandwagon, you're going to enter this stream of crud. I almost said a bad word. You're going to enter this stream of. [00:38:07] Speaker B: Taking it out with the garbage. [00:38:09] Speaker A: Yeah. And so you need to be very purposeful about your use of these technologies and making sure that in the important moments you're staying authentic, that that message is you, and that people know that that message is you. Which is our whole platform, actually. When you send a message, it says, like, this is Victor; you can trust that Victor sent you this message, it hasn't been modified. So looking for solutions like that, that can really verify the authenticity of what you're receiving, is important. [00:38:34] Speaker B: You know, it's really funny. I've gone to a couple of conferences, and I was early on the avatar bandwagon just to see what was possible. And my whole purpose for doing that was: we get a lot of calls from our clients all the time, and a lot of them are easily answered, but it's taking coaches' or consultants' time. And so the goal was to train it, and we did. We trained it on my brain, all of my content, everything. And it was like, what would Jen do? So I had a generative Jen and I did the avatar thing, and I'm like, yeah, this doesn't work. But I tested it and I showed people at conferences, and I'd be like, this is what's possible.
But I tried it and it didn't get a good response, because it's still not human. Now, the communication and the easy answering of your questions? Perfect. Helps a lot. They love that. But the actual physical representation and the voice, they hate it. And after I showed that, people now ask me in person, am I speaking to Jen Jen, or am I speaking. So I see this completely right now, and it's really amazing to me, because my first impression was, wow, this is really awesome. I did it like 18 months ago, whatever. And the technology has continued to evolve, and I've evolved with it, just so that I can spot other people's fakes. It's a game for me. [00:39:54] Speaker A: Yeah. The human brain is so tuned to the micro-expressions in the face that I contend AI will never be able to authentically replicate how you would speak to somebody, for a couple of reasons. One, how I say hello to you, even though it's the same words, is different from how I say hello to my wife. [00:40:17] Speaker B: Of course. [00:40:18] Speaker A: Right. And how I say hello to my wife if we, not that we argue, but if we did and we had an argument, versus, like, after a vacation. So the historic context of your interaction matters. Right. Who you're talking to matters. The weather can matter. Like, you're never going to be able to capture that in an agentic system and just have it fed up, at least not in a way that's authentic. [00:40:38] Speaker B: And I think that's why they have a really hard time with certain languages. I do a lot of medical education work in Vietnam, and there is no good Vietnamese translation system, because they have so many different inflections and tonalities and things that translation does not work well. And I don't suggest you try it, because I don't speak it. So, like, when everybody starts laughing, obviously it was off the mark.
So it's really interesting, because authentic communication and what you're doing with Emovid is really revolutionary. The other thing that comes to my mind is, you know, I meet people in person and I've seen their super-filtered, AI-generated headshots, and they look nothing like that. They're, you know, 30 years older, way different. Different hair color. It's completely different. And I'm immediately, like, at a disadvantage, because they know who I am and I have no idea who they are. And I might have interacted with them on Zoom or some other digital format, and I meet them in person and I look like a moron, quite frankly, because I don't recognize them, because they look nothing like their photos. So you're kind of solving that problem. [00:41:54] Speaker A: Hoping to, yeah, maintain and increase the authenticity of communication. It's going to be more and more valuable, more and more important. That's right. Absolutely. I have heard that on one of the dating sites, the average man is like half an inch shorter than he claims to be. Yeah. [00:42:13] Speaker B: We live in such a weird world. If I would have known this when I was growing up. We were told not to talk to strangers; now we use an app on our phone to contact a stranger to take us places. Don't get in a car with strangers. But here we are. [00:42:25] Speaker A: Oh, no, no. So here's one example of, like, one of the worst things. I won't call out the company, because I just don't like to throw shade on people. But they were so excited. They were like, look, we're going to build an agentic system, and, like, we can make one for you, the husband, and then there can be one for you, the wife. Your agents can just talk to each other and resolve it all. And I'm like, what are you talking about? No, we don't want that world. I don't want that world.
[00:42:54] Speaker B: You made me cry already. This is great. Yeah. So getting back on topic, it's really important that we have authentic communication. And it matters, because if we're constantly throwing up something that's not real, when we come to the real situation, we've already broken trust. I think that's where I was going with the whole filter conversation, because then I'm like, okay, well, you don't look like you. You put me at a disadvantage, because I have no idea who you are. And I'm like, I'm sorry, who are you? And we have a relationship online, we've been meeting, and it's inauthentic at that point in time. And for me personally, one of my core values is authenticity. What you see is what you get. [00:43:34] Speaker A: Yes. [00:43:35] Speaker B: And no one can fake being me. I can't fake being other people either. But, you know, it's kind of like a break in trust when we have all these avatars and some of these other areas. And trust is key to relationships and to growing business. And so as we get towards the conclusion of our episode, I'm going to ask you: what is the future, and where do you see this going? And how can we remain authentic and build authentic relationships in this world, in a way that when we meet in person, it's not a break in trust, it's a continuation of that relationship? [00:44:13] Speaker A: Yeah. So I think my simple answer on that, and before this role I was running Evite, the online invitation network, which is all about bringing people together face to face: you have to come together face to face at some point, with some kind of multiple-touchpoint interaction, in order to build that trust and relationship. It's not going to happen with avatars, it's not going to happen with email.
So whether that's flying on a plane and coming to Houston, or spending some time on Zoom, or, we're adding the third, sending an Emovid like you have, you need those face-to-face channels; otherwise you're gonna have no idea who you're dealing with. [00:44:51] Speaker B: Yeah, absolutely. Thank you for that. And so you heard it here, folks. The future of business is getting back to being human, being authentic, leveraging technology where it makes sense on the transactional side of your business. And so I'm going to put a challenge to everybody who's watching, because, you know, I'm a coach and you have to have an action step. I want you to think about the ways that you're leveraging technology, if you already are. And what do your touch points look like? Are you doing those auto-bots and those automated touch points, and how is that working? Get real. Be very realistic with yourself and ask yourself: is this how I want to do business in a world that is increasingly looking for authenticity and that face-to-face interaction? So I want you to do an audit of everything that you've integrated, all your technology, and where are you being human, and where are those key quality touch points with all of your stakeholders: your investors, your team members, your clients, and the world, the society around you? Victor, do you have any last things that you want to share? [00:45:52] Speaker A: No, just such a pleasure having this conversation, and I love the fact that we share this authenticity ethos at our core. [00:45:59] Speaker B: Absolutely. How can people reach out to you and learn more about what you're doing? [00:46:04] Speaker A: Emovid is at the URL www.emovid.com. And you can find out all about me, my background; I actually have a bunch of free courses and materials. My fourth stakeholder framework is at www.victorcho.com, V-I-C-T-O-R-C-H-O dot com. [00:46:24] Speaker B: Yeah, it's been phenomenal having you here.
And I'm gonna ask you one last question, because you've been in the tech space for a long time before you went on this journey. If you could go back and whisper something in your own ear, with everything that you know and where we are today, what would it be? [00:46:39] Speaker A: Oh, that's easy. It comes back to the power of face to face and connection and relationship. So I think for the first 10 or 15 years of my career, because no one had given me that coaching, I was very much in a transactional mindset around people. It was like, oh, here's what I do, here's what you do. How can I help you? How can you help me? Everything was transactional. And I realized you don't build deep relationships that way. Right. You need to actually spend time and cultivate your relationships. And that network of value that you create around you is probably the most important thing you can build. [00:47:13] Speaker B: Fantastic. You heard it here. Victor Cho, very successful. Thank you for sharing, and thank you for being here today. [00:47:18] Speaker A: Oh, no, thank you. [00:47:19] Speaker B: And you. Yes, you. I know all good things come to an end, including this show. But the good news is you have actions to take. Take that action today. Do that audit on yourself and figure out: what is that space? What is that line for you? What are your guardrails with your next integration? Don't wait for tomorrow. Do it right now, while you're thinking about it. That way we have positive momentum. And when we have positive momentum, we grow, our businesses grow, and everybody feels the positive impact. Until we meet again, same time, same station, next week: I want you to win today, win this week, and I'll see you next time. [00:47:59] Speaker A: This has been a NOW Media Networks feature presentation. All rights reserved.
