with Ulrich Irnich & Markus Kuckertz

Shownotes

In this thought-provoking episode, we dive deep into the future of leadership, technology, and human agency with Dr. Steven D. Carter – TEDx speaker, author, and Harvard Senior Executive Fellow. From AI’s role as a silent co-pilot in daily life to the disruptive potential of quantum computing, this conversation explores how emerging technologies challenge traditional decision-making, leadership, and ethics.

What you’ll learn:
• How AI is reshaping leadership and strategic thinking
• Why AI won’t take your career—but may change your job
• The ethical blind spots of biased algorithms
• What quantum computing really means for the future of work
• How leaders can prepare their teams without creating fear

Connect on LinkedIn:

  • Dr. Steven D. Carter – linkedin.com/in/drstevendcarter/
  • Ulrich Irnich – linkedin.com/in/ulrich-irnich
  • Markus Kuckertz – linkedin.com/in/markuskuckertz

All podcast episodes: digitalpacemaker.de

We would love to hear your feedback. Let’s talk!

Summary

In this episode, we explore the profound implications of artificial intelligence (AI) and quantum computing on our daily lives and decision-making processes with Dr. Steven D. Carter, a distinguished TEDx speaker, author, and Harvard Senior Executive Fellow. Dr. Carter draws from his extensive experience across various sectors, providing unique insights into how AI is not just a tool but a transformative force shaping our future.

We begin by delving into the evolution of AI, highlighting its omnipresence in our lives, long before many realized it. Dr. Carter reflects on the early signs of AI’s influence, such as tailored advertisements on social media platforms that emerged nearly fifteen years ago. He emphasizes that, much like the introduction of the iPod, AI has become an indispensable part of our routines, enhancing our capabilities in ways we have yet to fully comprehend.

The conversation shifts to the critical responsibility that comes with AI’s power, acknowledging its ability to amplify biases and potentially disrupt human judgment. Dr. Carter insists that understanding these risks is essential; otherwise, we risk creating an uncontrollable future. He challenges the prevailing narrative of fear surrounding AI, positing instead that it should be viewed as an opportunity for enhanced efficiency and creativity in our personal and professional lives.

As we examine the intersection of AI and leadership, Dr. Carter makes the compelling argument that today’s leaders must evolve into “sense makers” – those who can interpret vast amounts of data, aided by AI, to ensure empathetic and informed decision-making. He stresses that while AI can provide real-time insights, it should complement, not replace, human intuition. The importance of fostering a culture of AI literacy within organizations is highlighted as a cornerstone of successful adaptation to this new paradigm.

The risks associated with AI adoption are also scrutinized, including the potential for unethical applications and data mismanagement. Dr. Carter discusses the necessity for diverse and inclusive data sets, advocating that the creators of algorithms represent multifaceted viewpoints to avoid skewed results. The discussion brings to light the ethical considerations involved in AI usage, urging listeners to carefully evaluate the impact of their decisions on society.

Moreover, we delve into the future of technology, particularly the convergence of AI and quantum computing. Dr. Carter expresses excitement about quantum advancements, which promise to amplify AI’s capabilities exponentially. He asserts that this merger will allow us to tackle complex problems, such as climate change and health interventions, with unprecedented precision and speed.

Finally, Dr. Carter concludes by reinforcing that we are already at the forefront of utilizing these transformative technologies. The appetite for AI is growing, and he encourages a proactive approach to harness its capabilities while remaining ethically grounded. By asking the right questions and understanding the implications of our actions, we can steer the future of AI and quantum computing for the betterment of society.

This episode is a thought-provoking journey through the present and future landscape of technology, emphasizing the need for curiosity, ethical stewardship, and a commitment to continuous learning in the face of rapid technological advancement.

Transcript

Dr. Steven D. Carter:[0:00] So one of my professors said to me, he said, look, Harvard’s not going to make you smarter, but you will learn to ask better questions.

Dr. Steven D. Carter:[0:08] That’s exactly what AI is doing. It’s teaching us how to ask better questions because it will help us to interpret data wisely and it will help us learn how to lead with empathy.

Music:[0:21] Music

Markus Kuckertz:[0:37] Welcome back to the Digital Pacemaker Podcast with Ulrich Irnich and with me, Markus Kuckertz. Uli, it sounds as if you are with a larger audience, or that there are more people where you are sitting. I think you are at the Digital Transformation World in Copenhagen. How is it going there, and what are the key topics?

Ulrich Irnich:[0:55] Well, as you can imagine, one of the main topics is autonomy, which has a lot to do with AI, but also with how, for example, a network can run completely autonomously. You know that from cars, right, level five automation: you don't need a driver or a steering wheel or anything else, the car drives on its own. And that's also the main topic here for networks and autonomous operations: how do we get to the point where the network senses a problem itself and heals itself? And of course, there is hardly a sentence here that doesn't mention AI twice, right? That's also a phenomenon you see at this conference.

Markus Kuckertz:[1:42] Sounds really good. And it relates quite a lot to our topic. Today we will talk about: Can We Stay Human in the Age of AI and Quantum Power? And I'm very pleased to have a really good guest who can tell us a lot about this area, and we will jointly explore these topics. Our guest today is Dr. Steven D. Carter, TEDx speaker, author, and Harvard Senior Executive Fellow with over 20 years of experience across public and private sectors. He currently serves at U.S. Africa Command and advises organizations at the intersection of strategy, innovation, and digital transformation. Dr. Carter holds a doctorate in business and teaches as an adjunct professor at the University of Maryland Global Campus and Morris Brown College. He's also a member of the Harvard Business Review Advisory Council and a regular contributor on leadership, systems thinking, and technology-enabled change. Welcome, Steven.

Dr. Steven D. Carter:[2:39] Thank you, gentlemen. It is great to be with you both. I’m very excited to be here to talk about one of my favorite topics.

Markus Kuckertz:[2:45] It's really a pleasure to have you here today on our podcast.

Markus Kuckertz:[2:48] And today we'd like to speak about the following topics. Firstly, artificial intelligence is no longer a future concept. It's already reshaping how we live and work, from daily routines to complex decision-making; AI is becoming a quiet co-pilot in everything we do. Secondly, with great power comes real risk. AI systems can amplify bias, increase surveillance, and disrupt human judgment; if we ignore the downsides, we risk designing a future we can't control. And lastly, the convergence of artificial intelligence and quantum computing will redefine what's technologically possible and ethically necessary. We are not just facing faster machines, but deeper questions about agency, values, and the human role in decision-making. Uli, how often have you found yourself wondering lately whether you are talking to a human or a machine?

Ulrich Irnich:[3:38] Well, that happens quite often at the moment, right? As the technology evolves, you can't really tell anymore whom you are talking to, which indicates, I would say, not a problem but a challenge: how can we trust the things we see and hear? And that comes with a responsibility on us to make sure we have trustworthy systems that also foster digital sovereignty. Because if you want to be part of a digital ecosystem, you need to be able to trust the things you are seeing and hearing. And it's fantastic to have you here, Steven. Welcome. I know that you think deeply about the impact of AI on society, and I've also heard you talk about leadership in that context. But the question I have for you

Ulrich Irnich:[4:37] is, when did you first realize that AI was more than just a technology? That it might actually reshape how we think, act, and work?

Dr. Steven D. Carter:[4:48] Well, honestly, this goes back several years. I've actively been using AI in the government sector, the academic sector, and the private sector, and personally as well. But we really first started seeing this, I mean really seeing it, around 2008, 2009, in our social media. What was interesting is how you could have a conversation with someone on a social media platform, Facebook or the like, and then find an advertisement in your feed for the very thing you were talking about.

Dr. Steven D. Carter:[5:33] AI was already embedded in the commercial apparatus. So we saw it, but we didn't see it, or we didn't acknowledge what it was at the time. Of course, in later years, when it became more widely accepted by the general public, we began to understand that what we were experiencing was AI, not only in our social media but even in our GPS platforms. It's baked into our everyday lives. But remember, because it was invisible for so many years, we just didn't realize it was there. Now it's about raising that awareness, and the first step is responsible engagement.

Ulrich Irnich:[6:19] As you said, it's been there for a while now, and I know people have been talking about AI, the science and the background of it, for five or six decades, right? So it's not really new, but we didn't realize it and may have ignored it. How do you see people adapting to it at the moment? Because that's also a human thing; it takes a while before we realize it. But do we adopt it freely?

Dr. Steven D. Carter:[6:52] Well, that is a good question. Do we adopt anything freely? I would equate it to the iPod. How many people knew that they needed an iPod? In fact, nobody knew until you got your hands on one, until you started watching your shows and movies and carrying your music with you. Then it became a phenomenon. Suddenly everybody had to have one, every man, woman, and child, regardless of age, gender, religion, or ethnicity. We all had to have an iPod.

Dr. Steven D. Carter:[7:32] That is akin to what AI is doing. Now that it is commercially available and people are slowly warming up to how it can improve, enhance, or create efficiencies in their lives, it has become something we have to have. And this appetite is going to continue to grow the more we find out what we can actually do with AI.

Markus Kuckertz:[8:01] Steven, do you think people are aware they are already relying on AI more than they realize?

Dr. Steven D. Carter:[8:06] Honestly, I think we have three groups of people. We have those who are constantly seeking the newer, better, more efficient. We have those who accidentally fall into the space. And then you have those who say: I don't understand it, I don't know what it is, and therefore I'm not going to use it. So those are the three groups.

Dr. Steven D. Carter:[8:34] What's going to happen is that the first group is going to find it, exploit it, and do great things with it. But the second group is probably the largest, because they're still warming up to the idea, whether it's the playlist that knows your particular vibe, the spell check and grammar corrections, the language translation feature, or even the GPS and how the phone can reroute in real time on your journey. We have to look at how this is, once again, becoming something incorporated into our daily lives and activities, including school, work, and our private hobbies. For example, someone who wants to get into baking and doesn't want to spend money on a cookbook, because they're cheap like me, can use AI and say: hey, I want to create a recipe for brownies, but I want them this way. I want brownies that are really moist, that are succulent, that are spicy, that are green. The AI can help us do that. And that's what we're seeing in everyday life.

Ulrich Irnich:[9:58] And it's more about how we enhance our capabilities, right, Steven? Because I often sense in conversations that people think this is a threat. Of course you can see it as a threat, but from my perspective it's also an opportunity to increase our capabilities, and I think we often don't push in that direction, because the media always frame it negatively. I saw a newspaper headline recently: we are killing hundreds of thousands of jobs because of AI. Ten years ago it was robotics, years before that it was IT, and before that it was whatever else, right? I think we need to help people understand the possibilities behind it.

Dr. Steven D. Carter:[10:50] Uli, I think you make a great point. But you know what?

Dr. Steven D. Carter:[10:54] Gentlemen, if we're being honest, and I say this to the audience as well, we've heard this conversation before. We heard it in '95 with the World Wide Web. We heard it with cryptocurrency. We've heard this argument. And what I would like to do is dispel some myths and rumors for you both and for your audience. First and foremost, AI will not take your job. Let me rephrase that: if you have a career, AI definitely will not take your career. But there are some jobs that we as humans probably shouldn't be doing, because we should be doing other things that are more meaningful. An AI is not going to defend you in court. An AI is not going to groom, train, and mentor the next batch of business leaders. But some jobs that AI can take, and that we probably don't need to be doing, include running help desks.

Dr. Steven D. Carter:[12:04] Real people don't need to do that. We need real people helping real people. The AI is just a tool. It is a support system, one piece within an ecosystem of support. So when we talk about AI as a threat, I agree with you, Uli: it is the exact opposite.

Dr. Steven D. Carter:[12:26] It is an opportunity, but only if we choose to take it.

Markus Kuckertz:[12:32] And if we look at the leadership part and the accountability part in business, how do you see AI shifting the nature of leadership or strategic decision-making in organizations? What's the impact?

Dr. Steven D. Carter:[12:44] Oh, this is a fantastic question. It's almost like I was born for this question. So this is what AI does: it completely changes the game. It provides leaders with real-time insights and predictive power, something we didn't have before. Where decisions used to be based on how I feel, on heuristics, on this is what we did in the past or this is what I saw someone else do, that's not data. And not only is it not data, it cannot show trend analysis, it cannot give us insights, and it cannot produce analyses at the speed at which we need decisions made. Now, here's what AI doesn't do: it does not replace intuition, but it does augment it. So the leaders of tomorrow aren't going to be just decision makers; let's face it, they're going to be sense makers. They're going to make sense of all this data. One of the data points I shared at a recent AI summit was this: last year, 120 zettabytes of data.

Dr. Steven D. Carter:[13:51] 120 zettabytes of data were processed, but less than 10% of that was analyzed. That's outrageous. We have to make sense of this data. Now I'll share something with you that was shared with me while I was at Harvard. One of my professors said to me: look, Harvard's not going to make you smarter, but you will learn to ask better questions. That's exactly what AI is doing. It's teaching us how to ask better questions, because it will help us interpret data wisely, and it will help us learn how to lead with empathy as the AI handles more of, if you will, the what, so we can focus more on the why.

Markus Kuckertz:[14:39] So, in your experience, how should leaders prepare for an AI-powered work culture without creating fear?

Dr. Steven D. Carter:[14:46] There is only one answer to this question; if anyone tells you anything different, run away from them. The correct answer is that we must elevate the level of AI literacy. We have to have training. We have to have education. We have to demystify AI. AI is not some mythical creature; it's not the Loch Ness Monster, it's not Bigfoot. AI is something we can use. It is a tool. When I was growing up, I would go into my grandfather's garage, and he had this big red toolbox with a gazillion tools in it. Me, I only used two, the screwdriver and the hammer; those were the two I was familiar with. But my grandfather actually used all the tools: the drill bits, the pliers, the wrench, the Phillips head. There was no limit, and that is exactly what AI is. We're at a pivotal moment right now where we're learning enough about AI to be personally and professionally dangerous, but we still do not fully understand all of its capabilities. We're just barely scratching the epidermis here.

Ulrich Irnich:[16:00] Barely. I think you made a very good point there, Steven, or more than one, but I want to reflect on one, especially for leaders. It means you need to understand technology; that's number one. And number two, you also need to be honest and vulnerable enough to say, I don't get it, because then people can help you. The days of knowing everything are gone, right? Because if we democratize AI to the wider world, it will be available everywhere. Now it's what you said: it's the job of leadership to connect the dots and make people feel safe in changing their way of working. That's the mission of leaders.

Dr. Steven D. Carter:[16:50] Yes, I tell you, that's a great point. And you brought up something else, Uli, as you always do: the democratization of this capability. Because AI is a capability. I say it's a tool, and yes, fundamentally it is a tool, but in its application it is a capability. It can fundamentally change how we operate. It can fundamentally change how we do business. It can fundamentally change our decision-making process. So think about it like this.

Dr. Steven D. Carter:[17:25] Here's a great anecdote. I remember when I came here to Germany back in the early to mid-90s, I got a bank statement and it just had one number on it: this is your balance. That was it; that was the whole statement. But now if you look at your bank statement, it breaks things out: not only is this your balance, these are all your accounts, this is what you spent during the month, and here's a pie chart showing where you spent your money. And underneath, it has what's called insights. It says: if you spend less money on groceries, utilities, and entertainment, you can have more money for hobbies. So it's giving you insight, saying, hey, if you do these things, you can achieve these other things. That's what AI does for us now in a second or a fraction of a second. It helps us get those key insights and data points that we would otherwise have overlooked.
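
For the technically curious: the "insights" feature Steven describes is, at its core, aggregation over categorized transactions plus a simple rule. Here is a minimal Python sketch of that idea; the categories, amounts, and the 10% rule are invented for illustration, not taken from any real banking product.

```python
from collections import defaultdict

# Invented sample transactions for one month: (category, amount in EUR).
transactions = [
    ("groceries", 82.40), ("utilities", 120.00), ("entertainment", 45.90),
    ("groceries", 63.15), ("entertainment", 29.99), ("utilities", 35.50),
]

# Aggregate spend per category, as the statement's pie chart would.
totals = defaultdict(float)
for category, amount in transactions:
    totals[category] += amount

# A rule-of-thumb "insight": flag the largest category as the savings lever.
biggest = max(totals, key=totals.get)
print(f"Monthly breakdown: {dict(totals)}")
print(f"Insight: trimming '{biggest}' by 10% frees "
      f"{totals[biggest] * 0.10:.2f} EUR for hobbies.")
```

A real system would put a trained transaction classifier in front of this; the point is that the insight step itself can be very simple.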

Markus Kuckertz:[18:31] Let's also talk a bit about the downsides and risks of AI. Uli, what do you see as the biggest current risks of widespread AI adoption?

Ulrich Irnich:[18:40] The first risk I see is a lot of C-level hallucination about AI. That's number one. And number two is building your AI on messy data, because if your data isn't good, your AI is crap too. Garbage in, garbage out.

Ulrich Irnich:[18:59] And you also need to be aware of the downsides of the technology, because technology isn't everything; it's one part of the equation. We need to be skeptical at some points, because there's also a dark side of AI, where people use it to get criminal things done. That's something we need to understand, how that works, and we need to be a bit, I would say,

Ulrich Irnich:[19:34] Critical in our mindset and thinking about it when we got a message which sounds too perfect, that it can be true. And that’s also something we need to realize. All about when we apply AI, it should have ethical rules, right, which is followed. And it’s not unsupervised so that the AI decides what’s possible and not. And that’s just a very cold mathematical thing. It just works with probabilities, right? And if we teach an AI wrongly, it doesn’t understand it’s right or wrong. It just sees that’s the highest probability. And I will continue on that. And we know all this Microsoft disaster they had with their Cortina, where it put on slang and talked about racism and all kinds of things, right? But it’s not because the technology is wrong. It just works with probabilities. And if you teach a system with the wrong, let’s say, probabilities and capabilities and you don’t supervise it, that can go wrong.

Markus Kuckertz:[20:42] Steven, what do you think, especially when it comes to ethical blind spots?

Dr. Steven D. Carter:[20:46] I think Uli hit on some great areas, and I just want to build on a few of them. One is the question of who is creating the algorithms. To me, that's another one of the risks, because, as Uli was saying, you can have bad data, or you can have narrowly focused data. One of the things I talked about at the AI summit was this: let's take OpenAI, for example. We know it comes out of the West, mostly America, and we know that most of the data engineers and creators, the ones who build the algorithms, are generally young to middle-aged white men. So that is the lean; that's the way the system is going to lean. So if you're an Indian woman or an Asian woman and you ask certain things, you might not get an answer that fits your particular cultural group. That's a risk. We talked about corrupt data, and data poisoning can happen too. So here's something very interesting that I shared with someone earlier today.

Dr. Steven D. Carter:[22:00] In about third or fifth grade, we learned this term in English class. It's called a syllogism. Everyone learns a new word today. A syllogism can create a false, or maybe not false, narrative. Here's an example. Roses are red. This flower is red. Therefore, this flower is a rose.

Dr. Steven D. Carter:[22:30] The premise started out absolutely true. But as soon as I injected something else that merely seemed true, it corrupted the output, because that flower could be a tulip, since tulips are red too. It could be a daisy. It could be anything. That's how data gets corrupted, just in that anecdotal sense. And that's why we have to be good stewards of the data. Those who put in the data must be very diverse teams, of all ages, all ethnicities, all backgrounds, all economic strata, because that's what makes the data more valid and less corruptible.
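
For readers who want the logic spelled out: the flaw in the flower example is the classical fallacy of affirming the consequent. In notation, with f standing for the flower in question:

```latex
% Valid (modus ponens): all roses are red, f is a rose, hence f is red.
\forall x\,(\mathrm{Rose}(x) \to \mathrm{Red}(x)),\ \mathrm{Rose}(f) \ \vdash\ \mathrm{Red}(f)

% Invalid (affirming the consequent), the version in the anecdote:
% all roses are red, f is red -- it does not follow that f is a rose.
\forall x\,(\mathrm{Rose}(x) \to \mathrm{Red}(x)),\ \mathrm{Red}(f) \ \nvdash\ \mathrm{Rose}(f)
```

A model trained on data that encodes the second pattern will happily label every red tulip a rose, which is exactly the data-corruption risk Steven points to.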

Markus Kuckertz:[23:17] And let's talk a bit about quantum computing as well, Steven. How do you envision the convergence of AI and quantum computing, and what excites you most about it?

Dr. Steven D. Carter:[23:27] Okay, I'm going to tell you why I'm really excited about quantum computing: because it's coming. It is coming; we are not getting away from it. Looking at the sheer volume of investment from other countries, who shall remain nameless for this conversation, this is coming sooner than we think. So we can't wait to talk about it until it gets here. That is basically the equivalent of shopping for good car insurance after you've had the accident. We have to start talking about this now, because the convergence has the potential to supercharge AI's capabilities. Please don't think anything you're seeing right now is fast or the latest and greatest, because it is not. Quantum computing will supercharge AI's capabilities. It will solve problems we can't even approach today. And I'm going to tell you another thing that really excites me about this: it's a leap from computation to cognition.

Dr. Steven D. Carter:[24:39] So think about it. We can optimize drug discovery. We can model climate interventions in ways we have not been able to heretofore, or simulate scenarios in ways classical computers just can't match. Think about what computers are based on now: binary code, zeros and ones. Quantum computing has qubits, and a qubit is not restricted to zero or one; it can occupy superpositions of both at once, everything in between. If you can do that, you can get between the binary code, which means you can be more disruptive to traditional computing than any hacker, through sheer quantum computing capability.
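
To pin down the "in between" idea with the standard formalism (a textbook addition, not Steven's wording): a qubit is not a value between zero and one but a superposition of both basis states, and a register of n qubits carries 2^n amplitudes at once, which is where the parallelism comes from.

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1
% An n-qubit register is described by 2^n complex amplitudes:
|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle,
  \qquad \sum_{x} |c_x|^2 = 1
```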

Markus Kuckertz:[25:33] So Uli, when you listen to Steven and what he just described, that's massive. Are we ready for that?

Ulrich Irnich:[25:40] Well, you know, in my former career I worked in a nuclear research center and had the privilege to work with some of the brightest minds on these kinds of topics, which were more theoretical back then, because that was decades ago, right? To be very honest, sometimes I had to step out of those conversations. But I understand the principle of quantum computing: you have more states than just zero and one, and you have the duality of wave and particle, which gives us the ability to work with massive amounts of data and compute them at the same time. This kind of superpower also carries a threat, though: you could crack all the encryption we know today in minutes. Where you would normally need years, you could now break it in minutes.
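
A back-of-envelope sketch of the threat Uli describes, under textbook assumptions: classical factoring of an RSA modulus via the General Number Field Sieve is sub-exponential in the key length, while Shor's algorithm is roughly cubic. The Python below uses only the standard asymptotic formulas and ignores real-world constants and quantum error-correction overhead, so it illustrates the gap rather than forecasting a timeline.

```python
import math

def gnfs_ops(bits: int) -> float:
    """Rough operation count for classical factoring via the General Number
    Field Sieve: exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))."""
    ln_n = bits * math.log(2)  # natural log of an n-bit modulus
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def shor_gates(bits: int) -> float:
    """Rough gate count for Shor's algorithm, O((log n)^3)."""
    return float(bits) ** 3

for bits in (1024, 2048):
    print(f"RSA-{bits}: classical ~10^{math.log10(gnfs_ops(bits)):.0f} ops, "
          f"quantum ~10^{math.log10(shor_gates(bits)):.0f} gates")
```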

Ulrich Irnich:[26:46] And now imagine this capability in the wrong hands. If everybody has it, then it's equalized, because you can encrypt at the same speed, and with better key lengths, than everybody else. But if only one party has this capability, it gives them an unfair advantage. That's something we need to remember. It also gives us the opportunity to crack problems we couldn't crack so far because of the sheer complexity of the data, which is a chance to get answers to some of the world's open questions where we are still struggling. Take weather forecasting: today we can only look one or two days ahead with real reliability, and beyond that the system is so complex that we cannot run a dependable forecast.

Ulrich Irnich:[27:46] But with a quantum computing system, you can do more, right? So you can see longer and you can have more variables.

Ulrich Irnich:[27:54] What most people don't know: you can take massive amounts of data and compute them in parallel. But there's a but, right?

Ulrich Irnich:[28:04] Some people my age will remember the Pentium problem, right? Where you got errors after a certain number of digits behind the decimal point. Quantum computing can handle massive amounts of data, but it is very imprecise. Compare it to, say, a Pentium computer, which calculates a hundred digits after the decimal point: a quantum computer can handle a lot of digits before the decimal point, but it gets weak behind it. That's something you should realize. It gives you an idea of how technology evolves and can help us, but it also has downsides we should know about. That's the one caveat I'd add when I talk about quantum computing. And it's exciting, because if you look at the next generation of networks, the 6G era, where you bring more intelligence into the network, plus these self-healing mechanisms, with petabytes of data per second flowing into the network, you need compute power that can handle that and point us in the right direction. Let's assume all the people of the world drive autonomous cars.

Ulrich Irnich:[29:28] You can imagine how much data would be going into the network. And if you want to serve that, you need very, very strong compute power. That may only be possible with a machine like a quantum computer.
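
The "Pentium problem" Uli mentions is the 1994 FDIV bug, and its widely circulated test case shows how a tiny error deep in the fraction digits surfaces. The values below are the historical ones reported at the time; on today's hardware the check simply prints 0.0.

```python
# The canonical Pentium FDIV test case (Nicely, 1994).
x, y = 4195835.0, 3145727.0

# Algebraically x - (x / y) * y is zero, and correct FPUs print 0.0 here.
# Affected Pentiums computed the quotient as roughly 1.33373906...
# instead of 1.33382044..., so this check famously returned 256.
print(x - (x / y) * y)
```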

Markus Kuckertz:[29:43] So, Steven, what do you think?

Markus Kuckertz:[29:44] How can businesses and organizations prepare for that world today?

Dr. Steven D. Carter:[29:48] I tell you, that is a difficult question. We all know that it's all about culture. The first step in any type of organizational change, the first thing that has to change 100% of the time, is always going to be the culture. Otherwise, people fall back on the familiar saying: well, this is how we've always done it, why should we change? It's always worked when we do it this way. We have to encourage people to be more curious. Another thing I shared at the AI summit: I asked everybody, how many of you watch one of my favorite shows, Ted Lasso?

Dr. Steven D. Carter:[30:38] Because Ted Lasso teaches a lot of lessons in that show. One of them is to always be curious, not judgmental. To build the type of organization or company that is going to survive and be sustainable for the foreseeable future, it has to be a learning culture, a culture of curiosity, where we invest in foundational literacy about AI and emerging tech. It doesn't have to be advanced; it's at the foundational level. On top of that, we go back to diversity and to building cross-disciplinary teams. Your HR is going to have to work across finance, finance across IT, and IT across logistics. We have to have these cross-functional, interdisciplinary teams. But it's not about perfectly predicting the future; the requirement isn't that we be perfect. It's about being more agile and more adaptive. And to go back to something Uli said, we have to be ethically grounded, grounded enough to adopt this capability and do it responsibly. And it's easy to do it responsibly; all you have to do is ask a few key questions.

Dr. Steven D. Carter:[32:07] When we do this or if we do this, who does it impact?

Dr. Steven D. Carter:[32:13] And then after we ask, who does it impact, how does it impact them? And that’s when we’ll find out if it’s negative or positive. And of course, if it’s negative, then we probably shouldn’t do it. If it’s positive, then we should probably do more of it.

Markus Kuckertz:[32:27] So if you look at that future, what might be the first use cases in business that we will be able to observe? When do we see this world become real?

Dr. Steven D. Carter:[32:38] That is a good question. Honestly, I think we're there. I think we're there already, because the appetite has grown. Remember, about 10 years ago, trust me, AI was there. People didn't realize it, and there was no appetite for it personally or anywhere outside the commercial sector. But now that there is an appetite, it is shaping the future. There was a quote I used during the AI summit, and I see it's being circulated quite a bit on LinkedIn: the best way to predict the future is to invent it.

Dr. Steven D. Carter:[33:19] That's basically what we're doing right now. We are shaping the future. In some areas we have absolutely no control, but in other areas, as Uli alluded to earlier, we have absolute and total control. We have control over the ethical nature of AI. We have control over the type of data that goes into AI. We even have control, to an extent, over how it's used. I'll give you a perfect example, and this is one thing you're going to see a lot more of in the future: people are going to create AI tools for very specific purposes. You've already started to see some of that. For example, a company that runs a mechanic shop might develop not an LLM but an SLM, one that only talks about how to repair and troubleshoot the types of cars they service, strictly exclusive to them. So you're not going to be able to use it to find a great brownie recipe, but you will be able to understand how to troubleshoot a car that has smoke coming out of the rear tailpipe.

Dr. Steven D. Carter:[34:48] You can see it being used in the medical community as well. There I would actually recommend an LLM or an SLM very specific to the medical community, versus a very generic platform such as ChatGPT or DeepSeek or Claude or Gemini, which pulls data from everywhere. That goes back to what Uli was saying earlier: it can create hallucinations,

or a bad or inaccurate response that could ultimately hurt someone.

Markus Kuckertz:[35:22] Thanks a lot for this interesting conversation. And as always at this stage of our episode, we'd like to talk about the key takeaways. Usually, Uli goes first.

Ulrich Irnich:[35:34] Perfect. I noted down four key takeaways, and I'm pretty sure there are more than 400, but I noted down four. So thank you very much, Steven. The first one: AI will democratize new capabilities, and it's on us to use them, and to use them for the right purpose. Number two is stewardship of the data: how we make sure the data we are feeding the AI is diverse, not manipulated, and used for the greater good. Number three is Ted Lasso: stay curious, be open and experimental, not fixed-minded, which would only keep us in the old world. And the last one, Steven: seeing is believing. AI is here, it's all around us, and the best way to predict the future is to create it.

Markus Kuckertz:[36:42] Thanks, Uli. So, Steven, what are your thoughts?

Dr. Steven D. Carter:[36:45] Now, some key takeaways I took away. I learned that Uli worked in the nuclear field; that was a big takeaway for me. So now I know Uli is uber smart, as if I didn't already know that. But one of the key things I took away, hearing what Uli was saying about our march toward the future, is that we really have to start talking about quantum computing and how it's going to be integrated into our daily lives. It's like the microwave: in the 1950s, it was just a defense research project.

Dr. Steven D. Carter:[37:25] But in the 1970s, it came into our homes. We have to start talking about quantum now. The second key takeaway was what Uli said about democratization, the tool and the capability. I'm going to take it one step further: it's going to be the democratization of all these emerging technologies, as they go from pilot to project, or from government use only to personal and private use. We have to be agile and adaptive. We are no longer in a space where we can afford to say: I know everything, or I know enough, or what we've done heretofore was good enough. The third takeaway is something very important for business.

Dr. Steven D. Carter:[38:19] Businesses are going to have to embrace this. Your sustainability, survivability, and ability to be resilient moving forward depend on how well you adopt AI and how well you bring your workforce along. And the fourth and last point I want to highlight is something stated from the very beginning: the risk. Yes, yes, yes, there are significant risks. But what I will tell you is that the risks are comparable to a car.

Dr. Steven D. Carter:[38:59] Can a car be used to drive you to work, to take the kids to soccer practice? Absolutely. Can it be used in the commission of a felony or a crime? Also absolutely. But just because you can use it as a getaway car doesn't mean we're going to stop driving cars.

Dr. Steven D. Carter:[39:17] And so we have to get out of that mindset. Let me give you another example, because I want to take this opportunity. I was in a faculty meeting yesterday, because, of course, you know, I teach at two different schools. So I was in the business faculty meeting, which we hold once a quarter. And what I noticed some of the other teachers talking about with AI was how the students were going to cheat. I heard that same conversation in '95, when the World Wide Web came out, and everyone said: oh, the students are going to use the internet to cheat. Well, guess what? They might use the internet to get smarter. I would say apply that same logic to AI. The kids just might use it and learn to ask better questions. Or maybe we as educators need to design better assignments. How about that? So everybody's got a leg in this pair of blue jeans, and we all just have to put a leg in and walk together. This is not an I-got-there-first situation. It's not a race. We can all get there together. And those are the four key takeaways that I came away with.

Ulrich Irnich:[40:30] You know, Steven, that reminds me of something from the past: when they introduced the railways in Germany, there was a widespread belief that if a train went faster than 80 kilometers per hour, people would go insane and end up in hospitals.

Dr. Steven D. Carter:[40:56] Which we found out later was not true.

Ulrich Irnich:[40:58] I would say some people were insane already, whether they were on a train or not.

Dr. Steven D. Carter:[41:03] Yes, yes, exactly. So once again, it goes back to the syllogism. It starts out with a true premise. But depending upon what you inject later, it can completely change the output and make it completely untrue.

Markus Kuckertz:[41:18] So syllogism is definitely a new word I learned today; thanks a lot for that, Steven. For the rest of my life, when I hear that word, I will think of our episode here. So thanks a lot, Steven, for your time and for the insights. It was really a pleasure to have you in one of our episodes.

Dr. Steven D. Carter:[41:36] Thank you. The pleasure was all mine. I enjoyed being here. Thank you very much.

Markus Kuckertz:[41:40] That was the Digital Pacemaker Podcast with Dr. Steven D. Carter.

Markus Kuckertz:[41:44] Follow us now on your favorite podcast platform and never miss an episode. Thanks for listening and see you soon. Yours, Uli and Markus.

Music:[41:52] Music