Episode 4 - Accelerating Human Potential
00:00 Integrating Humans and AI in the Workplace: Getting the Fundamentals Right
06:25 AI as an Accelerator of Human Potential: The Role of Personal Assistants
13:08 Creating New Jobs: The Impact of AI on Job Roles
15:30 Challenges in Integrating Intelligent Technologies: Organisational Culture and Values
Welcome to the Humans and AI in the Workplace podcast. Over the last few years, it's become clear that artificial intelligence, AI, is one of the most impactful and disruptive transformations in the workplace. As a leader, you may be wondering how to get started and how to do it in an intelligent way. Or you may be stuck on how to overcome some of the people issues and human bottlenecks your AI has crashed into. We are here today with Dr. Deborah Panipucci.
and Leisa Hart from AI Adaptive, and our special guest, Patrick Beraud from Tonomus Venture Studios in Dubai to discuss today's topic of getting the fundamentals right for integrating humans and AI in your business.
Thank you for joining us. I'm Leisa Hart. And I'm Dr. Deborah Panipucci. This is part A of our guest episode with Patrick Beraud, venture builder, technologist and business strategist. I'm really excited to talk with Patrick. Patrick has worked all around the world at the cutting edge of technology since 2002 and is currently in Dubai at the Tonomus Venture Studios. Yeah, it's going to be a really great chat with Patrick.
Patrick is going to talk to us about the use of AI to accelerate human potential and the importance of getting the workplace fundamentals right. Hi Patrick. It's an exciting time in the world of humans and AI, so we were excited to talk to you and get your perspective on what you see as some of the opportunities, some of the wins, some of the watch-outs. We call it our WOW; we'll come to that later. But what are your first thoughts when I say to you: humans and AI in the workplace?
I'm a firm believer in something I heard somewhere, I don't know who said it: we don't take a technology and go and find what problem we can solve with that technology. We start from the problem: basically, what problem do you have as a human being? And then we go and find a solution, and the technology could be part of that solution; it's not always the solution. And I think that's a very good segue to what you are talking about. You're talking about AI and humans, right? So the term AI, artificial intelligence, has been around for a very, very long time, but it's only over the past, I don't know, five years or so, if my research is correct, that the acceleration has just gone through the roof. And especially in the past year, year and a half, you couldn't talk about technology without talking about AI. Yes, we've seen that too; it's really accelerated in the last year and a half.
Well, just over a year, really. In organisations in particular, but also with just people playing with it as part of their daily lives. Exactly. Look a little bit at history: with any new technology, the first thing is that people are afraid. When the internet arrived, people were talking about how libraries would disappear, all these kinds of things. So there's always an educational path when there's a new technology. And particularly when it comes to AI, from my perspective, with that education we need to start by letting people know that technology is just an extension of who we are. It's not separate from us, because the tech came from where? It came from you, the human, who has this insight or this vision, who needs to get something done, and then you create it. So there's no such separation. Once you start educating people that it is an extension of you, that fear that maybe AI is going to take my work or AI is going to do bad things, all these kinds of things, can be managed rationally, as opposed to having what I call irrational fear when it comes to AI.
Yeah, fear is a natural response to change, from a physiological perspective in our brains, and we see that happen all the time in organisations, and in personal life too. So when it comes to technology and the fear of AI taking jobs, I think what's different is that the technology, and the pace of it, in most people's minds is moving faster than our ability to adapt to it within organisations: to do the education and look at, okay, what skills have we got? What skills can I move to personally? Those conversations aren't happening in a way that makes people feel like there's opportunity. All they're hearing is the media reports that AI is coming to take your jobs, and one or two use cases where, you know, in call centres, a large volume of calls has been taken over by conversational AI. So those examples are getting highlighted, which drives more of the conversation around, will I lose my job? Because there's a void where the proactive conversation should be, saying, actually, you've got these skills.
Yes, maybe this particular AI can do these tasks, but it won't necessarily replace your whole job, or maybe you need to work in collaboration with the AI versus full automation. And so without the conversation, the fear can just grow. So that means: how do they engage their people early? How do they build their knowledge? So, to your point about education, how do they build their AI fluency and their knowledge so that they can speak to their team in meaningful, thoughtful ways and talk about the opportunities, those transitions of capabilities, and where AI is coming in and where it's not yet? I'm actually very much in line with what you just said. You asked how leaders would speak to their team in a very meaningful, thoughtful way. For anyone listening to us, I'll start by saying that actually they should be very excited about AI, extremely excited. And just to put it in very layman's terms, why you should be excited:
It's like with AI, everybody will have a personal assistant. Yes. Yeah, we say that quite often. Yeah. Exactly. So if we start from what I'll call, I don't know what I should call it, the philosophical potential, by saying AI is an accelerator of human potential, and then we break it down to the basic level: whether you are at work or at home or with family, you have thousands of tasks to do to be able to deliver value. What is value? It's a number of tasks that you group together and execute together to deliver value. Now imagine, it doesn't matter which role you are in, whether you are a secretary or a leader making decisions, imagine that you have a particular value to deliver. And to deliver that value, you have what we call personal agents.
Each agent is specialised in a very small task towards that value. And you send all those agents automatically to go and do all these tasks and bring them together, which becomes almost 70% of the value. And you, as the human genius, add the remaining 30% on top of it to deliver it. That's how you should look at it from an AI perspective. I love that, human genius. Yeah. I love that. Yeah. And right now, I mean,
Currently, in my role, we basically build ventures, right? So we could not stay away from the ventures that are AI oriented. So we are speaking with many, many of those tech ventures out there. And you can see that all of them are coming out with solutions that have what I just mentioned, called agents, right? There are some agents specialised in just booking an airline ticket, the best ticket for you. There are some agents specialised in managing my calendar for me. There are some agents specialised in giving me the molecular value of this particular thing, something like that. It's what I'll call a tight specialisation. So it's not broad; it takes a particular task. So is that narrow AI? Each agent is specialised in something very specific. Then there's an agent that manages all those agents.
Get out. And then you manage the agent that is managing them. So I've tested products, and it totally blew me away, where I had a master agent in strategy consulting that actually helped me make a decision in terms of where we should be investing over the next three to five years. But there were specific tasks that I needed to give out. So I gave them to about ten different agents. Each of them went and did its task. And then there's another agent that's able to combine all of this to give me a solution, which, based on certain insights that I have from human interaction, which the AI does not have, any value system, I'm able to refine, to finalise, to deliver out there. It's just amazing what is happening with those AI agents, and it seems to be the future. Everybody will have a personal assistant one way or another. So that master agent, was it like an aggregator of all the outputs of the individual narrow-focused agents, then surfacing a recommendation to you? So that master agent, I'll call it maybe a little bit closer to you with regard to understanding the final value, how it should look. Each of the other agents is just focused on a particular domain, a particular task. So that master agent takes all this activity, all these completed tasks together, and surfaces it to you at another level. And you, the genius human, as you just mentioned, add that final layer on top. So that is the end of it, where you take something that is already done for you, like 60, 70%, and you add to it. So it's an accelerator.
It really accelerates what you do.
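To make that pattern concrete, here's a rough sketch in Python of the master-agent idea Patrick describes: narrow agents each handle one tightly scoped task, and a master agent fans the work out and aggregates the results for the human to review. Everything here, the class and agent names and the sample specialists, is our own illustrative assumption, not his actual product or any real framework.

```python
# Minimal sketch of the "master agent" pattern: each narrow agent does
# one tightly specialised task; the master agent dispatches the goal to
# all of them and aggregates their outputs into a draft, leaving the
# final judgement to the human. All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class NarrowAgent:
    """One agent, one tightly scoped task."""
    name: str
    run: Callable[[str], str]

class MasterAgent:
    """Dispatches a goal to every specialist and aggregates their output."""
    def __init__(self, agents: Dict[str, NarrowAgent]):
        self.agents = agents

    def deliver(self, goal: str) -> dict:
        # Fan out: each specialist completes its own slice of the work.
        results = {name: agent.run(goal) for name, agent in self.agents.items()}
        # The aggregated draft is the "60-70%"; the human adds the rest.
        return {"goal": goal, "draft": results, "status": "awaiting human review"}

# Hypothetical specialists standing in for real ones.
specialists = {
    "flights": NarrowAgent("flights", lambda g: f"best fare found for {g}"),
    "calendar": NarrowAgent("calendar", lambda g: f"meetings arranged around {g}"),
}

result = MasterAgent(specialists).deliver("offsite in March")
```

The key design point is the separation of concerns: each specialist stays narrow and replaceable, and only the master agent knows how the pieces fit together before handing the draft to a person.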
You know, things that you could spend a whole year doing, you could literally do in, I don't know, less than a day if you have those agents. That's the future. Yeah. I often talk about the use of ChatGPT in someone's role as a really good head start. Depending on how complex the task is that you're asking it to perform, a large language model like that could give you maybe 75 to 80% of something.
And then your job is to make sure it's accurate and it's not hallucinating, and also that it's contextualised to the way that you write or the way that your organisation uses its tone of voice. That's a pretty generous percentage, again depending on the complexity of the task you're asking it to do. But if I could get a leg up or a head start on every task, because I've either got a digital agent or I've got a way of accessing a large language model that could pre-empt or do some of that heavy lifting, that sounds amazing to me. It definitely frees up people's time, because you're not spending all of that time sitting there going,
How do I even start? Let me just write something down that's not very good, then refine it. You're actually getting the AI to create that for you, and then you look at it and go, okay, now I'm straight into refinement. That took two seconds, let's do it. Then you're on your way. So it can still supercharge your work even by giving you a 10, 15 or 20% leg up, versus a bigger chunk. Yeah. And you know, when people talk about, because of AI I need to be retraining, I'd say this:
If you look at history, we had many of those kinds of conversations when call centres started moving to what I'll call lower-cost locations; at the time it was India or somewhere like that. In Australia, people were saying, I'm going to lose my job. The question is, where are all these people now? They moved to other, higher-value jobs, right? So that's how they have to think about it. I would like to come back very quickly to something you said about ChatGPT that's very important. It has a data set, right? And you give it a task and you get something generic, usually. It still hasn't reached that level of fineness where you can just take it and use it. You still have to modify it, but it does give you the leg up that you are talking about. That leg up. That leg up. It's a very Australian term. Yeah, I know. And you know, with ChatGPT there's actually a new job role now called prompt engineer. We were just talking about that, and about the imbalance in the conversation: lots of talk about your job being stolen, but not enough about all the new jobs, some of which we don't even know of yet, that will be created. Gartner says that by 2033 there'll be half a billion net new human jobs created, and the example we gave was that there are things we'd never heard of until recently. Prompt engineer is a whole new job. Data ethicist is a whole new job that came out a couple of years ago. So there are so many yet to be created that humans have really good skills that can be pivoted or upskilled for. And if we build on that, I actually foresee very soon that, okay, we have all those prompt engineers who know how to give instructions to the AI, this is what we want you to do, so you don't need to worry about that. You worry about the other things that you are good at doing.
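For listeners wondering what a prompt engineer actually produces, one simple, concrete artefact is a reusable prompt template that encodes the instructions once, so end users don't have to craft them. The template and names below are entirely invented for this example:

```python
# A prompt engineer's basic artefact: a structured, reusable template
# that tells the model exactly what to do and what to avoid, so
# non-specialists get consistent instructions. Hypothetical example.

PROMPT_TEMPLATE = (
    "You are an assistant for the {team} team.\n"
    "Task: {task}\n"
    "Constraints: write in a {tone} tone and flag anything you are "
    "unsure of rather than guessing."
)

def build_prompt(team: str, task: str, tone: str = "neutral") -> str:
    """Fill the template with the user's specifics."""
    return PROMPT_TEMPLATE.format(team=team, task=task, tone=tone)

prompt = build_prompt("finance", "summarise the Q3 variance report", tone="formal")
```

In practice the engineering work is in iterating on the wording and constraints until the model's output is reliable, not in the templating itself.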
I think we will very quickly reach a point where some people will be specialists at just building those agents for you. So Leisa, you could come to me and say, Patrick, this is what I do: A, B, C, and D. And we have this agent that does A. And we believe we can have a better agent than that to do X, Y, and Z. And I'll be great at just creating that agent for you, but then you are able to use the agent, because you know your domain of work, you are the expert in your domain. So when the agent delivers those tasks, you can add value on top of it. So those agents are not just going to come out of the blue. People will become specialists at building the agents, and various people with various ways of thinking will contribute to building those agents. Then those agents will deliver value.
And in fact, you could charge people for using your agent. Wow. So I could build my own agent and then, like, white-label it. And you know, your agent gets better and better the more it's used, right? So at some point it's smarter. You could advertise it; people talk about APIs in our world where you can actually sell your agent. I don't know, maybe on eBay. Who knows? Everything is possible.
It's exciting. It is, isn't it? Yeah. So, you know, you've been involved in lots of transformations in organisations. We talk about some of the challenges in the context of humans and AI as those bottlenecks that AI crashes into. What kind of bottlenecks do you see, Patrick, or what kind of struggles or challenges do you see when organisations are wanting to introduce intelligent technologies, from the human side? So these are from the human side, not from the technical side, right? Technology is so easy compared to humans. Don't even get me started.
First of all, based on experience, what I've seen is that before you are able to define the human side, you have to get what I'll call your culture right, you know.
I don't want to digress, but we've seen AI out there that became racist, and all these kinds of things. And at the end of the day, we are building those technologies, so they are a kind of reflection of us. So if those problems happen, you have to ask the question: why did those problems happen? Those machines didn't just become that; it came from somewhere. So as an organisation, if you get your culture right, who you are, why you exist, how you treat your people and all these kinds of things, automatically your culture will reflect through any new technology like AI. So are you saying the way that...
The AI would be influenced by the DNA of the culture of the organisation. Yeah, the AI would enhance the problems in the organisation. So if the organisation... Accelerate the problems. Yeah, so if the organisation already has silos, I expect that when they bring in AI, the AI will exist within silos and will perpetuate those silos and make them much worse, from a behaviour and culture perspective. Let me take, I don't know what I should call it, a real, well-known example right now in my world. So we are building what we call a cognitive city. It's like a city of the future. It's in Saudi Arabia, in NEOM, and we are doing a lot of things over there. Now, AI is the foundation of everything that we do; right from the beginning we knew that we could not build something like that without embedding that future AI technology. And you can see some of the problems that we are facing now. We are racing to get all these ethicists on board and to see how we embed those people into the various projects that we do. And then there's the question of what our AI strategy actually is when it comes to ethics and all these kinds of things. Ethics and responsible AI and governance are so important. So if I try to roll back: it's not that we didn't have our culture correct, it's not that we didn't have our value system correct. It's that those value systems, that vision you define, you define and then just leave there while you go off and do certain things. Then a new technology comes, and suddenly you are trying to match it to them, suddenly you are trying to see, are we using it responsibly, and all these kinds of things. But if you don't forget about that culture beforehand, if you don't forget about that vision beforehand, if you continually embed them into who you are, why you exist and all these kinds of things, any new technology is just going to fall into that.
You are not going to find that gap that you are now trying to fill with, you know, how do we get ethicists into this? How do we use people data responsibly? Because you are already responsible. So to take it back to the example you had before about built-in biases: if you've got some level of bias already in your processes, that's just going to be replicated and accelerated by the technology. So if you haven't addressed that already in your business, if people aren't great at, you know, sharing knowledge across different teams, and they have that protectionism and they work in their silos, well, this is just going to come in and accelerate that, as Dr. Deb said before. So the technology is not doing anything different from what you were doing; it's just doing it better. If that data was not handled responsibly before, it's just accelerated by several factors. So...
And you're like, oops, how will I address that? So I mean, from a leader's perspective, I would just say that we should always roll back to the fundamentals. And the fundamentals are what? Who are you as an organisation, right? What defines you? Why do you exist in the first place? Before you can start talking about technology as a leader, that's what it should always be. And if that fundamental is very well embedded and very well shared by everyone in the organisation,
Tackling the fear of AI is, I would say, the easiest part. Because why would you be fearful that the AI is going to do something so wrong or so damaging if your fundamental is already solid?
When your fundamental is solid, you can catch those things beforehand; you know how to work through them. It's just a complement. Any technology is an extension of who we are. It's not different, it's not separate from us. So it's going to be an extension of your value system, your behaviours, your vision as an organisation, your reason for being as an organisation. So if your organisation has a lack of trust, then you can expect your employees to also not trust the AI and be more fearful of the AI doing bad things, which then creates a need for more data ethicists, processes, policies, rules and things like that. What I'd also like to check, Patrick, is my interpretation of what you've said: if within your organisation you haven't yet built the habits within your people of being able to have open, honest and frank conversations, and, you know, some
People call that difficult conversations, or crucial conversations, when things aren't quite going right. If you don't have the rhythm and the habit for that before you add more complexity through intelligent technology such as AI, well, then that's just going to make it worse, isn't it? Because if someone's bringing something in and other people don't feel like they're involved, and they don't know how to navigate that conversation, or if they see that there are issues with the data, or issues with the responsible aspects of it based on their organisation's values, if they're not willing to have that conversation, or they don't have the skills, they don't practise that, well, then they're not going to start doing it all of a sudden. So take the technology that we are calling AI here. It needs something to learn from in order to operate, to become smart. And what is that? It's data.
Where does the data come from? The data comes from the interactions we have in our organisation, in decision-making, in doing business, right? So if you look at the whole value chain, there's an output here that is used as an input for this technology, which becomes an output out there. So if you put garbage in here, garbage will also come out. So the value system of the organisation, the way they treat their people, the way they do business with their customers, all these things are encoded into a value system, and people embody that value system in their day-to-day interactions. The data that comes from it, how they use it, how they make decisions, the AI is just going to take all this and mimic it at bigger and bigger scales. So leave the technology aside and come back to your fundamentals: who are you? Why do you exist? Why do you do things the way you do, and what are the problems that you can solve? Solve that first, and the AI should be able to just fit into it. It's not separate from you. That's how I see it.
I was just reflecting on a conversation we had earlier today about how difficult it is for leaders out there to make really good decisions for the business when those difficult conversations aren't happening. You know, Leisa and I have worked with some organisations, we won't name any names, but we have seen cultures of rosy reporting, where people are fearful of raising all of their issues because it will make them look bad. So instead they take them out of all the reports, and the reports come through saying, yeah, there are some challenges but we're working through it, instead of saying, red flags, red flags, we need to do something about this. And if leaders aren't getting that information, then, you know, it comes down to the whistleblowing process, and by then it's too late. So yeah, if you've got problems with AI in an organisation like this, you're not going to find out about them until it's too late. It's going to magnify them. The AI is just another tool that accelerates the way you do things, the way you behave, the way you serve your customers, the way you take care of your employees, the way you do business, all these kinds of things. So yeah, go back to the fundamentals, that's my view on it. And of course, the fundamentals are always the most difficult. Yes, always, always.
For our leaders who are listening today, we hope that this episode has given you some actionable insights for looking at the fundamentals in your workplace, and for identifying what needs to shift for humans and AI to be integrated successfully. If you enjoyed this episode with Patrick, there's part B as well, where Patrick shares how AI adoption is being fast-tracked in the Middle East, where he lives and works. Thanks for listening. Humans and AI in the Workplace is brought to you by AI Adaptive. Thank you so much for listening today. You can help us continue to supercharge workplaces with AI by subscribing, sharing this podcast, and joining us on LinkedIn.