Yup, it does....
Every time I open LinkedIn or read articles about AI+people, the narrative feels the same – AI is the ultimate efficiency machine that will free us all from the most mundane, manual tasks. Not untrue, but there’s so much nuance that some people seem to be missing. Here’s the uncomfortable truth: without intentional human partnership, that AI tool you just implemented is prone to errors and biases… it will basically fail. The real controversy isn’t whether or not AI will take our jobs (I have a lot of opinions about that as well… but that’s for another blog post!) – the real, real controversy is whether we’re ready to step up and take on the irreplaceable role of guiding it!
We can’t deny that nearly every company has, on a smaller or larger scale, tried or adopted some sort of AI or automation tool. But, as the latest MIT report showed, this hasn’t really worked out or translated into meaningful transformation – 95% of GenAI pilots at companies fail to produce any ROI! As a digital transformation expert and cyberpsychologist, I see it pretty clearly: it’s a collaboration, NOT a delegation. Left on their own, these tools lead to problems:
- Biases, Biases, and More Biases: How does AI learn? Yup, you got it – from the human data we feed it. Human data is messy and biased. Of course, there is “good data,” but who’s checking? Imagine an HR AI tool tasked with screening candidates – inevitably, these biases will come into play. Picture an extremely successful engineer who takes a 3-year career break for parenthood being automatically screened out by an AI tool trained on data that doesn’t account for career breaks. That’s a major talent-pipeline failure and compliance risk that reinforces the very inequities you’re trying to solve.
- Ethical Blind Spots: AI = tool. It can optimize for a goal without understanding any of the ethical implications. Sure, it can identify (if prompted correctly) the employees most “at risk” of turnover, but it cannot figure out how to support them ethically, which can damage psychological safety. At the end of the day, it lacks the all-important human moral compass.
- Contextual Failures: Sure, AI can analyze a report and summarize every single important point, but can it “read a room” and understand the many, many nuances of tense conversations or subtext? This inability is why some AI-driven decisions can feel robotic or jarring, destroying the human trust needed for any real progress.
The Indispensable Human Touch in the New AI World
You know those reports that come out and tell you exactly what skills will be the most important in the future? I think “human touch” should be there! It is the most critical component for successful AI integration in any project. It provides direction, meaning, and ethical knowledge. Human-AI collaboration is a dance (I had to add a dance reference, given my love for Zumba!!), and we know that every dance needs a strong lead:
- You are the Strategic Visionary: As the human, you define the goals, interpret the results, and guide AI toward what actually matters. This ensures all that AI power is focused on the right problems to solve.
- You are the Ethical Guardian: Only you, as the human, can guide and train AI with moral principles and ensure it operates within what’s deemed “accepted” – societal and organizational values.
- You are the Creative Catalyst: How does innovation come along? Well, more often than not, it comes from identifying new problems and thinking of new solutions to those same problems. You bring the “what if” and AI helps you explore (my favorite ideation buddy!).
This is not me saying “NO” to AI or resisting it; quite the opposite – you should 10000% embrace it, because it is here to stay. This is me asking leaders to invest as much in human intelligence as they are now investing in AI. Failing to prepare people for their new roles is one of the biggest obstacles to effective AI adoption and human-AI collaboration. Leaders need to promote and model:
- Critical Oversight: Teach people to push back on AI and its outputs. Only this way will they be able to understand and identify its flaws.
- Ethical Reasoning: Develop the frameworks that guide AI’s adoption and deployment responsibly.
- Psychological Safety: A must!! Create an environment where people feel safe to collaborate, to challenge, to experiment, and even to override AI when they know it’s not right, without any fear!
The goal? Make sure AI serves and furthers human potential, and not the other way around!