From a coder's perspective: psychological safety, ego and the difficulty of openness in the age of artificial intelligence

06 | 2025 Arto Kellokoski, Agile Coach & partner Kipinä

As an Agile Coach in digital development teams in different companies, I have learned that in a safe atmosphere, people dare to be genuinely unfinished. But what about when you have public power and responsibility, say as CEO? Would you dare to ask an AI something you would otherwise not dare to ask anyone? What about in the presence of others?

We all have an ego to protect, but that same ego is often the biggest obstacle to individual and collective development. It also holds back the true potential of AI, because harnessing AI requires honesty with yourself: What don't I know? What am I stuck on? The quality of your questions determines what you can understand - and often the best questions are the ones you wouldn't dare say out loud.

In these situations, I often find myself returning to the words of actor Eero Milanoff, which I remembered from an interview in Helsingin Sanomat (2.5.2021):

"If you can't open yourself mentally to your work and to the situations that come up, you have to protect yourself. When you have to protect yourself, choices are made through fear and not in the moment. It closes off opportunities to achieve something interesting."

I think the same wisdom applies to AI: if you are afraid to show you don't know, you will never get to what you could know. So instead of protecting themselves, leaders need to be curious and have the courage to look at themselves and their decisions in the mirror, including with the help of AI.

Who are you asking and what?

In my work, I have started to build small AI personalities - agents, each with their own perspective and context. I use them to challenge my own thinking, to bring in multiple perspectives and to make hidden 'biases' visible.

The answer always depends on who you ask and how you ask - no different from real life. As with humans, when you ask an AI something, you need to understand what kind of "personality" is responding and what worldview it operates from. In other words, for an AI agent to truly support decision-making, there always needs to be a context behind it: why is this perspective relevant, and what does it bring to the conversation?

As a leader - or even an Agile Coach - it is important to challenge your own decision-making from different perspectives. AI is a good tool to help you do that. When you bring in multiple agents, such as a critic, a dissenter, an optimist and an ethical reviewer, you can mirror your thinking from many directions. Not because one of them is right, but because they help you ask better questions.

Sometimes I also get the agents to talk to each other, as if I were sitting them at the same table. For example, when I'm thinking about how to give difficult feedback, one agent will bring up the direct approach, another will focus on learning, and a third will ask what might be left unsaid. This kind of small AI workshop often yields insights that I wouldn't reach on my own.
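As a rough illustration, such a round of perspectives might look like the sketch below. Everything here is an assumption for the sake of the example: the role descriptions are my own wording, and `ask_model` is a placeholder for whatever chat API you actually use.

```python
# A minimal sketch of a "small AI workshop": the same question is posed to
# several role-prompted agents. The model call itself is a stub (`ask_model`),
# so this works with any chat API you plug in.

ROLES = {
    "critic": "You look for weaknesses and unstated risks in the idea.",
    "dissenter": "You argue against whatever is being proposed.",
    "optimist": "You highlight opportunities and best-case outcomes.",
    "ethical reviewer": "You examine fairness, harm and who is affected.",
}

def build_prompt(role: str, question: str) -> list[dict]:
    """Turn one role and a shared question into a chat-style message list."""
    return [
        {"role": "system", "content": ROLES[role]},
        {"role": "user", "content": question},
    ]

def run_workshop(question: str, ask_model) -> dict[str, str]:
    """Collect one answer per role; `ask_model` wraps your chat API."""
    return {role: ask_model(build_prompt(role, question)) for role in ROLES}
```

The point is not the code but the structure: every agent gets an explicit context, so you always know what kind of "personality" is answering.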

A new layer of interaction

The way people perceive and interpret things affects the outcome. The same goes for AI - interactions with it are full of human choices, interpretations and emotions. In my work with teams, I use AI to iterate and generate insights. It helps me articulate problems that are often rooted in social dynamics: misunderstandings, tensions, unclear roles.

I find that the challenges are particularly acute in remote work, where much of the interaction takes place through written communication channels. Digital information is fragmented, tone of voice and embodied communication disappear, and tacit knowledge rarely surfaces. That's why I've brought in AI to help gather the fragments of interaction and structure them into an intelligible whole - to catch the tacit nuances that would otherwise be missed entirely in the Slack threads.
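To make "structuring the fragments" concrete, here is a small sketch that gathers scattered channel messages into a single summarisation prompt. The channel names, message format and instruction text are all hypothetical, and the model call itself is left out - this only shows the assembly step.

```python
# Sketch: collect fragmented channel messages into one prompt for an AI
# summariser. The message shape ({'channel', 'author', 'text'}) and the
# instruction wording are illustrative assumptions.

from collections import defaultdict

def group_by_channel(messages: list[dict]) -> dict[str, list[str]]:
    """Group raw messages per channel as 'author: text' lines."""
    grouped = defaultdict(list)
    for m in messages:
        grouped[m["channel"]].append(f'{m["author"]}: {m["text"]}')
    return dict(grouped)

def build_summary_prompt(messages: list[dict]) -> str:
    """Assemble the fragments into one prompt that asks for the tacit
    signals - tensions, misunderstandings, things left unsaid."""
    parts = [
        "Summarise the team dynamics below. Note tensions, "
        "misunderstandings and anything that seems left unsaid.\n"
    ]
    for channel, lines in group_by_channel(messages).items():
        parts.append(f"#{channel}:")
        parts.extend(lines)
        parts.append("")
    return "\n".join(parts)
```

The structuring is deliberately simple; the value comes from putting the scattered pieces side by side so the model - and you - can see the whole.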

When talking about using AI to improve interaction, the importance of self-regulation and awareness must be stressed. For me personally, AI is not only a tool but also a mirror. It not only makes my work more efficient, but also helps me pause and reflect: Why am I thinking this way? What if I saw it differently? This also makes AI an emotional partner - not a replacement for a human, but an extension of my own thinking.

Messy humanity and the clear logic of machines

There is a certain messiness to being human. Perfection is not the goal: not in humans, not in AI. It is imperfection that makes us understandable and connects us to each other. More important than efficiency, then, is presence. 

Given the huge potential of AI, it is good to bear in mind its potential for abuse. As efficiency increases, new distortions can develop in teams and companies and in human interactions. Alongside the power of AI, making mistakes remains one of the most valuable gifts of development, and maintaining empathy one of the most important values. By keeping humans at the centre of interactions, AI can act as a support - not just as an efficiency booster, but also as a reflective enabler.

Let me return once again to the words of Eero Milanoff, which sum up the essence of this text:

"When you have to protect yourself, choices are made through fear and not in the moment. It closes off opportunities to achieve something interesting."

My own goal is that AI will not build stronger walls and defences, but help us open up what is interesting, unfinished, complex and yet meaningful.


Stop by for a coffee!

Arto Kellokoski, Agile Coach & partner Kipinä

Arto Kellokoski is an experienced Agile Coach with over 20 years of background in software development. He combines technical expertise and agile best practices to help teams and organisations develop their interaction, collaboration and ways of working - with sustainable development as the goal. People and genuine interaction are at the heart of Arto's work: he listens actively, supports change and empathetically helps teams grow.
