From a coder's perspective: in the crossroads of AI and security, human responsibility is highlighted.

01 | 2025 Erkka Korpi, Senior software & cloud engineer, Partner Kipinä

As a coder, my standards are always high - I want to deliver cutting-edge IT solutions for our customer Supercell quickly and efficiently. The pressure I feel in my subconscious is mainly of my own making. The same pressure drove me to a situation where I relied too much on AI-generated code. The temptation to solve the problem quickly was simply too great. When I realised that the AI code was not working as it should, I had to go back several steps. The speed I was looking for had taken me in all the wrong directions - and into potentially dangerous waters.

 

While attending the annual AWS re:Invent technology conference in Las Vegas, this moment came back to me and really made me realize my responsibility as a developer when using AI-generated code. I'm sure we've all made the same mistake: as a coder, you've relied on something - a third-party library, an off-the-shelf solution, or even a colleague's recommendation - that you don't fully understand.

As you might expect, the main focus of AWS re:Invent was AI and related technological innovations. Sessions on security by Fernando Cardoso of Trend Micro, and Raj Pathak, Qingwei Lin and Ankur Tiwari of AWS, all focused on security, gave me a concrete insight into the fast-paced race between AI and security. Too many important issues are easily overlooked as AI solutions take over software development at an accelerating pace.

 
 

"As coders and customers, are we sufficiently aware of the race between AI and security in software development? Do we recognize the risks of the rush and the ever-increasing pace?"

 
 

These questions serve as the spark behind this blog post. I hope that my perspectives will spark ideas among journalists, customers and fellow coders about the security risks associated with AI and the importance of addressing them. New solutions that are faster and seem easier than ever can fascinate anyone. On the vendor side in particular, this creates pressure to deliver AI-generated solutions - often too quickly. The risks of blindness to speed are obvious.

Everything new brings security threats

Historically, when something technologically sophisticated has been rapidly adapted for 'mass' use, the change has brought with it security risks. Vulnerabilities, often hidden by a lack of skills. 

Information security is a speciality in software development for a reason. That's why security professionals who know how to identify and manage risks are often used to test applications.

The rise of cloud computing brought a completely new environment compared to traditional IT infrastructure. PaaS (Platform as a Service) services operate under a so-called shared responsibility model, where the responsibility for security is shared between the service provider and the customer.

  • The service provider is responsible for the security of the platform infrastructure, such as encrypting traffic between data centres and maintaining the security of the server platform.

  • The customer is responsible for the secure configuration of the services, such as managing firewall rules and protecting their data.

Often shortcomings in customer responsibilities have led to serious security breaches. For example Tesla's 2018 data leak was caused by misconfigured cloud services.

 
 

"AI services have a significant potential risk of causing similar leaks and even disasters many times more severe than those experienced so far."

 
 

For example, consider a situation where a company has trained a large language model (LLM) with its own data, possibly accumulated over decades. One misconfigured service can, in the worst case, open up unauthorized access to this model. In this case, a so-called 'black hat hacker' could simply ask the model for the information he wants without having to search for it.

This risk is similar to traditional infrastructure protection, which defines exactly who has access to systems and from where. However, in a modern AI attack, the situation can be more complex. While anyone can have a conversation with a model, a black hat hacker can phrase his question (prompt) so cleverly that the model reveals information that should not be public. This is called a prompt injection attack.

In the same way that the development of AI services has lowered the threshold for software development, it has also made hacking easier for those who previously lacked the necessary know-how. In addition to prompt injection, AI poses other specific security threats, such as:

  • Data poisoning: an attacker intentionally manipulates the AI's training data, which can alter the model's behavior to benefit the attacker.

  • Model extraction: the attacker uses carefully targeted questions to copy the model's operating logic and create his own version of the model and its features.

For information on managing security risks, I recommend that you consult the material maintained by OWASP, in particular the recently published OWASP Top 10 for LLM Applications 2025 -white paper.

Good questions in the interplay between data security and AI include: 

  • Does your company know what AI models are in use and what data is fed into them? 

  • Are different teams in the organisation using different models - for example, one team using OpenAI's ChatGPT and another using Google's Gemini? 

  • Has the company defined clear rules for the use of AI or has it not been thought through at all?

Feet on the ground and common sense

The message about the potential of AI, which is resounding in the marketplace, requires a feet firmly on the ground attitude and a healthy dose of criticism. From a coder's perspective, the message of AI triumph is coloured by a glaring conflict of interest on the part of service providers.

It is important to understand that AI service providers will directly benefit from a future where coders would no longer be needed. Operators such as Nvidia, OpenAI and Cognition are boldly painting a picture of a future where AI does all the programming for humans - coding skills would be redundant.

Ironically, at the AWS re:Invent event, the AI security message was, in my opinion, the opposite. The need for human expertise is highlighted as the security responsibility for AI solutions ultimately falls to humans. Service providers like AWS are investing heavily in shared responsibility model -platforms on which AI services are developed (e.g. AWS Bedrock). At least for now, we cannot outsource security responsibility to AI, but we can make our work more efficient with it.

 
 

"This mixed message has certainly been part of the reason for my own reservations about AI solutions. Many promises sound too good to be true. A future in which humans become more or less redundant or a mere extension of AI has felt alien and even unrealistic."

 
 

However, the fact is that while AI tools such as ChatGPT and GitHub Copilot multiply the productivity of coders, they are not, at least for now, a substitute for the expertise and deep knowledge of an experienced coder.

Using AI-generated code directly in production always carries a risk of vulnerability. Without an expert's ability to critically assess the quality of AI-generated code, vulnerabilities can easily make their way into production.

As WithSecure's Research Director Mikko Hyppönen has said, "If it's intelligent - it's vulnerable."

Who is responsible for AI?

AI technologies and related abstractions are currently evolving at a rapid pace. At the same time, the production of AI-based services and solutions is becoming easier every day, regardless of the technical skills of the developer.

This trajectory is reminiscent of the disruption that cloud computing once enabled. Cloud computing accelerated development and lowered the threshold for building a productive infrastructure without large financial investments. Anyone could make a digital service publicly available. Now, AI technologies are accelerating this cycle even further, enabling more efficient work with less technical skills.

However, without sufficient technical understanding, it is difficult to assess the security risks and technical flaws of AI applications and products. So who bears the responsibility when an AI-based solution that is quickly and easily produced contains risks?

It was this increase in speed and the lowering of the threshold that got me thinking:

Will accelerating development and increasing labour productivity create new security risks? And how can these risks be managed?

The security risks associated with AI and how to manage them are still relatively new challenges. The OWASP Foundation, which has long published Top 10 lists of security threats for web applications, published its first Top 10 list specifically for large language models (LLM applications) in 2023, indicating that AI security now requires even more attention.

The temptation of every coder 

A coder's job is a constant learning experience, working on rapidly evolving and large-scale projects. No one can claim to have it all. However, AI seems to emphasise the ever-increasing balancing act in the industry; whether to do things quickly or to do them right the first time. A question that contributes to the temptation to straighten out the bends.

As I mentioned earlier, I myself have found myself in situations where I have moved too fast on a project due to time pressure and limited resources. This is a human factor familiar to many in the AI and security field. Even as an experienced author, I have been faced with completely new challenges and found that my own technical expertise is not always sufficient to assess all the risks.

Before AI, the solution to the so-called 'code junk' was sought from colleagues or Stack Overflow, so it's only human that AI is attractive to every coder. Rush, time pressure and lack of people all play a role in the background. These have made me understand my responsibilities and my role in this job. My own mistake taught me that I can now be more critical of the benefits of AI's apparent efficiency. I no longer blindly trust the code produced by AI.

However, the use of AI code does not reflect laziness on the part of the coder, but rather a desire to be better created by external or self-imposed pressure. In these situations, I find it helps that more important than moving fast is creating code that I can take responsibility for at the end of the day. I can't apologise to the client that "I got some bad code from the AI."

Is it possible to move quickly and still take the most important things into account?
What risks do you identify and are you prepared to take?

Information security is everyone's responsibility

Finally, I encourage anyone using AI or providing AI services to familiarise themselves with the security threats associated with AI solutions, such as the risks of large language models (LLMs) and how to prevent them. In my view, AI security is a combination of traditional access and privilege management and AI-specific threat management.

Nothing has changed shift left -approach, which takes security into account at the early stages of software development rather than after release. AI will speed up software development, but not at the expense of security.

Interested in the coder's perspective?
Read about Planet AI's artificial intelligence case study

 
 
 

Erkka Korpi, Senior software & cloud engineer & Partner Kipinä

The author is one of Kipinä's experienced experts. Erka's straightforward and inquisitive nature creates a passion for understanding how things really work deep beneath the surface. IT is Erka's passion both as a consultant in the Supercell team and in his spare time: "I like to go in there really deep."

Previous
Previous

Case Otava: Full organisational support for accessibility

Next
Next

Attention Buyer! The seeds of a successful digital project are being sown earlier than you thought.