Three insights: What does locally deployed AI really mean for businesses?
04 | 2026 Jari Huilla, CTO & Partner at Kipinä
Artificial intelligence is on everyone’s lips right now. In recent weeks, discussion has centered in particular on tightening usage restrictions, rising prices, and the apparent decline in performance of models that previously seemed more capable when used via cloud services and APIs. This has raised a concern for many: should businesses dare to build their operations on artificial intelligence if its prices can rise or its availability can degrade beyond their control?
Fortunately, AI models can also be run locally—on your own computers, in your own environment. Why does this matter?
1. If you own your AI, no one can take it away from you
When you run open-source software on your own hardware, no third party can suddenly restrict your access to the service.
When the model is running in its own environment:
its operation does not depend on external services
the data does not leave the organization
information security can be managed more effectively
This is not merely a technical choice, but a strategic decision involving business risks.
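As a deliberately minimal illustration of what "running in your own environment" can look like in practice: the article names no specific tooling, but assuming Ollama (a popular open-source model runner) as an example, a self-hosted model server could be described with a Docker Compose file like this:

```yaml
# Illustrative sketch only: a self-hosted model server with no external
# dependencies at runtime. Ollama is used here as an example runner; any
# open-source serving stack follows the same pattern.
services:
  ollama:
    image: ollama/ollama              # pin an exact version tag in production
    ports:
      - "11434:11434"                 # Ollama's default API port
    volumes:
      - ollama-models:/root/.ollama   # model weights stay on your own disk

volumes:
  ollama-models:
```

Once the container is up, model weights are downloaded once and then served from local storage; prompts sent to the API on localhost never leave the organization, which is precisely the control and security property described above.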
2. The model doesn't get dumber on its own
Sometimes, models offered by cloud services seem smart and capable at first, only to seem less so over the following weeks. At the time of writing, this discussion has mainly centered on Anthropic’s Claude Opus versions 4.6 and 4.7.
Locally run models remain unchanged unless you modify them yourself.
3. Value comes from a combination of factors—not from a single technology
Let’s be honest, though: on-premises AI isn’t the answer to everything, and both acquiring the necessary hardware and maintaining it require resources. On-premises AI should therefore be viewed as part of a broader strategy. The smartest solutions come from combining:
open-source models run locally on your own hardware (control, security)
the AI labs’ own cloud services (scale, performance)
model platform services from the public cloud providers (access to frontier models without data being sent to the AI labs)
open models run on rented hardware (making even the largest open models usable)
The key is the ability to build the right combination for each use case. Most information security and data protection requirements, for example, can be met this way.
What does this mean in practice?
Local AI isn't for everyone—or for everything—but it's a sign of change:
toward more decentralized artificial intelligence
toward stronger data management
toward more practical use cases
And perhaps most importantly: toward a world where artificial intelligence is not just an outsourced service, but part of our own core capabilities.
If you want to understand how locally run models actually work in practice (in a Linux environment, step by step), we recommend reading Jari’s original, more technical article: Running local AI models on Linux
Jari Huilla, CTO & Partner at Kipinä
Jari Huilla is Kipinä’s CTO, with an exceptionally long and diverse background in technology; he landed his first job at the Nokia Research Center at the age of 15. Over the years, he has worked as a developer, manager, and builder of growth companies. At Kipinä, Jari brings together deep technical expertise and business-oriented thinking. He is particularly interested in how to make AI solutions not only technically functional but also genuinely meaningful—and how to understand their limitations, rather than simply ignoring them.