If we want our devices to act as true agents—as we discussed recently regarding the shift in personal computing—we need to look closely at the engines driving them. The “agentic” future isn’t just about a smarter interface; it’s about the foundational models that provide the logic. Back in April, I touched on the architectural shift from colossal models to precision engineering. Today, that shift is no longer just a technical preference; it’s a geopolitical and economic battlefield.
We’re seeing a fascinating divergence in how AI is built. On one side sits the “bigger is better” philosophy, exemplified by Anthropic marketing its next generation as the “biggest and smartest” and by the massive 397-billion-parameter Qwen models coming out of China. On the other is the European approach, led by Mistral’s new Small 4.
At Ambiente Ingegneria, we’ve always leaned toward this second path: precision engineering. Think of it as the “metric system” of AI. Rather than guessing with miles of unnecessary parameters, we focus on integrated solutions that do specific things exceptionally well. Whether we are building image recognition or automated content grouping, a well-tuned, specialized model often outperforms a bloated general model in a production environment. It’s about measurable, standardized units of performance rather than vague marketing hype.
This isn’t just about efficiency; it’s about sovereignty and safety. The recent news regarding Claude Mythos Preview—a model restricted to a select few US companies—has even the European Central Bank concerned about the future of European savings and data. When AI becomes a “black box” controlled by a handful of entities, it threatens the transparency we strive for.
In our work with Python, Django, and PostgreSQL, we see daily how critical it is for systems to be verifiable. A model is only as good as the database it sits on and the standards it follows. Promoting open standards isn’t just a preference for us; it’s a defense against the hallucinations and misinformation that unvetted, “colossal” models can produce. By prioritizing data-driven analysis over hype, we ensure that our LLM integrations (such as RAG pipelines and voice interfaces) and our custom Odoo ERP modules remain robust, secure, and, most importantly, truthful.
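To make the "verifiable" point concrete, here is a minimal sketch of the grounding step in a RAG pipeline: every answer is assembled only from rows retrieved out of a database, so each claim can be traced back to a stored record. This is an illustrative sketch, not our production code; it uses SQLite in place of PostgreSQL to stay self-contained, and a naive keyword-overlap score where a real deployment would use embeddings (e.g. pgvector) or full-text search.

```python
import sqlite3

def build_corpus(conn: sqlite3.Connection) -> None:
    """Load a tiny illustrative document store (stand-in for a real table)."""
    conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
    conn.executemany(
        "INSERT INTO docs (body) VALUES (?)",
        [
            ("Odoo ERP modules manage invoicing and inventory records.",),
            ("RAG pipelines ground model answers in retrieved documents.",),
            ("PostgreSQL stores the structured data our systems rely on.",),
        ],
    )

def retrieve(conn: sqlite3.Connection, query: str, k: int = 2) -> list[tuple[int, str]]:
    """Rank stored documents by keyword overlap with the query (toy scoring)."""
    terms = set(query.lower().split())
    rows = conn.execute("SELECT id, body FROM docs").fetchall()
    return sorted(rows, key=lambda r: -len(terms & set(r[1].lower().split())))[:k]

def grounded_prompt(conn: sqlite3.Connection, question: str) -> str:
    """Build a prompt that cites its sources, so the answer is auditable."""
    sources = "\n".join(f"[doc {doc_id}] {body}" for doc_id, body in retrieve(conn, question))
    return (
        "Answer using ONLY the sources below, citing their ids.\n"
        f"{sources}\nQuestion: {question}"
    )
```

The key design choice is that the retrieval step, not the model, decides what evidence is admissible: if a statement is not in the cited rows, it can be flagged as unsupported.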
The race for AI dominance is shifting. It’s moving away from sheer scale and toward strategic functionality. As we navigate this era, our commitment remains the same: building systems that prioritize engineering integrity, use the right “metric” for the job, and stand firmly against the noise of fake news and online bullying.