AI in the Trenches

Is Your Tech Stack AI-Obsolete?

Pierre-Jean L'Hôte


Strategic CTO Advisory • Founder Etimtech

8 min read
Tags: ai, stack, development, obsolescence, strategy
[Illustration: a fragmented tech stack facing AI acceleration and technological obsolescence]

A Haskell developer and a Python developer ask an AI the same question. The Python developer gets a senior colleague. The Haskell developer gets an intern who read the manual.

This isn't a joke. It's the current state of AI-assisted software development, and the strategic implications for your organization are massive.

Big Tech won't tell you this openly, but they're creating an irreversible technological gap. And this gap isn't about computing power. It's about human capital injected into the machine. Your choice of tech stack, once an engineering decision, has become a strategic decision that determines your access to the biggest productivity lever of the decade.


The Open Secret of Post-Training

What AI actually does to "learn"

The common assumption is that AI "reads the entire internet" and becomes intelligent by magic. That's wrong.

The raw model, the Base Model, is a chaotic savant. It has ingested terabytes of text and code, but without discernment. It can generate syntactically valid code while ignoring security subtleties, idiomatic patterns, and ecosystem conventions. To make it usable in the enterprise, it undergoes two crucial stages where the human factor is decisive.

SFT, Supervised Fine-Tuning. Thousands of experts are recruited and paid to write the "perfect" answer to technical questions. They don't just give the right answer: they explain their reasoning, step by step (Chain of Thought). The model learns not only what to answer, but how to reason to get there.

RLHF, Reinforcement Learning from Human Feedback. These same experts then evaluate the model's responses. They "punish" incorrect, dangerous, or poorly structured answers, and "reward" rigorous, secure, and idiomatic responses. The model adjusts its behavior accordingly.
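To make these two stages concrete, here is a minimal sketch of the two data shapes involved. The field names and contents are hypothetical, illustrative of the structure rather than drawn from any real training dataset:

```python
# Illustrative sketch of the two kinds of human-labeled data used in
# post-training. Field names are hypothetical, not from a real dataset.

# SFT record: an expert writes the "perfect" answer, including the
# step-by-step reasoning (Chain of Thought), not just the final code.
sft_example = {
    "prompt": "Write a Python function that safely parses user-supplied JSON.",
    "chain_of_thought": (
        "1. Never eval() untrusted input. "
        "2. Use json.loads and catch json.JSONDecodeError. "
        "3. Validate the resulting structure before use."
    ),
    "answer": "def parse(payload):\n    import json\n    ...",
}

# RLHF record: the same experts rank candidate answers; a reward model
# then learns to prefer the rigorous, idiomatic response over the
# dangerous one.
rlhf_example = {
    "prompt": sft_example["prompt"],
    "chosen": "json.loads inside try/except, with the result validated.",
    "rejected": "eval(payload)  # executes arbitrary attacker-supplied code",
}
```

The key point: both records require a human expert who knows the language's idioms and pitfalls, which is exactly the resource that is scarce for niche languages.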

The data point nobody mentions

Here's the critical insight: the quality of the model on a given language depends directly on the number and quality of human experts available for SFT and RLHF in that language.

It's not the volume of source code on GitHub that matters. It's the volume of structured human expertise injected into the post-training phase. And that resource is not evenly distributed across languages.


The "Rich Get Richer" Effect of Languages

Languages where AI is a Senior Developer

Python, TypeScript, Java. For these three ecosystems, AI labs can easily recruit 5,000 competent developers to rate responses all day long. The talent pool is enormous. The cost per evaluation is reasonable. The volume of human feedback is massive.

Result: AI has a luxury private tutor in these languages. It knows the subtleties. It detects security pitfalls. It generates modern idiomatic patterns. It understands the library ecosystem. It suggests relevant optimizations. In practice, it is a Senior Developer in these technologies.

The frameworks that benefit most from this dynamic are what I call "AI-Native" stacks: Spring Boot for the Java ecosystem, Next.js for TypeScript/React, FastAPI for Python. AI doesn't just generate code for these frameworks: it generates good code, with best practices, the right patterns, and an awareness of common pitfalls.

Languages where AI is an intern

Rust, Haskell, OCaml, and your in-house DSLs. Experts are rare and expensive. Recruiting 5,000 competent Haskell evaluators is simply impossible. The human feedback dataset is tiny. The RLHF is superficial.

Result: AI has "read the manual" but never practiced. It generates code that compiles (sometimes), but with Python logic translated word for word. It doesn't understand the language's idioms. It ignores advanced patterns. In practice, it is an intern who completed a tutorial in these technologies.

The reinforcement effect

This imbalance compounds over time. The more performant AI is in a language, the more developers in that language use it. The more they use it, the more labs invest in post-training for that language. The richer the post-training, the better AI gets. It's a virtuous cycle for dominant languages, and a vicious cycle for niche languages.

The gap isn't closing. It's widening.


Measurable Impact on Productivity

The numbers are emerging from early comparative studies, and they're unambiguous.

On AI-Native stacks (Python/FastAPI, TypeScript/Next.js, Java/Spring Boot), AI-equipped teams measure productivity gains of around 30 to 40% on routine development tasks: boilerplate, unit tests, refactoring, documentation.

On niche stacks (Rust, Elixir, Clojure), measured gains plateau between 5 and 15%, with a significantly higher error rate in generated code, requiring review and correction effort that erodes part of the gain.

In other words: choosing a niche stack today means giving up 20-25 productivity points compared to a competitor on an AI-Native stack. On a 12-month project with a team of 10 developers, the gap translates to hundreds of thousands of euros.
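The order of magnitude is easy to verify. Assuming a fully loaded cost of €100k per developer per year and the gain figures above (both hypothetical round numbers, to be replaced with your own), the annual gap can be sketched as:

```python
def productivity_gap_cost(team_size, annual_cost_per_dev,
                          gain_ai_native, gain_niche):
    """Annual value of the productivity gap between two stacks.

    Models a productivity gain as effective extra capacity: a 35% gain
    on a 10-person team is worth 3.5 developer-years of output per year.
    All inputs are assumptions to be replaced with measured figures.
    """
    gap = gain_ai_native - gain_niche
    return team_size * annual_cost_per_dev * gap

# Hypothetical inputs: 10 devs, €100k loaded cost, 35% vs 10% gains.
cost = productivity_gap_cost(
    team_size=10,
    annual_cost_per_dev=100_000,
    gain_ai_native=0.35,
    gain_niche=0.10,
)
print(f"Annual gap: €{cost:,.0f}")  # roughly €250,000/year on these assumptions
```

A 25-point gap on a 10-person team already lands in the hundreds of thousands of euros per year; scale the inputs to your own team and salaries.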


The Uncomfortable Question

Does the theoretical "technical superiority" of your niche language justify the cost of forgoing the biggest productivity lever of the decade?

The answer isn't universal. It depends on your context. But the question must be asked explicitly, with numbers, not convictions.

When the niche is still justified

Critical embedded systems where Rust offers memory safety guarantees that no amount of AI productivity can compensate for.

High-frequency financial systems where the computational performance of specialized languages justifies the additional human cost.

Differentiating intellectual property where your competitive advantage relies precisely on rare expertise that AI cannot replicate.

When the niche is a trap

Standard business applications (CRUD, REST APIs, back-office) where the language's technical superiority offers no functional advantage, but the lack of AI assistance slows down delivery.

Growing teams where hiring is already difficult and where the lack of effective AI assistance makes onboarding juniors even slower.

Time-constrained projects where every week of delay has a direct and measurable business cost.


CTO Playbook: Evaluate and Decide in 5 Steps

1. Map your current stack. List every language, framework, and tool in use. For each, evaluate the level of AI assistance available (Senior, Junior, Non-existent) by concretely testing current tools on representative tasks from your codebase.

2. Measure the productivity gap. Take a standard task (implementing an API endpoint, writing tests, refactoring a module) and measure the time with and without AI assistance, on your actual stack. Theoretical numbers aren't enough: measure your own.

3. Evaluate the cost of migration vs. the cost of stagnation. A stack migration is expensive. But the cost of stagnation (lower productivity, harder hiring, longer time-to-market) is a continuous cost that accumulates. Model both over a 3-year horizon.

4. Consider a hybrid strategy. You don't have to migrate everything. Identify the components where AI assistance would generate the most value (typically: new developments, API layers, back-office) and evaluate a targeted migration of these components to an AI-Native stack, while keeping your existing stack for the critical modules where it excels.

5. Factor in recruitment. Developers of the current generation choose employers partly based on the tech stack. AI-Native stacks attract more candidates, who are themselves more productive thanks to their mastery of AI tools. This is a multiplier that traditional staffing models underestimate.
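Steps 2 and 3 can be turned into a small break-even model. The figures below are placeholders to illustrate the comparison, not benchmarks; feed in your own measured stagnation cost and a realistic migration estimate:

```python
def three_year_comparison(migration_cost, annual_stagnation_cost, years=3):
    """Compare a one-off migration cost against a recurring stagnation cost.

    Returns (cumulative migration cost, cumulative stagnation cost) per
    year, so the break-even point is visible. Ignores discounting and
    transition-period productivity dips for simplicity.
    """
    migrate = [migration_cost] * years  # paid once, then carried forward
    stagnate = [annual_stagnation_cost * (y + 1) for y in range(years)]
    return list(zip(migrate, stagnate))

# Hypothetical inputs: €400k migration vs €250k/year of lost productivity.
for year, (migrate, stagnate) in enumerate(
        three_year_comparison(400_000, 250_000), start=1):
    print(f"Year {year}: migrate €{migrate:,} vs stagnate €{stagnate:,}")
```

On these placeholder numbers the migration breaks even during year two; the point of the exercise is that stagnation is a cost that compounds while migration is a cost you pay once.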


Obsolescence Is Not Inevitable, It's a Choice

The AI gap between dominant and niche languages is real, measurable, and expanding. Ignoring it is a strategic risk. But acknowledging it doesn't mean capitulating to technological conformism.

It means making an informed decision: either your niche stack gives you a differentiating advantage that justifies the cost of lesser AI assistance, or it has become a strategic ball and chain that technical nostalgia prevents you from seeing.

The companies that will dominate the next decade won't be the ones that chose the best language. They'll be the ones that chose the best combination of language + AI + human skills. And that equation, in 2026, massively favors AI-Native stacks.

The question is no longer theoretical. Your competitors on AI-Native stacks are coding 40% faster, with an AI that acts as a genuine pair-programming partner. Can you afford not to react?
