The 100x Engineer Doesn't Write Code

Programming complexity has always been abstracted upward. LLMs are the next layer, and the best engineers are becoming agent orchestrators.

The best engineer on your team in 2028 won’t be the fastest typist. They won’t have the most GitHub commits. They might not even open an IDE most days.

They’ll be the person who orchestrates a team of AI agents that ships more reliable code than any individual ever could.

I know this because I’m already living it.

🔑 The 100x engineer isn’t a faster coder. They’re the person who builds, secures, and orchestrates teams of AI agents.

The longest-running pattern in computing

Every major abstraction layer in programming history did the same thing: it made more people productive by hiding complexity downward.

1

Machine Code → Assembly (1950s)

Hardware engineers could finally program without flipping physical switches. The abstraction: mnemonic instructions instead of raw binary.

2

Assembly → C (1970s)

Systems programmers stopped thinking in opcodes. Kernighan and Ritchie gave us portable code that compiled across architectures.

3

C → Java/C# (1990s)

Memory management disappeared behind garbage collectors. Enterprise development exploded because you no longer needed to track every malloc and free.

4

Java → Python/JavaScript (2000s–2010s)

Scripting languages made development accessible to millions of new programmers. You could build a web app in a weekend.

5

All Languages → Natural Language (Now)

LLMs are abstracting every programming language into English. JavaScript, Python, TypeScript, CSS, Terraform, Rust: they’re all becoming lower-level implementation details that AI handles.

This isn’t speculation. GitHub’s 2022 study found that developers using Copilot completed tasks 55% faster. The 2024 Stack Overflow Developer Survey showed 76% of developers are using or planning to use AI coding tools. McKinsey’s 2024 State of AI report found 72% of organizations have adopted AI in at least one business function.

The pattern is clear. Each abstraction layer didn’t eliminate the layer below it. C didn’t kill assembly. Java didn’t kill C. But each one shifted where the highest-leverage work happens. We’re watching that shift happen again, faster than any previous transition.

  • 76% — developers using or planning to use AI coding tools (Stack Overflow, 2024)
  • 72% — organizations that have adopted AI in at least one function (McKinsey, 2024)
  • 55% — faster task completion with Copilot (GitHub, 2022)
  • 33% — enterprise software including agentic AI by 2028 (Gartner)

What I actually do all day

At Axislabs, I manage a team of AI agents. They have an org chart. They have defined roles, responsibilities, and escalation paths. This sounds absurd. It also works better than anything else I’ve tried.

My job now is closer to a director of engineering than a developer. I define outcomes, architect systems, review work, and handle the problems that require judgment the agents don’t have yet. I write code when I need to, but that’s increasingly the exception, not the default.

The engineers who will thrive in this world aren’t the ones who memorize API docs or type 150 WPM. They’re the ones who can decompose problems clearly, design agent workflows, evaluate outputs critically, and understand the security implications of what they’re building.

Old 10x Engineer

  • Writes code faster than anyone
  • Knows every API by heart
  • Ships solo features in a weekend
  • Measured by commits and PRs
  • Deep in one language/framework

New 100x Engineer

  • Orchestrates agents that write code
  • Knows how to decompose problems for agents
  • Ships entire products in a weekend
  • Measured by outcomes and reliability
  • Fluent across the full stack via agents

The questions no one is answering yet

I have more questions than answers. That’s honest. Here’s what I’m genuinely wrestling with.

Who owns the agents?

Do engineers bring their own agent teams? Does the company provide them? What’s the budget model for token usage?

Right now, most companies are winging it. Microsoft’s 2024 Work Trend Index found that 78% of AI users were bringing their own AI tools to work. This is shadow IT all over again, except the stakes are higher because these tools process proprietary code and business logic.

Shadow AI is the new shadow IT. A Salesforce survey found over 55% of employees admitted to using unapproved AI tools. Samsung’s 2023 incident, where employees leaked proprietary source code through ChatGPT, was just the canary.

There needs to be a real budget model. Per-engineer token allocations? Department pools? Usage-based with guardrails? I don’t know the right answer, but I know “everyone just use their personal ChatGPT subscription” isn’t it.

Does the C-suite need a Chief Agent Officer?

Someone needs to own agent strategy across the org. Governance, safety policies, orchestration standards, vendor management, risk assessment. This isn’t a side project for the CTO. It’s a full-time job.

Deloitte’s 2024 State of Generative AI in the Enterprise report found that only 22% of enterprise leaders had enterprise-wide AI governance frameworks. That number needs to approach 100%, and fast. Maybe the answer is a new C-suite role, the Chief Agent Officer. Or maybe it’s an expansion of the CTO or CISO role. But someone has to be accountable.

Should you build your own orchestration layer?

There’s a strong argument for moving away from third-party AI subscriptions and building internal orchestration. Not “we use ChatGPT Teams.” More like: “We have our own agent infrastructure that embeds our knowledge base, our SOPs, our compliance requirements, our risk models, and our security policies directly into the system.”

McKinsey’s data shows adoption is broad but governance is thin. The organizations that build their own orchestration, with compliance baked in, will have a structural advantage over those that treat AI as just another SaaS subscription.

What does a hybrid human/agent workflow actually look like?

I think we’re heading toward unified task management where humans and agents work side by side. Each employee has their own AI assistant or team. Tasks flow between human and agent based on complexity, judgment requirements, and risk level.

This isn’t a new concept. It’s how good engineering managers already think about delegation. The difference is that your “team” now includes agents that work 24/7, don’t need context-switching time, and can handle the kind of repetitive work that burns humans out.

The hard part isn’t the technology. It’s the organizational design. How do you performance-review a human whose output is primarily “agent orchestration”? How do you onboard new engineers when half the institutional knowledge lives in agent configurations? These are management problems, not technical ones.
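What does that routing actually look like? Here’s a minimal sketch, assuming a hypothetical task model — the field names, thresholds, and example tasks are illustrative, not a description of any real system:

```python
from dataclasses import dataclass
from enum import Enum


class Assignee(Enum):
    AGENT = "agent"
    HUMAN = "human"


@dataclass
class Task:
    title: str
    risk: int        # 1 (reversible, internal) .. 5 (money, data, external impact)
    judgment: int    # 1 (mechanical) .. 5 (requires taste or org context)
    repetitive: bool


def route(task: Task) -> Assignee:
    """Send low-risk, low-judgment work to agents; keep the rest with humans."""
    if task.risk >= 4:        # money, data access, external communication
        return Assignee.HUMAN
    if task.judgment >= 4:    # product taste, institutional context
        return Assignee.HUMAN
    if task.repetitive:       # the work that burns humans out
        return Assignee.AGENT
    return Assignee.AGENT if task.risk <= 2 else Assignee.HUMAN


backlog = [
    Task("Regenerate API client from OpenAPI spec", risk=1, judgment=1, repetitive=True),
    Task("Rename a database column in production", risk=5, judgment=3, repetitive=False),
    Task("Draft release notes from merged PRs", risk=2, judgment=2, repetitive=True),
]
for t in backlog:
    print(f"{t.title} -> {route(t).value}")
```

The interesting design decision is that risk and judgment act as hard gates before repetitiveness is even considered — an agent never picks up high-stakes work just because it’s boring.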

The interface convergence problem

This is the part that keeps me up at night.

The average enterprise uses over 300 SaaS applications. Developers alone juggle dozens of tools daily. And increasingly, what everyone wants is simple: one interface to all of them.

A single AI chat that orchestrates everything. Your calendar, your code, your email, your deployments, your CRM, your monitoring dashboards. Not switching between fifteen browser tabs. Just talking to one agent that coordinates across all of them.

This is happening. It’s obviously useful. And it’s a security nightmare.

The OWASP Top 10 for LLM Applications exists for a reason. Prompt injection is the #1 risk. When your AI agent has access to production infrastructure, sensitive data, and communication tools, prompt injection isn’t a theoretical concern. It’s an existential one.

So what does the secure version look like? Probably something like:

  • Scoped agent permissions: Each agent gets the minimum access needed for its role. Your code review agent can’t access your email. Your scheduling agent can’t deploy to production.
  • Human-in-the-loop for high-risk actions: Agents propose, humans approve. Especially for anything involving money, data access, or external communication.
  • Audit trails on everything: Every agent action logged, attributed, and reviewable. Not optional.
  • Isolation between contexts: Your personal agent conversations don’t leak into shared organizational data without explicit permission.

This is solvable. But it requires thinking about AI agents the way we think about IAM and zero-trust networking, not the way we think about productivity tools.

AGI isn’t just intelligence

Here’s a thought I keep coming back to.

When people talk about AGI, they focus on the model. How smart is it? Can it reason? Can it pass benchmarks? But I think the “general” in AGI might be less about the model’s intelligence and more about the interface layer.

🔮 AGI might not arrive as a smarter model. It might arrive as a unified interface layer that coordinates every tool, system, and data source through a single conversational agent. The “general” in AGI could be about the interface, not the intelligence.

Think about it. A model that scores perfectly on every benchmark but can only operate inside a chat window isn’t general-purpose. A model that’s “merely” very good but can seamlessly orchestrate your entire digital life, that’s functionally general. It can do anything because it can access and coordinate everything.

This reframing matters because it changes what we should be building. Not just smarter models (though that too), but better orchestration layers, better tool integrations, better permission systems, and better interfaces. The model is the engine. The interface and orchestration layer is the car.

Gartner predicts that by 2028, 33% of enterprise software will include agentic AI capabilities. That means two-thirds won’t. The competitive gap between companies that build the orchestration layer and those that don’t will be massive.

What this means for you

If you’re an engineer reading this, the actionable version is straightforward.

1

Start orchestrating, not just coding

Use AI agents for the work you do today. Not as autocomplete, but as team members. Give them tasks, review their output, iterate on your prompts and workflows.

2

Learn the security model

Read the OWASP Top 10 for LLMs. Understand prompt injection, credential management, and the principle of least privilege as it applies to agents. The engineers who understand agent security will be the most valuable people in the room.

3

Think in systems, not features

The highest-leverage skill is decomposing complex problems into agent-appropriate tasks. This is systems thinking. It’s what senior engineers already do with human teams, just applied to a new kind of team member.

4

Push your org on governance

If your company doesn’t have an AI governance policy, advocate for one. Shadow AI is a real risk. The engineers who help solve the governance problem will have outsized influence on how their organizations adopt this technology.
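The decomposition skill in step 3 can be made concrete: model a feature as a dependency graph of agent-sized tasks, then dispatch them in topological order. Python’s stdlib `graphlib` handles the ordering; the task names here are hypothetical:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
tasks = {
    "write_api_spec": set(),
    "scaffold_endpoints": {"write_api_spec"},
    "generate_tests": {"write_api_spec"},
    "implement_handlers": {"scaffold_endpoints"},
    "run_test_suite": {"implement_handlers", "generate_tests"},
    "human_review": {"run_test_suite"},  # judgment stays with a person
}

order = list(TopologicalSorter(tasks).static_order())
print(order)
```

Once the graph exists, independent branches (here, scaffolding and test generation) can run on separate agents in parallel as soon as their shared dependency completes — which is exactly the delegation reasoning a good engineering manager already does with human teams.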

The abstraction is moving upward again. Every previous layer made the engineers who embraced it more productive and the engineers who resisted it less relevant. This time isn’t different.

The 100x engineer doesn’t write code. They build the system that writes the code. And they make sure that system is secure, governed, and aligned with what the organization actually needs.

The question isn’t whether this is coming. It’s whether you’ll be orchestrating or be orchestrated.

Roger Chappel

CTO and founder building AI-native SaaS at Axislabs.dev. Writing about shipping products, working with AI agents, and the solo founder grind.


#ai #engineering #future


Steal this post → CC BY 4.0 · Code MIT