Mastering AI-Assisted Development: From Vibe Coding to Harness Engineering

How to move from vibe coding to agentic engineering with verified feedback loops, harness engineering, and a shifted programmer role. Key insights from Chris Parsons, Simon Willison, and Birgitta Böckeler.

Bvoxro Stack · 2026-05-14 09:36:54 · Technology

The landscape of AI-assisted software development is evolving rapidly, with practitioners refining their techniques and sharing hard-won insights. A recent update to Chris Parsons' guide on using AI for coding offers a concrete, detailed look at how experienced developers are leveraging these tools effectively. This article synthesizes the key lessons from that guide and related discussions, providing a roadmap for moving beyond casual experimentation to disciplined, agentic engineering.

The Evolution of AI Coding Practices

Chris Parsons recently released the third major update to his guide on coding with AI. Unlike generic advice, his post provides specific, actionable details about his workflows—details that allow others to learn and adapt. The core principles from earlier versions remain relevant: keep changes small, build guardrails around agent behavior, document processes meticulously, and ensure every change is verified before deployment. However, one crucial shift has occurred: the meaning of "verified" has expanded. Previously, verification meant a human read and approved every line. Now, given the volume of code agents generate, verification must encompass automated checks—tests, type checkers, and other programmatic gates—with human judgment reserved for high-impact decisions.

Source: martinfowler.com

Vibe Coding vs. Agentic Engineering

A key distinction, highlighted by Simon Willison and echoed by Parsons, is the difference between vibe coding and agentic engineering. Vibe coding involves letting the AI produce code without understanding or caring about the output—a practice that can be dangerous in production. Agentic engineering, by contrast, treats the AI as a collaborative partner whose output is scrutinized, guided, and improved through deliberate feedback loops. Parsons recommends tools that support this disciplined approach, specifically Claude Code and Codex CLI, because they provide what he calls an inner harness—a framework that constrains and validates the AI's actions.

The New Verification Imperative

Parsons emphasizes that verification is the single most important capability to optimize. He offers a powerful mental model: a team that can generate five approaches and verify all of them in an afternoon will outpace a team that generates only one approach and waits a week for feedback. The game has shifted from "how fast can we build?" to "how fast can we tell whether this is right?" Consequently, investment should flow toward better review surfaces, not better prompts. Where possible, agents should verify their output against realistic environments before involving a human. Where human feedback is unavoidable, it should be made instantaneous.
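
The "five approaches, one afternoon" mental model can be sketched in a few lines of Python. This is an illustrative toy, not anything from Parsons' guide: the candidate implementations, the `verify_candidates` helper, and the squaring task are all invented for this example. The point is only that once verification is cheap, evaluating many candidates costs little more than evaluating one.

```python
def verify_candidates(candidates, checks):
    """Return the labels of candidates that pass every check.

    candidates: dict mapping a label to a callable implementation
    checks: list of (input, expected_output) pairs
    """
    passing = []
    for label, impl in candidates.items():
        if all(impl(arg) == expected for arg, expected in checks):
            passing.append(label)
    return passing

# Five "approaches" to the same trivial task: squaring a number.
candidates = {
    "loop": lambda n: sum(n for _ in range(n)),  # breaks on negatives
    "pow": lambda n: n ** 2,
    "mul": lambda n: n * n,
    "wrong": lambda n: n + n,                    # plausible but incorrect
    "abs": lambda n: abs(n) ** 2,
}
checks = [(2, 4), (3, 9), (-4, 16)]
print(verify_candidates(candidates, checks))  # ['pow', 'mul', 'abs']
```

With a shared, automated check, surviving candidates fall out immediately; without one, each of the five would need a slow human review.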

Building Better Feedback Loops

Concretely, this means establishing rigorous automated gates. Instead of relying solely on a developer's intuition, teams should integrate static analysis, unit tests, integration tests, and type checkers into the AI's workflow. The agent should be required to run these checks and only present results to the human after passing them. This approach not only catches errors early but also trains the AI to produce higher-quality code over time.
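
One minimal way to wire such gates together is a small runner that executes each check in order and reports the first failure, so the agent (or a CI step) can retry before a human ever sees the diff. This is a sketch under assumptions: the gate names and the mypy/ruff/pytest commands are placeholders for whatever toolchain a given project actually uses.

```python
import subprocess
import sys

# Each gate is a (name, command) pair. These commands are illustrative --
# substitute your project's own type checker, linter, and test runner.
DEFAULT_GATES = [
    ("types", ["mypy", "src/"]),
    ("lint", ["ruff", "check", "src/"]),
    ("tests", ["pytest", "-q"]),
]

def run_gates(gates=DEFAULT_GATES):
    """Run each gate in order; return (passed, name_of_first_failure).

    Stops at the first failing gate so the agent gets fast, specific
    feedback instead of a wall of mixed errors.
    """
    for name, cmd in gates:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, name
    return True, None
```

An agent loop would call `run_gates()` after every change and only surface the work to a reviewer once it returns `(True, None)`.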

The Programmer's Shifting Role

Perhaps the most profound insight from Parsons' guide is the redefinition of the senior engineer's job. The role is no longer about writing code line by line, but about training the AI to write code correctly. This involves shaping the harness—the combination of prompts, guardrails, and automated checks—so that the AI's output is correct on the first attempt. As Parsons notes, senior engineers who fear their job is becoming a series of diff approvals should instead focus on making those approvals unnecessary by improving the harness. This work compounds: a well-tuned harness benefits every future session, whereas reviewing diffs only addresses the current batch.

Harness Engineering: The Next Frontier

Early this month, Birgitta Böckeler published an outstanding article on harness engineering, which quickly attracted widespread attention. She followed it up with a video discussion with Chris Ford, diving deeper into the concept. At its core, harness engineering is about building the computational sensors—such as static analysis and test suites—that allow the AI to self-correct and produce reliable outputs. These sensors act as the eyes and ears of the development process, catching issues before they reach a human reviewer.

Computational Sensors in Practice

Böckeler and Ford discuss how LLMs excel at exploitation—they can quickly generate plausible solutions—but without a harness they can also produce plausible-sounding but incorrect code. The harness provides the necessary constraints: it validates assumptions, enforces coding standards, and verifies behavior against specifications. By embedding these checks into the development pipeline, teams can maintain high quality even at high velocity.
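
A behavioral "sensor" of this kind can be as simple as a specification expressed as examples plus an invariant, which every candidate implementation must satisfy before a reviewer sees it. The sketch below is hypothetical throughout: `check_against_spec`, the slug-building task, and both candidate implementations are invented to show how a harness distinguishes plausible-looking code from correct code.

```python
def check_against_spec(impl, examples, invariant):
    """Return a list of human-readable failures (empty list means pass)."""
    failures = []
    for arg, expected in examples:
        got = impl(arg)
        if got != expected:
            failures.append(f"example {arg!r}: expected {expected!r}, got {got!r}")
        if not invariant(arg, got):
            failures.append(f"invariant violated for input {arg!r}")
    return failures

# Spec for a slug function: output must be lowercase with no spaces.
examples = [("Hello World", "hello-world"), ("AI", "ai")]
invariant = lambda _arg, out: out == out.lower() and " " not in out

plausible_but_wrong = lambda s: s.replace(" ", "-")   # forgets to lowercase
correct = lambda s: s.lower().replace(" ", "-")

print(check_against_spec(plausible_but_wrong, examples, invariant))
print(check_against_spec(correct, examples, invariant))  # []
```

The "plausible" candidate looks reasonable at a glance, which is exactly the failure mode the harness exists to catch: only the sensor, not intuition, reliably separates the two.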

Practical Tools and Recommendations

For those ready to adopt agentic engineering, the choice of tool matters. Parsons explicitly recommends Claude Code and Codex CLI for their built-in harness capabilities. These tools not only generate code but also integrate verification steps into the agent's workflow. The inner harness they provide is a significant advantage, as it reduces the cognitive load on the developer and ensures consistency.

Getting Started

  • Start small: use the AI for isolated tasks with clear acceptance criteria.
  • Invest in automated tests before scaling agent usage.
  • Document your harness configurations so they can be reused and improved.
  • Foster a culture of feedback: encourage junior developers to interrogate AI outputs.
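
The first two bullets can be made concrete with a small example. Everything here is invented for illustration: the `parse_duration` task, its acceptance table, and the candidate implementation stand in for whatever isolated task a team chooses. The discipline being shown is that the criteria exist, as executable checks, before any agent-generated code is accepted.

```python
# Step 1: the human writes the acceptance criteria for an isolated task.
ACCEPTANCE = [
    ("45s", 45),
    ("5m", 300),
    ("2h", 7200),
]

# Step 2: a candidate (here, hypothetically agent-produced) implementation.
def parse_duration(text):
    """Parse strings like '45s', '5m', '2h' into seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    return int(text[:-1]) * units[text[-1]]

# Step 3: the candidate is accepted only if every criterion holds.
def accepted(impl):
    return all(impl(raw) == seconds for raw, seconds in ACCEPTANCE)

print(accepted(parse_duration))  # True
```

Because the acceptance table predates the implementation, approval is a mechanical check rather than a line-by-line diff review.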

Conclusion

The era of AI-assisted development demands a shift in mindset. The most effective practitioners are those who move from vibe coding to disciplined harness engineering, who prioritize verification speed over generation speed, and who see their role as teachers and architects of the AI's environment. By following the strategies outlined by experts like Chris Parsons, Simon Willison, and Birgitta Böckeler, developers can harness the full potential of AI while maintaining rigorous quality standards.