AI writes bad code, but what if that’s the good news? (Yannick De Pauw)
Is messy, AI-generated code a threat or an opportunity? For non-technical founders and product owners, AI is becoming a powerful prototyping intern, enabling rapid validation of ideas directly in the codebase. This creates a natural tension with engineers focused on clean, scalable architecture. The key strategic takeaway is to reframe the engineering role from gatekeeper to mentor. Instead of rejecting AI-generated code, engineers can guide its evolution from a promising, if flawed, prototype into a robust, production-ready feature. As the author notes from his own experience, “Refactoring to me means: this feature is worth keeping.” This mindset shift treats messy code not as a failure, but as a signal that an idea has enough merit to invest in, turning AI into a collaborative tool for accelerating value discovery. (link)
LLMs are mirrors of operator skill (Geoffrey Huntley)
With AI fundamentally changing software development, the way we identify skilled engineers is broken. The author argues that since large language models can solve most traditional interview questions, screening must evolve to assess a candidate's true proficiency with the new tools. An LLM, in this view, is a mirror reflecting the operator's skill. The critical takeaway is to shift interviews from rote problem-solving to direct observation. Companies should ask candidates to "dance with the LLM" on a screen share, observing their workflow, how they build context, and their ability to critique and refine AI-generated output. This assesses deeper skills like taste and judgment, which are the new differentiators. As the author puts it, if an interviewee teaches you a new meta, they're a great fit. (link)
When AI Has Better Taste Than You
As AI's creative and analytical skills accelerate, what is left for humans to contribute? Drawing on a conversation with Notion CEO Ivan Zhao, this essay proposes a framework of three value components: capabilities (our skills), taste (our preferences), and agency (our will to act). AI is rapidly conquering capabilities and can likely learn taste by pattern-matching across vast datasets. The most durable human advantage, therefore, is agency. It's the motivation to decide which problems are worth solving, the drive to pursue a vision, and the will to act on our values. While AI can optimize for a programmed objective, humans provide the initial spark of intent. In a world where skills and even taste become commoditized, our agency—the choice to move the hand to draw what the eye admires—may be our final, most precious moat. (link)
Pace Layering: How Complex Systems Learn and Keep Learning (Stewart Brand)
How do robust civilizations adapt and endure over time? The answer lies in "pace layering," a model where different parts of the system operate at different speeds. Brand identifies six layers: fast-moving Fashion/art and Commerce at the top, followed by slower Infrastructure, Governance, Culture, and finally, the geological pace of Nature. The strategic insight is that this structure creates a healthy tension that fosters resilience. The fast layers innovate and propose, while the slow layers stabilize and constrain. "Fast learns, slow remembers. Fast gets all our attention, slow has all the power." A society runs into trouble when one layer's pace is forced upon another, like when commerce's demand for speed is allowed to degrade nature. Understanding this dynamic provides a powerful framework for thinking about long-term strategy and systemic health. (link)
A recent study suggested LLMs create "cognitive debt," but this piece offers a compelling reframe: prompting isn't just a simple action, it's a form of management. The author introduces the "Prompting-Managing Impact Equivalence Principle," arguing that the cognitive load of using an AI assistant mirrors that of supervising a junior human. The perceived mental "idling" isn't sloth; it's the watchful calm of a manager engaged in supervisory control: delegate, monitor, integrate, and ship. This perspective holds a key lesson. The challenge of the AI era isn't a neurological decline but a skills gap. We're putting novice users in a manager's seat without management training. The upgrade path isn't to ditch the tools but to teach people the craft of oversight, quality control, and strategic intervention. (link)
June 2025: The AI agent schism
A schism is quietly forming in the world of AI agents, splitting between non-deterministic (autonomous, reasoning) and deterministic (predictable, API-like) approaches. While the hype often centers on autonomous agents that can think for themselves, the reality in high-stakes enterprise environments is starkly different. For enterprise use cases in sectors like healthcare and finance, what users truly want is an API—a tool that delivers the same output reliably, every single time. The core takeaway is that at any real scale, variability is a liability. A non-deterministic agent that chooses its own path introduces an intolerable risk of exceptions. The winning strategy for enterprise agents, therefore, is to be deterministic by design, using models to handle edge cases and self-heal UI changes, not to make core operational decisions. (link)
ChatGPT: H1 2025 Strategy (OpenAI)
It’s not often we get to see the internal strategy of a company at the center of the tech universe, making this leaked document a valuable artifact for thinking about our own strategic planning. The plan details OpenAI's vision to evolve ChatGPT from a chatbot into a "super-assistant" that is deeply personalized and serves as a primary interface to the internet. The strategy outlines a "T-shaped" assistant with broad skills for daily life (planning trips, managing calendars) and deep expertise in complex domains like coding. A core principle for H1 2025 is to first build a valuable, indispensable product before pursuing wider third-party integrations. Key initiatives include relentless weekly iteration, strengthening the brand to be synonymous with its category, and a policy push to let users set ChatGPT as their default assistant on all major operating systems. (link)