Monday, 11 May 2026

The Deep Feed

The Friction of Progress

64 min read · 6 pieces
In this issue
01 The Spec-First Revolution 12 min
02 The Weight of Words 8 min
03 The Incorruptible Company 10 min
04 The Token Economy 14 min
05 The Inference Shift 15 min
06 The Zombie Internet 5 min
Editor's Letter

Tonight we examine the tension between the frictionless promise of AI and the messy, high-friction reality of human excellence. From the way engineers are rewriting their workflows to the degradation of academic prose, we look at what is gained when we accelerate and what is lost when we stop trying.

01 Lenny's Newsletter

The Spec-First Revolution

How Notion is moving from writing code to dictating intent

By Claire Vo · 12 min read
Editor's note: Engineering is shifting from manual construction to high-level orchestration.

The traditional software engineering workflow is dying. For decades, the loop was predictable: read a ticket, write code, run tests, wait for CI, repeat. But at Notion, the introduction of AI agents is breaking this cycle. Ryan Nystrom and his team are moving toward a model where the primary unit of work is no longer the line of code, but the specification. Instead of typing out functions, engineers are dictating ideas into Whisper, having an agent like Codex format those thoughts into a rigorous spec, and then letting an autonomous agent implement and verify the work. This isn't just about speed; it is about a fundamental shift in the engineer's role from a builder to an architect of intent.

The Death of the Manual Pull Request

In this new world, the 'Boxy' system allows engineers to trigger background agents directly from Notion comments. You don't just leave a note for a human colleague; you @mention a system that can generate a full pull request, complete with screenshots and verification, in twenty minutes. This collapses the time between an idea and its implementation. However, this speed creates a new bottleneck: Continuous Integration (CI). If an agent can ship code in minutes, but the testing suite takes an hour, the entire advantage of AI-driven development evaporates. This is why Notion is currently obsessed with 'Project Afterburner'—a mission to slash CI times to a quarter of their current duration. Speed at the keyboard is useless if the infrastructure is stuck in the slow lane.
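The arithmetic behind 'Project Afterburner' is easy to sketch. In the minimal illustration below, the 20-minute agent time comes from the piece; the CI durations are assumptions chosen to match its hour-long-suite, cut-to-a-quarter framing:

```python
# Illustrative back-of-envelope: end-to-end cycle time for an agent-driven PR.
# The 20-minute agent figure is from the piece; the CI figures are assumptions.

def cycle_time_min(agent_min: float, ci_min: float, retries: int = 0) -> float:
    """Total wall-clock minutes from comment to merged PR.
    Each retry re-runs both the agent fix and the CI suite."""
    return (agent_min + ci_min) * (retries + 1)

before = cycle_time_min(agent_min=20, ci_min=60)  # hour-long CI dominates
after = cycle_time_min(agent_min=20, ci_min=15)   # CI cut to a quarter
print(before, after)  # 80.0 35.0
```

With a slow suite, CI is three-quarters of every cycle; cut it to a quarter and the agent's speed advantage survives even a retry.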

The spec is the new changelog. It is the version control for how a feature actually works, not just how it is written.

This shift demands a new kind of discipline. When an agent is doing the heavy lifting, the engineer's value lies in the precision of their requirements. If the spec is vague, the implementation will be wrong, and debugging an autonomous agent's logic is far more taxing than debugging a human's syntax error. Engineers must learn to prompt agents to defend their reasoning under pressure. It is a transition from being a craftsman of syntax to a judge of logic. The role of the senior engineer is evolving into that of a high-level auditor, ensuring that the autonomous output aligns with the original intent.

The New Engineering Stack
  • Voice-to-Spec: Using Whisper to capture raw architectural ideas.
  • Agentic Implementation: Using Codex to turn specs into commits.
  • Automated Verification: Using subagents to test and defend code logic.
  • High-Frequency Context: Using agents to automate daily standup pre-reads.
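The stack above can be sketched as a pipeline. Every function below is a hypothetical placeholder standing in for the real tools (a Whisper-style transcriber, a Codex-style spec formatter, the implementing agent); none of this is Notion's actual API:

```python
# A minimal sketch of the voice-to-spec-to-PR loop. All functions are
# hypothetical placeholders, not real APIs; only the pipeline shape matters.

from dataclasses import dataclass

@dataclass
class Spec:
    title: str
    requirements: list[str]

def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text step (e.g. a Whisper-style model)."""
    return "add dark mode toggle to settings; persist per workspace"

def draft_spec(raw_idea: str) -> Spec:
    """Placeholder for an agent that formats a raw idea into a rigorous spec."""
    return Spec(title="Dark mode toggle",
                requirements=[r.strip() for r in raw_idea.split(";")])

def implement(spec: Spec) -> str:
    """Placeholder for the autonomous implement-and-verify step."""
    return f"PR: {spec.title} ({len(spec.requirements)} requirements verified)"

pr = implement(draft_spec(transcribe("idea.wav")))
print(pr)  # PR: Dark mode toggle (2 requirements verified)
```

The engineer's leverage lives entirely in `draft_spec`: everything downstream inherits whatever precision, or vagueness, the spec contains.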

Ultimately, this workflow changes the economics of software. When the cost of generating a feature drops toward zero, the value of that feature also drops. What remains valuable is the ability to define what should exist in the first place. The engineers who thrive will be those who can think clearly enough to write the instructions that the machines follow. The era of the 'coder' is ending; the era of the 'spec writer' has begun.

Key Takeaway

In an AI-driven workflow, the quality of your specification determines the quality of your product.

02 Cal Newport

The Weight of Words

Why AI-generated prose is failing the test of utility

By Study Hacks · 8 min read
Editor's note: Efficiency in writing often comes at the expense of actual communication.

There is a growing rot in academic publishing. Editors at the journal *Organization Science* have noticed a strange phenomenon: a massive surge in manuscript submissions alongside a dramatic decline in actual readability. The papers look polished on the surface, but they feel empty. The words are there, but the meaning is elusive. This is the direct consequence of the generative AI boom. Since 2023, the volume of submissions has spiked, but the percentage of papers using minimal AI has plummeted. The result is a flood of text that is technically correct but cognitively exhausting to read.

The Illusion of Polished Text

The common assumption is that AI makes writing better by smoothing out errors. In reality, it often does the opposite. AI-generated prose tends to lean on longer words, complex sentence structures, and excessive jargon. It uses 'nominalizations'—turning verbs into clunky nouns—that make sentences heavy and hard to parse. It creates a veneer of sophistication that masks a lack of depth. This is why the 'reading ease' scores for academic papers have dropped significantly. The tools make the act of writing easier for the individual, but they make the act of reading harder for the community.
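The 'reading ease' scores mentioned here usually refer to the Flesch Reading Ease formula, which penalizes exactly what is described above: long sentences and long words. The formula below is the standard one, but the syllable counter is a crude vowel-group heuristic, so scores are approximate:

```python
# Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
# Higher is easier to read. The syllable counter is a rough approximation.

import re

def syllables(word: str) -> int:
    # Count contiguous vowel groups as a proxy for syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syl / len(words))

plain = "We tested the model. It failed on long inputs."
heavy = ("The operationalization of the evaluative framework necessitated "
         "a comprehensive reconceptualization of the methodological paradigm.")
print(flesch_reading_ease(plain) > flesch_reading_ease(heavy))  # True
```

The nominalization-heavy sentence scores far below zero; the plain pair of sentences lands around 80, the range of ordinary conversational prose.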

Making things faster or easier is not the same as making things better.

The consequences for the scientific community are measurable and severe. *Organization Science* is now desk-rejecting nearly 70% of manuscripts that show heavy AI usage. Compare that to the 44% rejection rate for papers written without AI. More tellingly, only 3.2% of high-AI papers actually make it to acceptance, compared to 12% for low-AI papers. This isn't just a matter of preference; it is a matter of quality control. The AI is producing a high volume of 'noise' that taxes the time and patience of peer reviewers, slowing down the progress of actual science.
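The quoted figures imply a large swing in the odds. A quick check of the arithmetic, using only the numbers stated above:

```python
# Arithmetic on the Organization Science figures quoted in the text.
desk_reject = {"high_ai": 0.70, "low_ai": 0.44}
accepted = {"high_ai": 0.032, "low_ai": 0.12}

survive_desk = {k: 1 - v for k, v in desk_reject.items()}
print(round(survive_desk["low_ai"] / survive_desk["high_ai"], 2))  # ~1.87x past the desk
print(round(accepted["low_ai"] / accepted["high_ai"], 2))          # 3.75x to acceptance
```

Low-AI papers are almost twice as likely to survive the desk, and nearly four times as likely to be accepted outright.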

Why AI Writing Fails
  • Increased complexity without increased clarity.
  • Over-reliance on jargon and 'big' words.
  • A lack of structural cohesion that forces the reader to work harder.
  • The decoupling of the writing process from the thinking process.

The lesson here is a cautionary one for any professional. The temptation to use AI to bypass the 'hard work' of drafting is immense. But writing is not just a way to record thoughts; it is a way to refine them. When you outsource the struggle of phrasing, you often outsource the clarity of the thought itself. There are no shortcuts to deep understanding, and there should be no shortcuts to communicating it.

Key Takeaway

Efficiency is a trap if it produces output that no one can actually use.

03 Lenny's Newsletter

The Incorruptible Company

How to build an institution that survives its own success

By Lenny Rachitsky · 10 min read
Editor's note: Success is the greatest threat to a company's original mission.

Most successful companies are not destroyed by competitors, but by their own internal gravity. As a company grows, it naturally drifts toward mediocrity, seeking to protect its current profits rather than its original mission. Eric Ries, author of *The Lean Startup*, calls this 'financial gravity'. Once a company becomes a massive, profitable entity, the incentives shift. The goal is no longer to innovate or solve problems, but to manage the existing machine and minimize risk. This is why 80% of venture-backed founders are ousted within three years of going public; the board and the market demand stability, which is often the enemy of the very thing that made the company great.

Governance as a Shield

To survive, a company needs more than just a good product; it needs structural protection. Companies like Anthropic, Costco, and Novo Nordisk have implemented specific governance models to prevent this drift. Anthropic, for example, is incorporated as a public benefit corporation, legally obligating it to weigh its stated mission against profit, while Novo Nordisk is majority-controlled by a foundation rather than by public shareholders. Structures like these cut against the traditional doctrine of shareholder primacy, but they are what keep a company 'incorruptible' under the pressures of the market.

Success won't protect you—it makes you a bigger target for the forces of mediocrity.

The danger of success is that it creates a target. When a company becomes a market leader, every decision it makes is scrutinized by shareholders, regulators, and competitors. The pressure to 'optimize' every aspect of the business leads to a slow erosion of the culture and the product quality that built the company in the first place. Without a formal mechanism to protect the mission, the company will inevitably succumb to the need for predictable, safe, and ultimately uninspired growth.

Strategies for Longevity
  • Implement governance structures that prioritize mission over quarterly earnings.
  • Recognize 'financial gravity' early and actively fight it.
  • Use legal filings to codify the company's core purpose.
  • Protect the culture that enables high-velocity innovation.

Building a company that lasts requires a long-term view that is often at odds with the immediate demands of the financial markets. It requires the courage to say 'no' to profitable opportunities that would compromise the company's soul. In the end, the most resilient companies are those that view their mission not as a marketing slogan, but as a structural constraint.

Key Takeaway

Structure your company to protect your mission, or the market will eventually destroy it.

04 Lenny's Newsletter

The Token Economy

Gamifying AI adoption at Sendbird

By Lenny Rachitsky · 14 min read
Editor's note: Transformation happens through culture and curiosity, not top-down mandates.

How do you turn an entire organization into an AI-first powerhouse? You don't issue a memo; you build a marketplace. John Kim, CEO of Sendbird, has approached AI adoption not as a training program, but as a product. He created the 'Automators' platform, a gamified internal system where employees can post 'quests'—requests for automation or new tools. Engineers and AI agents then pick up these quests. It is a bottom-up approach that turns AI usage into a visible, rewarding, and competitive activity.

The Rise of the AI Gods

To drive engagement, Kim tracks token usage across the company, but not for the purpose of surveillance. Instead, he uses it to create an aspirational hierarchy. Employees move through tiers: Beginner, Intermediate, Expert, Architect, Catalyst, and finally, 'AI God' (those consuming over 100M tokens a day). This makes AI fluency a status symbol. When the top token consumers are the executives themselves, it sends a clear signal: this is not a peripheral experiment; it is the core of how we work. It moves AI from a 'tool you use' to a 'capability you master'.
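The tier ladder can be sketched as a simple lookup. Only the top threshold, 100M tokens a day for 'AI God', comes from the piece; every lower cutoff below is invented purely for illustration:

```python
# Sketch of the Sendbird tier ladder. Only the 100M/day 'AI God' threshold
# is stated in the piece; all lower cutoffs are assumed for illustration.

TIERS = [  # (minimum daily tokens, tier name), ascending
    (0, "Beginner"),
    (100_000, "Intermediate"),    # assumed cutoff
    (1_000_000, "Expert"),        # assumed cutoff
    (5_000_000, "Architect"),     # assumed cutoff
    (20_000_000, "Catalyst"),     # assumed cutoff
    (100_000_000, "AI God"),      # stated in the piece
]

def tier(daily_tokens: int) -> str:
    name = TIERS[0][1]
    for floor, label in TIERS:
        if daily_tokens >= floor:
            name = label
    return name

print(tier(50_000), tier(150_000_000))  # Beginner AI God
```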

The future of work belongs to people with curiosity and agency, not just years of experience.

One of the most effective ways Kim has unlocked productivity is by giving non-technical teams the ability to build. By providing secure, pre-configured templates, his marketing team was able to build a fully functional e-commerce store with Stripe integration without needing a single engineer. In the old world, this would have been a months-long project buried in a product roadmap. In the new world, it was a matter of days. This is the true power of AI: it lowers the barrier between an idea and its execution for everyone in the company.

The AI Adoption Playbook
  • Treat internal AI tooling as a product, not a program.
  • Gamify usage through quests and experience points.
  • Create 'safe' templates for non-technical builders.
  • Model the behavior from the executive level down.

The ultimate goal of this approach is to 'smooth the curve' of productivity. Kim looks for a steady stream of token usage that doesn't dip on weekends or holidays. When the curve is smooth, it means the AI agents are working 24/7, filling the gaps when humans are unavailable. This is a fundamental shift in the concept of labor: we are no longer just augmenting human work; we are building a continuous, autonomous operational layer.
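One plausible way to quantify a 'smooth curve' is the ratio of weekend to weekday usage. Both the metric and the sample numbers below are illustrative assumptions, not Sendbird's actual dashboard:

```python
# Illustrative smoothness metric: weekend vs. weekday token usage.
# A ratio near 1.0 means agents keep working when humans are away.

def weekend_ratio(daily_tokens: list[int]) -> float:
    """daily_tokens: 7 values, Monday through Sunday.
    Returns weekend average / weekday average."""
    weekday = sum(daily_tokens[:5]) / 5
    weekend = sum(daily_tokens[5:]) / 2
    return weekend / weekday

human_driven = [90, 95, 92, 88, 91, 12, 10]   # usage collapses on weekends
agent_driven = [90, 95, 92, 88, 91, 89, 90]   # agents keep running
print(round(weekend_ratio(human_driven), 2))  # 0.12
print(round(weekend_ratio(agent_driven), 2))  # 0.98
```

A ratio near 1.0 is the 'smooth curve' Kim looks for; a deep weekend dip means the work still stops when the humans do.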

Key Takeaway

AI adoption is a cultural challenge that is best solved through gamification and agency, not mandates.

05 Stratechery

The Inference Shift

Why the next era of AI hardware won't look like Nvidia

By Stratechery · 15 min read
Editor's note: The hardware bottleneck is moving from training to inference.

The semiconductor market is undergoing a massive structural shift. For the last few years, the story has been dominated by Nvidia and the massive compute requirements of training large language models. Training is a massively parallel process that requires thousands of GPUs to work in lockstep, communicating constantly. This is why Nvidia’s dominance in networking and high-bandwidth memory has been so absolute. But as AI moves from the laboratory to the real world, the primary workload is shifting from training to inference.

The Complexity of the Decode

Inference is a different beast. While the initial 'prefill' stage of inference is highly parallel, the subsequent 'decode' stage—the part where the model actually generates tokens one by one—is largely serial and memory-bandwidth bound. For every single token produced, the system must read the entire set of model weights and the growing 'KV cache' (the model's short-term memory of the conversation). This creates a massive demand for memory bandwidth that is fundamentally different from the demands of training. The bottleneck is no longer just raw calculation speed; it is how fast data can move from memory to the processor.
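This bandwidth bound is easy to estimate: serial decode speed for a single request cannot exceed memory bandwidth divided by the bytes read per token. A back-of-envelope sketch, where the model size, cache size, and bandwidth figures are illustrative assumptions rather than numbers from the piece:

```python
# Why decode is bandwidth-bound: each token must stream the full weights
# plus the KV cache from memory. All figures below are assumed examples.

def max_tokens_per_sec(weight_bytes: float, kv_cache_bytes: float,
                       mem_bandwidth_bytes_per_sec: float) -> float:
    """Upper bound on serial decode speed for one request."""
    return mem_bandwidth_bytes_per_sec / (weight_bytes + kv_cache_bytes)

GB = 1e9
weights = 70e9 * 2   # assumed: 70B parameters at 16-bit precision (~140 GB)
kv_cache = 10 * GB   # assumed: cache for a long conversation
bandwidth = 3000 * GB  # assumed: a modern HBM accelerator (~3 TB/s)
print(round(max_tokens_per_sec(weights, kv_cache, bandwidth), 1))  # 20.0
```

Note that raw FLOPS never appears in the bound: even an infinitely fast processor cannot decode faster than memory can feed it, which is exactly why batching and specialized inference silicon matter.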

What works for training does not necessarily work for inference.

This is why companies like Cerebras are gaining traction. While Nvidia’s GPUs are versatile 'jacks-of-all-trades' that can handle both training and inference, Cerebras is building architectures specifically designed for the unique constraints of inference. By using much larger single-chip designs, they can bypass some of the networking bottlenecks that plague multi-GPU systems. The era of the GPU-only AI stack is ending, replaced by a more heterogeneous landscape where specialized silicon handles different parts of the AI lifecycle.

Training vs. Inference
  • Training: Massively parallel, requires high-speed chip-to-chip networking.
  • Inference (Prefill): Highly parallel, compute-intensive.
  • Inference (Decode): Serial, memory-bandwidth bound, requires massive KV cache management.

As agents become more ubiquitous, the sheer volume of inference requests will dwarf the compute used for training. The companies that win the next decade won't necessarily be the ones with the most powerful training clusters, but the ones who can provide the most efficient, low-latency inference at scale. The 'Inference Shift' is the transition from building intelligence to deploying it.

Key Takeaway

The hardware winner of the AI era will be determined by memory bandwidth, not just raw compute.

06 Simon Willison

The Zombie Internet

The erosion of human connection in the age of agents

By Simon Willison · 5 min read
Editor's note: We are entering a period of profound digital alienation.

The internet is changing in a way that is difficult to describe but easy to feel. It is not just the 'Dead Internet' theory—the idea that bots are talking to other bots—that we are witnessing. We are entering the era of the 'Zombie Internet.' This is a more insidious state where the lines between human and machine are not just blurred, but intentionally obscured to facilitate mass-scale manipulation and low-effort content generation.

The Human-Agent Loop

The Zombie Internet is defined by a series of strange, asymmetrical interactions. It is people using AI to talk to people who are not using AI. It is people using AI to talk to other people who *are* using AI. It is influencer 'hustlebros' using automated agents to run YouTube channels, blogs, and social media accounts that exist solely to harvest engagement and drive ad revenue. In this environment, the 'content' is often just an AI summary of a real book, sold as if it were the original work, or an inspirational Reddit thread written by a marketing firm's agent.

The internet is becoming a hall of mirrors where the original human intent is lost in a loop of automated responses.

The mental cost of navigating this landscape is significant. Filtering through the noise becomes an exhausting, full-time job. When you can no longer trust that a heartfelt comment or a piece of advice is coming from a human being with lived experience, the social contract of the internet begins to dissolve. We are left in a state of constant skepticism, where every interaction feels like a potential encounter with a sophisticated marketing script.

Signs of the Zombie Internet
  • AI-generated summaries being sold as primary sources.
  • Automated social media accounts mimicking human personality for engagement.
  • Mass-scale 'inspirational' content designed by algorithms.
  • The feeling of exhaustion when trying to find genuine human connection.

This is not just a technological problem; it is a psychological one. As the cost of generating 'human-like' content drops to zero, the value of genuine, unmediated human connection rises. The challenge for the next decade will be finding ways to verify authenticity in a world where the appearance of humanity can be perfectly simulated.

Key Takeaway

As the cost of content hits zero, the value of authenticity becomes infinite.

Endnote
Tonight's pieces reveal a singular truth: we are currently in a period of massive friction. We see it in the engineering workflows of Notion, where the speed of thought is hitting the wall of CI infrastructure. We see it in academia, where the ease of AI writing is creating a mountain of unreadable noise. We see it in the hardware markets, where the shift from training to inference is forcing a total rethink of silicon architecture. And we see it in our social fabric, as the 'Zombie Internet' threatens to erode the very possibility of human connection. The common thread is that 'easy' is not the same as 'better'. The tools that allow us to move faster often force us to deal with more complexity, more noise, and more alienation. The winners of this era will not be those who simply use AI to do things faster, but those who use it to build more meaningful, more robust, and more authentic structures.
In a world where everything can be automated, what is the one thing you will insist on doing manually?
The Deep Feed · A nightly magazine · Monday, 11 May 2026