Tuesday, 12 May 2026

The Deep Feed

The friction of the new age

64 min read · 6 pieces
In this issue
01 The Spec-First Revolution 12 min
02 The Weightless Prose Problem 8 min
03 The Gamified Transformation 15 min
04 The Rule-Follower's Delusion 10 min
05 The Inference Shift 14 min
06 The Zombie Internet 5 min
Editor's Letter

Tonight, we examine the tension between the frictionless promise of AI and the messy, difficult reality of human excellence. From the silicon architecture of the next decade to the erosion of academic integrity, we look at what is gained and what is lost when we automate the struggle.

01 Lenny's Newsletter

The Spec-First Revolution

How Notion is rewriting the engineering workflow

By Claire Vo · 12 min read
Editor's note: A look at how the role of the engineer is shifting from writing syntax to defining intent.

The traditional software engineering workflow is dying. For decades, the process remained largely unchanged: a developer sits before a terminal, translating logic into syntax, wrestling with compilers, and managing the tedious mechanics of implementation. But at Notion, the emergence of AI agents is turning this model on its head. Ryan Nystrom and his team are moving toward a 'spec-driven' approach, where the primary act of engineering is no longer coding, but the precise articulation of intent. In this new world, the engineer acts more like an architect or a high-level strategist, dictating complex requirements into a system that then handles the heavy lifting of execution.

The Death of the Syntax Struggle

When an engineer can dictate an idea into Whisper, have an agent like Codex format it into a formal specification, and then watch that agent implement and verify the code, the bottleneck shifts. The difficulty is no longer in knowing where the semicolon goes; it is in knowing exactly what the system should do. This requires a higher level of rigor. If your specification is vague, the resulting code will be a hallucinated mess. The 'spec' becomes the source of truth—a version-controlled document that describes how a feature actually works, serving as both the instruction set for the agent and the definitive changelog for the human team.
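A spec in this workflow is a short, version-controlled document rather than code. The file below is a hypothetical illustration of what such a document might contain; the feature, field names, and format are invented for this sketch and are not Notion's actual spec format.

```yaml
# spec/mentions-autocomplete.yaml -- hypothetical example, not Notion's real format
feature: mentions-autocomplete
intent: >
  When a user types "@" in any editable block, show a ranked list of
  teammates; selecting one inserts a mention and notifies that person.
behavior:
  - trigger: typing "@" followed by at least one character
  - ranking: recent collaborators first, then alphabetical
  - latency_budget_ms: 150
verification:
  - unit: ranking covers empty-team and single-match cases
  - integration: the mention notification is delivered exactly once
changelog:
  - "2026-05-10: initial spec dictated via voice, formatted by agent"
```

Note what the vagueness test looks like in practice: every line is something an agent can either implement or verify. A spec that said only "add @-mentions" would leave the ranking, the latency budget, and the notification semantics to be hallucinated.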

The bottleneck is no longer the ability to write code, but the ability to define logic with absolute clarity.

This shift necessitates a massive change in infrastructure. If agents are shipping code at high velocity, the traditional Continuous Integration (CI) pipelines become a massive drag. Notion's 'Project Afterburner' is a direct response to this: a push to cut CI times to a quarter of their current duration. If an agent can write a pull request in minutes, but the testing suite takes an hour, the entire speed advantage of AI is lost. High-frequency, high-quality feedback loops are the only way to keep pace with autonomous agents.

The New Engineering Stack
  • Voice-to-spec: Using transcription to capture raw logic
  • Agentic implementation: Moving from manual coding to reviewing agent outputs
  • High-speed CI: Reducing the feedback loop to match AI speed
  • Contextual agents: Integrating tools like Boxy to handle background tasks

This evolution does not make engineers obsolete; it makes them more responsible. As the distance between idea and implementation shrinks, a bad idea or a poorly defined requirement costs almost nothing in effort, but its errors propagate through the system faster than ever. The engineer of 2026 is a curator of logic, a defender of reasoning, and a master of the specification. The craft is moving from the fingers to the mind.

Key Takeaway

In the age of AI, the most valuable skill is the ability to define exactly what you want.

02 Cal Newport

The Weightless Prose Problem

Why making writing easier is making it worse

By Study Hacks · 8 min read
Editor's note: An examination of how generative AI is flooding academia with polished but empty text.

There is a strange phenomenon occurring in academic journals. Manuscripts are arriving in higher volumes than ever, yet they feel strangely hollow. Editors at journals like *Organization Science* report a new kind of writing: text that is technically correct but lacks any sense of substance. It is 'weightless' prose. On the surface, the papers look professional, but as readers move through the sentences, they find themselves struggling to grasp the actual meaning. The words are there, but the thought behind them seems to have evaporated.

The Paradox of Readability

Standard metrics suggest that AI makes writing cleaner. In reality, the opposite is happening in high-level discourse. AI-generated text tends to lean on longer words, complex sentence structures, and excessive jargon to mask a lack of depth. This has led to a measurable drop in 'reading ease' scores. While the text might look 'polished' to a casual observer, it is actually harder for a human brain to parse and absorb. We are seeing a surge in papers that are technically sophisticated in their vocabulary but intellectually shallow in their execution.
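The 'reading ease' scores referenced here are typically the Flesch formula, which penalizes long sentences and polysyllabic words. The sketch below is a minimal implementation with a crude vowel-group syllable counter, offered as an illustration of the metric rather than the scoring method the journal editors used; the two sample sentences are invented.

```python
import re

def syllables(word):
    # Crude heuristic: count runs of consecutive vowels (minimum 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher is easier; dense academic prose can go negative.
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * len(words) / len(sents) - 84.6 * syl / len(words)

plain = "We ran the test. It failed. We fixed the bug."
puffed = ("The organizational implementation necessitated "
          "comprehensive reconceptualization of the paradigm.")
```

The short declarative sentences score high; the jargon-laden sentence drags the score below zero, which is exactly the 'polished but unreadable' signature the editors describe.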

Making things faster or easier is not the same as making things better.

The data bears this out. In *Organization Science*, the rejection rate for papers showing heavy AI usage is markedly higher than for papers written without it: high-AI papers are desk-rejected at a rate of 70%, compared to 44% for human-authored work. The editors aren't necessarily spotting the AI, but they are sensing the lack of rigor. The tools make the act of writing easier for the individual researcher, but they create a massive tax on the community of reviewers who must now sift through mountains of low-value content.

The Costs of AI-Assisted Writing
  • Increased volume of low-quality submissions
  • Higher cognitive load for peer reviewers
  • Erosion of clarity and directness
  • The substitution of 'polish' for actual insight

This is a cautionary tale for anyone in a knowledge-based profession. The temptation to use AI to bridge the gap between a half-formed thought and a finished document is immense. But writing is not just a way to record thoughts; it is a way to *form* them. When you outsource the struggle of articulation, you often outsource the thinking itself. There are no shortcuts to depth.

Key Takeaway

Efficiency in production often comes at the expense of quality in thought.

03 Lenny's Newsletter

The Gamified Transformation

How Sendbird is building an AI-first culture

By Lenny Rachitsky · 15 min read
Editor's note: A blueprint for leaders trying to move beyond AI mandates toward actual adoption.

Most corporate AI strategies fail because they are treated as top-down mandates. Executives buy licenses, send out a memo, and then wonder why nothing changes. John Kim, CEO of Sendbird, has taken a different approach: he treats internal AI adoption as a product. Instead of telling people to use AI, he has built an ecosystem that makes it rewarding, visible, and—most importantly—fun. This is not about training programs; it is about changing the social and economic incentives of the workplace.

Quests and Token Leaderboards

At the heart of this strategy is the 'Automators' platform. It functions as an internal marketplace where employees can post 'quests'—requests for specific automations or new tools. Engineers and AI agents then pick up these quests. To drive engagement, Kim has gamified the process. Completing a quest earns you experience points, which can be traded for tangible rewards like gift cards or high-level access. This turns AI adoption from a chore into a competitive, social activity.

The most successful AI transformations treat internal tooling as a product, not a program.

Kim also tracks a metric that most companies would find terrifying: token usage. By categorising employees into tiers—from 'Beginner' to 'AI God'—he makes AI fluency visible. This isn't used for punitive performance reviews, but for enablement. If a team's token usage is low, it signals a need for support. If it is high, it signals a champion. He even monitors the 'smoothness' of the token usage curve. If the usage dips on weekends, it means the human-AI partnership is still tethered to human working hours. The goal is a 24/7 operation where AI fills the gaps when humans are offline.
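The two metrics described above can be sketched in a few lines. The tier labels 'Beginner' and 'AI God' come from the piece; the intermediate tier names and all numeric cutoffs below are invented for illustration, since the article doesn't publish Sendbird's actual thresholds.

```python
def fluency_tier(monthly_tokens):
    # "Beginner" and "AI God" are Sendbird's labels; the middle tiers
    # and every numeric cutoff here are hypothetical.
    cutoffs = [
        (50_000_000, "AI God"),
        (10_000_000, "Advanced"),
        (1_000_000, "Intermediate"),
        (0, "Beginner"),
    ]
    for floor, name in cutoffs:
        if monthly_tokens >= floor:
            return name

def weekend_ratio(daily_tokens):
    # daily_tokens: seven values, Monday..Sunday.
    # A ratio near 1.0 means agents keep working when humans are offline;
    # a dip flags a partnership still tethered to office hours.
    weekday = sum(daily_tokens[:5]) / 5
    weekend = sum(daily_tokens[5:]) / 2
    return weekend / weekday
```

A team averaging 10M tokens on weekdays but 2M on weekends would score 0.2, exactly the kind of 'unsmooth' curve Kim treats as a signal for enablement rather than punishment.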

The AI Adoption Playbook
  • Build internal marketplaces for automation
  • Create secure templates to lower the barrier for non-engineers
  • Gamify fluency through visible leaderboards
  • Prioritise curiosity and agency over years of experience

Ultimately, the transition to an AI-first company requires a shift in hiring. Kim has moved away from valuing tenure and toward valuing curiosity and energy. In a world where knowledge can be retrieved instantly, the ability to learn and apply that knowledge is the only sustainable advantage. The leaders of the future won't be those with the most experience, but those with the most agency.

Key Takeaway

Don't mandate AI; build a culture where using it is the path of least resistance and highest reward.

04 Experimental History

The Rule-Follower's Delusion

Why regulation cannot fix a lack of integrity

By Adam Mastroianni · 10 min read
Editor's note: A sharp critique of the attempt to solve the scientific replication crisis through more rules.

The scientific community is currently obsessed with rules. To solve the replication crisis, the consensus is to mandate more transparency: preregistration of studies, public data sets, and larger sample sizes. The logic is simple: if we tighten the regulations, we reduce the ability for researchers to cheat, and thus, we get better science. But this logic ignores a fundamental truth about human nature: rules are only as effective as the people who follow them.

The Retracted Rigor

Consider a recent example. A group of researchers published a paper in 2023 claiming that high rates of replicability could be achieved by following a specific set of 'rigor-enhancing practices.' It was a victory for the regulation movement. A year later, the paper was retracted because the authors themselves had failed to follow those very practices. They had cherry-picked results and ignored their own preregistration. The very people proposing the solution were the ones violating it.

You can't turn a cheat into a scientist by making a rule against cheating.

This pattern is pervasive. We require clinical trials to post results publicly, yet only 45% do. We ask researchers to specify outcomes beforehand, yet when studies fail, they often sneak in different analyses to find a 'significant' result. The problem isn't a lack of guidelines; it's a lack of motivation. If the goal is to publish in a high-impact journal rather than to discover truth, then every new rule is just another obstacle to be bypassed through cleverness.

Why Regulations Fail
  • Rules are seen as hurdles rather than aids
  • Incentives reward 'significant' results over truth
  • Compliance is often performative rather than substantive
  • Complexity in rules provides more ways to hide errors

This applies to more than just science. It applies to interpersonal relationships and corporate governance. When we try to solve human problems with 'handbooks' and 'rules,' we often end up with people who are merely performing compliance. A partner who asks about your day because a rule dictates it is not the same as a partner who asks because they care. Integrity is a character trait, not a regulatory requirement.

Key Takeaway

Rules cannot substitute for the internal drive to be right.

05 Stratechery

The Inference Shift

Beyond the GPU era

By Stratechery · 14 min read
Editor's note: An analysis of the changing hardware requirements for the AI age.

For the past few years, the AI story has been synonymous with the GPU. Nvidia has become the most important company in the world because its chips are perfectly suited for the massive, parallel calculations required to train large language models. But as we move from the training phase to the deployment phase, the hardware requirements are changing. We are entering the era of inference, and the dominance of the general-purpose GPU is being challenged by a new, more heterogeneous landscape.

Training vs. Inference

Training a model is a massively parallel process. It requires enormous amounts of high-bandwidth memory (HBM) and incredibly fast chip-to-chip networking so that tens of thousands of GPUs can act as a single system. GPUs are excellent at this. Inference, the process of actually using the model to generate a response, is different. It is heavily memory-bandwidth bound: for every token the model generates, the system must read the full set of model weights, plus the growing context (the KV cache), from memory. This is a serial bottleneck that doesn't necessarily require the same brute-force parallel compute that training does.
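The bandwidth bound admits a back-of-envelope estimate: at batch size one, every generated token streams the full weights (plus KV cache) from memory, so throughput is roughly bandwidth divided by bytes read per token. The figures in the sketch below (a 70B-parameter model at 2 bytes per weight, ~3.35 TB/s of HBM bandwidth, in the range of recent datacenter GPUs) are illustrative assumptions, not a benchmark.

```python
def decode_tokens_per_sec(params_b, bytes_per_param, bandwidth_tb_s,
                          kv_cache_gb=0.0):
    # Batch-1 decode: each token must read all weights plus the KV cache once,
    # so memory bandwidth, not FLOPS, sets the ceiling.
    bytes_per_token = params_b * 1e9 * bytes_per_param + kv_cache_gb * 1e9
    return bandwidth_tb_s * 1e12 / bytes_per_token

# 70B params at 2 bytes each (FP16) over ~3.35 TB/s of HBM:
print(round(decode_tokens_per_sec(70, 2, 3.35), 1))  # roughly 24 tokens/sec
```

Notice that adding compute does nothing to this number; only more bandwidth or fewer bytes per token (quantization, sparsity, a different memory architecture) moves it, which is precisely the opening that challengers to the general-purpose GPU are attacking.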

The future of AI compute will be defined by how we manage memory bandwidth, not just raw FLOPS.

This shift is why companies like Cerebras are gaining traction. While Nvidia focuses on the versatility of the GPU, Cerebras is building massive, single-wafer chips designed to solve the memory and bandwidth problems in a fundamentally different way. If the goal is to make inference faster and cheaper, you don't necessarily need a thousand interconnected GPUs; you might just need a more efficient way to move data from memory to the processor.

The Changing Compute Landscape
  • Training: Needs massive parallelism and networking
  • Inference: Needs extreme memory bandwidth
  • GPU Era: Defined by versatility and the CUDA ecosystem
  • Post-GPU Era: Defined by specialized, heterogeneous architectures

The implications for the semiconductor industry are enormous. As the workload shifts from the massive clusters used for training to the distributed, high-frequency tasks of inference, the winners will be those who can solve the data movement problem. The era of 'one size fits all' compute is ending, replaced by a world where the architecture is dictated by the specific stage of the AI lifecycle.

Key Takeaway

The hardware that builds AI is not necessarily the hardware that runs it.

06 Simon Willison

The Zombie Internet

The erosion of human digital space

By Simon Willison · 5 min read
Editor's note: A warning about the increasingly indistinguishable line between human and bot interaction.

There is a growing sense of exhaustion in digital spaces. It isn't just the 'Dead Internet' theory—the idea that bots are simply talking to other bots in a closed loop. It is something more insidious: the 'Zombie Internet.' This is a state where the internet is populated by people using AI to talk to other people, who are themselves using AI to respond. It is a hall of mirrors where the human element is still present, but it is being filtered, augmented, and simulated to the point of unrecognisability.

The Layer of Simulation

We see it in the 'influencer hustlebros' who use automated channels to spam content, in the LinkedIn posts that feel eerily uniform, and in the Reddit threads where heartfelt advice is actually the output of a marketing firm's agent. The interaction is technically 'human-to-human,' but the cognitive content is artificial. This creates a layer of simulation that sits on top of our digital lives, making it increasingly difficult to know if you are engaging with a person's actual thought or a highly optimized approximation of it.

We are moving from a world of bots talking to people, to a world of people talking through bots.

The danger isn't just spam; it's the distortion of human style. As we use AI to 'clean up' our emails, 'summarise' our thoughts, and 'optimise' our social media presence, we are all beginning to sound the same. We are smoothing out the idiosyncrasies, the errors, and the unique rhythms that make human communication meaningful. The internet is becoming a highly efficient, highly polished, and deeply boring place.

Symptoms of the Zombie Internet
  • AI-generated summaries of real books sold as original content
  • Automated social media accounts mimicking human empathy
  • The homogenization of professional prose
  • The exhaustion of filtering 'synthetic' content from 'authentic' content

If we continue down this path, the cost will be a loss of digital trust. When every interaction carries the suspicion of being synthetic, the value of digital connection collapses. We are building a world of perfect communication that says absolutely nothing.

Key Takeaway

Efficiency in communication is not a substitute for authenticity.

Endnote
Tonight's collection presents a recurring theme: the tension between the ease provided by technology and the difficulty required for excellence. Whether it is the engineer moving from code to specification, the researcher struggling against weightless prose, or the CEO gamifying AI adoption, we are seeing a fundamental shift in how humans exert agency. We are moving from a world of direct action to a world of orchestration. This transition promises immense productivity, but it carries the risk of hollowed-out expertise and a loss of genuine connection. As we automate the friction out of our lives, we must be careful not to automate the substance out of our work and our world.
In your own work, what is the 'friction' that actually makes your output meaningful?
The Deep Feed · A nightly magazine · Tuesday, 12 May 2026