Friday, 8 May 2026

The Deep Feed

The Ghost in the Machine: Agency, Infrastructure, and the Compute War

75 min read · 6 pieces
In this issue
01 The Memory Problem: Why LLMs are Amnesiacs 12 min
02 Skills at Scale: Coding the Agentic Era 10 min
03 The GitHub Dilemma 15 min
04 The Compute Crisis 11 min
05 The Great OS Lie 14 min
06 Optimization via Evolution 13 min
Editor's Letter

Tonight we look at the scaffolding of the new era. From the physical limits of silicon to the way we teach machines to remember, the boundary between human intent and machine execution is blurring into something entirely new.

01 AI Dot Engineer · Video

The Memory Problem: Why LLMs are Amnesiacs

Architecting around the limitations of the context window

By Michael Arnaldi · 12 min read
Editor's note: Understanding why your AI forgets is the first step to building something that actually works.

There is a fundamental misunderstanding about how large language models function. Most people treat them like human brains—entities that learn, internalise, and grow through experience. They assume that if they explain a rule today, the model will respect it tomorrow. This is a mistake. LLMs are static snapshots of knowledge. They undergo a pre-training phase, a fine-tuning phase, and then they are essentially frozen. They do not learn from your conversation in any permanent way. When you close the chat, the knowledge vanishes. Every new session is a clean slate, a digital rebirth with no memory of your previous struggles or preferences.

The Context Window Illusion

To solve this, developers lean on the context window—the bounded span of tokens the model can attend to in a single call. There is a growing trend toward massive context windows, some claiming millions of tokens. But size is not a substitute for intelligence. When you flood a model with a million tokens of information, you aren't giving it a better memory; you are giving it a harder job. The neural network has to predict the next token amidst a sea of noise. Pushing too much information into a single window often leads to confusion, where the model loses the thread of the actual task. It becomes a victim of its own input.

An LLM is not a person learning; it is a mathematical function being fed a growing list of previous inputs.

The real challenge for engineers is not finding larger windows, but deciding what gets loaded into them. We have to build systems that manage what information is relevant and when. If you want an agent to act like a senior engineer who knows your specific coding standards, you cannot rely on the model to 'just know' them. You must build the machinery that retrieves, selects, and injects that context precisely when needed. We are moving away from 'chatting' and toward building retrieval systems that act as an external hard drive for a brain that can't remember.
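
To make the 'external hard drive' idea concrete, here is a minimal sketch in Python. The `MemoryStore` class and its keyword-overlap retrieval are hypothetical stand-ins for a real vector store; the point is the shape of the loop—persist facts outside the model, then select and inject only the relevant ones on each call.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """External memory for a stateless model: persists across sessions."""
    notes: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.notes.append(fact)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Naive keyword overlap stands in for a real embedding search.
        words = query.lower().split()
        scored = sorted(self.notes,
                        key=lambda n: -sum(w in n.lower() for w in words))
        return scored[:k]

def build_prompt(store: MemoryStore, task: str) -> str:
    # Select only what is relevant, rather than dumping the whole store.
    context = "\n".join(f"- {fact}" for fact in store.retrieve(task))
    return f"Known project facts:\n{context}\n\nTask: {task}"

store = MemoryStore()
store.remember("We use pnpm, never npm.")
store.remember("New modules need unit tests.")
store.remember("Error messages must never leak stack traces to users.")
print(build_prompt(store, "write a unit test for the install script"))
```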

The limits of LLM intelligence
  • Static training: Models do not update their weights based on user interaction.
  • Context noise: Larger windows increase the likelihood of model confusion.
  • Architectural necessity: Memory must be built externally, not expected internally.

This shift changes the role of the developer. You are no longer just writing code; you are managing the flow of information into a probabilistic engine. You are the librarian for a genius who has total amnesia. The success of an AI agent depends less on the model's raw power and more on the precision of the context you provide it.

Key Takeaway

Stop treating LLMs like people; start treating them like stateless functions that require external memory management.

02 AI Dot Engineer · Video

Skills at Scale: Coding the Agentic Era

Moving beyond memory files to portable developer expertise

By Nick Nisi and Zack Proser · 10 min read
Editor's note: As we stop writing code by hand, the way we package our expertise becomes the new unit of work.

The era of the manual coder is ending. Many senior engineers admit they haven't written a line of code from scratch in months. Instead, they act as directors, guiding agents through complex repositories. But this new workflow has a massive friction point: the constant need to re-explain the basics. Every time you start a new session, you have to tell the agent which package manager you use, how your testing suite works, and what your architectural preferences are. It is a repetitive, soul-crushing tax on productivity.

The Failure of Memory Files

Current workarounds like `CLAUDE.md` or `AGENTS.md` files attempt to solve this by storing instructions within a repository. While helpful, they are deeply flawed. These files are tied to specific projects. If you move to a new repo, your context is gone. Furthermore, they are passive. An agent might read a file saying 'use pnpm', but it can still decide to ignore it, or it might skip a crucial step in a multi-part instruction. You end up in a loop of correcting the agent for failing to follow the very instructions meant to guide it.

We are trying to carry the DRY (Don't Repeat Yourself) principle into the agentic era.

The next evolution is the concept of 'Skills'. Instead of static text files, skills are discrete, portable, and executable units of work. Imagine a skill that isn't just a set of instructions, but a package that includes scripts to inject real data and enforce deterministic results. A skill can be composed, shared across a team, and used by any agent regardless of the repository. This moves us from 'telling' the agent what to do to 'giving' the agent the actual tools to do it correctly.
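
A sketch of what that might look like in Python—the `Skill` shape here is hypothetical, not any vendor's actual skill format—pairs the instructions with a deterministic script the agent can run instead of merely read:

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Callable

@dataclass
class Skill:
    """A portable, executable unit of expertise (a hypothetical shape,
    not any vendor's actual skill format)."""
    name: str
    instructions: str              # context the agent reads
    check: Callable[[str], str]    # deterministic script the agent runs

def enforce_pnpm(repo_path: str) -> str:
    # Rather than telling the agent 'use pnpm' and hoping, ship a script
    # that verifies the repo state deterministically.
    lock = Path(repo_path) / "pnpm-lock.yaml"
    return "ok: pnpm lockfile present" if lock.exists() else "fail: run pnpm install"

package_policy = Skill(
    name="package-manager-policy",
    instructions="This team uses pnpm exclusively; verify before any install step.",
    check=enforce_pnpm,
)

# The same Skill object works in any repository, unlike a per-repo memory file.
print(package_policy.check("."))
```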

The evolution of agentic context
  • Manual Prompting: High friction, high repetition.
  • Memory Files: Project-specific, passive, and easily ignored.
  • Skills: Portable, executable, and composable units of expertise.

This transition is essential for scaling. Whether you are a solo founder with a fleet of twelve agents or a massive enterprise team, you cannot afford to spend half your day re-onboarding your digital workers. The goal is to encode human expertise into a format that machines can reliably execute, making the transition from 'vibe coding' to professional engineering.

Key Takeaway

The future of development lies in creating portable, executable skills rather than static instruction files.

03 Theo - t3.gg · Video

The GitHub Dilemma

Searching for a stable home in an era of unreliable infrastructure

By Theo - t3.gg · 15 min read
Editor's note: When the industry standard becomes a liability, the search for an alternative begins.

GitHub has long been the undisputed centre of the software universe. It is more than just a place to host code; it is a community, a social network, and a vital piece of infrastructure. But the foundation is cracking. Recent instances of random merge reverts and downtime measured in days rather than minutes have spooked the community. For developers whose livelihoods depend on the stability of their version control, these aren't just inconveniences—they are existential threats.

The Illusion of Choice

The alternatives exist, but none feel like a true replacement. GitLab is often cited as the primary contender, but many users find its user experience to be a significant step backward. It feels like a tool designed by engineers who have never actually used their own product. Bitbucket and others exist, but they lack the social gravity and the robust CI/CD ecosystems that make GitHub so sticky. We are in a strange position where we know the current leader is failing us, but we aren't sure if any of the contenders are ready to lead.

GitLab feels like a bicycle in a world of cars: we know it's an option, but nobody actually wants to ride it to work.

A proper replacement needs to do more than just host Git remotes. It needs to provide a seamless pull request workflow, a reliable community feed, and, most importantly, rock-solid uptime. As we move into an era where AI agents are increasingly responsible for managing code, the stability of the platform becomes even more critical. If an agent is performing automated merges or managing CI/CD pipelines, a platform outage doesn't just stop a human; it breaks the entire automated loop.

What a GitHub alternative must provide
  • Stable, server-backed Git remotes.
  • A robust, non-broken PR and merge workflow.
  • Reliable CI/CD integration.
  • A sense of community and discovery.

The risk of moving is high, but the risk of staying is becoming higher. The open-source community is at a crossroads. We are looking for a platform that is not just a utility, but a reliable partner in the development process. Until someone delivers that, we are all just walking on thin ice, hoping the platform holds long enough to finish the next sprint.
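
Until someone does, one pragmatic hedge is to stop treating any single host as canonical. The sketch below (hypothetical remote URLs, Python driving plain git) configures `origin` so that every push also updates a mirror; `git remote set-url --add --push` is standard git, so an outage at one host never strands the history.

```python
import subprocess

# Hypothetical remote URLs; substitute your own primary and mirror.
PUSH_TARGETS = [
    "git@github.com:example/repo.git",
    "git@gitlab.com:example/repo.git",
]

def configure_push_mirrors() -> None:
    """Append each URL as a push destination for `origin`, so a single
    `git push` updates every host."""
    for url in PUSH_TARGETS:
        subprocess.run(
            ["git", "remote", "set-url", "--add", "--push", "origin", url],
            check=True,
        )

configure_push_mirrors()
subprocess.run(["git", "remote", "-v"], check=True)  # confirm both push URLs
```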

Key Takeaway

Infrastructure stability is a prerequisite for the agentic era; GitHub's reliability issues are a warning sign for the entire industry.

04 Theo - t3.gg · Video

The Compute Crisis

Why Anthropic is making peace with its enemies

By Theo - t3.gg · 11 min read
Editor's note: In the AI race, politics takes a backseat to the raw necessity of silicon.

The recent partnership between Anthropic and SpaceX is a startling development that reveals the true state of the AI industry. For months, there has been visible friction between these entities. Anthropic has previously restricted access to its models for certain parties, and the cultural divide between the 'safety-first' ethos of Anthropic and the aggressive expansionism of Elon Musk's ventures is vast. Yet, here they are, coming together. This isn't a change of heart; it is a surrender to reality.

The Bottleneck is Physical

The central problem facing frontier AI labs is not pricing power or user acquisition. It is compute. Demand for training and inference has grown so fast that even the most well-funded labs are hitting a wall. Anthropic's biggest challenge is simply having enough GPUs to keep up with the world's appetite for Claude. When you are fighting for survival in a resource-constrained environment, ideological beefs become secondary to the need for more chips and more power.

Anthropic does not like SpaceX. But Anthropic needs compute.

This partnership signals a shift in how the AI arms race will be fought. It is no longer just about who has the best researchers or the cleverest algorithms; it is about who can secure the most hardware. The ability to lease or build enormous compute clusters, like the Colossus systems, is becoming the ultimate competitive advantage. We are seeing the emergence of a new kind of geopolitical tension, where the 'nations' are corporate entities fighting over a limited supply of high-end silicon.

Key drivers of the AI landscape
  • Compute scarcity: The primary limiting factor for model growth.
  • Pragmatic alliances: Ideology yields to resource necessity.
  • Hardware supremacy: The winner is often whoever owns the most GPUs.

As we watch these alliances form, we should recognize that the 'intelligence' we see in these models is inextricably linked to the physical infrastructure supporting them. The era of software-only dominance is over; the era of the hardware-software monolith has begun.

Key Takeaway

The AI revolution is being throttled by physical hardware limits, forcing even the most ideological players into pragmatic partnerships.

05 Fireship · Video

The Great OS Lie

How your computer creates a functional reality from chaos

By Fireship · 14 min read
Editor's note: A reminder that the stability of your digital life is a carefully maintained illusion.

Every time you move your mouse or open a browser tab, you are witnessing a miracle of software engineering. Your machine is juggling hundreds of simultaneous processes, while Chrome attempts to consume every byte of available RAM. This level of coordination is not natural; it is the result of the operating system, the most underappreciated layer of the computing stack. Without it, your computer would be nothing more than a collection of uncoordinated electrical impulses.

The Privilege of Control

At the heart of this coordination is the concept of privilege rings. Your CPU is designed with layers of protection to prevent total system collapse. Ring zero is the domain of the kernel, where it has absolute power over the hardware. Ring three is the user space, where your applications live. This separation is what prevents a single buggy application from reading your password manager's memory or crashing the entire machine. The kernel acts as a strict gatekeeper, ensuring that no program can overstep its bounds without explicit permission.
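
The only sanctioned way across that boundary is the system call. A tiny demonstration, assuming x86-64 Linux (syscall numbers are platform-specific): even a humble write to the terminal is ring three asking ring zero to touch the hardware on its behalf.

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)

SYS_WRITE = 1  # x86-64 Linux only; other platforms number syscalls differently
STDOUT = 1

msg = b"ring three asking ring zero to write\n"
# The syscall traps into the kernel, which performs the I/O on our
# behalf and returns control (and a byte count) to user space.
written = libc.syscall(SYS_WRITE, STDOUT, msg, len(msg))
print(f"kernel reported {written} bytes written")
```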

The kernel tells the biggest lie in computing: virtual memory.

Perhaps the most elegant deception is virtual memory. When a program requests a specific memory address, it isn't actually touching physical RAM. It is interacting with a fake, virtual address provided by the kernel. A piece of hardware called the Memory Management Unit (MMU) translates these lies into real physical locations. This allows every application to live in its own private universe, unaware of the other programs running alongside it. It is a controlled hallucination that provides the stability required for modern multitasking.
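
You can watch the hallucination in action. In this sketch (Unix-only, since it relies on `fork`), a parent and child process report the same virtual address while storing different values there—two private universes mapped over the same number.

```python
import ctypes
import os

value = ctypes.c_int(0)
addr = hex(ctypes.addressof(value))  # a *virtual* address, courtesy of the kernel

pid = os.fork()  # Unix-only: clone this process's virtual address space
if pid == 0:
    value.value = 111
    print(f"child : address {addr} holds {value.value}")
    os._exit(0)
else:
    os.waitpid(pid, 0)
    value.value = 222
    print(f"parent: address {addr} holds {value.value}")
# Same virtual address, different contents: the MMU maps each process's
# pages to different physical frames once copy-on-write kicks in.
```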

Core OS functions
  • Bootloading: The handoff from firmware to the kernel.
  • Privilege Rings: Enforcing boundaries between the kernel and users.
  • Virtual Memory: Creating isolated environments for every process.
  • File Systems: Translating raw disk blocks into human-readable files.

From the moment the bootloader finds the kernel to the moment you shut down, the OS is constantly managing interrupts, scheduling tasks, and maintaining the illusion of order. It is a massive, complex, and incredibly fragile balancing act that we take for granted every single day.

Key Takeaway

The stability of modern computing relies on a series of sophisticated deceptions, primarily through memory isolation and privilege separation.

06 AI Dot Engineer · Video

Optimization via Evolution

Using genetic algorithms to refine the prompt

By Samuel Colvin · 13 min read
Editor's note: Stop guessing your prompts and start breeding them.

Prompt engineering is often treated as a dark art—a process of trial and error where developers tweak words and hope for a better result. But as we move toward more complex agentic systems, this manual approach is insufficient. We need a way to optimize the instructions we give to models that is as rigorous as the way we optimize code. The solution may lie in an unexpected place: evolutionary biology.

The Genetic Approach

Using libraries like GEPA, we can apply genetic algorithms to the task of prompt optimization. The process mimics natural selection. You start with a population of candidate prompts. You test them against a specific goal, identify the ones that perform best, and then 'breed' them. By mixing the characteristics of high-performing prompts and introducing small mutations, you produce a new generation of candidates. Over many iterations, the system converges on a prompt far more effective than anything a human could have written through manual tinkering.
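
A toy version of the loop looks like this. The hard-coded fitness function is a stand-in for a real benchmark run, and prompts are reduced to bags of phrases; only the select-breed-mutate structure is the point.

```python
import random

random.seed(0)

PHRASES = ["be concise", "cite sources", "think step by step",
           "answer in JSON", "double-check numbers", "use formal tone"]

def fitness(prompt: list[str]) -> float:
    # Stand-in evaluator: a real system would score the prompt on a benchmark.
    target = {"think step by step", "answer in JSON", "double-check numbers"}
    return len(set(prompt) & target) - 0.1 * len(prompt)

def crossover(a: list[str], b: list[str]) -> list[str]:
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + [p for p in b if p not in a[:cut]]

def mutate(p: list[str]) -> list[str]:
    q = p[:]
    if random.random() < 0.5 or len(q) == 1:
        q.append(random.choice(PHRASES))
    else:
        q.pop(random.randrange(len(q)))
    return q

population = [[random.choice(PHRASES)] for _ in range(8)]
for _ in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]  # selection: keep the top performers
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(4)]
    population = parents + children  # next generation

population.sort(key=fitness, reverse=True)
print("best prompt:", "; ".join(population[0]))
```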

Don't just write prompts; breed them for performance.

This isn't just about text. With modern frameworks, we can optimize managed variables—complex JSON objects that define the state and parameters of an agent. By treating these variables as part of the evolutionary process, we can autonomously fine-tune how an agent behaves in specific contexts. We are moving from a world where we 'instruct' an agent to a world where we 'evolve' its capabilities.
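
The same operators extend beyond prose. In this hypothetical fragment, an agent's managed variables are a JSON-like dict, and mutation nudges typed fields instead of words:

```python
import copy
import random

base_config = {"temperature": 0.7, "max_retries": 2, "style": "terse"}

def mutate_config(cfg: dict) -> dict:
    # Treat structured agent parameters as genes alongside the prompt text.
    child = copy.deepcopy(cfg)
    key = random.choice(list(child))
    if isinstance(child[key], float):
        child[key] = round(min(1.0, max(0.0, child[key] + random.uniform(-0.2, 0.2))), 2)
    elif isinstance(child[key], int):
        child[key] = max(0, child[key] + random.choice([-1, 1]))
    else:
        child[key] = random.choice(["terse", "verbose", "formal"])
    return child

print(mutate_config(base_config))
```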

The evolutionary optimization loop
  • Candidate Generation: Creating a diverse set of initial prompts.
  • Evaluation: Testing candidates against rigorous benchmarks.
  • Selection: Identifying the top performers (the 'Pareto frontier').
  • Mutation and Crossover: Breeding new candidates from the best performers.

This shift represents the transition from manual prompting to automated agent tuning. It allows us to handle the complexity of real-world tasks—like analyzing political dynasties or managing enterprise workflows—with a level of precision that manual engineering simply cannot match.

Key Takeaway

The future of prompt engineering is not manual refinement, but automated, evolutionary optimization.

Endnote
Tonight's pieces trace a single, inevitable arc: the transition from human-centric control to machine-centric orchestration. We see it in the way we must build external memory for amnesiac models, the way we must package our expertise into portable 'skills', and the way we must evolve our prompts through genetic algorithms. Even the massive geopolitical shifts in the AI industry—the desperate scramble for compute—point toward a world where the physical and the digital are merging into a single, high-stakes struggle for resources. We are no longer just writing software; we are designing the ecosystems in which autonomous intelligence lives and breathes. The tools are changing, the infrastructure is shifting, and the old ways of 'doing' are being replaced by new ways of 'directing'.
As you delegate more of your cognitive load to agents, what part of your unique expertise are you most afraid to lose?
The Deep Feed · A nightly magazine · Friday, 8 May 2026