2026-04-28

The Deep Feed

The Friction of Progress: From Invented Futures to Fractured Truths

42 min read · 4 pieces
In this issue
01 The Invention of Unnecessary Futures 12 min
02 The Intelligence Commodity 10 min
03 The Agentic Hardware Shift 9 min
04 The Death of Shared Reality 11 min
Editor's Letter

Tonight, we examine the growing distance between technological ambition and human utility. We look at the systems being built, the hardware being designed, and the very definition of truth that holds our society together.

01 Cal Newport

The Invention of Unnecessary Futures

Why Silicon Valley has abandoned problem-solving in favour of venture-backed fantasies

By Cal Newport · 12 min read
Editor's note: A sharp critique of the current tech cycle and why most AI hype fails to meet actual human needs.

Silicon Valley has undergone a fundamental shift in its core mission. For decades, the most successful tech companies operated on a simple premise: identify a friction point in human life and build a tool to remove it. The goal was service. You wanted to find a restaurant, so they built a directory; you wanted to move money, so they built a digital ledger. But in the years following the financial crisis, a new philosophy took hold. The objective shifted from serving existing needs to inventing entirely new ones. Entrepreneurs stopped asking what people wanted and started deciding what people should want. This is the era of the invented future, where consumers are expected to bend their habits to accommodate the latest technological obsession, whether that be the metaverse, NFTs, or the current relentless push for generative AI integration in every conceivable corner of life.

The Venture Capital Mandate

This shift is not accidental; it is driven by the mechanics of venture capital. To secure massive rounds of funding, a startup cannot simply promise a slightly better way to manage a calendar. It must promise a revolution. It must promise a world that looks nothing like the one we inhabit today. This creates a feedback loop where product development is decoupled from consumer utility. Companies are building for the sake of the next funding round, creating technologies that solve problems that do not exist for the vast majority of the population. Large language models are the current poster child for this phenomenon. While they possess immense potential, much of the current development is aimed at a hypothetical user who wants to automate every second of their existence. Most people, however, are simply looking for a more efficient way to format an itinerary or search for information.

These technologies are not built to solve a real market problem. They are built to make venture capitalists and their portfolio companies rich.

The disconnect is most visible when we compare the arrival of AI to previous technological shifts. When the iPod arrived, it solved a clear, existing problem: the difficulty of carrying music. It was an improvement on a reality people already understood. AI, by contrast, is often presented as a force that will fundamentally rewrite the rules of reality. We are told that everything is about to change, often in ways that feel threatening or uncontrollable. This constant barrage of breathless pronouncements creates a sense of fatigue rather than excitement. People are not running around trying to automate their lives; they are simply trying to live them. Until tech companies can bridge the gap between their grand visions and the mundane needs of the public, they will continue to face a wall of indifference.

The Job Market Contradiction

The confusion extends into the economic narrative. For the last year, the media has been caught in a cycle of contradictory reporting regarding AI and the labour market. One week, the headline claims AI is decimating entry-level roles for college graduates by automating routine tasks. The next week, data shows that hiring in that same demographic is rebounding. This inconsistency reveals a deeper truth: the impact of AI is not a monolith. It is not simply 'replacing' or 'creating' jobs in a linear fashion. Instead, it is shifting the nature of work in ways that the current media narrative struggles to capture. The idea that AI is a singular force of destruction or salvation is a simplification that ignores the complexity of how businesses actually adopt new tools.

Why the AI narrative fails:
  • It prioritises hype-driven headlines over nuanced economic data.
  • It assumes a direct, one-to-one replacement of human tasks by machines.
  • It ignores how companies use AI to expand services rather than just cut costs.

The reality is that for a technology to be truly successful, it must move past the stage of being a novelty for enthusiasts. It must become a reliable, quiet part of the infrastructure of daily life. Silicon Valley's current obsession with 'inventing the future' is a high-stakes gamble that assumes people will eventually want the world these companies are building. But as history shows, if the vision does not serve the person, the person will eventually walk away. The overlords of software have forgotten that adoption requires desire, not just capability.

The challenge for the next decade is not to build more powerful models, but to build more useful ones. The winners will not be the companies that create the most complex systems, but those that manage to solve the most common problems with the least amount of friction.

Key Takeaway

True technological success comes from solving existing human problems, not from forcing people to adapt to invented ones.

02 Julian Goldie SEO · Video

The Intelligence Commodity

How massive context windows and low-cost models are rewriting the economics of agency

By Julian Goldie SEO · 10 min read
Editor's note: A look at how DeepSeek V4 and the rise of open-source models are making high-level reasoning affordable for everyone.

The economics of artificial intelligence are shifting beneath our feet. For the past two years, the industry has been defined by a scarcity of high-level reasoning. If you wanted a model that could code, reason through complex logic, or handle massive amounts of data, you had to pay a premium to the giants. You paid for the privilege of accessing the most capable models, often at a cost that made large-scale automation difficult to justify for smaller operations. This scarcity created a moat around the major players. But the emergence of models like DeepSeek V4 is beginning to erode that moat. We are moving from an era of expensive, proprietary intelligence to an era where high-level reasoning is becoming a cheap, abundant commodity.

The Million-Token Threshold

The most significant technical development in this shift is the explosion of the context window. In the early days of LLMs, you could only feed a model a few pages of text before it began to lose the thread. This limited AI to being a conversational partner—something you talk to, rather than something you work with. Now, with models offering context windows of one million tokens or more, the nature of the interaction has changed. You are no longer just asking a question; you are providing an entire world of information. You can upload a company's entire legal history, every marketing transcript from the last three years, and every product specification document, and ask the model to find the patterns. This capability turns the AI from a chatbot into an analyst.

A million-token context window changes the AI from a conversational partner into a full-scale research department.

This is not just about volume; it is about the depth of reasoning that can be applied to that volume. When you combine massive context with the low costs of models like DeepSeek, the barrier to entry for complex automation disappears. An agency owner can now build workflows that ingest vast amounts of client data, process it, and generate strategic outputs for a fraction of the cost of previous methods. The value is no longer in the model itself, but in how you structure the workflow that utilizes it. The intelligence is there; the question is what you do with it.
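To make the workflow framing concrete, here is a minimal sketch of the first decision such a pipeline has to make: does the client's entire corpus fit in one context window, and what would a single pass cost? The characters-per-token heuristic, window size, and price are assumptions chosen for illustration, not figures from any particular provider.

```python
# Illustrative sketch: checking whether a client's whole document set fits
# in a large context window, and estimating the cost of one ingestion pass.
# CHARS_PER_TOKEN, CONTEXT_WINDOW, and PRICE_PER_MTOK are assumed values.

CHARS_PER_TOKEN = 4          # rough heuristic for English prose
CONTEXT_WINDOW = 1_000_000   # tokens, per the million-token models discussed
PRICE_PER_MTOK = 0.30        # hypothetical input price in USD per million tokens

def estimate_tokens(documents):
    """Rough token count for a list of document strings."""
    return sum(len(doc) for doc in documents) // CHARS_PER_TOKEN

def plan_ingestion(documents):
    """Report whether the corpus fits in one window, and the estimated cost."""
    tokens = estimate_tokens(documents)
    return {
        "tokens": tokens,
        "fits_in_window": tokens <= CONTEXT_WINDOW,
        "estimated_cost_usd": tokens / 1_000_000 * PRICE_PER_MTOK,
    }

# Stand-ins for 'the entire legal history' and 'every marketing transcript'
corpus = ["legal history " * 1000, "marketing transcripts " * 2000]
plan = plan_ingestion(corpus)
```

The point of the sketch is the shape of the decision, not the numbers: when the whole corpus fits in one window at negligible cost, chunking, retrieval, and summarisation stages can simply be deleted from the workflow.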

The Open Source Ripple Effect

The rise of these models is also a victory for the open-source movement. For a long time, the consensus was that proprietary models would always stay ahead of open-source alternatives. That assumption is being challenged. The speed at which open-source models are catching up to the performance of Claude Opus or GPT-4 is unprecedented. This creates massive competitive pressure on the hyperscalers. If an open-source model can perform at 95% of the level of a proprietary one for 1% of the cost, the business case for the proprietary model becomes increasingly difficult to defend for most use cases.
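The '95% of the quality for 1% of the cost' claim can be framed as an expected-cost comparison: a slightly less reliable model needs more retries per acceptable output, but that penalty is tiny next to a hundredfold price gap. A hedged sketch, with made-up numbers:

```python
# Illustrative sketch of the proprietary-vs-open-source trade-off described
# above. All prices and success rates are assumptions, not benchmarks.

def cost_per_success(price_per_call, success_rate):
    """Expected spend to obtain one acceptable output, allowing retries."""
    return price_per_call / success_rate

proprietary = cost_per_success(price_per_call=1.00, success_rate=0.98)
open_model = cost_per_success(price_per_call=0.01, success_rate=0.93)

# Even after paying for extra retries, the cheaper model's advantage
# remains close to two orders of magnitude.
advantage = proprietary / open_model
```

Under these assumed numbers the open model is still roughly 95 times cheaper per successful output, which is why the essay argues the moat is eroding for most use cases rather than all of them.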

Strategic implications for agencies:
  • Shift focus from 'buying access' to 'building workflows'.
  • Utilise massive context windows to ingest entire client ecosystems.
  • Reduce reliance on single-provider ecosystems to mitigate cost and risk.

We are entering a period of intense experimentation. The tools are becoming cheaper and more capable, which means the cost of failure is lower. This is the ideal environment for innovation. The agencies that will thrive are those that stop treating AI as a magic box and start treating it as a raw material—a commodity that must be refined through clever engineering and deep domain expertise.

In the end, the winner won't be the one with the best model, but the one with the best way to use a cheap, powerful one.

Key Takeaway

When intelligence becomes cheap and context becomes massive, the competitive advantage shifts from the model to the workflow.

03 Julian Goldie SEO · Video

The Agentic Hardware Shift

Why the future of mobile might not involve apps at all

By Julian Goldie SEO · 9 min read
Editor's note: Speculation on OpenAI's hardware ambitions and the potential end of the smartphone app era.

The smartphone is a collection of walled gardens. We carry these devices in our pockets, but we interact with them through a series of isolated silos. To book a ride, you open Uber. To send a message, you open WhatsApp. To check your bank, you open a banking app. Each app is a destination, a separate interface that requires your attention and your manual input. This is the 'app-centric' model of computing, and it is reaching its limits. The next great leap in hardware will not be about making screens larger or cameras better; it will be about removing the need for the app interface entirely. The future is not a device that hosts apps, but a device that hosts an agent.

The Death of the App Store

Leaked reports suggest that OpenAI is moving aggressively into hardware, developing a device designed to act as a seamless interface for AI agents. This is a direct challenge to the current mobile paradigm. In an agentic model, you don't tap an icon to perform a task; you simply state the task. The agent then navigates the underlying services for you. It doesn't matter if the service is an app or a web API; the agent handles the execution. This shift could effectively kill the traditional app economy. If a single agent can perform the functions of a hundred different apps through direct integration, the incentive for developers to build standalone mobile interfaces diminishes significantly.

The interface is no longer a set of buttons; the interface is the agent itself.

This represents a move from 'active' computing to 'delegated' computing. Currently, we are the pilots of our devices, manually steering through various applications to achieve a goal. In the agentic future, we become the managers. We provide the intent, and the hardware executes the process. This requires a fundamental change in how software is built. Instead of designing for human eyes and fingers, developers will need to design for machine-to-machine interaction. The 'user interface' will shift from the visual to the functional.
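One way to picture 'designing for machine-to-machine interaction' is a registry of capabilities the agent can invoke directly, with no visual interface in between: the user supplies intent, the agent picks and executes the service. The service names and keyword routing below are invented purely for illustration.

```python
# Minimal sketch of delegated computing: the user states an intent, and the
# agent routes it to a registered service rather than the user opening an app.
# Service names and the routing scheme are hypothetical.

SERVICES = {}

def service(name):
    """Register a callable as a capability the agent can invoke."""
    def register(fn):
        SERVICES[name] = fn
        return fn
    return register

@service("book_table")
def book_table(restaurant, time):
    # Stand-in for a direct integration with a booking API
    return f"Table booked at {restaurant} for {time}"

@service("send_message")
def send_message(to, text):
    # Stand-in for a messaging service integration
    return f"Message to {to}: {text}"

def agent(intent, **details):
    """Delegate an intent: the agent, not the user, chooses the service."""
    if intent not in SERVICES:
        return "No service can handle this intent"
    return SERVICES[intent](**details)

result = agent("book_table", restaurant="Lucia's", time="19:30")
```

Notice that nothing in the registry describes a screen: each capability exposes a functional contract rather than buttons, which is the design shift the paragraph above describes.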

On-Device Intelligence

For this to work, the hardware must be capable of more than just being a window to the cloud. To feel truly seamless and to respect user privacy, the device needs significant on-device intelligence. This means custom AI chips designed specifically to run large models locally. If every request has to travel to a data centre and back, the latency will break the illusion of a natural interaction. The hardware must be able to process voice, vision, and intent in real time, locally, without the constant need for a high-speed connection. This is a massive engineering challenge that requires a convergence of silicon design and model optimisation.
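The latency argument can be made concrete with a rough budget for a single voice request. Every millisecond figure below is an illustrative assumption, not a measurement; the point is structural, namely that the network round trip and remote inference together push a cloud path past what feels conversational.

```python
# Rough latency budget comparing on-device and cloud inference for one
# voice request. All millisecond figures are illustrative assumptions.

def total_latency_ms(stages):
    """Sum the per-stage latencies of one request pipeline."""
    return sum(stages.values())

on_device = {
    "speech_to_text": 80,
    "local_inference": 250,   # assumes a dedicated on-device AI chip
    "text_to_speech": 70,
}

cloud = {
    "speech_to_text": 80,
    "network_round_trip": 200,  # varies widely with connectivity
    "server_inference": 300,
    "text_to_speech": 70,
}

CONVERSATIONAL_BUDGET_MS = 500  # assumed threshold for 'feels instant'

local_ok = total_latency_ms(on_device) <= CONVERSATIONAL_BUDGET_MS
cloud_ok = total_latency_ms(cloud) <= CONVERSATIONAL_BUDGET_MS
```

Under these assumptions the local path comes in at 400 ms and the cloud path at 650 ms, which is the engineering case the piece makes for on-device silicon.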

How an agentic phone changes daily life:
  • Task delegation replaces app navigation (e.g., 'Book a table' instead of opening Yelp).
  • Contextual awareness allows the device to anticipate needs based on surroundings.
  • Reduced screen time as interaction moves to voice and subtle gestures.

The implications for businesses are enormous. The gatekeepers of the current mobile era—the owners of the app stores—could see their influence evaporate. The new gatekeepers will be those who control the agentic interface and the underlying models. It is a high-stakes gamble by OpenAI to move from being a software provider to being the foundation of a new computing era. If they succeed, the smartphone as we know it will become a relic of a much more manual age.

The transition will be messy, but the direction of travel is clear. We are moving away from the tyranny of the icon and towards the freedom of the agent.

Key Takeaway

The next era of computing will be defined by delegating tasks to agents rather than manually navigating through individual apps.

04 Psyche

The Death of Shared Reality

Why the fragmentation of truth is the most significant challenge of the digital age

By Psyche · 11 min read
Editor's note: An exploration of the psychological and epistemological reasons why consensus is becoming impossible in a digital world.

We are living through an epistemological crisis. We no longer merely disagree about how to solve a problem; increasingly, we cannot agree on what the problem actually is. The very concept of a shared reality—a common ground of facts upon which debate can occur—is fracturing. This is not merely a political issue; it is a fundamental psychological shift. Recent research suggests that the difficulty in reaching consensus is not just about misinformation or 'fake news'. It is about the fact that different groups of people hold fundamentally different conceptions of what constitutes truth itself.

The Subjective Turn

The traditional model of truth was built on a foundation of objective verification. A fact was something that could be observed, measured, and independently confirmed. However, in the digital age, the process of information consumption has become deeply personalised. Algorithms do not just show us what we like; they reinforce our existing frameworks for interpreting the world. This creates a feedback loop where our subjective experiences are treated as objective evidence. When truth becomes tied to identity and group belonging, the ability to engage in rational debate disappears. To accept a fact that contradicts your group's narrative is not just a cognitive error; it is a social betrayal.

Truth is no longer a shared ground; it has become a tool for group identity.

This fragmentation is exacerbated by the speed and volume of information. In a world of constant updates, the ability to verify is outpaced by the urge to react. We are forced to make judgements on incomplete or biased data, and once those judgements are made, they become part of our psychological architecture. The result is a society of people living in parallel realities, each with its own set of 'facts' and its own internal logic. Arguments between these groups are not just difficult; they are often entirely irresolvable because the participants are not even playing by the same rules of evidence.

The Mapping of Belief

The research into these contrasting conceptions of truth reveals a complex map of belief. People do not just disagree on the details; they disagree on the hierarchy of evidence. For some, empirical data is the highest authority. For others, lived experience or traditional authority carries more weight. When these different hierarchies collide, there is no neutral arbiter. The digital environment provides the perfect stage for these collisions, as it allows for the rapid scaling of niche truths and the immediate suppression of dissenting views through social pressure.

Why consensus is disappearing:
  • Personalised algorithms create divergent information environments.
  • Truth is increasingly used as a marker of social and political identity.
  • The speed of digital communication discourages deep verification.

For businesses and leaders, this fragmentation presents a massive challenge. How do you communicate a brand message or a corporate strategy in a world where the audience cannot agree on basic reality? The old methods of mass communication—one message to many people—are becoming increasingly ineffective. Instead, engagement requires a much more sophisticated understanding of the different epistemological frameworks that different segments of the population inhabit. You cannot simply present the facts; you must understand how those facts will be interpreted through the lens of the receiver's reality.

Navigating this landscape requires a move away from the pursuit of universal consensus and towards a more granular understanding of human belief. We must accept that the fracture is not a temporary glitch in the system, but a permanent feature of the digital age. The challenge is to find ways to build bridges between these realities, even if we can never truly merge them.

Key Takeaway

In a fragmented digital world, truth is often a matter of group identity rather than objective verification, making consensus nearly impossible.

Endnote
Tonight's pieces trace a common thread: the tension between the systems we are building and the human reality they are meant to serve. We see it in the disconnect between Silicon Valley's invented futures and the actual needs of users. We see it in the shift from expensive, proprietary models to the commodification of intelligence. We see it in the move from manual app navigation to agentic hardware, and in the very breakdown of our shared understanding of truth. The overarching theme is one of transition. We are moving from a world of manual, siloed, and objective structures to one that is automated, integrated, and deeply subjective. This shift offers immense power and efficiency, but it also threatens to alienate us from the very reality we are trying to enhance. The question for the coming years is not just what we can build, but what we can maintain in the process.
As technology becomes more seamless and agentic, how much of your own agency are you willing to trade for convenience?
The Deep Feed · A nightly magazine · 2026-04-28