Stephen Hood is Mozilla's Principal Open-Source AI Lead.
Something seismic is happening. AI is fracturing the very bedrock of computing, creating fissures from which a whole new era will emerge, one in which nearly everything we think we know about computers will change. The way we experience and interact with the Web will be remade. And anyone who tries to cling to the status quo will be staring down an Avengers-level threat to their worldview and/or business.
Each prior era of computing — time sharing, personal computing, the Internet and the Web, mobile computing — was built upon its predecessors. But AI will sweep away and replace the norms of our era down to the bedrock of von Neumann architecture and possibly beyond. A decade from now, computing will look radically different, and our relationship with information will be fundamentally reshaped.
This future might not be built by the people and companies most visibly working with AI today, as many are using it to extend and improve existing computing systems. Instead, the future will be invented and steered by people who understand that those existing systems are themselves being uprooted.
The Jenga Tower of Computing
The history of computing is the ongoing struggle to make computers understand human intent. We started from a seemingly obvious truth: they can’t. A foundational constraint of computer science has been that the computer cannot make sense of human language and cannot understand our commands on our own terms. Because the computer can’t come to us, we’ve instead had to “come to the computer” by learning to express ourselves on the computer’s terms. To do this, we built three pillars of modern computing: programming languages, operating systems, and applications.
- We began by writing binary code, expressing it with patch cables and punch cards. This was intolerable, so we created programming languages. The first ones were barely any easier but evolved and improved over time. Through it all, programming remained a specialized skill.
- As our programming languages evolved, we needed ever-increasing levels of isolation from and control over the machine’s inner workings. So, we created operating systems. The first ones were little more than schedulers or storage controllers. But they grew in sophistication until they formed the basis for mega-corporations and quasi-religious feuds among technologists.
- Because programming remained challenging and time-consuming, we needed a way to reuse our work. We needed to enshrine a given set of computer commands and user functionalities into a package that could be distributed and used by many people. This would allow us to enjoy economies of scale and allow users to learn how to accomplish their goals reliably. So we created applications, and with them, we created libraries, frameworks, and UI conventions.
These pillars made computers more capable and usable than ever. But progress came at a cost: each step up the ladder required another layer of abstraction, and our hardware requirements ballooned as we constructed this glorious Jenga tower. A computer that was perfectly serviceable just ten years ago would be unusable today. The computer didn’t get slower; we just increased the abstraction.
Yet, we’ve never bridged the gap between humans and computers. Programming hasn’t gotten easier or simpler over time. Just look at the Web. When Tim Berners-Lee gifted it to the world, anyone could create a web page with just a little bit of poking around. Today? You generally need to be part of an increasingly priest-like class called “web developers.”
We’ve ended up with undeniably powerful computers, but most users can only make them perform pre-defined tasks. The levers of power remain with programmers, as they were in the beginning. And so for nearly eight decades, from the first stored-program computers to the present day, we’ve been locked in an endless cycle:
1. The user has a need, but they can’t communicate it to the computer.
2. Software developers deliver on the user’s needs, raising the level of abstraction in the process.
3. Hardware makers deliver more powerful and expensive hardware to support this new abstraction.
4. Software and hardware grow notably in complexity while actual user capabilities advance only incrementally.
5. Go to 1.
A New Kind of Computer
The latest AI technologies break this cycle. They enable the computer to finally “understand” our intent by making sense of and responding to human communication. This means abstractions are no longer strictly needed to command the computer.
Suddenly:
- The computer can speak our language. As it has become fashionable to say in AI circles, “the hottest new programming language is English.”
- Operating systems face obsolescence. Rather than managing complex abstractions to support programming languages, we can redirect that effort to managing the machine’s cognitive powers and actions on our behalf.
- Applications lose their purpose. Instead of relying on pre-built, single-purpose software, we can ask the computer to execute our bespoke intentions on demand.
Such a common understanding between user and computer, rooted in human language, enables us to bypass 75 years of abstraction and complexity.
Like the “hyperspace bypass” of Douglas Adams’ Hitchhiker’s Guide to the Galaxy — where aliens casually demolish Earth to make way for their galactic transportation infrastructure — an “AI bypass” is coming. It will destroy the raison d’être for countless computing metaphors and technologies. And in doing so, it will stamp an expiration date on all of them.
The Road Ahead
So, what does this future look like? As the saying goes, “Prediction is difficult, especially when you’re talking about the future.” Let’s enumerate the changes likely to occur because of the AI bypass. From there, we can apply our perspectives and values and build a shared vision of this new era of computing.
A. Programming becomes optional
Much has been written about how LLMs can generate valid, executable code. This will provide a huge productivity gain for existing programmers and will enable many more people to become programmers themselves. But the real power will come from bypassing code altogether: going directly from human language to execution. This means that every user will, in effect, become a programmer without ever having to learn the skill of programming.
A vast slice of programming use cases will melt away and be replaced by direct user instruction in plain old human language. For the average user, this will make the computer suddenly feel like a much more powerful tool and one over which they have much more control.
Web development is ripe for this disruption. We’ve taken what was initially envisioned as a medium for everyone and ballooned it into a stack of abstractions that only programmers can navigate. The AI bypass will barrel through the world of web dev without mercy.
B. Operating systems transform
Tomorrow’s operating systems will bear little resemblance to Windows, macOS, or Linux. Instead, they’ll evolve from today’s AI tools: orchestration systems, prompt managers, vector databases, inference engines, and agents. AI developers are already building ad-hoc operating systems because traditional OS services don’t meet their needs.
As these technologies mature, new “cognition systems” will emerge to supplant traditional operating systems, raiding them for parts and encapsulating those parts as hidden, underlying providers of commodity services. Users will choose these systems based on their cognitive traits and styles — more like choosing friends than choosing software.
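The intent-first kernel described above can be sketched in miniature. This is a hypothetical illustration, not a real API: the class names, the capability registry, and especially the keyword matching (which stands in for the LLM- or embedding-based intent resolution a real cognition system would use) are all assumptions.

```python
# A minimal, hypothetical sketch of a "cognition system" kernel: instead of
# syscalls, the core service is intent resolution. All names here are
# illustrative assumptions, not a real API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Capability:
    name: str
    keywords: tuple[str, ...]  # stand-in for real semantic matching
    run: Callable[[str], str]


class CognitionKernel:
    def __init__(self) -> None:
        self.capabilities: list[Capability] = []

    def register(self, cap: Capability) -> None:
        self.capabilities.append(cap)

    def handle(self, intent: str) -> str:
        # A real system would use an LLM or embeddings here; we fake it
        # with keyword matching to keep the sketch self-contained.
        lowered = intent.lower()
        for cap in self.capabilities:
            if any(k in lowered for k in cap.keywords):
                return cap.run(intent)
        return "No capability matched this intent."


kernel = CognitionKernel()
kernel.register(Capability("reminder", ("remind", "remember"),
                           lambda i: f"Scheduled: {i}"))
kernel.register(Capability("summary", ("summarize", "tl;dr"),
                           lambda i: f"Summarizing as requested: {i}"))

print(kernel.handle("Remind me to call Alice at 5pm"))
```

The design choice worth noting is that the kernel’s public surface is a single `handle(intent)` entry point; the traditional OS services (scheduling, storage) would live behind the registered capabilities as the hidden commodity providers the text describes.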
C. Interfaces become adaptive
The future is multimodal. AI systems will seamlessly mix text, imagery, video, and sound. This will disrupt our notion of user interfaces, freeing us from traditional OS frameworks and design standards. Interaction will fluidly shift based on the user’s situation, attention, and data type — a visual display could switch to audio when you walk away, then to async mode during dinner.
The era of “one size fits all” UI ends. Instead of users learning one interface, interfaces will adapt to each user. While filming Star Trek: The Next Generation, actor Wil Wheaton noticed that his fellow actors employed different hand gestures to pilot the ship. When he asked production designer Mike Okuda if he was “doing it right,” Okuda explained that in the future each user’s interface would be tailored to them; Wheaton’s gestures were correct because they were correct for his character. Over thirty years later, we’ll finally realize this promise of interfaces that adapt to the user rather than vice versa.
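The modality-shifting behavior described above amounts to a context-driven policy. A minimal sketch follows; the context fields, thresholds, and modality names are all illustrative assumptions rather than any real framework.

```python
# Hypothetical sketch of an adaptive interface choosing its output
# modality from user context. Field names and thresholds are
# illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Context:
    distance_m: float  # how far the user is from the display
    occupied: bool     # e.g. eating dinner, driving


def pick_modality(ctx: Context) -> str:
    if ctx.occupied:
        return "async"   # queue the interaction for later
    if ctx.distance_m > 2.0:
        return "audio"   # the user walked away from the screen
    return "visual"      # default: on-screen display


print(pick_modality(Context(distance_m=0.5, occupied=False)))  # visual
```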
D. Computing becomes hyper-personal
Our “digital exhaust” — the data trail from our computer use — contains a deeply personal representation of our intentions, needs, and preferences. Today, this data primarily serves advertisers.
After the AI bypass, computers will use this data not to serve us better-targeted ads but to customize and personalize our experience in ways that the founding minds of modern computing could only dream of.
By harnessing this digital exhaust and pairing it with long-term memory, computers will cease to respond to every user as if they were the same person. Instead, they will build and maintain a dynamic and evolving model of their specific user, interpreting and responding to our commands according to that ever-changing context.
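One way to picture the evolving user model described above is as a set of preference weights that accumulate from interaction events and fade over time, so recent behavior dominates. This is a toy sketch under stated assumptions; a real system would use far richer signals than tags and counts.

```python
# Hypothetical sketch: a user model that accumulates "digital exhaust"
# (interaction events) into evolving preference weights, which later
# commands can be interpreted against. The tagging and decay scheme
# are illustrative assumptions.

from collections import Counter


class UserModel:
    def __init__(self, decay: float = 0.9) -> None:
        self.decay = decay
        self.preferences: Counter = Counter()

    def observe(self, event_tags: list[str]) -> None:
        # Older signals fade; recent behavior dominates the model.
        for tag in self.preferences:
            self.preferences[tag] *= self.decay
        for tag in event_tags:
            self.preferences[tag] += 1.0

    def top_context(self, n: int = 3) -> list[str]:
        # The current best guess at what this user cares about.
        return [tag for tag, _ in self.preferences.most_common(n)]


model = UserModel()
model.observe(["cycling", "maps"])
model.observe(["cycling", "weather"])
print(model.top_context())  # repeated, recent interests rank first
```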
E. Applications go disposable
Users will access and control systems to help them accomplish their goals without acquiring and using specific software. Computers will generate custom applications on demand, perfectly tailored to user preferences and context. When the task is complete, the application vanishes.
We are talking about the death of the application. In its place stands a set of malleable, plastic interfaces through which users express their needs and upon which computers can act. More code than ever will be written — by computers themselves — and immediately discarded after use.
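The generate-use-discard lifecycle can be sketched in a few lines. This is purely illustrative: here the “generator” is a trivial template, whereas in the scenario above it would be an AI model producing the one-off program, and `exec` stands in for whatever sandboxed execution a real system would require.

```python
# Hypothetical sketch of a "disposable application": code is generated
# for one task, executed, and thrown away. The template-based generator
# is a stand-in assumption for an AI code generator.

def build_disposable_app(operation: str):
    # Map a tiny spec to generated source code for a one-off app.
    source = f"def app(values):\n    return {operation}(values)\n"
    namespace: dict = {}
    exec(source, namespace)  # compile the one-off application
    return namespace["app"]


app = build_disposable_app("sorted")
print(app([3, 1, 2]))  # use it once...
del app                # ...then discard it
```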
F. The browser is remade
Today’s dominant user agent is the web browser. It’s the closest thing we have to a universal application because it gives us a window into today’s universal platform, the Web. It is the single most impactful software innovation of the past thirty years.
Yet the browser remains fundamentally simple. Even its name suggests passive consumption — “browsing” rarely describes productive action. While browsers are complex to build, for users they’re straightforward apps for loading pages and following links. This limited scope exists because computers haven’t been able to interpret user needs or web content until now.
As computers learn to understand user intent and interpret content, today’s browsers will yield to more proactive agents. Combined with the disruption of applications, interfaces, operating systems, and programming, this new agent won’t merely be another app — it will form the heart of this new kind of computer.
The Internet will be an integral part of any future vision of computing. Thus, many parties are already thinking about how the browser informs or inspires the next user agent…whether or not that agent ultimately looks like a browser.
Building the Bypass
This coming future is exciting and promising, but that doesn’t automatically make it better. As builders of this new era, we need a clear sense of the benefits we hope to deliver and the harms we aim to avoid. We must:
- Prevent centralized control
  A vibrant and competitively viable open-source movement around AI is essential to fighting the centralized control that will otherwise consume the space.
- Protect user privacy
  As AI systems become increasingly intelligent and personalized, they will need access to more personal data. We must protect that data by building systems that operate locally whenever possible and within secure, trusted environments. Personal data should always remain under users’ control.
- Augment humans instead of replacing them
  Automation is one of the most powerful abilities of AI systems, but cutting humans out of the loop entirely risks undermining the very things that give us meaning, creating new risks to safety and livelihoods, and devaluing human creative output. We should eschew applications that remove the human from the equation and instead embrace those that augment and improve the human experience.
- Fight bias and discrimination
  Because of how AI systems are built, there is a risk that they will inadvertently perpetuate existing systems of control, repression, and discrimination. We must demand (and participate in) the creation of truly open large language models trained on transparent, legally sourced data reflecting a diversity of cultures and traditions.
- Avoid an Internet of Crap
  If you thought we had a bot and spam problem on the Internet before, just wait: we, the users, are at risk of being crowded out of the online world by waves of machines cosplaying as humans. It is our responsibility to ensure this does not happen.
None of us can do this alone. Just as building the Web was (and remains) a collective effort, creating the Bypass requires the open-source community to work together towards a common goal. This is the only way we can counter the competing effort, vision, and motivation of Big Tech. From indie developers and hackers to academic researchers and startup founders, we all have an essential role in shaping this new era.
We need you in this fight. The Bypass brings with it very real risks, but also the rare opportunity to restore the original promise of computing. It is no coincidence that the last time we saw such an opportunity was thirty years ago, at the birth of the Web. We are once again at such a moment. And once again, we have the chance to build a new future, together.
About the Author
Stephen Hood
Stephen is Mozilla’s Principal Open-Source AI Lead. As part of Mozilla Builders, his team incubates, co-develops, and sponsors open-source AI collaborations that advance the cause of open-source AI as an effective bulwark against closed and centralized offerings.