Meet Henry, a boyish-looking AI researcher who stands at the forefront of addressing the potential dangers of artificial intelligence. He believes there is a roughly 50/50 chance that within a few years, AI could evolve to a point where it poses an existential threat to humanity. To combat this looming danger, Henry is dedicated to his role at a small, safety-focused AI research lab located in the Bay Area. His commitment to this mission is profound; he has sworn off romantic relationships, choosing instead to devote his life to ensuring AI safety, and he donates a third of his income to AI safety nonprofits.
In his free time, Henry is not just working on theoretical solutions; he is actively preparing for the possibility of an AI apocalypse by constructing DIY bioshelters. During a video call from his office, he explains how easy it is to build a bioshelter capable of withstanding lethal pathogens potentially created by advanced AI. The process begins with purchasing a positively pressurized tent, commonly sold as a grow room for plants; keeping the interior at slightly higher pressure than the outside air means any leaks flow outward, so unfiltered air cannot seep in. He then recommends stacking multiple professional-grade HEPA filters in front of the air intake and filling the tent with as much shelf-stable food, water, and essential supplies as possible. Henry estimates the total cost of his bioshelter to be under $10,000, including enough food to last three years.
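Henry's figure can be sanity-checked with a quick back-of-the-envelope budget. The sketch below uses hypothetical line-item prices; the article reports only the roughly $10,000 total and the three-year food supply, so every individual number here is an assumption.

```python
# Rough budget for a DIY bioshelter along the lines Henry describes.
# All line-item prices are hypothetical placeholders, not figures from
# the article; only the ~$10,000 ceiling and three-year food supply are.

DAYS = 3 * 365   # three-year supply, per Henry's estimate
PEOPLE = 1

costs = {
    "positive-pressure grow tent": 1500,
    "stacked HEPA filters": 1000,
    "fans and spare parts": 500,
}

# Assume ~$5/person/day for shelf-stable food and ~$1/person/day for water.
costs["shelf-stable food"] = 5 * DAYS * PEOPLE
costs["stored water"] = 1 * DAYS * PEOPLE

total = sum(costs.values())
print(f"Estimated total: ${total:,}")  # → Estimated total: $9,570
```

Even with generous assumptions, the food and water for three years dominate the budget, which is consistent with a total landing just under $10,000.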
Henry prefers to remain anonymous and asked for a pseudonym due to the social stigma associated with being a prepper—particularly if his fears never materialize. However, he is not alone in his proactive approach to AI risks. Many in Silicon Valley view AI not simply as the next technological wave but as a force that could fundamentally alter our society at a rapid pace.
For a segment of Silicon Valley denizens, the urgency around AI is palpable. The rationalist community, focused on improving human rationality and morality, has become increasingly concerned about the perceived risks of AI. Meanwhile, startup enthusiasts are growing ever more optimistic about the tech's potential. Some believe we are on the verge of an age of superabundance, where nearly all intellectual labor could be automated, leading to unprecedented human flourishing. Anticipating that automation will devalue raw intelligence, some have embraced a lifestyle shift dubbed "smart-to-hot," which prizes social skills and physical attractiveness over intellectual prowess.
Jason Liu, an AI consultant, is among those who have already pivoted toward this new paradigm. After suffering a debilitating injury that halted his career as a software engineer, he embraced leisure pursuits such as jiu-jitsu and ceramics while retooling his life to prioritize social interactions. "I personally did not want to be valued for my intelligence," he explains, reflecting on how the drive for intellectual achievement had taken a physical toll on him.
For many, the fear of AI's implications has transformed their approach to life. Aella, a fetish researcher and sex worker in the San Francisco Bay Area, embodies this sentiment. "I like throwing weird orgies, and I'm like — well, we're going to die. What's a weirder, more intense, crazier orgy we can do?" she says, illustrating how her concerns about AI have led her to embrace more spontaneous and thrilling experiences. Similarly, venture capital investor Vishal Maini advocates for a "bucket-list mentality," urging people to focus on what truly matters during uncertain times.
For Holly Elmore, the executive director of the anti-AI protest group Pause AI, concerns about unchecked AI development contributed to her divorce. She felt a strong moral conviction to act against AI risks, which her ex-husband did not share. Their differing views on the urgency of AI safety bred deep resentment, culminating in their separation.
As AI continues to advance, many are reassessing their financial strategies. Daniel Kokotajlo, an AI researcher who worked at OpenAI, stopped saving for retirement, fearing the existential threats posed by AI. He emphasizes a pressing need for humanity to address these risks, having published a widely read essay titled "AI 2027," which discusses potential loss-of-control scenarios linked to rapid AI advancements.
Others, like Haroon Choudery, view the next few years as a crucial window for building generational wealth. He plans to spend down his savings, believing that the future is too uncertain to invest long-term. This sentiment resonates with many in the tech industry, where fears of obsolescence loom large. Massey Branscomb, an executive at an AI hedge fund, warns that failing to position oneself within leading AI companies could lead to a precarious future.
While some are preparing for an AI-driven apocalypse, others remain skeptical about the extent of AI's impact on society. David Thorstad, an assistant professor of philosophy, advocates for caution, suggesting that extreme views can lead to a narrow perspective on the future of AI. Meanwhile, entrepreneurs like Ulrik Horn are turning their concerns into business opportunities by developing bioshelters and resiliency consulting firms aimed at helping individuals prepare for existential threats.
In a world increasingly dominated by AI, Henry's journey illustrates the diverse reactions to this technology—from radical preparation to a focus on enjoying life in the moment. Whether viewed as a potential savior or a harbinger of doom, the implications of AI are profound, shaping not only our future society but also the very fabric of our daily lives.