AI Hysteria Masks Power Grab by Tech Elites, Threatening Workers
A New Yorker report exposes Sam Altman's dangerous influence over AI, raising concerns about job displacement, corporate control, and the erosion of democratic values.

The escalating anxieties surrounding artificial intelligence (AI), amplified by a recent exposé in the New Yorker, demand a critical lens on the concentration of power in the hands of tech elites like OpenAI's Sam Altman. While concerns about AI's potential impact on society are valid, the current discourse often overlooks the systemic inequalities and power dynamics that shape how the technology is developed and deployed.
The article traces growing apprehension about AI's impact on society, from job displacement to existential threats. The author, Emma Brockes, describes her anxieties shifting from the local, her own income and her children's future job prospects, to broader fears of societal disruption. That shift was prompted by the New Yorker piece, which depicts Altman as a controversial figure with significant influence over the trajectory of AI development. It is crucial to recognize, however, that these anxieties are not about a neutral technology but about the choices being made by those who control its development.
The New Yorker investigation, authored by Ronan Farrow and Andrew Marantz, characterizes AI as both a technological story and a power story, with Altman at its center. The article describes Altman's leadership at OpenAI as cult-like and indifferent to risk, drawing comparisons to other tech leaders but with potentially graver consequences given the nature of AI. Karen Hao's book, 'Empire of AI,' previously raised similar concerns about Altman's leadership and the dangers of unchecked AI development. This pattern of unaccountable leadership in the tech sector should alarm us, particularly as AI is integrated into ever more aspects of life.
The 'alignment problem,' where AI could potentially outmaneuver human control, is not merely a technical challenge; it is a reflection of the values and priorities embedded within these systems. If AI is developed primarily by corporations driven by profit, it is likely to exacerbate existing inequalities and further concentrate wealth and power in the hands of the few.
When Brockes asked ChatGPT to summarize the New Yorker article, it returned what she considered a bland and insufficient overview, deepening her concern that AI can downplay or obscure critical information. The episode illustrates the risk of relying on AI systems for unbiased information when those systems are controlled by powerful corporations with agendas of their own. Control over information flows is a central part of AI's potential to reshape society.