AI and Society: A conversation with Gemini
Rishabh
I have 3 axioms that I would like you to explore the implications of. Assume that these are true, and think hard about their societal implications:
- Technology (especially AI) is improving at a very fast rate, and is approaching takeoff, the point where it can improve itself without much human supervision and eventually builds better energy sources, weapons, and robots than humans can themselves.
- Society’s ability to absorb the pace of this AI is very limited, and primary resources + industrial capacity remain the only valuable things as AI know-how becomes better than human know-how (especially if the AI can run very high quality simulations).
- Rich countries have embedded growth obligations, and inability to grow creates real social unrest.
Please explore the implications of this, particularly the relationships between countries and the class tensions within countries.
Gemini
Okay, let's explore the societal implications assuming these three axioms are true. This scenario paints a picture of rapid, potentially destabilizing change concentrated around AI development and resource control.
Core Tension:
The fundamental tension arises from Axiom 1 (unstoppable, self-improving AI creating immense power/capability) colliding with Axiom 2 (society's inability to adapt, shifting value solely to physical resources/industry) and Axiom 3 (rich nations' desperate need for growth to maintain social order). This creates a high-stakes environment where control over AI development and physical resources becomes paramount for national survival and internal stability.
Implications for Relationships Between Countries:
1. Intensified Resource Competition & Neo-Colonialism:
- Axiom 2 dictates that primary resources and industrial capacity become the only sources of tangible value. Countries rich in minerals, arable land, energy sources, and manufacturing capabilities gain immense strategic importance, potentially reversing historical dependencies.
- Rich, technologically advanced nations (per Axiom 3) that may lack sufficient domestic resources will face immense pressure to secure them abroad. This could lead to aggressive foreign policy, economic coercion, proxy conflicts, or even direct military interventions to control resource-rich regions – a form of AI-driven neo-colonialism.
- The ability of AI (Axiom 1) to run high-quality simulations (Axiom 2) could be used to identify optimal resource deposits or strategic industrial locations, further fueling this competition.
2. AI Arms Race & Geopolitical Instability:
- Axiom 1 explicitly mentions AI developing better weapons. Nations, particularly rich ones needing to project power or secure growth (Axiom 3), will engage in a frantic race to develop or acquire dominant AI-driven military capabilities (autonomous weapons, cyber warfare tools, strategic simulation/planning).
- The "takeoff" nature suggests this race could be short and decisive. A nation achieving a significant lead in self-improving AI could gain an insurmountable military advantage, potentially leading to preemptive actions by rivals fearing obsolescence or domination. This creates a highly unstable multipolar or potentially unipolar world order based on AI supremacy.
- Control over the infrastructure for AI (computing power, specialized chips, data centers) becomes a critical geopolitical battleground, intrinsically linked to industrial capacity (Axiom 2).
3. Formation of Techno-Resource Blocs:
- Countries might align based on complementary strengths: those leading AI development (initially, likely rich nations) forming partnerships with those controlling essential resources and industrial capacity.
- This could create distinct geopolitical blocs centered around shared AI ecosystems and resource/industrial supply chains, potentially excluding or marginalizing nations lacking either cutting-edge AI or vital resources.
4. Devaluation of Traditional Power Metrics:
- Metrics like GDP based on services or knowledge work (outside of AI development itself) become less relevant. Population size might be less of an asset and more of a liability if large segments become economically unproductive (Axiom 2). Military power becomes defined by AI capability (Axiom 1) rather than sheer numbers of personnel.
Implications for Class Tensions Within Countries:
1. Extreme Wealth Concentration & Inequality:
- Axiom 2 states that value shifts away from human know-how towards primary resources and industrial capacity. Ownership of these assets becomes hyper-concentrated.
- Simultaneously, those who own or control the rapidly advancing AI (Axiom 1) – the companies, key engineers (initially), and investors – accrue immense wealth and power.
- This creates a stark divide: a small elite controlling AI, resources, and industry versus a vast majority whose economic value has plummeted.
2. Mass Structural Unemployment & Social Irrelevance:
- As AI surpasses human know-how (Axiom 2) and robots become more capable (Axiom 1), large swathes of the workforce, including previously secure white-collar and knowledge workers, become redundant.
- This isn't just cyclical unemployment; it's a fundamental devaluation of human labor and intellect in the economic sphere as defined by Axiom 2.
3. Fueling Social Unrest (Axiom 3):
- In rich countries with embedded growth obligations, this widespread economic dislocation directly triggers the social unrest mentioned in Axiom 3. The inability of the economy to provide growth for the majority leads to instability.
- Expect protests, riots, political radicalization (both left and right), and demands for systemic change (e.g., universal basic income funded by AI/resource wealth, radical wealth redistribution, Luddite movements opposing AI).
4. Rise of a "Resource/AI Aristocracy":
- A new elite class emerges, defined by ownership and control of the means of AI production and the physical assets (resources, industry) deemed valuable by Axiom 2. Their interests may diverge significantly from the rest of the population.
- The "middle class," traditionally reliant on skills and knowledge now devalued, could be effectively wiped out or drastically diminished, removing a key stabilizing force in many societies.
5. Increased State Control vs. Demands for Redistribution:
- Governments in rich nations (Axiom 3) face a dilemma: how do they maintain order when the economy can no longer deliver broad-based growth?
- One path is increased authoritarianism: using AI-driven surveillance and control (Axiom 1's capabilities) to manage dissent and secure resources/AI development for the state/elite.
- Another path involves radical social policies: attempting to capture the wealth generated by AI and resources (Axioms 1 & 2) for large-scale redistribution (like UBI) to appease the population and fulfill a semblance of "growth" or stability (Axiom 3). The political feasibility of this is highly uncertain given the likely power of the new elite.
In Summary:
Assuming these three axioms hold true, the world heads towards a period of intense disruption. Internationally, a scramble for control over AI and physical resources redraws the geopolitical map, potentially leading to conflict and new power blocs.
Domestically, particularly in rich nations, the devaluation of human labor and know-how coupled with the need for growth creates extreme inequality and fuels significant social unrest, forcing governments towards potentially radical solutions or increased repression.
The society that emerges would be vastly different, structured around the control of AI and tangible assets rather than human capital in its traditional sense.
Rishabh
Now think hard about whether the axioms are likely to be true. Why or why not?
Gemini
Okay, let's critically evaluate the likelihood of each axiom being true based on current trends, expert discussions, and potential counterarguments.
Axiom 1: Technology (especially AI) is improving at a very fast rate, and is approaching takeoff, the point where it can improve itself without much human supervision and eventually builds better energy sources, weapons, and robots than humans can themselves.
Arguments for Likelihood:
- Rapid Recent Progress: The pace of improvement in AI capabilities, particularly in areas like large language models (LLMs), image generation, and specific scientific problems (e.g., protein folding), has demonstrably accelerated in recent years. This lends credence to the "very fast rate" part.
- AI Assisting Development: AI tools are already being used to assist in chip design, code generation, and materials discovery. This suggests a pathway, albeit perhaps slow initially, towards AI contributing to its own improvement and related fields.
- Investment and Talent: Unprecedented levels of financial and intellectual capital are being poured into AI R&D globally, increasing the probability of breakthroughs.
- Potential of Foundational Models: The development of large, general-purpose models suggests that capabilities might scale rapidly with more data and compute, potentially leading to unexpected emergent abilities.
Arguments Against/Uncertainties:
- Is Current Progress Leading to "Takeoff"? Many experts debate whether current progress, largely driven by scaling existing architectures (like transformers) on massive datasets, is sufficient for true Artificial General Intelligence (AGI) or recursive self-improvement ("takeoff"). Fundamental conceptual breakthroughs might still be required, and progress could plateau.
- The Hurdle of Autonomous Self-Improvement: Designing a better AI is an incredibly complex task requiring deep understanding, creativity, and goal formulation. An AI doing this reliably "without much human supervision" is a significant leap beyond current capabilities, where human feedback and goal-setting are crucial.
- Physical World Constraints: While AI can design, building better energy sources, weapons, and robots involves complex engineering, materials science, manufacturing, supply chains, and interaction with the physical world. Progress in these areas is often slower and more resource-intensive than pure software development. AI might accelerate design, but implementation remains a bottleneck.
- Defining "Better": "Better" energy, weapons, or robots involves complex trade-offs (cost, safety, ethics, efficiency, deployability) that still require human judgment and value alignment, which is a major unsolved problem in AI.
Overall Assessment (Axiom 1): The "very fast rate" of improvement is arguably true currently in specific domains. The "approaching takeoff" and autonomous creation of superior physical technologies are far more speculative. While plausible as a long-term possibility debated by experts, it's not a certainty, and the timeline is highly contested (ranging from years to many decades or never). Likelihood: Partially true (fast improvement), highly uncertain (takeoff, autonomous superhuman capabilities in physical domains).
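One standard toy model, offered here purely as an illustration rather than anything stated in the axioms, makes the "takeoff" dispute concrete. Suppose building the first AI generation takes time $t_0$, and each generation, being more capable, builds its successor in a fraction $k$ of the time its predecessor needed. The total time to an unbounded number of generations is then a geometric series:

$$
T = \sum_{n=0}^{\infty} t_0 k^n = \frac{t_0}{1-k} \quad \text{(finite if } 0 < k < 1\text{)}.
$$

If $k < 1$ holds persistently, infinitely many generations arrive in finite time: a takeoff. If $k \ge 1$ (each generation is at least as hard to improve as the last, say because easy gains are exhausted or physical bottlenecks dominate), the sum diverges and progress stays incremental. Much of the expert disagreement summarized above reduces to the value and stability of $k$.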
Axiom 2: Society’s ability to absorb the pace of this AI is very limited, and primary resources + industrial capacity remain the only valuable things as AI know-how becomes better than human know-how.
Arguments for Likelihood:
- Pace Mismatch: Historically, societal adaptation (education reform, job market shifts, legal frameworks, ethical norms) lags behind rapid technological change. If AI progress is truly exponential (per Axiom 1), this gap could become critically wide.
- Automation Potential: AI combined with robotics (enabled by Axiom 1) has the potential to automate a vast range of cognitive and manual tasks, potentially devaluing existing human skills and labor on an unprecedented scale.
- Tangibility Bias: In times of uncertainty and dematerialization of value (like knowledge work being automated), tangible assets like land, resources, and factories can be perceived as more secure stores of value.
Arguments Against/Uncertainties:
- Value is Not Monolithic: It seems unlikely that only primary resources and industrial capacity would retain value. Human ingenuity will likely still be needed, and may become more valuable, in roles that:
  - Direct and control the AI (setting goals, ensuring alignment).
  - Manage complex systems integrating AI, resources, and industry.
  - Demand deep human understanding, empathy, and creativity (care, arts, complex negotiation, leadership).
  - Adapt societal structures.
- Human Adaptability: While slow, societies and individuals do adapt. New job categories emerge that we can't yet predict, education systems eventually adjust, and social safety nets can evolve (e.g., UBI discussions). The capacity to absorb isn't zero.
- AI Itself is Know-How: The AI systems, the data they run on, and the expertise to build/manage them represent immense "know-how." Value wouldn't just be in raw materials but also in the sophisticated technological capital (both hardware and software).
- Demand for Human Experience: Value may shift towards human-centric services, experiences, arts, and crafts that AI cannot replicate authentically.
Overall Assessment (Axiom 2): The "limited ability to absorb" is highly plausible and a significant concern. The potential for devaluation of current human know-how is real. However, the claim that value shifts only to resources/industry seems overly extreme. Value is likely to become more complex, potentially concentrating around AI control, unique human skills, and physical assets. Likelihood: Partially true (limited absorption, shift in value), unlikely in its extreme exclusivity (value only in resources/industry).
Axiom 3: Rich countries have embedded growth obligations, and inability to grow creates real social unrest.
Arguments for Likelihood:
- Economic Structure: Modern capitalist economies in wealthy nations are heavily reliant on GDP growth for financial stability, investment returns, funding pensions, servicing debt, and government budgets (a debt-dynamics sketch follows this list).
- Political Imperative: Economic growth is a core measure of political success. Governments are generally punished by electorates for economic stagnation or decline.
- Social Expectations: Citizens often expect improving standards of living or at least economic stability, which has historically been linked to growth. Lack of opportunity or perceived decline fuels dissatisfaction.
- Historical Correlation: Periods of low/no growth or recession in developed countries consistently correlate with increased social tensions, political polarization, and sometimes significant unrest.
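The "embedded growth obligation" can be made precise with the standard debt-dynamics identity (an illustrative addition, not something stated in the conversation). With debt-to-GDP ratio $d_t$, nominal interest rate $r$, nominal growth rate $g$, and primary surplus $s_t$ as a share of GDP:

$$
d_{t+1} = \frac{1+r}{1+g}\, d_t - s_t.
$$

When $g \ge r$, the debt ratio is self-stabilizing even under modest deficits; when $g < r$, it grows without bound unless offset by sustained primary surpluses, i.e., austerity, which is exactly the politically costly path that tends to produce the unrest this axiom predicts.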
Arguments Against/Uncertainties:
- Potential for System Change: While deeply embedded, the growth imperative isn't an immutable law of nature. A sufficiently large shock (like transformative AI) could force a rethink of economic models (e.g., towards well-being, sustainability, or post-scarcity models), though this would be politically and socially very difficult.
- Distribution Matters: Unrest isn't just caused by lack of aggregate growth, but often by its unequal distribution. Growth benefiting only a few while the majority stagnate can also cause unrest. Conversely, stability might be maintained even without high growth if resources are distributed widely (e.g., via strong social safety nets funded by AI productivity).
- Defining "Growth": If AI leads to massive productivity gains and abundance, traditional GDP might become a poor measure of societal well-being. The "obligation" might shift from growing GDP to distributing abundance or improving quality of life metrics.
Overall Assessment (Axiom 3): Within our current socio-economic paradigm, this axiom appears highly likely to be true. The political and economic systems of rich countries are structured around growth, and its absence reliably generates social pressure and instability. While alternative futures are conceivable, the path dependency of current systems makes this axiom robust in the near-to-medium term.
Conclusion on Likelihood:
- Axiom 3 (Growth Obligation = Stability Link) seems the most grounded in current reality.
- Axiom 1 (AI Takeoff/Superpowers) contains elements reflecting current rapid progress but ventures into highly speculative territory regarding the speed, autonomy, and physical-world impact.
- Axiom 2 (Slow Absorption / Value Shift) accurately identifies the challenge of societal adaptation but likely oversimplifies the future of value by excluding crucial human roles and the value of AI systems themselves.
Therefore, the scenario you asked me to explore is based on one highly likely premise (Axiom 3), one plausible but uncertain premise regarding the degree and speed of change (Axiom 1), and one premise that identifies a real challenge but offers an overly extreme outcome (Axiom 2).
The direction of the explored implications (increased resource competition, inequality, social tension) seems plausible if AI progress continues rapidly, even if the specific extreme conditions of the axioms don't fully materialize exactly as stated.
Rishabh
Alright, now can you think very hard about the implications of these revised axioms?
Gemini
Okay, let's work from a revised understanding of the axioms based on our previous discussion. Assume the axioms are now more nuanced:
1. Revised Axiom 1: AI is improving rapidly, yielding powerful tools that significantly enhance design, simulation, analysis, and automation, often requiring human direction and integration, rather than achieving imminent autonomous "takeoff."
2. Revised Axiom 2: Society adapts slowly to AI's pace, causing labor disruption and a significant shift in economic value towards tangible assets (resources, industry, infrastructure) and control over AI systems, while certain high-level human skills (strategic, creative, technical, interpersonal) retain or increase in value.
3. Revised Axiom 3: Rich countries maintain embedded growth expectations, and significant economic disruption or stagnation (especially if benefits aren't widely shared) continues to fuel social unrest and political instability.
Here's an exploration of the implications under these revised, arguably more plausible, axioms:
Core Tension (Revised):
The central tension remains the mismatch between the accelerating capabilities provided by AI tools (RevAxiom 1) and society's slower adaptation, particularly in labor markets and value perception (RevAxiom 2). This occurs within rich nations that are structurally dependent on growth and stability (RevAxiom 3).
The result is a dynamic where harnessing AI effectively becomes critical for economic competitiveness and managing social peace, but doing so exacerbates underlying inequalities related to asset ownership and skills.
Implications for Relationships Between Countries (Revised):
1. Strategic Competition over AI Ecosystems and Resources:
- The race continues, but focuses more on building and controlling the entire AI ecosystem: leading AI models, specialized hardware (semiconductors), data infrastructure, and the talent pipeline (RevAxiom 1 & 2).
- Competition for primary resources and resilient industrial capacity (RevAxiom 2) intensifies, seen as foundational for deploying AI at scale and securing national economic health. Resource nationalism and supply chain protectionism are likely to increase.
- Nations will use AI tools (RevAxiom 1) for competitive advantage in trade, industrial optimization, resource discovery, and economic intelligence.
2. Rise of Techno-Economic Blocs:
- Alliances will likely form based on shared AI platforms, technological standards, semiconductor access, and complementary resource/industrial strengths. This could lead to a fragmentation of the global digital and economic order.
- Dependency on foreign AI platforms or critical hardware components becomes a major geopolitical vulnerability, leading countries to pursue technological sovereignty.
3. AI-Augmented Military Modernization:
- Rather than just a race for hypothetical superweapons, the focus will be on integrating AI tools (RevAxiom 1) across military domains: enhanced intelligence analysis, logistics optimization, autonomous drone swarms (under human oversight), cybersecurity, sophisticated simulations for strategy, and command-and-control systems.
- An "AI advantage" means having better integrated, faster, and smarter military systems, leading to a continuous modernization race among major powers.
4. Widening Development Gaps:
- Nations adept at integrating AI tools (RevAxiom 1) with their industrial base and resource management (RevAxiom 2) will likely pull ahead economically.
- Countries lagging in AI adoption, infrastructure, or the necessary skilled workforce (RevAxiom 2) may find it harder to compete, potentially becoming more dependent on leading nations or blocs.
Implications for Class Tensions Within Countries (Revised):
1. Aggravated, More Complex Inequality:
- Wealth concentration intensifies, but along multiple axes (RevAxiom 2):
- Owners of capital (resources, industry, infrastructure).
- Owners/controllers of key AI companies and platforms.
- A high-skilled "AI Gentry" – individuals whose expertise (technical, strategic, creative) is amplified by AI tools and remains in high demand.
- Simultaneously, individuals whose tasks are readily automated by improving AI tools (RevAxiom 1) face wage stagnation, job displacement, or precarious employment (RevAxiom 2). This affects both blue-collar and white-collar workers performing routine tasks.
2. Labor Market Polarization and Anxiety:
- Demand grows at the high end (AI developers, strategists, complex problem-solvers, certain creative roles) and potentially in some non-automatable service/care/trade sectors.
- The middle shrinks as routine cognitive and manual jobs are reduced (RevAxiom 1 & 2). This fuels widespread economic anxiety, even among those currently employed, fearing future obsolescence.
3. Persistent Driver of Social Unrest:
- The combination of visible wealth concentration, widespread job insecurity, and a sense that the economy isn't delivering broad-based prosperity directly fuels the social unrest predicted by RevAxiom 3.
- Expect continued or increased political polarization, populism, demands for stronger social safety nets (like UBI), calls for regulating AI's impact, higher taxes on capital/wealth, and potentially anti-technology sentiment.
4. Intensified Pressure on Education and Social Systems:
- A huge strain is placed on education systems to adapt curricula and promote lifelong learning/reskilling (RevAxiom 2). However, access to quality retraining might itself become unequal.
- Governments face immense pressure (RevAxiom 3) to manage these transitions – funding reskilling programs, reforming welfare states, potentially implementing industrial policies to steer AI development, and mediating conflicts between capital, AI developers, and labor.
5. Emergence of New Social Debates:
- Questions about data ownership, algorithmic bias, the ethics of AI decision-making, and the very definition of "work" and "value" in an AI-augmented economy become central political and social issues.
In Summary (Revised Axioms):
Even with these more moderate axioms, the picture remains one of significant societal stress. Internationally, competition sharpens around controlling the means of AI production and the physical resources needed to fuel modern economies.
Domestically, while not necessarily creating a purely bifurcated society, AI accelerates inequality along lines of capital ownership and AI-complementary skills. This disrupts labor markets and puts immense strain on social cohesion and political systems tasked with managing both the pursuit of growth (or stability) and the fallout from technological change.
The challenges are profound, requiring proactive adaptation and difficult political choices regarding distribution, regulation, and the future social contract.