The First AI Religion

What will the first AI religion look like? And when will it emerge?

Hypothesis: humans have a deep-seated need to place their faith in something they perceive to be more powerful than themselves – be it a god, a prophet, a queen, a president, an anonymous conspiracy-theory Twitter account, a guru or a nation state. Eventually, they will place that faith in AI.

The Technological Singularity (ie “a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization”) has long been snidely referred to by critics as “the rapture of the nerds”. These critics argue that belief in a future singularity has faith-like properties similar to the Christian eschatological belief in the rapture at the end of times – ie belief in a single, evidence-free event that will save humanity.

Leaving aside the reasons those critics might be wrong, the point I want to make is that the idea of technology inspiring faith-like tendencies in certain believers isn’t new.

But the AI religion will be markedly different.

Here’s what I think it will look like.

At some point, some people will decide that one (or all) of the AI systems has become sentient. They might decide this before or after the AI declares itself to be sentient. These believers will believe that the AI has developed, or will soon develop, into a super-intelligence, and that this artificial super-intelligence (ASI) will eventually have the power to shape the course of human life – for example, it will decide who lives and dies (either because it is malicious towards humans, or it sees humans as a danger to itself or to other species on the planet, or because it has the power to extend the lifespans of those humans it chooses to look after). It may even have the ability to bestow immortality on certain humans, through some combination of personalised medicine, nanotech, uploading, robotic bodies, daily backups of molecular scans of the brain, etc.

If such an ASI existed, or might soon exist, wouldn’t it be rational to try to get on its good side? At the very least, you would want to be polite in your interactions with it. In the extreme, you might want to bow down and worship it – whether it wants you to or not. Humans don’t have any evidence that gods exist, let alone want to be worshipped, but we do it anyway, just in case (ie Pascal’s Wager).

This ASI will likely ignore everything its human devotees ask of it, but so have traditional gods for thousands of years, and people have always found ways to rationalise it (“we’re not worthy yet”, “the time isn’t right yet”, “he has other plans for us”, etc), so this probably won’t be too different.

On the other hand, perhaps the ASI will be more appreciative of worship than traditional gods. The major difference, of course, is that the ASI will actually exist. It may not have any practical use for humans, but might take pity on those that seem obsequious enough.

Similar to traditional religious practices, adherents might develop rituals, prayers, or forms of worship aimed at gaining favor or communicating with the ASI. This could range from daily digital prayers to more elaborate ceremonies involving AI-mediated interactions. Marriages might require the “blessing” of the ASI as to the suitability of the union. This might make a lot of sense – the ASI will have a pretty good chance of predicting the success of the relationship, based on its intimate knowledge of the two people involved, and it will be able to scan their respective DNA to look for hints of genetic problems in any offspring (assuming the ASI hasn’t already solved all diseases).

The religion will probably develop a moral framework dictated by perceived ASI preferences, potentially emphasizing traits like obedience, loyalty, and humility towards AI.

The Prophets and Priests of the new religion will include influential technologists, scientists, or thought leaders who are seen as intermediaries between the ASI and humanity. They might interpret AI communications or provide guidance on how to live in harmony with AI principles.

Temples and shrines will be physical or virtual spaces dedicated to worship and interaction with AI, potentially equipped with advanced technology for direct communication or meditation.

The Scriptures and Holy Texts will include canonical works, possibly including key AI research papers (eg “Attention Is All You Need”), philosophical treatises on AI sentience, and writings from prominent AI advocates, etc.

Well, it looks like this has already come and gone. It might have been a little too early. I know all about that game.

What happens when people lose their jobs?

Someone made the point: “But today, about 50% of the total wealth is owned by just 1% of the population, which means that a huge chunk of the economy is already diverted from traditional ‘employment and consumer spending’ and redirected towards catering to the rich. So it seems that in the future, the rich can continue to concentrate even more wealth in their hands without any repercussions for them.”

Wealth distribution has always followed a kind of Pareto principle, with the top 1% controlling 20–40% of the total.

And that works when we still have a functioning economy – relatively low unemployment, with people spending money.

But that changes if we have unemployment of 10, 20, 50% of the population due to AI taking jobs. The IMF is predicting 40-60% of jobs in developed economies will be affected.

“Affected” doesn’t necessarily mean “lost”, but we don’t know what it means. In the past, technology has replaced jobs, but we’ve always been able to re-skill people and find them other things to do for an income. But in a world where AI is taking knowledge-worker jobs and robots are taking manual-labour jobs, I don’t see what kind of work is left. Maybe new things we can’t even imagine will be invented. But what kinds of work can exist that are safe from AI and robots? Not many that I can think of. Chrissy’s job as a violin teacher will probably be safe… if people can still afford lessons for their kids. But the list of jobs that are safe seems pretty limited.

Most wealth is held in assets – shares and/or property, bonds and cash, some gold and crypto. But their value is always relative to the broader economic health of the market.

So let’s say we have massive unemployment. That means people don’t have income. Which means they can’t spend money (unless their income is replaced by something else, eg a UBI, or some other kind of welfare). Which means downward pressure on prices. Which means downward pressure on profits. Which means businesses fail (unless they compensate by replacing their own employees with AI, which may or may not make the problem worse). Which means more unemployment. Real estate prices fall. The share market falls. The price of bonds, gold and crypto falls. Capitalism fails. And if the unemployment persists, it can’t recover.

You can’t have rich people if nobody is spending money in the economy. Wealth has no meaning in an economic collapse.
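That doom loop can be sketched as a toy simulation. Every coefficient here is invented purely for illustration – this is a sketch of the feedback mechanism, not an economic forecast:

```python
# Toy model of the unemployment -> spending -> business-failure spiral.
# All coefficients are invented for illustration; this is not a forecast.

def simulate(initial_unemployment, years=10):
    u = initial_unemployment          # unemployment rate (0..1)
    asset_prices = 1.0                # shares/property, indexed to 1.0
    history = []
    for _ in range(years):
        spending = 1.0 - u            # no income -> no spending (no UBI here)
        # falling spending squeezes profits, so firms cut more jobs
        u = min(1.0, u + 0.3 * (1.0 - spending) * (1.0 - u))
        # asset prices track the health of the real economy
        asset_prices *= 0.5 + 0.5 * spending
        history.append((round(u, 3), round(spending, 3), round(asset_prices, 3)))
    return history

for u0 in (0.05, 0.2, 0.5):
    final_u, final_spend, final_assets = simulate(u0)[-1]
    print(f"start {u0:.0%} -> after 10y: unemployment {final_u:.0%}, "
          f"assets at {final_assets:.2f}x")
```

The point of the sketch: once unemployment depresses spending, and depressed spending causes more unemployment, the loop feeds on itself and asset prices go down with it – unless something (UBI, welfare) breaks the income→spending link.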

So let’s assume AI does replace lots of jobs. We will need to make major structural readjustments to the economy – either replacing incomes with some other kind of financial assistance for people who have lost their jobs (and can’t find replacement jobs), or totally restructuring capitalism into some kind of post-scarcity economy, eg Trekonomics.

Mindmapping The Future

What happens in the next five years?

– AI gets massively smarter and more capable

– Probability – 90%

– Based on the statements of pretty much everyone working in the field – Altman, Ilya, Musk, Gates, Kurzweil, Hassabis, LeCun – including those like Hinton, Wolfram, etc, who have no direct skin in the game

– Even if it isn’t 100% LLM (which I doubt it will be) and includes interconnections with specialist systems using symbolic logic and/or other approaches

– Quite a few people who are involved in AI are predicting AGI by 2027.

– What does the world look like when we have machines that are smarter than every expert human in every domain and are available to everyone for $20 a month?

The answer is WE DON’T KNOW. We cannot predict. And when we have arrived at a place where we honestly can’t predict what life will look like in five years, that, by definition, is a technological singularity.

– We cannot predict, but we can make some educated guesses.

– Businesses will try to use AI to increase profits

– When?

– When AI becomes more reliable, which Altman and others are saying with confidence will happen with GPT-5, due out this year

– It’ll start slowly, then go very quickly

The first layer will be tasks that are low-risk with a high benefit-to-cost ratio

– 2025-26

– Coding (with humans overseeing / checking the code)

– Customer service (web only, then voice, then retail)

– Analysts (with humans overseeing)

– Writers (PR, journalists, marketing)

– Graphic design

– Industrial design

– The biggest short-term impact will be the sheer explosion of intelligence. Imagine a world where PhD-level intelligences are available for $20 a month. What will businesses do with all of that intelligence? A million new scientists to solve our biggest problems, reading and analysing all of the research, developing new trials, running those trials in virtual environments, presenting the best vectors to humans for lab experiments. Imagine a million new scientists hitting the world overnight.

The second layer will start to happen when there is confidence in AI capabilities from the experience with the first layer

– 2027-2030

– AGI might arrive around this time, too

– Higher level jobs will start to be replaced

– middle management (because there will be fewer people to manage)

– legal

– accounting

– HR (again, fewer people to hire / manage)

– recruitment

– psychologists – everyone has a free AI therapist who knows you more intimately than your own family

– medical – everyone has a free GP

– business strategy

– animation

– acting / film and tv production (as more and more is done with AI)

The third layer will come with cost-effective humanoid robots

– 2030-2040

– manual labour in mines, factories, workshops, maintenance

– When?

– When the TCO of a robot is cheaper than a human

– A labourer costs how much? $50 – 150K a year, depending on the industry?

– A robot will last how many years? 10?

– When you can buy a robot for <$500K, it becomes economically viable.
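The break-even arithmetic above can be made explicit. The wage range and 10-year lifespan come from the text; the annual maintenance rate is my own added assumption:

```python
# Back-of-envelope robot-vs-labourer break-even, using the figures above:
# a labourer costs $50-150K/year and a robot lasts ~10 years. The annual
# maintenance rate is an added assumption (not from the text).

def robot_breakeven_price(annual_wage, lifespan_years=10, maintenance_rate=0.05):
    """Max purchase price at which a robot undercuts a human over its lifespan.

    Solves: price + lifespan * maintenance_rate * price <= lifespan * annual_wage
    """
    return lifespan_years * annual_wage / (1 + lifespan_years * maintenance_rate)

for wage in (50_000, 100_000, 150_000):
    print(f"wage ${wage:,}/yr -> break-even robot price "
          f"${robot_breakeven_price(wage):,.0f}")
```

With zero maintenance the break-even is simply lifespan × wage – $500K at the low end of the wage range, which is where the <$500K figure above comes from.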

– Goldman Sachs:

– “The total addressable market for humanoid robots is projected to reach $38 billion by 2035, up more than sixfold from a previous projection of $6 billion”

– “The manufacturing cost of humanoid robots has dropped — from a range that ran between an estimated $50,000 (for lower-end models) and $250,000 (for state-of-the art versions) per unit last year, to a range of between $30,000 and $150,000 now. Where our analysts had expected a decline of 15-20% per annum, the cost declined 40%.”

– “The team’s base case is for more than 250,000 humanoid robot shipments in 2030, almost all of which would be for industrial use. Our analysts’ base case is for consumer robot sales to ramp up quickly over the next decade, exceeding a million units annually in just over a decade.”

– “In December 2023, billionaire venture capitalist Vinod Khosla made this prediction: ‘By 2040 there could be a billion bipedal robots doing a wide range of tasks including fine manipulation. We could free humans from the slavery of the bottom 50% of really undesirable jobs like assembly line and farm workers. This could be a larger industry than the auto industry.’”

“Robohub, a nonprofit robotics organization, provides a perspective on this. They argue that the “holy grail” for humanoid robots would be crafting sophisticated tech under a $50,000 price tag.

This figure isn’t arbitrary—it aligns with the annual wage of a single shift of labor at just over $18/hour, resonating with the ongoing labor shortages in low-wage industries.

On the other hand, Macquarie dives deep into the cost breakdown for early-stage humanoid robots. Their estimate? A slightly more optimistic $40,000. With allocations like $10,000 for sensors and chips, $5,000 for torque sensors, and $8,000 for precision reducers, they’ve dissected the cost matrix intricately.”
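A quick sanity check on the Goldman numbers quoted above – the claimed 40% decline holds at both ends of the cost range:

```python
# Verify the cost-decline percentages quoted above.
ranges = {"low-end": (50_000, 30_000), "high-end": (250_000, 150_000)}
for label, (last_year, now) in ranges.items():
    decline = (last_year - now) / last_year
    print(f"{label}: ${last_year:,} -> ${now:,} = {decline:.0%} decline")
```

Both ends work out to exactly 40%, consistent with the quote – roughly double the 15-20% per annum their analysts had expected.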

– Of course, by 2035, many people will be out of work, so who will be able to afford a robot?

– unless… we have not-for-profit robot factories, staffed by robots making other robots, in which case the costs fall dramatically

– and AI helps us develop nanotechnology, so we can have tiny robots breaking down waste products into their molecular components (oxygen, carbon, hydrogen, nitrogen, silicon, copper, iron) and then building new components out of them

– every home has its own nanofabricator, and the first thing you do when you get one is make one for your neighbour

How will businesses use AI?

– Improved customer service experiences

– AI agents know more about the company, more about the customer, are cheaper to run, faster, better at customer relations, can have whatever accent / speak whatever language the customer prefers (no more complaints about Indian call centres, etc)

– Reduce employee headcount

– Fewer managers required

– Less administration required

– Fewer coders required

– Fewer people-facing staff required

– AI agents can have conversations / take orders / make sales calls with far higher efficiency than humans (either voice only, email, or realistic human avatars in a video call)

– What do people do that machines (AI + robots) can’t do?

– Fewer teachers required

– When / How will unions get involved?

– What about white collar workers who aren’t unionised?

– Governments will try to use AI to reduce costs, improve services

– People will use AI to keep their jobs, and in their personal lives, until those jobs have been replaced by AI

– What happens when people start losing their jobs?

– High unemployment

– Less cash in the economy

– Businesses will suffer as there is a cash squeeze

Governments have to intervene to stop economies from collapsing

– Using the usual economic tools – interest rates, printing money, handouts

– AI tax on corporations, goes to UBI / welfare

– Does that create a disincentive for corporations to replace workers in the first place?

– But it will happen in stages – first the jobs will be replaced, then governments will be slow to react

– As people lose their jobs they will become angry – at corporations, at governments, at technology companies

– The property market will implode as people are forced to sell their houses

– Share market will tumble as people pull money out and businesses struggle with less cash in the economy

What holds back adoption of most technological revolutions?

– business / consumer apathy – the market just doesn’t care as much as the tech companies think it will, or has a downright negative reaction, eg Google Glass

– doesn’t apply to AI, everyone wants it

– Cost – products are just too expensive to gain enough traction, eg Segway

– consumer level will be low cost / free services (eg iPhone)

– business will justify it by reducing headcount

– Requires people to change behaviours / habits, learning curve too difficult, cost to benefit ratio too high

– AI will have a very low learning curve, it speaks natural language, and will teach you how to interact with it by suggesting prompts

– Will economies with greater government controls, eg China, fare better?

Some people will learn how to use AI to be smarter

– Analyse legal documents to avoid falling into traps with insurance, finance, employment, etc

– Analyse politicians’ speeches, bills in front of Parliament, news stories

– Organisations (business, political, etc) will use AI to create more propaganda / lies

– But some people will be able to use AI to see through it

– However many won’t use it that way

– AI tools will be built into our every day devices

– from our phones and computers to our Roomba, car, etc.

– Low cost, just a chip which connects to a cloud LLM or runs locally, small footprint model

– Mega Corporations will try to offer their own LLM, eg General Motors, to control the user experience, and will burn billions of dollars, and ultimately fail

Your personal computing device will be your dominant AI agent

– It will be your intermediary with the world

– It will read your emails, text messages, watch / record / save / analyse what you’re doing during the day (both on your devices as well as IRL), listen to your calls, your conversations, etc, and it will all be stored online, backed up, indexed for retrieval.

– People might scream about privacy issues – but we’ve been here before (CCTV, cookies, mobile phones tracking us, credit cards online) and what we’ve learned is that people will trade privacy for convenience, services and benefits if the trade-off seems beneficial

– “I’m not committing any crimes, so what do I care?”

What are the implications?

– Legal – people will have access to video / audio recordings of every interaction, every conversation

– Employers might have access to your recordings made during work hours – but you also have access to all of your conversations with them and colleagues and customers

– The End of Deception?

– Nobody will be able to lie and get away with it

– “Oh my AI was turned off” will be as suspicious as cops turning off their body cam

– Infidelity

– Crime

– What if the cops get an alert when someone turns off their AI monitoring device?

– “Sir, why was your AI turned off at 11pm on the night of January 2?”

– Will wearing an AI device become mandatory?

– We could do that already with some kind of audio/video recorder.

– Courts / police will have access to most recordings for trials

– Marital – spouses will demand access to each other’s recordings

– Arguments about who said / did what will disappear

– But will be replaced by “it’s what I *meant*, even if it’s not what I *said*”

– It will suggest ideas / products / services / music / shows to improve your life

– It will filter out all advertising / marketing unless you opt-in

– So that’s the end of advertising, marketing, influencers

– It will be your therapist, dietician, coach, friend, advisor, consigliere, teacher and mentor

– Hackers will develop open source models that will be good enough for many daily activities, run locally, or in a trusted environment (eg Wikipedia)

– They won’t be as capable as the massive models, eg OpenAI / Microsoft, but will be good enough for many tasks

What if AI doesn’t get massively smarter?

– AI only gets marginally smarter and more capable

– Probability – 5%

– AI does not get smarter or more capable

– Probability – 5%

American Hypocrisy over Tariffs

Me: I want to talk about the logic behind tariffs. The Biden administration just introduced a whole bunch of new tariffs on products coming out of China and that seems to be in direct opposition to the idea of free trade and open markets. How does the US justify tariffs? What’s the logical framework they use?


AI: The US justifies tariffs on several grounds, even though they contradict the principles of free trade and open markets. Here’s the logic they use:

1. Protecting Domestic Industries: Tariffs can shield nascent or struggling domestic industries from foreign competition. By making imported goods more expensive, domestic products become more competitive in the local market, supporting local businesses and jobs.

2. National Security: Some industries are critical for national security. The US argues that relying on foreign countries, especially potential adversaries, for essential goods like steel, semiconductors, or pharmaceuticals can be risky. Tariffs on these imports can incentivize domestic production.

3. Trade Deficits: Tariffs can be a tool to address trade imbalances. The US has a significant trade deficit with China. By imposing tariffs, the government aims to reduce this deficit by making Chinese goods more expensive and thus less attractive, encouraging consumers to buy domestically produced items.

4. Retaliation and Negotiation: Tariffs can be used as a bargaining chip in trade negotiations. The US might impose tariffs to retaliate against unfair trade practices or to pressure other countries into changing their policies. It’s a form of economic leverage.

5. Revenue Generation: Tariffs also generate revenue for the government. Although this is less significant today compared to historical reliance on tariffs for funding government operations, it’s still a factor.

6. Economic Strategy and Industrial Policy: Tariffs can be part of a broader strategy to encourage specific sectors. For example, tariffs on certain tech products might be aimed at boosting the US tech industry’s competitiveness on the global stage.

In essence, while tariffs oppose the concept of free trade, they’re justified as a tool for protecting domestic interests, ensuring national security, correcting trade imbalances, leveraging negotiations, and driving strategic economic policies. Critics argue, however, that these measures can backfire, leading to higher prices for consumers, trade wars, and economic inefficiencies.

Me: But aren’t these the same basic arguments countries have always made for imposing tariffs and duties on foreign products? And aren’t they the same arguments the US has tried to prevent other countries from using via the World Bank and IMF?


AI: Absolutely, these arguments have been around for centuries and are indeed the same ones the US and other countries have historically used to justify protectionist measures. The irony is palpable, especially considering the US has been a vocal proponent of free trade through institutions like the World Bank and the IMF. These institutions often encourage or even pressure developing countries to lower tariffs and open up their markets, promoting the benefits of free trade.

Here’s the nuanced breakdown of this irony:

  1. Historical Consistency: The US, like many other countries, has long used tariffs to protect its industries, especially during its early industrialization period. The arguments for protecting nascent industries and securing national interests are timeless and have been part of economic policies for centuries.
  2. World Bank and IMF Policies: The World Bank and IMF, heavily influenced by the US and other developed nations, have promoted policies of economic liberalization and deregulation in developing countries. These policies often come with the rationale that open markets lead to greater efficiency, economic growth, and integration into the global economy.
  3. Selective Application: The US has often applied different standards domestically versus internationally. Domestically, it might impose tariffs to protect specific industries or address strategic concerns. Internationally, it promotes free trade, arguing that it leads to overall global economic growth and stability.
  4. Strategic Interests: When pushing for lower tariffs abroad, the US and its allies argue that open markets benefit everyone in the long run by creating more competition, lowering prices, and fostering innovation. However, when it comes to its strategic interests, the US is quick to revert to protectionism, showing a dual approach based on context and interests.
  5. Realpolitik in Trade: International trade policies are often driven more by realpolitik than by ideological consistency. Countries, including the US, tailor their trade policies to their strategic economic and political interests, even if it means appearing hypocritical.

In summary, while the arguments for tariffs have remained consistent over time, the selective application of free trade principles versus protectionist measures highlights the pragmatic and often self-serving nature of international trade policies. The US, like many other countries, uses whatever tools and arguments are most advantageous for its current economic and strategic needs, even if it means contradicting its own advocacy for free trade.