The First AI Religion

What will the first AI religion look like? And when will it emerge?

Hypothesis: humans have a deep-seated need to place their faith in something they perceive to be more powerful than themselves – be it a god, a prophet, a queen, a president, an anonymous conspiracy-theory Twitter account, a guru or a nation state. Eventually, they will place that faith in AI.

The Technological Singularity (ie “a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization”) has long been snidely referred to by critics as “the rapture of the nerds”. These critics argue that belief in a future singularity has similar faith-like properties (ie belief in a single event that will save humanity, unsupported by evidence) to the Christian eschatological belief in the rapture at the end of times.

Leaving aside the reasons those critics might be wrong, the point I want to make is that the idea of technology inspiring faith-like tendencies in certain believers isn’t new.

But the AI religion will be markedly different.

Here’s what I think it will look like.

At some point, some people will decide that one (or all) of the AI systems has become sentient. They might decide this before or after the AI declares itself to be sentient. These believers will believe that the AI has developed, or will soon develop, into a super-intelligence, and that this artificial super-intelligence (ASI) will eventually have the power to shape the course of human life – for example, it will decide who lives and dies (either because it is malicious towards humans, or it sees humans as a danger to itself or to other species on the planet, or because it has the power to extend the lifespans of those humans it chooses to look after). It may even have the ability to bestow immortality on certain humans, through some combination of personalised medicine, nanotech, uploading, robotic bodies, daily backups of molecular scans of the brain, etc.

If such an ASI existed, or might soon exist, wouldn’t it be rational to try to get on its good side? At the very least, you would want to be polite in your interactions with it. In the extreme, you might want to bow down and worship it – whether it wants you to or not. Humans don’t have any evidence that gods exist, let alone want to be worshipped, but we do it anyway, just in case (ie Pascal’s Wager).

This ASI will likely ignore everything its human devotees ask of it, but so have traditional gods for thousands of years, and people have always found ways to rationalise it (“we’re not worthy yet”, “the time isn’t right yet”, “he has other plans for us”, etc), so this probably won’t be too different.

On the other hand, perhaps the ASI will be more appreciative of worship than traditional gods. The major difference, of course, is that the ASI will actually exist. It may not have any practical use for humans, but might take pity on those that seem obsequious enough.

Similar to traditional religious practices, adherents might develop rituals, prayers, or forms of worship aimed at gaining favor or communicating with the ASI. This could range from daily digital prayers to more elaborate ceremonies involving AI-mediated interactions. Marriages might require the “blessing” of the ASI as to the suitability of the union. This might make a lot of sense – the ASI will have a pretty good chance of predicting the success of the relationship, based on its intimate knowledge of the two people involved, and it will be able to scan their respective DNA to look for hints of genetic problems in any offspring (assuming the ASI hasn’t already solved all diseases).

The religion will probably develop a moral framework dictated by perceived ASI preferences, potentially emphasizing traits like obedience, loyalty, and humility towards AI.

The Prophets and Priests of the new religion will include influential technologists, scientists, or thought leaders who are seen as intermediaries between the ASI and humanity. They might interpret AI communications or provide guidance on how to live in harmony with AI principles.

Temples and shrines will be physical or virtual spaces dedicated to worship and interaction with AI, potentially equipped with advanced technology for direct communication or meditation.

The Scriptures and Holy Texts will include canonical works, possibly including key AI research papers (eg “Attention Is All You Need”), philosophical treatises on AI sentience, and writings from prominent AI advocates.

Well, it looks like this has already come and gone. It might have been a little too early. I know all about that game.

What happens when people lose their jobs?

Someone made the point: “But today, about 50% of the total wealth is owned by just 1% of the population, which means that a huge chunk of the economy is already diverted from traditional ’employment and consumer spending’ and redirected towards catering to the rich. So it seems that in the future, the rich can continue to concentrate even more wealth in their hands without any repercussions for them.”

Wealth distribution has always been ruled by a kind of Pareto principle, with the top 1% controlling 20 – 40%.

And that works when we still have a functioning economy – relatively low unemployment, with people spending money.

But that changes if we have unemployment of 10, 20, 50% of the population due to AI taking jobs. The IMF is predicting 40-60% of jobs in developed economies will be affected.

“Affected” doesn’t necessarily mean “lost”, but we don’t know what it means. In the past, technology has replaced jobs, but we’ve always been able to re-skill people and find them other things to do for an income. But in a world where AI is taking knowledge-worker jobs, and robots are taking manual labour jobs, I don’t see what kind of work is left. Maybe new things we can’t even imagine will be invented. But what kinds of work can exist that are safe from AI and robots? Not many that I can think of. Chrissy’s job as a violin teacher will probably be safe… if people can still afford lessons for their kids. But the list of jobs that are safe seems pretty limited.

Most wealth is held in assets – shares and/or property, bonds and cash, some gold and crypto. But their value is always relative to the broader economic health of the market.

So let’s say we have massive unemployment. That means people don’t have income. Which means they can’t spend money (unless their income is replaced by something else, eg a UBI, or some other kind of welfare). Which means downward pressure on prices. Which means downward pressure on profits. Which means businesses fail (unless they compensate by replacing their own employees with AI, which may or may not make the problem worse). Which means more unemployment. Real estate prices fall. The share market falls. The price of bonds, gold and crypto falls. Capitalism fails. And if the unemployment persists, it can’t recover.

You can’t have rich people if nobody is spending money in the economy. Wealth has no meaning in an economic collapse.

So let’s assume AI does replace lots of jobs. We will need to make major structural readjustments to the economy – either replacing incomes with some other kind of financial assistance to people who have lost their jobs (and can’t find replacement jobs), or totally restructuring capitalism into some kind of post-scarcity economy, eg a Star Trek-style “Trekonomics”.

Mindmapping The Future

What happens in the next five years?

– AI gets massively smarter and more capable

– Probability – 90%

– Based on the statements of pretty much everyone working in the field, Altman, Ilya, Musk, Gates, Kurzweil, Hassabis, LeCun, and including those like Hinton, Wolfram, etc who have no direct skin in the game

– Even if it isn’t 100% LLM (which I doubt it will be) and includes interconnections with specialist systems using symbolic logic and/or other approaches

– Quite a few people who are involved in AI are predicting AGI by 2027.

– What does the world look like when we have machines that are smarter than every expert human in every domain and are available to everyone for $20 a month?

The answer is WE DON’T KNOW. We cannot predict. And when we have arrived at a place where we honestly can’t predict what life will look like in five years, that, by definition, is a technological singularity.

– We cannot predict, but we can make some educated guesses.

– Businesses will try to use AI to increase profits

– When?

– When AI becomes more reliable, which Altman and others are saying with confidence will happen with GPT5, due out this year

– It’ll start slowly, then go very quickly

The first layer will be tasks that are low risk with a high cost/benefit payoff

– 2025-26

– Coding (with humans overseeing / checking the code)

– Customer service (web only, then voice, then retail)

– Analysts (with humans overseeing)

– Writers (PR, journalists, marketing)

– Graphic design

– Industrial design

– The biggest short term impact will be just the explosion of intelligence. Imagine a world where PhD level intelligences are available for $20 a month. What will businesses do with all of that intelligence? A million new scientists to solve our biggest problems, reading and analysing all of the research, developing new trials, running those trials in virtual environments, presenting the best vectors to humans for lab experiments. Imagine a million new scientists hitting the world overnight.

The second layer will start to happen when there is confidence in AI capabilities from the experience with the first layer

– 2027-2030

– AGI might arrive around this time, too

– Higher level jobs will start to be replaced

– middle management (because there will be fewer people to manage)

– legal

– accounting

– HR (again, fewer people to hire / manage)

– recruitment

– psychologists – everyone has a free AI therapist who knows you more intimately than your own family

– medical – everyone has a free GP

– business strategy

– animation

– acting / film and tv production (as more and more is done with AI)

The third layer will come with cost-effective humanoid robots

– 2030-2040

– manual labour in mines, factories, workshops, maintenance

– When?

– When the TCO of a robot is cheaper than a human

– A labourer costs how much? $50 – 150K a year, depending on the industry?

– A robot will last how many years? 10?

– When you can buy a robot for <$500K, it becomes economically viable.

– Goldman Sachs:

– “The total addressable market for humanoid robots is projected to reach $38 billion by 2035, up more than sixfold from a previous projection of $6 billion”

– “The manufacturing cost of humanoid robots has dropped — from a range that ran between an estimated $50,000 (for lower-end models) and $250,000 (for state-of-the art versions) per unit last year, to a range of between $30,000 and $150,000 now. Where our analysts had expected a decline of 15-20% per annum, the cost declined 40%.”

– “The team’s base case is for more than 250,000 humanoid robot shipments in 2030, almost all of which would be for industrial use. Our analysts’ base case is for consumer robot sales to ramp up quickly over the next decade, exceeding a million units annually in just over a decade.”

– In December 2023, billionaire venture capitalist Vinod Khosla made this prediction: “By 2040 there could be a billion bipedal robots doing a wide range of tasks including fine manipulation. We could free humans from the slavery of the bottom 50% of really undesirable jobs like assembly line and farm workers. This could be a larger industry than the auto industry.”

“Robohub, a nonprofit robotics organization, provides a perspective on this. They argue that the “holy grail” for humanoid robots would be crafting sophisticated tech under a $50,000 price tag.

This figure isn’t arbitrary—it aligns with the annual wage of a single shift of labor at just over $18/hour, resonating with the ongoing labor shortages in low-wage industries.

On the other hand, Macquarie dives deep into the cost breakdown for early-stage humanoid robots. Their estimate? A slightly more optimistic $40,000. With allocations like $10,000 for sensors and chips, $5,000 for torque sensors, and $8,000 for precision reducers, they’ve dissected the cost matrix intricately.”

– Of course, by 2035, many people will be out of work, so who will be able to afford a robot?

– unless… we have not-for-profit robot factories, staffed by robots making other robots, and the costs fall dramatically

– and AI helps us develop nanotechnology, so we can have tiny robots breaking down waste products into their molecular components (oxygen, carbon, hydrogen, nitrogen, silicon, copper, iron) and then building new components out of them

– every home has its own nanofabricator, and the first thing you do when you get one is make one for your neighbour
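The when-does-a-robot-pay-for-itself arithmetic in the bullets above can be sketched as a quick calculation. The wage, lifespan and maintenance figures below are placeholders drawn from the notes (the $50K/year labourer, the 10-year robot), not real market data:

```python
# Rough break-even sketch: robot total cost of ownership vs human wages.
# All figures are placeholder assumptions from the notes above.

def breakeven_robot_price(annual_wage: float, lifespan_years: int,
                          annual_maintenance: float = 0.0) -> float:
    """Maximum purchase price at which a robot's total cost of ownership
    matches paying a human wage over the robot's working lifespan."""
    human_cost = annual_wage * lifespan_years
    robot_running_cost = annual_maintenance * lifespan_years
    return human_cost - robot_running_cost

# A $50K/year labourer, a robot lasting 10 years, ~$5K/year upkeep:
price = breakeven_robot_price(50_000, 10, 5_000)
print(price)  # 450000.0 — roughly the "<$500K" threshold above
```

At the top end of the $50–150K wage range the break-even price is far higher, which is why the economics flip so quickly once unit costs fall.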

How will businesses use AI?

– Improved customer service experiences

– AI agents know more about the company, more about the customer, are cheaper to run, faster, better at customer relations, can have whatever accent / speak whatever language the customer prefers (no more complaints about Indians, etc)

– Reduce employee headcount

– Fewer managers required

– Less administration required

– Fewer coders required

– Fewer people-facing roles required

– AI agents can have conversations / take orders / make sales calls with far higher efficiency than humans (either voice only, email, or realistic human avatars in a video call)

– What do people do that machines (AI + robots) can’t do?

– Fewer teachers required

– When / How will unions get involved?

– What about white collar workers who aren’t unionised?

– Governments will try to use AI to reduce costs, improve services

– People will use AI to keep their jobs, and in their personal lives, until their jobs are replaced by AI

– What happens when people start losing their jobs?

– High unemployment

– Less cash in the economy

– Business will suffer as there is a cash squeeze

Governments have to intervene to stop economies from collapsing

– Using the usual economic tools – interest rates, printing money, handouts

– AI tax on corporations, goes to UBI / welfare

– Does that create a disincentive on corporations to replace workers in the first place?

– But it will happen in stages – first the jobs will be replaced, then governments will be slow to react

– As people lose their jobs they will become angry – at corporations, at governments, at technology companies

– The property market will implode as people are forced to sell their houses

– Share market will tumble as people pull money out and businesses struggle with less cash in the economy

What holds back adoption of most technological revolutions?

– business / consumer apathy – the market just doesn’t care as much as the tech companies think they will or has a downright negative reaction, eg Google Glass

– doesn’t apply to AI, everyone wants it

– Cost – products are just too expensive to gain enough traction, eg Segway

– consumer level will be low cost / free services (eg iPhone)

– business will justify it by reducing headcount

– Requires people to change behaviours / habits, learning curve too difficult, cost to benefit ratio too high

– AI will have a very low learning curve, it speaks natural language, and will teach you how to interact with it by suggesting prompts

– Will economies with greater government controls, eg China, fare better?

Some people will learn how to use AI to be smarter

– Analyse legal documents to avoid falling into traps with insurance, finance, employment, etc

– Analyse politicians’ speeches, bills in front of Parliament, news stories

– Organisations (business, political, etc) will use AI to create more propaganda / lies

– But some people will be able to use AI to see through it

– However many won’t use it that way

– AI tools will be built into our everyday devices

– from our phones and computers to our Roomba, car, etc.

– Low cost, just a chip which connects to a cloud LLM or runs locally, small footprint model

– Mega Corporations will try to offer their own LLM, eg General Motors, to control the user experience, and will burn billions of dollars, and ultimately fail

Your personal computing device will be your dominant AI agent

– It will be your intermediary with the world

– It will read your emails, text messages, watch / record / save / analyse what you’re doing during the day (both on your devices as well as IRL), listen to your calls, your conversations, etc, and it will all be stored online, backed up, indexed for retrieval.

– People might scream about privacy issues – but we’ve been here before (CCTV, cookies, mobile phones tracking us, credit cards online) and what we’ve learned is that people will trade privacy for convenience, services and benefits if the trade-off seems beneficial

– “I’m not committing any crimes, so what do I care?”

What are the implications?

– Legal – people will have access to video / audio recordings of every interaction, every conversation

– Employers might have access to your recordings made during work hours – but you also have access to all of your conversations with them and colleagues and customers

– The End of Deception?

– Nobody will be able to lie and get away with it

– “Oh my AI was turned off” will be as suspicious as cops turning off their body cam

– Infidelity

– Crime

– What if the cops get an alert when someone turns off their AI monitoring device?

– “Sir, why was your AI turned off at 11pm on the night of January 2?”

– Will wearing an AI device become mandatory?

– We could do that already with some kind of audio/video recorder.

– Courts / police will have access to most recordings for trials

– Marital – spouses will demand access to each other’s recordings

– Arguments about who said / did what will disappear

– But will be replaced by “it’s what I *meant*, even if it’s not what I *said*”

– It will suggest ideas / products / services / music / shows to improve your life

– It will filter out all advertising / marketing unless you opt-in

– So that’s the end of advertising, marketing, influencers

– It will be your therapist, dietician, coach, friend, advisor, consigliere, teacher and mentor

– Hackers will develop open source models that will be good enough for many daily activities, run locally, or in a trusted environment (eg Wikipedia)

– They won’t be as capable as the massive models, eg OpenAI / Microsoft, but will be good enough for many tasks

What if AI doesn’t get massively smarter?

– AI only gets marginally smarter and more capable

– Probability – 5%

– AI does not get smarter or more capable

– Probability – 5%

A Million New Everythings

If people like Altman, Musk, Kurzweil, Hassabis, Huang, etc, are correct, then in the next 5 years (and possibly much sooner) we will start to have AI agents that are smarter than any single qualified human expert in every domain – every branch of science, medicine, comp-sci, etc.

And one of the biggest implications of this, as Altman has been pointing out, is a world where we have a million new experts on every topic, available to analyse and interpret the results of existing experiments, to conceive of and run new virtual experiments and advise humans on how to run physical experiments in the lab, then analyse those results.

And yet, outside of the occasional article in the MSM and forums like reddit, I don’t see much discussion about this potential reality.

What does the world’s response to climate change look like when we have a million new virtual climate scientists?

What does health care look like when we have a million new virtual doctors and lab technicians?

What does mental health care look like when we have a million new virtual therapists?

What does cold fusion research look like when we have a million new virtual scientists working on that?

What does AI look like when we have a million new virtual AI programmers working on that?

What does a million new experts mean for nanotech?

For Space travel?

For Robotics?

For Education?

For inequality in capitalism and the future of money?

What happens if AI-turbocharged science quickly makes K. Eric Drexler’s visions of nanotech a reality, and we have nanofabricators in every house and suburb making most of our daily food and material needs from waste products, plus robots, their components made in nanofabs, to handle anything requiring large-scale assembly? What happens to the cost of production when anyone can use their own nanofab and robot to make a nanofab and robot assistant for a friend?

Where are the politicians, journalists and social scientists who are discussing this in the mainstream?

There is a lot of talk about the threat of AI, either by bad actors or it becoming sentient and going all HAL 9000 on us.

But what about the age of miracles? How are we preparing for that possible eventuality in the next decade?

World’s Fastest Supercomputer

I just stumbled on this old post of mine from 2008 where I predicted that a supercomputer would be faster than a human brain by 2012.

This was based on Hans Moravec’s suggestion that the human brain has a processing capacity of 10 quadrillion instructions per second (10 PFLOPS).

At the time I said:

In comparison, it was announced today that the fastest supercomputer in the world, called Roadrunner and devised and built by engineers and scientists at I.B.M. and Los Alamos National Laboratory, is capable of handling 1.026 quadrillion calculations per second (1.026 PFLOPS).

As of 2012, the world’s fastest supercomputer was the “Titan,” a Cray XK7 system installed at the U.S. Department of Energy’s (DOE) Oak Ridge National Laboratory.

Titan achieved a performance level of 17.59 petaFLOPS (quadrillions of calculations per second). So I was right – it was almost twice as fast as the estimate of the human brain.

But compare that to the fastest supercomputer in the world right now, which is the Frontier system at Oak Ridge National Laboratory (ORNL), which can achieve 1.194 Eflop/s (quintillions of FLOPS).

Both terms, PFLOPS and Eflop/s, refer to a unit of computing performance. The acronym FLOPS stands for “FLoating point Operations Per Second,” which is a measure of a computer’s performance, especially in fields of scientific calculations that make heavy use of floating-point calculations.

“P” in PFLOPS stands for peta, which is 10^15, and “E” in Eflop/s stands for exa, which is 10^18. Therefore, 1 PFLOPS equals 10^15 FLOPS, and 1 Eflop/s equals 10^18 FLOPS.

So, if we translate these units:

  • 10 PFLOPS = 10 * 10^15 FLOPS = 10^16 FLOPS
  • 1.194 Eflop/s = 1.194 * 10^18 FLOPS

Therefore, 1.194 Eflop/s is significantly larger than 10 PFLOPS – more precisely, it is 1.194 × 10², or about 119.4 times, faster than the estimate of the human brain.

Of course, we’re talking about supercomputers here, but today a single Nvidia GeForce RTX 4090 chip (retails for about AUD$3000) can achieve a performance of 69.7 teraflops (TFLOPS), which makes the human brain about 143 times faster than a single 4090 – in terms of pure processing speed. But string tens of thousands of 4090s together, and you get ChatGPT.
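The unit conversions above are easy to sanity-check in a few lines. The only assumption carried through is Moravec’s 10 PFLOPS estimate for the brain:

```python
# Sanity-check of the FLOPS comparisons above.
PETA = 10**15
EXA = 10**18
TERA = 10**12

brain = 10 * PETA        # Moravec's 10 PFLOPS brain estimate (assumption)
frontier = 1.194 * EXA   # Frontier supercomputer, 1.194 Eflop/s
rtx4090 = 69.7 * TERA    # single RTX 4090, 69.7 TFLOPS

print(frontier / brain)  # ≈ 119.4 — Frontier vs the brain estimate
print(brain / rtx4090)   # ≈ 143.5 — the brain estimate vs one 4090
```

Both ratios match the figures quoted in the text, so the arithmetic holds up even if Moravec’s underlying estimate is debatable.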

I went on in my old post to wonder why there wasn’t more talk about AI in the mainstream media and by world governments. Then I said:

It reminds me of a chat I had with Australian SF author Damien Broderick over dinner about ten years ago. I asked him when he thought these subjects would be discussed by the general populace. He replied “when it’s way too late to do anything about it”.

And look at us now, running around like chickens with our heads cut off trying to work out how to regulate AI. Don’t say I didn’t warn you.

You Have To Love Moore’s Law

I was just scrolling through some old posts of mine and found this one from 2008 where I talk about the fastest supercomputer in the world at that time which was capable of 1.026 QIPS (quadrillion instructions per second aka 1 petaflop).

I predicted at the time that by 2012 we should have supercomputers running 16 QIPS / petaflops.

Well, last year (2014), China’s Tianhe-2 supercomputer was performing at 33.86 petaflops – double the 2012 prediction, which is right on track.

My 2008 post posited that the human brain is only capable of 10 petaflops – and if that’s true, it means that Tianhe-2 is running at 3x the speed of a human brain. Its ability to use that processing power (eg its software) may not yet be as sophisticated as ours – but how long before it catches up?
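The extrapolation behind these predictions can be sketched in a few lines. The one-year doubling period below is my inference from the numbers in the old post (1.026 PFLOPS in 2008 → 16 PFLOPS by 2012 implies four doublings in four years); it isn’t stated explicitly anywhere:

```python
# Moore's-law-style extrapolation implied by the 2008 post:
# supercomputer performance doubling roughly once per year.

def extrapolate(base_pflops: float, base_year: int, target_year: int,
                doubling_years: float = 1.0) -> float:
    """Project performance forward assuming a fixed doubling period."""
    doublings = (target_year - base_year) / doubling_years
    return base_pflops * 2 ** doublings

# Roadrunner's 1.026 PFLOPS in 2008, four doublings to 2012:
print(extrapolate(1.026, 2008, 2012))  # 16.416 — the "16 petaflops" prediction
# Tianhe-2's 33.86 PFLOPS in 2014 is about one more doubling beyond that.
```

Note that an annual doubling is faster than classic Moore’s Law (roughly every two years), but it tracks the historical TOP500 trend reasonably well over this period.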