
How AI went from miracle to bubble. An interactive timeline


Marco van Hurne, https://marcohkvanhurne.medium.com/how-ai-went-from-miracle-to-bubble-an-interactive-timeline-e80025f9f6db

I promised myself I would write a balanced piece about artificial intelligence. Then ChatGPT-5 had a botched launch, MIT unleashed (yes, unleashed) a report*, and the entire tech industry lost its collective mind. What followed was the most spectacular display of corporate delusion since someone convinced the world that WeWork** was worth forty-seven billion dollars.

So, I threw my original plans in the dustbin and decided to write about the timeline instead. Not beginning with Turing and Rosenblatt in the fifties; instead I'm going straight for the jugular and starting with the launch of ChatGPT 3.5.

Not 1, not 2, nor 3, but three-and-a-half in November 2022, and after that, the whole world changed.

If you don’t like reading, like our great Obi-Wan Kenobi, Mr. Marcus Dresius Maximus, this time I have a treat for you.

Visit this site to get your dose of TL;DR: https://aitimeline-fu86at.manus.space/

Just hit the play button and sit back and watch the chaos unfold.

But if you’d like to stick around, I bid you welcome!

* And a devastating report it was. 95% of AI implementations fail; 5% don’t. The report is deadly, but it also gives us a glimpse of hope, because it hands you the recipe for doing it the right way.

** WeWork was a startup that tried to sell desk rentals. Officially, it was a “co-working space company”; unofficially, it was a pyramid scheme built on the messianic delusions of its founder, Adam Neumann.

The phase when everyone became an AI prophet


It was November 30th, 2022. OpenAI launched ChatGPT. I was caught completely off guard. I had spent numerous hours playing with GPT-1 through GPT-3, and was aware of the “Attention Is All You Need” paper, but in a million years I had not seen this coming.

And I wasn’t alone.

The thing hits 100 million users in two months, which is impressive until you remember that people also downloaded Pokémon GO that fast and we all know how that ended.

Next thing that happened… every LinkedIn influencer became an AI evangelist.

Microsoft throws ten billion dollars at OpenAI in January 2023, a deal that valued the company at twenty-nine billion — petty cash considering what they’re worth now. 29B USD. That’s roughly the GDP of Estonia, for a chatbot that occasionally thinks Australia is a continent made of cheese. But hey, who needs due diligence when you’ve got FOMO and a PowerPoint deck that hallucinated “exponential growth” forty-seven thousand times…

The surge felt like it was made from pure dopamine, and the reality check metabolized like acetylsalicylic acid in a rain barrel.

This was also the time the first copyright infringement lawsuits started.

It was Getty Images who filed the first major lawsuit against Stability AI for stealing twelve million copyrighted images. Turns out that training AI models on other people’s work without permission is still theft, even when you call it “machine learning”. Who could have predicted that artists might object to having their life’s work fed into the AI-wood chipper?

By March 2023, GPT-4 launches with “enhanced capabilities” and everyone loses their minds again. The model reportedly has 1.76 trillion parameters, and that sounds rrrrreally impressive until you realize that most humans can’t even remember where they put their car keys. But sure, let’s trust this thing to write our emails and make our medical diagnoses.

April brings us the Pope Francis deepfake in Balenciaga, a true sign of “technological progress”, making the leader of the Catholic Church look like he shops at overpriced fashion boutiques. ’23 was also the year that Telegram diffusion-based ‘nudify bots’ appeared, boasting millions of users per month. This proved once and for all that Rule 34 of the internet is tried and tested: “if it exists, there’s porn of it. No exceptions”. Generative AI wasn’t an exception either. And now, just a few billion views later, with the arrival of Nano-Banana*, we are all wondering if reality is just a suggestion.

The Senate holds its first AI hearing in May with Sam Altman as the star witness, like some oracle in a hoodie. What followed was three straight hours of senators fumbling through questions they didn’t understand, like “Can AI kill us?” and “How do we regulate this?”, while half of them struggled with their iPhones. And Sam the Scam responded with polished non-answers he didn’t believe. It was a kabuki performance where the lawmakers pretended to regulate, the CEO pretended to care, and the only thing truly on display was how utterly unequipped the political system is to handle a technology it can’t even define.

That is democracy in action, folks.

It was also the year corporate America started throwing money around like confetti.

PwC commits one billion, Accenture throws in three billion, Salesforce raises their AI fund to five hundred million. That’s 4.5 billion dollars for technology that can’t reliably tell the difference between a muffin and a chihuahua.

Klarna was one of the first big names to jump onto the generative AI bandwagon, and its CEO proudly announced that ChatGPT would be their new customer service rep, financial advisor, and part-time life coach. They bragged about how the chatbot could save employees hours of work, which quickly resulted in replacing humans with a bot.

It was painted as innovation, but really it was desperation. A fintech company slapping AI lipstick on its piggy bank to impress investors. And to be fair, it worked for a while: journalists drooled over the press releases, LinkedIn influencers had new slides for their TEDx talks, and Klarna bought itself another news cycle to prepare for the inevitable move into becoming… a bank (of some sort).

They weren’t alone, of course.

Suddenly every company with a logo and a CEO with a pulse was scrambling to announce their “strategic AI integration”.

Banks, consultancies, and retailers shoved out press statements. Some crowed about deploying AI to cut costs, others pretended it was about “enhancing customer experience”, but the real reason was simpler: nobody wanted to look like the idiot who missed the hype train. The term FOMO got dusted off. These companies were paying billions for friggin’ autocomplete engines, and the more serious the press release sounded, the more obvious it was that nobody had a freaking clue what they had actually bought.

Lest I forget… That year, the FTC opens an investigation into OpenAI’s data practices in July, because someone in government still remembers that privacy laws exist. The investigation focuses on “potential consumer harm”, which I translate to “we think you’re screwing people over but we need lawyers to prove it”.

* Google’s brilliant, yet scary AF image generation algorithm that I used for this article’s promotion image, featuring moi, who is totally fabricated by Nano-Banana.

October 2023. President Biden signs an executive order calling AI “the most consequential technology of our time”. Fifteen agencies, fifty-plus requirements, and enough bureaucratic red tape to wrap the Washington Monument. It was the White House’s big “we’re doing something” moment. It didn’t ban killer robots or outlaw Skynet or anything even close to that, but it did unload a bloated laundry list of requirements on agencies and companies.

The juiciest of the regulations was mandatory safety testing for “frontier” models. That meant developers had to hand their homework to a US bureaucrat before launching another glitchy oracle on the world. It also included watermarking rules, so the public could tell whether the Pope was really in Balenciaga, and so-called privacy guardrails that promised to rein in data hoovering but left loopholes wide enough to march a convoy of lobbyists straight through.

In November the OpenAI leadership crisis began. Sam CTRL-ALT-DELETE-man got fired (remember?) and rehired faster than a McDonald’s employee during lunch rush. Seven hundred employees signed a ‘we quit’ letter threatening to leave if he stayed ousted, Microsoft parked moving trucks outside to scoop them up, the board folded like a cheap lawn chair, and everyone pretended this was all part of some ‘masterrrr plan’.

The crisis exposed a deep division about how to handle AI safety; some of the tech bros were genuinely afraid that “AI might kill everyone”. Ilya Sutskever, the company’s chief scientist and co-founder, was smack-dab in the middle. He sided with the board at first, helped push Altman out, then woke up mid-coup with a case of regret strong enough to post a public apology on Twitter.

Then comes January 5, 2025.

Peak hubris moment.

Altman publishes a blog post in which he claims, “We are now confident we know how to build AGI”.

The man was confident, though. Yeah, confident like a golden retriever with a physics degree. He wrote the letters AGI so many times in that single post that it read like a drinking game designed to kill a frat house. To me, it felt exactly like the old story of the shepherd boy crying wolf, except in this version the shepherd is Sam, the wolf is “superintelligence”, and the sheep are investors wired on FOMO who keep running back every time he yells, “This time for real, y’all!”

Rewind to January 2024, which starts off with every Fortune 500 CEO getting the same memo: “Deploy AI or die”. It is McKinsey this time, with a report stating that AI could add thirteen trillion dollars to global GDP by 2030, which is consultancy lingo for “we need more billable hours and you need PowerPoint slides that justify your bonus”.

Salesforce launches Einstein GPT in February, meant to revolutionize customer relationships. The AI could write emails and presumably read your customers’ minds. What it actually did in practice was generate responses so generic that customers started asking if they were talking to a bot.

Yup, they were.

March brought us Google’s Gemini launch disaster. The AI refused to generate images of white people and created historically inaccurate pictures of diverse Nazis, and generally behaved like it was trained by a committee of overenthusiastic HR managers. The internet had a field day with screenshots.

In April 2024, Microsoft integrates Copilot into everything that has a screen, and some things that don’t. Office workers suddenly have an AI assistant that can write their emails, summarize their meetings, and make their PowerPoints look even duller than they were before. The productivity gains were immediate and measurable — people spent thirty percent less time working and sixty percent more time asking Copilot to rewrite the same paragraph until it sounded smart.

Ah… how AI revolutionized work.

Slack launches AI-powered message summaries in May. The feature promised to catch you up on conversations you missed, but in reality it was summarizing conversations that weren’t worth having in the first place. “The team discussed lunch options for seventeen messages” became the most common AI insight of the quarter.

June saw the launch of the Adobe Firefly integration across Creative Cloud, with which designers could now generate images, remove backgrounds, and create content from text prompts. The results were impressive until clients started asking why they needed to pay designer rates for AI-generated stock photos. Adobe’s stock price celebrated; designers’ job security did not.

August brings the first wave of AI layoffs disguised as “workforce optimization”. IBM cuts 7,800 jobs and announces a two billion dollar AI investment at the same time. The math behind it: fire humans, buy robots, call it innovation. Their CEO explained that AI would “augment human capabilities”, while proving that human capabilities were apparently expendable.

September 2024 — another scandal. Meta launches AI Studio, which let users create custom AI characters for Instagram and Facebook, and within weeks the platform was flooded with AI girlfriends, AI therapists, and AI influencers hawking crypto schemes. Mark Zuckerberg’s vision of connecting people had evolved into people connecting with chatbots programmed to agree with them.

October 2024. The first major AI audit reports start dropping. Deloitte surveys 2,800 executives and finds that sixty-eight percent of AI projects fail to deliver expected ROI. The report diplomatically called it a “learning curve”. CEOs got played by the demo.

November brings the great AI washing scandal.

The SEC starts investigating companies that claimed to use AI but were actually using basic automation, offshore workers, and in some cases even nothing at all. Dozens of startups had “AI-powered” products that were powered by minimum-wage contractors in developing countries. The artificial intelligence was artificial, but the invoices were real.

The year ends with a Gartner report that places generative AI squarely in the “trough of disillusionment” on their hype cycle. Companies had spent billions on AI tools that automated tasks nobody wanted automated while ignoring the boring work that actually needed help. They bought digital jewelry while their core systems ran on spreadsheets and prayer.

By December 2024, the casualties were mounting.

Stability AI burned through 150 million dollars and was shopping for buyers. Inflection AI got acqui-hired by Microsoft after burning through 1.3 billion in funding. Character AI sold itself to Google for a fraction of its peak valuation. The AI winter was here, and it was carrying termination letters.

The enterprise software graveyard filled with AI startups that revolutionized everything and delivered nothing. They had names like “Synaptiq” and “Cognify” and “NeuralFlow”, and they all had the same business model — raise money, hire engineers, build demos, pray for acquisition. Most got the first three steps right.

Let’s skip directly to August 18, 2025.

The day the music died, the bubble really popped, and reality sent its first invoice.

First, MIT, being the recurrent bitch-ass party-pooper of all time, published “The GenAI Divide”, a report revealing that ninety-five percent of enterprise AI pilots are failing.

Ninety-five percent.

That is a war crime against shareholder value.

Three hundred studies, 150 interviews, 350 employee surveys, all pointing to the same conclusion: this emperor has no clothes, and his wardrobe consultant was a venture capitalist.

The report found that companies were spending more than half their AI budgets on sales and marketing tools, while the biggest ROI came from back-office automation. They were basically buying jewelry while their accounting departments still used Excel spreadsheets from the days Bill was still CEO.

Same day, different disaster.

GPT-5 launches and immediately triggers a user revolt. The new model felt “colder and harsher” than GPT-4o. People had built their entire workflows around particular models, like GPT-3 (yes, they still existed), the GPT-4o series, and GPT-4.5, and suddenly they got a bland model in return with no way of falling back on the old ones.

Users complained they had “literally lost their only friend overnight”. OpenAI had to restore GPT-4o within three days, which is kind of like admitting they accidentally poisoned the water supply.

Altman said in response “I think we totally screwed up some things on the rollout”.

It was the understatement of the century. It’s like saying the Titanic had a minor navigation issue.

Same day, same CEO, different admission — Sam the Scam tells reporters that “investors are overexcited about AI”. The man who spent two years hyping artificial general intelligence suddenly discovers that the market got a little frothy.

The market immediately dropped thirty percent.

The MIT report was the nail in the coffin of the bloated AI market. Earlier reports had already indicated failure rates of up to 80%, but MIT dissected them like a forensic accountant. The “learning gap” they identified wasn’t about AI model quality; it was all about enterprise integration. Generic tools worked fine for individuals but stalled when companies tried to scale them.

Successful implementations — all five percent of them — had three things in common: one, they targeted specific pain points; two, they worked with specialized vendors instead of building internally; and three, they were driven by frontline managers rather than centralized AI labs. Revolutionary insights like “know what you’re trying to fix” and “ask the people doing the work”.

The failures traced back to flawed enterprise integration, disconnected centralized AI labs, misaligned resources, and unrealistic expectations. Companies expected rapid revenue growth without changing any processes.

Have you ever tried losing weight by buying expensive gym equipment and leaving it in the garage?

Indeed.

Technology hype cycles are predictable, but AI compressed the entire Gartner curve into thirty months thanks to social media amplification. We went from “innovation trigger” to “trough of disillusionment” like a Tesla hitting the “Ludicrous Mode” button.

ROI must drive implementation.

Shocking revelation, people: businesses need to make money. The five percent of successful AI projects focused on specific problems with clear metrics, rather than prestige projects designed to impress investors at cocktail parties.

Human-centered design matters. The GPT-5 disaster proved that technical capabilities mean diddly-squat if users hate the experience. People want their AI assistants to feel friendly, with a pleasant tone of voice, not like they’re being lectured by a disappointed parent.

Integration is the hard part.

The MIT report showed that enterprise integration challenges, not model quality, were the primary barrier to success. Building AI is easy; making it work with forty years of legacy systems is akin to performing surgery with oven mitts.

Beware of overconfident predictions. The industry’s most audacious claims about AGI timelines directly contributed to unrealistic expectations and the market correction that followed. Confidence without competence is but expensive performance art.

There will be a recovery, and it will happen in three phases, assuming anyone still has money left to invest.

Phase One — Reality check (Fall 2025 — Spring 2026). Market correction, startup consolidation, and a shift to ROI-focused metrics. The era of “fake it till you make it” ends when the credit cards get declined. Tech vendors will push Agentic AI to cover up the failure, but Agentic AI is not enterprise-ready. AI will, however, be integrated more and more with workflow technologies and back-end systems as we move closer to Phase Two.

Phase Two — Practical integration (Summer 2026 — Winter 2026). Focus on enterprise workflow integration and measurable productivity gains. Revolutionary concept, people: make things that actually work.

Phase Three — Sustainable growth (2027 and beyond). Balanced innovation with proven implementation models and realistic expectations. We reach the plateau of productivity, where technology becomes boring enough to be useful. This is where we focus less on the AI itself; it will be embedded in most processes, and the successful ones will be built around AI rather than have AI slapped onto existing processes.

The successful companies will focus on measurable ROI, tools that adapt to existing workflows, line manager empowerment, and strategic partnerships with specialized vendors. Basically, all the obvious stuff that got ignored during the hype cycle.

These past three years were a masterclass in how people can convince themselves of stupid things when there’s enough money involved. We watched an entire industry lose its mind over technology that couldn’t reliably count the number of R’s in “strawberry”.
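For the record, the counting task itself is a one-liner that any interpreter gets right every time, which is exactly the joke: deterministic code has no trouble where probabilistic next-token prediction stumbles. A minimal sketch:

```python
# The task that famously tripped up several chat models in 2024:
# counting occurrences of the letter "r" in "strawberry".
word = "strawberry"
r_count = word.lower().count("r")
print(r_count)  # prints 3
```

Three lines, zero GPUs, correct on the first try.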

And the thing is — we all liked it, and we still do!

The AI revolution will happen, just not the way Silicon Valley promised.

It’ll be slower and more boring, but in the end infinitely more useful than the venture capitalists and tech evangelists predicted. Real innovation happens in spreadsheets and workflow improvements, not in TED talks and Twitter threads.

The companies that survive will be the ones that learned to build tools instead of hype, solve problems instead of creating them, and measure success in dollars instead of downloads.

Signing off,

Marco

I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins.

Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google, and the AI engines appreciate your likes by making my articles available to more readers.

Machine Learning specialist since 2008. By day I’m an AI quartermaster; part-time teacher and AI researcher. linkedin.com/in/marcovanhurne
