On paper, it looked like the future. A startup backed by Microsoft and SoftBank. Roughly $450 million raised. A $1.5 billion valuation. Slick decks promising artificial intelligence that could transform business.

Inside the company, it looked very different. Hundreds of workers in India, manually doing the tasks that investors thought were handled by AI. No real machine learning models. No research breakthroughs. Just people, keyboards, and a lot of secrecy.
From the outside, the fake AI unicorn and real AI companies look similar because both promise the same thing: software that can do what humans do, only faster and cheaper. Both talk about models, data, and automation. Both raise money on the idea that code will replace labor.
But one side is software that actually learns from data. The other is a human factory hidden behind a login screen.
This is a comparison of those two worlds: the human-powered “AI” operation that fooled investors, and real AI companies that actually build and deploy machine learning. Same pitch, very different reality.
Origins: Why fake AI companies and real AI startups look alike
Fake AI companies usually start with a pitch, not a product. The story that surfaced on Reddit, about an AI firm that raised hundreds of millions and then turned out to be 700 people in India doing manual work, fits a pattern that has repeated in smaller ways across the tech boom.
The pattern goes like this. Founders spot a hot trend, like AI. They promise a product that automates something expensive and annoying: content moderation, customer support, medical transcription, legal review. Investors, afraid of missing the next big thing, pour in money based on a slide deck and a few demos.
The problem: real AI is hard. Training models takes data, time, and expertise. So some founders quietly swap algorithms for people. They hire large teams in lower-wage countries and tell them to do the work “until the AI is ready.”
Sometimes that stopgap never ends. The company keeps scaling the human operation, keeps pitching itself as an AI firm, and hopes no one looks too closely at the margins or the tech stack. That is how you end up with a “billion-dollar AI company” that is mostly a back office.
Real AI startups usually begin in the opposite direction. They start with a technical insight or research background. Think of companies like DeepMind (founded by Demis Hassabis and others in 2010) or OpenAI (founded in 2015). Their origin stories are about algorithms, not arbitrage.
Founders are often researchers from universities or big tech labs. The first hires are machine learning engineers, not operations managers. The early money goes into GPUs, data pipelines, and research salaries, not into renting extra office space for hundreds of data-entry workers.
Both fake and real AI companies talk about disruption and automation. Both can pitch a future where software replaces repetitive work. That shared story is why they look similar at the seed and Series A stages.
The difference is what they are actually building on day one. One is building a research and engineering engine. The other is building a call center with better branding. That early choice shapes everything that comes next, including how honest the company can afford to be as it grows.
Methods: Humans in the loop vs humans as the product
On the surface, the fake AI unicorn and a real AI company can describe their methods in almost identical language. Both will talk about “training data,” “feedback loops,” and “human-in-the-loop systems.”
In a real AI company, humans in the loop are there to improve the model. They label data, review outputs, and correct mistakes. Their work is used to train and refine machine learning systems so that, over time, the software gets better and needs less human intervention.
For example, a real AI firm building a document summarizer might hire annotators to mark the key sentences in thousands of documents. Those labels feed into a training pipeline. The model learns patterns, then can summarize new documents on its own. Humans still check edge cases, but the core work shifts to the algorithm.
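To make that loop concrete, here is a minimal sketch of the handoff from annotators to model, assuming a toy extractive summarizer built with scikit-learn. The labeled sentences and the setup are invented for illustration, not taken from any real company's pipeline.

```python
# Minimal sketch: human labels train a model that then runs without humans.
# Assumes scikit-learn; the handful of labeled examples below stand in for
# thousands of annotator judgments and are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each pair is (sentence, label): 1 = annotator marked it as a key sentence.
labeled = [
    ("Revenue grew 40% year over year.", 1),
    ("The meeting opened with introductions.", 0),
    ("The board approved the merger.", 1),
    ("Coffee was served at the break.", 0),
    ("Net losses widened to $12 million.", 1),
    ("Slides are attached for reference.", 0),
]
sentences, labels = zip(*labeled)

# Human judgments become training signal for the model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)

# On a new document, the model scores sentences on its own;
# humans only spot-check the output.
new_doc = [
    "Quarterly profit doubled on strong cloud sales.",
    "Lunch will be provided in the lobby.",
]
for sentence, score in zip(new_doc, model.predict_proba(new_doc)[:, 1]):
    print(f"{score:.2f}  {sentence}")
```

The point of the sketch is the direction of travel: human judgments go in once as training signal, and the scoring afterward runs without them.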
In the fake AI operation described on Reddit, the humans are not training a model. They are the model. The system is a workflow tool that routes tasks to people who read, type, and click. When a client uploads a file or sends a request, it does not go to a neural network. It goes to a person.
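In code, the contrast is stark. Here is a hypothetical sketch of what such a human-routing backend boils down to; every name in it is invented, and it illustrates the pattern rather than any specific company's system.

```python
# Sketch of a human-powered "AI" backend: the public function looks like a
# model endpoint, but every request just becomes a ticket for a human
# operator. All names are hypothetical.
import queue
import uuid

human_task_queue = queue.Queue()
results = {}

def summarize(document: str) -> str:
    """To the client this looks like a model call; it is a ticket system."""
    task_id = str(uuid.uuid4())
    human_task_queue.put({"id": task_id, "text": document})
    return task_id  # the client polls later for the "model output"

def worker_shift():
    """What the back-office workers actually do: read, type, submit."""
    while not human_task_queue.empty():
        task = human_task_queue.get()
        # A person writes the answer; no model is ever called.
        results[task["id"]] = input(f"Summarize:\n{task['text']}\n> ")
        human_task_queue.task_done()
```

Nothing here learns. The "AI" is a queue.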
There is a long history of this kind of setup. Amazon Mechanical Turk, launched in 2005, made it normal to route small tasks to remote workers and present the result as if it were automated. Many early “AI” features in apps were really a thin layer of code on top of human labor in lower-wage countries.
The line between honest and dishonest here is not the presence of humans. Real AI products often rely on human reviewers, especially in safety-critical areas like medicine or finance. The line is whether the company is transparent about what is automated and what is not.
In a real AI company, the goal is to reduce manual work over time by improving the models. In a fake AI company, the goal is to hide the manual work while scaling it. The tech stack becomes a way to disguise labor, not to replace it.
That difference in method shapes cost structure and scalability. Real AI companies invest heavily upfront in research, then see marginal costs drop as software scales. Fake AI operations see costs rise with every new customer, because each new contract needs more human hands. That hidden cost curve is why the method matters for whether the business can survive contact with reality.
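A back-of-the-envelope model makes that divergence visible. All figures below are invented for illustration; the shape of the curves, not the numbers, is the point.

```python
# Toy cost model: front-loaded software costs vs linear human costs.
# All figures are invented for illustration.

def software_cost_per_task(tasks: int, fixed_rnd: float = 5_000_000,
                           compute_per_task: float = 0.01) -> float:
    """Real AI: big upfront R&D, tiny marginal cost per task."""
    return fixed_rnd / tasks + compute_per_task

def human_cost_per_task(tasks: int, tasks_per_worker: int = 10_000,
                        cost_per_worker: float = 8_000) -> float:
    """Fake AI: every new block of tasks needs another paid worker."""
    workers = -(-tasks // tasks_per_worker)  # ceiling division
    return workers * cost_per_worker / tasks

for volume in (100_000, 1_000_000, 10_000_000):
    print(f"{volume:>12,} tasks: "
          f"software ${software_cost_per_task(volume):.3f}/task, "
          f"human ${human_cost_per_task(volume):.3f}/task")
```

In this toy model the human cost per task stays flat at roughly $0.80 no matter the volume, while the software cost falls from about $50 toward the marginal compute cost as the upfront R&D amortizes.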
Outcomes: Demos, margins, and the moment the illusion breaks
From the outside, fake and real AI firms can both produce impressive demos. A client uploads audio, gets a transcription back in minutes, and assumes a model did it. A user types a question into a chat interface, gets a fast answer, and assumes it came from an LLM.
That is one reason investors get fooled. In a demo environment, latency can be masked. A human can sit on the other side of the screen and respond quickly. As long as the volume is low and the questions are predictable, the illusion holds.
Real AI companies also stage demos. They pick tasks where their models perform well. They avoid edge cases. But under the hood, there is an actual model that can be stress-tested, audited, and improved. When usage spikes, they add servers, not floor space.
The outcomes start to diverge when scale and scrutiny arrive. A fake AI unicorn that relies on 700 people in India has a hard ceiling on hourly throughput at any given headcount. To grow, it must hire more people, train them, and manage them. Quality varies. Turnover hurts. Margins stay thin.
Real AI firms face different bottlenecks: GPU shortages, data quality, model performance. Their costs are front-loaded. Once a model is trained and deployed, each additional user is relatively cheap. That is why software companies can reach high margins that human-service companies rarely match.
Then there is the audit problem. When a big client or a regulator asks to see the tech, a real AI company can walk them through the architecture. They can show training data sources, model versions, and error rates. A fake AI company has to keep the curtain closed.
History is full of tech companies that pushed this too far. Theranos promised a revolutionary blood-testing machine, but used traditional lab equipment in the back. Some “AI” content moderation tools quietly routed flagged posts to large teams of human moderators. The Reddit story about the 700-person Indian back office fits that same pattern of overpromising automation while hiding labor.
Eventually, reality catches up. A whistleblower talks. A journalist investigates. A client notices that response times match Indian working hours. When that happens, the outcome is not just embarrassment. It can mean lawsuits, clawbacks, and a collapse in trust for the entire sector. That fallout is why the difference in outcomes matters far beyond one company.
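That timezone tell is checkable. Here is a sketch of the analysis a suspicious client could run on their own request logs; the log entries and field layout are hypothetical.

```python
# Sketch: spot human working-hours patterns in API response logs.
# The log entries and their layout are hypothetical.
from collections import defaultdict
from datetime import datetime, timezone

# Each entry: (request timestamp in UTC, response latency in seconds).
logs = [
    (datetime(2024, 3, 1, 5, 30, tzinfo=timezone.utc), 42.0),     # 11:00 IST
    (datetime(2024, 3, 1, 11, 0, tzinfo=timezone.utc), 38.5),     # 16:30 IST
    (datetime(2024, 3, 1, 21, 15, tzinfo=timezone.utc), 1900.0),  # 02:45 IST
    (datetime(2024, 3, 2, 22, 40, tzinfo=timezone.utc), 2400.0),  # 04:10 IST
]

by_hour = defaultdict(list)
for ts, latency in logs:
    by_hour[ts.hour].append(latency)

# A model answers in roughly constant time around the clock. A human back
# office is fast during one region's office hours and slow outside them.
for hour in sorted(by_hour):
    avg = sum(by_hour[hour]) / len(by_hour[hour])
    print(f"{hour:02d}:00 UTC  avg latency {avg:8.1f}s")
```

IST is UTC+5:30, so a service that is snappy between 03:30 and 12:30 UTC and sluggish overnight is keeping Indian office hours.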
Legacy: What fake AI does to trust vs what real AI leaves behind
Fake AI companies leave behind two main things: burned investors and skeptical customers. When a firm raises hundreds of millions on the promise of automation and turns out to be a disguised outsourcing shop, it does not just hurt its own cap table. It makes the next real AI startup’s job harder.
Investors become more cautious. They ask for deeper technical due diligence. They want to see code, not just slides. That can be healthy, but it also slows funding for legitimate research-heavy teams that do not have flashy demos yet.
Customers become wary of AI claims. They start asking whether a product is truly automated or just backed by low-wage workers. That skepticism can be good for transparency, but it also means real AI firms have to spend more time explaining their stack instead of selling outcomes.
Real AI companies leave a different sort of legacy. They produce open-source models, research papers, and infrastructure that other teams build on. DeepMind’s AlphaGo and AlphaFold changed how people think about what machine learning can do in games and biology. OpenAI’s GPT series reset expectations for language models.
Those advances are not just marketing. They change how universities train students, how companies design products, and how regulators think about risk. Even when you strip away the hype, there is a real technical lineage that can be traced through citations, benchmarks, and code.
There is also a human legacy on both sides. The 700 workers in India are part of a long, often invisible, history of outsourced digital labor. Many have done annotation and content work that quietly trained the very models that might one day reduce their job options. Their story is a reminder that “AI” often rests on human effort, even when the press releases forget to mention it.
Real AI firms also rely on human labor, but when they are honest about it, they can shape better norms: fairer pay for annotators, clearer disclosures about human review, and more realistic expectations about what is automated and what is not. That cultural shift is part of their legacy too, and it matters for how society absorbs new technology.
Why fake AI and real AI get confused so easily
So why did so many people on Reddit find the story of a $1.5 billion “AI” company powered by 700 coders both believable and outrageous? Because the line between automation and outsourcing has been blurry for a long time.
From a user’s perspective, the interface looks the same. You click a button, something happens on a server, and you get a result. Whether that result came from a transformer model or a tired worker in a different time zone is invisible.
Marketing language does not help. Companies throw around terms like “AI-powered,” “machine learning,” and “neural” for everything from spam filters to simple rule-based scripts. That inflation makes it hard for non-technical people to tell what is genuinely new and what is just rebranded labor.
There is also a genuine gray area. Many honest AI products mix automation with human review. A medical AI might flag suspicious scans, but a radiologist still makes the call. A translation tool might auto-translate, then send tricky segments to human translators. Those hybrid systems are not scams, but they can look similar to the fake AI setups if the company is not clear about the split.
The cleanest way to separate fake AI from real AI is to ask two questions. First: if you removed all the humans, would anything still work? Second: is the company investing to reduce human labor over time, or just to hide it better?
Real AI companies can answer yes to the first and can show a roadmap for the second. Fake AI firms cannot. That difference shapes whether they build lasting technology or just a short-lived illusion. It matters because the next wave of AI claims will arrive, and people will need a way to sort signal from noise.
What this means for the next AI boom
The Reddit story about the fake AI unicorn is not just a funny anecdote about gullible investors. It is a warning about what happens when hype outruns understanding.
As large language models and other AI tools spread, there will be more companies claiming to use them. Some will. Some will not. The temptation to quietly swap in human labor when the model underperforms will be strong, especially when contracts are on the line.
For investors and customers, the lesson is not to distrust all AI. It is to ask better questions. Who trained the model? What data was used? How much of the workflow is automated, and how much is human? What happens to costs as usage scales?
For workers, especially in places like India and the Philippines, the story is more complicated. They are both the hidden engine behind many “AI” products and the group most at risk when real automation arrives. Their role in this history is not a footnote. It is central.
Real AI will keep advancing. Fake AI will keep trying to ride its coattails. The difference between the two is not just academic. It shapes where money goes, whose work is valued, and how honest the tech industry is about what it is selling. That is why the story of 700 people pretending to be a billion-dollar AI matters long after the Reddit thread scrolls away.
Frequently Asked Questions
How can you tell if a company is using real AI or just humans?
Ask whether the core service would still work if you removed all the human workers. Real AI companies use humans to train and monitor models, but the model does most of the work at scale. Fake AI operations rely on people to perform the main task and use software mainly to route and track that labor.
Do real AI companies still use human workers behind the scenes?
Yes. Real AI firms often use human annotators to label data, review outputs, and handle edge cases. The difference is that their goal is to improve the model so it needs less human intervention over time, and they are usually transparent about human review in sensitive areas like medicine or finance.
Why do some startups pretend to use AI when they do not?
AI hype attracts investment and customers. Building real machine learning systems is expensive and slow, so some founders use human labor as a shortcut while marketing the product as automated. As long as demos look good and margins are not scrutinized, they can raise money before the gap is exposed.
Is it a scam if a company mixes AI with human labor?
Not by itself. Many honest products combine automation with human review, especially when accuracy is important. It becomes deceptive when a company claims full automation while hiding that most of the work is done by low-wage human workers and when it sells itself to investors as a high-margin software business.