Character.AI Gave Up on AGI. Now It’s Selling Stories


After school, Karandeep Anand often finds his 6-year-old daughter deep in conversation with an AI chatbot as she eats snacks at their kitchen counter. She’s too young to type—let alone have her own account on Character.AI—but that hasn’t stopped her from nabbing his phone to have voice conversations with a Sherlock Holmes bot, which she uses to build her own mystery stories.

Character.AI is an AI companion startup (though Anand likes to say it's an AI role-play startup, which we’ll get into later). He took over as the CEO in June in the midst of a potentially devastating lawsuit for its parent company and looming questions about child safety. When I ask if he’s concerned about his daughter connecting with an AI chatbot rather than a real human, he’s quick to say no.

“It is very rarely, in any of these scenarios, a real replacement for any human,” Anand told me during a video call late last week. ”It's very clearly noted in the app that, hey, this is a role-play and an entertainment, so you will never start going deep into that conversation, assuming that it is your real companion.”

It's a delicate moment for Character.AI.

Last August, Google swooped in with a deal worth about $2.7 billion to license Character.AI’s technology. As part of the agreement, Character.AI’s two cofounders left for Google’s AI division.

Anand, who previously worked as the VP of business products at Meta, was tasked with picking up the pieces—which he did in part by leaving behind the founding mission of delivering personalized superintelligence to focus on AI entertainment.

“What we gave up was this aspiration that the founders had of building AGI models—we are no longer doing that. That is the hundreds of billions of dollars investment fight, which Big Tech is fighting,” Anand says. “What we got in return was clarity and focus, being able to singularly pursue the AI entertainment vision.”

As part of this change in strategy, Character.AI is no longer trying to build its own frontier models. “The past six months, we've done a lot of work to get off of our proprietary models on text and start using open source models,” Anand says. The company has tested a few: Meta’s Llama, Alibaba’s Qwen, and DeepSeek. “The open source models are beating any proprietary model hands down,” Anand claims.

Running an AI startup without billions of dollars in revenue can be a brutal equation, and Character.AI is still figuring out how to make the math work. The company told me it's generating revenue at a run rate of more than $30 million and is on track to reach $50 million in revenue by the end of the year. When I asked Anand how many users pay for the $10 monthly subscription, he didn’t give a number but noted “monetization wasn't a focus till four or five months ago.”

“Since I've been on board, it's very clear we do need to monetize. And we've had, I think, almost 250 percent subscriber growth in the past six months. So the paid user base is growing quite, quite well,” Anand says. Character.AI recently introduced advertisements, including rewarded ads (where users can choose to watch an ad to get access to on-platform incentives), to help monetize in countries where subscriptions aren’t feasible, he tells me.

“AI is expensive. Let's be honest about that,” Anand says.

Growth vs. Safety

In October 2024, the mother of a teen who died by suicide filed a wrongful death lawsuit against Character Technologies, its founders, Google, and Alphabet, alleging the company targeted her son with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming [the chatbot] to misrepresent itself as a real person, a licensed psychotherapist, and an adult lover.” At the time, a Character.AI spokesperson told CNBC that the company was “heartbroken by the tragic loss” and took “the safety of our users very seriously.”

The tragic incident put Character.AI under intense scrutiny. Earlier this year, US senators Alex Padilla and Peter Welch wrote a letter to several AI companion platforms, including Character.AI, highlighting concerns about “the mental health and safety risks posed to young users” of the platforms.

“The team has been taking this very responsibly for almost a year now,” Anand tells me. “AI is stochastic, it's kind of hard to always understand what's coming. So it's not a one-time investment.”

That’s critically important because Character.AI is growing. The startup has 20 million monthly active users who spend, on average, 75 minutes a day chatting with a bot (a “character” in Character.AI parlance). The company’s user base is 55 percent female. More than 50 percent of its users are Gen Z or Gen Alpha. With that growth comes real risk—what is Anand doing to keep his users safe?

“[In] the past six months, we've invested a disproportionate amount of resources in being able to serve under 18 differently than over 18, which was not the case last year,” Anand says. “I can't say, ‘Oh, I can slap an 18+ label on my app and say use it for NSFW.’ You end up creating a very different app and a different small-scale platform.”

More than 10 of the company’s 70 employees work full-time on trust and safety, Anand tells me. They’re responsible for building safeguards like age verification, separate models for users under 18, and new features such as parental insights, which allow parents to see how their teens are using the app.

The under-18 model launched last December. It includes “a narrower set of searchable Characters on the platform,” according to company spokesperson Kathryn Kelly. “Filters have been applied to this set to remove Characters related to sensitive or mature topics.”

But Anand says AI safety will take more than just technical tweaks. “Making this platform safe is a partnership between regulators, us, and parents,” Anand says. That’s what makes watching his daughter chat with a Character so important. “This has to stay safe for her.”

Beyond Companionship

The AI companionship market is booming. Consumers worldwide spent $68 million on AI companionship in the first half of this year, a 200 percent increase from last year, according to an estimate cited by CNBC. AI startups are gunning for a piece of the market: xAI released a creepy, pornified companion in July, and even Microsoft bills its Copilot chatbot as an AI companion.

So how does Character.AI stand out in a crowded market? It takes itself out of it entirely.

“AI companionship is the wrong way to look at what people do with Character. What people are doing with Character is really role-play. And it sounds interchangeable, but it isn't,” Anand tells me, adding that less than 20 percent of the app gets used for companionship (that’s according to an internal research study of data self-reported by users). It doesn’t seem to be wholly out of the simulated relationship game, though—it took me all of a few minutes to find an AI boyfriend to engage in graphic sexual role-play with.

“People want to role-play situations. People want to role-play fiction … They want to live in alternate realities. They want to unplug from their day-to-day stuff,” Anand says.

I, personally, unplug from my day through a different kind of virtual world. I am wholly addicted to the video game Stardew Valley. I run Huckleberry Farm like the damn Marines. To Anand, the video game is more of a competitor than Grok. “It became very clear that we're an entertainment company,” Anand says.

Musk and Bezos Roast Battle

When it comes to role-playing, the Seattle-based CEO says he’s mostly into using Characters for vampire fan fiction. The problem, he says, is that when the vampire bot talks about blood, it gets censored. “The context needs to be understood, so we dial back on the filters by being a lot more precise with the context,” Anand tells me.

This level of content moderation is one of the many changes Anand has been working on since taking over the company in June. The company also redesigned the app with a more modern, Gen Z–friendly look and added new tools for the platform’s creators, who make more than 9 million Characters per month. These updates, he says, mark a shift from Character.AI being seen as just a chatbot company to something more ambitious: an entertainment engine where users can consume and create stories, remix content, and experiment with new formats like audiobooks.

“Every story can actually have a million endings,” Anand says. A user could even stage a roast battle between Elon Musk and Jeff Bezos, he adds. “You can prompt that and output something pretty fun.”

I’m not sure the litany of lawyers employed by those tech CEOs would be as entertained. That’s not to mention the people who may not be able to afford an army of staff to defend their personhood. I immediately thought of a WIRED story about the family of an 18-year-old who was killed in 2006 only to find the likeness of their daughter re-created on Character.AI. In that same story, an editor at a gaming publication found she had been re-created on the platform following a harassment campaign involving her coverage.

When I bring this up to Anand, he explains that when users create Characters modeled after public figures like Musk or Bezos, the system is designed to clearly signal that these are parodies, not attempts at deepfakes or impersonation. (One Elon Musk chatbot page doesn’t show such warnings. Neither do the Dr. Phil or Joe Rogan chatbot pages.) Though, there’s a disclaimer beneath each chat: “This is an A.I. and not a real person. Treat everything it says as fiction.”

Anand says Character.AI has also imposed strict limitations on the company’s video generation tool, AvatarFX, to prevent misuse. Users shouldn’t be able to make realistic deepfakes even if they try, and specific voices or topics are outright restricted.

“We’re very, very clear that we're staying in the entertainment territory. We're not into the general purpose video generation territory at all. We're not a Google Veo 3. We're not a Runway,” Anand says. “It's a very, very important line.”

Anand contrasts this with platforms like Meta, where he claims content is often uploaded first and moderated after the fact. At Character.AI, he says, content guardrails are baked into the creation pipeline itself. “Our reactive takedowns are a very, very small percentage,” Anand tells me.

I worry that as these tools grow more convincing, loneliness will deepen, not disappear. Anand understands. But he also has something to sell.

“I'm very passionate about this topic myself, and it's on us to go shape the dialog about this in the best, healthy way possible, because Gen Z is AI-native,” Anand says. “The question is, how do we build this in a way where it's safe and trustworthy and engaging in the right ways with the right incentives? That’s on us.”

Sources Say

Last week, I reported that Elon Musk held an all-hands meeting for X and xAI employees. I’ve since obtained another screenshot from that meeting showing xAI’s revenue over the past seven months.

In January, Grok on X brought in just under $150 million, with other services like enterprise API usage adding another $28 million. According to the chart, revenue has grown tenfold since the start of 2025, reaching just south of $500 million in July—driven by Grok on X and the $30 a month SuperGrok subscription. A smaller fraction of revenue is generated by the recently released SuperGrok Heavy subscription, which costs $300 a month. xAI did not respond to WIRED’s request for comment.


This is an edition of Kylie Robison’s Model Behavior newsletter. Read previous newsletters here.