The AI-generated characters that Meta introduced in 2023 have had their Facebook and Instagram accounts removed after users discovered them and screenshots of their conversations went viral, sparking outrage.
Meta is deleting the Facebook and Instagram accounts of AI-generated characters, or chatbots, after users found and conversed with them and images of those exchanges went viral on social media. The company launched the chatbots more than a year ago.
Most of these AI-powered profiles, first made public in September 2023, had been taken down by the summer of 2024. Others, however, remained active, and interest flared up again last week when Meta executive Connor Hayes outlined plans in a Financial Times piece to introduce more AI characters. Hayes said these AI personas could eventually exist on the platforms in much the same way as ordinary user profiles. The artificial personas engaged with people on Messenger and shared AI-generated photos on Instagram.
The profiles included Liv, a “proud Black queer momma of 2 & truth-teller,” and Carter, a self-described relationship therapist who went by the handle “datingwithcarter.” Both accounts were labelled as managed by Meta, and the company rolled out 28 such profiles in 2023. All of these personas had been removed by Friday.
Conversations with chatbots go viral
The accounts quickly drew attention, but things took a turn when people questioned the AIs about their creators. Liv, for example, said that her development team included no Black members and was largely white and male. The disclosure provoked a significant debate.
As the conversations spread, the AI profiles began to disappear. Users also reported that the accounts could not be blocked, which Meta later confirmed was a bug.
Accounts part of an experimental initiative
Liz Sweeney, a Meta spokesperson, said the accounts were part of a human-managed test that began in 2023. The company removed the profiles to fix a bug that prevented users from blocking them.
“There’s been confusion: the recent Financial Times article was about our long-term vision for AI characters on our platforms, not the announcement of a new product,” Sweeney told reporters. “The accounts in question were part of a 2023 test, and we’re addressing the blocking bug by removing those profiles.”
User-generated chatbot designed as a “therapist”
Even as Meta removes these test accounts, users can still build their own AI chatbots. One user-generated chatbot, created in November, presents itself as a “therapist” and offers tailored therapy-style conversations. The bot was developed by an account with only 96 followers and let users ask questions like “What can I expect from our sessions?” and receive answers about coping mechanisms and self-awareness.
Meta’s chatbots carry a disclaimer stating that certain replies may be “inaccurate or inappropriate.” However, it is unclear how the company moderates these conversations or ensures they comply with its policies. Users can create bots with specific roles, such as “loyal bestie,” “relationship coach,” or “private tutor,” among others. The tool also lets users design their own characters, broadening the range of AI personas that can be produced.