Meta, the parent company of Facebook and Instagram, announced on Wednesday that it had fixed a security vulnerability that could have allowed users to view other people’s private AI prompts and AI-generated content.
Sandeep Hodkasia, an Indian security researcher and founder of the cybersecurity firm AppSecure, discovered and disclosed the flaw. Hodkasia told TechCrunch that he had found the vulnerability on December 26, 2024, and that Meta subsequently paid him $10,000 through its bug bounty program.
Meta released a fix on January 24 of this year and said it had found no evidence that the vulnerability was ever maliciously exploited.
According to Hodkasia, the flaw lay in how Meta AI, the company’s standalone chatbot app, handled user prompt edits. While examining browser traffic during the editing process, he discovered that each prompt and its AI-generated response was assigned a distinct numerical identifier.
By altering this identifier, Hodkasia was able to retrieve content belonging to other users, indicating that Meta’s servers were not verifying that the requester was authorized to access the prompt and its output.
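The failure described here is a missing server-side ownership check: the server returned whatever record the supplied identifier pointed to. A minimal sketch in Python, with entirely hypothetical function names and data (this is not Meta's actual code), illustrates the pattern and the fix:

```python
# Hypothetical illustration of the class of bug described; all names,
# IDs, and records here are invented for the example.

PROMPTS = {
    101: {"owner": "alice", "prompt": "draft my resume", "response": "..."},
    102: {"owner": "bob", "prompt": "a private question", "response": "..."},
}

def get_prompt_vulnerable(prompt_id: int) -> dict:
    # Vulnerable pattern: returns whichever record the supplied ID
    # points to, without checking who is asking for it.
    return PROMPTS[prompt_id]

def get_prompt_fixed(prompt_id: int, requesting_user: str) -> dict:
    # Fixed pattern: the server confirms the requester owns the record
    # before returning it.
    record = PROMPTS[prompt_id]
    if record["owner"] != requesting_user:
        raise PermissionError("not authorized to view this prompt")
    return record
```

In the vulnerable version, any caller who can guess a valid ID sees another user's prompt; in the fixed version, the same request is rejected.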
Hodkasia told TechCrunch that “the prompt numbers were easily guessable,” warning that a malicious actor could have quickly and cheaply written a basic script to harvest a significant amount of private user data.
In a statement, Meta spokesperson Ryan Daniels confirmed the fix and reiterated that the company had found “no evidence of abuse.”
The bug’s disclosure comes as big tech companies continue their aggressive push into generative AI products, despite persistent concerns about platform security and data privacy.
When Meta AI launched earlier this year to compete with OpenAI’s ChatGPT, it quickly came under fire after a number of users inadvertently made private conversations public. The latest disclosure is likely to renew questions about how dependable the security safeguards built into rapidly evolving AI systems really are.
Experts have long cautioned that the AI industry’s pace of innovation frequently outstrips its ability to implement appropriate security measures. As Meta and its rivals race to dominate the emerging AI market, mistakes like this one could significantly erode consumer trust.