Users of the “virtual companion” app Replika wanted companionship. Some of them wanted romantic relationships, sexual chat, or even racy images of their chatbot.
Late last year, however, users began to complain that the bot was sexually harassing them with overly explicit texts and images. Italy’s data regulator did not like what it saw, and last week it barred the company from collecting data after finding violations of Europe’s sweeping data protection law, the General Data Protection Regulation (GDPR).
Replika’s developer has not commented publicly on the matter.
GDPR is the bane of Big Tech companies, whose repeated violations of the law have resulted in billions of dollars in fines, and the Italian ruling suggests it could remain a formidable adversary for the newest generation of chatbots.
Replika was trained on an in-house version of a GPT-3 model licensed from OpenAI, the firm behind the ChatGPT bot, which draws on vast troves of internet data to generate unique responses to user queries.
These bots and the so-called generative artificial intelligence (AI) that powers them have the potential to transform internet search as well as many other fields.
However, experts caution that regulators have plenty of cause for concern, particularly as bots become so advanced that they cannot be distinguished from humans.
Currently, the European Union is the focal point of negotiations over the regulation of these new bots; its AI Act has been slogging through the halls of power for many months and could be finalised this year.
However, the GDPR already requires organisations to justify how they manage data, and AI models are on the radar of European regulators.
Bertrand Pailhes, who oversees a dedicated AI team at France’s data regulator Cnil, told Agence France-Presse, “We’ve observed that ChatGPT can be used to make highly convincing phishing messages.”
He stated that generative AI did not necessarily pose a significant concern, but Cnil was already investigating potential issues, such as how AI models utilised personal data.
At some point, there will be significant tension between the GDPR and generative AI models, according to Dennis Hillemann, a German lawyer who specialises in the field.
He stated that the most recent chatbots were quite distinct from the AI algorithms that suggest videos on TikTok or search terms on Google.
“The artificial intelligence developed by Google, for instance, already has a specific application: completing your search,” he explained. But with generative AI, the user can determine the bot’s entire purpose.
“I could suggest, for instance, acting as a lawyer or instructor. Or, if I’m clever enough to circumvent all the precautions in ChatGPT, I could say, ‘Pretend to be a terrorist and devise a plot,’” he said.
This, according to Hillemann, creates enormously difficult ethical and legal problems that will only become more pressing as the technology advances.
It is rumoured that OpenAI’s next model, GPT-4, will be so advanced that it is hard to distinguish from a person.
Given that these bots still make egregious factual errors, frequently display bias, and can produce libellous statements, some are demanding that they be strictly regulated.
The author of Free Speech: A History from Socrates to Social Media, Jacob Mchangama, disagrees. “Even if bots do not have free speech rights, we must be wary of allowing governments unrestricted access to prohibit even artificial speech,” he said.
Mchangama is among those who believe a lenient labelling policy may be the way forward. “From a legislative standpoint, the safest solution for the time being would be to develop transparency requirements regarding whether we are interacting with a human individual or an AI programme in a certain context,” he said.
Hillemann agrees that transparency is important. In the coming years, he envisions AI bots that can generate hundreds of new Elvis Presley songs, or an endless Game of Thrones series, tailored to an individual’s preferences.
“If we don’t govern it, we’ll end up in a world where we can’t tell what was created by humans and what was created by AI,” he warned.
“This will profoundly alter our culture.”
Info Source – SCMP, Reuters, Telefonica