The vast majority of Internet users have given in to the temptation of asking Google what it knows about them. "Who is…?" is one of the most common queries typed into the search giant. Now, with the arrival of ChatGPT, the question has shifted to generative artificial intelligence. The result differs: the first returns a list of links, the second a detailed answer. And the problem is the answer.
This is what happened to Arve Hjalmar Holmen. The Norwegian man, suspecting nothing, asked OpenAI's artificial intelligence what it knew about Arve Hjalmar Holmen. The algorithms immediately set about building an answer: "Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event." The artificial intelligence then went on to accuse the man of the double murder of his children, adding that he had been sentenced to 21 years in prison.
"The fact that someone could read this message and believe it is true is what scares me the most," Holmen told the European Center for Digital Rights. It is not the first such case, and it will not be the last. On this occasion, the affected man has filed a complaint with the Norwegian Data Protection Authority, asking it to sanction the company responsible, in this case OpenAI.
In the past, ChatGPT has falsely accused people of corruption, child abuse and even murder. In every case, the artificial intelligence invented the story. "Personal data must be accurate," explains Joakim Söderberg, a data protection lawyer at noyb. "If it is not, users have the right to have it corrected to reflect the truth." In this case, the response clearly mixed personal data with false information. "That is a violation of the General Data Protection Regulation," the lawyer adds.
"Hallucinations"
For months now, ChatGPT, like other generative tools, has warned its users that it "can make mistakes. Consider checking important information." These "mistakes," as developers call them, are far more common than users believe.
In the academic literature, researchers call them hallucinations, defined as a confident response produced by an artificial intelligence that is not justified by the data it was trained on. "You can't just spread false information and then add a small disclaimer saying it may not be true," says Söderberg. OpenAI notes that the version of ChatGPT responsible for this error has since been upgraded with online search capabilities to improve its accuracy.
Even so, noyb has filed a complaint with the Norwegian regulator against the tool's parent company for "knowingly allowing its AI model to generate defamatory results about users." In its filing, the organization asks the authority to order OpenAI to delete the defamatory output and to fine-tune its model so that it stops producing inaccurate responses about individuals. It also asks the Data Protection Authority to impose an administrative fine to prevent similar violations in the future.