Italian Data Protection Authority: OpenAI shows willingness to collaborate

In order to protect Italian citizens

Regulation | April 8, 2023


The meeting between the Italian Data Protection Authority (Garante della Privacy) and OpenAI, which lasted almost three hours, took place on April 5, after the Italian authority suspended the use of ChatGPT. The discussion was held in a very constructive climate, but it is still too early to say whether and when the service will become available again in Italy.

The meeting was attended by Sam Altman, CEO of OpenAI, and, in addition to the Authority's board (Pasquale Stanzione, Ginevra Cerrina Feroni, Agostino Ghiglia, Guido Scorza), by Che Chang, Deputy General Counsel of the US company, Anna Makanju, Head of Public Policy, and Ashley Pantuliano, Associate General Counsel.

The Italian Data Protection Authority stated in a note: "OpenAI, while reiterating its conviction that it complies with the rules on the protection of personal data, has nevertheless confirmed its willingness to cooperate with the Italian Authority with the aim of reaching a positive resolution of the issues raised by the Garante regarding ChatGPT. The Authority, for its part, stressed that it has no intention of putting a brake on the development of AI and technological innovation, and reiterated the importance of compliance with the rules protecting the personal data of Italian and European citizens. OpenAI has committed to enhancing transparency in the use of data subjects' personal data, strengthening the existing mechanisms for the exercise of data subjects' rights and the safeguards for minors, and to sending the Authority, by today, a document indicating the measures that respond to the Authority's requests."

The Authority reserves the right to evaluate the measures proposed by the company, also in light of the provision already adopted against OpenAI.

In addition to the Italian Data Protection Authority, Canada has also announced that it has opened an investigation into OpenAI, the company that develops and operates ChatGPT, for the same reasons raised by the Italian Authority.

The need to regulate generative artificial intelligence such as ChatGPT, based on Large Language Models (LLMs), as soon as possible, before it is too late, is felt by many countries:

in the US, a complaint was filed with the Federal Trade Commission (FTC) against OpenAI: according to the chair of the research group that filed it, "GPT-4 is biased, deceptive, and a risk to privacy and public safety"; in practice, large language models do not meet the agency's standards on AI, that is, they are not "transparent, explainable, fair..."

Tags: ChatGPT, Italian Data Protection Authority, OpenAI, privacy
