Artificial Intelligence: How to prevent bots from infringing on our privacy

The most visible face of AI is the chatbot, which has become a consumer product in almost every household, with millions of users every month.

Artificial Intelligence words are seen in this illustration taken March 31, 2023 (photo credit: REUTERS/DADO RUVIC/ILLUSTRATION/FILE PHOTO)

In the past year, a new race has begun, dubbed the AI Arms Race 2.0: ChatGPT entered our lives and set off a battle over the field of artificial intelligence.

The same ChatGPT that impressed everyone when it first came out has also raised concerns worldwide about privacy invasion, copyright infringement and even the economic, security and political stability of countries.

The most visible face of artificial intelligence is the chatbot, which has become a consumer product in almost every household, with millions of users every month.

This has led to an acceleration in investment in the field and competition among technology giants for AI tools.

The launch of ChatGPT has also raised concerns about the protection of personal information. 

OpenAI and ChatGPT logos are seen in this illustration taken February 3, 2023 (credit: REUTERS/DADO RUVIC/ILLUSTRATION)

Regulators and lawmakers worldwide have responded to these concerns, and they will soon have to decide whether to restrict, slow down or allow various developments in the field.

Global limitations

Beyond privacy concerns, some see AI as a public and economic issue that interests regulators, particularly in Europe, where authorities strive to lead and respond first.

In Italy, for example, the operation of OpenAI's ChatGPT was suspended after the country's data protection authority, the Garante, banned its use and opened an investigation into suspected privacy law violations.

The move was intended to prevent violations of privacy laws and the leakage of personal information to malicious actors.


Following this, the company announced several changes to the chatbot's operations and pledged full transparency about how it works.

Among other things, only users aged 18 and above will be allowed to use the software freely, while those aged 13 to 18 will need parental consent. This still worries regulators in Italy, because parental consent is easy to forge or falsify.

In addition to Italy, data protection authorities in Spain, France, Germany, Ireland and other Western countries are examining how artificial intelligence systems collect and use data.

Recently, the European Data Protection Board, which brings together Europe's privacy authorities, established a task force on ChatGPT to coordinate investigations and enforcement actions, a step that may lead to a common policy across the continent.

After these incidents, the service resumed operating but remains under continuous monitoring. Various data protection authorities in Europe are still weighing additional requirements for artificial intelligence, such as stricter age verification, public information campaigns about how data is processed, and stronger rights for citizens over their data.

Questions have also arisen regarding the data users provide to the chatbot itself and whether it can be used without their consent.

Alongside these investigation and enforcement efforts, the European Union has recently formulated a set of laws expected to be passed in the coming months.

What about the United States?

In the US, several measures are being developed to address the new technology. The purpose of the regulation and emerging laws is to prevent potentially catastrophic developments: a proper examination is necessary before the public is given powerful tools like artificial intelligence.

A smartphone with a displayed ChatGPT logo is placed on a computer motherboard in this illustration taken February 23, 2023 (credit: DADO RUVIC/REUTERS)

It is not easy to control such powerful technology when it gives millions of users access to personal information and sensitive findings. Recent tests have therefore introduced several models whose role is to protect the content that users post on various platforms.

"The use of artificial intelligence technologies poses numerous risks to privacy, personal information, and their protection. The risks to privacy can manifest both during the use of the technology, where user information is collected, and in relation to the information to which users are exposed while using the technology and the manner in which existing information is processed through the technology," explained Attorney Gafnit Lagziel, a partner and head of the privacy protection department at the Firon law firm.

What are the immediate risks?

"The Italian data protection authority relied on two claims," said Lagziel. "The secondary one is that ChatGPT exposes minors to responses that are not suitable for their level of development and understanding. The primary claim is that there is no legal basis justifying the massive collection of data, including personal information, for the purpose of 'training' the platform's algorithms."

However, shortly after blocking the chatbot, the Italian authority provided a list of fixes required for its relaunch.

"In response, OpenAI began implementing and embedding corrections, and it is expected to comply with European privacy laws within a short period of time."

Should Israelis be worried?

"Artificial intelligence is perhaps one of the greatest inventions of our time, if not the greatest." Like any new thing, there is concern that it will harm or destroy the foundations of the existing. 

"However, in my opinion, it is impossible to prevent the progress of technology, and despite the significant concerns, it is reasonable to say that it will not undermine privacy."

What is needed to ensure this?

"The legislation needs to adapt itself and evolve accordingly. For example, the European Union is already working on specific legislation to regulate the field of artificial intelligence, which is expected to be approved by the European Parliament in the near future. 

"It is important to note that the existing legislation today, which regulates privacy aspects in the European Union - GDPR, and privacy protection laws in Israel, regulates most of the issues that arise in the context of artificial intelligence as well. 

"Those dealing with artificial intelligence are likely to implement it as a matter of course, so the concern for privacy has significantly diminished."

Who is responsible for protecting our privacy?

"The new legislation takes into account the existing privacy laws, which regulate most of the issues related to artificial intelligence, and the additional regulatory mechanisms that will be enforced mainly on the owners and developers of these artificial intelligence systems, with the aim of creating preemptive mechanisms for the creation of artificial intelligence, such as registering artificial intelligence systems in a registry, training artificial intelligence on data sets that are meant to prevent biases, and more. 

"The new legislation also differentiates between the regulation of artificial intelligence systems with higher risks, including privacy concerns such as unauthorized access or theft of personal information."

What about Israel?

"In Israel as well, the Privacy Protection Authority has taken a position regarding the 'duty of knowledge in the context of collecting and using personal information,' in which it addressed the duty of knowledge in algorithm-based or artificial intelligence-based systems. It clarified, based on the existing privacy laws today, the way in which it should be implemented on the new technology."