
Act on Artificial Intelligence: Protecting Privacy and Safeguarding Human Rights


The last plenary session in June was significant to me. The European Parliament adopted its position on the world's first set of rules on the use of Artificial Intelligence, concluding two years of negotiations in the Parliament in which I also took part. The outcome reflects the Parliament's commitment to enabling technological advancement while upholding fundamental rights.

The vote was a crucial opportunity for European legislators to shape the future of AI use in Europe and to address the concerns surrounding types of AI that could harm privacy and other human rights. By taking a proactive approach and limiting the use of remote biometric identification in public spaces, such as facial recognition, to the most serious cases of misconduct and crime, and only with court approval, we have taken a vital step towards mitigating the inherent threats posed by such use. But that is not all.


AI as a good friend, not a Big Brother

The European Parliament also confirmed the ban on the use of social scoring systems, such as the one we know from China. That system assigns a numerical score to each citizen based on their behavior, social interactions, and adherence to government policies. While its proponents argue that it promotes social cohesion and ethical conduct, the reality is deeply troubling. Social scoring grants the authorities an unprecedented level of control and surveillance over individuals, enabling the government to reward or punish citizens based on their perceived loyalty or compliance. This creates a chilling effect on freedom of expression, inhibiting dissent and stifling independent thought.

Apart from the category of banned uses, the European Parliament defined three other categories of AI according to the risk they may pose, each with corresponding safeguards. My personal contribution, and success, was to have so-called e-proctoring included in the category of high-risk systems. E-proctoring systems are AI programs that monitor students during exams to prevent cheating. Since these tools are known to make mistakes, it is crucial to set specific safeguards for this type of monitoring, which can determine an individual's future.

AI holds tremendous potential and offers numerous benefits that can positively impact our lives. With the AI Act now heading into the trilogue negotiations, I firmly believe that this regulation will preserve those benefits while protecting our rights.
