The Ethical Issues of Artificial Intelligence and the AI Act

Development of Artificial Intelligence

Over the past few years, the development of Artificial Intelligence has achieved exceptional results, making this technology an integral part of our daily reality. Thanks to progress in machine learning techniques, such as deep learning, neural networks have become increasingly complex and efficient at processing information.

AI has been widely integrated into various sectors, from medicine to finance, revolutionizing production and decision-making processes. Today, artificial intelligence plays an essential role in fields such as healthcare, agriculture, and environmental management, and its impact on modern industry continues to grow. The launch of language models like ChatGPT has demonstrated the potential of artificial intelligence to generate complex texts.

However, the rapid advancement of Artificial Intelligence has led to a growing awareness of the ethical implications associated with the use of this technology. The scientific community is committed to ensuring ethics and responsibility in the use of AI, addressing challenges related to transparency and preventing bias in the data used to train models. Challenges related to the lack of diversity in the industry and limited public awareness persist, requiring continuous education and regulation that balances technological efficiency with ethical and social needs.

Ethical Issues of Artificial Intelligence

Artificial Intelligence raises many ethical concerns. Here are some of the main ones:

  • Bias: AI can be influenced by biases, because the data used to train machine learning models can be unbalanced or contain discriminatory information. This can lead to unfair and discriminatory decisions, such as selecting job candidates based on their gender or race (see the sketch after this list).
  • Copyright: Many texts used to train artificial intelligence models are protected by copyright, raising unresolved questions about the legitimacy of such use.
  • Surveillance: AI can be used for mass surveillance, violating people’s privacy. For example, facial recognition systems can be used to identify individuals in public places without their consent.
  • Responsibility: AI can be used to make important decisions, such as selecting job candidates or granting loans. However, if these decisions are based on biased algorithms, it can be challenging to determine who is responsible in case of errors or discrimination.
  • Autonomy: AI can be used to make decisions without human intervention. However, this can lead to decisions that conflict with the values of modern society, such as pursuing objectives harmful to the environment.
  • Deepfakes: AI can be used to manipulate video, realistically replacing or overlaying a person’s face in existing multimedia content.
  • Security: AI can be vulnerable to cyber-attacks, compromising the security of data and infrastructure.
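
The bias concern above lends itself to a concrete illustration. Below is a minimal sketch in Python using purely synthetic data and scikit-learn; the hiring scenario, variable names, and coefficients are hypothetical assumptions chosen to mimic a historically biased dataset, not any real system. It shows how a model trained on such data reproduces the disparity, measured here as the difference in selection rates between two groups (demographic parity).

```python
# Minimal sketch (hypothetical synthetic data): a model trained on biased
# hiring history reproduces that bias in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a protected attribute (group 0/1) and a skill score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical labels encode bias: group 0 was favored at equal skill.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, size=n)) > 0.5

# Train on the biased history, with the protected attribute as a feature.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# The learned model replicates the historical disparity.
pred = model.predict(X)
rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
print(f"selection rate, group 0: {rate0:.2f}")
print(f"selection rate, group 1: {rate1:.2f}")
print(f"demographic parity difference: {rate0 - rate1:.2f}")
```

Measuring the gap in selection rates is one of the simplest fairness audits; a large difference signals that the model has inherited the historical bias even if it performs well on accuracy alone.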

These are just some of the main problems that artificial intelligence poses. To ensure responsible and safe use, a multidisciplinary approach and appropriate regulation are necessary.

The AI Act

In December 2023, the European Parliament and the Council of the European Union reached a political agreement on the AI Act, the world’s first comprehensive legislation regulating the development and use of artificial intelligence. The most controversial issue concerned the use of the technology for law enforcement, with the Parliament defending a total ban while the Council pushed for a more permissive approach. The compromise prohibits predictive policing, social scoring, and real-time biometric identification in public spaces, with narrow exceptions such as clear terrorist threats.

Furthermore, requirements were established for foundation models, i.e., general-purpose AI models trained on large amounts of broad, unlabeled data.

Although the agreement bodes well, concerns remain, and the implementation phase will need to be watched closely. In conclusion, the AI Act marks a step forward, but continuous oversight will be crucial to keep pace with the evolving challenges of AI.