Ethical Principles in the Use of Artificial Intelligence Systems

Owing to ongoing advancements, artificial intelligence (“AI”) technologies have become an integral part of our daily lives. While AI technologies offer numerous benefits, it is important to be aware of the potential negative consequences they may bring. In this context, it is crucial to examine the ethical principles developed by various organizations worldwide regarding the responsible use of AI systems. These principles aim to mitigate risks and ensure that AI technologies are deployed in a manner that respects human rights, promotes transparency, and adheres to legal and ethical standards.

B. Positive and Negative Impacts of AI Systems

1. Positive Impacts

AI technologies, especially due to their ability to analyze large datasets at high speed, provide significant advantages in rapid decision-making processes. In the healthcare sector, AI systems are invaluable for disease diagnosis, while in the education sector, AI can offer personalized learning experiences, contributing to both efficiency and time savings. AI systems that provide real-time translations are also widely used and contribute to the simplification of daily activities.

Furthermore, AI is applied in various fields, including navigation, cybersecurity applications, and the defense industry. AI systems are also utilized in finance, agriculture, and energy sectors. For example, voice response systems or virtual assistants used in telephone conversations with banks are AI products. Similarly, autonomous vehicles and unmanned aerial vehicles leverage AI systems to enable autonomous driving capabilities.

2. Negative Impacts

Despite the substantial benefits, AI systems present a range of legal and ethical risks that cannot be ignored.

a) Bias and Discrimination Risks

One of the most significant concerns is the risk of bias inherent in AI systems. If the datasets used for training AI algorithms contain biased information, the decisions produced by the AI will likely perpetuate these biases. This can lead to discriminatory practices, particularly in sensitive contexts such as recruitment, credit scoring, and criminal justice. For instance, in recruitment processes, an AI model trained on biased data may unjustly favor certain demographic groups while discriminating against others.

To address this issue, transparency in AI algorithms is recognized as a fundamental ethical and legal principle. Transparent algorithms allow for the identification and correction of biases, thereby fostering accountability and trust in AI systems.
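To make the mechanism concrete, the following minimal sketch (illustrative only; the dataset, group labels, and decision rule are hypothetical) shows how a model that simply learns historical hiring rates reproduces the bias embedded in its training data:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, hired) pairs.
# Group "A" was historically favored; group "B" was not.
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(records):
    """Learn each group's historical hire rate -- a naive 'model'."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Recommend a candidate whenever the learned group rate clears the threshold."""
    return model[group] >= threshold

model = train(training_data)
print(model)                # {'A': 0.75, 'B': 0.25}
print(predict(model, "A"))  # True  -- group A is systematically favored
print(predict(model, "B"))  # False -- group B is rejected wholesale
```

The model never sees an individual candidate's qualifications; it only echoes the historical pattern, which is precisely why transparency and auditability of the training data and decision rule matter.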

b) Data Privacy and Security Risks

AI systems often require access to vast amounts of data, raising concerns about data privacy and security. The risk of privacy violations arises when AI systems inadvertently capture or misuse personal information during data analysis processes. This may occur through broad data collection techniques or from user-provided information during interactions with AI systems. Data breaches or misuse may constitute violations of legal frameworks governing personal data protection, such as the General Data Protection Regulation (GDPR) in the European Union.

Ensuring data security and compliance with data protection laws is imperative to prevent the unlawful processing or disclosure of personal data by AI systems.

c) Lack of Algorithmic Transparency and Accountability (The “Black Box” Phenomenon)

Another significant issue is the lack of transparency in AI system algorithms. The complexity of these algorithms often renders their operational processes opaque, making it challenging to understand the basis for AI-generated decisions. This issue is referred to as the “black box” phenomenon in the European Union's Ethics Guidelines for Trustworthy AI.

When harm arises from an AI system's decision, determining legal responsibility becomes complex due to this opacity. Addressing this challenge requires the adoption of legal frameworks that mandate the traceability and explainability of AI decisions to ensure accountability.
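By way of contrast, a fully transparent model can report the basis for each decision. The sketch below (a hypothetical linear scoring rule, not drawn from any cited framework) shows a credit-scoring decision that exposes every feature's contribution, i.e. the kind of traceability such frameworks call for:

```python
# Hypothetical weights for a transparent linear credit-scoring rule.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def decide(applicant):
    """Score an applicant and return the decision with a per-feature breakdown."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        "explanation": contributions,  # the audit trail behind the decision
    }

result = decide({"income": 4.0, "debt": 1.5, "years_employed": 2.0})
print(result["approved"])     # True
print(result["explanation"])  # each feature's signed contribution to the score
```

Because every decision carries its own breakdown, a harmed party (or a regulator) can trace exactly which factors drove the outcome; deep, opaque models offer no such built-in record, which is what complicates the assignment of legal responsibility.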

d) Employment Displacement Risks

Due to their capacity to perform various tasks efficiently and automate processes, AI systems present a risk of job displacement, particularly in roles that involve repetitive or routine tasks. While AI technologies create new job opportunities in technology-driven fields, they may also lead to unemployment in certain sectors. From a legal perspective, policymakers must strike a balance between fostering technological advancements and safeguarding workers' rights. Proactive legal and regulatory measures may be necessary to ensure the fair transition of workers into new roles.

C. Ethical Principles

The use of AI systems has been evaluated by various organizations such as the World Trade Organization (“WTO”), the World Health Organization (“WHO”), UNESCO, the United Nations (“UN”), the European Union (“EU”), and the Turkish Bar Association (“TBB”), which have provided guidelines and ethical principles for the safe and responsible use of these technologies. While these principles are non-binding, they highlight crucial considerations for the ethical deployment of AI systems.

The standards published by the WTO focus on the definition of products, systems, and processes, specifying conformity requirements, ensuring the continuity of trade, and maintaining quality. A key aspect is the clear definition of systems, ensuring that stakeholders are informed about the risks and characteristics of the AI systems they are engaging with.

The principles outlined by the WHO include protecting human autonomy; promoting human well-being, safety, and the public interest; ensuring transparency, explainability, and understandability; promoting accountability and responsibility; ensuring participation and equality; and encouraging responsive and sustainable AI systems.

UNESCO has set forth ethical principles for AI usage, including proportionality and non-harm; safety; fairness and non-discrimination; sustainability; privacy protection and data security; human oversight; transparency and explainability; establishing accountability; raising awareness of AI; and adopting a compatible execution approach. These principles are also endorsed by the UN and emphasize the importance of ensuring AI systems do not cause harm, maintain fairness, and provide mechanisms for accountability.

In 2019, the EU published guidelines on ethical AI use, which were followed by the adoption of a regulation in August 2024. These principles include human oversight; ensuring security; privacy and data protection; transparency; diversity, non-discrimination, and fairness; promoting social and environmental well-being; and accountability.

Moreover, the TBB has published principles for trustworthy AI systems, which include:

  • Human oversight and supervision
  • Technical robustness and security
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Social and environmental benefit
  • Accountability

These principles highlight the importance of ensuring human oversight and supervision, developing robust and secure systems, and protecting privacy and data rights. A trustworthy AI system must be transparent, unbiased, and accountable.

A reliable AI system must be developed with particular emphasis on data access and protection, since safeguarding the data provided by users is of significant importance. The system must also be transparent and explainable; to that end, the algorithms used in the AI system should be auditable.

As mentioned earlier, the principles published by the TBB also emphasize the importance of creating an AI system free from biases, ensuring that the outputs produced by the AI are likewise unbiased. Otherwise, discrimination may occur, potentially infringing on fundamental rights. Relatedly, AI systems must be accountable: the negative outcomes of AI systems must be subject to oversight, with compensation mechanisms in place to address any potential harm.

Clearly, the risks associated with AI systems are generally understood to include discrimination, unfair treatment, the potential for personal data breaches, the inability to assign responsibility in the event of harm, and the use of unreliable algorithms. As a result, similar principles have been established by various organizations. Notably, the objective is not to create systems that operate independently of human oversight; on the contrary, human oversight and monitoring are consistently emphasized as essential in AI applications. In this context, it is possible to argue that the ethical principles put forward to mitigate the risks associated with AI carry global significance.

D. Ethical Principles in the Use of Artificial Intelligence in Law

The use of AI systems has begun to spread in the legal field as well. In fact, AI systems such as “Ross Intelligence” and “Harvey” have been developed to specifically assist in legal matters. These systems generally function in a similar way to platforms like “ChatGPT”, providing responses to legal inquiries based on the legal resources and case precedents they have been trained on.

As the use of AI systems becomes more widespread, in July 2024, the American Bar Association (“ABA”) published a guide on the ethical principles concerning the use of AI in the legal field. This guide addresses issues related to authorization, confidentiality, communication, and fees in the context of lawyers using AI systems.

The guide stipulates that lawyers must be competent in legal knowledge and practice and should be aware of both the benefits and risks associated with the technological tools they use to provide legal services. In this regard, it is important that lawyers understand the risks of the AI systems they use and are capable of supervising them.

Moreover, lawyers using AI tools must maintain the confidentiality of information related to their clients’ representation, unless the client has given explicit consent.

Additionally, lawyers should determine the course of action in the representation process by reasonably consulting with their clients and communicating necessary information. In this context, clients should also be informed about the use of AI systems.

Lastly, according to the ABA guidelines, while lawyer fees and expenses must be reasonable, lawyers may charge for time spent entering the necessary information into the AI system to prepare a draft of a legal document (for example, 15 minutes), as well as for the time spent reviewing the draft for accuracy. However, lawyers may not charge clients for the time spent learning how the AI tool operates.

Following these regulations, an official opinion issued by the ABA emphasizes that these rules must be adhered to and states the following:

“With the continuous development of technology by lawyers and courts, it is essential for lawyers to be diligent in complying with the Rules of Professional Conduct to ensure their ethical responsibilities are met and their clients are protected.”

It is imperative for legal professionals to act with a heightened awareness of the risks associated with the use of AI systems. Lawyers must ensure that any outputs generated through AI systems are explicitly identified as AI-produced to maintain transparency and accountability. This transparency is essential to ensure compliance with legal obligations, adherence to professional ethical standards, and the prevention of potential legal and ethical issues arising from AI-generated content.

Similar ethical principles are reflected in the European Union’s Artificial Intelligence Act, which emphasizes the importance of transparency, accountability, and risk mitigation in the deployment of AI systems. Additionally, the regulations issued by the ABA serve as a vital reference for establishing ethical guidelines in the legal profession regarding AI usage.

Çağla BARUT, Ece BAYAR
