
AI AND FACIAL RECOGNITION: THE POTENTIAL FOR ABUSE AND DISCRIMINATION


INTRODUCTION

AI-powered facial recognition has emerged as one of the most controversial technologies of the digital age. It is among the most effective tools to come out of the development of biometric technology, yet it poses ethical challenges and threats to human rights, especially the right to privacy. Although the technology is typically justified as a way to identify criminals or anyone who may pose a threat to society, its deployment can amount to an invasion of the privacy of everyone it scans. Artificial intelligence (AI) and facial recognition technologies are increasingly used in industries such as law enforcement, marketing and commerce. While these technologies can bring real benefits, such as improved security and convenience, they also raise concerns about potential abuse and discrimination.[1] Facial recognition technology has been found to reinforce gender and racial discrimination, and the risk of being misidentified is exacerbated primarily by the racial, gender and class bias built into both algorithms and police databases.[2] AI-powered facial recognition technologies should therefore be carefully evaluated and regulated to ensure ethical use and to prevent discrimination and abuse.[3]

 

BIAS AND DISCRIMINATION IN AI ALGORITHMS

Bias and discrimination in AI algorithms can have serious consequences across industries. AI systems can become biased, and therefore discriminatory, particularly when they are trained on biased or incomplete data. Such bias can lead to unequal treatment based on legally protected characteristics like race and gender, perpetuating existing patterns of discrimination. One obstacle lies in establishing the relationship between algorithmic bias and discrimination, and in defining the thresholds that constitute evidence of discrimination.[4] Furthermore, because bias in AI design is usually unintentional, it is difficult to identify and address, yet it can still produce actual instances of discrimination; scrutinising training data and design choices to understand and address AI biases has helped reduce these issues in areas such as health and recruitment. Reducing the risk of bias in AI programmes and implementing best practices is therefore essential.

The term "bias" refers to a deviation from the norm, which is sometimes necessary to identify the presence of statistical patterns in the data or language used. This article discusses the three most well-known sources of bias:

1.    Bias in Modelling:

Bias in modelling can be introduced purposefully, for example through smoothing or regularisation parameters intended to reduce or compensate for bias in the data, which is known as algorithmic processing bias. It can also arise during modelling when objective categories are used to make subjective judgments, which is known as algorithmic focus bias.

2.    Training bias:

Algorithms learn to make choices or forecasts from data sets that frequently contain previous decisions. If the data set used for training reflects existing biases, the algorithm will almost certainly learn to make the same biased decisions. Furthermore, if the data do not accurately represent the characteristics of different populations, producing an unequal ground truth, the algorithm's decisions may be biased; a minimal sketch of this effect appears after this list.

3.    Bias in application:

When algorithms are used in circumstances for which they were not designed, they can also produce bias. An algorithm built to predict a specific outcome in one population can produce inaccurate results when applied to another, a form of transfer-context bias. Furthermore, misinterpretation of an algorithm's results can lead to biased actions, known as semantic bias.[5]
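To make the training-bias point concrete, the following minimal Python sketch, using entirely synthetic data and the scikit-learn library, trains a single classifier on a data set in which one group is heavily under-represented and has a shifted decision boundary; the model ends up with a markedly higher error rate for that group. The group sizes, features and boundary offsets are invented purely for illustration and do not describe any real facial recognition system.

# Illustrative sketch only: synthetic data, not a real FRT system.
# Shows how a group that is under-represented in training data can end
# up with a higher error rate, i.e. training bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, boundary_offset):
    # Toy two-feature data; the true decision boundary differs per group.
    X = rng.normal(0.0, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > boundary_offset).astype(int)
    return X, y

# The majority group dominates the training set; the minority group's
# true boundary is shifted, so a model fitted mostly on the majority
# misclassifies more minority cases.
X_major, y_major = make_group(5000, 0.0)
X_minor, y_minor = make_group(200, 1.0)

model = LogisticRegression().fit(
    np.vstack([X_major, X_minor]),
    np.concatenate([y_major, y_minor]),
)

# Evaluate each group separately on fresh samples of equal size.
for name, offset in [("majority group", 0.0), ("minority group", 1.0)]:
    X_test, y_test = make_group(2000, offset)
    error_rate = 1.0 - model.score(X_test, y_test)
    print(f"{name}: error rate {error_rate:.1%}")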

 

FACIAL RECOGNITION TECHNOLOGY (FRT)

Facial recognition technology (FRT) uses algorithms and artificial intelligence to scan and identify people based on their facial features. While FRT can be useful in some situations, such as law enforcement or security, it also carries a number of risks and concerns. These include:

 

1.    Invasion of privacy:

Invasion of privacy is one of the main concerns with FRT. The technology can be used in public places to identify and track individuals without their knowledge or consent.

2.    Bias and discrimination:

Another problem with FRT is the possibility of bias and discrimination. Many FRT systems have been shown to be less accurate when identifying people of colour, women and other marginalised groups, which can lead to misidentification and discrimination; a simple per-group audit of this kind is sketched after this list.

3.    Security risks:

FRT systems are also vulnerable to attacks and data breaches, which can expose sensitive information and put the security of individuals and organizations at risk.

4.    False positives:

FRT can also produce false positives, leading to innocent people being harassed or targeted. This can have serious consequences, especially in the context of law enforcement.

5.    Lack of regulation:

There is currently a lack of regulation governing FRT, which means there are few safeguards to protect individuals from its potential risks.
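The following hypothetical Python sketch illustrates the kind of per-group audit mentioned in point 2 above: it takes simulated similarity scores for non-matching face pairs in two demographic groups and reports the false match rate each group would suffer at a fixed decision threshold. All scores, group names and the threshold are invented for illustration; a real audit would use a labelled evaluation set from the deployed system.

# Hypothetical audit sketch: all numbers below are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Simulated similarity scores for non-matching ("impostor") face pairs,
# split by demographic group. The higher mean for group B mimics the
# reported tendency of some systems to confuse faces within certain
# groups more often.
impostor_scores = {
    "group A": rng.normal(0.30, 0.10, 10_000),
    "group B": rng.normal(0.42, 0.10, 10_000),
}

THRESHOLD = 0.60  # pairs scoring above this are declared a "match"

for group, scores in impostor_scores.items():
    false_match_rate = float(np.mean(scores >= THRESHOLD))
    print(f"{group}: false match rate at threshold {THRESHOLD}: {false_match_rate:.3%}")

At these assumed score distributions the two groups end up with very different false match rates at the same threshold, which is exactly the kind of disparity that translates into unequal rates of misidentification in practice.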

 

HOW ORGANISATIONS CAN COLLABORATE WITH STAKEHOLDERS TO ADDRESS AI-DRIVEN DISCRIMINATION

Organizations can collaborate with stakeholders to address AI-driven discrimination by implementing the following strategies:

1.    Open discussion and the exchange of best practices:

Encourage open dialogue among stakeholders such as industry associations, academic institutions and regulatory bodies to establish ethical AI standards and guidelines. This can help build a shared understanding of acceptable AI ethics and foster cross-functional collaboration.[6]

2.    Regulatory frameworks and industry standards:

Collaborate with governments and law enforcement agencies to make the use of AI fair, transparent and accountable. Organisations should actively contribute to the development of such legislation by sharing their expertise and insights.

3.    Diverse and inclusive AI development teams:

Build diverse and inclusive AI development teams that can challenge the assumptions and stereotypes embedded in particular data features, algorithms or decision criteria. Diverse teams help uncover biases and drive innovation.[7]

4.    Regular testing and monitoring:

Test AI systems on a regular basis to guard against bias and discrimination in sensitive areas such as financial services and compliance, and monitor their performance to identify and correct biases; see the sketch after this list.[8]

5.    Collaboration with ethics committees or model risk teams:

Establish internal teams dedicated to conducting independent reviews of AI/ML models as they evolve, with the goal of limiting risk to customers and the organisation.[9]
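As a concrete illustration of point 4 above, the short Python sketch below shows one way a periodic bias check might work: it compares per-group error rates from an evaluation run and flags the model for independent review when the disparity exceeds an agreed tolerance. The tolerance, group names and figures are assumptions chosen purely for illustration, not a prescribed standard.

# Minimal monitoring sketch; the tolerance and results are hypothetical.
MAX_DISPARITY = 0.02  # assumed policy: at most 2 percentage points apart

def check_bias(per_group_error_rates):
    """Return warnings for any group whose error rate sits too far above
    the best-performing group's rate."""
    best = min(per_group_error_rates.values())
    warnings = []
    for group, rate in per_group_error_rates.items():
        if rate - best > MAX_DISPARITY:
            warnings.append(
                f"{group}: error rate {rate:.1%} is {rate - best:.1%} above "
                "the best group; flag the model for independent review"
            )
    return warnings

# Example run with made-up evaluation results.
results = {"group A": 0.031, "group B": 0.074, "group C": 0.029}
for message in check_bias(results) or ["No disparity above tolerance."]:
    print(message)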

 

THE DANGERS OF FACIAL RECOGNITION TECHNOLOGY

From unlocking our phones to boarding an airplane, facial recognition technology is increasingly part of our daily lives. While it may seem convenient, its implications go far beyond convenience. The technology carries real dangers, which have piqued the interest of policymakers, privacy advocates and civil rights organisations.

One of the biggest concerns with facial recognition technology is the potential for abuse by law enforcement. A recent BBC article revealed that Clearview, a facial recognition company, has run nearly a million searches for U.S. police, according to its founder. Law enforcement is not the only threat, however: private companies are also collecting massive amounts of facial data, often without the knowledge or permission of the individuals being surveyed.

According to privacy expert Edward Snowden, “Facial recognition is the most complete extortion tool ever invented”. Once a person’s face is entered into a database, it can be used to track their movements, monitor their behaviour and even discriminate against them based on their appearance.[10] The dangers of facial recognition technology are clear, and there is an urgent need for controls. As we embrace new technologies, we must do so with open eyes and a keen understanding of the potential risks.[11]

 

PROS AND CONS OF FACIAL RECOGNITION TECHNOLOGY

PROS

1.    Finding missing people and identifying perpetrators.

Law enforcement agencies apply facial recognition technology to locate missing people and identify criminals by comparing images from camera feeds to those on watch lists.

Missing children have also been found using this technology. It is sometimes used in conjunction with advanced age-progression software to predict what a child might look like now, based on photos taken before they went missing.

2.    Protecting Business against theft.

Facial recognition software has also been used as a preventative measure against shoplifting. Business owners use the software together with security cameras to check suspects against a database of known thieves, and it has been argued that the mere presence of cameras that recognise faces acts as a deterrent to criminals.

3.    Better medical treatment.

One unexpected application of facial recognition technology is the detection of genetic disorders. In some cases, facial recognition software can identify the syndrome caused by a particular genetic variation by examining subtle facial attributes. This approach may be faster and less expensive than traditional genetic testing.

 

CONS

1.    Greater threat to Individual and Societal privacy.

Personal privacy can be compromised by facial recognition systems that capture and analyse faces without consent. Unauthorised use of facial data for surveillance increases individuals’ exposure to having their movements tracked and to inappropriate monitoring.[12]

2.    Provide Opportunities for fraud and other crimes.

Criminals can also use facial recognition technology to commit crimes against innocent victims. To commit identity fraud, they can harvest individuals’ personal information, including images and video captured through facial scans and stored in databases. Beyond fraud, they can use facial recognition technology to harass or stalk victims.

3.    Lack of Regulation:

Because the rapid adoption of facial recognition has outpaced the development of comprehensive regulation, a regulatory gap has emerged that can be exploited for unethical purposes. Clear guidelines and standards are required to ensure responsible use and accountability.


[1] Caroline Adams, Human right in AI facial recognition, ideaforpeace, November 24, 2023, https://ideasforpeace.org/content/human-rights-in-ai-facial-recognition/

[2] Agnė Limantė, Bias in Facial Recognition Technologies Used by Law Enforcement, Nordic Journal of Human Rights, November 24, 2023, https://www.tandfonline.com/doi/full/10.1080/18918131.2023.2277581?src=exp-la

[3] Carolina Caeiro, Regulating facial recognition in Latin America, Chatham House, November 11, 2022.

[4] Zoe Larkin, AI Bias – What Is It and How to Avoid It, November 12, 2023, https://levity.ai/blog/ai-bias-how-to-avoid

[5] Xavier Ferrer, Bias and Discrimination in AI: A Cross-Disciplinary Perspective, IEEE Technology and Society, November 11, 2023, https://technologyandsociety.org/bias-and-discrimination-in-ai-a-cross-disciplinary-perspective/

[6] Corporate English Solutions, Minimising AI bias, British Council, December 1, 2023, https://corporate.britishcouncil.org/insights/minimising-ai-bias-best-practices-organisations

[7] Supra note 6

[8] Jake Silberg and James Manyika, Tackling bias in artificial intelligence, McKinsey & Co., December 1, 2023, https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans

[9] Artificial Intelligence, How do you address the potential bias and discrimination in your AI data and algorithms, December 1, 2023, https://www.linkedin.com/advice/3/how-do-you-address-potential-bias-discrimination?utm_source=share&utm_medium=member_android&utm_campaign=share_via

[10] Gary Bernstein, Eye in the Sky: The Invasive Nature of Facial Recognition Technology, Cloudtweaks, December 10, 2023, https://cloudtweaks.com/2023/04/eye-sky-invasive-nature-facial-recognition/

[11] Kevin Julian, Patients increasingly are embracing technology, and so must the Pharmaceutical industry, Cloudtweaks, December 12, 2023, https://cloudtweaks.com/2020/08/patients-increasingly-embracing-technology/

[12]  David Gargaro, The pros and cons of facial recognition technology, Itpro, December 15, 2023, https://www.itpro.com/security/privacy/356882/the-pros-and-cons-of-facial-recognition-technology






Author: Yogiraj Sadaphal

Law School & Year: BHARATI VIDYAPEETH (DEEMED TO BE UNIVERSITY) NEW LAW COLLEGE, PUNE 4th YEAR

Course: BA LLB
