Facial recognition and rights of the accused

10 mins read

Devina Malaviya

Trisha Chandran

The increasing reliance on technology in criminal justice systems across the world is believed to replace manual methods of investigation with ‘speedier’ and more ‘accurate’ models. However, it is essential to recognise that any technology comes with inherent limitations, and consequently the legal system requires a framework which enables an inquiry into these limitations. The use of facial recognition technology in the criminal justice system, including for the identification of suspects in criminal cases, has been on the rise in India, and the technology is already being used by the police in various states for the identification of criminals. Facial recognition technology uses computer algorithms that create a ‘face-map’ on the basis of distinctive details of the face, which is then compared to pre-existing photographs of individuals in a database. The introduction of the technology raises several concerns, particularly in the areas of constitutional and criminal law. The current use of facial recognition technology by various investigative agencies, in the absence of any law or regulatory framework, poses serious constitutional concerns relating to due process. The widespread use of facial recognition as a method of large-scale surveillance threatens the fundamental right to privacy and the civil liberties of individuals, including their fundamental rights to speech and assembly, particularly in the context of protests, without fear of being identified and targeted by governmental agencies. A further concern is the use of such technology in the absence of validation studies demonstrating the accuracy and reliability of the software when tested on different population sets within India. The focus of this article is to outline the inadequacy of existing criminal law in enabling the accused to meaningfully challenge the reliance on facial recognition for the identification of accused persons in criminal investigations and trials.

In 2020, the National Crime Records Bureau issued a revised Request for Proposal for tender bids to create a National Automated Facial Recognition System (AFRS). The AFRS is proposed to be used for the identification of criminals, missing children/persons, unidentified dead bodies and unknown traced children/persons. The system, which is believed to be a “great investigation enhancer”, will assist in the identification and verification of digital images, photographs, digital sketches and video sources, which will be compared with information in existing criminal databases. The use of this technology in the criminal justice system raises concerns, as the accuracy of facial recognition has been doubted by several studies across jurisdictions. Consequently, this impacts the rights of the accused, especially when the Indian legal system is ill-equipped to examine the validity, limitations and errors of forensic disciplines and technologies.

Facial Recognition System: ‘Perfect’ technology? 

In India, the use of facial recognition has been perceived by investigators as a ‘quick’ and ‘reliable’ method for the identification of criminals. The faith reposed by the authorities in facial recognition is a manifestation of their belief in the infallibility of technology in general. Facial recognition systems compare features from an image with previously stored images in a database, and then show the most similar candidates. A probe photograph may either be compared in its original form, or examiners may edit it before feeding it into the system for comparison. After comparison with the database photos, the facial recognition software usually shows numerous results, also known as a candidate list, with an accompanying similarity score for each photo. The onus finally lies on the examiner to review the candidate list and use their discretion to identify the person in the probe photo. Therefore, the reliability of this technique depends not only on the accuracy of the facial recognition software but also on the protocols governing the examiner while feeding in the probe photo and analysing the results.
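The matching pipeline described above, extracting a ‘face-map’, scoring it against every database entry, and returning a ranked candidate list, can be sketched in a few lines. This is a toy illustration only: the `candidate_list` helper, the four-dimensional vectors and the record names are invented for this sketch, and real systems use learned embeddings with hundreds of dimensions.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def candidate_list(probe, database, top_k=3):
    """Rank database 'face-maps' by similarity to the probe photo's map.

    Returns (name, score) pairs, best first. As in the systems described
    above, the final identification is left to a human examiner.
    """
    scores = [(name, cosine(probe, vec)) for name, vec in database.items()]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]

# Invented four-dimensional face-maps; real systems use learned
# embeddings with hundreds of dimensions.
db = {
    "record_a": [0.90, 0.10, 0.30, 0.50],
    "record_b": [0.20, 0.80, 0.40, 0.10],
    "record_c": [0.88, 0.12, 0.28, 0.52],
}
probe = [0.90, 0.10, 0.30, 0.50]
for name, score in candidate_list(probe, db):
    print(f"{name}: {score:.3f}")
```

Note that the software only ranks candidates; the subjective step, choosing one face from the ranked list, remains with the examiner, which is why examiner protocols matter as much as algorithmic accuracy.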

Studies have shown that facial recognition systems are prone to error. These systems are not 100 percent accurate and can report significant error rates when the photographs being compared differ in lighting, shadows, backgrounds, poses, or expressions. These factors are relevant because photographs used in the investigation of crimes are not always captured in ideal conditions and could be influenced by the pose, lighting, or expression of the person in question. The problem of error rates is even more acute when the algorithm is used on photographs of people of colour. In 2012, a study examined the performance of facial recognition algorithms across three demographics: race, gender and age. The results showed lower recognition accuracies on females, Blacks, and people in the age group of 18-30 years. The study noted that training face recognition systems on datasets representative of all demographics was essential to reduce the inaccuracies on specific demographics. Even as recently as 2018, a study conducted by the ACLU found that a facial recognition tool called ‘Rekognition’ incorrectly identified 28 members of Congress as people who had been arrested for a crime. Nearly 40 percent of the false matches were of people of colour, even though they comprised only 20 percent of Congress. Given that the working of a facial recognition system is influenced by the datasets used to train it, whether the dataset is representative of the population on whom the system is used ought to be considered while assessing the accuracy of the systems being deployed in India.
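The kind of demographic audit these studies performed can be made concrete with a small sketch. The numbers and the `false_match_rates` helper below are hypothetical, invented purely to show how a validation study might report per-group error rates; they do not come from any real study or deployed system.

```python
from collections import defaultdict

def false_match_rates(trials):
    """Compute the false match rate per demographic group.

    `trials` is an iterable of (group, matched) pairs from non-mated
    comparisons, i.e. the probe and the database photo show different
    people, so every reported match is a false match.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [false_matches, total]
    for group, matched in trials:
        counts[group][1] += 1
        if matched:
            counts[group][0] += 1
    return {group: fm / total for group, (fm, total) in counts.items()}

# Hypothetical audit: 100 non-mated comparisons per group.
trials = (
    [("group_x", True)] * 8 + [("group_x", False)] * 92
    + [("group_y", True)] * 2 + [("group_y", False)] * 98
)
print(false_match_rates(trials))
```

In this invented example, one group is falsely matched four times as often as the other; it is precisely this kind of per-group disparity that validation studies on Indian population sets would need to surface before deployment.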

In addition to the limitations of the system itself, those conducting the exercise of interpreting the data generated by such systems are also prone to error. A study has shown that persons using the technology as a part of their daily work are more prone to error than highly experienced and trained experts in the area of facial recognition. This raises important questions about the level of training and expertise of the police officers and other persons in the investigating agencies who are and will be using facial recognition technologies.

On its use in criminal investigations and trials 

At present, the investigative agencies have displayed very little transparency about the manner in which facial recognition is used, or about the details and limitations of the technology being deployed.

Potential misidentifications by facial recognition at the investigation stage, even if corrected subsequently in light of other evidence, may have already led to the accused spending considerable time incarcerated. In this context, it is important to note that the burden of judicial delays and backlog is borne particularly by those from lower socio-economic backgrounds, who may not be able to afford effective legal aid even to avail bail. Further, under special legislation such as the Unlawful Activities (Prevention) Act, 1967, the investigation period can be extended to 180 days. In such cases, the accused is not entitled to bail if there are reasonable grounds to believe that the accusation against them is prima facie true.

This article will consider the issue of fair disclosure of the use of facial recognition and the related material, as well as the ability of the legal system to effectively evaluate and challenge such material if introduced as evidence.

Disclosure of use of facial recognition and related material

In the United States, facial recognition is primarily relied on as an investigative tool. It is used as a starting point in identifying the accused, and the results generated by the software are not produced as evidence at trial. This poses grave challenges for the defence, who in many cases are likely to remain oblivious to the use of facial recognition by the investigating authority as the primary reason for suspecting the accused. While the prosecution may finally adduce eyewitness evidence during trial, such witnesses may be shown, or made aware of, the results of the facial recognition system, thus biasing their opinion. The issue of excessive reliance on the facial recognition result as an anchor for investigation is especially relevant given the excessive faith placed in technology by investigating agencies in India. Even where an independent test identification process is conducted (i.e. having eyewitnesses identify the accused amongst a line-up of similar-looking individuals), it is argued that eyewitnesses are likely to confirm selections made by facial recognition, despite an erroneous match by the software, since the accused selected from the possible matches provided by the software will look similar to the true perpetrator. Errors in facial recognition software could thus steer an investigation in the wrong direction and, in turn, implicate innocent persons.

Considering the implications that facial recognition could have on the finding of guilt of the accused, the ability of the defence to be made aware of the use of such technology, and to access information and material pertaining to the details of the algorithm and the limitations of the facial recognition system in question, is a matter of significance. The importance of this disclosure must be viewed in light of the serious limitations of this technology mentioned above. The undisclosed use of error-prone facial recognition software by the police has raised due process concerns in the United States. In December 2018, an appellate court in the state of Florida upheld the conviction of Willie Allen Lynch primarily on the basis of the results of a facial recognition software called FACES. Notably, the Court had denied the defence’s request for the disclosure of the faces of other candidates generated by the FACES software, as well as all relevant information pertaining to the software, including the algorithm. Such information is crucial to enable the defence to challenge the results, on the basis of the alternative perpetrators generated by the software, the reasons for and manner in which the accused was chosen from numerous ‘possible matches’, the percentage probability of the match, and the functionality and impartiality of the algorithm of the software utilized. In the pending review before the Supreme Court of Florida, a collection of civil liberties organizations have taken the plea that all relevant information pertaining to the facial recognition software, including the faces of other potential matches, constituted potentially exculpatory material that the prosecution was obligated to disclose as a due process guarantee per the decision in Brady v. Maryland. The decision of the Supreme Court of Florida will have wide-ranging implications for the use of facial recognition technology in criminal trials in the United States.

In India, should the investigating authorities not inform the accused of the use of such technology and produce the related material as evidence, the accused could stand considerably prejudiced. In the absence of an equivalent to a strict Brady right to receive all information collected during the course of investigation, including exculpatory material, the defence will not only need to potentially second-guess the reliance on facial recognition, but also rely on the discretion of the court to summon all relevant material in its connection. Under Section 207, Code of Criminal Procedure, 1973 (“CrPC”), the accused is entitled to receive only those documents that the prosecution proposes to rely on. The Supreme Court in Manu Sharma v. State of NCT (Delhi) has held that while the accused does not have an “indefeasible right” to receive all documents collected by the prosecution, the Court may exercise its powers under Section 91, CrPC to summon those documents that it deems necessary for the accused to mount a fair defence. Similarly, the accused is usually only permitted to seek discharge (early-stage dismissal) on the basis of the material produced by the prosecution as a part of the charge-sheet. However, in Nitya Dharmananda the Court held that the accused could seek the summoning of documents that may have been withheld by the prosecution, upon convincing the court that the material is of “sterling quality”. Nevertheless, in the absence of clear-cut laws on disclosure of the entire material, including exculpatory material, the accused could stand prejudiced if information from facial recognition technology is relied on.

Analysis of evidence from facial recognition systems in Court

In India, presently, there is no clarity as to whether and how the material obtained from facial recognition systems will be presented as evidence in courts. There is, however, news of the police contemplating its use as evidence in court. In the future, if such material is used as evidence, it will be introduced in a legal system which does not provide for a meaningful inquiry into the scientific validity of the technique underlying the expert opinion, or into its reliable application to the facts of a case. The law on Section 45 of the Indian Evidence Act, 1872, which deals with expert evidence, has not provided guidance on examining the scientific validity of forensic techniques.

This is in contrast to the position in the USA, where the law provides for an inquiry into the scientific validity of a technique. In Daubert v. Merrell Dow Pharmaceuticals, the Supreme Court of the United States held that judges must make a preliminary assessment of whether the “reasoning or methodology underlying the testimony is scientifically valid and whether that reasoning or methodology properly can be applied to the facts in issue”. The Court proceeded to create a framework for assessing the validity of any scientific technique. However, given that in the United States materials from facial recognition systems are not introduced at trial, the Daubert framework does not provide assistance to the defence. This has posed challenges for the defence, which is struggling to challenge a technology that “operates in a way that shields it from traditional methods of judicial review.”

In India, there is also uncertainty around who will be carrying out the process of reviewing the candidates shortlisted by facial recognition systems to finally identify the suspect. Given that the NCRB tender focuses on capacity building for police personnel and there are news reports on the use of technology by the police, it appears that this process will be conducted by the investigating authorities. This raises further questions about whether the evidence will even be introduced in court as expert evidence under Section 45, Indian Evidence Act, 1872 and what the standards of scrutiny will be.

Irrespective of the provision under which such evidence is introduced, the weakness of our expert evidence laws points to a larger culture of our judges and lawyers succumbing to their intuitive belief in what they perceive to be ‘scientific’ and hence, infallible. This, compounded by an inadequate legal framework and lack of training in forensic evidence, leads to the fear that results of facial recognition systems will be treated as gospel truth without any meaningful inquiry into the limitations of the technology and its reliable application to the case at hand. 

Conclusion 

Jurisdictions such as the United States, which have a legal framework to inquire effectively into the validity and reliability of forensic evidence, are struggling to adapt to the unique challenges posed by facial recognition technology to their criminal justice systems. The introduction of facial recognition into the Indian criminal justice system, which is ill-equipped to recognise and assess the limitations of technology at every stage, is bound to have far-reaching consequences for the rights of the accused.

Devina Malaviya is a part of the Forensics Team and Trisha Chandran is a part of the Litigation Team at Project 39A.