The murky world of facial recognition technology leaves society on the back foot.
And after 20 years, why is there so little (legal, ethical, social science) scholarship in this field?
NB: Edited after a New Zealand Herald article was removed from three sites within 24 hours of this piece being published.
Facial recognition technologies (FRT) can act as a deterrent to crime. Retail industries broadly view FRT either as loss-prevention technology (to deter or eliminate crime), or as crime-prediction technology - developed to assess and understand human behaviour associated with characteristics such as aggression, stress or anxiety that might make people stand out as a risk.
But what we see in New Zealand is policy lagging behind trials and implementation, and a relative absence (a yawning chasm) of scholarly research across law, ethics and social science. Because there are precious few experts, there is no public voice to draw attention to risk.
It’s remarkable when we consider how expansive this technology is, how quickly it is becoming embedded in corporate and governing systems, and how financially resourced these companies and big government agencies are - while you and I have little power to contest these institutions, should something go wrong.
This asymmetrical power explains why democratic governments should be at the forefront: setting aside funding and creating a safe haven for scientists and researchers to assess, over the long term, what is happening and draw attention to risk - from Foodstuffs, right down into the bowels of New Zealand’s Department of Internal Affairs, across policing and surveillance - so as to prevent abuse of power and promote transparency and accountability.
But that’s not happening. And then there is the ‘guardian of the public interest’ who, because of, let’s just say, portfolio complexities, might be a little distracted.
WHAT IS ‘OUT THERE’?
A 2020 Law Foundation paper, Facial Recognition Technology in New Zealand: Towards a Legal and Ethical Framework, outlined what is in play:
[T]here are also systems with capability for targeted and mass surveillance activities that clearly pose high-risk. As detailed in the first section, systems that are in place include:
• BriefCam – analysis of CCTV footage, including facial images
• NewX – “Searches unstructured data and platforms for faces, guns, and body markings (tattoos)”
• Cellebrite – searches seized cell-phones for data; includes FRT capability
• ABIS (Automated Biometric Information System) – FRT capability
Planned systems include:
• Digital Information Management – ICTSC has indicated it will be running an RFI/RFP to look at systems that will store evidential information as well as CCTV, social media and photographs. It is likely the tenders will list AI, and potentially facial recognition, as part of the requirement.
POLICY & RESEARCH LAG YEARS BEHIND FRT DEPLOYMENT
FRT use commenced in New Zealand in 2000, scaling up to ‘a state-wide system replete with such tools as BriefCam, that scans CCTV footage superfast using 27 identifying factors in addition to facial recognition.’
Law and regulation have lagged; it wasn’t until around 2020 that conversations became a little more meaningful. Then, in November 2023, the Privacy Commissioner announced that the Office was due to consult on new rules for biometrics, stating that it would issue a biometrics code exposure draft in early 2024.
When it comes to New Zealand’s monitoring and research environment, and academic research to understand the risks of biometrics and surveillance, you find that precious little work is being done. A Google Scholar search for ‘Zealand’ and ‘biometric’ turns up little, with the 2020 Law Foundation paper a standout. The only paper I can find published in the last two years is Emma Taumoepeau’s (2022) An Ethical Framework for Facial Recognition Use in New Zealand.
‘Currently, there is a lack of an ethical framework for legal entities to adopt, follow and utilise within New Zealand. This makes it an ethical grey area for the businesses and individuals looking to utilise this technology.’ Taumoepeau (2022).
Right then.
In the last decade, as corporate commitment to the implementation of these technologies has accelerated, and while our Government has integrated biomonitoring technologies into digital identity infrastructure, there has been no commensurate funding dedicated to a principled, independent, cross-sector research effort - one that explores the ways FRT and surveillance technologies not only increase convenience but also threaten privacy and human rights. The routes by which governments are adopting these technologies seem not only secret (undisclosed); I suspect that once these technologies are in place, transparency and accountability mechanisms are stifled.
Frankly, the New Zealand public, the public sector, academia and law are broadly naïve and uninformed about what these technologies mean over the longer term, and what they mean for democracy.
Privacy expert Elizabeth Renieris sheds light on why transparency and debate are essential if tech is to be appropriately stewarded – inside complex, secret (via intellectual property and commercial in confidence) software programmes, there’s no obvious red line that will tell us when these technologies compromise human dignity and human rights.
Governance is increasingly undermined by the complexity and opacity of privately owned and operated technologies deployed in public settings or procured by public entities in the provision of government services.
We know that there is a democratic deficit.
We know that authoritarian regimes adore surveillance.
It’s no secret that China has ‘cornered the surveillance market’ and aggregates (or fuses) data in a ‘police cloud’:
Provincial police cloud-computing centers fuse data from public and private sources, including ID cards, CCTV footage, medical history, supermarket memberships, IP addresses, social media usernames, delivery records, residential addresses, hotel stays, petition records, and biometrics.
FOODSTUFFS SCALING UP FRT - TO GENERAL SURPRISE
In New Zealand, retail crime has been surging as the economy slumps following Reserve Bank interest rate hikes, and the public struggles with the increasing cost of living. A supermarket manager has been privately telling me how, for years, he has struggled with theft - from staff as much as shoppers. This problem has been around for a long time.
In 2018 the New Zealand public was surprised to find out that Foodstuffs had
quietly rolled out facial recognition CCTV technology in some of its North Island stores
and that a security system that "bridges the gap between businesses and the police" was in use in the South Island.
The public was surprised again a week or so ago when, on February 8, 2024, Foodstuffs and Radio New Zealand’s Checkpoint programme announced that a six-month trial was being rolled out:
‘Up to 25 New World and PAK'nSAVE supermarkets in the North Island are rolling out the technology.’
Foodstuffs’ press release stated that:
‘FR works by matching, in real time, the faces of people who enter a store against that store’s record of offenders and accomplices. The FR system analyses facial features and converts them into an alphanumeric computer code. Both the images and the code will be securely stored.
‘No information stored in the FR system will be shared between stores, and no information from the FR systems will be shared with third parties, unless this is required by law, or to run and evaluate the trial.
‘No images of minors under the age of 18, or vulnerable people, will be enrolled into a store’s record of offenders and accomplices within the FR system.’
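Foodstuffs has not published its technical details, but the ‘alphanumeric computer code’ described above is, generically, a serialised and hashed representation of facial-feature measurements. A minimal sketch, assuming a hypothetical system that rounds a feature vector and hashes it (real commercial FR systems are proprietary and will differ):

```python
import hashlib

def embedding_to_code(embedding, precision=3):
    """Reduce a face 'embedding' (a vector of numeric facial-feature
    measurements) to a short alphanumeric code by hashing a rounded,
    serialised form of the vector. Illustrative only."""
    serialised = ",".join(f"{x:.{precision}f}" for x in embedding)
    return hashlib.sha256(serialised.encode()).hexdigest()[:16]

# The same measurements always yield the same code; different faces
# (different vectors) yield different codes.
print(embedding_to_code([0.123, -0.567, 0.901]))
```

A hash like this only supports exact lookups; deployed systems instead compare feature vectors by a graded similarity score, which is why storing the code alongside the image still constitutes retaining biometric data.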
Lawyer Michael Bott (who has likened collection of facial data to ‘an extension of a surveillance net’) stated:
‘Here we are in New Zealand in 2024 embarking on an extension, wholesale as it were, of the sort of snooping and biometric data gathering that we associate with China. … The technology is here and it must be ringfenced before it goes everywhere.’
RNZ reported that:
‘Foodstuffs consulted with the Privacy Commissioner on the plan and an independent evaluator has been appointed. All images of customers will be instantly deleted unless they have committed a crime, been aggressive, violent or threatening towards workers or customers.’
Algorithms are less trustworthy when applied to nonwhite people. Māori technology ethicist Dr Karaitiana Taiuru emphasised this recognised problem:
‘The technology struggles to accurately identify people of colour, and Māori and Pasifika people will be falsely accused at a higher rate.’
Taiuru further noted it was ‘not an if but when’, writing later that:
‘Within the United States, People of Colour suffer the consequences of mistaken identity and bias disproportionately more, because algorithms typically have been less accurate when applied to nonwhite people. As with any new forensic technology, FRT systems are being incorporated into systems and institutions with their own histories of disparities.’
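The arithmetic behind ‘not an if but when’ is straightforward: small per-scan error rates compound across thousands of scans a day. A toy calculation with hypothetical numbers (the rates and shopper counts below are invented for illustration, not measured for any deployed system):

```python
def expected_false_matches(daily_shoppers, false_match_rate):
    """Expected number of innocent shoppers wrongly flagged per day."""
    return daily_shoppers * false_match_rate

# If an algorithm's false-match rate were 0.1% for one demographic group
# but 0.5% for another, a store scanning 2,000 shoppers a day from each
# group would wrongly flag five times as many people in the second group.
group_a = expected_false_matches(2000, 0.001)
group_b = expected_false_matches(2000, 0.005)
print(group_a, group_b)
```

This is why accuracy disparities in the underlying algorithm translate directly into disproportionate false accusations at the checkout.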
All too often, FRT is deployed before the development of clear and transparent principles and rules. As Taiuru notes:
‘There is no legislation to manage FRT and CCTV, but the Privacy Commission have guidelines and have undertaken consultation in this area.’
Does the New Zealand public have any idea of the extent of surveillance, whether by the private sector, the public sector, or through private/public arrangements?
Thaddeus Johnson, lead author of the 2022 paper Facial recognition systems in policing and racial disparities in arrests, stated in a recent NCJA podcast that:
Most Americans probably have an image in a facial recognition database and don’t even know it. That gets to some of the many privacy concerns that people may have about this. Where do we get these images from? Is use of these images permissible? How are these images being used? We’ve been using this for twenty years now. Like most things, research has been outpaced by the actual use. You see this in many other fields.
I presume it is the same for Kiwis.
WOOLWORTHS: NOT JUST FRT - LICENCE PLATES, VIDEO & AUDIO…
A week later, on February 13, Kirsty Wynn’s article was published, first by the New Zealand Herald - ‘Woolworths Everyday Rewards members can have licence plates, video, audio and IP addresses recorded’ - and then republished on the Newstalk and RNZ platforms.
Tucked away under a privacy policy link in the terms and conditions of Everyday Rewards, it states the information gathered from customers includes not only full names, dates of birth and phone numbers but also images or audio recordings, IP address, email and other contact addresses.
Wynn, New Zealand Herald, February 13, 2024.
Within 24 hours, all links to Wynn’s piece - New Zealand Herald, Newstalk and RNZ - were taken down, but thanks to Fraser and the Wayback Machine you can read it here:
The article stated:
The Office of the Privacy Commissioner has voiced concerns and said part of the Privacy Act required agencies to be “open and transparent about what personal information they collect”.
“This may require something more proactive than just putting it deep in their privacy policy,” the Office of the Privacy Commissioner said in a statement.
“Woolworths New Zealand needs to be able to explain that the steps they have taken to make sure people know about the collection of information through their loyalty programmes are ‘reasonable in the circumstances’.”
I scanned my fancy new orange card a week ago, albeit reluctantly, as I have misgivings when new technology is introduced.
I won’t be using it again.
POLICE CONDUCTED AN FRT TRIAL IN 2020 - WHO KNEW?
To the surprise of many, in early 2020 the New Zealand police conducted a trial of Clearview AI, a corporation which by then had built a database of 2.8 billion faces by scraping facial images from the internet.
Police Commissioner Andrew Coster did not know about the trial; neither did Privacy Commissioner John Edwards.
The public of course, were left in the dark.
Soon after, Nessa Lynch and colleagues commenced work, publishing a November 2020 Law Foundation white paper, Facial Recognition Technology in New Zealand: Towards a Legal and Ethical Framework, which placed the
‘… burden firmly on those who want to use FRT, particularly live FRT, to demonstrate not only its utility as a surveillance tool, but also due appreciation of its broader social impact and the factoring of this into any assessment of use.’
The paper drew attention to the lack of regulation and oversight mechanisms, noting that:
‘Public trust is essential for state services and particularly in policing. Our overarching recommendation is for transparency and consultation.’ Lynch et al., 2020.
After the release of the Law Foundation white paper, reporter Phil Pennington interviewed Lynch, who stated:
The infrastructure and the framework for mass surveillance does exist, and all our discussion and our investigation seems to show that the police are using it in a relatively restrained manner at present.
The white paper stated that:
While the benefits that might be offered by FRT surveillance are increasingly observable, its effect on civil liberties is subtler, but certainly pernicious. Given the potential for FRT to be used as a key identity and access management tool in the future, there are pertinent questions around how images are being collected and stored now by the private sector. Where are these images being stored? Who has access to this data? What else might the images be used for?
Without a detailed appraisal of the benefits of state FRT surveillance, and an understanding of the ethical issues raised by its use, any framework for the regulation of this activity cannot hope to engender public confidence that its use is fair and lawful.
The conclusions and recommendations section acknowledged that:
‘New Zealand’s system does not allow the same pathways for an individual to seek recognition and redress for a breach of human rights through the courts. While civil complaints mechanisms are available through the Human Rights Review Tribunal, this is a relatively weak form of protection.’
As Lynch and colleagues stated (while listing potential risk factors at 7.3):
The technology patently has many uses and potential uses which, consequently, create a spectrum of risk in terms of impact on human rights.
I asked Lynch which recommendations had been taken up since then. Lynch replied:
‘The ones that I know have been advanced were those related to policing which myself and Dr Andrew Chen did further work for Police on – these are available on the police website in their publicly available information on technology assurance, e.g. commitment to pause any consideration of live AFRT.
‘And the Privacy Commissioner is working on some codes of practice in relation to biometrics which cover our recs there. Again, available on the Commissioner’s website.
‘Biometric regulation generally seems to be on hold; the Law Commission’s work on DNA has not been implemented.’
We’ve got a long way to go, folks.
CONSIDERATIONS - FRT IN POLICING
It’s evident that the opprobrium following New Zealand Police’s secret trial spurred the senior executives into a frenzy of principled policy production.
Police policy for emergent technology use and testing was developed later in 2020 with the release of a New Technology Framework, which included ten principles to support police decision-making. This was signed off in July 2021.
New Zealand police contracted Dr Nessa Lynch (an Associate Professor at Victoria University of Wellington) and Dr Andrew Chen (a Research Fellow at the University of Auckland) ‘to explore the current and possible future uses of facial recognition technology and what it means for policing in New Zealand communities’.
Lynch and Chen’s 2021 report Facial Recognition Technology: Considerations for use in Policing was published in November 2021. As the report noted, New Zealand has established standards for algorithm usage by government and public sector agencies; and the Government Chief Data Steward and the Privacy Commissioner have jointly issued guidelines Principles for the Safe and Effective Use of Data and Analytics.
OF COURSE POLICY LAGS: WHERE’S THE PUBLIC INTEREST RESEARCH?
Technologies don’t roll out into public use at scale unless there are political and financial institutional interests pushing the tech out. These can be private or public (government) and, often, a mixture of both.
But in New Zealand we lack an expert multi-disciplinary cohort who can dedicate years of research to understanding the overlapping social, technical, legal and ethical issues. The Law Foundation evidently stumped up financing for the 2020 report. But New Zealand doesn’t end up with public experts who can talk publicly about the recommendations, or make controversial statements that might ‘upset’ senior executives but which improve regulation and stewardship.
In this research and expertise deficit, technology is implemented and scaled up - ‘Woo hoo’, say the financial and political interests.
But this is a global problem. Public interest (independent) research funding is negligible, spotty and precarious. As a consequence, the public remain largely ignorant of what occurs behind the scenes - and policy is then under-developed or negligible. A public voice that might talk about the risk of a tech, from the biological, all the way up to human rights, simply doesn’t exist.
It’s all very well for the bankers and economists to say ‘NZ is a decade behind Europe' in investing in digital technologies including artificial intelligence.
But of course, we lag even further behind in any effort to steward these technologies so that they do not compromise human autonomy or place a chilling effect on democratic dissent. That voice is under-resourced and largely silent.
Governance is increasingly undermined by the complexity and opacity of privately owned and operated technologies deployed in public settings or procured by public entities in the provision of government services. – E. Renieris, Beyond Data
I’ve previously discussed the Department of Internal Affairs’ (DIA) increasing power and murky control over digital identity (here and here) and interest in censorship (here).
Who in academia and law is watching the Department of Internal Affairs (DIA), scrutinising what this very powerful agency is doing across its digital infrastructure? To secure a digital identity, people must agree to supply a photograph. Biometric data held by the DIA includes facial images and liveness testing. Are these images shared across government? Who knows. Who is exploring the potential for abuse of power by the DIA? I doubt anyone is.
And now supermarkets are ramping up FRT. Yes, the supermarkets have a theft problem. But is this the right - the ethical - way to go about it?
Governance and stewardship systems haven’t kept up – and the agencies with the budgets to deploy the technology have failed to place their chief executives into the media to highlight the risks and encourage debate.
Obviously, any capacity for a government to steward technology requires dedicated research funding at arm’s length from the political agencies keen on accelerating the development and integration of this new tech into society.
FRT, as an artificial-intelligence-assisted technology, works by pulling different images together - it’s not a one-to-one match (as Professor Thaddeus Johnson explains in this NCJA podcast) but a comparison of a captured image against a number of images. The algorithm puts out a similarity score which grades the extent of similarity of features: the higher the score, the better the match, or the greater the probability that the person is the same.
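The scoring step Johnson describes can be sketched in a few lines. Assuming faces have already been converted to numeric feature vectors (‘embeddings’), one common measure is cosine similarity; the vectors, names and threshold below are invented for illustration, not taken from any real system:

```python
import math

def cosine_similarity(a, b):
    """Similarity score in [-1, 1]; higher means the two face
    embeddings are more alike."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Score a captured ("probe") image against every enrolled image, and flag
# the best candidate only if it clears a tuned threshold.
probe = [0.9, 0.1, 0.4]
watchlist = {"person_1": [0.88, 0.12, 0.41], "person_2": [0.1, 0.9, 0.2]}
scores = {name: cosine_similarity(probe, emb) for name, emb in watchlist.items()}
best = max(scores, key=scores.get)
THRESHOLD = 0.9  # operating point trades false matches against misses
if scores[best] >= THRESHOLD:
    print(f"possible match: {best} (score {scores[best]:.3f})")
```

Where the threshold is set matters: lower it and more innocent people are flagged; raise it and more enrolled offenders go undetected. The score is a probability-like grade, not proof of identity.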
New Zealand’s research is, well, negligible when it comes to how our government and the private sector make facial recognition software ‘work’. How extensively is the New Zealand government using our biometric images? What data is private industry scraping? How is our biometric data being used for public and private purposes?
COVID-19, as an emergency event, paved the way for society to loosen its ethical framing, and in April 2020 World Economic Forum identity Yuval Noah Harari described COVID-19 as a
‘watershed event in terms of enabling greater surveillance of society’.
This removed barriers for government and big tech, enabling a greater integration of biometrics into daily life without commensurate consultation, discussion and ethics oversight. Harari went on to state in an October 2020 panel discussion that COVID-19 was critical as:
‘this is what convinces people to accept, to legitimise, total biometric surveillance.’
Supermarkets globally are taking steps to collect biometric data: from Carrefour collecting fingerprints (to pay for purchases or to add loyalty points), to UK supermarket Costcutter scanning fingers with a technology that captures the pattern of veins under the fingertip. Facial recognition payment systems are increasing in China, even as citizens are concerned about their privacy.
So many ethics concerns. Are facial recognition cameras more likely to be deployed in poorer areas?
OOPS - FORGETTING ABOUT HUMAN RIGHTS
I don’t consider that there has been meaningful and consistent scholarship in New Zealand on how the collection and aggregation of biomonitoring data can potentially threaten human rights and democratic norms.
Lawyer and privacy expert Elizabeth Renieris in her book Beyond Data emphasises that the contemporary focus on data protection and privacy creates a narrow line of sight, which has:
‘induced a kind of 'data blindness', causing us to lose sight of a vast array of human rights and freedoms that are implicated by digital technologies, especially as the technological landscape undergoes rapid evolution and change.’
The question for Renieris, is how to structure legal frameworks that elevate the protection and safety of people, over narrower concerns of data protection and data security. For example, privacy and free expression
‘represent only a small proportion of over 30 fundamental human rights and freedoms and are “insufficient to address the wide array of risks posed by technologies like AI, machine learning, or extended-reality technologies such as augmented and virtual reality”.’
Renieris articulates in Beyond Data how great the power asymmetries are, even as our data is taken and stored:
In particular, generalized facial recognition technologies tied to real-world identities, the increasing pervasiveness of digital identity solutions (including the use of advanced physical and behavioural biometrics), in combination with new IoT [things] and IoB [bodies] networks, threaten the notion of anonymity, even as they evade laws conditioned on the identifiability of individuals. Moreover, the private ownership and control of these digital identity infrastructures introduces commercial incentives and profit motives that conflict with core democratic values such as fairness, transparency and accountability.
Renieris quotes Shoshana Zuboff:
‘Individuals each wrestling with the myriad complexities of their own data protection will be no match for surveillance capitalism’s staggering asymmetries of knowledge and power …[T]he individual alone cannot bear the burden of this fight at the new frontier of power.’
For Renieris:
‘in the complexity of a fully interconnected, cyberphysical world, it is nearly impossible to predict in advance where data will travel, or what certain data points might reveal or enable down the line, particularly as AI, machine learning, and other computational technologies allow for new manipulations of data. As such, our new expanded frame must also transcend other enduring but false dichotomies, such as personal or non-personal data, or sensitive versus non-sensitive data – determinations that cannot be made at the time of the data’s origination. We must close the vast and dangerous divide between technologists and private interests that propagate a view of privacy as a technical, mathematical exercise in approaching anonymity (or at least reducing identifiability), on the one hand, and law and policy experts who understand privacy as a much broader concept necessary to protect the rights and interests of people in practice – a divide on heightened display as companies seek to weaponize privacy-enhancing technologies to further cement their power, influence and profits. (P.115)
Amid neglect, fundamental rights erode and decay:
‘The increasing privatization or usurpation of rights by corporations is a threat to individuals, communities, and society at large. The more these corporate powers and rights expand, the more the institutions we rely on for the realization and protection of our fundamental rights decay.’ (P.146)
Renieris urges that we refrain from focusing on data, stating that it is:
‘easy to see how a vast corpus of human rights law quickly gets whittled down to a few individual rights’
‘When you begin from the perspective of people, metaversal technologies such as XR, emotion and affect recognition technologies, neurotechnologies, digital identity systems and myriad other emerging technologies implicate a much broader array of human rights, including ones traditionally characterized as civil and political rights and economic, social and cultural rights… It then becomes easier to see the potential value, potency and enduring sustainability of a human-rights based approach to technology governance’. (P.153-154)
I believe New Zealand’s science and research funding policy should not be directed by the Ministry of Business, Innovation and Employment.
HOW MANY HATS SHOULD AN ATTORNEY-GENERAL WEAR?
If we think about digital architecture, human rights, and New Zealand as a growing surveillance state without a body of scholars who might contest decisions by the Government…
… let’s spare a thought for how confused Judith Collins might be right now.
What is happening here, with such a pattern of legal and Ministerial powers held by one individual?
Perhaps the New Zealand public would appreciate understanding that our new Attorney-General is also Minister for Defence; Minister for Digitising Government; Minister Responsible for the GCSB; Minister for the Government’s Response to the Royal Commission’s Report into the Terrorist Attack on the Christchurch Mosques; Minister Responsible for the New Zealand Security Intelligence Service; Minister of Science, Innovation and Technology; and Minister for Space.
I go to Joseph for a little insight on the role of the Attorney-General, New Zealand’s senior law officer and a cabinet minister discharging portfolio responsibilities. The senior law officer is the guardian of the public interest and may enforce the law ‘as an end in itself’.
By convention the Attorney must exercise independent and impartial judgement, without exception for party political considerations or policy goals… the Attorney-General formally represents the Crown in the courts whenever rights of a public character come into question. The Attorney-General discharges this role in any of three ways: by bringing actions in the Attorney’s capacity to enforce public rights, by authorising relator proceedings to enforce human rights, or by intervening in private litigation to protect the public interest. The Attorney is named, although it is the Solicitor-General who acts. The Attorney-General is a party to the assertion of public rights even when the mover is a private party. The Attorney is plaintiff and defendant in all civil proceedings by and against the Crown, unless it is appropriate that a government department or an officer sue or be sued in their own name. The Attorney may intervene, upon invitation or with leave of the court, where a private action might affect the prerogatives of the Crown or raise questions of public policy. In important cases affecting the public interest, the Attorney has appeared in person to present argument, although it is a rare occurrence.
Philip Joseph (2021), Joseph on Constitutional and Administrative Law, 5th ed., Thomson Reuters, at 27.8, p. 1336.
Funny, hey?
All three news articles about Woolworths seem to have been removed. An archived version is available here:
https://web.archive.org/web/20240213031646/https://www.rnz.co.nz/news/business/509080/woolworths-can-record-licence-plates-video-audio-of-shoppers-and-link-to-everyday-rewards-card