The Context
In June 2025, the Privacy Commissioner cleared Foodstuffs North Island’s facial recognition trial, which scanned 226 million faces across 25 stores. The system generated 1,742 alerts. Six months later, a woman in Rotorua was misidentified and removed from a store, triggering a Human Rights Review Tribunal case. This is not a compliance success story; it’s a live-fire test of governance. The Commissioner’s clearance came with a warning: the technology is only acceptable if risks are “successfully managed.” The new Biometric Processing Privacy Code, effective 3 November 2025, now mandates an assessment of “effectiveness and proportionality.” The trial passed a legal check, but the subsequent failure exposed the operational and reputational reality.
The Risk
Directors are responsible for overseeing the company’s risk management framework, and that responsibility carries personal exposure. A board that greenlights a mass surveillance programme, even one deemed compliant, assumes responsibility for its systemic failures. The Rotorua incident demonstrates a failure of the promised human review process, which operated at a 92.5% accuracy threshold. This creates potential exposure under Section 137 of the Companies Act 1993, the duty to exercise the care, diligence, and skill of a reasonable director. More critically, it may indicate a breach of the Privacy Act 2020 requirement that personal information be used fairly and not in a way that causes significant humiliation. The reputational damage from a single misidentification can outweigh the perceived security benefit of 1,742 alerts. You are not governing a technology trial; you are governing a high-stakes public experiment with consumer trust.
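The trial’s own figures permit a rough sanity check on that exposure. This is a minimal sketch, assuming (purely for illustration) that the 92.5% accuracy threshold implies up to 7.5% of the 1,742 alerts could be misidentifications; the trial’s actual error distribution was not published, so treat these as upper-bound, back-of-envelope numbers:

```python
# Back-of-envelope false-positive exposure from the published trial figures.
# Assumption (not from the source): the 92.5% accuracy threshold is read as
# an upper bound of 7.5% erroneous alerts.
scans = 226_000_000          # faces scanned across 25 stores
alerts = 1_742               # alerts generated by the system
accuracy_threshold = 0.925   # promised human-review accuracy threshold

potential_false_alerts = alerts * (1 - accuracy_threshold)  # upper-bound estimate
alert_rate_per_million = alerts / scans * 1_000_000

print(f"Alerts per million scans: {alert_rate_per_million:.1f}")
print(f"Potential misidentifications (upper bound): ~{potential_false_alerts:.0f}")
```

Even on this crude reading, a triple-digit count of potential misidentifications is the figure a board should weigh against the programme’s claimed benefit, not the headline alert total.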
The Control
Governance must shift from asking “Is this legal?” to “Is this wise?” Before approving any expansion, demand a board-level review of the programme’s inherent design flaws. The data shows a critical gap: teenagers account for six of the ten worst offenders for threatening behaviour, yet the system excludes minors from the watchlist. This is a strategic vulnerability, not a privacy feature. Your oversight must mandate a proportionality assessment that weighs the actual crime reduction against the brand and legal liability of false positives. The control is a board directive that ties any further investment to a publicly available transparency report detailing error rates, redress processes, and the specific, necessary problem being solved.
The Challenge
These are the critical questions you should be raising at the board table:
1. Given the system’s documented exclusion of minors, a key offender demographic, what specific, measurable reduction in violent incidents justifies the privacy intrusion and liability risk of scanning 226 million faces?
2. Our human review process failed at a 92.5% accuracy threshold. What is the board-approved maximum acceptable rate of false identifications, and what compensation and apology protocol is automatically triggered when we exceed it?
3. The new Biometric Code requires a proportionality assessment. Can management demonstrate, with evidence, that this technology is a more effective and less rights-invasive solution than increased security personnel or targeted interventions?