Your face is data. It’s being scanned, stored, and sold, and until now New York has had almost nothing to say about it. That changes, at least a little, with the passage of the Facial Recognition Technology Study Act. This is the moment when the government finally admits it doesn’t know what it’s dealing with. That admission alone should alarm you.
The act, championed by Senator James Sanders Jr., doesn’t ban facial recognition. It doesn’t fine companies for misuse. It doesn’t give you the right to opt out of being scanned at a concert, a store, or a subway platform. What it does is order a formal study of how facial recognition technology is being used across New York State. That’s it. A study. In 2026. While your face has already been scraped, indexed, and cross-referenced millions of times over.
Let’s be honest about what this is: a political tap on the brakes. Better than nothing. Worse than necessary.
Why A Study Is Both Smart And Infuriating
There’s a reasonable argument for doing this right. Facial recognition isn’t one thing — it’s dozens of overlapping technologies with wildly different accuracy rates, deployment contexts, and legal gray zones. A badly written ban could create loopholes big enough to drive a surveillance van through. A thoughtful study could produce the kind of precise, evidence-based legislation that actually sticks.
But here’s the problem with that optimistic reading: the tech doesn’t wait for your committee hearings. While New York assembles its task force and schedules its meetings and writes its 200-page report, police departments are signing contracts, retailers are installing cameras, and landlords are quietly running facial recognition on everyone who walks into their buildings. The clock doesn’t pause.
We’ve seen this pattern before. Governments study things they don’t want to regulate yet. The study becomes the policy. The report sits on a shelf while the tech becomes so embedded in infrastructure that banning it feels impossible. That’s not a conspiracy theory — that’s just history.
Who Gets Hurt While We Wait
Facial recognition doesn’t fail evenly. Study after study (real ones, not government ones) has shown that these systems misidentify Black women at rates dramatically higher than those for white men. The ACLU ran Amazon’s Rekognition against photos of members of Congress in 2018. It falsely matched 28 of them, roughly five percent of Congress, to criminal mugshots. A disproportionate number of those false matches were people of color.
That’s not a quirk. That’s a structural flaw baked into systems trained on biased data. And those systems are being handed to law enforcement agencies that already have documented issues with racial bias in stops, searches, and arrests. The combination is dangerous, and a study doesn’t defuse it. Real people are getting wrongly arrested right now — there are documented cases in Detroit, New Orleans, and New York City — while legislatures study the problem.
It’s also worth pointing out the corporate angle here. The companies building and selling these tools have armies of lobbyists. They will participate in the study process. They will submit comments, offer expert testimony, and shape the language of any resulting legislation. That’s not paranoia — that’s how regulated industries work. If advocates for civil liberties aren’t equally aggressive in that process, the outcome will reflect the interests of the people who showed up.
The Broader Tech Accountability Crisis
This isn’t an isolated issue. We’re watching governments scramble to catch up with technologies they didn’t anticipate and don’t fully understand. We’ve covered how North Korean hackers are using AI to find cybersecurity blind spots — that’s the same pattern: a powerful technology deployed aggressively while policy frameworks lag years behind. The gap between what tech can do and what law can govern has never been wider.
And it’s not just surveillance. When you look at how aggressively companies are racing to ship new hardware — Apple alone has 15+ new devices set for 2026 — you realize that every one of those products ships with cameras, sensors, and on-device processing that feeds ecosystems we don’t fully regulate. Privacy rights aren’t a facial recognition problem. They’re a technology accountability problem, and facial recognition is just the most visible symptom.
The Hot Take
Facial recognition should be banned in public spaces by default — full stop, no exceptions for “public safety” carve-outs — until the companies deploying it can prove their systems meet accuracy thresholds above 99.9% across all demographic groups. Anything less isn’t a privacy compromise. It’s a civil rights violation we’re agreeing to manage rather than stop. Comfort with surveillance is a privilege that not everyone in this country can afford.
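To make that demand concrete, here’s a minimal sketch, in Python with entirely hypothetical placeholder numbers, of what a per-group accuracy test would look like. The key design point is that the system has to clear the bar for its worst-performing group, not on average, because an aggregate accuracy figure can hide exactly the per-group failures the ACLU test exposed.

```python
# Minimal sketch of a per-group accuracy audit.
# All numbers below are hypothetical placeholders, not real benchmark results.

THRESHOLD = 0.999  # the 99.9% bar proposed above

# (group, correct matches, total attempts) -- illustrative only
results = [
    ("group_a", 99_940, 100_000),
    ("group_b", 99_991, 100_000),
    ("group_c", 99_850, 100_000),
]

def passes_threshold(results, threshold):
    """A system passes only if its *worst* group clears the bar.

    The average across groups is irrelevant here: a ~99.93% aggregate
    can coexist with one group sitting well below the threshold.
    """
    worst = min(correct / total for _, correct, total in results)
    return worst >= threshold

for group, correct, total in results:
    print(f"{group}: {correct / total:.4%}")

print("passes:", passes_threshold(results, THRESHOLD))
# With these placeholder numbers the aggregate is about 99.93%,
# but group_c sits at 99.85%, so the system fails the per-group test.
```

The point of the `min()` is the whole argument: a vendor quoting one headline accuracy number is answering a different, easier question than the one civil rights law should ask.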
New York’s study is a start. But starts mean nothing unless they move fast and produce teeth. The people most at risk from facial recognition misuse are the same people historically failed by the systems this technology plugs into. A report without a deadline, without enforcement mechanisms, and without a presumption toward protection over profit is just political theater with better footnotes. New York can do better. It needs to.