As a college student and a New Yorker, I depend on public transportation, typically taking a bus and several trains to and from school. On an otherwise mundane day, while waiting for a bus in the Bronx, I noticed two small black cameras mounted on the MTA bus, each pointed away from the direction the bus was heading. Unfamiliar with the devices, I wondered why they were there; they seemed designed to record the vehicles around the bus.
That moment transformed my perspective, as if I had wandered to the wrong side of a one-way mirror, and it has left me increasingly paranoid. I started to notice the abundance of security cameras along every route I travel. What does it mean when the public spaces we pass through become places of surveillance, where our actions could one day be used against us?
The MTA's use of cameras to track parking violations is a recent effort, launched in August 2024. It began with 623 buses equipped with Automated Camera Enforcement (ACE) cameras, a number that has since grown to 1,000. Yet thousands of parking violations have been misreported, with AI largely to blame. Such AI mistakes are common, but the New York State government has doubled down on the technology. Governor Kathy Hochul has also launched a program to equip subway cars with security cameras and use AI to trace “potential trouble or problematic behavior on our subway platforms.” Aside from the obvious problems with using AI to track behavior, the presence of security cameras does not actually incentivize good behavior.
The legitimacy of authority and enforcement can be understood through Thomas Hobbes’ social contract theory, which argues that individuals surrender certain rights in order to cooperate fully as a society. Yet even as trust in our government declines, AI surveillance is expanding into every aspect of our daily lives. The use of AI for surveillance suggests that those in power may not trust the people they govern. And while proponents of AI surveillance claim that it promotes public safety, there is little to no evidence that its use has led to a decrease in crime. A recent study from MIT and Penn State University warns that using large language models for home surveillance can lead to inconsistent outcomes.
Rather than addressing the root causes of crime, our city is turning to AI as a shortcut to public safety, a move that risks ignoring the socioeconomic factors that drive crime in the first place. Before we put all our faith in AI surveillance to keep us safe, we need to understand how this technology can threaten our ability to act freely by imposing immediate, impersonal consequences driven by numbers rather than human judgment.
To promote public safety, policymakers must prioritize long-term investments in social capital, such as affordable housing, education, and accessible mental health services. Relying on AI as a technological quick fix oversimplifies complex social challenges and risks exacerbating existing inequalities. Any implementation of AI in public safety must therefore be accompanied by strong transparency measures to ensure these tools serve the public interest without compromising civil liberties.