The ACLU of Maryland and other civil liberties groups are concerned by reports that Maryland police are using sophisticated facial recognition technologies to identify people by matching them against millions of images from the state's motor vehicle archives. We share those concerns, even as we recognize the technology's potential to assist in legitimate law enforcement activities. Though the technology is still in its infancy, it is developing rapidly. Even if it doesn't pose an immediate threat to citizens' privacy rights, there's no assurance it won't in the future. In the interests of transparency, the public needs to know where and for what purpose the equipment is being used and what guidelines, if any, are in place to guard against its misuse.
The issue has gained prominence following recent revelations of other mass surveillance programs that in some cases have been operating quietly out of public view for years. What they all have in common is the capacity to scoop up massive amounts of data, including social media posts and, recently in Baltimore, aerial photographs, which police can then use to track the movements and location of people over a wide area. It's that capability that's raised the specter of big data's morphing into the fictional Big Brother of George Orwell's classic 1949 novel "1984" and an all-seeing government constantly spying on its own citizens.
Things haven't reached that point yet, but the prospect is real enough to have prompted civil rights groups to complain in August that the Baltimore Police Department's use of the cell phone tracking technology known as stingray was both illegal and racially discriminatory. That same month city police acknowledged that a private company has been conducting secret surveillance flights over some city neighborhoods on behalf of police, and last month The Sun reported that police in Baltimore and surrounding communities have hired the social media monitoring company Geofeedia to ferret out suspicious or potentially threatening posts on sites like Facebook, Instagram and Twitter. The fact that the police department ignored the aerial surveillance vendor's repeated advice to conduct public outreach before deploying the technology only reinforces the impression that government is deaf to such concerns.
To be sure, the new technologies can provide critical help to law enforcement in solving high-profile crimes and bringing dangerous fugitives to justice. The technology isn't perfect, but it's improving rapidly. Facial recognition software wasn't able to identify the perpetrators of the Boston Marathon bombing in 2013 (at least not more quickly than traditional methods did). Two years later, authorities were able to use it to identify some of the Islamic State militants who attacked Paris, and security officials hope one day to be able to use it to detect known terrorists as they approach airports or other public places.
But every new technology also carries the potential for abuse. Police in Baltimore, for example, have yet to make clear which neighborhoods they have targeted for surveillance, whether the technologies employed there have resulted in arrests or convictions and whether African-Americans are being disproportionately affected by the program. It's one thing to track a person suspected of committing a specific crime but quite another to monitor thousands of innocent people without their knowledge simply because they happen to live in a high-crime neighborhood or attend a public event. We're particularly concerned by reports that police have used technology to capture information about peaceful protesters at Black Lives Matter events. Such surveillance obviously could have a chilling effect on the exercise of First Amendment rights.
We are also concerned about the potential blurring between law enforcement and commercial uses of the data. Officials at Persistent Surveillance, the company behind Baltimore's surveillance plane, say they are exploring opportunities to sell data to private clients — for example, to insurance companies that might have an interest in images that could show who is at fault in auto accidents. Could it go further? Could companies pay to keep tabs on the comings and goings of their competitors? Could the system eventually amass commercially valuable data about the activities of identifiable individuals? Granted, companies like Google and Facebook already know a great deal about us, but only because of our choice to use their products, not as an inevitable result of living in Baltimore.
Maryland is on the cutting edge of a revolution in surveillance technologies. Its database includes more than 7 million images from the Maryland Motor Vehicle Administration and more than 3 million mug shots and other photographs of arrestees, making it one of the largest such archives in the country. And its monitoring programs could soon become even more pervasive if they are eventually linked to imagery from Baltimore's hundreds of CitiWatch cameras, new gunshot detection systems and aerial surveillance. We are just at the beginning of a process that could end with biometric databases that literally keep tabs on everyone. We need to set the ground rules now for how this vast trove of information will be managed and create mechanisms to ensure that government and private officials are held accountable for sticking to them.