Thought Crimes? Facial Recognition Technology Is Invading The U.S.; San Fran Passes Historic Privacy Bill; Man Arrested In UK For Hiding Face
MAY 18, 2019
By Aaron Kesel
You can’t run and you can’t hide; facial recognition technology is advancing at a quickening pace, becoming more widespread and accurate, and we are nearing the point of no return.
A report from Georgetown researchers states that agencies in Chicago and Detroit have bought real-time facial recognition systems. Meanwhile, a historic privacy bill passed in San Francisco. Elsewhere, in the UK, an unidentified man who hid his face from facial recognition cameras was stopped by police and handed an outrageous £90 disorderly behavior fine.
According to the Georgetown researchers, software from a South Carolina company, DataWorks Plus, called FaceWatch Plus, “provides continuous screening and monitoring of live video streams” and is being used by Chicago and Detroit police departments.
Both police departments claim they have not used the systems. However, Chicago previously introduced an Orwellian program in the Chicago Police Department that targeted innocent citizens based on pre-crime. Similar to dystopian films like Minority Report, a complex computer algorithm had been quietly tested for three years prior to 2016. The program was set up to track and catalog every citizen in the city, and to use private data about each person to determine whether or not they could be a potential criminal, as Free Thought Project previously reported.
Wired further notes that:
Chicago’s adoption of FaceWatch Plus goes back to at least 2016, the report says. According to a description of the program—found in DataWorks Plus’ pitch to Detroit—the “project objective” involved tapping into Chicago’s 20,000 street and transit cameras. Chicago police told the researchers the system was never turned on. (The department did not respond to additional questions from WIRED.) Illinois is one of only three states with biometric-identity laws that require consent from people before companies collect biometric markers, like fingerprints and face data. But public agencies are exempted.
In addition, government agents in Chicago previously visited more than 1,300 innocent people who had high numbers on its pre-crime list, to inform them that they were now regarded as “potential criminals.” According to the New York Times, Police Superintendent Eddie Johnson said at the time that officials were stepping up those visits, with at least 1,000 more people to be contacted.
“We are targeting the correct individuals. We just need our judicial partners and our state legislators to hold these people accountable,” Johnson insisted.
That’s a horrifying quote that essentially says: “Trust us, we are the authority; we know what’s good for you; hear no evil, see no evil; see something, say something; we have your best interests in mind; we promise we just want to protect you.” Given that record, the claim that Chicago has never switched on this new system is hard to believe.
The scary part is that Chicago and Detroit aren’t the only cities starting to use facial recognition technology; police in Orlando and New York City are also testing similar technology in pilot projects.
Blind trust of government or any authority figure is never a good thing, despite what the mainstream media or education system preaches. The ACLU of Illinois pointed out that innocent people are being flagged based on criteria that have never been publicly established by an oversight committee; in other words, police are making up their own rules. Karen Sheley, director of the Police Practices Project of the American Civil Liberties Union of Illinois, highlighted exactly why there should be concern about law enforcement running both the previous program and the current one without public oversight:
We’re concerned about this. There’s a database of citizens built on unknown factors, and there’s no way for people to challenge being on the list. How do you get on the list in the first place? We think it’s dangerous to single out somebody based on secret police information.
Elsewhere in the U.S., San Francisco passed a historic privacy bill when its Board of Supervisors voted 8-1 to become the first major city in the United States to ban government use of face surveillance technology in all departments, EFF reported.
“It is encouraging to see San Francisco take this proactive step in anticipating the surveillance problems on the horizon and heading them off in advance. This is far easier than trying to put the proverbial genie back in the bottle after it causes harm,” Nathan Sheard, EFF’s Grassroots Advocacy Organizer, wrote.
On the other side of the Atlantic, UK police recently made several arrests and issued a £90 disorderly behavior fine to a man who tried to cover his face from the controversial facial recognition cameras being deployed there, which have a reported failure rate of 96%, as Activist Post and the Independent previously reported.
Police also arrested three other people during the day thanks to the technology, according to BBC Click.
After being stopped the unknown man who hid his face asked an officer: “How would you like it if you walked down the street and someone grabbed your shoulder? You wouldn’t like it, would you?”
“Calm yourself down or you’re going in handcuffs. It’s up to you. Wind your neck in,” the officer told the man. The man replied: “You wind your neck in.”
After being fined, the man spoke to a reporter: “The chap told me down the road – he said they’ve got facial recognition. So I walked past like that (covering my face).”
“It’s a cold day as well. As soon as I’ve done that, the police officer’s asked me to come to him. So I’ve got me back up. I said to him ‘f*** off’, basically,” the man told the reporter.
“I said ‘I don’t want me face shown on anything. If I want to cover me face, I’ll cover me face, it’s not for them to tell me not to cover me face,’” the man concluded to the reporter.
In eight trials in London between 2016 and 2018, the technology produced “false positives,” wrongly identifying individuals as crime suspects as they passed through areas covered by facial recognition cameras.
Big Brother Watch, the watchdog organization that received the data through a freedom of information request, demanded police drop using the technology. Big Brother Watch further warned of the Orwellian consequences of using it, arguing that it “breaches fundamental human rights protecting privacy and freedom of expression.”
“This is a turning point for civil liberties in the UK. If police push ahead with facial recognition surveillance, members of the public could be tracked across Britain’s colossal CCTV networks,” Director Silkie Carlo said. “For a nation that opposed ID cards and rejected the national DNA database, the notion of live facial recognition turning citizens into walking ID cards is chilling.”
Further, according to Big Brother Watch, police recorded a 100% misidentification rate in two separate deployments at the Westfield shopping centre in Stratford, London. It is a horrifying thought that this technology is now being used to harass citizens as they shop.
Of course, we also know that facial recognition technology is being tested in UK supermarkets for the first time to verify the age of customers buying alcohol and cigarettes at special self-checkout machines, as Activist Post reported.
The company responsible for the supermarket devices, according to the Telegraph, is the U.S. firm NCR, which makes self-checkout machines for Asda, Tesco, and other UK supermarkets.
NCR has announced the integration of facial recognition technology from Yoti with its “FastLane” tills within supermarkets.
Fastlanes are currently used by UK retailers Tesco, Sainsbury’s, Marks & Spencer, Boots, and WHSmith. While not all these retailers will be a part of the pilot test program, it’s important to note how widespread this could be.
Meanwhile, hundreds of retail stores and soon thousands are investigating using another biometric facial recognition software called FaceFirst to build a database of shoplifters as a means of anti-theft, Activist Post reported.
FaceFirst is designed to scan faces as far as 50 to 100 feet away. As customers walk through a store entrance, the video camera captures repetitious images of each shopper and chooses the clearest one to store.
The software then analyzes that image and compares it to a database of “bad customers” that the retailer has compiled; if there is a match, the software alerts store employees that a “high risk” customer has walked through the door.
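The pipeline described above — capture repeated frames, keep the clearest one, compare it against a watchlist, and alert on a match — can be sketched in a few lines. This is purely an illustrative mock-up: FaceFirst’s actual algorithms, thresholds, and data formats are proprietary, and every function and field name here is hypothetical.

```python
import math

def sharpness(frame):
    # Hypothetical "clearest frame" score; a real system would measure
    # image focus (e.g., the variance of an edge filter over the pixels).
    return frame["focus"]

def cosine_similarity(a, b):
    # Standard cosine similarity between two face-embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def check_shopper(frames, watchlist, threshold=0.9):
    """Pick the clearest captured frame and compare its face embedding
    against the retailer's compiled database; return an alert on a match."""
    best = max(frames, key=sharpness)
    for entry in watchlist:
        if cosine_similarity(best["embedding"], entry["embedding"]) >= threshold:
            return {"alert": "high risk", "match": entry["label"]}
    return None

# Toy usage with made-up 3-dimensional "embeddings"
frames = [
    {"focus": 0.4, "embedding": [0.1, 0.9, 0.2]},
    {"focus": 0.8, "embedding": [0.2, 0.9, 0.1]},
]
watchlist = [{"label": "flagged-001", "embedding": [0.2, 0.9, 0.1]}]
print(check_shopper(frames, watchlist))
```

The unsettling design point is how little store-side machinery this takes: once embeddings exist, flagging a shopper is a single similarity comparison against a privately compiled list no one can appeal.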
The future of shopping seems to have biometric scanners written all over it, a worrying prospect for privacy advocates.
Several privacy advocate groups, attorneys, and even recently Microsoft, which also markets its own facial recognition system, have all raised concerns over the technology, pointing to issues of consent, racial profiling, and the potential to use images gathered through facial recognition cameras as evidence of criminal guilt by law enforcement.
“We don’t want to live in a world where government bureaucrats can enter in your name into a database and get a record of where you’ve been and what your financial, political, sexual, and medical associations and activities are,” Jay Stanley, an attorney with ACLU, told BuzzFeed News about the use of facial recognition cameras in retail stores. “And we don’t want a world in which people are being stopped and hassled by authorities because they bear resemblance to some scary character.”
The technology currently has a lot of problems; Activist Post recently reported how Amazon’s own facial “Rekognition” software erroneously and hilariously identified 28 members of Congress as people who had been arrested for crimes, according to the ACLU. Maybe the technology was trying to tell us something? But then it should have flagged more than just the African American members of Congress as criminals, unless the technology has a racial bias; or perhaps this is just more evidence of how inaccurate it is.
Activist Post previously reported on another test of facial recognition technology in Britain that resulted in 35 false matches and one erroneous arrest, demonstrating that the technology is far from foolproof.
Many have likely laughed at the seemingly paranoid stance this writer has taken on facial recognition technology; however, vindication came swiftly when Amazon announced it wanted to create a “Crime News Network” to monitor neighborhoods with its Ring doorbell facial recognition cameras. At this point, they are literally recreating George Orwell’s 1984 or reinventing the Stasi.
Amazon employees who are against the company selling facial recognition technology to the government have protested the company’s decision. Over 20 groups of shareholders have sent several letters to Amazon CEO Jeff Bezos urging him to stop selling the company’s face recognition software to law enforcement.
“We are concerned the technology would be used to unfairly and disproportionately target and surveil people of color, immigrants, and civil society organizations,” the shareholders, which reportedly include Social Equity Group and Northwest Coalition for Responsible Investment, wrote. “We are concerned sales may be expanded to foreign governments, including authoritarian regimes.”
Another letter was just sent in January 2019, organized by Open Mic, a nonprofit organization focused on corporate accountability, and was filed by the Sisters of St. Joseph of Brentwood; both letters warned the technology poses “potential civil and human rights risks.”
Numerous civil rights organizations have also co-signed a letter demanding Amazon stop assisting government surveillance, and several members of Congress have expressed concerns about the partnerships.
Several lawmakers in the U.S. have even chimed in to voice concerns about Amazon’s facial recognition software, expressing worry that it could be misused, The Hill reported.
The American Civil Liberties Union (ACLU) obtained hundreds of pages of documents showing Amazon offering the software to law enforcement agencies across the country.
In a 2018 report, the ACLU called Amazon’s facial recognition project a “threat to civil liberties.”
Amazon responded by essentially shrugging off the employees’ and shareholders’ concerns; the head of the company’s public sector cloud computing business stated that her team is “unwaveringly” committed to the U.S. government.
“We are unwaveringly in support of our law enforcement, defense and intelligence community,” Teresa Carlson, vice president of the worldwide public sector for Amazon Web Services, said July 20th at the Aspen Security Forum in Colorado, FedScoop reported.
Amazon has since released an update that, according to the company, fixes the lighting problems that caused inaccuracies in its system.
This also follows a report by the U.S. Government Accountability Office (GAO) that the facial recognition technology the FBI is using for the Next Generation Identification-Interstate Photo System failed privacy and accuracy tests, as Activist Post reported.
In 2018 it was reported that the FBI and other law enforcement agencies were using this same Amazon Rekognition facial technology to sift through surveillance data.
Defense One reports that “AI-Enabled Cameras That Detect Crime Before it Occurs Will Soon Invade the Physical World” are in the works and on display at ISC West, a recent security technology conference in Las Vegas.
Activist Post has previously reported in its own way that the rise of facial recognition technology is inevitable and, as a result, the death of one’s privacy is sure to come with it.
Shanghai-based YITU Technology has pushed the facial recognition industry forward by being able to identify a person within a matter of seconds from a database, even if only a partial face is visible, CNBC reported.
The evolution of facial recognition technology is further documented by researchers at the University of Bradford, who found that “facial recognition technology works even when only half a face is visible,” according to EurekAlert. This upgraded technology hasn’t been tested by police to this writer’s knowledge; let’s hope it never is, for if it is, civil liberties and privacy will cease to exist.
Elsewhere in the world, facial recognition and other biometrics are starting to emerge all over. In Malta, Prime Minister Joseph Muscat recently confirmed plans to add facial recognition to the CCTV surveillance cameras in certain zones around the country.
“The police are doing a good job but there’s a lot of work that still needs to be done to step up enforcement,” Muscat said in an interview on ONE Radio today. “We are looking into safe city concepts to prevent antisocial behaviour, whereby CCTV systems with technology that can identify law-breakers can do away with the need to have police stationed 24/7 in certain areas.”
Meanwhile, China is planning to merge its 170+ million security cameras with artificial intelligence and facial recognition technology to create a mega-surveillance state. This compounds with China’s “social credit system” that ranks citizens based on their behavior, and rewards and punishes depending on those scores.
Oh, and Russia isn’t missing out on the action either; it was recently reported that Moscow is also getting in on enslaving its citizens with a draconian web of surveillance, and is deciding which company should receive a contract for facial recognition technology, according to Defense One.
Among the top contenders for the job is NtechLab, an AI startup whose FindFace face recognition technology won IARPA’s Face Recognition Prize Challenge in 2017. The following year, it was deployed for the World Cup, where it reportedly uncovered property theft and prevented other crimes. In a recent interview with the Russian daily RIA Novosti, CEO Artem Kuharenko said FindFace operates as part of pilot surveillance programs in various Russian cities; in the Tatarstan region alone, he said, nearly 2,000 crimes were solved last year with the help of video surveillance.
Another contender is IVA Cognitive, which develops the IVACV video analytics system. In the RIA Novosti interview, IVA’s CEO Alexey Tsessarsky speculated that “perhaps the city will choose one company or organize something like a consortium of several companies. The officials will divide the cameras between companies to see how each copes with the task. Thus, the competition will continue, and the technology will continue to develop.”
Activist Post reported two years ago that Russia was looking to integrate emotional recognition into its web of CCTV cameras around Moscow as well.
Consenting to be identified by the government whenever and wherever we go is approval for the government to decide whether, when, and where we are allowed to travel, as in China. Put bluntly: it is very dangerous.
The scary part is that intelligence agencies would be able to use their surveillance dragnet, interlinked with CCTV cameras and with companies like Facebook that utilize the technology, to track someone’s location in real time.
For more on facial recognition technology and what’s to come for our future, see this writer’s previous article “The Rise Of Facial Recognition Technology Is Now Inevitable.”
If we’re not careful, what’s happening in the UK could be coming to the States. It’s the beginning of a 21st-century Stasi state that will make George Orwell’s 1984 and Minority Report look like paradise compared to our not-too-distant potential nightmarish future. There is no easy way to say this: wake the f*ck up and realize we are on the cusp of having dossiers on every man, woman, and child on this planet through the combination of A.I. and facial recognition technology. Rise up, get off the couch, stand the f*ck up, make noise, or watch as future generations live enslaved by the privacy invasion of facial recognition technology. If we consent, we normalize this type of technology, telling the military-industrial complex and police state that it’s okay to force this tech down our throats and take more of our rights. If we confront them, as in San Francisco, we show that we care about our rights and won’t have them taken away without a fight.
Aaron Kesel writes for Activist Post