Here are this week’s surveil-links: reading and summarizing the latest news in digital privacy so you don’t have to.
You can easily, and slightly more privately, navigate to each link by browsing to “surveil.link/” followed by the link’s corresponding number. For example, surveil-link #12 can be found at surveil.link/12.
Surveil-link #13: Geo-fence warrants used in Minneapolis during protests following the murder of George Floyd
While not a terribly surprising revelation, TechCrunch reported this past Saturday on a search warrant from Minneapolis police requesting the “anonymized” location data of all devices within a certain radius of an AutoZone parking lot during a 20-minute period on the evening of May 27, 2020. George Floyd, the man whose name is now known across the world, had been murdered by Minneapolis police two days prior, and police say a masked man, commonly referred to as “Umbrella Man,” smashed the windows of the AutoZone in question with an umbrella. Sergeant Erika Christensen, the officer who submitted the warrant to Google, told the StarTribune in late July that her department’s investigation indicates that this “was the first fire that set off a string of fires and looting throughout the precinct and the rest of the city.”
As noted, the revelation comes as little surprise, seeing that these warrants — commonly referred to as “geo-fence” or “reverse-location” warrants — are controversially common. Even though location data is notoriously difficult to anonymize, the dragnet approach remains a staple for law enforcement all over the U.S. as they try to identify suspects of a crime. Many argue it constitutes a violation of innocent bystanders’ Fourth Amendment rights, since their data is swept up by the warrant even though they did nothing wrong.
TechCrunch learned of the search warrant from a resident in the area who was filming the protests. He only became aware of it because Google alerted him that his data had been requested by law enforcement. As always, be careful what apps you share your location with.
In 2021, after all we’ve been through, it is a huge milestone to see light at the end of the infectious tunnel by getting vaccinated for COVID-19. In the era of social media, people naturally want to mark the occasion with a selfie of the document proving it. But that’s probably not a good idea. Both the U.S. Better Business Bureau and the Federal Trade Commission have warned against doing this, saying the information could be used to target and subsequently scam those who do.
Experts interviewed by the New York Times also pointed out that eBay has already had to remove listings selling fraudulent copies of the U.K.’s equivalent document. It is advised to post a picture of the “I got vaccinated” sticker instead.
Covering a story originally published behind a hefty pay-wall by The Information, The Verge reported that Amazon plans to constantly surveil its drivers while they deliver packages in Amazon vehicles. The technology being used, detailed in an unlisted Vimeo video, is developed by Netradyne, a Southern California artificial intelligence company that claims it can detect good and bad driving habits. Should the driver engage in driving behavior the tech determines is poor, Amazon — and, only in some instances, the driver — will be alerted.
Amazon claims the footage is only uploaded when such behavior is detected, though the driver can also manually upload video when needed. The camera has a 270-degree view, including side views of the vehicle pointing towards the houses the driver is delivering to, a view of the driver, and a view of where the vehicle is going.
In a longform article titled “There Are Spying Eyes Everywhere—and Now They Share a Brain,” Wired details how data from modern surveillance technologies is being correlated by law enforcement across the world. The idea, originally known as Insight, was cooked up in 2010 by Department of Defense engineers led by Ben Cutler. In an interview for the story, he told Wired, “I would not undertake something like Insight in a civilian context.” Sadly, exactly that has happened, with similar tools being used in what are known as fusion centers in large cities across the U.S. and beyond. Cutler has since moved on and now works at Microsoft.
To paint a picture of the technology, here is an excerpt directly from the article:
“The screen displayed a map of the East Side of Chicago. Around the edges were thumbnail-size video streams from neighborhood CCTV cameras. In one feed, a woman appeared to be unloading luggage from a car to the sidewalk. An alert popped up above her head: ‘ILLEGAL PARKING.’ The map itself was scattered with color-coded icons—a house on fire, a gun, a pair of wrestling stick figures—each of which, Gaccione explained, corresponded to an unfolding emergency.”
The author of the article describes observing such centers in both Chicago and New York, but also relates stories from those who have seen the technology used against ethnic minorities in countries such as Egypt, Turkey, and China.
TechCrunch reported last Friday that a transparency lawsuit was heard in EU court concerning a government-funded project known as iBorderCtrl. The project tests artificial intelligence algorithms that attempt to detect lies based on a subject’s facial expressions. The suit is “seeking the release of documents on the ethical evaluation, legal admissibility, marketing and results of the project.”
In 2019, reporters from The Intercept tested iBorderCtrl and found that it marked about 25% of their answers to the given questions as lies even though every answer was entirely truthful. TechCrunch also points out that the assumption that deceit can be detected solely from facial expressions seems rooted more in science fiction than in actual science.
Surveil-link #18: Times reporters track Capitol insurrectionists to their homes using commercially available location data
In a piece entitled “They Stormed the Capitol. Their Apps Tracked Them.”, New York Times reporters used commercially purchased location data to track insurrectionists involved in the attack on the U.S. Capitol on January 6, 2021 to their homes in states as far away as Kentucky, New Mexico, Florida, and South Carolina.
The same reporters wrote another piece in December 2019 entitled “Twelve Million Phones, One Dataset, Zero Privacy” in which they demonstrated the ease of tracking individuals in supposedly anonymous data sets. They noted that the dataset used in the more recent article made it even easier to definitively track people. In both pieces — found in the paper’s opinion section — they argue for regulation around the collection, sale, and sharing of location data obtained via smartphone applications.
In its short three weeks of existence, Surveillance Today has shared stories concerning this practice every week.
Daniel Therrien, the Privacy Commissioner of Canada, did not mince words last week when he said, “What Clearview does is mass surveillance, and it is illegal.” He was referring to Clearview AI, the controversial facial recognition startup that scraped billions of publicly available photos from the internet — including photos of Canadian citizens — to store in its database.
The New York Times reports that “privacy laws in Canada require getting people’s consent to use their personal data,” thus giving the commissioner reason to investigate. Clearview argues consent was not required because of an exception in Canadian law for publicly available data. However, the commissioner disagreed. In his report he argues that “information collected from public websites, such as social media or professional profiles, and then used for an unrelated purpose, does not fall under the ‘publicly available’ exception.”
While the commissioner’s orders don’t hold any legal weight, Clearview has been asked to cease providing their current services in Canada, stop scraping images of Canadian citizens, and delete all images of Canadian citizens already in their database. Clearview intends to challenge the findings in court.
Last week, in surveil-link #10, we talked about the ongoing drama between Facebook and Apple, largely stemming from Apple’s upcoming feature that will ask users whether or not to allow apps to collect data on them from their phone. Other companies, including Snapchat and Google, recently weighed in, publicly voicing their frustrations and concerns.
What do you think? Is Apple doing the right thing? Let us know your thoughts in the comments!
Note: Surveillance Today originally reported that the prompt had already been rolled out. That is not the case and we regret the error.
Surveil-link #21: San Francisco Supervisors vote to require approval of private surveillance technologies in business districts
After San Francisco’s Union Square Business Improvement District shared surveillance camera footage in real time with the San Francisco Police Department during Black Lives Matter protests this summer, the San Francisco Board of Supervisors voted unanimously to require city approval of all new surveillance technologies adopted in the city’s various business districts. The resolution, according to the EFF, “is non-binding and it will be up to City agencies to determine whether and how to carry out the request.”