Amazon-powered AI cameras used to detect emotions of unwitting UK train passengers

Network Rail did not respond to questions sent by WIRED about the trials, including questions about the current state of AI use, emotion detection, and privacy concerns.

“We take the security of the rail network extremely seriously and use a range of advanced technologies at our stations to protect passengers, our colleagues and the rail infrastructure from crime and other threats,” says a Network Rail spokesperson. “When deploying the technology, we work with the police and security services to ensure we take reasonable precautions and always comply with the relevant legislation regarding the use of surveillance technology.”

It is unclear how widely the emotion detection analysis was deployed, with the documents at times stating that the use case should be “viewed with more caution” and reports from stations stating that “accuracy cannot be verified”. However, Gregory Butler, chief executive of data analytics and computer vision company Purple Transform, which worked with Network Rail on the trials, says the capability was discontinued during the tests and that no images were stored while it was active.

Network Rail documents on the AI trials describe multiple use cases, including the potential for cameras to send automatic alerts to staff when they detect certain behaviour. None of the systems use facial recognition, the controversial technology that aims to match people’s identities to those stored in databases.

“The main advantage is the faster detection of trespass incidents,” says Butler, adding that his company’s SiYtE analytics system is in use at 18 locations, including train stations and along tracks. In the past month, Butler says, the systems have detected five serious cases of trespass at two sites, including a teenager collecting a ball from the tracks and a man “who spent more than five minutes picking up golf balls on a high-speed line.”

At Leeds train station, one of the busiest outside London, 350 CCTV cameras are connected to the SiYtE platform, says Butler. “Analytics are being used to measure the flow of people and identify issues such as platform overcrowding and, of course, trespass – where the technology can filter out track workers via their PPE,” he says. “AI helps the human operators, who cannot continuously monitor all cameras, to quickly assess and resolve safety risks and issues.”

Network Rail documents claim cameras used at one station, Reading, allowed police to speed up investigations into bike thefts by being able to pinpoint the bikes in the footage. “It was determined that while the analytics could not detect the theft with certainty, they could detect the person with the bike,” the files state. They also say that new air quality sensors used in the trials could save staff time on manual checks. One AI use case draws on sensor data to detect “sweating” floors that have become slippery with condensation and to alert staff when they need cleaning.

While the documents detail some elements of the trials, privacy experts say they are concerned about the overall lack of transparency and debate around the use of artificial intelligence in public spaces. In one document designed to assess data protection issues with the systems, Big Brother Watch’s Hurfurt says there appears to be a “dismissive attitude” towards people who may have privacy concerns. One question asks, “Are some people likely to object or find it intrusive?” A staff member writes: “Usually not, but there is no accounting for some people.”

At the same time, similar AI surveillance systems that monitor crowds are increasingly being used around the world. During the Olympics in Paris, France, later this year, AI video surveillance will watch thousands of people and try to pick up crowd surges, weapon use and abandoned objects.

“Systems that don’t identify people are better than those that do, but I’m afraid it’s going to be a slippery slope,” says Carissa Véliz, associate professor of philosophy at the Institute for Ethics in AI at the University of Oxford. Véliz points to similar AI trials on the London Underground, which initially blurred the faces of people who may have been dodging fares but then changed approach, un-blurring the photos and keeping the images for longer than originally planned.

“There’s a very instinctive desire to expand visibility,” says Véliz. “Human beings like to see more, see further. But surveillance leads to control, and control leads to the loss of freedom that threatens liberal democracies.”
