Major facial recognition systems struggle to identify Black and other non-white people


Clearview AI, which helps large companies search their facial image databases, collected hundreds of thousands of photographs of white people and tested each one against 12 commercial facial recognition programs built on artificial intelligence and machine learning algorithms.

The next round of testing ran the same software against images of other races, in this case non-white people. The third round pitted the four main facial recognition technology companies against one another to see which was more accurate, and Clearview AI shared the results of that round with The Economist.

Since the study began in 2016, researchers at both US-based and European companies have been troubled to find that their software was routinely blind to photos of people who were not already in its own databases. Kyle Whitmire, the professor of computer science working on the project, told us that the study's main goal was to make sure the technology worked on images of all people, not just people in particular groups. To do that, the researchers also tested the software's accuracy on people from ethnic groups that did not match the original sample.

To make sure a still photo could be treated as a valid database image, Clearview AI worked with several types of cameras. The photos the researchers examined were taken with an iPhone 8 Plus and several Samsung Galaxy phones, which automatically capture motion data, including facial movement and expressions. To put these data to use, the company runs Facekit on some photo app platforms, which makes the data available to an array of facial recognition algorithms.
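The article does not describe Facekit's interface, so as a rough illustration of the kind of pipeline described here (extracting facial data from phone photos and handing it to recognition algorithms), the following is a minimal sketch using the open-source face_recognition library as a stand-in. The file paths and person IDs are hypothetical.

```python
# Sketch of a photo-to-recognition pipeline: load phone photos, extract
# facial encodings, and compare them against a database of known faces.
# face_recognition is used here only as a stand-in for the tools the
# companies actually use, which the article does not specify.
import face_recognition

def encode_photo(path):
    """Return the facial encodings found in a single still photo."""
    image = face_recognition.load_image_file(path)   # load the image as an array
    return face_recognition.face_encodings(image)    # one 128-d vector per detected face

# Hypothetical database of known encodings keyed by person ID.
known_people = {
    "person_001": encode_photo("photos/person_001.jpg"),
}

def identify(path, tolerance=0.6):
    """Compare every face in a photo against the known database."""
    matches = []
    for encoding in encode_photo(path):
        for person_id, known_encodings in known_people.items():
            if any(face_recognition.compare_faces(known_encodings, encoding, tolerance=tolerance)):
                matches.append(person_id)
    return matches

print(identify("photos/query_from_iphone.jpg"))
```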

The company used large databases to test software from multiple developers and found that RealFace, Crimson Hexagon, Autonomy Repository, DataTrust, the Symphony face database and the AIM Database were the most effective at identifying people who were not already in their own databases. When the software was tested against non-white images, however, iSight Image Recognition, which holds some government contracts, proved the most accurate at identifying non-white faces. The research did not show a significant difference in accuracy between the databases used to test the Microsoft and Facebank programs, but Microsoft's researchers noted in a blog post that its own database is used by police agencies and the military.
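The article does not publish the underlying numbers, but the comparison it describes, checking each program's accuracy separately on white and non-white test images, amounts to computing per-group accuracy. Below is a minimal sketch of that calculation; the program names and result tuples are invented placeholders, not figures from the study.

```python
# Per-group accuracy comparison of the kind described above.
# Each program's results are assumed to be (predicted_id, true_id, group) tuples.
from collections import defaultdict

def accuracy_by_group(predictions):
    """Return {group: fraction of correct identifications} for one program."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for predicted_id, true_id, group in predictions:
        total[group] += 1
        if predicted_id == true_id:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical results for two programs; real figures are not in the article.
results = {
    "ProgramA": [("p1", "p1", "white"), ("p2", "p9", "non-white"), ("p3", "p3", "non-white")],
    "ProgramB": [("p1", "p1", "white"), ("p2", "p2", "non-white"), ("p3", "p3", "non-white")],
}

for program, predictions in results.items():
    print(program, accuracy_by_group(predictions))
```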

The report also notes that the results may underestimate the effectiveness of facial recognition programs. Clearview AI is not the only company to have tested facial recognition programs and raised similar questions. Because some of the databases the companies use to test their software are part of larger law enforcement databases, the researchers argue that such tests of commercial and government facial recognition technologies could underestimate their effectiveness. The team also pointed out that facial recognition software is still at an early stage, and that many companies face ethical concerns about bringing such systems into civilian settings such as transport networks.
