
FTC: Claims about facial recognition software were deceptive

By Randy Hutchinson

President of the BBB of the Mid-South

Reprinted from The Commercial Appeal

I’ve written earlier columns about enforcement actions the FTC and other regulators have taken against companies that made what the agencies considered deceptive claims about the use of Artificial Intelligence (AI) in their products. The FTC recently reached a settlement with a company that allegedly made false, misleading, or unsubstantiated claims about its AI-powered facial recognition software, which is used in home security systems, smart home touch panels, and other applications.

IntelliVision Technologies claimed that its AI-based facial recognition software had one of the highest accuracy rates on the market and that it performed with zero gender or racial bias across individuals with different genders, ethnicities and skin tones. The software was supposedly trained on millions of images from across the world and had anti-spoofing technology that couldn’t be fooled by photos or video images.

But the FTC said IntelliVision didn’t have evidence to support its claims. In particular, rather than being trained on millions of faces, the technology was trained on images of approximately 100,000 unique individuals; the company then used software to create variants of those images. The FTC said the company also couldn’t support its anti-spoofing claims.

The FTC’s proposed consent order will prohibit IntelliVision from making misrepresentations about:

  • The accuracy or efficacy of its facial recognition technology;
  • The comparative performance of its facial recognition technology with respect to categories of individuals; or
  • The accuracy or efficacy of its facial recognition technology at detecting spoofing.

The company will also have to possess and rely on competent testing before making representations about the effectiveness, accuracy, or lack of bias of its facial recognition technology.

In bringing the action, the Director of the FTC’s Bureau of Consumer Protection said, “Companies shouldn’t be touting bias-free artificial intelligence systems unless they can back those claims up. Those who develop and use AI systems are not exempt from basic deceptive advertising principles.”

This was the second major AI facial recognition case the FTC has brought in the past year. The FTC has advised other companies selling biometric-based, artificial intelligence-driven tools to resist the temptation to give potential customers a long list of attributes and reasons the product is a perfect fit for every business. Their ads should stick to the facts and not go beyond what they can prove.

It offered these additional tips to avoid breaking the law and misleading consumers:

  • Tell the truth. Compliance starts with the truth. When you lie about your product’s attributes or capabilities, you betray your customers’ trust, hurt honest competitors, and invite a call from the FTC. Don’t make claims you can’t back up.
  • Don’t make claims about your product or service without a reasonable basis. If you say your tool is accurate and bias-free, you need proof that’s true at the time you make the claim. Testing after the fact isn’t enough.
  • Review other resources. Check out the FTC’s Policy Statement on Biometric Information and Section 5 of the FTC Act, and the FTC’s AI and Your Business series for more information on how to avoid mistakes businesses make when AI is involved.

The BBB recommends that consumers and businesses not let AI hype cloud their judgment when evaluating a product.