Facebook has apologized for an incident in which its AI labeled a video of Black men as “primates,” calling it an “unacceptable error” and saying it was investigating to prevent it from happening again. According to the New York Times, users who watched the video, posted by the UK tabloid Daily Mail on June 27th, received an automated prompt asking if they wanted to “keep seeing videos about Primates.”
Facebook disabled the entire topic recommendation feature as soon as it realized what was happening, a spokesperson told The Verge in an email on Saturday.
“This was clearly an unacceptable error,” the spokesperson said, adding that the company is looking into the cause of the behavior to prevent it from happening again. “As we have said, while we have made improvements to our AI, we know it’s not perfect, and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations.”
The incident is only the latest example of AI tools exhibiting gender or racial bias; facial recognition tools in particular have been shown to misidentify people of color. Google apologized in 2015 after its Photos app labeled photos of Black people as “gorillas.” Last year, Facebook said it was investigating whether its AI-trained algorithms, including those of Instagram, which it owns, were racially biased.
The US Federal Trade Commission warned in April that AI tools that have demonstrated “troubling” racial and gender biases may violate consumer protection laws if used in decision-making for credit, housing, or employment. “Hold yourself accountable — or be ready for the FTC to do it for you,” FTC privacy attorney Elisa Jillson wrote in a post on the agency’s website.