Culture
Twitter promises a fix for its racist image cropping algorithm is coming soon
The updates come after users pointed out egregious examples of image focal point biases last month.
Late last month, multiple users' experiments showing apparent racial bias in Twitter's automatic image cropping algorithm went viral on the social media platform, renewing discussion of the many ways human biases seep into the code we write. Yesterday, Twitter took its first step toward correcting the imbalance, noting that it has done extensive testing of its algorithms and plans to introduce solutions to the issue in the near future.
Twitter's tests show no bias, but it will change anyway— As noted in a post on Twitter's blog, the site's machine learning system relies on what's known as image saliency, which "predicts where people might look first" in a picture. To test it for bias, Twitter measured the algorithm's "pairwise preference between two demographic groups (White-Black, White-Indian, White-Asian and male-female)." The company explained:
In each trial, we combined two faces into the same image, with their order randomized, then computed the saliency map over the combined image. Then, we located the maximum of the saliency map, and recorded which demographic category it landed on.
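To make the setup concrete, here's a minimal sketch of what one such trial could look like in Python. Twitter hasn't published this exact test code, so the `predict_saliency` function (a stand-in for its model, returning a 2D saliency map as a NumPy array) is hypothetical:

```python
import numpy as np

def pairwise_trial(img_a, img_b, predict_saliency, rng):
    """One trial: paste two equal-height face crops side by side in a
    random order, compute the saliency map over the combined image,
    and record which face the saliency maximum lands on.
    Returns True if img_a 'wins' the trial."""
    # Randomize left/right placement so position doesn't confound the result.
    a_on_left = rng.random() < 0.5
    left, right = (img_a, img_b) if a_on_left else (img_b, img_a)

    combined = np.concatenate([left, right], axis=1)  # H x (W1 + W2) image
    saliency = predict_saliency(combined)             # 2D map, same layout

    # Locate the single most salient pixel and note which half it's in.
    _, max_col = np.unravel_index(np.argmax(saliency), saliency.shape)
    max_on_left = max_col < left.shape[1]

    return max_on_left == a_on_left

# Aggregated over many White-Black (etc.) face pairs, a "win rate" far
# from 50% for either group would suggest a systematic preference:
# rng = np.random.default_rng(0)
# win_rate = np.mean([pairwise_trial(a, b, model, rng) for a, b in pairs])
```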
While Twitter says its analyses haven't shown any indication of bias, the company admitted that automatic image cropping "means there is a potential for harm," and that it plans to address the problem soon.
One of the major ways Twitter hopes to tackle this is by decreasing its reliance on machine learning-based image cropping, and giving its users more control over how their uploaded photos are formatted. "We hope that giving people more choices for image cropping and previewing what they’ll look like in the Tweet composer may help reduce the risk of harm," the blog post reads. While oddly-sized images will always pose a problem for saliency-based cropping, Twitter promised to continue experimenting with ways to further reduce the chance of an algorithmic crop that could "take away from the integrity of the photo."
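For context on what's being scaled back: a saliency-driven crop essentially centers a fixed-size window on the predicted focal point. A minimal sketch of that idea, again assuming the hypothetical `predict_saliency` helper rather than Twitter's actual pipeline:

```python
import numpy as np

def saliency_crop(image, predict_saliency, crop_h, crop_w):
    """Crop a fixed-size window centered on the most salient pixel,
    clamped so the window never falls outside the image."""
    saliency = predict_saliency(image)  # 2D map matching image height/width
    max_row, max_col = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Center the window on the focal point, then clamp to the image bounds.
    top = min(max(max_row - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(max_col - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Whatever that window lands on is what shows up in the timeline preview; letting users pick the window themselves sidesteps the model's judgment entirely.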
Strangely responsible moves from Twitter— Although (sorta) admitting fault is a refreshing change of pace for Twitter, without a timeline for these updates or any immediate changes to the saliency model, images will likely continue being questionably framed, at least for the near future. Still, the social media platform's open acknowledgement of machine learning bias is a step in the right direction, and one the many major tech outlets relying on similar systems would do well to follow. It comes on the heels of other uncharacteristically responsible moves from the company, like better flagging of inaccurate Presidential Election information (often passed along by our own President) and the removal of fake, politically-motivated profiles. The social media site is inarguably still a cesspool of hate and misinformation, but hey, at least its owners are starting to kinda-sorta admit it, right?