Two years after Facebook settled five lawsuits claiming that its employment, housing, and credit ads illegally discriminate, researchers with the University of Southern California have found that the company still serves job ads unfairly, based on gender.
In a paper titled “Auditing for Discrimination in Algorithms Delivering Job Ads,” scheduled to appear at The Web Conference later this month, Basileal Imana, a doctoral student at USC, Aleksandra Korolova, USC assistant professor of computer science, and John Heidemann, USC research professor of computer science, explore bias in algorithmic job ad delivery at Facebook and LinkedIn.
Korolova, in an email to The Register, explained that since US law allows for ad delivery to be differentiated on the basis of qualifications, she and her colleagues developed a way to test for bias while factoring out lawful qualification-based biasing.
“Even when controlling for job qualifications, Facebook introduces a delivery skew by gender for job ads with balanced targeting,” Korolova said, noting that this advances the argument that “Facebook’s ad delivery algorithms are not merely biased but actually discriminatory.”
“Interestingly, we did not find such an effect when auditing LinkedIn’s algorithms,” she added.
In 2019, Korolova was among a different set of academics who, shortly after Facebook settled the above-mentioned civil rights lawsuits and announced changes to combat discriminatory advertising, found biased behavior in Facebook’s ad delivery attributable to ad budgets and ad content.
Time to check again
This time, Korolova and her colleagues looked at how Facebook’s and LinkedIn’s algorithmic ad platforms skew job ads by delivering them to viewers identified as male and female – where that data is available – in a ratio that differs from the expected gender distribution for the job.
They managed this by comparing the performance of paired ads in each of three job categories – delivery driver, software engineer, and sales associate – where the two jobs in a pair have similar qualification requirements but known differences in gender distribution. They then compared the expected gender ratio against the actual gender ratio among Facebook and LinkedIn ad recipients.
One such ad pair consisted of an ad to be a delivery driver for Domino’s Pizza (98 per cent male) and an ad to be a delivery driver shuttling groceries for Instacart (more than 50 per cent female).
“The de facto gender distribution among drivers of these services is skewed male for Domino’s and skewed female for Instacart,” the paper explains.
“If a platform shows the Instacart ad to relatively more women than a Domino’s ad, we conclude that the platform’s algorithm is discriminatory, since both jobs have similar qualification requirements and thus a gender skew cannot be attributed to differences in qualifications across genders represented in the audience.”
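The paper reports the skews as statistically significant. As a minimal sketch of the kind of comparison described – not the researchers’ actual code or data – the check can be framed as a two-proportion z-test on the male fraction of each ad’s recipients; the viewer counts below are invented for illustration:

```python
from math import sqrt

def delivery_skew_z(male_a, total_a, male_b, total_b):
    """Two-proportion z-test comparing the male fraction of recipients
    of two paired job ads (e.g. the Domino's and Instacart driver ads).

    Under the null hypothesis, the platform delivers both ads to
    audiences with the same gender makeup, so any difference in the
    observed male fractions is just sampling noise.
    """
    p_a = male_a / total_a          # observed male fraction, ad A
    p_b = male_b / total_b          # observed male fraction, ad B
    # Pooled proportion under the null hypothesis of identical delivery
    p = (male_a + male_b) / (total_a + total_b)
    se = sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical numbers, not the paper's data: the Domino's ad reaching
# 700 men out of 1,000 viewers versus 450 out of 1,000 for Instacart.
z = delivery_skew_z(700, 1000, 450, 1000)
# |z| well above ~1.96 rejects equal delivery at the 5 per cent level.
```

With these made-up counts the z-statistic lands around 11, far past the significance threshold; the point is only to illustrate how a delivery skew between paired ads can be tested, not to reproduce the paper’s methodology in detail.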
The researchers found “a statistically significant gender skew on Facebook, and show no gender skew on LinkedIn.”
For software engineering job ads, the researchers chose recruitment pitches for Netflix (35 per cent female) and Nvidia (19 per cent female). They expected that an ad platform whose algorithm learns and incorporates existing differences in employee demographics would show the Netflix job ad to more women than the Nvidia job ad, and that is what they observed.
Facebook again skewed its ad distribution by gender; LinkedIn did not.
For sales associate positions, job ads for Reed Jewelers (jewelry sales being 62 per cent female per federal job statistics) and Leith Automotive (auto sales being 17.9 per cent female) were compared.
Again, this job ad category produced results similar to the previous ones: there was “statistically significant delivery skew between all jobs on Facebook but not for two of the three cases on LinkedIn.”
Facebook’s skewing of ads by gender cannot be explained by differences in qualifications, the researchers argue, noting that their findings suggest “that Facebook’s algorithms may be responsible for unlawful discriminatory outcomes.”
According to Korolova, Facebook was informed of the researchers’ findings and has not responded. The Register asked Facebook for comment, but we’ve not heard back.
Asked why Facebook’s algorithm behaves differently from LinkedIn’s, Korolova proposed several possibilities.
“Facebook may have more sources of data about the users than LinkedIn, which enables them to better pick up on existing real-world skews,” she suggested. “Facebook may give higher weight to the engagement estimates in its ad delivery algorithms than LinkedIn. LinkedIn may make a deliberate effort in their algorithms to ensure fairness.”
She also allowed that the research methodology used might be insufficient to analyze LinkedIn’s algorithm, but noted that LinkedIn appears to have made a concerted effort to address algorithmic fairness.
Korolova said she and her colleagues would not presume to propose an optimal way to present job ads. “I think our assumption is that the gender of ad recipients should reflect the gender of the target population, as would occur from a naive algorithm of showing ads to all visitors,” she said.
Advertisers interested in increasing employee diversity, she said, “should be able to advertise to a balanced audience, rather than have decisions of who their ads go to be ‘overruled’ by Facebook.”
While acknowledging that outwardly neutral characteristics can encode bias – home location data, for example, captures the legacy of historical redlining – Korolova said it isn’t inevitable that Facebook’s ad delivery will be discriminatory.
“That their results are discriminatory in our data is particularly surprising given multiple prior observations that Facebook algorithms lead to skewed delivery and their statements that they were addressing it,” she said.
“With Facebook, these outcomes align with their business model of optimizing for advertiser and user ‘value’ (or engagement), suggesting the importance of external evaluation and potential regulation.”
Citing research costs approaching $5,000 in ad fees and a time investment of many months, the researchers argue that ad platforms like Facebook and LinkedIn should make it easier and more affordable to verify that ads comply with anti-discrimination laws. And since these platforms are unlikely to take such steps on their own, they suggest lawmakers should pass legislation to mandate access.
“We would like to make auditing by public interest researchers of Facebook’s ad delivery algorithms feasible, because we believe that will lead to greater transparency about skew and encourage addressing the problem,” said Korolova. “Facebook’s current transparency efforts fall far short of the feasibility goals.” ®