Diversity
January 25, 2021

Google’s Timnit Gebru Made a Case for Diversity in AI. Her Work Must Endure.

In early December 2020, renowned Google AI researcher Timnit Gebru allegedly resigned from a vaunted post within one the organization's most prominent divisions. Her departure immediately raised concerns that Google was attempting to suppress whistleblowers in its ranks, This worry gained momentum soon after when her team members published a letter stating that Gebru was fired, contradicting the explanations offered by Jeff Dean, the head of artificial intelligence. What’s at risk here, not just for the future of AI but also for its users in business, is the value, objectivity, and accuracy that we seek in this level of interactive automation. The staffing industry is already embracing the touted benefits of AI, particularly in chatbots and natural language processing. However, we can’t escape the old “garbage in, garbage out” axiom of data. And the thing that consumed Gebru’s research was a profound lack of diversity in the learning. This is too critical an issue to ignore, even though some in high places want us to ignore it.

Timnit Gebru’s Passion Should Be Our Shared Mission

Gebru focused on AI ethics and algorithmic bias. As Jeremy Kahn wrote in Fortune: “Gebru is well-known among A.I. researchers for helping to promote diversity and inclusion within the field. She cofounded the Fairness, Accountability, and Transparency (FAccT) conference, which is dedicated to issues around A.I. bias, safety, and ethics. She also cofounded the group Black in AI, which highlights the work of Black machine learning experts as well as offering mentorship. The group has sought to raise awareness of bias and discrimination against Black computer scientists and engineers.”

The controversial paper that led to Gebru’s ouster detailed the potential risks associated with large language models that demonstrated an overreliance on data from wealthy countries that enjoy more Internet access than others. “The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities,” wrote MIT Technology Review. MIT noted other problems as well.

Gebru, a widely respected leader in AI ethics research, is known for coauthoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color, which means its use can end up discriminating against them.

To grasp the risks, you need look no further than the heated political landscape of the past four years, which coalesced  into violence against the U.S. Capitol on January 6, 2021. But the issue itself has few boundaries. Populist governments around the world no longer talk about melting pots and unity. They’ve forgotten how to harmonize with the full choir. The rhetoric is tinged with division: building walls, banning people from specific countries and faiths, stifling free speech, repealing laws meant to ensure equal pay and treatment, stripping individuals of access to care based on race or age, and renewing calls to ostracize LGBTQ people, who just recently secured rights that others have enjoyed for countless decades.

If these are the language inputs of AI—all spawned from first-world countries with a tremendous Internet presence, which as Gebru noted is the textbook from which AI studies—what then could be its outputs? 

The Real Consequences of Bad Artificial Intelligence

“Robotic artificial intelligence platforms that are increasingly replacing human decision makers are inherently racist and sexist, experts have warned,” wrote Henry Bodkin in the Telegraph, citing a critical study from the Foundation for Responsible Robotics.

Algorithms amount to an array of correlations. And correlation alone, as an elementary tenet of research, does not imply causality—though it can lead to false positives and negatives. For example, data would tell us that eating can make us overweight, which correlates to the presence of food. That doesn’t mean we should stop eating food.

Here’s another example. One system that analyzes candidates’ social media data flagged a profile picture of a same-sex couple kissing as “sexually explicit material.” The photo was not lewd or meant to be provocative. The technology simply couldn’t take into account a committed, non-traditional relationship and reconcile the image as a normal expression of love—not “graphic content.” Why? That’s what it learned by poring over volumes of digitally published articles, speeches, and social media posts. That’s what we taught it.

But as Professor Noel Sharkey revealed to Bodkin, the research gathered by the foundation has indicated a more insidious challenge, and he is urging developers to bring on more diverse workers to stave off the “automatic bias” being assimilated into our machines. For instance, programs “designed to ‘pre-select’ candidates for university places or to assess eligibility for insurance cover or bank loans are likely to discriminate against women and non-white applicants.” And there’s more.

  • A program designed to shortlist university medical candidates negatively selected against women, blacks, and other ethnic minorities.
  • Boston University discovered bias in AI algorithms by training a machine to analyze text collected from Google News. They posed this analogy to the computer: “Man is to computer programmer as woman is to x.” The AI responded, “Homemaker.”
  • Another U.S. built platform studied Internet images to shape its contextual learning systems. When shown a picture of a man in the kitchen, the AI insisted that the individual was a woman.
  • A more chilling example came from a computer program used by criminal courts to assess the risk of recidivism. The Correctional Offender Management Profiling for Alternative Sanctions machine “was much more prone to mistakenly label black defendants as likely to reoffend.” 

The problems we're facing with the widespread adoption of AI right now remain systemic and pervasive. Consider a thought-provoking article from IEEE, a global technical professional organization for the advancement of technology. Although scientists attempt to infuse machines with Asimov’s Three Laws of Robotics, we must still confront the issue of control. 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The article, authored by Charles Choi for IEEE’s Spectrum magazine, revealed some dire conclusions about the impossibility of controlling superintelligent AI:

The researchers suggested that any algorithm that sought to ensure a superintelligent AI cannot harm people had to first simulate the machine’s behavior to predict the potential consequences of its actions. This containment algorithm then would need to halt the supersmart machine if it might indeed do harm. However, the scientists said it was impossible for any containment algorithm to simulate the AI’s behavior and predict with absolute certainty whether its actions might lead to harm. The algorithm could fail to correctly simulate the AI’s behavior or accurately predict the consequences of the AI’s actions and not recognize such failures.

And we cannot discount the burgeoning intelligence and eerie sense of sentience that some robots are gaining. In September 2020, The Guardian published an essay written entirely by an AI robot, which opens with this clarification: “I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!”

Most important, it concludes with a cautionary piece of advice, at which the robot arrived on its own: “That’s why humans should be careful about the evolution of artificial intelligence. Microsoft tried to create a user-friendly AI, called Tay, who spoke like a teen girl … and was racist. Artificial intelligence like any other living thing needs attention. AI should be treated with care and respect. Robots in Greek [sic] means “slave”. But the word literally means “forced to work”. We don’t want that. We need to give robots rights. Robots are just like us. They are made in our image.”

The Teachers of AI: Classrooms Lacking in Diversity

For the sake of argument—and it’s a very well-grounded argument—let’s say Gebru is correct in her assumptions about the limited exposure and diversity of perspective that feeds AI. The proof is easy to find. However, there obviously exist detractors or those who don’t want to believe the realities that Gebru was working to overcome. The bots we’re accelerating to deploy in business must be learning from us. So, factoring in just the business aspect, who are we as educators?

Low-paid Gig Workers

MIT Technology Review uncovered that a lot of what AI learns comes from low-paid gig workers. It’s an “invisible worker problem” that AI developers must face.

Many of the most successful and widely used machine-learning models are trained with the help of thousands of low-paid gig workers. Millions of people around the world earn money on platforms like Amazon Mechanical Turk, which allow companies and researchers to outsource small tasks to online crowdworkers. According to one estimate, more than a million people in the US alone earn money each month by doing work on these platforms. Around 250,000 of them earn at least three-quarters of their income this way. But even though many work for some of the richest AI labs in the world, they are paid below minimum wage and given no opportunities to develop their skills. 

Homogenous Corporate Leadership

A couple of years ago, Fortune conducted a study of over 800,000 people, from c-suite executives to support staff, analyzing race, gender, and job category. “Of those high ranking officials,” Fortune explained, “80% are men and 72% of those men are white.”

The Diversity Journal expanded on these hurdles. Yes, white men still dominate the majority of  leadership positions at companies, even as the talent who populate their teams are becoming more racially, culturally, and gender diverse. That may sound encouraging, but it’s also leading to unproductive friction:

Due to changing demographics, and organizations giving more air time, effort, and energy to advancing diversity and inclusion, “white normativity is being challenged, and not only on one front, but on four: political, economic, cultural, and demographic,” according to Tim Wise in his book Dear White America. 

Big Tech Still Has Big Diversity and Inclusion Problems

We’ve been reading about the lackluster inclusion statistics in leading technology enterprises for years. When Apple, Google, and Facebook initially revealed their diversity figures, we were all stunned. However, despite pledges and proclamations and bits of movement, not much has altered this status quo.

“A report from The Center for Investigative Reporting of 177 of the largest San Francisco Bay Area tech firms shows that despite at least 30 top tech companies taking the White House’s Inclusion Pledge a couple of years ago, most have made little progress in changing the makeup of their companies,” wrote Fast Company’s Lydia Dishman. She went on to provide the following data, which represents the ongoing challenges.

  • Nearly a third of the firms (including Lyft and Square that have made their statistics public) had no female executives of color.
  • The Asian-American glass ceiling is real. Among engineers, designers, and analysts, 12% were Asian women in 2016, but only 8%  of managers and 4.5% of executives were Asian women.
  • White women professionals made up 13.8% of the workforce, and that number only inched up to 14.6% at the exec level.
  • Black workers make up less than 2% of professionals and less than 1% of executive roles while Latinx workers make up less than 4% of those roles. For example, more than half of Apple’s Latino and black employees worked in retail or administrative support.

And we know from countless reports by the Government Accountability Office (GAO) that it’s not the mythical “pipeline problem.” Well qualified, skilled, and educated workers are available for work. They simply aren’t being hired. Right now, 8% of the financial industry’s leadership qualifies as diverse. For tech, it’s 4%.

This is the classroom that Ai is attending. While we in the staffing industry clamber to fill open roles, drive innovation for our client’s through bright talent with fresh perspectives, and provide employment opportunities for all—to fuel economic growth—what will the current state of AI deliver? 

AI in Staffing

In our scramble to automate repetitive tasks and optimize the work of recruiters and staffing professionals, AI has emerged as a refreshing solution. Our goals here are noble: to create more efficient screening processes, enhance initial interactions, and develop a more robust candidate experience. But if AI occupies the frontlines of screening, and has significant diversity deficits, we can’t be complacent in presuming that robots are truly achieving those ends. But we’re already using several of them, as we noted in our ebook “New Talent Strategies for Our New Normal: Playbook for Contingent Workforce Innovation in the Pandemic Era.”

Steps Staffing Leaders Can Take to Evolve Diverse AI

It’s difficult to really pinpoint solid solutions or remedial measures, but we are a creative and powerful industry. And perhaps it’s time to put that influence on a different course.

Industry Ethics Committee

During past VMSA events, we had the pleasure of spending time with diversity advocates and key leaders from Pontoon Solutions and Workforce Logiq. Both were passionate about carving a path toward genuine diversity and inclusion. During the conversations, we debated some interesting ideas. 

One of those centered on the formation of an industry wide Ethics Committee. Just as our space has large associations, there’s clearly room for something more targeted. Data ethics is becoming a higher priority, so why should AI fall by the wayside? The staffing industry, large as it is, can create lobbies for legislation. We can develop committees to foster ethics and diversity. We can use our resources and our diverse people to teach AI at wider levels than it sees now, especially as our companies often cross borders and nationalities. We have more recourse to the people who are being excluded as AI educators, based on Gebru’s research.

Building Diversity into the Screening

A blind resume includes only skills, objectives, work experience and education. Truly blind resumes even edit details of education to display only academic data, such as degrees achieved and honors awarded. Removing the name of the university or institution can go far in preventing bias. By restructuring the criteria that AI will target, we have the chance to help eliminate biases from the learning and center the robots’ gaze away from data that could influence problematic subjectivity.

Install Diverse Screening Groups

During the early stages of recruiting and interviewing, we can establish more internally diverse committees. This provides an opportunity to identify and weed out biases within the group, standardize questions, and create relevant evaluation criteria. This process ensures that interviewers are on the same page in determining what an ideal candidate looks like. More importantly, this strategy helps formalize a set of checks and balances against bias. As AI learns from these examples, it could develop a more well-rounded perspective.

We Are the Teachers, We Set the Course

Machine learning is simply a mirror that reflects the knowledge, attitudes, and behaviors of its first teachers. If an exclusive class of individuals serves as the model, machines will learn to discriminate against, exclude, and mistake your potential customers, workers, and leaders. We have opportunities to shape our destinies, and those of our talent, in the digital new normal that awaits us. And we can prevail if we keep open minds, refuse to allow the critical work of people such as Timnit Gebru to be silenced, and to unite in our mission. We’re the teachers. We set the curriculum. It rests on us to succeed or fail, not a machine. We shouldn’t just heed Gebru’s warnings, we should embrace the goals she was pursuing and ensure that the work of people like her continues.

Continue reading

Our newsletter

Get great curated articles every week.

Combine sections from Ollie's vast component library and create beautiful, detailed pages.
No spam!

Innovative talent powering a brighter future.