Artificial intelligence (AI) is rapidly changing the world, and the workforce is no exception. As AI grows in sophistication, it becomes capable of performing more tasks once considered the exclusive domain of humans. This has led to concerns that AI will eventually replace countless roles, leaving people unemployed. Stories of business leaders seeking to maximize profits by automating jobs through GPT systems haven’t helped quell the consternation brewing among human talent. Yes, there are aspects of AI to worry about; however, they may not be what you think. More than forcing productive talent out of the workforce, the real threat of AI is human: how people may exploit it to spread false information, deceive candidates, and promote biased agendas.
The Real Threat of AI: Misuse
On May 14, 2023, the Associated Press (AP) published a prescient article titled “AI presents political peril for 2024 with threat to mislead voters.” The story focuses on the misuse of AI to influence the outcome of elections through unethical means. As we’ve seen over the past few years, bad actors can leverage mainstream and social media to spread false and misleading information. But that level of manipulation could be child’s play compared to what generative AI could produce.
“Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost,” the AP reported. “When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.”
The article also cautioned that “the implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.”
Think of what AI could accomplish in the wrong hands.
- Automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date.
- Audio recordings of a candidate supposedly confessing to a crime or expressing racist views.
- Video footage showing someone giving a speech or interview they never gave.
- Fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.
Imagine if you received a phone call from your favorite celebrity or influencer telling you to cast a vote for a specific candidate. But it wouldn’t really be that person.
AI Could Enable Recruitment Scams
The AP’s article concentrates on the risks that generative AI poses to legitimate elections and politics. The scenario it presents isn’t limited to politics, though; it’s a much more universal threat. So again, imagine what AI could accomplish in the wrong hands, this time in recruiting.
- “Elon Musk” personally phones a candidate to inform them that they are being considered for an amazing position at Tesla. But it’s not Musk, it’s AI. “He” then asks them to provide sensitive information (such as Social Security number, date of birth, or bank account information) to get the application process started.
- Fake but incredibly realistic job posts target candidates with a dream opportunity, then use the lure to request credit card information or application fees, something no legitimate employer would do.
- A company that doesn’t support diversity uses AI to weed out candidates based on discriminatory types of information such as age (analyzing dates of education or past employment), ethnicity (attempting to correlate names with nationalities), gender, socioeconomic status (based on schools or locations of residence), or other factors.
AI Doesn’t Replace Workers, People Do
Of course, there are cases where AI may be the presumed culprit for unemployment. And it’s getting more difficult to escape the dire predictions we read in the news. According to Boston Consulting Group, for instance, an estimated 25% of jobs will be replaced by robots by 2025, while an Oxford University study suggests that 35% of UK jobs are potentially at risk of automation in the next 20 years. Can we blame the bots? Not really.
Recently, Business Insider announced that it would be laying off 10% of its staffers. The timing between the revelation and the organization’s advertised adoption of AI raised eyebrows, as The Daily Beast noted: “The layoff announcement also came a week after global editor-in-chief Nicholas Carlson said the company planned to incorporate AI into the newsroom. ‘Generative AI can make all of you better editors, reporters, and producers, too,’ he wrote to staffers.”
Some see a direct correlation between the layoffs and the introduction of AI into the workflow. Others say it's a coincidence. Yet for the sake of argument, let’s say a business leader did decide to terminate his staff and replace them with generative AI to cut labor costs, bolster profits, and appease investors. If that’s true, GPT didn’t eliminate those jobs; the leader who made that decision did. The losses would be the result of a human’s actions, not a bot’s.
Generative AI Is Just a Tool
Since we referenced journalism, let’s continue that thread as an illustrative example. Writing for Slate, web editor Nitish Pahwa published an article bluntly titled “Chatbots Suck at Journalism.” Journalists have credibility because they investigate information, cite sources, and provide evidence. AI does none of that.
“ChatGPT, and other machine-learning, large language models, may seem sophisticated, but they’re basically just complex autocomplete machines,” explained Blayne Haggart, Associate Professor of Political Science, Brock University. “Only instead of suggesting the next word in an email, they produce the most statistically likely words in much longer packages. These programs repackage others’ work as if it were something new. It does not ‘understand’ what it produces.”
Pahwa pointed out similar logic in his post: “What’s more, chatbots can’t talk to people on the ground, learn to be extra careful and discerning with certain sources, describe situations and people firsthand with scrutiny or empathy, or come up with entirely new observations or arguments, at least not yet. Using ChatGPT for web articles ‘would make sense only if our goal is to repackage information that’s already available,’ Ted Chiang wrote in the New Yorker.”
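Haggart’s description of these models as “complex autocomplete machines” can be illustrated with a toy sketch. The bigram counts below are invented purely for illustration; real large language models learn probabilities over tokens with neural networks, but the selection principle, produce the most statistically likely continuation, is the same.

```python
from collections import Counter

# Hypothetical counts of words observed after "the" in some training text.
# (Invented numbers, for illustration only.)
bigram_counts = {
    "the": Counter({"cat": 12, "dog": 9, "election": 3}),
}

def next_word(prev: str) -> str:
    """Return the word most frequently seen after `prev` (argmax over counts)."""
    return bigram_counts[prev].most_common(1)[0][0]

print(next_word("the"))  # "cat" appears most often after "the"
```

Nothing in this loop checks whether the output is true; it only checks what is statistically common, which is exactly why such systems can “repackage others’ work” without understanding it.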
I asked ChatGPT about its capabilities to interview people or cite sources, both of which establish authority, credibility, and accuracy in reporting. Its answers were clear.
- “As an AI language model, I don't have the capability to conduct interviews in real-time or engage in direct conversations with individuals. However, I can provide information, answer questions, and engage in discussions on a wide range of topics, including politics.”
- “As an AI language model, my responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data. I have not been directly trained on specific sources or have access to a database of citations. Therefore, I cannot provide formal citations or guarantee the accuracy, reliability, or currency of the information I provide.”
That’s not to say AI isn’t a powerful tool that can help people become more efficient. Generative AI can take on tedious tasks such as transcription, analysis, preliminary research, and other transactional routines, freeing workers to focus on their core jobs.
“They’re very good at having a conversation, which is to say these things can write,” Pahwa said. “They’re also talented fabulists that will pass off inaccurate or hallucinated information with eerie confidence.”
The theme here isn’t exclusive to journalists. The same can be said of recruiters, who are responsible for interviewing candidates, vetting their skills and personalities through experience and the human capacity for nuance, delivering credible information, and establishing authority through facts and sources.
AI Can Make Recruiters More Productive and Efficient
“AI recruiting software should be viewed as a way to augment the recruiting process, not replace recruiters,” said Recruiter.com. “It is a tool like an Applicant Tracking Software (ATS) that recruiters can use to streamline the process. AI won’t be a replacement for an actual human recruiter so that everyone can take a sigh of relief. Their jobs are safe. They’re in more demand now than they’ve ever been.”
As we wrote in a previous article, staffing professionals and MSPs have a lot to gain from generative AI models.
Automated Resume Screening

AI-powered tools automatically screen resumes and job applications, saving recruiters time while increasing efficiency. The system can quickly identify qualified candidates based on job requirements, such as experience, education, and skills.
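At its simplest, screening against job requirements amounts to scoring each resume against the role’s required skills. The sketch below is a deliberately minimal, hypothetical illustration; production screening tools use NLP models and structured ATS data, not plain keyword matching.

```python
def score_resume(resume_text: str, required_skills: list[str]) -> float:
    """Return the fraction of required skills mentioned in the resume text."""
    text = resume_text.lower()
    matched = [skill for skill in required_skills if skill.lower() in text]
    return len(matched) / len(required_skills)

# Hypothetical resume and job requirements, for illustration only.
resume = "Senior developer with 8 years of Python, SQL, and cloud (AWS) experience."
skills = ["Python", "SQL", "AWS", "Kubernetes"]
print(score_resume(resume, skills))  # 3 of 4 skills matched -> 0.75
```

A recruiter would then review only the candidates above some score threshold, which is where the time savings come from.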
Enhanced Candidate Sourcing
AI helps identify candidates who may not have applied for a particular job but possess relevant skills and experience. The system can analyze social media profiles, professional networking platforms, and other public sources to find potential candidates who may be a good fit for a specific role.
Improved Candidate Matching

AI algorithms can analyze data from past hiring decisions and identify patterns that lead to successful hires. This helps recruiters make more informed decisions when evaluating candidates, increasing the likelihood of finding the right fit for the job.
Conversational Chatbots

Today’s AI-powered chatbots can interact with candidates and answer their questions in more conversational and natural language, providing an enriching personalized experience. Chatbots can also help schedule interviews and follow-up communications, allowing recruiters to focus on more strategic tasks.
Reduced Unconscious Bias

AI can help reduce unconscious bias in the recruitment process by removing personal identifiers, such as name and gender, from resumes and job applications. This helps ensure that candidates are evaluated based on their skills, merit, and experience, rather than factors such as race, age, or gender.
Mitigating the Risks of AI
Generative AI is essentially a clever word processor, rapid research assistant, and hyper-effective parrot. But its data comes from the trove of human knowledge collected over the centuries. It needs people more than people need it. And if AI rises up to replace human talent, then it will most likely happen at the behest of a human who commanded it to do so. When AI is used to push falsehoods, deceptions, or biased agendas, that’s on us.
Data ethics and guardrails are paramount to the successful integration of GPT systems into our working lives as useful assistants. There are steps we should be taking to ensure accuracy and integrity within these systems, as we previously discussed in our podcast “AI Governance, Ethics, & Diversity Challenges with Purvee Kondal.”
- Executive and Board Oversight: Creating accountability, inclusive leadership, diverse governance, rules that prevent bias, and concentrating on teaching machines appropriately.
- Putting up the Guardrails: Establishing proven frameworks, evolving those foundations, choosing the right data sources, properly vetting the data, and developing governance practices to continue the positive growth of AI processes.
- AI Ethics in Leadership and Procurement: Making more informed and inclusive decisions, creating a healthy level of friction to prevent the normalization of inequality or self-serving motives, and dramatically improving supplier diversity, selection, and partnership processes in procurement.
- Best Practices for Rolling Out and Rolling Ahead: Selecting, implementing, developing, maturing, and growing AI within the organization; improving worker and supplier advocacy; and bringing those perspectives as data inputs into the procurement organization.
AI is a tool, one that has the potential to revolutionize work and help increase the performance of human users. But a tool’s output is relative to its master’s intent. A hammer can build homes or it can tear them down. It’s only as beneficial or destructive as the person wielding it.