by Maria Korolov

How AI is revolutionizing training

Feature
Apr 07, 2021
Artificial Intelligence

Pressed by the need to fill skills gaps, organizations are turning to artificial intelligence to transform stagnant training approaches into continually evolving upskilling strategies.


Employee training is an issue of critical importance for enterprises. For organizations challenged to find skilled employees, sapped by high turnover rates, and mired in massive transformations, the need to upskill and cross-train employees is paramount, and almost too much for traditional approaches to training to handle.

Enter AI.

Artificial intelligence and machine learning are increasingly being leaned on to aid in companies’ upskilling strategies, ascertaining skill sets, recommending learning paths, providing on-the-job training — even helping determine what to pay for acquired skills.

With more than 345,000 employees and an ever-present need to stay ahead of the technology curve, IBM is one such company putting AI to work in keeping its workforce sharp.

“The half-life of skills is now five years,” says Anshul Sheopuri, chief technology officer for data and AI at IBM HR. “Half of what you learn is either forgotten or becomes obsolete in five years.”

Demand for new and specialized skill sets in rapidly evolving domains such as cloud computing and AI is in large part what is shrinking the shelf life of skills. So finding a scalable way to continuously improve employee skills is “not a nice-to-have but a must-have,” he says.

Anatomy of an AI-based training strategy

The first step of any upskilling strategy is to identify what skills employees currently have. In the past, this would involve a skills self-assessment. But as Sheopuri points out, when tested, the accuracy of this method was about 75%. “It’s highly subjective and the assessments become obsolete quickly.”

Anshul Sheopuri, chief technology officer for data and AI and HR, IBM

Today, IBM is using AI to infer skills by scanning 220 million internal documents, including resumes, blogs, published papers, and corporate communications. The system is built on IBM’s own Watson AI and uses natural language processing, clustering, and semi-supervised learning techniques.

“We say, ‘These are the skills we think you have, give us your feedback,’ and we find the accuracy here is closer to 90%,” he says, adding that these automated skill assessments are easier to keep current.
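IBM has not published the internals of this pipeline, but the overall pattern of extracting candidate skills from an employee's documents and then asking the employee to confirm them can be illustrated with a deliberately simple sketch. The skill phrases, document snippets, and plain keyword matching below are assumptions for illustration; Watson's actual models (clustering, semi-supervised learning) go well beyond this.

```python
# A minimal, illustrative sketch of inferring skills from an employee's documents.
# The skill phrases and documents are invented; a real pipeline would use
# NLP models rather than simple phrase counting.

from collections import Counter

SKILL_PHRASES = [
    "kubernetes",
    "terraform",
    "pytorch",
    "natural language processing",
    "spark",
]

def infer_skills(documents: list[str], min_mentions: int = 2) -> dict[str, int]:
    """Count skill-phrase mentions across an employee's documents and keep
    only skills mentioned often enough to be worth confirming with the employee."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        for phrase in SKILL_PHRASES:
            counts[phrase] += text.count(phrase)
    return {skill: n for skill, n in counts.items() if n >= min_mentions}

# Example documents: resume snippets, internal blog posts, project summaries.
docs = [
    "Led migration of services to Kubernetes; wrote Terraform modules.",
    "Blog: tuning PyTorch models for natural language processing tasks.",
    "Presented on Kubernetes autoscaling at the internal cloud guild.",
]
print(infer_skills(docs))  # {'kubernetes': 2}
```

The final list is only a starting point: as Sheopuri describes, employees confirm or correct it, and that feedback loop is what pushes accuracy toward 90% and keeps the assessments current.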

Employees then get learning recommendations based on the skills they have, the skills IBM needs, and the skills IBM customers are asking for.

“In certain strategic areas, like AI or cloud, we want everybody to have foundational knowledge in that space,” he says.

The system also makes recommendations based on what other employees with the same background study next, similar to the way users get recommendations about what movies to watch next on video streaming services.
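Streaming-style "people like you studied this next" suggestions typically come from collaborative filtering. IBM has not published its recommender's internals, so the following is only a minimal item-based collaborative filtering sketch over an invented course-completion matrix, using cosine similarity between courses:

```python
# A minimal sketch of "employees with your background studied this next"
# recommendations via item-based collaborative filtering. Course names and
# the completion matrix are made up for illustration.

import numpy as np

courses = ["Intro to Cloud", "Kubernetes Basics", "ML Foundations", "Watson NLP"]

# Rows = employees, columns = courses; 1 = completed, 0 = not yet.
completions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
])

def recommend(employee: np.ndarray, k: int = 2) -> list[str]:
    """Score unseen courses by their cosine similarity to courses the
    employee has already completed, and return the top-k suggestions."""
    norms = np.linalg.norm(completions, axis=0)
    sim = (completions.T @ completions) / np.outer(norms, norms)  # course-course similarity
    scores = sim @ employee
    scores[employee == 1] = -np.inf  # don't re-recommend completed courses
    top = np.argsort(scores)[::-1][:k]
    return [courses[i] for i in top]

# An employee who has finished the two cloud courses:
print(recommend(np.array([1, 1, 0, 0])))  # ['ML Foundations', 'Watson NLP']
```

A production recommender would also factor in the skills IBM needs and what its customers are asking for, not just what peers studied next.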

“This system is used by 98% of IBMers every quarter,” Sheopuri says, adding that it has a Net Promoter Score of 58. According to Bain & Co, which created the NPS system, anything above 50 is excellent.
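For reference, NPS is computed from a 0-to-10 "how likely are you to recommend this?" survey: the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 through 6). A tiny sketch with made-up responses:

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# The survey responses below are invented for illustration.

def nps(scores: list[int]) -> float:
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 10, 7, 9, 6, 10, 3]
print(nps(responses))  # 40.0: 6 promoters, 2 detractors out of 10
```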

With employee pay at IBM linked to skill sets, there’s also an AI system that makes salary recommendations to managers, as a way to reward employees for upskilling, and to address problems of pay inequities. “That’s a huge thing for us,” says Sheopuri. “We want to make sure we’re very thoughtful about bias.”

The pay recommendation system also offers transparency, providing explanations for its recommendations so that managers have all the information they need to make a final decision.

In addition to raises, employees also get job recommendations — again, powered by AI. “There are tens of thousands of jobs always open at IBM,” he says. “It’s a win-win for us. It’s an attrition reduction, and we’re helping our employees attain their aspirations.”

Last year, Sheopuri says, about 200,000 IBM employees looked at job recommendations, and thousands got new jobs based on their personalized matches.

On-the-job augmentation

For the training itself, IBM relies on a traditional mix of third-party courses, internally developed learning materials, and some new decentralized educational channels where content is created by other employees. But it does lean on AI to provide on-the-job training for some types of jobs. For example, in HR, a chatbot can answer questions related to benefits, dental plans, and onboarding policies. AI is also embedded in tech support functions.

“We’re a huge technology company managing IT infrastructure for thousands of clients,” he says. “We’re mining logs to understand historical ticket issues and resolution paths and we make that knowledge available to agents.”

It’s the digital equivalent of having an experienced employee sitting in the next chair over, ready to help if someone gets stuck.
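The article does not detail IBM's tooling here, but the core idea of matching a new ticket against historical ones and surfacing how similar issues were resolved can be sketched with off-the-shelf text retrieval. The tickets, resolutions, and use of scikit-learn below are illustrative assumptions:

```python
# A minimal sketch of surfacing past resolutions for a new support ticket
# via TF-IDF similarity search over historical tickets.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

historical_tickets = [
    ("VPN drops every 30 minutes on corporate laptop", "Reinstall VPN client, update network driver"),
    ("Cannot mount shared drive after password reset", "Re-map drive with new credentials, clear cached logins"),
    ("Email stuck in outbox, not sending", "Rebuild mail profile, check SMTP settings"),
]

vectorizer = TfidfVectorizer()
ticket_matrix = vectorizer.fit_transform(issue for issue, _ in historical_tickets)

def suggest_resolution(new_ticket: str) -> str:
    """Return the resolution path of the most similar historical ticket."""
    query = vectorizer.transform([new_ticket])
    scores = cosine_similarity(query, ticket_matrix)[0]
    best = scores.argmax()
    return historical_tickets[best][1]

print(suggest_resolution("shared drive won't mount since I changed my password"))
```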

Swaminathan Chandrasekaran, head of solution architecture for digital solutions, KPMG

In fact, helping train employees by providing help right when the employee needs it is the most common use of AI for upskilling, says Swaminathan Chandrasekaran, head of solution architecture for digital solutions at KPMG.

This is especially important in contact centers, where attrition rates are high.

“Attrition rates are from 18% for the smaller centers to 40% in larger ones,” Chandrasekaran says. “The cost of replacing an agent is from $5,000 to $7,000 — not including time on the job where they’re trying to ramp up.”

Say, for example, an agent needs to explain how to replace the battery in a MacBook Pro. A new agent would benefit from seeing the instructions come up on the screen, ready to refer to. Here, AI can ascertain a caller’s question and retrieve the most relevant information from handbooks, guides, product manuals, support manuals — all the documents a company would have available.

Moreover, AI can also be used to predict what a call is likely to be about based on past interactions and pull up relevant information right from the start — like having an experienced employee on hand who has dealt with that customer before and can guide a trainee through the process of solving that customer’s issue.
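One common way to make that kind of prediction is to train a text classifier on past interactions that have been labeled by topic. The training data and the use of scikit-learn below are invented for illustration and are not a description of any vendor's system:

```python
# A minimal sketch of predicting a call's likely topic from notes about the
# customer's past interactions, using TF-IDF features and logistic regression.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_interactions = [
    "ordered replacement battery last week, asked about shipping",
    "reported cracked screen, requested repair quote",
    "asked how to pair bluetooth headphones with laptop",
    "battery swelling complaint, warranty question",
    "screen flickering after update, wants repair appointment",
    "bluetooth keeps disconnecting from speaker",
]
topics = ["battery", "screen repair", "bluetooth", "battery", "screen repair", "bluetooth"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_interactions, topics)

# Before the agent picks up, predict the likely topic so the relevant
# manual section can be pulled up from the start.
print(model.predict(["customer emailed about battery not holding charge"])[0])
```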

These systems can also be used to provide new information to large numbers of employees in the context of their jobs, so they don’t have to be sent out for training when, say, a new product comes out.

KPMG, for example, recently had to train its employees about changes to the London Interbank Offered Rate (LIBOR). KPMG used AI to read contracts, in all formats, and extract references to bank rates and other LIBOR-specific language and push them into employee workflows. The systems also get training from subject matter experts to provide additional assistance to employees.
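KPMG has not described the implementation, but the extraction step it describes, flagging LIBOR-specific language so it can be pushed into employee workflows, can be illustrated with a simple pattern-matching sketch. A production system would layer OCR, document-layout parsing, and trained extraction models on top of this; everything below is an assumption for illustration:

```python
# A minimal sketch of flagging LIBOR-specific language in contract text so
# the flagged clauses can be routed into a reviewer's workflow.

import re

LIBOR_PATTERNS = [
    r"\bLIBOR\b",
    r"\bLondon\s+Interbank\s+Offered\s+Rate\b",
]

def flag_libor_clauses(contract_text: str) -> list[str]:
    """Return sentences that reference LIBOR."""
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    pattern = re.compile("|".join(LIBOR_PATTERNS), re.IGNORECASE)
    return [s.strip() for s in sentences if pattern.search(s)]

contract = (
    "Interest shall accrue at a rate equal to 3-month LIBOR plus 2.00% per annum. "
    "Payments are due on the first business day of each quarter. "
    "If the London Interbank Offered Rate is discontinued, the parties shall "
    "agree on a replacement benchmark."
)
print(flag_libor_clauses(contract))
```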

This new style of learning fits the generation that’s now coming into the workforce, Chandrasekaran says. “You can’t say, ‘Go to this six-week training program and then come back and do the job.’”

Virtual assistants

On-the-job AI augmentation points to a future in which every employee everywhere has their own personal assistant to help them build skills on the job, Chandrasekaran says. “It’s like having an AI-powered personal coach.”

The idea isn’t new. Twenty years ago, Microsoft tried to get everyone to learn from Clippy, its Office assistant. It didn’t go well. People hated it so much that in 2010, Clippy made the list of Time magazine’s worst inventions of all time. But this time is different, says Chandrasekaran.

Companies now have reasonably reliable language and voice recognition, sentiment analysis technology, and recommendation engines. And with digitization, they now have the data they need to train these systems based on the company’s specific requirements.

“There’s no substitute for data,” he says. “For example, in contact centers, you have to give it good data with bad accents, with poorly pronounced words, with background noise, for it to get a good speech transcription engine. With contracts, you have to give it examples of contracts in different languages and different shapes and flavors. There’s no substitute for good data, for humanly annotated data.”

Companies also need to ensure they have a feedback mechanism to continue to train and improve the system. “When you build traditional applications, the best day is the first day because everything works and the problems happen later,” Chandrasekaran says. “With AI, the first day is the worst day. Companies have to be prepared for the first few iterations.”

In many areas, the AI systems are already at a fairly stable point and can be put to doing useful work, he says. “There are still emerging areas, such as how to extract handwritten notes in a document.”

Microdose learning

Carmen Fontana, IEEE member and cloud and emerging tech practice lead at Centric Consulting, calls the new approach to AI-based learning “microdosing.”

Carmen Fontana, cloud and emerging tech practice lead, Centric Consulting

“We don’t like to sit in a class for 40 hours a week to learn something,” she says. “With AI, you can do it in small pieces, with small prompts, in the moment — so it’s timely and easy to absorb.”

At Centric, Fontana herself is both a creator and a consumer of this kind of learning.

“I actually create content based on my area of practice so that people outside my area can understand what we do,” she says. “It’s less work for me, and instead of having formal training classes on what my team does, I can put this content up and people can get these microdoses.”

The company also has a recommendation engine that can create learning pathways for employees. Fontana herself, for example, recently followed one on her company’s values and culture.

“It’s always embarrassing when you’ve been here for nine years,” she says. “But they didn’t have this training when I started, so I wanted to go back and revisit it and understand how we position ourselves in our values and culture.”

The new approach gives employees ownership over their learning, she says.

“This is a big differentiator compared to when I became a consultant.”

Potential risks of AI-based training

Kamlesh Mhashilkar, head of the data and analytics practice at Tata Consultancy Services, sees AI being used for context-aware skill building, for identifying employees who would benefit from a particular course or conference, and for customized educational plans for individual subjects.

Kamlesh Mhashilkar, head of the data and analytics practice, Tata Consultancy Services

More recently, AI has become useful in helping to proctor exams. In the good old days, people would physically travel to testing centers to take certification exams.

“With COVID, AI really helped with machine proctoring or self-proctoring or dual proctoring,” Mhashilkar says. “If the person is moving their eyes here or there continuously — is the person truly taking the exam, or are they trying to do fraud?”

Some schools are already using AI to grade student papers — and getting bad publicity when it doesn’t go well.

“There’s been a backlash because of the way it gets implemented,” says Joe Tobolski, CTO at Nerdery, a digital services consultancy. “I have a little trepidation about that, because of the ability of biases to introduce false negatives to the system.”

Joe Tobolski, CTO, Nerdery

Using AI to transfer knowledge from experienced employees to new ones, which is increasingly relevant for industries with graying workforces, also carries risks.

“We see that right now in outsourcing arrangements where workers were asked to teach their job to an outsourcing organization, and they sabotaged it,” he says.

With a machine that’s less likely to call someone out for giving obviously bad advice, that kind of sabotage is probably even more likely to happen, he says.

“There are unintended consequences,” he says. “Could someone misuse that? Of course, and they probably will.”