How AI is Transforming Talent Development

Kian Katanforoosh on Skills Assessment, AI Integration, Project Matching, and the Future of Mentorship.

Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.

Workera founder Kian Katanforoosh discusses how skills verification has evolved into a critical market focused on both assessment and professional development. The conversation explores how AI has transformed skills assessment, enabling more responsive tools that adapt to rapidly changing skill requirements across industries. The discussion highlights Workera’s B2B focus on helping organizations with upskilling, project resourcing, and internal mobility, while also examining how AI can improve assessment quality and enable better mentorship. [This episode originally aired on Generative AI in the Real World, a podcast series I’m hosting for O’Reilly.]


Transcript

Below is a heavily edited excerpt, in Question & Answer format.

Can you give us a sense of how big the market is for skills verification?

It’s a newer market that’s extremely large. Anything that touches skills data is on the rise. People typically think of skills assessments like the SAT, TOEFL, or GMAT, but in professional contexts, there are countless points where employees need to validate their skills or demonstrate competencies—before getting a job, when being matched to projects, during mentorship, or when deciding where to invest in learning. Because of these various touchpoints, the market for accurate skills verification is massive. At Workera, our business model is exclusively B2B and federal, though we do offer free assessments for consumers to test their skills in AI, cybersecurity, software engineering, and other areas.

How has skills assessment evolved over the years, particularly before generative AI?

Historically, assessments were used primarily for summative purposes—high-stakes, pass/fail scenarios like job applications or university admissions without much feedback for personal development. What Workera has done is reinvent assessments to also help people understand where they stand compared to the market, make informed career decisions, and identify what to study next. This shift from company-focused hiring tools to providing value for the individual’s self-awareness required different technology approaches.

What changed in the assessment space with the rise of generative AI?

Several things have changed. First, skills now evolve much faster than before. The World Economic Forum reports that the “half-life” of skills used to be over 10 years, but today it’s around four years for digital areas and just 2.5 years for some technical fields. This rapid evolution means people need to update their skills more frequently.

Additionally, while anyone can write a simple quiz, creating a valid assessment that actually measures what it’s supposed to measure is extremely difficult. AI can help throughout the assessment workflow:

  1. Before development: AI assists with competency modeling—determining what skills are relevant to measure for specific roles
  2. During creation: AI streamlines assessment creation and adaptation to different languages, contexts, and industries
  3. During testing: AI can create synthetic users to test assessments and provide feedback
  4. Post-assessment: AI monitors performance metrics and can identify problematic questions, with human oversight for updates
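
As a toy illustration of the post-assessment step, classical item analysis can flag problematic questions from response data. The function names, thresholds, and data shape below are assumptions for the sketch, not Workera's actual pipeline.

```python
# Illustrative sketch: flag problematic assessment items via classical
# item analysis. responses[i][j] is 1 if test-taker i got item j right.

def item_stats(responses):
    """Return (difficulty, discrimination) per item.

    difficulty: fraction of test-takers answering correctly.
    discrimination: correlation between item score and total score;
    near-zero or negative values suggest the item doesn't measure
    the same construct as the rest of the assessment.
    """
    n, m = len(responses), len(responses[0])
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    sd_t = (sum((t - mean_t) ** 2 for t in totals) / n) ** 0.5
    stats = []
    for j in range(m):
        col = [row[j] for row in responses]
        p = sum(col) / n                       # difficulty
        cov = sum((c - p) * (t - mean_t) for c, t in zip(col, totals)) / n
        sd_c = (p * (1 - p)) ** 0.5
        r = cov / (sd_c * sd_t) if sd_c and sd_t else 0.0
        stats.append((p, r))
    return stats

def flag_items(stats, p_lo=0.2, p_hi=0.9, r_min=0.2):
    """Flag items that are too hard, too easy, or poorly discriminating."""
    return [j for j, (p, r) in enumerate(stats)
            if p < p_lo or p > p_hi or r < r_min]
```

In practice the flagged items would go to a human reviewer, matching the human-oversight step described above.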

How has AI changed the nature of coding assessments specifically?

Coding assessments need to evolve alongside coding practices. Traditional coding assessments focused heavily on syntax, but with today’s coding assistants, the ability to remember a comma or semicolon isn’t as relevant. Instead, assessments should focus on higher cognitive levels—analyzing information, synthesizing solutions, and creating new approaches—rather than just syntax details.

Our research at Workera indicates you need about 1,000 skills to prototype AI, but around 10,000 skills to deploy AI in production. This includes everything from model development to serving, monitoring, data engineering, testing, and all the infrastructure components. This reflects what Google’s paper on “Hidden Technical Debt in Machine Learning” illustrated—that ML code is just a tiny piece of the entire ecosystem needed for deployable AI systems.

What’s your process for developing assessments for new domains you may not be familiar with?

We have an agent for competency modeling that can be conditioned on various inputs—job descriptions, task analyses, or enterprise job architectures. For example, if we needed to assess a full stack engineer level 2 in a specific geography or industry, we’d start by granularizing the tasks or skills worth measuring.

There’s always a human in the loop—subject matter experts validate whether these competencies are what they want to measure. This process requires significant subject matter expertise to ensure what you’re measuring aligns with what you intended to measure.

Even for specialized domains like battery engineering, language models have been trained on enough content to provide a solid starting point, which subject matter experts can then refine from 80% to closer to 100% accuracy.
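
A minimal sketch of what conditioning a competency-modeling prompt on these inputs might look like. The function name, parameters, and prompt wording are hypothetical illustrations, not Workera's actual agent.

```python
# Hypothetical sketch of assembling a competency-modeling prompt
# conditioned on a job description, industry, and seniority level.

def build_competency_prompt(role, job_description, industry=None, seniority=None):
    """Assemble an LLM prompt asking for a granular, measurable skill list."""
    context = [f"Role: {role}", f"Job description: {job_description}"]
    if industry:
        context.append(f"Industry: {industry}")
    if seniority:
        context.append(f"Level: {seniority}")
    return (
        "You are a competency-modeling assistant.\n"
        + "\n".join(context)
        + "\n\nList the discrete, measurable skills this role requires, "
        "one per line, granular enough to write assessment items against. "
        "A subject matter expert will review and correct the list."
    )
```

The closing instruction reflects the human-in-the-loop step: the model's output is a starting point for expert refinement, not a final competency model.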

We separate our standard catalog of measurable skills (which allows for benchmarking against others in different industries) from custom catalogs that clients can create for highly relevant but less comparable assessments.

What are the primary use cases for skills assessments at Workera currently?

The majority of people use our assessments for upskilling—acquiring skills like AI, data, and the “soft skills” that are increasingly important when working with AI agents.

The second major application is project resourcing. Managers with large teams often don’t know who’s good at what, and regular assessments help organizations identify experts and their skill levels to match the right people to specific projects.

Internal mobility is also a big use case. With hiring slowdowns, companies want to maximize their current workforce by upskilling employees, matching them to appropriate projects, and planning their workforce strategically—all of which require assessments.

Unlike traditional talent intelligence systems that just search for keywords like “Python” on profiles (which might yield thousands of results), our assessments determine actual proficiency levels that meet specific project requirements.
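
The contrast between keyword search and proficiency matching can be sketched with toy data. The profile fields and 0-100 score scale below are assumptions for illustration, not a real talent intelligence system.

```python
# Illustrative contrast: keyword search returns anyone whose profile
# mentions "Python"; proficiency matching filters on verified
# assessment scores against a project's minimum requirements.

def keyword_search(profiles, keyword):
    return [p["name"] for p in profiles if keyword in p["keywords"]]

def proficiency_match(profiles, requirements):
    """requirements: {skill: minimum assessed level on a 0-100 scale}."""
    return [
        p["name"] for p in profiles
        if all(p.get("scores", {}).get(skill, 0) >= level
               for skill, level in requirements.items())
    ]

profiles = [
    {"name": "Ana", "keywords": {"Python"}, "scores": {"Python": 85, "ML": 70}},
    {"name": "Ben", "keywords": {"Python"}, "scores": {"Python": 40}},
    {"name": "Chi", "keywords": {"Java"},   "scores": {"Python": 90, "ML": 80}},
]

keyword_search(profiles, "Python")                     # Ana and Ben
proficiency_match(profiles, {"Python": 80, "ML": 65})  # Ana and Chi
```

Note how the two queries disagree: keyword search surfaces a low-proficiency match (Ben) and misses a strong one (Chi) whose profile never mentions the keyword.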

What would you like to see improved in frontier AI models for your work?

Two key areas:

  1. Observability—We need better tools for LLM observability, beyond standard software observability. When running an agent that carries out complete conversations, we need to be able to trace its behavior back and update it effectively.
  2. Repeatability—We need to ensure that we get consistent results for the same calls, which isn’t always the case with every foundation model provider. This is crucial for building assessments that can be trusted. It’s a balance between setting temperature low enough for consistency but high enough to allow for appropriate customization based on someone’s HR data and role context.
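
The temperature tradeoff described above can be illustrated with a toy temperature-scaled sampler: near-zero temperature reduces to greedy, repeatable selection, while higher temperature spreads probability mass across options. This is a conceptual sketch, not any foundation model provider's decoding implementation.

```python
import math
import random

# Toy illustration of the consistency/customization tradeoff:
# sample an index from softmax(scores / temperature).

def sample(scores, temperature, rng):
    """Sample an index; ~zero temperature means greedy (deterministic)."""
    if temperature <= 1e-6:
        return max(range(len(scores)), key=lambda i: scores[i])
    mx = max(s / temperature for s in scores)           # for numeric stability
    exps = [math.exp(s / temperature - mx) for s in scores]
    r = rng.random() * sum(exps)
    for i, e in enumerate(exps):
        r -= e
        if r <= 0:
            return i
    return len(scores) - 1

rng = random.Random(0)
scores = [2.0, 1.0, 0.5]
greedy = {sample(scores, 0.0, rng) for _ in range(100)}  # always the argmax
varied = {sample(scores, 2.0, rng) for _ in range(100)}  # several options
```

At temperature 0 every call yields the same choice, the repeatability an assessment pipeline needs; raising it admits variation, which is where per-user customization (and inconsistency) comes from.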

We’re also interested in better multimodal capabilities, particularly for video. For assessments of skills like effective communication or resilience, being able to create immersive situational assessments with video would greatly enhance the user experience.

What’s your vision for the future with mentorship?

Mentorship is a major focus for us. We’ve identified that effective mentorship consists of three subsystems:

  1. Assessment—Good mentors can accurately assess their mentees’ current abilities
  2. Goal setting—They help mentees dream bigger and set ambitious targets
  3. Guidance—They connect the dots and provide direction

Most people focus only on the guidance aspect, but assessment and goal setting are actually the bigger challenges. Workera is working on creating comprehensive assessments that can measure any skill (starting with AI and trending skills), developing frameworks for setting quantifiable targets rather than just qualitative goals, and then partnering with training providers to guide people through their development.