Designing Pivotal Practice: An AI Roleplay Experience for Manager Growth

Leadership training often stops at theory. I designed a GenAI-powered simulation that let managers actually practice. It helped close $650K+ in deals and led to a 63% increase in learner proficiency.

About Praxis Labs

At Praxis Labs, we’re on a mission to make workplaces work better for everyone.

We’re an AI-powered learning platform that helps people build the critical human skills needed to succeed and lead in today’s workplace.

By combining coaching, roleplay, and skill assessment, we help organizations develop more engaged, high-performing teams.

About Pivotal Practice

Pivotal Practice is an AI-powered learning experience that helps managers safely practice difficult workplace conversations.

By combining coaching, conversational roleplay, and feedback, it provides a safe space for learners to build confidence and strengthen critical skills like feedback, empathy, and communication.

Role

I was the sole product designer for Pivotal Practice, responsible for all design decisions, from product flow to interface to visual design. I led the entire end-to-end experience, collaborating closely with product, engineering, and learning partners.

Team

Product Designer
Product Owner
2 Learning Designers
2-3 Software Engineers
1 QA Engineer

Responsibilities

User Research
Visual Design
Product Design

Design Process

Discover

Building on 1.0 research and feedback, we ran targeted discovery with client design partners and internal audits to refine the product vision.

Define

Aligned on core problem framing and success metrics; prioritized design goals for voice, coaching, and learner experience consistency.

Develop

Rapid prototyping and weekly design sprints with continuous stakeholder feedback to improve the core learner experience.

Deliver

Beta launched with design partner clients (Jan 2025), followed by general availability (April 2025) with a completely new platform + 14 scenarios.

01

Discover

The Problem

Leadership training often fails where it matters most—real-world application.

  • Too much theory, not enough practice

  • Hard to scale across learners and clients

  • No easy way to measure if people were improving

We needed an immersive, scalable, and measurable solution.

02

Define

The Challenge

We built an immersive, scalable, and measurable solution with Pivotal Practice 1.0. The product resonated but had limits:

  • Clients wanted more variety and org-specific customization

  • Learners expected more dynamic, responsive interactions

  • Content creation was slow, relying on manual scripting and voice acting

Our Approach

For Practice 2.0 (Project RAWR), we:

  • Used GenAI to speed up content creation and personalize learning

  • Made conversations feel more dynamic and real

  • Tested ideas fast, learned quickly, and built a scalable immersive learning experience

03

Develop

Evolving the Experience

Concept Validation

Tested early prototypes with buyers and clients to validate core experience flow and solution (setup, voice vs. text, feedback).

Product Refinement

Fast 1–2 week design–build–test cycles. Usability tests with success targets:

  • 70%+ easy to follow

  • 70%+ skill recall

  • 60%+ confident applying skills

  • 80%+ psychological safety

  • 70%+ positive emotion

  • 40%+ very disappointed if removed

Scenario Expansion

Tested the experience across leadership challenges, expanding from feedback and communication to performance coaching and leading team change.

Intro Page

Early testing showed our first intro page didn’t give learners enough context. They felt unclear about the conversation goal and how to know when they had finished.

We redesigned it to better prepare them without overwhelming them.

  • Outlined the scenario and coaching goal

  • Explained which skills they would practice and why

  • Used clear, supportive language and reduced distractions

  • Helped learners enter the simulation with more confidence and less anxiety

Skills Pages

We added a Skills Preview step so learners could see 2–3 key behaviors they’d practice. Early designs let users skip this, but many who skipped felt confused.

We changed the flow to make reviewing skill pages required.

  • Provided skill purpose, plain-language description, and example

  • Allowed experienced learners to quickly tap through if they wanted

  • Prevented frustration by giving everyone a consistent preview

  • Managers liked the quick refresher before starting conversations

04

Deliver

Learner Journey Walkthrough

After multiple rounds of iteration and testing, we created a seamless experience designed for clarity, presence, and psychological safety.

The learner journey flows through five core stages:

  1. Intro Page

  2. Skill Pages

  3. Meeting Simulation

  4. In-the-Moment Coaching

  5. Coaching Report

Intro Page

Learners land on a short, welcoming screen designed to reduce anxiety.

  • Conversational tone

  • Clean, dark-mode layout

  • Character photo provides emotional context

  • Minimal cognitive load to set expectations gently


The goal: foster presence and safety before learners begin.

Skill Page

Learners preview 2–3 inclusive leadership skills they will practice.


Each skill includes:

  • Purpose

  • Plain-language description

  • Quick example

I designed this page to be scannable, structured, and low-pressure. Consistent spacing and clear hierarchy help orient learners without overwhelming them.

Meeting Simulation

I redesigned this screen to feel like a familiar video call, removing clutter from Practice 1.0.

  • Learners speak aloud to an AI character in real time

  • Character speaks first to signal a live conversation (early testing insight)

  • Voice indicators, mic input feedback, and live captions increase trust and accessibility

This simplified, conversational interface helped learners feel confident, present, and focused during practice.

Real-Time Coaching

We added Maya’s real-time coaching to guide learners during conversations—not just afterward.

  • Scenario + goal shown in side panel for reference

  • One automatic tip triggers at ~34 seconds to offer gentle support

  • Learners can request additional tips on demand

  • Coaching panel is collapsible to reduce distraction

This balance gave learners confidence to move forward when stuck, while respecting their flow if they preferred independent practice.
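The trigger behavior described above can be sketched in a few lines. This is a hypothetical illustration, not Praxis code: the constant, state shape, and function name are all assumptions, chosen only to show how a one-time automatic tip at ~34 seconds can coexist with on-demand tips and a collapsible panel.

```typescript
// Hypothetical sketch of the coaching-panel trigger logic.
// AUTO_TIP_DELAY_MS and all names below are illustrative assumptions.

const AUTO_TIP_DELAY_MS = 34_000; // one automatic tip at ~34 seconds

interface CoachingState {
  elapsedMs: number;       // time since the conversation started
  autoTipShown: boolean;   // the automatic tip fires only once
  panelCollapsed: boolean; // learner can collapse the panel to reduce distraction
}

// Decide whether the one-time automatic tip should fire on this tick.
// Learner-requested tips would bypass this check entirely.
function shouldShowAutoTip(state: CoachingState): boolean {
  return (
    !state.autoTipShown &&
    !state.panelCollapsed &&
    state.elapsedMs >= AUTO_TIP_DELAY_MS
  );
}
```

Keeping the gate in one pure function like this makes the "gentle support" behavior easy to tune and test: the delay, the once-only rule, and the collapsed-panel respect are each a single condition.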

Coaching Report

After the session, learners get skill-based feedback.

  • Replaced numeric scores with 5-star ratings to reduce anxiety

  • Feedback organized by skill + “what went well” + “what to improve”

  • Transcript of the conversation added for reflection + trust

  • Learners can revisit key moments for deeper learning and behavior change
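One way to picture the switch from numeric scores to stars is as a simple mapping layer between an internal rubric and the learner-facing report. The sketch below is a minimal assumption-laden illustration (the 0–100 scale, half-star rounding, and all names are invented for this example, not the actual Praxis scoring model):

```typescript
// Hypothetical sketch: hiding an internal rubric score behind a 5-star rating.
// The 0-100 scale and half-star rounding are illustrative assumptions.

interface SkillFeedback {
  skill: string;
  score: number;        // internal rubric score, 0-100 (assumed)
  wentWell: string[];   // "what went well" items
  toImprove: string[];  // "what to improve" items
}

// Round to the nearest half star so learners see "4.5 stars", not "87/100".
function toStars(score: number): number {
  const clamped = Math.max(0, Math.min(100, score));
  return Math.round((clamped / 100) * 5 * 2) / 2;
}

// One report line per skill, keeping strengths and growth areas grouped.
function renderReportLine(fb: SkillFeedback): string {
  return `${fb.skill}: ${toStars(fb.score)} stars | went well: ${fb.wentWell.length} | to improve: ${fb.toImprove.length}`;
}
```

The point of the layer is that the precise number never reaches the learner; only the coarser, less anxiety-inducing star value does.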

05

Impact

Solving the Design Challenge

Help learners practice real conversations

We designed an immersive, voice-based roleplay platform so learners could safely practice difficult workplace conversations. We used GenAI to make the conversations more dynamic and responsive, so interactions felt more natural, like real life.

Enable faster, scalable content customization

We reduced scenario creation time from 2 months to 2 weeks. Clients wanted more variety and org-specific customization, so we developed a new product, Coach Maya, with RAG capabilities to personalize coaching. We also built scenario authoring with Maya to allow learners to create limitless, personally relevant practice scenarios.

Deliver clearer feedback and measure growth

We redesigned the coaching report, added transcripts, and improved feedback to help learners reflect and grow. We consistently heard strong learner feedback and observed meaningful skill growth across client learner populations.

Learner Quotes

This is the best on-demand learning experience for managers I’ve ever used. The opportunity to practice challenging conversations and have real-time discussions that could go anywhere is really something special.

As a newer manager, I’ve already had to have some tough conversations with direct reports, and it’s been tough to know if I’m “doing it right.” This experience practicing having hard conversations with AI was really helpful, as it has allowed me to have space to “mess up”, get feedback, and get more practice in a safe space.

Genuinely helpful practice and preparation for uncomfortable or confrontational coaching situations, and using an incredibly common real world example.

Learner Sentiment

To go beyond anecdotal feedback, we used Hotjar to collect in-session learner insights:

  • Learners rated the experience an average of 4.1 out of 5

  • 70–100% said they’d be “very disappointed” if it were taken away

This reinforced what we’d heard in testing: learners found the experience valuable, emotionally safe, and worth returning to.

Learner Impact

63%+ of learners demonstrated skill growth after Praxis’ genAI-enabled practice and coaching

  • After Praxis’ genAI-enabled coaching, 46% of learners reached “Proficient” or “Excellent” in giving structured feedback, up from 23% at the start*

  • After Praxis’ genAI-enabled coaching, 78% of learners reached “Proficient” or “Excellent” in restating, up from 15% at the start*

06

Learnings

Designing an AI Product

Design flexible systems

We created templates, guardrails, and adaptable flows, not scripts, so the product felt intentional while adapting to unpredictable user inputs.

Design for unpredictability

We built in fallback behaviors and system protections to maintain the learner experience even when AI responses were unexpected.

Test relentlessly

We stress-tested edge cases and adversarial conversations to refine fallback behaviors and protect the learner experience when AI responses were unpredictable.
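The "fallback behavior" idea can be made concrete with a small guardrail sketch: validate the model's reply before it reaches the learner, and substitute a safe in-character line when the response is unusable. Everything here (the checks, thresholds, and fallback text) is an invented illustration, not the actual Praxis safeguards:

```typescript
// Hypothetical sketch of a response guardrail with a fallback line.
// All checks and names are illustrative assumptions.

const FALLBACK_LINE =
  "Hmm, let me think about that for a second. Could you say a bit more?";

// Reject replies that are empty, runaway, or that break character.
// A production system would layer on far more thorough checks.
function isUsableReply(reply: string | null | undefined): boolean {
  if (!reply || reply.trim().length === 0) return false;    // empty response
  if (reply.length > 1200) return false;                    // runaway generation
  if (/as an ai language model/i.test(reply)) return false; // broke character
  return true;
}

// Always return something the learner can respond to.
function safeReply(reply: string | null | undefined): string {
  return isUsableReply(reply) ? reply!.trim() : FALLBACK_LINE;
}
```

The adversarial testing described above is what tunes checks like these: each edge case found in stress testing becomes another condition the guardrail catches before the learner ever sees it.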

Designing with AI

Experiment constantly

I use AI across writing, research, visuals, prototyping, and testing to explore where GenAI excels and where it struggles.

Stay current

I subscribe to newsletters from UX and education leaders who track AI advancements to continuously improve my approach.

Work with AI as a creative partner

AI helps me move faster, unstick design problems, and explore new ideas. I see AI as both collaborator and co-creator in my design process.

AI Team Collaboration

Create fast feedback loops

We organized as a GenAI Tiger Team using a Build + Recon model. The Build team prototyped weekly; the Recon team gathered external feedback bi-weekly to refine priorities.

Embrace ambiguity and experimentation

Traditional roles blurred: designers, PMs, and learning scientists collaborated on prompt design and prototyping. We defined “good enough” as “not obviously wrong” to ship quickly and keep momentum.

Build rituals to stay aligned

We held daily check-ins, weekly prioritization meetings, and cross-team collaboration sessions. During early development, we ran Tiger Feast, a weekly review of user feedback and test videos to guide iteration.