Designing Pivotal Practice: An AI Roleplay Experience for Manager Growth

Managers rarely get to practice the hardest parts of leadership—giving feedback, managing conflict, navigating defensiveness. Pivotal Practice makes that possible through AI-powered roleplays that turn theory into action. As the sole designer, I reimagined how managers practice and measure real-world skills—launching with enterprise clients like Amazon, Salesforce, and Uber, driving 23–63% skill growth, 86% confidence gains, and $1.5M in new and renewal revenue.

A diverse group of four young adults working together in a modern conference room, with one woman speaking, while others use laptops and a smartphone, and a person in the background holding a notebook.

About Praxis Labs

Praxis Labs is an AI-powered learning platform on a mission to make workplaces work better for everyone. Through coaching, roleplay, and skill assessment, we help people develop the critical human skills needed to succeed as leaders. Organizations use our platform to build these skills at scale and drive higher engagement and performance.

A woman smiling in an office setting, with feedback messages about project issues and team impact displayed around her.

About Pivotal Practice

Pivotal Practice is an AI-powered roleplay experience that helps managers practice the conversations that matter most — giving feedback, coaching performance, and navigating conflict. It provides a safe, measurable way to build inclusive leadership skills.

Role

Sole designer for Pivotal Practice, leading design from discovery to launch. I partnered with product, engineering, and learning science to turn research into a scalable AI-powered experience.

Team

1 Product Owner
2 Learning Designers
2-3 Software Engineers
1 QA Engineer

Responsibilities

User Research
Usability Testing
Product Design
Visual Design
Design Systems

Design Process

Discover

Building on 1.0 research and feedback, we ran targeted discovery with client design partners and internal audits to refine the product vision


Define

Aligned on core problem framing and success metrics; prioritized design goals for voice, coaching, and learner experience consistency


Develop

Rapid prototyping and weekly design sprints with continuous stakeholder feedback to improve the core learner experience


Deliver

Beta launched with design partner clients (Jan 2025), followed by general availability (April 2025) with a completely new platform + 14 scenarios

01

Discover

Man sitting at a desk, resting his head on his hand, in an office environment

The Problem

Leadership training often fails where it matters most—real-world application.

  • Too much theory, not enough practice

  • Hard to scale across learners and clients

  • No easy way to measure if people were improving

We needed an immersive, scalable, and measurable solution.

02

Define

A person with black hair wearing a red plaid shirt is sitting at a wooden desk, with a virtual avatar on a computer screen in front of him, in an office setting with a white wall and a door in the background.

The Challenge

We built an immersive, scalable, and measurable solution with Pivotal Practice 1.0. The product resonated but had limits:

  • Clients wanted more variety and org-specific customization

  • Learners expected more dynamic, responsive interactions

  • Content creation was slow, relying on manual scripting and voice acting

Laptop screen displaying a virtual coaching session with the AI coach Maya and a participant named Matt, with a sidebar and options for feedback and ending the meeting.

Our Approach

For Practice 2.0, we reimagined the product using GenAI to:

  • Speed up content creation and enable org-specific customization

  • Make conversations more dynamic, human, and responsive

  • Test quickly and build a scalable, immersive learning experience that’s measurable and ROI-driven

03

Develop

Evolving the Experience

Phase 1: Concept Validation

Tested early text and voice prototypes with learners and clients to validate memory, summaries, and just-in-time coaching flow.


Phase 2: Product Refinement

Fast 1–2 week design–build–test cycles. Usability tests with success targets:

  • 80%+ ease of use

  • 70%+ coaching helpfulness

  • 70%+ memory recall

  • 70%+ would continue using

  • 40%+ “very disappointed if removed”


Phase 3: Scenario Expansion

Tested the experience across leadership challenges, expanding from feedback on communication to performance coaching and leading team change.

A computer screen displaying a virtual training session titled 'Q2 Marketing Strategy Session' featuring two participants, Matt and Valerie, in circular profile pictures. The left sidebar contains instructions and goals for giving structured feedback, with options to proceed to the next step.

Intro Page

Early testing showed our first intro page didn’t give learners enough context. They felt unclear about the conversation goal and how to know when they had finished.

We redesigned it to better prepare them without overwhelming them.

  • Outlined the scenario and coaching goal

  • Explained which skills they would practice and why

  • Used clear, supportive language and reduced distractions

  • Helped learners enter the simulation with more confidence and less anxiety

Screenshot of an online training platform with a dark themed interface. On the left side, a black panel lists skills like Providing Structured Feedback, Asking Questions, and Offering Affirmations. The right side displays a virtual meeting labeled 'Q2 Marketing Strategy Session' with two avatars named Matt and Valerie, both with distinct backgrounds and appearances.

Skills Pages

We added a Skills Preview step so learners could see 2–3 key behaviors they’d practice. Early designs let users skip this, but many who skipped felt confused.

We changed the flow to make reviewing skill pages required.

  • Provided skill purpose, plain-language description, and example

  • Allowed experienced learners to quickly tap through if they wanted

  • Prevented frustration by giving everyone a consistent preview

  • Managers liked the quick refresher before starting conversations

Computer screen showing a chat window with messages between Matt and Lana, and options for instruction, skills guide, captions, and ending practice at the bottom.

Voice vs. Text Interaction

We started with text-based roleplays to test and validate the core experience. Once stable, we explored adding voice to make conversations feel more natural and immersive.


At one point, we supported both options, but learner feedback and our instincts aligned:

  • Text felt flat and didn’t differentiate us

  • Voice was more realistic and better for building communication skills

  • Voice helped learners prepare for real workplace conversations

Screenshot of a virtual coaching session with a male avatar named Matt, a chat window with AI coach Maya offering skill improvement tips, and various interface options including show instructions, skills guide, captions toggle, and end practice button.

Meeting Simulation

Early user testing revealed that many learners didn’t realize they should speak aloud, or couldn’t tell whether their audio was being picked up.

We redesigned it to feel more natural and intuitive, like a real video meeting:

  • Simplified the layout to remove unnecessary UI elements

  • Had the AI character speak first to signal it was a live conversation

  • Added voice activity indicators and mic input feedback for clarity

  • Included live captions to support accessibility and reinforce what was said

Collage of six diverse professionals, three men in the top row and three women in the bottom row, each in different modern office or workspace environments.

Images vs. Animation

We explored bringing back animated characters like those we had used in earlier products. In testing, however, learners responded more positively to realistic still images, so we moved forward with AI-generated still photos for characters.

  • Animation had technical issues: jittery lip sync, missing features, inconsistent rendering

  • Learners said still images felt more believable and less distracting

  • Voice acting created enough emotional presence without needing animation

  • Photo-based characters allowed faster scenario creation and scaled better across new scenarios

A computer screen displaying a digital interface from Praxis Labs with a pop-up message about recognizing defensive behavior, featuring options to see an example or acknowledge.

Real-Time Coaching

Early on, learners sometimes struggled to move conversations forward or understand their goal. We introduced Maya’s real-time coaching to provide support in the moment. This became a differentiator from other tools.

  • Repeated the scenario and coaching goal in the side panel for reference

  • Delivered one automatic tip at around the 34-second mark to avoid disruption

  • Allowed learners to request additional tips on demand

  • Redesigned coaching as a side panel to reduce visual clutter and distractions

Multiple testing rounds helped us fine-tune the tone, timing, and overall experience.

Screenshot of a coaching report from Praxis Labs with feedback and skill measurement details for a user named Lana.

Coaching Report

In early demos, clients weren’t sure how we calculated skill scores or what we measured. The original overview was too vague, and the feedback wasn’t actionable enough. I redesigned the report to:

  • Show how well the learner achieved the overall goal

  • Group skill measurement by specific skills

  • Add a transcript so learners could review where to improve

  • Replace unclear numeric scores with intuitive star ratings

Learners found it valuable to reflect on their responses, and the feedback felt more trustworthy in context. We also saw increased engagement with the report.

A color palette chart with nine shades of grey labeled from Grey 50 to Grey 900 on the left. The right side contains text samples with various font styles and sizes, including headings such as "Display Small," "Headline Large," "Title Large," "Body X-Large," "Body Large," and a paragraph of placeholder text in Latin.

Visual Design

We needed a system that supported learning without distracting from it. I designed a visual design system that reduced cognitive load, aligned with our product brand, and applied accessibility best practices.

  • Minimized visual noise and simplified layouts to lower cognitive load

  • Applied consistent patterns for hierarchy, typography, spacing, and reusable components

  • Used a dark theme with off-white text to reduce eye strain and keep focus on the conversation

  • Ensured accessibility with left-aligned text, 16px minimum font size, high contrast, and generous spacing
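The accessibility and consistency rules above can be captured as design tokens. This is a minimal sketch in TypeScript; the token names and exact values are illustrative assumptions, not the production system.

```typescript
// Illustrative design tokens for the rules described above.
// Names and exact values are assumptions, not the real design system.
const tokens = {
  color: {
    background: "#121212", // dark theme to reduce eye strain
    text: "#F5F5F0",       // off-white text for high contrast
  },
  typography: {
    minFontSizePx: 16,               // 16px minimum font size
    textAlign: "left" as const,      // left-aligned text for readability
  },
  spacing: {
    base: 8,                         // generous, consistent spacing scale
    scale: [8, 16, 24, 32, 48],
  },
};
```

Encoding these constraints once, rather than per screen, is what keeps hierarchy and spacing consistent as new scenarios are added.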

Collage of six diverse professionals in office settings, three women in the top row and three men in the bottom row.

Character Design

To make practice feel more immersive, I translated character backstories and scenarios into visuals and voice design that reinforced the narrative.

  • Established a visual prompt library to generate diverse, high-quality character images with subtle cues that reflected each scenario

  • Created a voice design prompt library so tone and language matched character roles, backstories, and context

  • Iterated continually to maintain quality and consistency as AI models evolved (ImageFX, Hume EVI3, Claude Sonnet)

These choices helped learners immediately understand the dynamics of each scenario and made conversations feel more authentic and engaging.

04

Deliver

Laptop screen displaying a website for Praxis Labs with a man in a red checkered shirt smiling in a modern office setting, promoting a course on giving feedback on communication.

Learner Journey Walkthrough

After multiple rounds of iteration and testing, we created a seamless experience designed for clarity, presence, and psychological safety.

The learner journey flows through five core stages:

  1. Intro Page

  2. Skill Pages

  3. Meeting Simulation

  4. In-the-Moment Coaching

  5. Coaching Report

Laptop screen showing a virtual feedback training session with a man in a red checkered shirt standing in a modern office. The page titled 'Let's practice giving structured feedback' from PRAXIS LABS includes a scenario, goals, skills, and a 'Next' button.

Intro Page

Learners land on a short, welcoming screen designed to reduce anxiety.

  • Conversational tone

  • Clean, dark-mode layout

  • Character photo provides emotional context

  • Minimal cognitive load to set expectations gently


The goal: foster presence and safety before learners begin.

Laptop displaying a presentation slide titled 'Provide structured feedback' with a photo of a man in a red checkered shirt standing in a modern office with large windows and plants.

Skill Page

Learners preview 2–3 inclusive leadership skills they will practice.


Each skill includes:

  • Purpose

  • Plain-language description

  • Quick example

I designed this page to be scannable, structured, and low-pressure. Consistent spacing and clear hierarchy help orient learners without overwhelming them.

Laptop computer screen displaying an online coaching video call with two participants, Matt and Lana, in a dark-colored interface. The left side has a chat window with a greeting and options for tips and goals, while the right side shows their video feeds and a button to end the meeting.

Meeting Simulation

I redesigned this screen to feel like a familiar video call, removing clutter from Practice 1.0.

  • Learners speak aloud to an AI character in real time

  • Character speaks first to signal a live conversation (early testing insight)

  • Voice indicators, mic input feedback, and live captions increase trust and accessibility

This simplified, conversational interface helped learners feel confident, present, and focused during practice.

Laptop screen displaying a virtual meeting with a man named Matt and a participant named Lana, along with coaching interface elements and options.

Real-Time Coaching

We added Maya’s real-time coaching to guide learners during conversations—not just afterward.

  • Scenario + goal shown in side panel for reference

  • One automatic tip triggers at ~34 seconds to offer gentle support

  • Learners can request additional tips on demand

  • Coaching panel is collapsible to reduce distraction

This balance gave learners confidence to move forward when stuck, while respecting their flow if they preferred independent practice.
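The tip-timing behavior described above can be sketched in a few lines of TypeScript. This is a hypothetical illustration: names like `CoachingPanel` and `TIP_DELAY_MS` are invented here, not taken from the actual codebase.

```typescript
// Hypothetical sketch of the coaching-tip timing logic described above.
const TIP_DELAY_MS = 34_000; // one automatic tip around the 34-second mark

class CoachingPanel {
  private tips: string[] = [];
  private autoTipFired = false;

  // Called periodically with elapsed session time; fires the automatic
  // tip exactly once, after the delay, so learners aren't interrupted early.
  tick(elapsedMs: number): void {
    if (!this.autoTipFired && elapsedMs >= TIP_DELAY_MS) {
      this.autoTipFired = true;
      this.tips.push("auto-tip");
    }
  }

  // Learners can request additional tips on demand at any time.
  requestTip(): void {
    this.tips.push("on-demand-tip");
  }

  get shownTips(): readonly string[] {
    return this.tips;
  }
}
```

Keeping the automatic tip to a single, delayed trigger while leaving further tips learner-initiated is what preserves flow for those who prefer independent practice.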

Laptop screen displaying a dark-themed skill measurement report with sections on feedback, skill level, strengths, and improvement tips on a black background.

Coaching Report

After the session, learners get skill-based feedback.

  • Replaced numeric scores with 5-star ratings to reduce anxiety

  • Feedback organized by skill + “what went well” + “what to improve”

  • Transcript of the conversation added for reflection + trust

  • Learners can revisit key moments for deeper learning and behavior change
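One way the numeric-to-star conversion might work is sketched below; the actual scoring pipeline and thresholds are not described in this case study, so treat this as an illustrative assumption.

```typescript
// Hypothetical mapping from an internal 0–100 skill score to the
// 5-star rating shown in the report. Thresholds are illustrative only.
function toStars(score: number): number {
  const clamped = Math.max(0, Math.min(100, score));
  // Round to the nearest half star on a 1–5 scale, with a floor of 1 star.
  return Math.max(1, Math.round((clamped / 100) * 5 * 2) / 2);
}
```

A floor of one star reflects the anxiety-reduction goal: the report frames every attempt as a starting point rather than a zero.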

05

Impact

Solving the Challenge

Business Impact

Pivotal Practice 2.0 delivered strong outcomes for both clients and the business:

  • Re-engaged accounts at risk of churn and unlocked stalled deals

  • Generated $1.5M in new and renewal deals within 6 months of beta

  • Adopted by enterprise clients including Amazon, Salesforce, Uber, Accenture, ADP, Conagra, and Broadridge

  • Reduced scenario creation time from 2 months → 2 weeks


Learner Impact

The experience consistently drove meaningful skill growth and confidence for managers across programs:

  • 86% reported higher confidence applying skills on the job

  • 23–63% skill proficiency growth across feedback, restating, and validating emotions

  • 90% said the experience improved their job performance

  • 89% believed broader adoption would positively impact team performance


Learner Sentiment

Learners rated the experience highly across multiple programs, often exceeding industry benchmarks:

  • Amazon: 89/100 satisfaction (vs. 64 eLearning benchmark)

  • Conagra: 88% completion; 95.7% found modules actionable, 87% highly engaging

  • Broadridge: 76/100 satisfaction across 4,300+ sessions

  • Hotjar surveys averaged 4.4/5 satisfaction, with 88% using it to prepare for future conversations and 90% saying they’d be disappointed if it were removed

Learner Quotes


This is the best on-demand learning experience for managers I’ve ever used. The opportunity to practice challenging conversations and have real-time discussions that could go anywhere is really something special.



As a newer manager, I’ve already had to have some tough conversations with direct reports, and it’s been tough to know if I’m “doing it right.” This experience practicing hard conversations with AI was really helpful, as it has allowed me to have space to “mess up,” get feedback, and get more practice in a safe space.



Genuinely helpful practice and preparation for uncomfortable or confrontational coaching situations, and using an incredibly common real world example.

06

Learnings

Designing an AI Product

Design flexible systems


Design for unpredictability


Test relentlessly

We created templates, guardrails, and adaptable flows, not scripts, so the product felt intentional while adapting to unpredictable user inputs.

We built in fallback behaviors and system protections to maintain learner experience even when AI responses were unexpected.

We stress-tested edge cases and adversarial conversations to refine fallback behaviors and protect learner experience when AI responses were unpredictable.
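The fallback behavior described above can be sketched as a guardrail wrapper around the model call. Everything here, `validateReply`, the banned-phrase check, the retry count, and the recovery line, is an illustrative assumption rather than the production logic.

```typescript
// Illustrative sketch of a fallback guardrail for unpredictable AI replies.
// Validation rules and the recovery line are assumptions, not the real product.
const FALLBACK_REPLY = "Let's get back to our conversation. Where were we?";

// A reply passes if it is non-empty, within length bounds, and stays
// in character (approximated here with a simple banned-phrase check).
function validateReply(reply: string): boolean {
  const bannedPhrases = ["as an ai", "i cannot role-play"];
  const text = reply.trim().toLowerCase();
  return (
    text.length > 0 &&
    text.length <= 1200 &&
    !bannedPhrases.some((p) => text.includes(p))
  );
}

// Wrap the raw model call: retry a bounded number of times, then fall
// back to a safe, in-character recovery line so the simulation never stalls.
async function guardedReply(
  generate: () => Promise<string>,
  retries = 2
): Promise<string> {
  for (let i = 0; i <= retries; i++) {
    const reply = await generate();
    if (validateReply(reply)) return reply;
  }
  return FALLBACK_REPLY;
}
```

Stress-testing with adversarial conversations is what surfaces which checks belong in the validator, and which failures need a scripted recovery rather than another retry.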

Designing with AI

Experiment constantly


Stay current


Work with AI as a creative partner

I use AI across writing, research, visuals, prototyping, and testing to explore where GenAI excels and where it struggles.

I subscribe to newsletters from UX and education leaders who track AI advancements to continuously improve my approach.

AI helps me move faster, unstick design problems, and explore new ideas. I see AI as both collaborator and co-creator in my design process.

AI Team Collaboration

Create fast feedback loops


Embrace ambiguity and experimentation


Build rituals to stay aligned

We organized as a GenAI Tiger Team using a Build + Recon model. The Build team prototyped weekly; the Recon team gathered external feedback bi-weekly to refine priorities.

Traditional roles blurred: designers, PMs, and learning scientists collaborated on prompt design and prototyping. We defined “good enough” as “not obviously wrong” to ship quickly and keep momentum.

We held daily check-ins, weekly prioritization meetings, and cross-team collaboration sessions. During early development, we ran Tiger Feast, a weekly review of user feedback and test videos to guide iteration.