How to apply product management principles to learning design

Transform how you design learning by thinking like a product manager. This breakdown shows you how to move from traditional training programs to targeted, evolving solutions that drive real performance change. Learn ways to test ideas quickly, gather meaningful feedback, and scale what works.

I love finding frameworks that help us think differently about learning and development—especially ones that push us beyond traditional L&D approaches. That's why I was excited to discover this playbook from Kinfolk, which shows how we can apply product management principles to create more impactful people experiences.


*This resource by Kinfolk is recommended for its educational value and is not an EDU Fellowship original work. All rights belong to the original creators.

While written primarily for People/HR teams, the insights are incredibly relevant for L&D professionals. The big shift it advocates is moving away from one-off programs and checklist-driven processes to treating our learning initiatives like products—with clear customer problems to solve, iterative development, and measurable impact.

The playbook introduces concepts like the PX (People Experience) Flywheel and the Double Diamond framework, showing how we can use product thinking to identify the most meaningful opportunities, develop targeted solutions, and continuously improve based on honest feedback.

What I find particularly valuable is how it emphasizes starting small, working fast, and letting data guide our decisions - rather than trying to build perfect programs from the start. This represents a mindset shift in how we approach L&D work: from being program administrators to becoming product managers of the learning experience.

💡
Please note: The following insights reflect my personal interpretations, reflections, and advice based on the original resource. This breakdown may incorporate AI-assisted tools to help organize thoughts and expand on content for clarity. While I aim to highlight key points and offer valuable takeaways, it may not capture all aspects or perspectives of the original material. I encourage you to engage with the resource directly to form your own understanding and conclusions.

💭 Storytime: The Tale of Two Kitchens

Think of traditional HR/L&D like a restaurant that creates its entire menu based on what the kitchen thinks people should eat, prepares everything in advance, and hopes people will like it. Each dish is carefully planned and perfectly plated, but diners often leave food untouched or wish they could customize their meals.

Now imagine transforming that restaurant into a modern kitchen that constantly analyzes order data, tests new recipes in small batches, and iterates based on real-time feedback. They start with minimal viable menus, watch what sells, gather feedback, and adjust quickly. They're not afraid to remove unpopular items or modify recipes. Most importantly, they're always asking: "What problem are we solving for our hungry customers?" Sometimes the solution isn't even a new dish - it might be better delivery timing or different packaging.

This is the shift from traditional L&D to product-thinking L&D. Instead of creating perfect programs in isolation, we're constantly cooking up solutions, testing them, and refining based on actual usage and impact.


🥡 How to apply product management principles to L&D


Design learning experiences like digital products, not training programs

The shift from program-centric to product-centric thinking changes how we create learning solutions. When I look at successful tech products, they don't start with features - they start with user problems. Think about how apps like Duolingo approach language learning - they start with user behavior, test small features, and build based on actual usage patterns.

We can bring this same mindset to corporate learning by focusing on learner behavior first and solutions second. Instead of beginning with content or curriculum, we need to deeply understand the challenges our learners face and build solutions that evolve based on their actual needs and behaviors. This means getting comfortable with launching simpler solutions faster and improving them through real feedback, rather than trying to build the perfect program from the start.

How to start thinking and working like a product manager in L&D:

  • Create a product canvas for each major learning initiative - spend time mapping out who this solution is really for, what specific problem it solves, and how you'll measure success. Include key metrics, learner segments, and expected outcomes (one lightweight way to capture this is sketched after this list).
  • Set up monthly learning feedback sessions with a mix of learners, managers, and stakeholders. These sessions should focus on specific challenges and how current solutions are or aren't helping. Keep the groups small (5-7 people) and the format consistent. Document patterns and insights that emerge.
  • Implement a systematic way to track how people actually use your learning solutions. Look at which resources they return to, what they skip, where they spend the most time. This could be through LMS data, observation, or regular check-ins. Use this data to inform your next iteration.
  • Start with a "minimum viable learning experience" (MVLE). Launch with just enough content and structure to solve one specific performance problem. For example, instead of a full sales training curriculum, start with one critical conversation scenario and build from there based on actual usage.
  • Develop what I call a "learning improvement cycle" - document each version of your solution, what you changed based on learner feedback, and what impact those changes had. This creates a clear story of evolution and impact.
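If it helps to keep these canvases and improvement-cycle logs consistent across initiatives, here's a minimal sketch in Python of how one could be captured as structured data. The field names and example values are my own illustrative assumptions, not something prescribed by the playbook.

```python
# A minimal sketch of a "learning product canvas" captured as structured data.
# Field names and example values are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class LearningProductCanvas:
    initiative: str                      # e.g. "New manager onboarding"
    learner_segments: list[str]          # who this solution is really for
    problem_statement: str               # the specific performance problem it solves
    success_metrics: list[str]           # how you'll measure success
    expected_outcomes: list[str]         # the behavior/performance change you expect
    iterations: list[str] = field(default_factory=list)  # the "learning improvement cycle" log

    def log_iteration(self, change: str, feedback: str, impact: str) -> None:
        """Record one cycle: what changed, which feedback drove it, and what happened."""
        self.iterations.append(f"change: {change} | feedback: {feedback} | impact: {impact}")

canvas = LearningProductCanvas(
    initiative="1:1 conversations toolkit",
    learner_segments=["new engineering managers"],
    problem_statement="New managers struggle to run useful 1:1s in their first 90 days",
    success_metrics=["toolkit referenced before 1:1s", "manager-reported confidence"],
    expected_outcomes=["clearer feedback conversations", "fewer HR escalations"],
)
canvas.log_iteration("added example scenarios", "managers wanted concrete language", "templates reused weekly")
print(canvas.problem_statement, "-", len(canvas.iterations), "iteration(s) logged")
```

A shared spreadsheet with the same columns works just as well; the point is that every initiative answers the same questions and keeps a visible iteration history.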

Treat discovery as a learning experiment, not a needs analysis

I've noticed that in L&D, we often jump straight to solutions when someone requests training. The Double Diamond framework flips this approach on its head. It pushes us to spend more time understanding problems before designing solutions. Through the discover and define phases, we often uncover that the real learning need is different from the initial request. Then in develop and deliver, we can create targeted solutions that actually move the needle on performance.

How to transform your discovery process into an experimental learning lab:

  • Create a "learning hypothesis journal." Instead of a standard training needs analysis, document your assumptions as testable hypotheses. For example: "We think sales reps aren't hitting targets because they lack negotiation skills" becomes "If we observe sales calls, we'll find missed negotiation opportunities." Then go test it. I've often found the real gaps aren't what we initially assumed.
  • Run "day-in-the-life shadowing" sessions. Pick 3-5 people who represent your target audience. Follow them through their work day, noting pain points, workarounds, and moments of excellence. Pay special attention to the tools they actually use versus what they skip. This gives you environmental context you'd never get from a survey.
  • Map the current informal learning paths. Before designing anything new, understand how people currently learn to do their jobs. What resources do they actually use? Who do they go to for help? These natural learning flows often reveal the best ways to embed new solutions.
  • Test your problem assumptions immediately. If you think people need help with difficult conversations, watch some conversations. If you think managers need coaching skills, observe their 1:1s. Getting direct exposure to the problem helps separate symptoms from root causes.
  • Set up clear success metrics that link to business outcomes. Move beyond completion rates to metrics that show actual behavior change or performance improvement. Make these visible to stakeholders and review them regularly.
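For teams that want something more structured than a notebook, here's a minimal sketch of a hypothesis journal entry in Python. The fields, statuses, and example values are illustrative assumptions rather than anything the playbook prescribes.

```python
# A minimal sketch of a "learning hypothesis journal" entry.
# Fields, statuses, and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    assumption: str       # the belief behind the training request
    test: str             # the cheapest observation that could confirm or refute it
    evidence: str = ""    # what you actually saw
    status: str = "open"  # open | supported | refuted

journal = [
    Hypothesis(
        assumption="Sales reps miss targets because they lack negotiation skills",
        test="Shadow five sales calls and note missed negotiation opportunities",
    ),
]

# After the shadowing sessions, record what you found and update the status.
journal[0].evidence = "Reps negotiated well but lacked current pricing information"
journal[0].status = "refuted"

for h in journal:
    print(f"[{h.status}] {h.assumption} -> {h.evidence or 'not yet tested'}")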

Start smaller to solve problems faster

  • Break down large learning initiatives into smaller testable chunks. Instead of launching a complete leadership academy, start with one critical skill that managers need right now. For example, if feedback conversations are a pain point, build a focused solution just for that. Test it, refine it, then add more components based on what works.
  • Create what the PDF calls "tight feedback loops" but for learning impact. Run 2-week experiments where people try applying new skills or knowledge, then gather immediate feedback about what helped and what got in the way. These quick cycles tell you more than end-of-course surveys ever will.
  • Build your "learning beta group" - a mix of early adopters and skeptics who'll test your solutions and give honest feedback. The key is making it safe for them to tell you what's not working. Their insights will help you spot problems before you scale.
  • Start with tools and resources people can use immediately in their work. The PDF talks about reducing friction - in L&D terms, this means creating job aids, quick reference guides, or practice scenarios that directly connect to people's daily tasks. Perfect for testing what actually helps improve performance.
  • Document your iteration story - what you started with, what feedback you got, what you changed, and what impact it had. This builds credibility with stakeholders and gives you evidence for future approaches. Plus, it helps you spot patterns in what works for your organization.

Measure what matters, not what's easy

The PDF talks about leading and lagging indicators, and this completely changes how we should think about learning measurement. But here's what's missing for L&D: we need to connect immediate learning engagement with actual performance change. It's not enough to know people completed something - we need to understand if and how they're using it to work differently.

How to evolve your learning measurement approach:

  • Create what I call "learning signal metrics" - early indicators that your solution is working. This goes beyond completion rates. Look for things like how often people reference a resource, moments where they apply a new skill, or questions they ask their managers. These signals help you adjust before waiting for major performance metrics to shift.
  • Set up regular performance touchpoints with managers. The PDF suggests stakeholder check-ins, but for L&D, we need to be more specific. Every month, have short conversations with managers about where they're seeing behavior change (or not) from recent learning initiatives. Use their observations to guide your next iterations.
  • Track both learning adoption and business impact together. For example, if you're training sales teams, monitor not just their engagement with the learning content, but also their win rates, deal sizes, and customer feedback. Look for patterns between how people engage with learning and how their performance changes.
  • Build a simple dashboard that tells your learning impact story. Include metrics at three levels: engagement (are people using it?), application (are they doing anything differently?), and impact (is it improving performance?). This helps stakeholders see the connection between learning activities and business results (a bare-bones version is sketched after this list).
  • Document unexpected outcomes and insights. Sometimes the biggest impact comes from places you weren't looking. Keep track of surprising ways people use your solutions or unexpected benefits they report. These insights often guide your next best move.
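To make the three-level idea concrete, here's a bare-bones sketch in Python. The record structure and yes/no signals are illustrative assumptions; in practice the data might come from your LMS, manager check-ins, or a simple spreadsheet.

```python
# A minimal sketch of the three-level impact view: engagement, application, impact.
# The record structure and the example data are illustrative assumptions.
records = [
    # learner, opened the resource?, applied it on the job?, manager saw performance change?
    {"learner": "A", "engaged": True,  "applied": True,  "impact": True},
    {"learner": "B", "engaged": True,  "applied": False, "impact": False},
    {"learner": "C", "engaged": True,  "applied": True,  "impact": False},
    {"learner": "D", "engaged": False, "applied": False, "impact": False},
]

def rate(key: str) -> float:
    """Share of learners for whom the signal is true."""
    return sum(r[key] for r in records) / len(records)

dashboard = {
    "engagement (are people using it?)": rate("engaged"),
    "application (are they doing anything differently?)": rate("applied"),
    "impact (is it improving performance?)": rate("impact"),
}

for level, value in dashboard.items():
    print(f"{level}: {value:.0%}")
```

The absolute numbers matter less than the shape: high engagement with low application usually means the solution is being consumed but isn't usable in the flow of work.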

Scale what works, not what looks good

The PDF emphasizes user adoption and scaling successful solutions. In L&D, we often do the opposite - we try to scale programs before we know if they actually change behavior. Based on the product approach, we should only scale learning solutions that show clear evidence of impact. And here's what's crucial: scaling doesn't always mean making something bigger - sometimes it means making it more focused or accessible.

How to scale learning solutions effectively:

  • Create a learning impact threshold. Before expanding any solution, set clear criteria for what "working" looks like. I learned this from product teams - they don't scale features unless they hit specific success metrics. For learning, define what behavior change or performance improvement you need to see before expanding (a simple version of this gate is sketched after this list).
  • Look for organic adoption patterns. The PDF talks about user engagement, but for L&D this is particularly telling. When people seek out and share learning resources without being prompted, you've found something worth scaling. Pay attention to which tools or resources spread naturally through teams.
  • Build what product teams call a "scaling roadmap". Map out how you'll expand successful learning solutions in phases. Each phase should have clear triggers (what success looks like) and specific plans for maintaining quality as you grow. This prevents the common L&D trap of diluting impact during expansion.
  • Document your "scaling story" - not just what worked, but why it worked and what conditions were necessary for success. When a pilot program succeeds, study the environment that enabled that success. What support did managers provide? What made people engage? This context is crucial for successful scaling.
  • Focus on maintaining impact while increasing reach. Sometimes scaling means creating multiple focused versions rather than one big program. For example, if a sales training works well, you might need different versions for different regions or product lines rather than one generic global version.
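As one way to make that threshold explicit, here's a minimal sketch in Python of a go/no-go gate. The criteria names and threshold values are illustrative assumptions you would agree with stakeholders up front, not numbers from the playbook.

```python
# A minimal sketch of a "learning impact threshold" gate before scaling a solution.
# Criteria names and threshold values are illustrative assumptions.
pilot_results = {
    "organic_adoption_rate": 0.45,   # share of the pilot group using the resource unprompted
    "behavior_change_rate": 0.30,    # share where managers report observable behavior change
    "performance_signal": True,      # any movement in the business metric you care about
}

thresholds = {
    "organic_adoption_rate": 0.40,
    "behavior_change_rate": 0.25,
    "performance_signal": True,
}

def ready_to_scale(results: dict, criteria: dict) -> bool:
    """Return True only if every pre-agreed criterion is met."""
    return all(
        results[k] >= v if isinstance(v, float) else results[k] == v
        for k, v in criteria.items()
    )

print("Scale next phase:", ready_to_scale(pilot_results, thresholds))
```

The gate itself is trivial; the value is in forcing the criteria to be written down before the pilot starts, not after.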

Build capability, not just content

The PDF discusses organizational readiness and team capabilities, but for L&D this goes deeper. We need to shift from being content creators to capability builders. This means developing the organization's ability to learn and adapt, not just delivering training programs. The real win isn't in any single program - it's in how we help the organization get better at learning itself.

How to move from content delivery to capability building:

  • Map your organization's "learning ecosystem." Look at where real learning happens now - formal training, sure, but also peer networks, tools, resources, and on-the-job experiences. Understanding this ecosystem helps you spot gaps and opportunities to strengthen natural learning flows.
  • Build learning support networks. The PDF mentions stakeholder management, but we need to go further. Identify and enable your organization's natural teachers - the people others go to for help. Give them tools and support to share their knowledge more effectively. These networks often drive more impact than formal programs.
  • Create feedback mechanisms that drive continuous improvement. Instead of just gathering feedback on programs, set up ways for the organization to spot and respond to learning needs faster. This might mean regular skill-gap discussions with managers or quick pulse checks on emerging challenges.
  • Develop what I call "learning accelerators" - tools, templates, and processes that help teams learn from their own experiences. Show teams how to run effective debriefs, document lessons learned, and share insights across groups. This builds learning into the workflow rather than making it a separate event.
  • Track organizational learning maturity, not just program metrics. Look at how quickly teams adapt to challenges, how effectively they share knowledge, and how well they apply learning to improve performance. These indicators show if you're building true learning capability.

Common questions about implementing Product Management principles in L&D

Q: I work in a traditional organization that expects full, polished training programs. How do I start shifting to this more iterative approach?

A: Start small and demonstrate wins. Pick one upcoming project where stakeholders are open to experimentation. Frame it as a pilot to "maximize impact" rather than a complete process change. Show how getting early feedback saves resources and improves results. Document everything - especially the improvements you make based on user feedback. This evidence helps make the case for more iterative approaches.

Q: How do I balance the need for quick iterations with our compliance and quality requirements?

A: Separate your learning initiatives into two tracks. Compliance and mandatory training keep their rigorous development process. For skill development and performance support, use the product approach. You can also build quality checks into your iteration process - they just happen throughout development rather than only at the end. This hybrid approach often works better than trying to change everything at once.

Q: We don't have sophisticated analytics tools. How can we still make data-driven decisions?

A: Start with simple, manual tracking that focuses on behavior change. Create quick feedback forms in Microsoft Forms or Google Forms. Have regular check-ins with managers about what they're observing. Track questions and requests that come to your team. Even basic Excel tracking can reveal patterns in how people use your learning solutions. The key is consistency in collecting and reviewing whatever data you can access.
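And if someone on the team is comfortable with a few lines of Python, even a plain spreadsheet export can be tallied automatically. This sketch assumes a hypothetical learning_feedback.csv with "team" and "topic" columns - adapt the names to whatever form or tracker you actually use.

```python
# A minimal sketch of low-tech tracking: tally feedback and requests from a CSV export.
# The file name and column names ("team", "topic") are hypothetical - adapt to your own form.
import csv
from collections import Counter

topics = Counter()
with open("learning_feedback.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        topics[(row["team"], row["topic"])] += 1

# The most frequent (team, topic) pairs show where support is needed most right now.
for (team, topic), count in topics.most_common(5):
    print(f"{team}: {topic} ({count} mentions)")
```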

Q: My team is already stretched thin. How do we manage this more iterative approach without burning out?

A: Actually, this approach can reduce workload over time. Instead of building complete programs that might miss the mark, you're creating focused solutions and improving them based on real needs. Start by mapping your current projects and identifying which ones would benefit most from this approach. Often, you can reduce scope initially by focusing on the most critical needs first.

Q: How do I get stakeholders comfortable with launching "minimum viable" learning solutions?

A: Reframe the conversation around risk reduction. Show how launching smaller solutions faster actually reduces the risk of building the wrong thing. Use examples: "Instead of spending six months building a full program that might not address the real need, we can test our core assumption in two weeks with a simple solution. This tells us if we're on the right track before investing more resources." Having clear success criteria also helps stakeholders feel more comfortable with this approach.

Q: How do you recommend handling stakeholders who want to see a detailed project plan before starting? The product approach seems more fluid.

A: Create what I call a "flexible roadmap" instead of a traditional project plan. Show your stakeholders clear phases (discover, test, build, scale) with specific checkpoints and decision criteria for moving between phases. For example: "We'll spend two weeks understanding the problem, then create a small solution to test our assumptions. At each checkpoint, we'll review data together to decide next steps."

Include what you do know: your problem hypothesis, who you'll talk to, how you'll measure success, and what early solutions might look like. But frame the plan as a series of learning cycles rather than a fixed path. I've found that showing stakeholders how this approach actually reduces risk - because we're validating assumptions before full investment - helps them get comfortable with the flexibility.

Define clear moments where you'll make go/no-go decisions together based on actual data. This gives stakeholders the control points they need while maintaining the agility to adjust based on what you learn.
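If it helps to show stakeholders what a flexible roadmap looks like on paper, here's a minimal sketch of one as structured data. The phase names, durations, and checkpoint criteria are illustrative assumptions to adjust with your own stakeholders.

```python
# A minimal sketch of a "flexible roadmap": phases with explicit go/no-go checkpoints.
# Phase names, durations, and checkpoint criteria are illustrative assumptions.
roadmap = [
    {"phase": "Discover", "duration": "2 weeks",
     "checkpoint": "Problem hypothesis confirmed or revised from shadowing and interviews"},
    {"phase": "Test",     "duration": "2 weeks",
     "checkpoint": "Minimum viable learning experience used by the pilot group unprompted"},
    {"phase": "Build",    "duration": "4 weeks",
     "checkpoint": "Managers report observable behavior change in at least one target skill"},
    {"phase": "Scale",    "duration": "ongoing",
     "checkpoint": "Impact threshold met; expansion plan agreed with stakeholders"},
]

for step in roadmap:
    print(f"{step['phase']:<8} ({step['duration']}): go/no-go when -> {step['checkpoint']}")
```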


🌎 Case Study: How product thinking transformed Sarah's manager onboarding

From overwhelmed to enabled: a story of transforming manager onboarding

Sarah, the L&D lead at a growing tech company, stared at her screen displaying yet another concerning email from HR. Three more new managers were struggling with basic team challenges, despite completing the company's comprehensive 3-day manager training just two months ago. She thought back to the countless hours her team had spent building what they thought was the perfect program – detailed slides, expert speakers, role-plays, the works.

"There has to be a better way," she muttered.

That's when she remembered an article about product thinking in L&D. Instead of adding more content to the program as her stakeholders were suggesting, she decided to try a different approach.

First, she spent a week simply watching and listening. She shadowed Maya, a new engineering manager, and noticed something interesting. Maya wasn't struggling with the leadership theories they'd covered in training – she was having trouble with immediate challenges like giving feedback on code reviews while maintaining team morale.

Sarah saw similar patterns with other new managers. They weren't lacking knowledge – they needed support in the moment, when real situations emerged.

Rather than revamping the entire program, Sarah decided to experiment. She identified the most pressing challenge (running effective 1:1s) and created a simple solution: a digital toolkit with templates and examples, paired with a 2-hour practical workshop.

"This feels too small," Sarah thought as she presented the scaled-down approach to her stakeholders. But she stood firm, explaining how this would let them learn and adjust quickly based on real usage.

The breakthrough came three weeks later. During a check-in, Maya shared how she'd used one of the templates to handle a difficult conversation with a team member. "The best part?" Maya said, "I could quickly reference the example scenarios right before my meeting. In the old program, I would have had to dig through a 100-page manual to find that information."

Sarah's team kept iterating based on feedback. They noticed managers were spontaneously sharing tips in their Slack channel, so they nurtured this community. When managers asked for more examples, they added short video clips of effective conversations. They created "office hours" where experienced managers could share recent wins and challenges.

The results surprised even Sarah. Not only were new managers more confident, but their teams reported better 1:1s and clearer feedback. HR escalations dropped significantly. Best of all, managers started contributing their own tips and examples to the toolkit.

Six months later, Sarah presented the impact to her leadership team. "By starting small and focusing on real needs, we actually achieved bigger results than our comprehensive program ever did," she explained. "We're not just training managers anymore – we're building a community of learning and support."

The biggest lesson? Sometimes less really is more, especially when that "less" is exactly what people need, when they need it.

Note: This case study is a hypothetical example created for illustrative purposes only.


💡Other ideas for bringing product thinking to learning & development

  • 🔄 'Learning Lab' Fridays: Dedicate Friday afternoons to testing new learning approaches with small groups. Run 2-hour experiments where you try out new formats, tools, or content with volunteer learners. Use immediate feedback to iterate and improve.
  • 🎯 'Problem-Solution Mapping' Sessions: Host monthly sessions where L&D partners with managers to map current performance challenges. Instead of asking "what training do you need?", explore root causes and test small solutions quickly.
  • 📱 'Micro-MVP Mondays': Launch one small learning solution every Monday for a month. Could be a simple job aid, a 5-minute video, or a practice exercise. Track usage and impact, then iterate based on what works.
  • 🔍 'Learning Detective' Program: Create a rotating program where L&D professionals shadow different roles for a day. Document actual learning needs and moments where support would be most valuable. Use these insights to design targeted solutions.
  • 📊 'Impact Intelligence' Dashboard: Build a simple dashboard tracking both learning engagement and performance metrics. Focus on showing the connection between learning activities and business outcomes. Use this to guide your iteration decisions.
  • 🤝 'Peer Learning Pods': Set up small groups of learners who test new solutions together. They become your "beta testers" for new learning approaches, providing quick feedback and insights about what actually helps them improve.
  • 🎪 'Learning Experience Showcase': Monthly showcase where teams share how they're using learning resources in their daily work. Capture these stories to understand what's working and spread successful practices.
  • 🔄 'Rapid Replay Reviews': Weekly 30-minute sessions where you review one piece of learning content with actual users. Watch how they interact with it, what confuses them, what helps them. Use these insights for immediate improvements.
  • 📝 'Solution Scaling Framework': Create a simple checklist for deciding when to scale a learning solution. Include metrics like usage patterns, behavior change evidence, and manager feedback. Use this to make data-driven scaling decisions.
  • 🎯 'Learning Journey Maps': Work with high-performing employees to map how they developed key skills. Use these maps to design learning pathways that mirror natural skill development rather than formal training sequences.
  • 🔄 'Feedback Fast Lanes': Set up multiple quick ways for learners to give feedback - Slack reactions, simple forms, quick polls. Make it easy for people to tell you what's working and what isn't.
  • 'Quick Win Workshops': Two-hour sessions focused on solving one specific performance challenge. Test different learning approaches (job aids, peer practice, micro-learning) to see what drives immediate improvement.
  • 📱 'Learning Tech Trials': Run monthly experiments with different learning technologies. Instead of big platform implementations, test specific features with small groups to see what actually enhances learning.

About the author
Brandon Cestrone
