How to Run Jobs-to-Be-Done Research That Shapes Product Decisions
Learn how to run JTBD interviews, analyze switching stories, and turn jobs insights into product, positioning, and roadmap decisions.
If you want better product decisions, you need better evidence about customer progress. Jobs-to-Be-Done (JTBD) research gives you that evidence by focusing on what people are trying to get done in a specific moment, under specific constraints.
Most teams say they are customer-centric, but still ship features that do not move adoption, retention, or willingness to pay. The usual pattern is predictable: interviews collect preferences, surveys collect ratings, and the roadmap still reflects the loudest request. JTBD helps you break that pattern by studying switching decisions and the forces behind them.
This guide shows you how to run JTBD research end to end: scope, recruit, interview, analyze, and convert findings into roadmap and go-to-market decisions.
TL;DR
- Focus your research on real switching moments, not hypothetical feature wish lists.
- Structure interviews around trigger, context, timeline, trade-offs, and hire/fire logic.
- Analyze data as decision stories, then build a jobs map your team can act on.
- Present findings as choices with implications for product, onboarding, pricing, and positioning.
- Use JTBD with quantitative data so you get both causal depth and confidence at scale.
What JTBD Research Is (and Is Not)
JTBD research is a method for understanding the progress a person is trying to make in a specific situation. The unit of analysis is not “the user type”. It is the struggling moment plus the attempted progress.
That distinction matters. When you organize around demographics or static persona traits, you often miss why someone changed behavior right now. When you organize around jobs, you capture the forces that created urgency, the alternatives people considered, and the trade-offs they accepted.
This guide connects to related concepts: jobs-to-be-done theory, customer insight, value proposition design, product-market fit, and user personas.
When to Run JTBD Research
Run JTBD research when you need to make a decision with product or commercial consequences, for example:
- You are redesigning onboarding and do not know what first-use success should mean.
- You are seeing churn spikes and your current exit survey categories feel shallow.
- You are deciding between two roadmap bets with equal internal support.
- You are entering a new segment and need to understand switching barriers.
- You want sharper positioning than “faster” or “easier” claims.
Do not run JTBD as a generic discovery exercise without decision intent. You should define the business decision before the first interview.
Define the Decision Brief Before You Recruit
Before you talk to participants, write a one-page decision brief your product lead and research lead both sign off on.
Your Minimum Brief Template
- Decision to inform. Example: “Should you prioritize collaborative workflows or solo automation in Q3?”
- Target actor and context. Example: “First-time team admins in B2B SaaS during setup week.”
- Behavioral event of interest. Example: “Started trial, invited at least one teammate, then either upgraded or churned in 30 days.”
- What would change if you are right. Example: “Onboarding sequence, pricing page narrative, and activation metric definition.”
- Out-of-scope questions. Example: “Long-term enterprise security concerns are out of scope for this study.”
This brief prevents the most expensive JTBD mistake: gathering interesting stories that do not resolve a real product choice.
Sampling: Recruit for Switching Stories, Not Representativeness Theater
You are not running a census. You are collecting high-quality decision narratives.
Recruit participants who made a relevant change recently, ideally in the last 3 to 6 months:
- New adopters (hired your product)
- Churned users (fired your product)
- Considerers who evaluated you but picked an alternative
- Power users who expanded use after an initial low-intent start
How Many Interviews Do You Need?
For one tightly scoped decision, start with 12 to 20 strong interviews. In practice:
- 6 to 8 interviews usually reveal early force patterns.
- 12 to 15 interviews stabilize recurring themes.
- 18 to 20 interviews help you pressure-test edge cases and segment differences.
If your interviews are vague, you may do 30 and still learn little. Quality of story detail matters more than raw count.
JTBD Interview Template You Can Use
The interview should reconstruct one real decision timeline. You are not looking for opinions about your roadmap. You are looking for causal sequence.
Section 1: Switching Trigger (Why Now)
Your goal is to locate the push that disrupted the status quo.
Ask:
- “What happened that made this problem feel urgent?”
- “Why did this become a priority that week, not three months earlier?”
- “Who else noticed the problem?”
Capture specific events, not generalized dissatisfaction.
Section 2: Context and Constraints
Your goal is to understand operating reality.
Ask:
- “What was your workflow before you considered switching?”
- “What constraints were non-negotiable (budget, security, timing, approvals)?”
- “What workaround were you using, and where did it break?”
Without context, you cannot distinguish preference from necessity.
Section 3: Timeline Reconstruction
Your goal is to map the sequence from first thought to adoption or rejection.
Ask:
- “When did you first start looking?”
- “What did you evaluate first, second, third?”
- “What almost made you stop the process?”
Build a simple timeline live in your notes: Trigger → Search → Evaluate → Decide → Onboard → Use/Churn.
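The live timeline can be captured as a small ordered event log. A minimal sketch, where the stage names follow the sequence above and the dates and events are invented for illustration:

```python
from datetime import date

# One event per stage of the reconstructed decision timeline.
# Dates and descriptions are hypothetical examples, not real data.
timeline = [
    ("Trigger",  date(2024, 3, 4),  "VP asked for weekly retention report"),
    ("Search",   date(2024, 3, 6),  "Shortlisted three analytics tools"),
    ("Evaluate", date(2024, 3, 12), "Ran trial imports on two tools"),
    ("Decide",   date(2024, 3, 20), "Picked a tool after legal sign-off"),
    ("Onboard",  date(2024, 3, 25), "First cohort view built"),
]

# Sanity check: events should be in chronological order.
assert all(a[1] <= b[1] for a, b in zip(timeline, timeline[1:]))

elapsed = (timeline[-1][1] - timeline[0][1]).days
print(f"Trigger to onboard: {elapsed} days")
```

Keeping the dates explicit lets you compare decision cycle length across interviews later, which often surfaces segment differences on its own.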
Section 4: Trade-Offs and Alternatives
Your goal is to reveal what they gave up and why.
Ask:
- “What options did you seriously consider?”
- “What felt better about each alternative?”
- “What risk did you accept by choosing this option?”
Real choices always include sacrifice. If you cannot identify sacrifice, you probably have surface-level data.
Section 5: Hire and Fire Logic
Your goal is to capture the forces behind adoption and abandonment.
Ask:
- “What did you hope this would help you accomplish?”
- “What sign told you it was working?”
- “What would have caused you to abandon it in the first month?”
This gives you the language needed for onboarding promises, activation metrics, and churn prevention.
Interview Mechanics That Improve Data Quality
A few operating rules will improve your research immediately:
- Use incident-based prompts. Ask for specific meetings, deadlines, and interactions.
- Avoid leading language. Replace “Did you like X?” with “How did you decide between X and Y?”
- Probe for evidence. If someone says “It was confusing,” ask “Where exactly did confusion show up?”
- Distinguish actor levels. Buyer, admin, and daily user may have different jobs.
- Record with consent and timestamp key moments. Analysis quality depends on traceable evidence.
Analysis Framework: From Raw Interviews to Job Insights
After interviews, many teams stall because they have quotes but no decision structure. Use this analysis flow.
Step 1: Code Each Interview Into Force Statements
For each transcript, extract short statements under four force categories:
- Push of the current situation (what is painful now)
- Pull of a new solution (what progress seems possible)
- Anxiety of change (what could go wrong)
- Habit of the present (what keeps them from moving)
This force framework keeps your synthesis tied to behavior, not personality labels.
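The coding step can be sketched as a small data exercise. The structure below is illustrative, not part of any JTBD tooling: the force labels mirror the four categories above, and the quotes are invented examples of coded statements.

```python
from collections import Counter

# The four force categories from the framework above.
FORCES = {"push", "pull", "anxiety", "habit"}

# Each coded statement ties one transcript quote to one force.
# Interview IDs and quotes are hypothetical.
statements = [
    {"interview": "P01", "force": "push",    "quote": "Weekly reporting took two days of manual work."},
    {"interview": "P01", "force": "anxiety", "quote": "We worried about migrating historical data."},
    {"interview": "P02", "force": "push",    "quote": "The board deadline made the spreadsheet untenable."},
    {"interview": "P02", "force": "habit",   "quote": "Everyone already knew the old tool's shortcuts."},
]

# Validate the codes, then tally which forces dominate across interviews.
assert all(s["force"] in FORCES for s in statements)
force_counts = Counter(s["force"] for s in statements)
print(force_counts.most_common())  # strongest forces first
```

Even this simple tally is useful in synthesis: if push statements dominate but pull statements are rare, your evidence explains urgency but not why your product wins.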
Step 2: Build Job Stories
Turn coded evidence into job stories in this structure:
When [situation], you want to [motivation], so you can [expected outcome].
Example:
When your VP asks for weekly retention insights before Monday stand-up, you want to build a reliable cohort view in under 30 minutes, so you can defend decisions with confidence.
Good job stories are situational and outcome-driven. Bad job stories read like feature requests.
Step 3: Create a Jobs Map
Map each core job across stages. A practical version for product teams:
- Define the goal
- Gather inputs
- Prepare environment
- Execute task
- Monitor progress
- Resolve exceptions
- Conclude and communicate outcome
Under each stage, note frictions, workaround patterns, and desired outcomes from interviews.
Step 4: Score Opportunity Intensity
Score each job-stage pain point using two dimensions:
- Frequency: how often it appears across interviews
- Consequence: how costly failure is (time, risk, revenue, trust)
High-frequency + high-consequence problems are your strongest candidates for roadmap prioritization.
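One lightweight way to operationalize the scoring, assuming 1-to-5 ratings for both dimensions agreed in the synthesis workshop. The pain points and scores below are made up for illustration:

```python
# Hypothetical job-stage pain points with frequency (how often the pain
# appeared across interviews) and consequence (cost of failure), both 1-5.
pain_points = [
    {"job_stage": "Gather inputs",         "pain": "Data import is manual",  "frequency": 5, "consequence": 4},
    {"job_stage": "Conclude/communicate",  "pain": "No board-ready export",  "frequency": 4, "consequence": 5},
    {"job_stage": "Prepare environment",   "pain": "Approvals stall setup",  "frequency": 2, "consequence": 5},
    {"job_stage": "Monitor progress",      "pain": "Alerts are noisy",       "frequency": 4, "consequence": 2},
]

# Multiplying the two dimensions favors problems that are both common and costly.
for p in pain_points:
    p["intensity"] = p["frequency"] * p["consequence"]

ranked = sorted(pain_points, key=lambda p: p["intensity"], reverse=True)
for p in ranked:
    print(f'{p["intensity"]:>2}  {p["job_stage"]:<22}{p["pain"]}')
```

Whether you multiply or weight the dimensions matters less than applying the same rule to every pain point, so the ranking reflects evidence rather than whoever argued loudest.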
Step 5: Convert Insights Into Explicit Decisions
For each high-intensity problem, document:
- Product implication (what to build, remove, or simplify)
- Experience implication (onboarding, defaults, guidance)
- Commercial implication (packaging, pricing, sales narrative)
- Measurement implication (north-star and guardrail metrics)
If an insight does not map to a decision, treat it as context, not priority.
Practical Worksheet: Raw Data → Insight → Decision
Use this simple worksheet in your synthesis workshop.
| Raw evidence | Interpreted insight | Decision you can make |
|---|---|---|
| “We tried 3 tools in two weeks before board reporting.” | Decision urgency is tied to reporting deadlines, not general dissatisfaction. | Prioritize first-week reporting template and import setup experience. |
| “I needed legal approval and almost gave up.” | Compliance anxiety is a major adoption blocker. | Add trust artifacts and approval-ready docs to onboarding and website. |
| “Team adopted only after manager mandated it.” | Individual motivation is weaker than team-level accountability in this segment. | Build manager visibility features and team adoption nudges. |
This table is simple on purpose. It forces your team to show its reasoning chain.
Named Examples and What You Can Learn
Intercom: Using JTBD to Sharpen Strategy Language
Intercom’s JTBD work is a useful example of turning customer stories into clearer positioning and product decisions. Their published material emphasizes interviewing around switching and progress, then using those findings to clarify messaging and roadmap priorities. The practical lesson for you: do not keep JTBD in research docs; wire it into product and go-to-market narratives.
Resource: Intercom on Jobs-to-be-Done (free book).
Christensen’s Milkshake Study: Context Beats Demographics
The milkshake case is still instructive because it reframed demand around circumstance: commuters hiring a milkshake for a specific morning job versus other options. The core lesson is not the product category; it is the method. You gain strategic insight when you ask what progress the customer is trying to make in that moment, what alternatives compete, and what trade-off wins.
Resources: HBR overview and case summary.
Basecamp Shape Up: Shaping Before Shipping
Basecamp’s Shape Up method is not branded as pure JTBD, but it shares an important discipline: define the problem boundary and appetite before execution, and force trade-off decisions early. For product teams doing JTBD research, this is useful because it prevents endless backlog growth from weakly framed “insights.” You shape bets around the most important jobs and constraints first.
Resource: Shape Up.
Bob Moesta: Demand-Side Interviewing Discipline
Bob Moesta’s JTBD resources reinforce the operational side of this method: interview for demand-side causality, reconstruct decision timelines, and separate struggling moments from generic preferences. Use this as a quality standard for your interview practice.
Resource: Jobs-to-be-Done resources.
Turning JTBD Into Product, Pricing, and Positioning Moves
Your work is not done when the research deck is finished. You need a translation ritual that lands in delivery.
Product Roadmap
Turn top job-stage frictions into hypotheses with clear success criteria:
- Hypothesis: “If you reduce setup ambiguity in first session, activation will increase.”
- Leading indicator: percent of new users completing first meaningful workflow.
- Guardrail: support tickets per new activated account.
Pricing and Packaging
JTBD often reveals willingness to pay logic. You might find customers pay for risk reduction, speed to confidence, or cross-team alignment rather than raw feature volume. Use these findings to revisit package boundaries and value metrics.
Messaging and Sales Enablement
Replace generic value claims with job language from interviews. For example, “ship insights before Monday review” is stronger than “faster analytics.” Give sales and marketing real phrases customers used when describing urgency and success criteria.
Common Failure Modes (and How to Prevent Them)
- Feature-led interviews. You ask about existing functionality instead of decision context. Fix: start with the switching story, not your product surface.
- No decision owner in synthesis. Insights stay descriptive and no one commits to action. Fix: include product and commercial decision owners in the synthesis workshop.
- Mixing segments too early. You collapse different contexts into one average narrative. Fix: analyze one segment and one job context at a time.
- Confusing persona with job. You write “SMB founder job” when the real job is deadline-specific reporting confidence. Fix: phrase jobs in situational terms.
- No follow-through instrumentation. You cannot tell whether your JTBD-informed changes worked. Fix: define success and guardrail metrics before shipping changes.
30-Day Implementation Plan
If you want to start quickly, use this cadence.
Week 1: Scope and Recruit
- Lock decision brief
- Align stakeholders on target segment
- Recruit 12 to 15 participants with recent switching events
Week 2: Run Interviews
- Conduct 6 to 8 interviews
- Debrief daily for early force patterns
- Improve prompts where recall quality is weak
Week 3: Complete Fieldwork and Synthesize
- Conduct remaining interviews
- Code for push/pull/anxiety/habit
- Draft jobs map and opportunity intensity scores
Week 4: Decision Workshop and Execution Handoff
- Run cross-functional synthesis workshop
- Decide top 2 to 4 product/commercial moves
- Publish decision memo with owners, metrics, and timeline
At the end of 30 days, you should have fewer but stronger bets backed by behavioral evidence.
FAQ
How Is JTBD Different From User Personas?
Personas describe recurring user characteristics and can help with communication design. JTBD explains why a person changes behavior in a specific situation. You can keep personas for team alignment, but you should use JTBD when you need to explain causality behind adoption, churn, and willingness to pay.
How Many Interviews Do You Need for JTBD Research?
For one focused decision area, 12 to 20 high-quality interviews are usually enough to expose stable patterns. If you are researching multiple segments or very different contexts, split the study and run separate interview sets rather than blending everything into one sample.
How Do You Present JTBD Findings to a Product Team?
Present findings as a decision memo, not a quote repository. Include the top jobs, evidence snippets, opportunity scores, and explicit implications for roadmap, onboarding, pricing, and messaging. Assign owners and metrics in the same meeting so insights convert to action.
Can JTBD Replace Analytics and Experimentation?
No. JTBD gives you causal depth and sharper hypotheses. Analytics and experiments test prevalence and impact. The strongest teams combine both: JTBD defines what to test, and quantitative methods tell you where to scale investment.
Final Checklist Before You Close the Project
- You can point to at least one real switching timeline per major insight.
- Each top insight maps to a concrete decision with a named owner.
- Product, marketing, and sales language uses shared job phrasing.
- Metrics for success and risk are defined before rollout.
- You have documented what JTBD could not answer and what needs quantitative follow-up.
If you follow this approach, JTBD research stops being a “research artifact” and becomes a decision system your product team can use repeatedly.