
How to Run Jobs-to-Be-Done Research That Shapes Product Decisions

Learn how to run JTBD interviews, analyze switching stories, and turn jobs insights into product, positioning, and roadmap decisions.

If you want better product decisions, you need better evidence about customer progress. Jobs-to-Be-Done (JTBD) research gives you that evidence by focusing on what people are trying to get done in a specific moment, under specific constraints.

Most teams say they are customer-centric, but still ship features that do not move adoption, retention, or willingness to pay. The usual pattern is predictable: interviews collect preferences, surveys collect ratings, and the roadmap still reflects the loudest request. JTBD helps you break that pattern by studying switching decisions and the forces behind them.

This guide shows you how to run JTBD research end to end: scope, recruit, interview, analyze, and convert findings into roadmap and go-to-market decisions.

TL;DR

- Define the business decision before your first interview, in a one-page brief.
- Recruit 12 to 20 people with recent switching stories, not a representative census.
- Interview to reconstruct one real decision timeline: trigger, context, evaluation, trade-offs, and hire/fire logic.
- Code transcripts into four force categories, build job stories and a jobs map, then score pain points by frequency and consequence.
- Convert every high-intensity insight into a roadmap, pricing, or messaging decision with an owner and a metric.

What JTBD Research Is (and Is Not)

JTBD research is a method for understanding the progress a person is trying to make in a specific situation. The unit of analysis is not “the user type”. It is the struggling moment plus the attempted progress.

That distinction matters. When you organize around demographics or static persona traits, you often miss why someone changed behavior right now. When you organize around jobs, you capture the forces that created urgency, the alternatives people considered, and the trade-offs they accepted.

This guide builds on several related concepts: jobs-to-be-done theory, customer insight, value proposition design, market fit, and user personas.

When to Run JTBD Research

Run JTBD research when you need to make a decision with product or commercial consequences, for example:

- Choosing between competing roadmap directions (for example, collaborative workflows versus solo automation)
- Diagnosing churn or stalled activation in a specific cohort
- Rewriting onboarding, pricing, or positioning for a segment that is not converting

Do not run JTBD as a generic discovery exercise without decision intent. You should define the business decision before the first interview.

Define the Decision Brief Before You Recruit

Before you talk to participants, write a one-page decision brief your product lead and research lead both sign off on.

Your Minimum Brief Template

  1. Decision to inform
    Example: “Should you prioritize collaborative workflows or solo automation in Q3?”

  2. Target actor and context
    Example: “First-time team admins in B2B SaaS during setup week.”

  3. Behavioral event of interest
    Example: “Started trial, invited at least one teammate, then either upgraded or churned in 30 days.”

  4. What would change if you are right
    Example: “Onboarding sequence, pricing page narrative, and activation metric definition.”

  5. Out-of-scope questions
    Example: “Long-term enterprise security concerns are out of scope for this study.”

This brief prevents the most expensive JTBD mistake: gathering interesting stories that do not resolve a real product choice.
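
If you keep research briefs alongside your analysis scripts, a minimal sketch like the one below makes the five fields non-optional. The `DecisionBrief` structure and its field names are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class DecisionBrief:
    """One-page JTBD decision brief; field names are illustrative."""
    decision: str                # the product or commercial choice this study informs
    actor_and_context: str       # who is struggling, and in what situation
    behavioral_event: str        # observable event that qualifies a participant
    changes_if_right: list[str]  # what you would change if the findings hold
    out_of_scope: list[str]      # questions this study deliberately ignores

brief = DecisionBrief(
    decision="Prioritize collaborative workflows vs. solo automation in Q3",
    actor_and_context="First-time team admins in B2B SaaS during setup week",
    behavioral_event="Started trial, invited >= 1 teammate, upgraded or churned in 30 days",
    changes_if_right=["Onboarding sequence", "Pricing page narrative", "Activation metric"],
    out_of_scope=["Long-term enterprise security concerns"],
)
```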

Sampling: Recruit for Switching Stories, Not Representativeness Theater

You are not running a census. You are collecting high-quality decision narratives.

Recruit participants who made a relevant change recently, ideally in the last 3 to 6 months:

- Customers who switched to your product from an alternative
- Customers who churned from your product to a competitor or a workaround
- Prospects who seriously evaluated you and chose something else

How Many Interviews Do You Need?

For one tightly scoped decision, start with 12 to 20 strong interviews. In practice:

- Patterns usually stabilize before the top of that range if your stories are detailed
- Stop adding interviews once new stories stop changing your force map
- If you are covering multiple segments or contexts, run separate interview sets rather than one blended sample

If your interviews are vague, you may do 30 and still learn little. Quality of story detail matters more than raw count.

JTBD Interview Template You Can Use

The interview should reconstruct one real decision timeline. You are not looking for opinions about your roadmap. You are looking for causal sequence.

Section 1: Switching Trigger (Why Now)

Your goal is to locate the push that disrupted the status quo.

Ask:

- When did you first realize the old way was not working?
- What happened that day? Walk me through it.
- What had you been tolerating before that moment, and what changed?

Capture specific events, not generalized dissatisfaction.

Section 2: Context and Constraints

Your goal is to understand operating reality.

Ask:

- What tools, budget, and approvals did you have to work within?
- Who else was involved in or affected by the decision?
- What deadlines or external pressures shaped your options?

Without context, you cannot distinguish preference from necessity.

Section 3: Timeline Reconstruction

Your goal is to map the sequence from first thought to adoption or rejection.

Ask:

- When did you first start looking for something new?
- What did you try first, and what happened next?
- How long did each step take, and where did you stall?

Build a simple timeline live in your notes: Trigger → Search → Evaluate → Decide → Onboard → Use/Churn.

Section 4: Trade-Offs and Alternatives

Your goal is to reveal what they gave up and why.

Ask:

- What alternatives did you seriously consider?
- What did you give up by choosing this option?
- What almost made you choose something else?

Real choices always include sacrifice. If you cannot identify sacrifice, you probably have surface-level data.

Section 5: Hire and Fire Logic

Your goal is to capture the forces behind adoption and abandonment.

Ask:

- On the day you committed, what convinced you this would work?
- What worried you most about switching?
- If you stopped using it, what happened in the weeks before you quit?

This gives you the language needed for onboarding promises, activation metrics, and churn prevention.

Interview Mechanics That Improve Data Quality

A few operating rules will improve your research immediately:

- Interview about one real, recent decision; do not ask hypotheticals about future behavior.
- Anchor on specifics: dates, tools, actual events, exact phrases.
- Follow the timeline, not your discussion guide, when the story opens up.
- Record with consent and keep a live timeline in your notes so you can probe gaps.
- Never pitch or defend your product during the interview.

Analysis Framework: From Raw Interviews to Job Insights

After interviews, many teams stall because they have quotes but no decision structure. Use this analysis flow.

Step 1: Code Each Interview Into Force Statements

For each transcript, extract short statements under four force categories:

- Push: problems in the current situation that created urgency to change
- Pull: the appeal of the new solution or the imagined better way
- Anxiety: fears and uncertainties about what the new solution might not do
- Habit: routines and attachments that favored the status quo

This force framework keeps your synthesis tied to behavior, not personality labels.
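
If you script the coding pass instead of using a spreadsheet, a minimal sketch might look like this. The helper name `code_statement` is an assumption for illustration; the four category names are the standard JTBD forces of progress:

```python
from collections import defaultdict

# Push and pull drive change; anxiety and habit hold people in the status quo.
FORCES = ("push", "pull", "anxiety", "habit")

def code_statement(coded: dict, force: str, statement: str) -> None:
    """File one short transcript statement under a force category."""
    if force not in FORCES:
        raise ValueError(f"unknown force: {force!r}")
    coded[force].append(statement)

coded = defaultdict(list)
code_statement(coded, "push", "Board asked for weekly cohort numbers we couldn't produce")
code_statement(coded, "anxiety", "Needed legal approval and almost gave up")
code_statement(coded, "habit", "The old tool was wired into every team meeting")
```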

Step 2: Build Job Stories

Turn coded evidence into job stories in this structure:

When [situation], you want to [motivation], so you can [expected outcome].

Example:

When your VP asks for weekly retention insights before Monday stand-up, you want to build a reliable cohort view in under 30 minutes, so you can defend decisions with confidence.

Good job stories are situational and outcome-driven. Bad job stories read like feature requests.
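
A trivial formatter keeps every story in the same shape. This sketch reuses the example above; the helper name `job_story` is illustrative:

```python
def job_story(situation: str, motivation: str, outcome: str) -> str:
    """Render coded evidence into the standard job-story sentence."""
    return f"When {situation}, you want to {motivation}, so you can {outcome}."

print(job_story(
    "your VP asks for weekly retention insights before Monday stand-up",
    "build a reliable cohort view in under 30 minutes",
    "defend decisions with confidence",
))
```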

Step 3: Create a Jobs Map

Map each core job across stages. A practical version for product teams:

  1. Define the goal
  2. Gather inputs
  3. Prepare environment
  4. Execute task
  5. Monitor progress
  6. Resolve exceptions
  7. Conclude and communicate outcome

Under each stage, note frictions, workaround patterns, and desired outcomes from interviews.
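
If you keep the map as data rather than slides, a sketch like this ties evidence to stages. The stage names come from the list above; the example entries are invented for illustration:

```python
STAGES = [
    "Define the goal", "Gather inputs", "Prepare environment", "Execute task",
    "Monitor progress", "Resolve exceptions", "Conclude and communicate outcome",
]

# One bucket of interview evidence per stage.
jobs_map = {stage: {"frictions": [], "workarounds": [], "desired_outcomes": []}
            for stage in STAGES}

jobs_map["Gather inputs"]["frictions"].append("CSV import fails on large exports")
jobs_map["Conclude and communicate outcome"]["desired_outcomes"].append(
    "Shareable report ready before Monday stand-up"
)
```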

Step 4: Score Opportunity Intensity

Score each job-stage pain point using two dimensions:

- Frequency: how often the target actor hits the problem
- Consequence: how costly the problem is when it occurs (time, money, risk, credibility)

High-frequency + high-consequence problems are your strongest candidates for roadmap prioritization.
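
A minimal scoring sketch follows; the 1-to-5 scales and the simple product rule are conventions chosen here for illustration, not part of the method itself:

```python
def opportunity_intensity(frequency: int, consequence: int) -> int:
    """Combine the two dimensions into one rank-ordering score."""
    if not (1 <= frequency <= 5 and 1 <= consequence <= 5):
        raise ValueError("scores must be on a 1-5 scale")
    return frequency * consequence

# (frequency, consequence) per pain point; values are illustrative.
pain_points = {
    "Import fails during setup week": (5, 4),
    "Legal approval stalls the trial": (3, 5),
    "Report export lacks team branding": (2, 2),
}
for name, (freq, cons) in sorted(pain_points.items(),
                                 key=lambda kv: -opportunity_intensity(*kv[1])):
    print(f"{opportunity_intensity(freq, cons):>2}  {name}")
```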

Step 5: Convert Insights Into Explicit Decisions

For each high-intensity problem, document:

- The decision it informs (roadmap, onboarding, pricing, or messaging)
- The owner who will act on it
- The evidence snippets that support it
- The metric that will show whether the resulting change worked

If an insight does not map to a decision, treat it as context, not priority.

Practical Worksheet: Raw Data → Insight → Decision

Use this simple worksheet in your synthesis workshop.

  1. Raw evidence: "We tried 3 tools in two weeks before board reporting."
     Interpreted insight: Decision urgency is tied to reporting deadlines, not general dissatisfaction.
     Decision you can make: Prioritize first-week reporting template and import setup experience.

  2. Raw evidence: "I needed legal approval and almost gave up."
     Interpreted insight: Compliance anxiety is a major adoption blocker.
     Decision you can make: Add trust artifacts and approval-ready docs to onboarding and website.

  3. Raw evidence: "Team adopted only after manager mandated it."
     Interpreted insight: Individual motivation is weaker than team-level accountability in this segment.
     Decision you can make: Build manager visibility features and team adoption nudges.

This table is simple on purpose. It forces your team to show its reasoning chain.
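
If the worksheet lives with your analysis scripts, you can export it as a shared CSV; the file and column names below are illustrative:

```python
import csv

# Each row is one reasoning chain: evidence -> insight -> decision.
rows = [
    ("We tried 3 tools in two weeks before board reporting.",
     "Decision urgency is tied to reporting deadlines, not general dissatisfaction.",
     "Prioritize first-week reporting template and import setup experience."),
    ("I needed legal approval and almost gave up.",
     "Compliance anxiety is a major adoption blocker.",
     "Add trust artifacts and approval-ready docs to onboarding and website."),
]
with open("jtbd_worksheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["raw_evidence", "interpreted_insight", "decision"])
    writer.writerows(rows)
```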

Named Examples and What You Can Learn

Intercom: Using JTBD to Sharpen Strategy Language

Intercom’s JTBD work is a useful example of turning customer stories into clearer positioning and product decisions. Their published material emphasizes interviewing around switching and progress, then using those findings to clarify messaging and roadmap priorities. The practical lesson for you: do not keep JTBD in research docs; wire it into product and go-to-market narratives.

Resource: Intercom on Jobs-to-be-Done (free book).

Christensen’s Milkshake Study: Context Beats Demographics

The milkshake case is still instructive because it reframed demand around circumstance: commuters hiring a milkshake for a specific morning job versus other options. The core lesson is not the product category; it is the method. You gain strategic insight when you ask what progress the customer is trying to make in that moment, what alternatives compete, and what trade-off wins.

Resources: HBR overview and case summary.

Basecamp Shape Up: Shaping Before Shipping

Basecamp’s Shape Up method is not branded as pure JTBD, but it shares an important discipline: define the problem boundary and appetite before execution, and force trade-off decisions early. For product teams doing JTBD research, this is useful because it prevents endless backlog growth from weakly framed “insights.” You shape bets around the most important jobs and constraints first.

Resource: Shape Up.

Bob Moesta: Demand-Side Interviewing Discipline

Bob Moesta’s JTBD resources reinforce the operational side of this method: interview for demand-side causality, reconstruct decision timelines, and separate struggling moments from generic preferences. Use this as a quality standard for your interview practice.

Resource: Jobs-to-be-Done resources.

Turning JTBD Into Product, Pricing, and Positioning Moves

Your work is not done when the research deck is finished. You need a translation ritual that lands in delivery.

Product Roadmap

Turn top job-stage frictions into hypotheses with clear success criteria, as sketched below:

- Name the friction and the change you will ship against it
- Define the success metric that must move, and in what window
- Define the guardrail metrics that must not degrade
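
One way to keep each bet testable is a small record per hypothesis; the keys and values below are illustrative, not a standard:

```python
# One testable bet per high-intensity friction; keys are illustrative.
hypothesis = {
    "friction": "CSV import fails during setup week",
    "change": "Guided import plus a first-week reporting template",
    "success_metric": "Median time-to-first-shared-report drops for new team accounts",
    "guardrail": "Support tickets per new account do not rise",
}
```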

Pricing and Packaging

JTBD often reveals the logic behind willingness to pay. You might find customers pay for risk reduction, speed to confidence, or cross-team alignment rather than raw feature volume. Use these findings to revisit package boundaries and value metrics.

Messaging and Sales Enablement

Replace generic value claims with job language from interviews. For example, “ship insights before Monday review” is stronger than “faster analytics.” Give sales and marketing real phrases customers used when describing urgency and success criteria.

Common Failure Modes (and How to Prevent Them)

  1. Feature-led interviews
    You ask about existing functionality instead of decision context.
    Fix: start with the switching story, not your product surface.

  2. No decision owner in synthesis
    Insights stay descriptive and no one commits to action.
    Fix: include product and commercial decision owners in synthesis workshop.

  3. Mixing segments too early
    You collapse different contexts into one average narrative.
    Fix: analyze one segment and one job context at a time.

  4. Confusing persona with job
    You write “SMB founder job” when the real job is deadline-specific reporting confidence.
    Fix: phrase jobs in situational terms.

  5. No follow-through instrumentation
    You cannot tell whether your JTBD-informed changes worked.
    Fix: define success and guardrail metrics before shipping changes.

30-Day Implementation Plan

If you want to start quickly, use this cadence.

Week 1: Scope and Recruit

- Write the decision brief and get product and research sign-off
- Define screening criteria around the behavioral event
- Schedule 12 to 20 participants with recent switching stories

Week 2: Run Interviews

- Run the first half of your interviews using the five-section template
- Reconstruct one decision timeline per participant, live in your notes
- Debrief after each session and tighten your probes

Week 3: Complete Fieldwork and Synthesize

- Finish the remaining interviews
- Code transcripts into force statements and build job stories
- Draft the jobs map and score opportunity intensity

Week 4: Decision Workshop and Execution Handoff

- Run the synthesis workshop with product and commercial decision owners
- Convert high-intensity insights into explicit decisions with owners and metrics
- Hand off instrumentation so you can tell whether the changes worked

At the end of 30 days, you should have fewer but stronger bets backed by behavioral evidence.

FAQ

How Is JTBD Different From User Personas?

Personas describe recurring user characteristics and can help with communication design. JTBD explains why a person changes behavior in a specific situation. You can keep personas for team alignment, but you should use JTBD when you need to explain causality behind adoption, churn, and willingness to pay.

How Many Interviews Do You Need for JTBD Research?

For one focused decision area, 12 to 20 high-quality interviews is usually enough to expose stable patterns. If you are researching multiple segments or very different contexts, split the study and run separate interview sets rather than blending everything into one sample.

How Do You Present JTBD Findings to a Product Team?

Present findings as a decision memo, not a quote repository. Include the top jobs, evidence snippets, opportunity scores, and explicit implications for roadmap, onboarding, pricing, and messaging. Assign owners and metrics in the same meeting so insights convert to action.

Can JTBD Replace Analytics and Experimentation?

No. JTBD gives you causal depth and sharper hypotheses. Analytics and experiments test prevalence and impact. The strongest teams combine both: JTBD defines what to test, and quantitative methods tell you where to scale investment.

Final Checklist Before You Close the Project

- The decision named in your brief has an explicit answer, even if that answer is "not yet clear"
- Every high-intensity insight maps to a decision, an owner, and a metric
- Job stories and the jobs map live where product and go-to-market teams actually work
- Success and guardrail metrics are instrumented before the changes ship
- Out-of-scope findings are logged as context, not smuggled into the roadmap

If you follow this approach, JTBD research stops being a “research artifact” and becomes a decision system your product team can use repeatedly.

Contributor

Clara @cla_reinholt

Focuses on innovation communication, facilitation, and turning frameworks into team habits.

Clara writes about the human systems behind innovation: facilitation quality, communication clarity, and the routines that help teams move from ideas to decisions. Her contributions combine editorial storytelling with practical templates that leaders can reuse for team rituals, retrospectives, and portfolio reviews, informed by research and practices from McKinsey on Innovation, Harvard Business Review, and the Atlassian Team Playbook.

Clara tends to ask one recurring question in her drafts: Will this help someone lead a better conversation tomorrow? If the answer is yes, the piece is ready.