Building a Lead Qualification Framework That Actually Converts
BANT was built for a different era. Here's a modern qualification framework designed for today's buyers — and how to automate it without losing the nuance.

BANT Had a Good Run. It's Over.
Budget. Authority. Need. Timeline. Four letters that have dominated sales qualification for the better part of four decades — ever since IBM codified them in the 1980s as a standardized way to evaluate prospects.
And for a long time, BANT worked fine. When software was sold in six-figure on-premise deals with 18-month cycles and procurement committees, asking "Do you have budget?" was a reasonable gating question. The buyer knew their budget. The authority structure was clear. The timeline was set by fiscal year planning.
That world doesn't exist anymore.
Today's B2B SaaS buyer starts with a Google search, evaluates 3–5 vendors before ever talking to sales, signs a $24K annual contract with a credit card, and can churn in 90 days if the product doesn't deliver. The power dynamic has inverted. The information asymmetry has collapsed. And BANT's assumptions have aged into irrelevance.
Here's the core problem: BANT qualifies for the seller's readiness, not the buyer's.
Asking "Do you have budget approved?" in the first call tells you whether this deal is easy for you. It doesn't tell you whether this prospect is a good fit, whether they'll succeed with your product, or whether the deal is worth pursuing even if the budget isn't allocated yet. Some of the best deals start without a budget line item — the pain is so acute that the prospect creates budget after a compelling first conversation.
BANT filters those out. And that's a problem worth solving.
Introducing PACT: A Framework for Modern SaaS Qualification
Analyze hundreds of thousands of qualification calls across B2B SaaS companies, looking at what actually correlates with closed-won revenue, and a clearer pattern emerges. The leads that convert aren't the ones with budget already approved. They're the ones with four different attributes.
P — Pain
A — Authority
C — Consequence
T — Timeline
If this looks like a remix of BANT, it is — partially. Authority and Timeline carry over (though we'll redefine both). Budget is replaced by Pain and Consequence, which together do a far better job of predicting whether a deal will close and stick.
Let's walk through each dimension.
P — Pain: What's Actually Broken?
This isn't "need." Need is passive. Need is "yeah, we could probably use a better solution." Pain is active. Pain is "this problem cost us $140,000 last quarter and our VP of Sales is furious."
The distinction matters because pain creates urgency. Need creates interest. Urgency closes deals. Interest fills pipelines with opportunities that stall at Stage 2 and die quietly.
Qualifying questions for Pain:
- "What prompted you to start looking at solutions like ours right now?" (The "right now" forces specificity.)
- "Can you walk me through what happens today when [problem] occurs?" (Process-level detail reveals real pain vs. theoretical pain.)
- "How long has this been an issue?" (If it's been a problem for three years and they're just now looking, the urgency may be lower than it seems.)
- "On a scale of 1 to 10, how much does this problem impact your day-to-day?" (Simple, but surprisingly revealing. Anything below a 7 is a yellow flag.)
What you're listening for: Specificity. Dollar amounts. Named stakeholders who are frustrated. Recent triggering events (a lost deal, a board meeting, a competitor win). If the prospect can't articulate the pain in concrete terms, it may not be real enough to drive a purchase.
A — Authority: Who's Actually Involved?
BANT treats authority as binary: either you're talking to the decision-maker or you're not. That was never quite right, and it's especially wrong now.
Modern B2B purchases involve an average of 6.8 stakeholders, according to Gartner's research on buying group dynamics. The person on your call might be the champion, the evaluator, the budget holder, or the technical validator. Each role matters differently at different stages. Disqualifying a lead because your first contact isn't the VP is one of the most expensive mistakes in sales.
The real question isn't "Are you the decision-maker?" (which people lie about anyway). It's: "What does the decision process look like, and where does this person fit?"
Qualifying questions for Authority:
- "Besides yourself, who else would be involved in evaluating a solution like this?" (Non-threatening way to map the buying committee.)
- "How have decisions like this been made at your company in the past?" (Reveals process patterns — consensus-driven, top-down, committee-based.)
- "If you saw a demo and loved it, what would the next steps look like on your end?" (Forces them to articulate the path to purchase, which reveals authority structure.)
- "Is there anyone who might have concerns about making a change here?" (Surfaces potential blockers early.)
What you're listening for: Not a title. A process. You want to understand the decision architecture — how many people, what roles, what sequence, and where friction is likely to emerge. A champion with a clear view of the process is often more valuable than a VP who says "I make the call" but actually doesn't.
C — Consequence: What Happens If They Do Nothing?
This is the dimension BANT misses entirely, and it's arguably the most predictive of deal velocity.
Budget is a lagging indicator. It tells you about decisions that have already been made. Consequence is a leading indicator. It tells you about decisions that will be made — because the cost of inaction is becoming intolerable.
When a prospect can clearly articulate what happens if they don't solve this problem in the next 3–6 months, you have a deal with natural momentum. When they can't, you have a "nice to have" that will lose to every competing priority on their roadmap.
Qualifying questions for Consequence:
- "If you don't solve this in the next quarter, what happens?" (Direct and clarifying.)
- "How is this problem affecting [revenue / retention / team productivity] right now?" (Ties the pain to a business metric.)
- "Is this issue getting better or worse over time?" (Worsening problems create increasing urgency. Stable problems don't.)
- "What have you tried before, and why didn't it work?" (Previous failed attempts indicate both urgency and sunk cost awareness.)
What you're listening for: Escalation language. "Our churn is increasing." "We're losing to competitors who move faster." "The board is asking questions." When the consequence of inaction is tied to something that matters to someone senior, the deal has structural momentum that doesn't depend on your sales skills to maintain.
T — Timeline: When Does This Need to Happen?
Timeline survives from BANT, but with an important reframe. The old approach was "When are you looking to buy?" The new approach is "When does this need to be solved by, and what's driving that date?"
The difference is enormous. "We're looking to buy in Q3" is a vague aspiration. "We need this live before our annual sales kickoff on March 14th because our CEO committed to announcing the new process" is a hard deadline with organizational accountability.
Qualifying questions for Timeline:
- "Is there a specific date or event driving your timeline?" (Events create real deadlines. Vague timelines don't.)
- "What would need to happen for you to have a solution in place by [date]?" (Forces them to think backward from the deadline, revealing dependencies and potential blockers.)
- "Have you already started evaluating other solutions?" (If they're mid-evaluation, the timeline is probably real. If you're the first call, it might not be.)
- "What's your team's capacity to implement something new right now?" (Even urgent timelines stall if the team is drowning in other projects.)
What you're listening for: External drivers. Deadlines anchored to events, board meetings, renewal dates, or strategic initiatives are far more reliable than internally set timelines. A prospect who says "by end of Q2" with no external driver attached will almost certainly slip.
Scoring: Making PACT Actionable
A framework without a scoring mechanism is just a conversation guide. To make PACT operational — especially at scale — each dimension gets a 1–5 score:
| Score | Meaning |
|---|---|
| 5 | Strong signal. Specific, urgent, well-articulated. |
| 4 | Good signal. Clear but missing some specificity. |
| 3 | Moderate. Present but vague or low urgency. |
| 2 | Weak. Mentioned only when prompted, no conviction. |
| 1 | Absent or negative signal. |
Total PACT score: 4–20.
From analysis across B2B SaaS companies with average deal sizes between $12,000 and $60,000 ACV, the correlation between PACT score and close rate looks like this:
- Score 16–20: Close rate of 31.4%. These are your "fast-track" opportunities. Get an AE on them immediately.
- Score 12–15: Close rate of 14.7%. Solid prospects that need a well-run sales process. Worth investing time.
- Score 8–11: Close rate of 5.2%. Nurture territory. Not ready for a sales cycle, but worth staying in touch.
- Score 4–7: Close rate of 1.1%. Unlikely to convert this quarter. Automate the follow-up and revisit later.
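The tiers above reduce to a small scoring helper. Here's a minimal sketch in Python; the dimension scores are illustrative inputs, and the tier labels are shorthand, not official terminology:

```python
# Sketch of PACT scoring and tiering. Band thresholds mirror the
# close-rate tiers above; tier names are illustrative labels.
from dataclasses import dataclass

@dataclass
class PactScore:
    pain: int         # 1-5 per the rubric
    authority: int    # 1-5
    consequence: int  # 1-5
    timeline: int     # 1-5

    def total(self) -> int:
        return self.pain + self.authority + self.consequence + self.timeline

    def tier(self) -> str:
        t = self.total()
        if t >= 16:
            return "fast-track"  # ~31% close rate: get an AE on it now
        if t >= 12:
            return "standard"    # ~15%: run a full sales process
        if t >= 8:
            return "nurture"     # ~5%: stay in touch, no sales cycle yet
        return "automate"        # ~1%: automated follow-up, revisit later

lead = PactScore(pain=5, authority=3, consequence=4, timeline=4)
print(lead.total(), lead.tier())  # → 16 fast-track
```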
The beauty of scoring is that it removes the subjectivity from pipeline reviews. Instead of debating whether a deal is "qualified" based on gut feel, the team can look at PACT scores and make data-driven decisions about where to allocate rep time.
The False Negative Problem: When Your Framework Is Too Strict
Here's a mistake that ambitious RevOps teams make: they build a qualification framework, set a threshold score for meeting handoff, and then wonder why pipeline dropped 30%.
The problem is false negatives — legitimate opportunities that score low because the framework penalizes early-stage or unconventional deals.
A few common scenarios:
- The champion who doesn't know the decision process yet. They score a 2 on Authority because they said "I'm not sure who else would be involved." That doesn't mean they're a bad lead. It means they're early in their internal process.
- The problem that's getting worse but hasn't hit crisis yet. Consequence scores a 3 because the prospect says "it's frustrating but manageable." Give it two months and that changes to "our top rep just quit over this." You want to be in the conversation before it becomes a crisis, not after.
- The inbound lead who can't articulate their pain. Some of the best buyers are terrible at describing their problems. They know something's wrong. They can't put it into words yet. A low Pain score doesn't always mean low pain — sometimes it means low self-awareness.
The fix: Don't use PACT as a binary gate. Use it as a routing mechanism. High scores go to senior AEs. Mid scores go to SDRs for further development. Low scores go into automated nurture. Nothing gets thrown away unless it's a clear misfit on ICP criteria (wrong industry, wrong company size, wrong geography).
A qualification framework should prioritize your pipeline, not truncate it.
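The routing-not-gating idea fits in a few lines. This is a hypothetical sketch; the destination names and thresholds are assumptions, not a real CRM schema:

```python
# Routing sketch: PACT as a router, not a binary gate. ICP misfit is
# the only hard disqualifier; every other lead lands in some tier.
def route_lead(pact_total: int, icp_fit: bool) -> str:
    if not icp_fit:
        return "disqualify"       # wrong industry, size, or geography only
    if pact_total >= 16:
        return "senior_ae"        # high scores: senior AEs
    if pact_total >= 12:
        return "sdr_development"  # mid scores: SDRs develop further
    return "automated_nurture"    # low scores: nurture, never discarded

assert route_lead(18, icp_fit=True) == "senior_ae"
assert route_lead(9, icp_fit=True) == "automated_nurture"
assert route_lead(20, icp_fit=False) == "disqualify"
```

Note that a perfect PACT score still routes to disqualification if the account fails ICP criteria; fit and qualification are separate checks.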
Automating PACT Without Losing the Nuance
The obvious question: can you automate PACT scoring?
Yes — but not by turning it into a survey. Nobody wants to answer 16 qualifying questions from a robot reading a checklist. The art is in making the qualification conversational.
AI voice agents are particularly effective here because they can:
- Ask qualifying questions naturally within a conversation, adapting the sequence based on previous answers. If the prospect volunteers pain information early, the agent skips the pain questions and goes deeper on authority.
- Score in real-time during the call, not after. By the time the call ends, the CRM has a PACT score, a confidence level for each dimension, and verbatim quotes supporting each rating.
- Handle the high-volume tier automatically. For companies with 200+ inbound leads per month, having a human SDR run PACT on every single lead is wasteful. AI handles the first pass, scores the lead, and routes it to the right human at the right stage.
- Maintain consistency across thousands of calls. The scoring criteria don't drift based on who's having a good day. Every lead gets evaluated on the same standard.
The key is designing the AI agent's conversation flow around PACT's four dimensions without making it feel like an interrogation. The best implementations sound like a natural discovery conversation — because they are one. The scoring happens behind the scenes.
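For concreteness, the record such an agent writes back to the CRM might look like the following. This is a hypothetical shape, not any vendor's actual API; the field names and example quotes are invented:

```python
# Illustrative shape of the call-end CRM record: a total score, plus a
# per-dimension rating with confidence and verbatim supporting quotes.
from typing import TypedDict

class DimensionRating(TypedDict):
    score: int         # 1-5 per the PACT rubric
    confidence: float  # 0-1: how sure the scorer is of this rating
    quotes: list[str]  # verbatim evidence pulled from the transcript

call_record = {
    "pact_total": 16,
    "dimensions": {
        "pain": DimensionRating(
            score=5, confidence=0.92,
            quotes=["This cost us $140K last quarter."]),
        "consequence": DimensionRating(
            score=4, confidence=0.81,
            quotes=["Churn is climbing every month."]),
    },
}
```

The confidence field matters: a low-confidence rating is a cue for a human to review the transcript rather than trust the score blindly.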
Making It Real: Implementation Checklist
If you want to implement PACT in your organization this quarter, here's the sequence:
Week 1: Define your scoring rubric. For each dimension, write out what a 5 looks like and what a 1 looks like, with examples from your actual deal history. Generic criteria produce generic scores.
Week 2: Backtest against your last 50 closed-won and 50 closed-lost deals. Score each retroactively. If the framework doesn't clearly separate winners from losers in your historical data, refine the criteria until it does.
Week 3: Train your team (or configure your AI agent) on the framework. Role-play calls using PACT questions. Calibrate scoring across reps so a "4 on Pain" means the same thing to everyone.
Week 4: Go live. Score every new inbound lead. Review scores weekly in pipeline meetings. Adjust thresholds based on what you learn.
Month 2 onward: Correlate PACT scores with actual outcomes. Which dimension is most predictive for your specific business? For some companies it's Consequence. For others it's Authority. Let the data tell you where to weight your scoring.
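The Week 2 backtest can be as simple as checking score separation between historical winners and losers. A minimal sketch, with made-up scores standing in for your retroactively scored deals; the gap threshold is a rule-of-thumb assumption, not a figure from the article:

```python
# Backtest sketch: score closed-won and closed-lost deals retroactively
# and check that PACT actually separates them. Scores here are made up.
from statistics import mean

won_scores = [17, 15, 18, 13, 16]   # retroactively scored closed-won deals
lost_scores = [9, 11, 7, 12, 8]     # retroactively scored closed-lost deals

gap = mean(won_scores) - mean(lost_scores)
print(f"won avg {mean(won_scores):.1f}, "
      f"lost avg {mean(lost_scores):.1f}, gap {gap:.1f}")

# Assumed rule of thumb: a gap under ~3 points means the rubric isn't
# discriminating -- refine the criteria before going live.
if gap < 3:
    print("Rubric does not separate winners from losers; refine criteria.")
```

Run the same check per dimension in Month 2 to see which of the four is most predictive for your business.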
The companies that run this process rigorously end up with something most sales orgs lack: a defensible, data-backed definition of "qualified" that everyone — marketing, SDRs, AEs, and leadership — agrees on.
That alignment alone is worth the effort. Everything else is upside.
Ready to automate lead qualification without losing the nuance? See how TalkWise implements PACT scoring in real-time — on every call, at any volume.