Choose it: How to Decide Between Viable Options When You’ll Never Have 100% Confidence
Q&A is live Dec 9th through 16th for our "Ask Me Anything" post - December 2025 ed.
You’ve spent 6 weeks and $50K investigating three materials. Your confidence has climbed from 40% to 60-85% across options. Your manager asks: ‘Which one? Are you sure? Should we test more?’
Real decisions involve trade-offs between confidence, cost, timeline, and performance. In Phase 1, we identified a High Impact/Low Likelihood decision (40% confidence). In Phase 2, we stacked literature review, expert consultation, and accelerated testing to boost confidence to 60-85%. Now we choose.
We’re not proving we’re right, we’re ensuring we’re not wrong.
In 2 hours, you’ll know how to choose systematically, and what to monitor after you’ve chosen.
Before You Choose-It: Are You Ready?
✓ Have you reached target confidence for your impact tier?
✓ Would another test cost more than the information value?
✓ Do you have multiple viable options (not a fundamental flaw)?
If yes → Proceed to Phase 3 decision tools
If no → Return to Phase 2 or pivot your design
How long should Phase 3 take?
You’ve already gathered the information you need in Phases 1 and 2. Phase 3 is about pulling it together and evaluating that information against your goals. It shouldn’t take long. If you are faced with a choice at 8AM, you should have a clearer choice by 10AM.
If Phase 3 is taking longer than a few hours, that’s a signal:
You’re still gathering new data (return to Phase 2)
Your criteria aren’t clear (revisit Phase 1)
You have analysis paralysis (pick the most robust option and move)
We’ve done activities to increase our confidence in our decision. Phase 1 taught you to frame the impact (High: $120K risk). Phase 2 taught you when to stop investigating (we’ve hit our confidence threshold).
But it’s still not 100%. We could test more, but should we?
• Perfect certainty is impossible and often not worth pursuing. Sometimes “good enough” is the right answer. Most of us aim to optimize our decisions, but for some decisions a satisfactory outcome is preferable to an optimized one.
• There is an opportunity cost of waiting for more data vs. moving forward. Our decision is likely tied to the project, timeline expectations, budgets, business goals, and other decisions. Waiting and testing more could do more harm than good considering the bigger picture.
• The decision risk is asymmetric. Sometimes the downside of being wrong is much bigger than the upside of being right. What are we aiming to do? Prove we’re right, or ensure we’re not wrong? They’re different questions. We started out in Phases 1 and 2 with proving we’re right. That helped us to gather more information. Now, to make a final decision, the most rational approach isn’t to chase a big win, but to avoid a devastating loss.
You’re now at a point in your project where you have three material options. And you need to choose one.
Phase 3 teaches you how to choose between multiple viable options…systematically, not politically.
In Phase 3, Paired Comparison and Expected Value thinking give you a systematic way to answer that question.
The shift in thinking you need for these decisions
You need to estimate both upside AND downside
We need to estimate both the upside and the downside of this decision, because that is how we address the asymmetric risk. It is rarely as simple as “success is worth something and failure costs nothing,” or the reverse. Our decision carries real value in success and real cost in failure, and both sides matter to our project/product.
The key is to understand what success adds and what failure costs our project. This is why practicing the Frame-It in Phase 1 is so important: it aligns the team, clarifies what action to take and what our Investigate-It phase needs, and defines what success and failure mean for this decision.
Revisit your problem description in the Frame-It section. Does it still apply? If you’re working an active project right now, chances are other aspects of the project kept moving forward. Things may have changed. Re-evaluate the current state to understand what those changes are. They could affect your decision.
The probabilities come from your Phase 2 updated confidence
How confident are you after the additional investigation? That boosted confidence is your probability of success. The probability of failure is simply 1 minus it.
In Phase 2, we identified three materials. Our confidence varies across the options, based not on gut feel but on the quality of evidence from our investigation stack: 75% for Material A, 85% for Material B, and 60% for Material C.
Remember Phase 2’s “when to stop testing” criteria? We’ve reached 60-85% confidence across options. A High Impact decision riding on a single path would require 80-90% confidence; here, though, we’re choosing between multiple viable paths, which changes the calculus.
This isn’t about being “right”. It’s about being systematic. Let’s walk through an example to show you how.
Our Scenario’s Decision Context: Mold Design for New Product
We’re launching a new product requiring a custom injection-molded part. The mold is expensive to build, and if the part fails in performance testing, we may need to rework the mold — which costs time and money.
Key Facts:
If we succeed, revenue is $500K (assume from launch to Year 1)
If we proceed and fail, cost is $165K (rework + delays + lost opportunity). This includes:
$45K for new tooling
$120K for complete revalidation (testing, retooling, delays)
12 weeks of delay (beyond initial 8-week delay if we proceed now)
Our risk is not symmetric.
Not only are we not sure if we should proceed, but we also have options.
Compare Multiple Options: which material do we choose?
We’ve narrowed the mold design decision down to three candidate materials for the new injection-molded part. Each material offers different trade-offs in confidence of performance, cost, and lead time.
A Paired Comparison Analysis systematically compares each pair of options based on key decision criteria.
You’re at Phase 3: “Choose It”, which means you need to decide now, not delay further (unless the comparison shows a compelling reason to test more). You’ve already done testing that boosted your confidence from 40%. You’re not going back to Phase 2, but you’re evaluating three viable options to choose from.
📊Decision Criteria & Steps
1. Compare all pairs on the three criteria
Confidence in Success: % probability of the material working as intended (informed by Phase 2 Investigate It)
Cost: total cost to produce the mold and prototype with this material (in $K)
Lead Time: weeks to produce and validate the part (shorter = better)
2. Assign a “preference score” for each pair
The better option by a wide margin scores 3 (Major)
The better option by a moderate margin scores 2 (Moderate)
The better option by a slim margin scores 1 (Minimal)
3. Sum the scores across all criteria for each option
Applying a weight to technical confidence
In Phase 1, we identified that failure would trigger $120K revalidation costs and 12-week delays. Given this High Impact designation, we double the weight of technical confidence. Why? Because in our Phase 1 framing, the downside risk of material failure far exceeds the upside of saving $20-40K in tooling costs.
Let’s begin!
In the next section, you’ll see exactly how to score options, calculate Expected Value, and make the final call. We use our injection molding decision as a complete worked example.
📊Step 1: Material Options
Based on our Phase 2 investigation, we’ve determined the value of three factors that are influential in our decision: our confidence that each material will succeed, its cost, and its lead time.
Material A (ABS): 75% confidence, $120K, 10 weeks
Material B (PC): 85% confidence, $140K, 12 weeks
Material C (Nylon): 60% confidence, $100K, 8 weeks
📊Step 2: Assign a Preference Score in a Paired Comparison Matrix
We compare the options on each factor (Confidence, Cost, and Lead Time) pair by pair: A vs B, A vs C, B vs C.
Confidence
We compare each pair based on confidence level — higher confidence means better. Whichever option wins, we multiply the score by 2 to add our weight to this confidence measure.
🟢 Pair A vs B: A: 75% vs B: 85% → B is better. Difference: 10% → Moderate improvement → B gets 2 X 2 (weight) = 4
🟢 Pair A vs C: A: 75% vs C: 60% → A is better. Difference: 15% → Major improvement → A gets 3 X 2 (weight) = 6
🟢 Pair B vs C: B: 85% vs C: 60% → B is better. Difference: 25% → Major improvement → B gets 3 X 2 (weight) = 6
Cost
We compare each pair — lower cost is better.
🟢 Pair A vs B: A: $120K vs B: $140K → A is better. Difference: $20K → Moderate saving → A gets 2
🟢 Pair A vs C: A: $120K vs C: $100K → C is better. Difference: $20K → Moderate saving → C gets 2
🟢 Pair B vs C: B: $140K vs C: $100K → C is better. Difference: $40K → Major saving → C gets 3
Lead Time
We compare each pair — lower lead time = better.
🟢 Pair A vs B: A: 10w vs B: 12w → A is better. Difference: 2 weeks → Moderate improvement → A gets 2
🟢 Pair A vs C: A: 10w vs C: 8w → C is better. Difference: 2 weeks → Moderate improvement → C gets 2
🟢 Pair B vs C: B: 12w vs C: 8w → C is better. Difference: 4 weeks → Major improvement → C gets 3
📊 Step 3: Total Scores by Option
Now, we add up the scores from all three criteria:
Option A: Confidence 6 + Cost 2 + Lead Time 2 = 10
Option B: Confidence 10 + Cost 0 + Lead Time 0 = 10
Option C: Confidence 0 + Cost 5 + Lead Time 5 = 10
The weighted paired comparison has resulted in a three-way tie.
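The tally above can be sketched in a few lines of Python; the pairwise winners and 1/2/3 preference scores are taken straight from Step 2, with the ×2 impact weight applied to the confidence scores:

```python
from collections import defaultdict

# Pairwise results from Step 2: (winner, preference score) for each of
# A vs B, A vs C, B vs C. Confidence scores carry the x2 impact weight.
comparisons = {
    "confidence": [("B", 2 * 2), ("A", 3 * 2), ("B", 3 * 2)],
    "cost":       [("A", 2), ("C", 2), ("C", 3)],
    "lead_time":  [("A", 2), ("C", 2), ("C", 3)],
}

totals = defaultdict(int)
for criterion, pairs in comparisons.items():
    for winner, score in pairs:
        totals[winner] += score

print(dict(totals))  # each option totals 10: a three-way tie
```

Keeping the tally in code makes it easy to re-run the comparison if a score or weight changes during team review.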
🤔 What does this mean?
This is a strong signal: no single option dominates when confidence is heavily weighted.
Material C (Nylon) dominates in cost and lead time, with major advantages over both A and B.
It underperforms in confidence. But even with confidence weighted ×2, the cost and lead time advantages of Nylon are substantial and real.
Expected Value: Should we do additional testing?
Now that we’re leaning toward Nylon to minimize our project risks on cost and lead time, note where its confidence stands: 60%, up from our original 40%. That’s a smaller confidence boost than the other materials achieved.
What if we could have the best of both worlds: minimize project risk WHILE ALSO bumping our performance confidence?
What if we invest $50K to raise Nylon’s confidence to 75%? Let’s use Expected Value to help us with this situation.
Expected Value measures the average outcome you can expect from a decision, factoring in both the likelihood and impact of success and failure, helping you choose the option with the highest net benefit.
It’s a decision-making metric that calculates the net benefit or loss of a choice by combining the probability of success with its upside value and subtracting the probability of failure multiplied by its downside cost.
The formula: Expected Value = (Probability of Success × Value if Successful) - (Probability of Failure × Cost if Failed)
Specifically, for this material selection decision:
Success is defined as the mold or product working as intended, adding $500K in value (e.g., through reduced costs, faster time to market, or revenue).
Failure is defined as the mold or product not meeting requirements, incurring a $165K cost (e.g., rework, delays, lost opportunity).
Probability of success is based on updated confidence levels (e.g., 75% after Phase 2 testing, 60% for Nylon).
Probability of failure is simply 1 minus the probability of success.
Expected Value if we proceed with Nylon (60% confidence):
EV = (Probability of Success × Upside) - (Probability of Failure × Downside)
EV = (0.60 × $500K) - (0.40 × $165K)
EV = $300K - $66K = $234K
Optional: What if we invest $50K to raise Nylon’s confidence to 75%?
EV = (0.75 × $500K) - (0.25 × $165K) - $50K
EV = $375K - $41.25K - $50K = $283.75K
The $283.75K EV with testing beats the $234K EV without it, so yes, testing is worth it if we care only about expected value.
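A small helper makes these two calculations easy to check and to rerun with your own numbers (a minimal sketch; the $500K upside, $165K downside, and $50K test cost come from the scenario above, all in $K):

```python
def expected_value(p_success, upside, downside, extra_cost=0.0):
    """EV = P(success) * upside - P(failure) * downside, minus any
    additional investment such as more testing. All figures in $K."""
    return p_success * upside - (1 - p_success) * downside - extra_cost

# Proceed with Nylon at 60% confidence, no further testing:
ev_now = round(expected_value(0.60, 500, 165), 2)
# Invest $50K in testing to raise confidence to 75%:
ev_tested = round(expected_value(0.75, 500, 165, 50), 2)
print(ev_now, ev_tested)  # 234.0 283.75
```

Swapping in your own probabilities and dollar figures takes seconds, which is exactly what you want when a teammate challenges an estimate.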
If we choose Nylon with additional testing, the test takes 6 weeks to complete, pushing Nylon’s lead time from 8 weeks to 14. That conflicts with the lead time advantage Nylon offers, erasing its edge over the other materials (10 and 12 weeks).
Let’s compare the expected value of Nylon (after we’ve tested) to the other materials:
ABS (A, 75% confidence): EV = $333.75K
PC (B, 85% confidence): EV = $400.25K
Nylon (C, 60% confidence, no further testing): EV = $234K
Nylon (C, 75% confidence after $50K of testing): EV = $283.75K
🔑 Key Insight: While Nylon wins the paired comparison on cost and speed, its EV is lower than both ABS and PC. This highlights the trade-off between actionability and expected reward.
🤔Strategic Implications & Recommendation
The paired comparison showed Nylon wins due to strong cost and lead time advantages, even though its EV is lower.
This reflects the core tension in Phase 3 decisions:
Maximizing EV → Choose PC (highest EV at $400.25K)
Minimizing regret / delivering on time → Choose Nylon (fastest, cheapest, acceptable risk)
The asymmetric risk principle applies: you don’t need to be “right”. You need to avoid being “wrong”.
Our Recommendation: Proceed with Nylon
Why Nylon wins:
$20-40K cheaper than alternatives
2-4 weeks faster to market
60% confidence is acceptable because we’re choosing between three validated options, not betting on one path
The confidence paradox resolved:
Phase 2’s 80-90% threshold applies when you have ONE path forward. Here, we’re choosing between THREE validated options, each independently investigated in Phase 2.
The 60% represents Nylon’s technical confidence, but when combined with its operational advantages (cost, speed), the decision confidence is much higher. We’re 85% confident Nylon is the right choice, even if we’re 60% confident in its absolute performance.
If you must test more:
Additional testing (+$50K) improves EV to $283.75K but adds 6 weeks, erasing Nylon’s timeline advantage. Only pursue if schedule flexibility exists.
Bottom Line: We’re choosing the most actionable, least disruptive, and project-aligned option, not the highest EV.
Document the trade-off: You’re choosing faster delivery over higher expected value.
Post-decision monitoring: Track material performance vs. expectations, cost savings realization, and timeline adherence.
Also, remember in Phase 2 we stacked evidence to reach 60-85% confidence. If our Phase 3 choice doesn’t work out, that documentation shows us exactly where to investigate next.
When NOT to use these tools
Don’t use Paired Comparison when:
You only have one real option (it’s yes/no, not A vs. B)
The criteria are so interrelated that pairwise comparison breaks down
Time pressure means you need a faster heuristic (then use: what’s the reversibility of this decision?)
Don’t use Expected Value when:
The downside risk is existential (company-killing). Some risks you just don’t take, even if EV is positive.
You’re in a regulated industry where “best effort” has legal meaning beyond EV
The uncertainty is so high (below 30% confidence) that the numbers are meaningless—go back to Phase 2
What happens when this framework leads to wrong decisions?
This framework isn’t a math problem, is it?
These are complicated decisions with real impact, which is why we get a stomachache about them. This framework moves you away from knee-jerk reactions and gut instinct toward a systematic approach to gathering the data and information you need to make a better-informed decision.
Stopping, evaluating, gathering information, and focusing on what matters for your project goes a long way toward making the right decision. Document how you got there: your Phase 1 framing, Phase 2 evidence stack, and Phase 3 rationale. This document is your learning asset.
When a systematically made decision doesn’t work out, you have something valuable: a documented trail of what evidence you had, what you expected, and where reality diverged. This turns ‘failure’ into organizational learning.
Teams that use this framework don’t make fewer mistakes; instead, they learn from them faster.
Here’s the key lesson from Choose-It:
Paired Comparison tells you which option is most robust across criteria.
Expected Value tells you which option is “best on average”.
In a Phase 3 “Choose It” decision, especially when project timelines and budgets are tight, robustness and actionability matter more than pure EV.
You’re not optimizing for maximum reward, you’re optimizing for practical, timely, and low-friction execution. Recall that we’re not trying to prove we’re right, we’re trying to ensure we’re not wrong.
We’re not choosing the highest EV option. We’re choosing the most balanced, actionable, and project-aligned option. That’s what makes us smart, not just mathy.
In our example, Nylon reduces operational risk (cost, time), even if it slightly increases technical risk. In this context, operational risk is the bigger threat to the project.
How Phase 3 Completes the System
Three months ago, you had a design decision that felt impossible. High stakes, low confidence, pressure from all sides.
Now you have a system:
• Phase 1 (Frame It): You identified it as a Critical Unknown and baselined your confidence at 40%
• Phase 2 (Investigate It): You stacked evidence strategically and updated to 60-85% confidence
• Phase 3 (Choose It): You used Paired Comparison and calculated Expected Value to choose the best option given remaining uncertainty
You didn’t eliminate all risk. You managed it systematically. And you can defend your decision with data, not just gut feel.
That’s how great design decisions get made.
Where might you get stuck?
“But I don’t know the exact revenue/cost!”
That’s fine, use ranges and see if the decision is robust. There’s information about costing: your project wouldn’t have been approved without it. You may just need to have a conversation with the PM or get their help to find it.
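As a sketch of that ranges approach: sweep each uncertain input over a low/base/high range (the bounds below are hypothetical, bracketing the Nylon scenario) and check whether the worst-case corner still has positive EV:

```python
# Hypothetical low/base/high ranges around the Nylon decision, in $K.
confidences = (0.50, 0.60, 0.70)   # probability of success
upsides = (400, 500, 600)          # value if successful
downsides = (130, 165, 200)        # cost if failed

# Worst-case EV across every combination of the ranges.
worst_ev = min(
    p * up - (1 - p) * down
    for p in confidences
    for up in upsides
    for down in downsides
)
print(round(worst_ev, 2))  # worst-case EV; positive means the decision is robust
```

If the worst corner goes negative, the decision is sensitive to your estimates and worth a conversation with the PM before committing.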
“This feels like false precision.”
It is, but it’s better than gut feel disguised as experience. Instead of using your experience to jump to a conclusion, use it to nail down some numbers that are meaningful for measuring and comparing options. This will also drive better conversations for a better-informed team.
“My manager won’t accept probabilistic thinking.”
Show them the expected loss if you’re wrong; that usually gets their attention.
What happens after you choose?
Monitor how the choice is affecting the project. Then, be prepared to pivot if you need to.
Making this choice came with certain expectations about what was going to happen. Think of those as goal posts against the choice. If you fall short, evaluate why and how far. If you exceed those goal posts, you’re probably fine.
Check against goal posts at 2 weeks, 1 month, and 3 months. If you chose at 60% confidence, expect more frequent monitoring than an 85% decision. If you learn new information that plummets your confidence, stop and go back to Phase 1.
Remember: choosing Nylon at 60% confidence means we expected some uncertainty. If performance falls within our expected range (even at the lower end), that validates our decision process, not invalidates it.
How does this fit with existing processes?
You may be working with Agile sprints, Stage-Gate reviews, or other organizational decision processes. That’s fine. This fits. This is problem solving and decision making.
We started this journey with a high-stakes decision that affected the project. Ensure the rest of the team has visibility into it. Don’t wait for an official review; bring it to your team as soon as you’ve identified this risky decision.
You can report on the results of the decision-making process and your monitoring of it at those official team updates.
What’s next?
If you have a very complex technical decision, you may need a more robust decision-making framework. For those cases, I refer you to the DMRCS method and the help of a statistician. Try this Method to Help with Complex Decisions (DMRCS) - Deeney Enterprises
If you want another example of a Paired Comparison, see After the ‘Storm: Compare and Prioritize Ideas - Deeney Enterprises
What design decision is keeping you up at night? Hit reply and let me know—your challenges shape my content. And if you need hands-on help: Book a discovery call