The 10-20-70 Rule: Why Technology Alone Doesn't Transform Businesses
BCG popularised a finding that should reshape every AI transformation budget: successful transformation is 10% algorithm, 20% technology, and 70% people and process. Most AI transformation programmes invert that ratio, spending 70% of their budget and attention on the algorithm and technology and treating people and process as a footnote.
The result is predictable: working AI systems with poor adoption, marginal business impact, and slow ROI. The technology was fine. The 70% was missing.
What each part of 10-20-70 actually means
10% — algorithm. The model itself. The choice between gradient boosting and neural networks. The fine-tuning approach. The eval methodology. Important for performance, almost never determinative for outcome.
20% — technology. The systems that surround the algorithm. Data pipelines, MLOps, integration, deployment, monitoring. Where most engineering effort goes, where most procurement budget goes, but still not where outcomes come from.
70% — people and process. Operator workflows. Adoption design. Trust calibration. Decision authority. Change management. Incentive alignment. The slow, structural work of getting humans to use AI in ways that actually move the business.
Why most programmes invert the ratio
Two reasons. First, the 70% is harder to measure. Algorithm accuracy is a number. Adoption is messy. Boards prefer the number. Second, the 70% is harder to outsource. A consultancy can hand you a model; it can't hand you the cultural change that gets your operators to trust it.
So programmes default to what is measurable and outsourceable. They invest heavily in the 30%, declare technical victory, and discover six months later that the metric never moved because the 70% was never built.
What "designing the 70% in" actually looks like
The 70% can't be bolted on after the algorithm ships. It has to be designed into the specification. Concretely (a minimal sketch follows this list):
Adoption-aware UX from spec stage. If operators won't trust an AI suggestion, design the UI to show the reasoning and let humans override. Built into the spec, not added in v2.
Decision authority defined explicitly. Who can approve an AI-driven recommendation? Who escalates? Where is the human in the loop? Specified before code is written.
Change management embedded in delivery cycles. ADKAR or an equivalent framework integrated with the build, not run as a separate workstream by a separate vendor.
Outcome metric tracked from Day 1 of operations. If the metric isn't moving, surface why early — usually the answer is in the 70%, not the algorithm.
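To make "specified before code is written" concrete, here is a minimal Python sketch of what encoding these decisions in the spec could look like. Everything in it is hypothetical: RecommendationSpec, approver_role, outcome_metric_alert, and the 2% lift target are illustrative names and numbers, not a real framework. The point is structural: override rights, approval authority, escalation paths, and the Day-1 metric check are fields and functions agreed up front, not behaviours retrofitted after launch.

```python
from dataclasses import dataclass
from enum import Enum


class Escalation(Enum):
    """Where a declined or ambiguous recommendation is routed (hypothetical roles)."""
    TEAM_LEAD = "team_lead"
    DOMAIN_EXPERT = "domain_expert"


@dataclass
class RecommendationSpec:
    """Adoption and authority decisions captured in the spec, not added in v2."""
    show_reasoning: bool = True            # operators see why the model suggests this
    operator_can_override: bool = True     # human stays in the loop by default
    approver_role: str = "ops_manager"     # who may approve an AI-driven action
    escalation: Escalation = Escalation.TEAM_LEAD
    auto_approve_threshold: float | None = None  # None means never auto-approve


def outcome_metric_alert(baseline: float, current: float,
                         min_lift: float = 0.02) -> str | None:
    """Day-1 check on the business metric the programme is meant to move.

    If the lift is below target, point investigators at the 70% first:
    adoption and override rates, not model accuracy.
    """
    lift = (current - baseline) / baseline
    if lift < min_lift:
        return (f"Lift {lift:.1%} is below the {min_lift:.1%} target: "
                "check adoption and override rates before retraining.")
    return None
```

A spec like this also gives the change-management workstream something concrete to train against: operators know in advance what they can override and who approves what, before the first line of model code is written.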
For mid-market: the ratio matters more, not less
Enterprises can afford to fail at the 70% several times; they have the balance sheet for multiple iterations. Mid-market companies can't. A failed AI initiative consumes a year of transformation budget and, worse, credibility with the board for the next round.
For $20M-$200M companies, getting the 70% right on the first attempt is a budget question, not a quality question. The structural fix is to choose vendors and operating models that treat the 70% as primary work, not as something the customer's HR team will figure out after launch.