Flat frameworks are simple, defensible, and wrong in a particular way. They are simple because the math is uniform — every signal counts as one, and the prioritization output is the sum. They are defensible because the uniformity is auditable; no one can complain that their signal was undervalued because no signal was. They are wrong because the assumption of equivalence is rarely true, and the organizations using flat frameworks tend to handle the inequality informally — through a senior PM's intuition, through back-channel escalations, through a quiet thumb on the scale.
Role-weighted signal makes that thumb explicit. The framework assigns multipliers to different sources of input — paying customer above free trial, enterprise above SMB, senior engineer above junior, internal staff below external customer — and the priority score is the weighted sum. The weights are public, debated, and revisable. The output is still a number, but the number now reflects a model of which voices the organization has decided matter more, and that model is open to challenge.
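The contrast between the two frameworks can be sketched in a few lines. Everything here is illustrative, not from the text: the source names, the multiplier values, and the shape of a "signal" are all assumptions.

```python
# Hypothetical sketch of flat vs. role-weighted prioritization.
# A signal is just (source, count); weights are illustrative.
from typing import NamedTuple

class Signal(NamedTuple):
    source: str   # who raised the signal
    count: int    # number of reports from that source

# Public, debated, revisable — the multipliers ARE the model.
WEIGHTS = {
    "enterprise_customer": 5.0,
    "paying_customer": 3.0,
    "free_trial": 1.0,
    "senior_engineer": 3.0,
    "junior_engineer": 1.5,
    "internal_staff": 0.5,
}

def flat_score(signals):
    """Flat framework: every signal counts as one."""
    return sum(s.count for s in signals)

def weighted_score(signals):
    """Role-weighted framework: priority is the weighted sum."""
    return sum(WEIGHTS[s.source] * s.count for s in signals)

signals = [
    Signal("enterprise_customer", 2),
    Signal("free_trial", 10),
    Signal("internal_staff", 4),
]

print(flat_score(signals))      # 16 — all sixteen reports count equally
print(weighted_score(signals))  # 22.0 — 2*5.0 + 10*1.0 + 4*0.5
```

The flat score is dominated by whoever reports most often; the weighted score lets two enterprise reports outrank ten free-trial ones, which is exactly the model of "which voices matter more" that the weights make open to challenge.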
What the practice captures that flat frameworks miss is asymmetric risk. A complaint from a top-ten customer carries more than its weight in revenue: it correlates with churn risk, with reference-customer status, with the kind of reputational signal that affects future deals. A senior engineer's architectural concern carries more than a passing observation: it correlates with blast radius, with debt that compounds, with the kind of decision that is expensive to reverse. Treating these as equivalent to a forum comment systematically under-prioritizes the high-leverage signals.
The new failure mode is calibration. Once weights are explicit, they become a target. Sales teams advocate for higher customer-account weights; engineering for higher architectural-concern weights; CS for higher complaint weights. Each constituency has a reasonable case. Without governance — a regular review cycle, an explicit owner, a way to challenge the weights from data — the weights drift toward whichever group lobbies hardest, and the system degrades into political compromise dressed as prioritization.
The organizations that do this well treat the weights as decisions rather than constants. They review them quarterly. They tie them to outcomes — if the highest-weighted signals are not, in retrospect, the ones that drove value, the weights are wrong. The discipline is the same as for any predictive model: calibrate against ground truth, accept that the calibration changes as the business changes, and refuse to let the weights ossify into policy that nobody remembers setting.
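The calibrate-against-ground-truth discipline could look something like the following quarterly check. The premise that realized value can be attributed back to signal sources is assumed, and every name and number here is invented for illustration.

```python
# Hypothetical quarterly weight review: compare the ranking implied by
# the current weights against the ranking implied by realized value
# (e.g. retained revenue, avoided rework). All values are illustrative.

WEIGHTS = {
    "enterprise_customer": 5.0,
    "paying_customer": 3.0,
    "free_trial": 1.0,
    "senior_engineer": 3.0,
    "junior_engineer": 1.5,
    "internal_staff": 0.5,
}

# Assumed retrospective attribution of value to each signal source.
REALIZED_VALUE = {
    "enterprise_customer": 40.0,
    "paying_customer": 30.0,
    "free_trial": 2.0,
    "senior_engineer": 45.0,
    "junior_engineer": 6.0,
    "internal_staff": 1.0,
}

def drift_report(weights, realized):
    """Sources whose weight rank disagrees with their realized-value
    rank — the candidates for the quarterly review."""
    by_weight = sorted(weights, key=weights.get, reverse=True)
    by_value = sorted(weights, key=realized.get, reverse=True)
    return [(src, by_weight.index(src), by_value.index(src))
            for src in weights
            if by_weight.index(src) != by_value.index(src)]

def recalibrated(weights, realized, anchor="free_trial"):
    """Propose new weights proportional to realized value, normalized
    so the anchor source keeps weight 1.0."""
    base = realized[anchor]
    return {src: realized[src] / base for src in weights}

# Here senior-engineer signals drove more value than their weight
# implies, so they appear in the drift report and get a higher
# proposed weight (45.0 / 2.0 = 22.5 relative to free trial).
print(drift_report(WEIGHTS, REALIZED_VALUE))
print(recalibrated(WEIGHTS, REALIZED_VALUE)["senior_engineer"])
```

The design choice worth noting is that the output is a proposal, not an automatic update: the drift report feeds the review cycle and its explicit owner, which is what keeps the weights decisions rather than constants.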