When to Trust Robots with Decisions, and When Not To

A risk-oriented framework for deciding which decisions to entrust to humans and which to machines.

As these “robots” become a bigger part of our lives, we still lack a framework for evaluating which decisions we should be comfortable delegating to algorithms and which ones humans should retain. That’s surprising, given the high stakes involved.

I propose a risk-oriented framework for deciding when and how to allocate decision problems between humans and machine-based decision makers. I’ve developed this framework based on the experiences that my collaborators and I have had implementing prediction systems over the last 25 years in domains like finance, healthcare, education, and sports.

The framework differentiates problems along two independent dimensions: predictability and cost per error.

Predictability refers to how much better than random we should expect today’s best predictive systems to perform on a given problem. Cost per error refers to how costly a wrong decision is when the system does make a mistake.
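To make the first dimension concrete, here is a minimal sketch of how one might score predictability as lift over a random-guessing baseline. The function name, inputs, and metric are illustrative assumptions, not something the article prescribes:

```python
import numpy as np

def predictability(y_true, y_pred, n_classes):
    """Score predictability as a model's lift over random guessing.

    Returns 0.0 when the model is no better than a uniform random
    guesser and 1.0 when it is perfect. Hypothetical helper for
    illustration; the article does not define a specific metric.
    """
    accuracy = np.mean(np.asarray(y_true) == np.asarray(y_pred))
    random_baseline = 1.0 / n_classes  # expected accuracy of uniform guessing
    return (accuracy - random_baseline) / (1.0 - random_baseline)

# Example: a binary decision where today's best model is right 80% of the time.
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1, 1, 1, 1, 1]
print(predictability(y_true, y_pred, n_classes=2))  # 0.6 lift above random
```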

Changes in predictability and cost per error can nudge a problem into or out of the robot zone. For example, as driverless cars improve and we grow more comfortable with them, laws that settle questions of liability could enable insurance markets, which in turn should drive down the cost of error.

Managers, investors, regulators, and policy makers can use the Decision Automation Map to answer questions about automated decision making. It can help prioritize automation initiatives by highlighting problems for which machines can learn the required expertise from data with minimal preprogramming and for which error costs are low.
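As a rough illustration of how such a map might drive triage, here is a minimal sketch that combines the two dimensions with hand-picked thresholds. The cutoffs, class names, and quadrant labels are assumptions for demonstration; the article presents the map as a qualitative tool, not an algorithm:

```python
from dataclasses import dataclass

@dataclass
class DecisionProblem:
    name: str
    predictability: float   # 0 = no better than random, 1 = near-perfect
    cost_per_error: float   # 0 = negligible, 1 = catastrophic

def allocate(problem: DecisionProblem,
             min_predictability: float = 0.7,
             max_cost: float = 0.3) -> str:
    """Triage a decision problem along the map's two dimensions.

    The thresholds are hypothetical; a real team would set its own.
    """
    predictable = problem.predictability >= min_predictability
    cheap_errors = problem.cost_per_error <= max_cost
    if predictable and cheap_errors:
        return f"{problem.name}: robot zone, delegate to the machine"
    if predictable:
        return f"{problem.name}: automate, but keep humans reviewing costly calls"
    if cheap_errors:
        return f"{problem.name}: let machines learn while mistakes stay cheap"
    return f"{problem.name}: keep the decision with humans"

print(allocate(DecisionProblem("ad targeting", 0.8, 0.1)))
print(allocate(DecisionProblem("cancer treatment choice", 0.6, 0.9)))
```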

Read more here: When to Trust Robots with Decisions, and When Not To