Madison Chock & Evan Bates stepped off the ice last night in Milan believing they'd just skated the performance of their lives. Three-time reigning world champions. A decade-plus partnership.
They lost the Gold anyway.
The French duo that beat them ... Laurence Fournier Beaudry & Guillaume Cizeron ... had been skating together less than a year. They made visible errors. Wobbly step sequences. Messy twizzles. Yet the judges still gave them the edge.
"It's a subjective sport," Bates said. Hard to argue with that.
Here's what stopped me cold (pun intended):
Chock nailed it when she said through tears: "There needs to be some sort of judgment for the judges. So that we know we're getting the best from the judges & have a level and fair playing field."
She's not just talking about figure skating. She's talking about your company.
Right now, most organizations are "judging" their AI transformation the same way Olympic ice dance judges score a free dance ... subjectively. Gut feel. Vibes. Someone in the C-suite says "I think it's going well" & everyone nods along.
That's how you lose the Gold.
Here's the pivot. When you're rolling out AI across your organization, you need to judge the judges. Meaning: who is evaluating whether your AI initiatives are actually working? What are they measuring? And are those metrics applied fairly across every team?
3 things to do over the next week:
(1) Pick one AI initiative in your company. Just one. Ask the person leading it: "How do we know this is working?" If the answer sounds like an ice dance score ... vague, subjective, open to interpretation ... you've got a problem.
(2) Define what a "perfect skate" looks like before anyone hits the ice. The clearest signal of a broken evaluation system is one where the criteria get decided after the performance. Set your success metrics before you launch, not after.
(3) Watch for the "French judge" problem. In the Olympics, people immediately questioned whether national bias influenced the scoring. In your company, the equivalent is the executive who champions a tool & then "evaluates" its success. Separate the sponsor from the scorekeeper.
Chock and Bates did everything right, yet still came up short because the system that judged them wasn't transparent. Don't let that happen to your AI transformation.
Skating is subjective. Your results don't have to be.
Lace up,
Jeff