Really happy to share that our work on “Learning Metrics that Maximise Power for Accelerated A/B-tests” will appear in the Applied Data Science track of #KDD2024. We address a common problem in industry: how can we minimise type-II errors in online controlled experiments? Can we simply learn a metric combination that directly minimises this quantity?

This project started out as “let’s implement the ideas from this great paper by Eugene Kharitonov et al.”, but we ended up proposing some extensions to their seminal work that have proven quite effective, and are now used across ShareChat. Proud to have this work out there, let us know what you think.

Additionally, we were assigned an Area Chair who clearly cared: initial low-confidence reviews prompted them to invite more reviewers (we got 8!), and the AC themselves rightly pushed back on our rebuttals, pointing out arguments they deemed insufficient. It’s easy to be positive about peer review when your paper gets accepted, but it seems there are still some unsung heroes out there...

Work with Aleksei U.
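For readers curious what “learning a metric combination that maximises power” can look like, here is a minimal illustrative sketch (my own simplification, not the paper’s actual method): given per-metric treatment-control deltas and noise covariances from past experiments, a linear combination `w` that maximises the average squared z-statistic can be found as the top generalised eigenvector of the signal and noise matrices. All names and the data setup below are hypothetical.

```python
import numpy as np

def learn_metric_weights(deltas, sigmas):
    """Sketch: learn weights w for a linear metric combination that
    maximises the average squared z-statistic
        z^2 = (w' delta)^2 / (w' Sigma w)
    across past experiments. With an averaged noise covariance, this is
    the top generalised eigenvector of (sum_i delta_i delta_i', mean Sigma).
    """
    # "Signal": outer products of observed treatment effects per experiment.
    D = sum(np.outer(d, d) for d in deltas)
    # "Noise": average covariance of the metric deltas.
    S = sum(sigmas) / len(sigmas)
    # Top eigenvector of S^{-1} D solves the generalised eigenproblem.
    vals, vecs = np.linalg.eig(np.linalg.solve(S, D))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)

# Toy usage: two metrics, where only the first carries the treatment signal.
deltas = [np.array([0.10, 0.00]), np.array([0.12, 0.01])]
sigmas = [np.eye(2) * 0.01, np.eye(2) * 0.01]
w = learn_metric_weights(deltas, sigmas)
```

In this toy setup the learned weights concentrate on the first metric, since it dominates the observed effects relative to its noise.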
Amazing, would love to give it a read; it's a big problem we face as well
Highly relevant contribution - extremely valuable for most internet platforms
Congratulations on your publication Olivier Jeunen 👏
Congratulations, Olivier Jeunen and team!
Congratulations Olivier Jeunen and team!
Congrats to you and the team!
Congrats!
Preprint: https://arxiv.org/abs/2402.03915