Asankhaya Sharma’s Post

CTO @ Patched (YC S24)

Beating o1-preview on AIME 2024 with Chain-of-Code reasoning in optillm

In the past week, there has been a flurry of releases of o1-style reasoning models from DeepSeek, Fireworks AI, and Nous Research. In our open-source optimizing inference proxy, optillm (https://lnkd.in/gN6_kNky), we have implemented several techniques that use additional inference-time compute to improve accuracy and that work with a variety of base models.

Today, we are happy to announce that by using the chain-of-code (coc) plugin in optillm, we are able to beat OpenAI's o1-preview on AIME 2024 (pass@1) using SOTA base models from both Anthropic and Google DeepMind.

For reference, also see the original paper that introduced the idea of CoC: Chain of Code: Reasoning with a Language Model-Augmented Code Emulator - https://lnkd.in/gtq7hbjx. We have done an independent implementation of coc in optillm, as the original source code was not released.
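For readers who want to try this, here is a minimal sketch of sending a chain-of-code request through the optillm proxy. It is an illustration under assumptions, not the exact setup behind the reported numbers: it assumes optillm is running locally with its OpenAI-compatible endpoint on the default port, that the coc plugin is selected by prefixing the model name (the convention optillm uses for its approaches), and a placeholder Anthropic model slug; check the optillm README for the exact configuration.

```python
# Minimal sketch (assumptions noted below), not the authors' benchmark harness.
# Assumed: optillm is already running locally on port 8000 with an OpenAI-compatible
# API, the plugin is chosen by prefixing the model slug with "coc-", and the base-model
# name below is a placeholder for whatever Anthropic/Google model you have access to.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # point the client at the optillm proxy
    api_key="optillm",                    # the proxy forwards requests to your provider
)

# AIME-style question; with the coc plugin, the base model drafts and executes code
# as part of its reasoning before producing the final answer.
response = client.chat.completions.create(
    model="coc-claude-3-5-sonnet-20241022",  # hypothetical slug: "coc-" + base model
    messages=[
        {"role": "user", "content": "Find the remainder when 9^2024 is divided by 1000."}
    ],
)

print(response.choices[0].message.content)
```

Because optillm sits in front of the base model as a proxy, the same client code works unchanged across providers; only the model slug needs to change to switch the underlying model or the inference-time technique.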

