Brian K. Buntz’s Post

Editor-in-Chief, R&D World @ WTWH Media LLC | Data-Driven Storyteller

The Information explores how OpenAI's reasoning models could help counteract potentially slowing progress from pretraining, as larger, more sophisticated models deliver increasingly incremental gains over their predecessors. It also notes that "relatively few developers" are using o1 models. While I haven't used the full-fledged o1 models (and am certainly not a developer), the current models sometimes seem to think hard about how to break code. I've seen them replace functional authentication with placeholders that you have to go back and update later, or swap correct variables for wrong ones: upper- or lowercasing something you didn't ask for, or adding an underscore where you didn't want one. At present, I think it's still better to keep a tight feedback loop with the user. Otherwise, while the genAI is thinking long and hard about something that might not be what you wanted, a game of telephone can ensue in which mistakes sometimes multiply during reflection. A snippet from the paywalled article is below; The Information has done a lot of unique reporting on the subject.
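To make the failure mode concrete, here's a hypothetical before/after sketch (all names and values are invented for illustration): a model asked to "clean up" a working config lookup might silently uppercase a key and swap the real credential for a placeholder, so the code still runs but no longer does what it did.

```python
# Hypothetical illustration of an AI "refactor" gone wrong.
# The names and values below are made up for this example.

config = {"api_key": "real-key-from-vault"}

def get_api_key_before(cfg):
    # Original working version: reads the correct, lowercase key.
    return cfg["api_key"]

def get_api_key_after(cfg):
    # After the rewrite: the key was silently uppercased, and the
    # real credential was replaced with a placeholder "to update later".
    return cfg.get("API_KEY", "YOUR_API_KEY_HERE")

print(get_api_key_before(config))  # the real key
print(get_api_key_after(config))   # the placeholder sneaks through
```

Both versions execute without errors, which is exactly why this class of mistake is easy to miss without a human reviewing each change.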
