Latest Posts

AI Machine Learning & Data Science Research

Scaling Multi-Objective Optimization: Meta & FAIR’s CGPO Advances General-purpose LLMs

In a new paper The Perfect Blend: Redefining RLHF with Mixture of Judges, a research team from Meta GenAI and FAIR introduces Constrained Generative Policy Optimization (CGPO), a more structured approach to RLHF that advances the performance of general-purpose LLMs.
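The "mixture of judges" idea can be sketched roughly as follows. This is a minimal illustration of constraint-gated rewards (my reading of the abstract, not Meta's implementation): each response is scored by a reward model, but a set of rule-based or model-based judges checks task-specific constraints, and any violation replaces the reward with a penalty so the policy update is steered away from constraint-breaking behavior. The judge functions and penalty value here are hypothetical.

```python
from typing import Callable, List

Judge = Callable[[str], bool]  # returns True if the response passes the check

def gated_reward(response: str, rm_score: float, judges: List[Judge]) -> float:
    """Return the reward-model score only if every judge passes;
    a constraint violation is assigned a fixed penalty instead."""
    if all(judge(response) for judge in judges):
        return rm_score
    return -1.0  # hypothetical penalty for a violated constraint

# Two toy judges: a length constraint and a "no refusal" check.
judges = [
    lambda r: len(r.split()) <= 50,
    lambda r: "I cannot" not in r,
]

print(gated_reward("The capital of France is Paris.", 0.9, judges))  # 0.9
print(gated_reward("I cannot answer that question.", 0.9, judges))   # -1.0
```

Mixing several narrow judges this way lets each task (e.g. safety, math, instruction following) enforce its own constraints rather than collapsing everything into one scalar reward.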

AI Machine Learning & Data Science Research

NVIDIA’s Wolf: World Summarization Framework Beats GPT-4V on Video Captioning by 55.6%

In a new paper Wolf: Captioning Everything with a World Summarization Framework, a research team introduces a novel approach known as the WOrLd summarization Framework (Wolf). This automated captioning framework significantly advances video captioning over GPT-4V, improving caption quality by 55.6% and similarity by 77.4%.

AI Machine Learning & Data Science Research

Achieving 8× Data-Efficiency Gains with Reinforcement Learning on Synthetic Data in Large Language Models

In a new paper RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-Fold, a research team provides insights into how synthetic data affects performance, showing that a specific training schema, reinforcement learning on model-generated incorrect responses, delivers consistent gains over using positive data alone, matching the performance of an 8× larger volume of positive synthetic data.
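The intuition behind keeping incorrect synthetic solutions can be sketched with a toy policy-gradient-style calculation (my illustration, not the paper's training code): grading each sampled solution against the reference answer yields binary rewards, and centering those rewards on a batch baseline gives incorrect samples a negative advantage, so they actively push probability mass away from wrong solutions instead of being discarded as plain supervised fine-tuning would. The sampled answers below are hypothetical.

```python
def grade(answer: int, reference: int) -> float:
    """Binary reward from checking the final answer."""
    return 1.0 if answer == reference else -1.0

def advantages(rewards):
    """Center rewards on the batch mean (a simple policy-gradient
    baseline): correct samples get positive advantage, incorrect
    ones negative, so both carry a training signal."""
    baseline = sum(rewards) / len(rewards)
    return [r - baseline for r in rewards]

# Hypothetical sampled final answers for a problem whose reference is 84.
sampled = [84, 74, 84, 90]
rewards = [grade(a, 84) for a in sampled]
print(advantages(rewards))  # [1.0, -1.0, 1.0, -1.0]
```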

AI Machine Learning & Data Science Research

Contrastive Learning Advances Sleep Science: Superior Multi-Modal Model Enhances Disorder Detection

In a new paper SleepFM: Multi-modal Representation Learning for Sleep Across Brain Activity, ECG and Respiratory Signals, a research team introduces SleepFM, the first attempt at developing a multi-modal contrastive learning (CL) approach for polysomnography (PSG) analysis, which outperforms baselines on tasks such as demographic attribute prediction and sleep stage classification.
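Pairwise multi-modal contrastive learning of this kind can be sketched with a CLIP-style InfoNCE loss (a minimal illustration in the spirit of SleepFM; the paper's actual architecture and loss details may differ): embeddings of the same recording from two modalities, say brain activity and ECG, form a positive pair, and the other recordings in the batch serve as negatives. The toy 2-D embeddings below are hypothetical.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(anchors, positives, temperature=0.1):
    """Average InfoNCE loss: anchors[i] should match positives[i]
    and be pushed apart from every other positives[j]."""
    loss = 0.0
    for i, a in enumerate(anchors):
        logits = [dot(a, p) / temperature for p in positives]
        m = max(logits)  # subtract the max for numerical stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_denom - logits[i]
    return loss / len(anchors)

# Correctly paired embeddings incur a lower loss than the same
# vectors paired with the wrong recordings.
eeg = [[1.0, 0.0], [0.0, 1.0]]
ecg_aligned = [[1.0, 0.0], [0.0, 1.0]]
ecg_shuffled = [[0.0, 1.0], [1.0, 0.0]]
print(info_nce(eeg, ecg_aligned) < info_nce(eeg, ecg_shuffled))  # True
```

Minimizing this loss over many recordings drives each modality's encoder toward a shared embedding space, which is what makes the learned representations transferable to downstream tasks like sleep staging.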