Charles Antoine Malenfant
New York, New York, United States
3K followers
500+ connections
Other similar profiles
- Xuan (Sabrina) Hu
Shanghai, China
- Zora Liu
Tokyo, Japan
- Palmer Cluff
Portland, OR
- Rachel Abbe
New York, NY
- Ossama Ayesh
New York City Metropolitan Area
- Tony Hsieh
New York, NY
- Dev Patale
New York, NY
- Abhishek Raut
San Francisco Bay Area
- Maitri Vasa
Pittsburgh, PA
- Ruchit M.
Copenhagen Metropolitan Area
- Joseph Perry
Boston, MA
- Kalyani Nawathe
Milpitas, CA
- Joseph Otori
Dimondale, MI
- Fei Feng
Greater Seattle Area
- Jeremy Malaluan
Salesforce Developer | 2x Salesforce Certified | Software/Web Developer | IT Specialist | PowerBI Expert | HOH Fellowship Candidate | Photographer 📷
Honolulu, HI
- Aleksandra Deis
Sunnyvale, CA
- Jonathan Cole
Commerce | Startups | 40 Under 40
Petaluma, CA
- Akshata Patel
New York, NY
- Shubhangi Khanna
San Francisco Bay Area
- Bryan Wang
San Francisco, CA
Explore more posts
-
Ahmad Basyouni
CUNY Tech Prep: This week, we explored open-source models with Hugging Face. For my Wordle group project, I’m planning to incorporate Meta's NLLB (No Language Left Behind) translation model, which supports over 200 languages. The model uses Torch tensors, essentially multi-dimensional arrays that efficiently process large-scale data, to handle translations smoothly. By integrating NLLB, I can offer translations and personalized hints in real time, making the game accessible to a diverse audience. Check out Meta's model here: https://lnkd.in/eCbZJz6x #MachineLearning #DataScience #CTP #HuggingFace #Meta
27
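As a rough sketch of what that integration could look like, here is a minimal way to run NLLB through the Hugging Face transformers library; the checkpoint name, example sentence, and target language are illustrative assumptions, not details from the post.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/nllb-200-distilled-600M"  # assumed checkpoint; the post does not name one
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize an English hint into PyTorch tensors (the "Torch tensors" mentioned above)
inputs = tokenizer("Hint: today's word starts with a vowel.", return_tensors="pt")

# Force the decoder to start in the target language (Spanish here, as an example)
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("spa_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```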
-
Jim Blomo
"Why did Santa's AI-driven sleigh keep running into trees?" So much 🔥 from Julie Wang and John Allard in today’s OpenAI live stream! They covered Reinforcement Fine-Tuning, medical research, and shared some truly punny jokes. Moments like these remind me why I love my team: brilliant people solving fascinating problems—and having fun along the way! https://lnkd.in/gJnNiTXc If you'd like to participate in the Research Program by sharing data, check out https://lnkd.in/gZYehx9T or ping me. Reinforcement Fine-Tuning is a new technique for teaching the new models how to reason. It's kinda like how AlphaGo taught a model how to play by "grading" games, then learning what "good" moves are by comparing moves in winning vs losing games. This opens up a whole new paradigm for models, where you're not just teaching them the answer, but teaching them how to arrive at the right answer. What is Reinforcement Fine-Tuning (RFT)? It’s a groundbreaking technique for teaching models how to reason. Think of it like AlphaGo: the model learns by being “graded” on games and understanding what makes moves in winning games better than losing ones. With RFT, prompts are like the “games” and chain-of-thought reasoning is like the “moves.” By evaluating the "winning" reasoning steps, the model learns which ones lead to better answers. This means we’re not just teaching models the answers—we’re teaching them how to think through problems and consistently arrive at the right solutions. My team’s mission is simple: give you the world’s best model for your domain. If you’re finding it hard to get API responses to work for your business—whether it’s a performance issue, cost concern, or accuracy challenge—let me know. We have tools to improve outcomes, and I love working with customers to see how we can do even more. Oh, and Santa’s sleigh? Julie gave us the answer: “Because he hadn’t ‘pine’-tuned yet!” 🥁
26
1 Comment -
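To make the "grading" idea a bit more concrete, here is a toy grader of the kind RFT relies on; this is a hypothetical sketch, not OpenAI's actual RFT API, and the sample format and scoring rules are invented for illustration.

```python
# Hypothetical grader in the spirit of RFT: score a model's chain-of-thought
# completion so training can reinforce reasoning that reaches correct answers.
def grade(sample: dict, completion: str) -> float:
    """Return a reward in [0, 1] for one prompt/completion pair."""
    final_line = completion.strip().splitlines()[-1]
    if final_line == sample["reference_answer"]:
        return 1.0   # final answer is correct
    if sample["reference_answer"] in completion:
        return 0.5   # correct answer appears, but not as the final answer
    return 0.0       # incorrect

# Invented example: the prompt is the "game", the reasoning steps are the "moves"
sample = {"prompt": "What is 17 * 6?", "reference_answer": "102"}
completion = "17 * 6 = 17 * 5 + 17 = 85 + 17 = 102\n102"
print(grade(sample, completion))  # 1.0
```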
Friea Berg
🎯 "Data was, is, and will be the only fuel. The analogy with fossil fuel is a good one, since the data we have today is as dirty as oil and coal. What big tech companies tried was to put in a lot of 'garbage' data, and what they got was fantastic. Yet, we are at the point of diminishing returns. Data is the beginning and the end of this cycle." Great thread, particularly this comment 👆 from Ilija Lazarevic
20
2 Comments -
Sandra Zelen
I recently completed my summer research at Columbia Summer Undergraduate Research Experiences in Mathematical Modeling (CSUREMM)! Throughout the program, I explored how subway station centrality influences the performance of public high schools in Manhattan. This opportunity allowed me to delve deeply into Graph Theory, Multiple Linear Regression, Principal Component Analysis, and K-Means Clustering, which led to intriguing findings. 🔍𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗛𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁𝘀: 🔹𝗠𝘂𝗹𝘁𝗶𝗽𝗹𝗲 𝗟𝗶𝗻𝗲𝗮𝗿 𝗥𝗲𝗴𝗿𝗲𝘀𝘀𝗶𝗼𝗻: The models exhibited limited predictive power of subway station centrality on school performance. 🔹𝗣𝗿𝗶𝗻𝗰𝗶𝗽𝗮𝗹 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀: Identified three key components (Neighbor-driven Centrality, Distance-driven Centrality, Performance Metrics) which captured most of the variance in the data. 🔹𝗞-𝗠𝗲𝗮𝗻𝘀 𝗖𝗹𝘂𝘀𝘁𝗲𝗿𝗶𝗻𝗴: Revealed that neighbor-driven and distance-driven centrality PCs generated similar clusters, indicating that the clustering approach effectively captured centrality-performance patterns. Showcasing my research at: 🔹Columbia Undergraduate Pathway Programs Symposium 🔹Summer Undergraduate Math Symposium 🔹Summer Math Day at Jane Street allowed me to gain new insights, deepen my enthusiasm for the research topic, and connect with like-minded peers. I would like to thank Caroline Smyth and Kathy Xu for their incredible collaboration on this project, as well as George Dragomir and Vihan Pandey for their invaluable mentorship and guidance throughout the challenges we faced.
193
27 Comments -
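For readers curious about the kind of pipeline described above, here is a minimal scikit-learn sketch of PCA followed by K-Means; the feature names and random data are assumptions standing in for the actual subway and school datasets.

```python
# Sketch of the pipeline: standardize features, reduce with PCA, cluster the components.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# stand-in columns: degree, betweenness, closeness centrality + a school performance metric
X = rng.random((100, 4))

X_std = StandardScaler().fit_transform(X)      # put centrality and performance on one scale
pca = PCA(n_components=3)                      # three components, as in the post
components = pca.fit_transform(X_std)
print("explained variance:", pca.explained_variance_ratio_)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)
print("cluster sizes:", np.bincount(labels))
```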
Sujay Srivastava
Ilya Sutskever confirmed at NeurIPS '24 that the pre-training scaling law will not hold as it has. Internet data, the fossil fuel of AI, is running out. This is not the end of the AI hype: the rapid rise of foundational AI has outpaced our ability to fully grasp its long-term societal impact. AI agents and applications are poised to transform every facet of our lives, and we will see it happening in the coming few years. There will still be improvements in model architecture and synthetic data creation, so models will improve, just not at the lightning speed they have until now. Looking forward, the single most important milestone in AI is creating systems capable of advancing themselves—AI researchers that can innovate and improve foundational models autonomously. We need to assess how far we are from this stage.
9
-
Henry Diep
Today, I found a fascinating way to think about “Tokens” in LLMs. It came from a conversation between Jensen Huang and Patrick Collison at a recent Stripe event. When asked about the future of compute capacity in five years, Jensen smartly dodged that question a bit, but then gave an excellent explanation and analogy about why tokens are the new force that will power the next decades of humanity. He explained that we're now producing something unprecedented (and at scale): floating point numbers that possess “value”, which we now call tokens. These tokens are valuable because they encapsulate “intelligence”. People are now taking these tokens and transforming them into English, French, images, videos, chemicals, proteins, robotic movements, etc. And many are working hard to expand the range of concepts and ideas we can create with these tokens. He then went on to make a compelling comparison between tokens and electricity. In the last industrial revolution, we successfully found a way to convert “atoms” into “electrons” (by boiling water to power electricity turbines). And now, we've discovered a way to convert "electrons" into "tokens" (by using energy to power data centers that train and run LLMs). When electricity was first introduced, few people understood its value. Today, paying for kilowatts is routine. The same will happen with tokens. Right now, only early adopters and builders are paying for tokens. Soon, everyone will be paying for tokens on a daily basis to supercharge productivity and power new products and services. Many new industries will be born from and built on top of tokens. When I first heard this way of thinking, I got goosebumps. Perhaps it’s because Jensen has great storytelling skills that help him sell what he builds, but I have to admit this way of thinking gave me a brief surge of pride. I'm proud of humanity's collective effort and creativity, which have taken us from living under rocks to building machines that can "think." That's nothing short of a miracle.
18
5 Comments -
Jagmeet Singh Minhas
Week 7 of CUNY Tech Prep is coming to a close, so I'd like to give you all a few updates. First of all, my project is going along great! I’ve been diving deep into image classification and transfer learning, and it’s exciting to see everything come together. Second, the key phrase of this week is... decision trees! We’ve been learning about this powerful machine learning algorithm, which helps in both classification and regression tasks by breaking down data into branches based on feature splits. Decision trees are intuitive and easy to interpret, but can also be prone to overfitting, so I’ve been learning and experimenting with techniques like pruning to keep the models efficient and accurate.
17
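A small sketch of that idea, assuming scikit-learn and a stock dataset rather than the actual CTP project data: fit one unpruned tree and one with cost-complexity pruning, then compare train and test accuracy to see how pruning curbs overfitting.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unpruned tree tends to memorize the training split
unpruned = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
# Cost-complexity pruning trades a little training accuracy for generalization
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_tr, y_tr)

print("unpruned train/test:", unpruned.score(X_tr, y_tr), unpruned.score(X_te, y_te))
print("pruned   train/test:", pruned.score(X_tr, y_tr), pruned.score(X_te, y_te))
```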
-
Maximillian Moundas
Over the last nine months, I've had the honor of contributing to the development of Amplify, an innovative open-source platform that facilitates secure, private use of generative AI for organizations. Amplify supports a diverse range of AI models, including those from OpenAI, Mistral, and Anthropic. Our platform continuously evolves, with plans to integrate additional models, providing users with a wide array of options to find the perfect fit for their specific generative AI needs. With Amplify, users gain extensive control over model configurations, enabling them to tailor generative AI responses by adjusting settings like temperature and verbosity, as well as creating custom instructions. The platform supports robust collaboration features, allowing users to share conversations, craft and distribute prompt templates, and share tailored AI assistants that can be created in minutes. Additionally, Amplify utilizes retrieval-augmented generation to enable users to prompt against documents of any size. Unlike direct partnerships with Generative AI providers like OpenAI or Microsoft, Amplify operates on a pay-as-you-go model, where your costs come solely from infrastructure and model usage costs. No subscriptions necessary. I use Amplify every day, and I am still working to make the platform better. If you would like to deploy Amplify for yourself or view the open source code, please visit https://lnkd.in/etSszJeN or visit https://lnkd.in/ehe-_DHr to learn more. I am so grateful to have had the opportunity to work alongside Jules White, Allen Karns and Karely Rodriguez while creating Amplify. I have learned so much during my time at Vanderbilt, and I can’t wait to share what else I have been working on!
39
5 Comments -
Jimmy Guerrero
Join Daniel G. and Jacob Marks, PhD for an exciting computer vision hackathon in NYC: https://lnkd.in/gCFbrxTd … where ML enthusiasts and college students alike will come together to tackle real-world challenges in the field and win cash prizes and swag. Participants can expect an immersive experience filled with learning, networking, and the opportunity to showcase their skills. Whether you’re a beginner eager to explore foundational concepts or an intermediate looking to add flair to your projects, this event offers something for everyone. What you can expect: 🚀 Tech talks: Deep dive tech talks on Computer Vision and Data-Centric AI. 🔧 Hands-on Workshops: Learn how to build AI applications through code examples 🏅 Engaging Challenges: Tackle real-world problems with AI solutions. 🤝 Networking Opportunities: Connect with fellow developers, builders, and industry experts. 🎁 Exciting Prizes: Demo your projects and compete for cash prizes. #computervision #ai #artificialintelligence #machinevision #machinelearning #datascience
2
-
Sonal Kumar
Excited to share that two of our papers have made it into the EMNLP '24 main track. 1. GAMA - A Large Audio Language Model that is capable of performing complex reasoning on audio. Demo, code and dataset available at: https://lnkd.in/e23DZ6A4 Since its release, GAMA has consistently outperformed other LALMs, and we’re excited to explore a wider range of tasks with GAMA 2.0. 2. EH-MAM: Easy-to-Hard Masked Acoustic Modeling for Self-Supervised Speech Representation Learning - a self-supervised learning approach for speech representation that uses an adaptive masking strategy, progressively focusing on harder regions for reconstruction. Paper and code will be shared soon. This would not have been possible without my amazing team - Sreyan Ghosh, Utkarsh Tyagi, Ashish Seth, S Sakshi, Chandra Kiran Reddy Evuru, Ramaneswaran S, Oriol Nieto - and without the support of my advisors Dinesh Manocha and Ramani Duraiswami.
105
11 Comments -
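Since the EH-MAM code has not been released, here is only a toy sketch of what an easy-to-hard masking schedule could look like in spirit; the difficulty scores and scheduling rule are assumptions, not the paper's method.

```python
# Toy easy-to-hard masking: early in training, mask frames that are easy to
# reconstruct; as training progresses, shift the mask toward harder frames.
import numpy as np

def select_mask(difficulty: np.ndarray, mask_ratio: float, progress: float) -> np.ndarray:
    """difficulty: per-frame reconstruction difficulty (higher = harder).
    progress: training progress in [0, 1]; 0 favors easy frames, 1 favors hard."""
    n_mask = int(len(difficulty) * mask_ratio)
    order = np.argsort(difficulty)                 # indices sorted easy -> hard
    start = int(progress * (len(difficulty) - n_mask))
    return np.sort(order[start:start + n_mask])    # window slides toward the hard end

frames = np.random.rand(20)                        # stand-in difficulty scores
print(select_mask(frames, mask_ratio=0.3, progress=0.0))  # easiest frames
print(select_mask(frames, mask_ratio=0.3, progress=1.0))  # hardest frames
```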
Amruth Niranjan
🚀 BostonHacks 2024! Last weekend, Andrew Sun and I won 2nd place for the CometCare track at BostonHacks 2024, which was our first hackathon. We had a lot of fun working from the ground up to create a final product called GoodDeeds, a platform designed for NGOs, non-profits, and other local organizations looking to post about events requiring community involvement across the spectrum of urgency, ranging from severe post-disaster recovery efforts to more relaxed community service initiatives. 🏢 For Organizations Organizations can sign up to create events, specifying a location, urgency rating, volunteer needs, and a description of the event. 🙋 For Volunteers Individuals like you and me can sign up as volunteers with a specified “home” location and a distance radius we’re willing to travel. Volunteers can view all events within their set radius, filter by date and start time, and sign up. Their email is added to a mailing list for updates, and they can reach out to event organizers or ask questions to an OpenAI chatbot directly within the app. 🎮 XP Levels To encourage community involvement, we introduced a game-like feature: XP levels for individual users that increase with volunteering attendance. Higher XP points are awarded for events with higher severity ratings (like post-hurricane cleanup). Users and organizers can see a leaderboard displaying top volunteers with their XP points and approximate location. 📣 Real-Time Notifications We integrated a notification system, allowing organizers to alert individuals within a set radius about upcoming events or updates for events they’ve signed up for, powered by the Twilio SendGrid API. The emails look nice, too! 💻 Tech Stack We had a lot of fun learning Streamlit, a frontend library built in Python, along with building a Flask backend and using a PostgreSQL database hosted on Neon. The frontend was deployed on Streamlit Cloud, while the backend was deployed on Render. Given the short time constraint, we’re thrilled with what we accomplished, and we look forward to future development! 🌐 Check It Out! Demo Video: https://lnkd.in/ecJyGufP Live Site: https://lnkd.in/ey5ht6yG (starts cold, may take a while on first signup) Source Code: https://lnkd.in/ew7PJySZ
37
14 Comments -
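As an illustration of the XP mechanic described above, here is a hypothetical Flask endpoint sketch; the route name, payload fields, and XP formula are invented for this example and are not taken from the GoodDeeds source code.

```python
# Hypothetical sketch: award XP proportional to an event's urgency rating
# when a volunteer's attendance is recorded.
from flask import Flask, jsonify, request

app = Flask(__name__)
BASE_XP = 50  # assumed base award per event

def xp_for_event(urgency: int) -> int:
    """urgency: 1 (relaxed community service) .. 5 (post-disaster recovery)."""
    return BASE_XP * urgency

@app.post("/attendance")
def record_attendance():
    payload = request.get_json()  # e.g. {"volunteer_id": 7, "urgency": 4}
    earned = xp_for_event(int(payload["urgency"]))
    # a real backend would persist this to the volunteer's row in PostgreSQL
    return jsonify(volunteer_id=payload["volunteer_id"], xp_earned=earned)

if __name__ == "__main__":
    app.run(debug=True)
```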
Rahul Agrawal
Over the last few months, I have been increasingly leveraging LLMs to write code. The share of my code that comes from LLMs has grown from 10% to nearly 78% (based on lines of code). I have my own pipeline to fine-tune LLMs to augment my needs. Observations - 1. I am producing more code --> MORE IMPACT (approx 2.3X) 2. I am reading more code --> MORE MAINTAINABLE CODE 3. I am learning more, as now I can start from a concept and expand out ---> MORE LEARNING 4. I am incorporating best practices --> BETTER CODE 5. Less typing --> LOWER REPETITIVE STRAIN INJURY #AIBUDDY
76
8 Comments -
Zaheer Mohiuddin
Entry-level SWEs are making about $395k at Jane Street in New York. Interestingly, when compared to the median salaries for the region using our heatmap, New York’s median SWE salary is about $190k, meaning that Jane Street is paying above the 90th percentile for its talent (not uncommon for hedge funds). At a glance, the median salary for New York is lower than that of San Francisco, Seattle, or other tech hubs across the country. However, firms like Jane Street demonstrate that exceptional opportunities can exist at top firms even outside of the conventional “best places to work as a SWE.” These hedge funds are often fairly small (a few hundred to a few thousand employees at the largest), but there are dozens of these firms. For ambitious professionals, I hope these stats convey the realm of what's possible. In general, the top-paying companies in any location will often beat the median pay by a high margin, and that pay may not always be proportional to the median. Check out more on our heatmap here: https://lnkd.in/djT-8G7Y #salarytransparency
507
37 Comments -
Natalie Leal Blanco
More often than not, I’m here reflecting: What are the implications of making LLMs endlessly larger? My mind visualizes impossible levels of energy consumption, increasing exponentially each day. It feels like a scene from The Matrix, and not a pretty one. Recently, I’ve been researching RAG (Retrieval-Augmented Generation), and it sounds promising: there are key points showing its potential to reshape how we develop and deploy AI. The aspect that stands out the most to me is the potential to shift away from the “Texas approach” (remember my last post?). RAG could let us move towards smaller(!), modular, specialized models that integrate only the data they need, when they need it, rather than scaling endlessly. It’s like my dream of sibling models working together, creating a more targeted approach for better results. Why am I against ginormous models, you might ask again? LLMs can consume as much energy as powering a small city. City. RAG could allow us to significantly cut energy consumption by eliminating constant retraining and enormous parameter counts. Have I mentioned I dislike the massive cash burning? Yes, it’s not my money. Yes, who cares about the overlords’ money. But honestly, I just hate waste. Did you know the cost of training foundation models has been doubling every 6-10 months? Doubling. Read that again. RAG offers the potential to invest once in a base model and then build on it through knowledge retrieval, rather than constant retraining. This approach could make AI access more affordable, opening doors for smaller organizations and underserved communities. Of course, the cynic in me asks: are we simply shifting the computational burden elsewhere? RAG could reduce training costs, but it would introduce database management and retrieval overhead. My cent and a half? What do you think is more costly, monetarily and environmentally: that overhead, or today’s energy consumption? I think you all know my answer. But let’s not forget: RAG doesn’t rely solely on LLM parameters; it retrieves data in real time from external sources, meaning APIs, websites, organizational databases, and beyond, and this retrieval approach poses risks and requires robust data security. However, again, the cost will still likely be much lower than today’s soaring energy bills and carbon footprint. So, the question for the $15 in my wallet is: will AI’s future depend on more power, or will it be ruled by better processes? My other half cent? A leaner, privacy-conscious, eco-friendlier AI isn’t just a possibility; it should be our goal. All of us. #AI #SustainableTech #EnergyEfficientAI #DataPrivacy
18
6 Comments -
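For readers new to the pattern, here is a minimal retrieval-augmented sketch: a toy TF-IDF retriever selects context that a smaller model would answer from, instead of baking everything into model weights. The documents, query, and prompt format are assumptions for illustration.

```python
# Toy RAG pipeline: retrieve the most relevant documents with TF-IDF,
# then hand only that context to the language model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Our data centers switched to renewable energy contracts in 2023.",
    "The base model is retrained only once per year.",
    "Retrieval indexes are refreshed nightly from the knowledge base.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

query = "How often is the model retrained?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to a (smaller) LLM
```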
Daniel Carrera
Proud to share our work on enhancing LLM efficiency through dynamic early exit strategies! Collaborating with Juan D. Castano and Amir Voloshin, we built on Meta’s LayerSkip framework to introduce interpretable heuristics that identify when token predictions stabilize. Excited about the potential this unlocks for faster, more efficient LLM inference. A big shoutout to my amazing teammates for their hard work and insights!
22
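As a rough illustration of a stabilization heuristic in this spirit (not the exact rule from the project), here is a toy function that exits once the top-1 next-token prediction stays unchanged across a few consecutive layers; the logits here are synthetic stand-ins.

```python
import torch

def early_exit_layer(per_layer_logits: torch.Tensor, patience: int = 3) -> int:
    """per_layer_logits: [num_layers, vocab_size] next-token logits, one row per
    layer (e.g. intermediate hidden states passed through the unembedding head)."""
    top1 = per_layer_logits.argmax(dim=-1)           # predicted token at each layer
    streak = 1
    for layer in range(1, len(top1)):
        streak = streak + 1 if top1[layer] == top1[layer - 1] else 1
        if streak >= patience:
            return layer                              # prediction has stabilized
    return len(top1) - 1                              # no early exit; use final layer

logits = torch.randn(32, 50257)                       # stand-in for a 32-layer model
logits[20:, 42] += 10.0                               # prediction stabilizes from layer 20 on
print(early_exit_layer(logits))                       # exits well before the final layer
```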
-
Fasih Munir
With Meta releasing their new open-source Llama 3.1 model, it makes me wonder how valuable #opensource work might be for a VC. Jensen Huang of NVIDIA suggests that he would (and I paraphrase) much rather have his leather jacket made for him than have to make it himself. Which is to say, open-source leather is great, but one would rather not spend the time figuring out how to make a leather jacket. And possibly for #venturecapital, investing in the leather jacket is a better bet, given the more predictable business models, rather than giving out leather for free. But there is still a clear value add for open source. They set the frameworks upon which other companies build. It would be like controlling raw materials such as coal or oil: everyone becomes dependent on them. But open source also means free, so while you build dependency, you do not generate income. There are ways around this, of course. Many open source projects like Oracle MySQL provide support and consulting services to enterprise customers. But that is restricted to enterprise support, and money is left on the table for customers on the tail end that do not require enterprise-level support. For VCs, since their model relies on maybe a 10-year horizon at the high end, they would need companies that can show rapid growth and profitability. Open source projects that run enterprise support can become profitable, but it can take much longer than VCs expect (see article below). Companies may choose to offer more attractive subscriptions to non-enterprise customers, which is common in #SaaS, but where would the defensibility of the product be if it is open sourced? Competitors could launch similar products at a cheaper rate, maybe triggering a price war if companies do not keep their cool. Lastly, there is still the problem of valuation. How do you value open source if you give out the core product for free? You can value the support business, but what about valuing the product itself? How much of the market can it capture or has it already captured, and how realistic can this valuation be? You would probably need to value not only the supply, i.e. the cost to recreate, but also the demand, i.e. the use in other companies and the money that they may save (see below paper by Harvard Business School). How then do you decide how much to invest or why to invest? Maybe hope for a big buyout? But is hope enough? Curious to learn from anyone who has a view or experience in this. Some references: 1. https://lnkd.in/eg3WqTR6 - 24 minutes onwards for open vs closed and the leather jacket 2. https://lnkd.in/ev3_4hNP - HBS paper on valuations 3. https://lnkd.in/eyCYyWuU - Article on open source monetization Runa Capital Anna Elokhina Hugo Mkhize Faisal Chowdhry Jonas Eichhorst
12
-
Kurt Alan Sell
Yesterday was my last day on the BAT team at iRobot. It’s going to be hard to express the profundity of my time there with only words, but I shall try. When I got to iRobot I had no clue what I was doing. I was trained in CS theory, but I didn’t know how to write quality code, and had even less of a clue how to manage distributed systems. I made lots of mistakes at the beginning, lots of them. But the thing is, I never forget a mistake I make, so I rarely repeat them. In fact, I don’t consider an issue fixed until I figure out a way to prevent it from happening again (i.e., adding a test, telemetry, etc.). So in making mistakes, I learned the ropes. Truth is, I don’t think most people know what they’re doing, and with how fast the world changes, it’s hard to believe that anyone knows what they’re doing. But by admitting to your mistakes and learning from them, there’s nothing you can’t do. For my fellow iRobot coworkers who allowed me the grace to make mistakes and come to terms with who I am, there’s no shortage of thanks I can give. It was wonderful working with each and every one of y’all, and I am so grateful for having had the experience.
164
6 Comments -
Juan Castano
Excited to share our final project from Georgia Tech's Deep Learning course! 🎓 With my teammates Amir Voloshin and Daniel Carrera, we enhanced Meta's LayerSkip framework by introducing heuristics for dynamic early exits in LLMs, improving efficiency without sacrificing quality. Check out our work: https://lnkd.in/gUXB4XvS 🚀
47
4 Comments