Sarah Floris, MS
Greater Seattle Area
79K followers
500+ connections
Websites
- Company Website: dutchengineer.org
Experience
Other similar profiles
- Yinghan Liu (Shanghai, China)
- Eve Chen (Australia)
- Bahar Zarin, PhD (San Francisco Bay Area)
- Yash Vijay (Redwood City, CA)
- Jiawen Li (United States)
- Hank Zhu (New York City Metropolitan Area)
- Anastasia Melnikov (Boulder, CO)
- Nitin Mali (New York, NY)
- Olivia Yao (San Jose, CA)
- Hamza Tazi (New York, NY)
- Venkatesh Babu Sekar (South San Francisco, CA)
- Mithil Gupta (Atlanta, GA)
- Andrew M. (San Francisco Bay Area)
- Shruti Taware (San Francisco, CA)
- Siri Raavi (Seattle, WA)
- Prarit Lamba (San Diego, CA)
- Roujing Chen (Santa Clara, CA)
- Daniel McAuley (Seattle, WA)
- George Iordanescu (Cambridge, MA)
- Mahshad Samnejad, PhD (Los Angeles, CA)
Explore more posts
Abhishek Tandon
🤔 A Fascinating Question: How could LLMs transform medical coding? Here's a data-driven hypothesis worth exploring...

The Challenge: Medical coders today spend hours translating clinical notes into ICD-10 codes, juggling multiple reference guides and databases. It's like trying to translate a book while simultaneously checking three different dictionaries.

Here's what an LLM-powered solution could look like. Think of it as a smart highlighter that could:
- Extract key medical terminology from clinical notes
- Map common diagnoses to preliminary codes
- Create documentation checklists

Based on current LLM capabilities and publicly available healthcare data, we could potentially see:
- 25-30% faster processing for routine cases
- Improved documentation completeness
- Reduced lookup time for common codes

The Technical Framework:
- Fine-tuned LLM on medical terminology
- Integration with existing systems
- Human oversight for complex cases

Why This Matters: For a hospital processing 200 records daily, even a 20% efficiency improvement means 40 more patients getting faster care. It's not about replacing human expertise - it's about augmenting it.

💡 Key Insight: The real value isn't in automation, but in reducing cognitive load. Like having a knowledgeable assistant who handles the initial research while experts focus on critical decisions.

Question for healthcare professionals: Which parts of the coding workflow would benefit most from this kind of AI assistance?

#HealthTech #AI #Healthcare #Innovation

What specific challenges would you want this system to address first? Share below 👇
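The "smart highlighter" flow could be sketched in a few lines. Everything here is illustrative: `extract_terms` is a naive substring stand-in for the LLM extraction step, and the three-entry lookup table is a toy substitute for a real ICD-10 database.

```python
# Hypothetical sketch of the "smart highlighter" flow: extract terms,
# map them to preliminary codes, and hand the result to a human reviewer.
# A real system would call a fine-tuned LLM and a full ICD-10 database.

PRELIMINARY_CODES = {
    "type 2 diabetes": "E11.9",
    "hypertension": "I10",
    "asthma": "J45.909",
}

def extract_terms(note: str) -> list[str]:
    """Stand-in for LLM term extraction: naive substring matching."""
    note_lower = note.lower()
    return [term for term in PRELIMINARY_CODES if term in note_lower]

def suggest_codes(note: str) -> dict[str, str]:
    """Map extracted terms to preliminary codes for human review."""
    return {term: PRELIMINARY_CODES[term] for term in extract_terms(note)}

note = "Patient presents with poorly controlled Type 2 Diabetes and hypertension."
print(suggest_codes(note))  # {'type 2 diabetes': 'E11.9', 'hypertension': 'I10'}
```

The key design point is the last step: the output is a suggestion surfaced for human oversight, not a final code assignment.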
4
5 Comments
Juniper Feld
Our team at HRai Labs is incredibly excited to announce the release of the Employee Equity Toolkit, a free, easy-to-use, and fully open-source disparate-impact-analysis tool for your business: https://app.hrailabs.com/.

In an era where diversity, equity, and inclusion (DEI) are at the forefront of organizational values and strategy, having the right tools to monitor and analyze workforce data is essential. It shouldn't cost hundreds of thousands of dollars a year to outsource disparate impact studies to a legal firm. These studies aren't dynamic, don't allow for more real-time analysis, and are one-time engagements. Most importantly, employees should never face disparate impact due to organizations not regularly conducting these critical studies.

That's why HRai Labs created this open-source toolkit for organizations to run these analyses: https://lnkd.in/g3muuvgJ. We invite you to explore the Employee Equity Toolkit and leverage data to promote fairness and equity in your workplace.

Ready to learn more? Check out our Medium article explaining the details or visit us at hrailabs.com.
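For readers unfamiliar with disparate-impact analysis, the core calculation is simple. The sketch below illustrates the conventional four-fifths (80%) rule, not the HRai Labs toolkit itself; the hiring numbers are made up.

```python
# A minimal disparate-impact check using the four-fifths (80%) rule:
# flag for review when one group's selection rate is less than 80% of
# the reference group's rate. Illustrative only, not legal advice.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def four_fifths_check(rate_group: float, rate_reference: float) -> bool:
    """True if the ratio of rates falls below 0.8, the conventional
    threshold for potential disparate impact."""
    return rate_group / rate_reference < 0.8

# Example: 30 of 60 reference-group applicants hired vs 15 of 50 in another group.
ref = selection_rate(30, 60)   # 0.5
grp = selection_rate(15, 50)   # 0.3
print(four_fifths_check(grp, ref))  # 0.3 / 0.5 = 0.6 -> True (flag for review)
```

Running this kind of check continuously over live HR data, rather than as a one-time legal engagement, is exactly the dynamic analysis the post argues for.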
5
Simon Smith
The cost of training top models is increasing 2-3X per year, and at this rate will reach $1 billion per model by 2027.

The findings are from Epoch AI and follow their recent analysis of increases in compute for foundation models running at 4-5X per year. They've also provided some interesting data on the relative breakdown of cost components like staff, hardware, and energy.

A few thoughts after reading this:
- I would assume staff costs won't increase as much as hardware and energy costs, so the relative contribution of human labor to future AI models will decrease, in yet another sign of capital's increasing dominance over labor in the industry (just look at NVIDIA's ever-rising stock price).
- If compute is increasing 4-5X per year but cost is increasing 2-3X, is it safe to assume that training runs are getting more efficient?
- Some people, like Anthropic CEO Dario Amodei, have said we'll see $1 billion training runs in the next year. Does this mean the 2-3X pace is about to increase? That Epoch is wrong in its estimate? Or that Amodei is overstating the cost, possibly to put fear into competitors who can't keep pace?
- Only a few companies are going to be able to train future models. We're likely going to see further consolidation. Perhaps we get to a future with only two large general-purpose models, like the Windows/macOS and Android/iOS duopolies we've seen in the past.

Article here: https://lnkd.in/g5SejXen
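The growth rates quoted above are easy to sanity-check. Assuming a ~$100M frontier training run in 2024 (my assumption for illustration, not Epoch's figure):

```python
# Back-of-the-envelope check of the 2-3X/year cost growth cited above,
# projected three years forward from an assumed $100M run in 2024.

def projected_cost(start_cost: float, annual_factor: float, years: int) -> float:
    return start_cost * annual_factor ** years

cost_2027_low = projected_cost(100e6, 2, 3)   # 2X per year
cost_2027_high = projected_cost(100e6, 3, 3)  # 3X per year
print(f"${cost_2027_low / 1e9:.1f}B - ${cost_2027_high / 1e9:.1f}B")
```

At 2-3X per year, $100M in 2024 compounds to $0.8B-$2.7B by 2027, which brackets the ~$1B-per-model estimate.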
7
Artjom Shestajev
The hype has shifted from LLMs to Agents. The first ideas emerged right after the recent "AI resurrection" (read: the ChatGPT release 2 years back), and now it has become a buzzword on all levels.

Don't get me wrong, I am an "Agent optimist". Both consumers and businesses want to move on from "request-based assistance" ("summarize this article for me") to autonomous minions helping us with everyday tasks without explicit orders given. We have already seen attempts to introduce Agents into our email services and let them take over computer control. The explosive burst of such tools will happen in 2025, though – just scan the billboards of AI conferences' participants and ask your favorite LLM to extract the most commonly used term.

What is less obvious is that Agents will eventually follow the LLM's path – huge models are great for consumers and everyday support, but enterprise businesses require distilled, dedicated, fine-tuned versions tailored to specific tasks. Think of Agents not as a single almighty deity but more as a team of skilled professionals working on assigned projects (and later starting their own). Each of these "distilled Agents" will have their core competency, communicate with each other, and enrich the world picture for the whole group. The skills will not be fully isolated, though each has an area of strength – the same way it now works in effective software development teams: an engineer or PM can draw a mock-up and it is warmly welcomed, but the final prototype work is better left to the designer.

There are opportunities and challenges. Companies (or single project owners) will be able to "hire" a specific Agent from the marketplace to "join the existing A-Team". The biggest technical riddle is having an efficient and universal protocol for Agents of various origins to communicate effectively with each other. And not drink too much coffee in the kitchen between tasks.

#ai #ml #future #futureofai #aiagents
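The "team of skilled professionals" picture can be sketched as a toy dispatcher. The class and the stubbed lambda agents are purely illustrative; a real system would put fine-tuned models behind a shared communication protocol.

```python
# A toy sketch of specialized agents registered by competency, with a
# router dispatching tasks. Agent behaviors are stubbed with plain
# functions standing in for fine-tuned models.

from typing import Callable

class AgentTeam:
    def __init__(self) -> None:
        self.agents: dict[str, Callable[[str], str]] = {}

    def hire(self, skill: str, agent: Callable[[str], str]) -> None:
        """'Hire' an agent from the marketplace for a core competency."""
        self.agents[skill] = agent

    def dispatch(self, skill: str, task: str) -> str:
        """Route a task to the agent whose area of strength matches."""
        if skill not in self.agents:
            return f"no agent hired for '{skill}'"
        return self.agents[skill](task)

team = AgentTeam()
team.hire("summarize", lambda t: f"summary of: {t}")
team.hire("design", lambda t: f"mock-up for: {t}")

print(team.dispatch("summarize", "quarterly report"))
print(team.dispatch("legal", "review contract"))
```

The missing-skill branch is where the post's "biggest technical riddle" lives: in practice the router would negotiate with agents of various origins over a shared protocol rather than fail locally.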
8
4 Comments
Greg Coquillo
What if your queries required access to multiple sub-related data from different documents, for an LLM to parse and make sense of before returning a coherent and accurate output? We often ask complicated questions, right?

🔹 Well, to solve this problem, these researchers released Multi-Head RAG (MRAG), which leverages activations of the Transformer's multi-head attention layer, instead of the decoder layer, as keys for fetching multi-aspect documents. They are taking advantage of the fact that multi-head attention can learn to capture different data aspects.

🔹 The interesting part is that MRAG is both cost-effective and energy-efficient. In other words, it does not require additional LLM queries, multiple model instances, increased storage, or multiple inference passes over the embedding model.

🔹 Moreover, MRAG can be seamlessly integrated with existing RAG frameworks and benchmarking tools like RAGAS, as well as different classes of data stores.

🔹 The researchers provide a comprehensive evaluation methodology, including specific metrics, synthetic datasets, and real-world use cases to demonstrate MRAG's effectiveness.

🔸 Do check it out!

#genai #technology #artificialintelligence

Click #linkedangle and follow for more content. Set the 🔔 notification on my page, don't miss a post!
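A stripped-down illustration of the multi-aspect retrieval idea: keep one vector per head for each document, and let each head vote for its nearest document. The two-dimensional vectors are made up for the example; the paper derives real ones from a Transformer's multi-head attention activations.

```python
# Simplified sketch of multi-aspect retrieval: one embedding per "head"
# per document, each head voting for its closest match. Toy vectors only.

import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two "heads", each capturing a different aspect of two documents.
doc_embeddings = {
    "doc_finance": [[0.9, 0.1], [0.8, 0.2]],
    "doc_health":  [[0.1, 0.9], [0.2, 0.8]],
}

def retrieve(query_heads: list[list[float]], docs: dict) -> dict[str, int]:
    """Each head votes for its closest document; return vote counts."""
    votes = {name: 0 for name in docs}
    for h, q in enumerate(query_heads):
        best = max(docs, key=lambda name: cosine(q, docs[name][h]))
        votes[best] += 1
    return votes

# A query mixing both aspects: head 0 leans finance, head 1 leans health.
query = [[0.7, 0.3], [0.3, 0.7]]
print(retrieve(query, doc_embeddings))  # {'doc_finance': 1, 'doc_health': 1}
```

With a single pooled embedding, one of the two documents would dominate; per-head voting is what lets a multi-aspect query pull in both, without extra LLM calls or inference passes.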
181
10 Comments
Michelle Currie MS, RN, CPHQ, CPHIMS
I'm currently going through some very old SnagIT screenshots from 2015. I'm trying to recall which hype cycle this is from. Which hype cycle was relevant in 2015? #SelfServeAnalytics? Had we reached #BigData yet? The progress in "#reusability" has been remarkable for the cartoon industry. 😂 #SameHypeDifferentDay
2
Salif Tankoano
Pathways to Tech Week #2 at Headstarter: Being surrounded by people who are building amazing software and companies is a truly inspiring experience.

My favorite speakers were:
• Tariq Ahmed: First Engineer at AppLovin ($30 billion IPO)
• Julian Alvarez 🚀: CEO of Wisdolia
• Hamza Ali: CEO of Olostep

Here's what I learned from them:

𝟭 - "𝗙𝗮𝗶𝗹𝘂𝗿𝗲 𝗮𝗻𝗱 𝗽𝗶𝘃𝗼𝘁"
Tariq, one of the first engineers at AppLovin, shared that AppLovin tried several things in its early stages that failed. However, the company had incredible success when it pivoted to mobile advertising, even though it had no expertise in the field. Today, if you've experienced in-app advertising, AppLovin was likely behind it.

𝟮 - "𝗦𝘁𝘂𝗱𝘆 𝘆𝗼𝘂𝗿 𝘂𝘀𝗲𝗿𝘀, 𝘆𝗼𝘂'𝗹𝗹 𝗯𝗲 𝘀𝗵𝗼𝗰𝗸𝗲𝗱 𝗮𝘁 𝗵𝗼𝘄 𝗺𝘂𝗰𝗵 𝘆𝗼𝘂 𝗰𝗮𝗻 𝗹𝗲𝗮𝗿𝗻."
Julian runs Wisdolia, a startup aimed at helping college students study actively. By simply asking what types of students its users were, his team found that medical students were the ones who converted most into paying customers. It seemed evident, because they were the most overwhelmed by the amount of information they had to cram. But the evidence right in front of them was a lightbulb moment for Wisdolia.

𝟯 - "𝗦𝗰𝗿𝗮𝗽𝗽𝗶𝗻𝗲𝘀𝘀 𝗮𝗻𝗱 𝗶𝗻𝗶𝘁𝗶𝗮𝘁𝗶𝘃𝗲 𝘄𝗶𝗹𝗹 𝗴𝗲𝘁 𝘆𝗼𝘂 𝗳𝗮𝗿."
Hamza, CEO of Olostep, describes his company as "a company that enables AI agents to browse the web just like humans". I was fascinated when I heard that, and as the Olostep team acquires more customers, they value scrappy people who take initiative. Those qualities set engineers apart when building successful products.
11
1 Comment
Mohd Ridzwan (Reez) Nordin
🔥 Data Engineering skills are HOT right now. Like, RM7k-9k++ starting salary hot (based on JobStreet's data: https://lnkd.in/gHZ7uyY8).

But here's the thing... Most "learn to code" courses teach you Python basics and call it a day. That's. Not. Enough.

Real Data Engineering is about mastering the full stack:
- Building bulletproof data pipelines
- Orchestrating complex ETL workflows
- Architecting cloud data solutions
- Writing production-grade Python

That's exactly what you'll learn in our 7-week intensive starting Jan 7th. We're partnering with Magiks to bring you a course designed by engineers who've built data infrastructure at scale.

Missed our packed webinar on Dec 17? DM me for the recording. Limited spots left.

Ready to level up? Registration links in the comments below 👇
7
1 Comment
AJ Jobanputra
$2 H100s: How the GPU Bubble Burst

Last year, H100s were $8/hr if you could get them. Today, there are 7 different resale markets selling them under $2. What happened?

The market has flipped from shortage ($8/hr) to oversupply ($2/hr) because of reserved-compute resales, open-model fine-tuning, and a decline in new foundation-model companies. Rent instead.

https://lnkd.in/gpQEidmC
2
1 Comment
Franck Stéphane NDZOMGA
I can't believe we used to do this to get structured outputs from LLMs. That was before DSPy, Instructor, and anonLLM. It made me want to reflect on the best practices going forward. This is super important because of the rise of compound AI systems that require structured outputs to streamline communication between components.

Check out my article below for more info!

https://lnkd.in/ejCunhQ5
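For context, the hand-rolled approach that predates those libraries usually looked something like this: ask the model for JSON, then locate and validate it yourself. The reply string and the required fields below are illustrative stand-ins.

```python
# The old hand-rolled way to get structured output: prompt for JSON,
# carve it out of the raw reply, and validate field-by-field yourself.
# `fake_llm_reply` stands in for a real model call.

import json

REQUIRED_FIELDS = {"name": str, "age": int}

def parse_structured(raw: str) -> dict:
    """Extract and validate a JSON object from a raw LLM reply."""
    start, end = raw.find("{"), raw.rfind("}") + 1
    if start == -1 or end == 0:
        raise ValueError("no JSON object found")
    data = json.loads(raw[start:end])
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    return data

fake_llm_reply = 'Sure! Here is the JSON you asked for: {"name": "Ada", "age": 36}'
print(parse_structured(fake_llm_reply))  # {'name': 'Ada', 'age': 36}
```

Libraries like Instructor and DSPy exist to replace exactly this brittle find-parse-validate loop with schema-driven generation and automatic retries.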
3
Dave Danowski
🌟 Repost Alert! 🌟

Exciting times lie ahead as we delve deeper into the future of market intelligence. At IntelliSell, we're embracing the power of linguistics to transform uncertainty into opportunity. Understanding market narratives isn't just about numbers—it's about the words that drive those numbers. By integrating linguistic insights into our strategies, we're enabling precision and confidence in navigating market complexities. 🚀

I invite you all to join the conversation and explore how this approach can redefine your understanding of market movements. Let's rethink and reshape the future together!

#MarketIntelligence #Innovation #StrategicInsights #AI #MachineLearning #SalesStrategy #DataDriven #GoldenWords #IntelliSell #Finance
1
Jonathon Johnson
I'm currently working with two incredible founders in early-stage startups, helping them find the tech talent that will drive their visions forward. Lately, I've been connecting with some truly brilliant Machine Learning Engineers to support these companies (particularly those focused on GPU/LLM inference projects).

Many believe that finding talent is difficult. The truth? Finding the right talent is the real challenge. When a company invests in innovation, the stakes are high. Hiring the wrong person can be costly, but partnering with the wrong recruiter can be just as damaging. Your energy, money, and reputation are on the line when you choose a recruiting partner.

I'm eager to connect with more founders who understand that at this critical stage, every hire is vital. If you're not just looking for resumes but for the right talent that fits your vision, let's connect.

#startup #artificialintelligence #GPU #LLM #techtalent #talent #opentowork #hiring #engineers #chatgpt #AI #opensource #leadership #recruiting #partner #management
9
1 Comment
Adil Syed
The current YC Fall 2024 batch includes a lot of "AI startups", something like 87%, based on a data report by Gravity (https://lnkd.in/gxxJnGSV).

The takeaway for a lot of people is "wow, AI focus is overheating," but I actually think something different is going on: the way software is being built has fundamentally changed. It makes sense to consider AI tooling and capabilities from day 1 for any company building in the software space, because it can make your product and operations better in an undeniable way. Practical use of AI is not a nice-to-have, but a requirement if you're going to build something new and novel moving forward... so I think this trend continues, and maybe the new normal is that 100% of companies are "AI startups", the way 100% of tech companies use the internet. 🤷‍♂️

Great chart/stats by Gravity below, which includes a number of YC companies from Fall 2024 using Rippling to build their business and make the administrative junk related to payroll, benefits, IT, and spend happen in the background, so they can focus on product, sales, customers... rinse and repeat. Here's to everyone operating in Founder Mode! 🙂 🚀

"AI-focused" YC batches over the past 2 years:
- 87% in Fall 2024
- 77% in Summer 2024
- 70% in Winter 2024
- 57% in Summer 2023
- 32% in Winter 2023
121
4 Comments
Greg Toroosian
🎙 Episode 43 of Machine Minds with Bob Rogers, CEO & Co-Founder of Oii.ai, is live! Bob's background speaks for itself, so imagine what the insights from the episode are like!

Background:
- PhD in Physics
- Expert in residence at UCSF who helped build the first FDA-cleared AI product that saves lives every day
- Member of the Board at Harvard IACS
- Led Data Science and AI for Intel
- Was the Co-founder and Chief Scientist of an acquired healthcare AI company
- Co-founded and ran a quantitative hedge fund
- Is currently a Chief Scientific Advisor for two companies and a Co-Founder of one of them
- Authored 4 books...

And I'm sure there's more I'm missing. Check out the full episode wherever you get your podcasts to hear all about Oii as they just came out of stealth!

#machineminds #techpodcast #ai #supplychain #startups
10
2 Comments
Ted Michaels
LLMs are terrible at basic math. Here's why:

Unlike people, LLMs don't conceptualize mathematics. They simply recognize patterns in the training data (text) they've learned from. When asked what 2 + 2 is, an LLM recalls instances of seeing '2', '+', '2' and so on, allowing it to approximate solutions but not match the precision of a calculator. That's how we end up with LLMs asserting that 2 + 2 = 5.

But for analytical teams, like finance, that makes LLMs really hard to adopt in day-to-day workflows. The productivity gains of having AI do calculations are negated by needing to fact-check and audit every calculation and number. LLMs are also not deterministic, so no amount of prompting is going to guarantee you'll get the correct answer every time. And that's just for basic math – the outputs I've seen from LLMs on more complex calculations and analyses can get very wonky. Add thousands of rows of data and you're squarely in hallucination land.

So what's the solution? Calculate things manually? No. The good news is that this is changing quickly and there are options:

1 - AI is getting better at math. Researchers continue to train LLMs with more specialized datasets that include a much wider array of mathematical problems and solutions. The newest models, like OpenAI's latest family of models, are incorporating things like symbolic reasoning and "chain of thought" methods to break complex problems into smaller, logical steps the LLM can more easily process. Results have been really promising, and we've seen each new iteration of models continue to beat previous benchmarks.

2 - Leverage AI agents. For intensive workflows, and to really make sure your outputs are correct, working with companies that are building high-performing AI agents is likely your best bet. Agentic workflows allow LLMs to be used for what they are great at (inferring user intent, making decisions, etc.) while outsourcing the things they struggle with (complex calculations) to other parts of a broader system. Text-to-SQL, "tool use", few-shot prompting, and other strategies can be combined with real-time data integrations to dramatically improve performance on analytical operations.

At Concourse we've spent the better half of 2024 getting our AI Agents to be really good at math, and I'm so excited to continue growing our capabilities to tackle even more complex mathematical problems for our customers in 2025. Reach out if you're interested in learning more about how we've built Concourse and the impact we're already delivering for finance teams.
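The agentic "tool use" pattern can be illustrated with a minimal sketch: the model's role shrinks to choosing a tool and its arguments (stubbed here as `plan`), while the arithmetic itself is done by deterministic code. The question matching in `plan` is a toy stand-in; a real system would get the tool choice back from the model as structured output.

```python
# Minimal "tool use" sketch: the LLM infers intent and picks a tool;
# exact arithmetic is delegated to deterministic code, so the numeric
# answer can never be hallucinated.

import operator

TOOLS = {
    "add": operator.add,
    "sub": operator.sub,
    "mul": operator.mul,
    "div": operator.truediv,
}

def plan(question: str) -> tuple[str, int, int]:
    """Stand-in for the LLM: infer intent, choose a tool and arguments."""
    if "sum of 2 and 2" in question:
        return ("add", 2, 2)
    raise ValueError("cannot plan for this question")

def answer(question: str) -> float:
    tool, a, b = plan(question)
    return TOOLS[tool](a, b)  # deterministic: same inputs, same output

print(answer("What is the sum of 2 and 2?"))  # 4, every time
```

The division of labor is the whole point: even if the model mis-plans, the failure is an inspectable wrong tool call, not a confidently wrong number.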
42
5 Comments
Dillon Morrison
Wrote a post on how we're thinking about semantic modeling at Sigma, specifically focused on a GUI-based model, flexibility, integrations, and (of course) leveraging them for AI.

Curious what folks think. Do these points resonate? Am I missing something important? Want to chat more about the topic in general? Let me know!

https://lnkd.in/gnqtxnY3
9
Harry Stebbings
I've been fortunate enough to do AI shows with Sam Altman, Yann LeCun & Arthur Mensch. But this show today with David Cahn at Sequoia Capital is the best of all of them:

🧠 AI's $600BN question
🏃 Why no one will ever train a frontier model on the same data center twice
🏗️ Servers, steel & power: The 3 pillars of the future of AI

Not one to be missed. My 8 key takeaways 👇

1. The Oligopoly That Is Blowing Billions on AI
- Microsoft, Azure & Google now represent $7T market cap.
- Of course they will spend billions to protect their oligopoly.
- What is most interesting is that Meta is the only incumbent financing this without cloud being its cash cow.

2. How Startups Could Win with the Cost of Compute Coming Down
- Big tech companies are producers of compute & startups are consumers of compute.
- Overproduction of compute = lower prices.
- This gives startups higher margins & makes them more valuable.

3. Is Compute the Currency of the Future, or Do the Costs Come Down and It Is Commoditised?
- We are forgetting "compute" is a euphemism for a $2BN data center in the middle of nowhere.
- They have GPUs & liquid cooling systems that need to be constantly optimized & upgraded.
- We cannot simply build 15 years' worth of compute all at once. No one is ever going to train a frontier model on the same data center twice.

4. Data Center Teams and Model Teams Need to Be Coupled
- You cannot have separate teams running data centers & building models.
- Elon & Zuck control all their data centers.
- Separate teams will not work as these models get better & better.

5. Why the Data Center Is the Most Important Asset
- There is not much difference between these models today.
- AI researchers are jumping from one lab to another.
- Scaling laws will become more dominant as these models get bigger.

6. Servers, Steel and Power: The Three Bottlenecks of AI
- Servers: NVIDIA, AMD, Broadcom…
- Steel: Construction & real estate developers for data centers.
- Power: All the energy required for AI.
- More factories will be built in the next 12 months for AI, and we will see another industrial revolution.

7. You Have to Have a Cash Machine to Play the Big Model Game
- Amazon has AWS
- Meta has Instagram
- Microsoft has Azure
- You need a cash machine to compete, and it CANNOT be the AI business.

8. There Is One Definition of Success in Venture: Are You a Slugger?
- If you are generating billion-dollar gains, you are a slugger.
- Doug Leone, Pat Grady, Andrew Reed are all sluggers.
- You cannot be good at this business without being a slugger.

(links in comments)

#founder #funding #business #investing #vc #venturecapital #entrepreneur #startup #seed #sequoia
190
26 Comments