We were fortunate to have the opportunity to highlight the work NeoMINDAI has been doing over the past year, as a work in progress, at the American Academy of Pediatrics AI Webinar series. We highlighted why there is a need for a group like NeoMINDAI to work collaboratively as we study, evaluate, and learn how to apply AI in neonatal critical care medicine and in pediatrics in general. See the full recording: https://lnkd.in/g6cnrTZi
Video timeline for NeoMINDAI:
03:45 Start of NeoMINDAI presentation
09:07 Highlights of the NeoMINDAI website
11:05 Co-founders and their areas of interest
12:42 NeoMINDAI focus on education
14:00 A call for collaboration with NeoMINDAI
It was a privilege and an honor to present on behalf of NeoMINDAI for the AAP. #UsingWhatWeHaveBetter
NeoMINDAI
Hospitals and Health Care
Neonatal Machine learning, INnovations, Development, and Artificial Intelligence group
About us
We are a collective of clinicians and scientists working to enhance neonatal outcomes and the family experience by leveraging the power of data, innovation, and collaboration among clinicians, clinician-scientists, and data scientists, while upholding the highest ethical and professional standards. Our mission is to transform healthcare delivery through innovation and collaboration by seamlessly integrating artificial intelligence into clinical practice to deliver personalized, safe, ethical, and effective newborn and family care. We value innovation in perinatal and neonatal care that is safe, effective, consequential, value-adding, and ethical, and that improves patient- and family-centered outcomes.
- Website
- https://neomindai.com
- Industry
- Hospitals and Health Care
- Company size
- 11-50 employees
- Type
- Nonprofit
- Founded
- 2022
Updates
-
It is time. Are you interested in learning how clinicians and scientists at Dell Medical School at The University of Texas at Austin are leveraging #AI to gain important insights? I know I am certainly excited to learn what David Paydarfar and his colleagues at the Oden Institute for Computational Engineering and Sciences are doing to advance clinical knowledge and insight with AI. Please join us this coming Thursday at 1 pm MST for our monthly NeoMINDAI webinar: (https://lnkd.in/gAAQEE9c) #UsingWhatWeHaveBetter
-
Another day. Another reveal in the #AI world that holds tremendous promise and, to me, is somewhat earth-shaking 😳. Google unveiled a Co-Scientist model that takes an agentic approach to scientific discovery (https://lnkd.in/g3Asn78e). I had seen something similar a while ago from another group, Sakana AI, highlighting their model's ability to act like an entire research community (https://lnkd.in/g_Z-3WC8), from ideation to study design to publication, for as little as $15. The Google AI model has already hypothesized a novel gene transfer mechanism (one that took human scientists about a decade to uncover) and suggested potential treatments for liver fibrosis. By identifying gaps and generating new research ideas, Google's AI aims to rapidly advance scientific progress. Their tool aims to enhance scientists' capabilities. This development reflects a broader trend of tech companies investing BIG $$$ in AI to revolutionize various industries, especially healthcare. Wow, and what's next? 👆🏽 For those of you in scientific research, what do you think? All hype, hope, or concern? #UsingWhatWeHaveBetter
Other links on the topic:
https://lnkd.in/gaWuq4PV
https://lnkd.in/gSs549cb
https://lnkd.in/gymnUxfn
-
Pediatrics being left out? Is that a good thing in this AI race, in the evaluation and application of AI-enabled devices approved by the FDA? Maybe it is, since the FDA approval process is less robust than what clinicians applying these models desire and are accustomed to before an intervention, device, or medication makes its way into clinical practice (https://lnkd.in/gbVis2Fp):
⭕ Only about half of FDA-approved AI-enabled devices have clinical validation.
⭕ Of those with clinical validation, half rely on retrospective studies, which introduces the potential for considerable bias and significant confounding.
Another study of FDA-approved AI-enabled devices highlights (https://lnkd.in/gmUD5TrN):
⭕ <1% of models provide socioeconomic data on the studied populations
⭕ Only 63% report sample size for the studies used to gain FDA approval
⭕ Only 21% report risk to potential users
⭕ Only 2% provide documentation on their safety and effectiveness
The FDA stipulates that #AI/#ML algorithms undergo clinical validation, but there are no requirements for manufacturers to specify whether testing included pediatric individuals, or for device labels to present standard information on the age of the patients for whom a device was developed. A research letter this week in JAMA (Journal of the American Medical Association) Pediatrics by Ryan Brewster, Matthew Nagy, MD MPH, Susmitha Wunnava, and Florence Bourgeois (https://lnkd.in/gS34nMTv) demonstrates that:
🔅 Among devices labeled for pediatric patients, few manufacturers disclosed information in regulatory documents on whether algorithm validation was performed in pediatric cohorts, and only 18.7% explicitly described validation using datasets that included children.
In short, we need to proceed with caution in using these FDA-approved AI-enabled devices, especially in children.
We need local evaluation to ensure these models are safe, effective, and ethical, and that they reduce inequities rather than worsen them. Widening pediatric care deserts, a diminishing pediatric primary care workforce, vanishing pediatric inpatient beds, and the fact that the majority of children requiring emergency care receive it in adult-focused EDs (which are not well equipped for children, leading to worse outcomes) all demand immediate action, yet we still need to proceed slowly and with CAUTION in applying these AI models in pediatric healthcare. At NeoMINDAI (https://neomindai.com/), we are fostering a collaborative community to ensure that AI application in #neonatal medicine and #pediatric medicine is safe, effective, ethical, and equitable. #UsingWhatWeHaveBetter
-
Easier said than done? Is the FDA kicking the can down the road? (https://lnkd.in/g7T6XMPX) Is the FDA tasked with an impossible job, and not structured to regulate and oversee approval of the flood of AI-enabled medical devices? The FDA Digital Health Advisory Committee recently met to discuss AI regulation and oversight (https://lnkd.in/dkJ9HRpZ). Robert Califf, the FDA commissioner, has been highlighted as making (at least) three important points during this session:
1- Healthcare systems and hospitals need to develop robust quality assurance mechanisms for AI.
2- There's no way the FDA can oversee every algorithm.
3- He does not believe any healthcare system or hospital in the U.S. today has the infrastructure to effectively evaluate and maintain these AI models.
As of August 2024 there have been 950 FDA-approved AI-enabled devices (https://lnkd.in/dvXnRwyA). The FDA approval process is not as rigorous as one would hope. A report demonstrates gaps for 692 FDA-approved devices (https://lnkd.in/e9u7QFnU):
⛔ Only 3.6% of devices had supporting population data on race/ethnicity (bias is a significant problem with #AI)
⛔ <1% of approved devices had socioeconomic population data
⛔ Only 18% of devices identified the age of the population studied
⛔ <2% had data on safety & efficacy
⛔ <10% used a prospective study for post-market surveillance
Another group of investigators evaluated the transparency of AI clinical trial data in 65 RCTs from 2020-2022 and found that critical aspects were often underreported (https://lnkd.in/gpdF2g8R), including:
⛔ Algorithm version: essential for replicability and for understanding the AI model and important aspects of its evolution.
⛔ Accessibility of the AI intervention or code: important for validation and transparency.
⛔ References to a study protocol: necessary for assessing methodological rigor.
Another group of investigators found (https://lnkd.in/gZ4uGGCd) that of 521 FDA device authorizations:
⛔ 292 (56%) reported clinical validation
⛔ 144 (27.6%) were retrospectively validated
⛔ 148 (28.4%) were prospectively validated
⛔ Only 22 (4.2% of authorized devices) were validated with prospective RCTs.
I have found these data and studies quite sobering. If any of these #AI models are being used in your hospital or healthcare system, it is imperative that your IT/data science/clinical informatics team be involved in evaluating and maintaining them to ensure they are safe and effective for your population. #UsingWhatWeHaveBetter
-
As #AI marches ahead in healthcare, some centers are investing significantly in infrastructure and collaborations to support model development, application, and evaluation. Breaking down silos among clinical care, basic science, data science, IT, and healthcare data management is essential for the success of these centers. However, the approaches to achieving this vary widely.
It will require significant financial investment: Mount Sinai Health System recently opened the Hamilton and Amabel James Center for Artificial Intelligence and Human Health, backed by a $100 million investment. This highlights the level of commitment needed to build robust AI ecosystems (https://lnkd.in/ghH_y-XC).
It will require forward thinking: Washington University School of Medicine in St. Louis and BJC Health System have launched the joint Center for Health AI (https://lnkd.in/gXFwiGFt), incorporating opportunities for medical residents and students to gain skills in AI-driven care delivery. Integrating AI education into medical training is a key step forward.
It will require collaboration across disciplines: The Department of Biomedical Informatics (DBMI) at Vanderbilt University Medical Center has launched its center for health artificial intelligence, ADVANCE (AI Discovery and Vigilance to Accelerate Innovation and Clinical Excellence). This center, co-directed by Peter Embí, M.D., M.S., and Brad Malin (https://lnkd.in/gja4fcra), emphasizes the need to break down silos among clinical care, basic science, and data science. Centers like these set a standard for interdisciplinary teamwork.
It may require collaboration between large healthcare systems and academic/scientific institutions with significant resources: Hartford HealthCare has launched The Center for AI Innovation in Healthcare, created through collaboration with the University of Oxford and the Massachusetts Institute of Technology (https://lnkd.in/gZRGgRSi). Bringing together institutions with significant technological resources and healthcare systems may be an effective model, leveraging economies of scale (in this case, technology know-how and clinical care know-how). The Google and Mayo Clinic partnership to advance generative AI applications in healthcare represents another promising model (https://lnkd.in/gD4gfmcu).
It will be fascinating to see which of these models thrives, what lessons are learned, and how they shape the future of AI in healthcare. What do you think will drive the most successful outcomes? #UsingWhatWeHaveBetter
-
Complement or complicate? #AI is transforming many aspects of healthcare: revenue cycle management, operations, diagnostics, treatment management, and patient monitoring/communication. As AI revolutionizes care delivery, it's also changing what it means to be a physician. Are we training future physicians to meet this challenge? A recent study from Ilker Hacihaliloglu et al. (https://lnkd.in/gSgrXvQv) outlines a solid framework for AI education in Canadian medical schools, offering lessons others can learn from:
🟢 Need for AI Literacy: Most medical students and practicing physicians lack the training to effectively and ethically integrate AI tools into education and clinical care, hindering appropriate AI use in medicine (https://lnkd.in/gZmngTY3). Physicians need to understand the capabilities, limitations, and ethical implications of AI in healthcare.
🟢 Ethics: From data privacy to algorithmic bias to legal implications, AI brings unique, and not easily navigated, ethical challenges. A structured curriculum on responsible AI can prepare (future) physicians to navigate these issues, ensuring AI complements patient care rather than complicates it.
🟢 Competency-Based Learning: Using frameworks like Bloom's Taxonomy, CanMEDS, and EPAs, we can teach specific skills, such as understanding AI's strengths and limitations, validating outputs, and informing patients about AI's role in their care.
🟢 Acquisition of Practical Skills: Training should go beyond theoretical knowledge and equip students with practical skills: data analysis, interpretation of AI outputs, and integration of AI evidence into clinical decision-making.
🟢 Integration Without Overhaul: A practical approach doesn't require starting from scratch. AI education can be embedded into existing courses, such as biostatistics, ethics, and clinical rotations, to give students hands-on exposure while maintaining a focus on traditional medical training.
But that requires AI-literate, knowledgeable senior educators and mentors (currently lacking). Other approaches:
- AI electives for medical students (https://lnkd.in/g8CAzAih)
- Specialized training for radiology residents (https://lnkd.in/gnz_xTtd)
- Leveraging mobile apps for AI skill-building (https://lnkd.in/gB5FB-SQ)
- Leveraging #LLMs in medical education (https://lnkd.in/gKkze7AG)
Tomorrow's physicians must understand the capabilities, limitations, and ethical implications of AI to make informed decisions that enhance, not replace, the art of medicine. The path forward is clear: AI-focused education, integrated thoughtfully into existing curricula, is essential for equipping physicians as AI becomes embedded in all aspects of healthcare. Are we ready to integrate AI into medical education? University of Colorado School of Medicine, Shanta Zimmer, American Academy of Pediatrics, ACGME. #UsingWhatWeHaveBetter
-
We are live now with Alvaro Moreira if you want to join our Webex virtual session: https://lnkd.in/gbHnXqnZ
-
Will #AI have an impact on the work you do today, tomorrow, next year, or 5 years from now? The National Academies of Sciences, Engineering, and Medicine are providing their perspectives (https://lnkd.in/gC2wU_FX). A few highlights:
🔆 AI as a General-Purpose Technology: AI, particularly in its generative forms, is a transformative technology with far-reaching implications across all sectors. Similar to historical turning points like the steam engine and electricity, AI's rapid advancement and wide applicability may position it as a key driver of future economic growth and societal change.
🔆 Challenges & Opportunities: They acknowledge the potential pitfalls of AI, including bias, hallucinations (inaccurate outputs; JUST call them ERRORS), equity of access, and ethical concerns. We need thoughtful governance, responsible development, persistent evaluation and error-feedback mechanisms, and proactive attention to these issues to ensure that AI benefits society as a whole.
🔆 Effects on Expertise & the Workforce: They highlight the nuances between AI and human expertise. A central argument is that AI's capacity to learn and execute non-routine tasks will reshape the demand for skills. While some jobs will be automated, greater emphasis will be placed on uniquely human skills such as judgment, dexterity, adaptability, and complex problem-solving.
🔆 Support of Cognitive Work: Most recent technological advances have automated many routine physical and cognitive tasks done by humans (think calculator, leaf blower, Mapquest, word processor...). But generative AI systems will mostly affect cognitive work, both routine and non-routine tasks.
🔆 Augmentation vs. Substitution: AI will be used mostly to assist us in performing tasks, augmenting and complementing our work and expertise rather than replacing us.
🔆 Societal and Ethical Concerns: AI may be used for things it should not, which will increase cybersecurity risks and costs, potentially violate customer privacy, and open the possibility of increasing the number and scope of nefarious actors (e.g., creation of homemade weapons or toxins) as data is democratized. Currently, AI systems cannot reliably and consistently discriminate between good and bad outputs.
🔆 Educational Shifts: AI will redefine (and disrupt current approaches to) learning by supporting personalized, adaptive education.
🔆 Equity and Inclusion: Without deliberate interventions, the benefits of AI may exacerbate inequalities in income and access to opportunity.
With great promise and great power comes even greater responsibility: for us to be involved in the evaluation and application of these tools. #UsingWhatWeHaveBetter
-
My series of posts this week will be about what I learned, and the many positive interactions I had, at AIMed 2024 this past week. My first exposure to and understanding of #AI in #healthcare came some years ago from taking an introductory class from the American Board of Artificial Intelligence in Medicine (ABAIM), created by Anthony Chang, MD, MBA, MPH, MS, Robert Hoyt MD FACP FAMIA ABPM-CI, Alfonso Limon, Ph.D., Mijanou Pham, and Ioannis A. Kakadiaris. I frequently talk and teach about AI and its many uses in medical education. It so happens that one very inquisitive and bright pediatric resident, Daelyn Richards, asked me a year ago how I began to learn about AI. I told her about ABAIM. As they say... the rest is history. She took the course and attended AIMed 2024 this past week; she is one to watch in this field. Dr. Richards is already accomplished well beyond her years of training. She helped create, and sits on the board of, a company called flok, developed to help those with rare metabolic diseases manage their conditions with unique approaches such as diet management. The first lesson I will share from AIMed: we need to engage, support, and collaborate with the next generation of clinicians and students, as they will teach many, if not all, of us how to leverage the technology that is AI to improve patient and family care safely, effectively, ethically, and wisely. #UsingWhatWeHaveBetter