Dear All,

This is a gentle reminder about tomorrow's talk by Prof. Marta D'Elia from Stanford University, USA, as part of the Param-Intelligence (π) Seminar Series. The seminar will take place on December 5, 2024, from 12:00 to 1:00 PM ET. Please mark your calendars and join us for this session. The Zoom link is provided below. Details of the talk are as follows:

Title: On the use of Graph and Point networks in scientific applications

Abstract: In scientific and industrial applications, one often has to deal with unstructured space-time data obtained from numerical simulations. The data can come either as a mesh or as a point cloud. Graph neural networks (GNNs) have proved to be effective tools for reproducing the behavior of simulated data; however, depending on the physical nature of the datasets, variations of vanilla GNNs must be considered to ensure accurate results. Furthermore, when only a point cloud is available, one can also consider a graph-free approach by building a "point network" that doesn't require connectivity information. In this presentation we focus on particle-accelerator simulations, a computationally demanding class of problems for which rapid design and real-time control are challenging. We propose a machine-learning-based surrogate model that leverages both graph and point networks to predict particle-accelerator behavior across different machine settings. Our model is trained on high-fidelity simulations of electron-beam acceleration, capturing complex, nonlinear interactions among macroparticles distributed across several initial-state dimensions and machine parameters. Our initial results show the model's capacity for accurate, one-shot tracking of electron beams at downstream observation points, outperforming baseline graph convolutional networks. The framework also accommodates key symmetries inherent in particle distributions, enhancing stability and interpretability.

Speaker's Biography: Marta D'Elia is the Director of AI and ModSim at Atomic Machines and an Adjunct Professor at Stanford University, ICME. She previously worked at Pasteur Labs, Meta, and Sandia National Laboratories as a Principal Scientist and Tech Lead. She holds a PhD in Applied Mathematics and master's and bachelor's degrees in Mathematical Engineering. Her work deals with the design and analysis of machine-learning models and with optimal design and control for complex industrial applications. She is an expert in nonlocal modeling and simulation, optimal control, and scientific machine learning. She is an Associate Editor of SIAM and Nature journals, a member of the SIAM industry committee, the Vice Chair of the SIAM Northern California section, and a member of the NVIDIA advisory board for scientific machine learning.

#SciML #DeepLearning

Zoom link is available here:
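For readers curious what a "point network" looks like in practice, here is a minimal PyTorch sketch of the general idea (our own toy, not Prof. D'Elia's actual architecture; the layer sizes and the 6D phase-space input are assumptions): a shared per-point MLP followed by a permutation-invariant pooling, so no mesh or connectivity information is ever needed.

```python
import torch
import torch.nn as nn

class PointNetBlock(nn.Module):
    """Minimal 'point network' block: a shared per-point MLP plus a
    permutation-invariant max pooling; only point features are needed,
    no mesh or graph connectivity."""
    def __init__(self, in_dim=6, hidden=64, out_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, x):            # x: (batch, n_points, in_dim)
        h = self.mlp(x)              # per-point features, shared weights
        g = h.max(dim=1).values      # symmetric pooling -> global feature
        g = g.unsqueeze(1).expand(-1, x.shape[1], -1)
        return torch.cat([h, g], dim=-1)  # per-point + global context

# usage: a batch of 1024 macroparticles with 6D phase-space coordinates
out = PointNetBlock()(torch.randn(2, 1024, 6))
print(out.shape)  # torch.Size([2, 1024, 64])
```

The max pooling is what makes the block invariant to reordering of the macroparticles, one of the distribution symmetries the abstract alludes to.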
-
Dear All,

A gentle reminder about today's talk by Dr. Chris Rackauckas from the Massachusetts Institute of Technology as part of our Param-Intelligence (π) Seminar Series at 12 p.m. ET. The Zoom link is provided below. Details of the talk are as follows:

Title: Enabling Industrially-Robust AI for Engineering through Scientific Machine Learning

Abstract: The combination of scientific models with deep learning structures, commonly referred to as scientific machine learning (SciML), has made great strides in the last few years, incorporating models such as ODEs and PDEs into deep learning through differentiable simulation. Such SciML methods have been gaining steam because they accelerate the development of high-fidelity models for improving industrial simulation and design. However, many of the methods from the machine-learning world lack the robustness required for scaling to industrial tasks. What needs to change about AI to allow for methods that can guarantee accuracy and quantify uncertainty? In this talk we will go through the details of how one can enable robustness in building and training SciML models. Numerically robust algorithms for handling neural networks with stiff dynamics, continuous machine-learning methods with certifiably globally optimal training, alternative loss functions that mitigate local minima, integration of Bayesian estimation with model discovery, and tools for validating the correctness of surrogate models will be discussed to demonstrate a next generation of SciML methods for industrial use. Demonstrations of these methods in applications such as two-phase-flow HVAC systems, modeling of sensors in Formula One cars, and lithium-ion battery packs will showcase the improved robustness of these approaches over standard (scientific) machine learning.

Speaker's Bio: Dr. Chris Rackauckas is the VP of Modeling and Simulation at JuliaHub, the Director of Scientific Research at Pumas-AI, Co-PI of the Julia Lab at MIT, and the lead developer of the SciML Open Source Software Organization. His work in mechanistic machine learning is credited with a 15,000x acceleration of NASA Launch Services simulations and recently demonstrated a 60x-570x acceleration over Modelica tools in HVAC simulation, earning Chris the US Air Force Artificial Intelligence Accelerator Scientific Excellence Award. He is the lead developer of the Pumas project and has received a top presentation award at every ACoP in the last 3 years for improving methods for uncertainty quantification, automated GPU acceleration of nonlinear mixed-effects modeling (NLME), and machine-learning-assisted construction of NLME models with DeepNLME. For these achievements, Chris received the Emerging Scientist award from ISoP.

Zoom Link is available here: https://lnkd.in/dzdRiFeX

All are welcome!
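As a concrete warm-up for the "stiff dynamics" point in the abstract, here is a minimal Python/SciPy sketch (purely illustrative; the talk's own tooling is the Julia SciML stack): on a stiff Van der Pol oscillator, an explicit solver is stability-limited to tiny steps while an implicit method is not, and neural networks embedded in such dynamics inherit exactly this failure mode.

```python
from scipy.integrate import solve_ivp

# Van der Pol oscillator; mu controls the stiffness.
mu = 100.0

def vdp(t, y):
    return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

# Explicit RK45 needs vastly more steps on stiff dynamics than the
# implicit Radau method at the same tolerances.
for method in ("RK45", "Radau"):
    sol = solve_ivp(vdp, (0.0, 200.0), [2.0, 0.0], method=method,
                    rtol=1e-6, atol=1e-9)
    print(f"{method:>6}: {sol.nfev:>8} RHS evaluations, success={sol.success}")
```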
-
Dear All,

We will be hosting Dr. Chris Rackauckas from the Massachusetts Institute of Technology as part of our Param-Intelligence (π) Seminar Series on Thursday, December 12, at 12 p.m. ET. The Zoom link is provided below. Details of the talk are as follows:

Title: Enabling Industrially-Robust AI for Engineering through Scientific Machine Learning

Abstract: The combination of scientific models with deep learning structures, commonly referred to as scientific machine learning (SciML), has made great strides in the last few years, incorporating models such as ODEs and PDEs into deep learning through differentiable simulation. Such SciML methods have been gaining steam because they accelerate the development of high-fidelity models for improving industrial simulation and design. However, many of the methods from the machine-learning world lack the robustness required for scaling to industrial tasks. What needs to change about AI to allow for methods that can guarantee accuracy and quantify uncertainty? In this talk we will go through the details of how one can enable robustness in building and training SciML models. Numerically robust algorithms for handling neural networks with stiff dynamics, continuous machine-learning methods with certifiably globally optimal training, alternative loss functions that mitigate local minima, integration of Bayesian estimation with model discovery, and tools for validating the correctness of surrogate models will be discussed to demonstrate a next generation of SciML methods for industrial use. Demonstrations of these methods in applications such as two-phase-flow HVAC systems, modeling of sensors in Formula One cars, and lithium-ion battery packs will showcase the improved robustness of these approaches over standard (scientific) machine learning.

Speaker's Bio: Dr. Chris Rackauckas is the VP of Modeling and Simulation at JuliaHub, the Director of Scientific Research at Pumas-AI, Co-PI of the Julia Lab at MIT, and the lead developer of the SciML Open Source Software Organization. His work in mechanistic machine learning is credited with a 15,000x acceleration of NASA Launch Services simulations and recently demonstrated a 60x-570x acceleration over Modelica tools in HVAC simulation, earning Chris the US Air Force Artificial Intelligence Accelerator Scientific Excellence Award. He is the lead developer of the Pumas project and has received a top presentation award at every ACoP in the last 3 years for improving methods for uncertainty quantification, automated GPU acceleration of nonlinear mixed-effects modeling (NLME), and machine-learning-assisted construction of NLME models with DeepNLME. For these achievements, Chris received the Emerging Scientist award from ISoP.

Zoom Link is available here: https://lnkd.in/dzdRiFeX

All are welcome! #SciML #DeepLearning #MachineLearning
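Since the abstract also mentions "alternative loss functions that mitigate local minima", here is one standard trick sketched in Python (an illustration under our own assumptions, not necessarily the specific loss discussed in the talk): multiple shooting. Fitting one long trajectory end-to-end gives a badly non-convex loss in the unknown frequency w; restarting short segments from observed states gives a much better-behaved surface.

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy data: harmonic oscillator x'' = -w^2 x with w_true = 3,
# with (position, velocity) observed on a uniform time grid
w_true = 3.0
t = np.linspace(0.0, 10.0, 201)
data = np.stack([np.sin(w_true * t), w_true * np.cos(w_true * t)])

def simulate(w, t_eval, y0):
    rhs = lambda s, y: [y[1], -w**2 * y[0]]
    return solve_ivp(rhs, (t_eval[0], t_eval[-1]), y0,
                     t_eval=t_eval, rtol=1e-8).y

def single_shooting_loss(w):
    # one long rollout from t=0: highly non-convex in w
    return np.mean((simulate(w, t, data[:, 0]) - data) ** 2)

def multiple_shooting_loss(w, n_seg=20):
    # short rollouts, each restarted from the observed segment-start
    # state: a much smoother loss surface
    segs = np.array_split(np.arange(len(t)), n_seg)
    return np.mean([np.mean((simulate(w, t[i], data[:, i[0]]) - data[:, i]) ** 2)
                    for i in segs])

for w in (2.0, 2.5, 3.0, 3.5, 4.0):
    print(f"w={w}: single={single_shooting_loss(w):8.4f}  "
          f"multi={multiple_shooting_loss(w):8.4f}")
```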
-
Join us for an exciting talk on Scientific Machine Learning! We are excited to welcome Prof. Marta D'Elia from Stanford University, USA, for the 🚀Param-Intelligence (π) Seminar Series🚀 on December 5, 2024, from 12:00 to 1:00 PM ET. Mark your calendar; the Zoom link is available below. Details of the talk are as follows:

Title: On the use of Graph and Point networks in scientific applications

Abstract: In scientific and industrial applications, one often has to deal with unstructured space-time data obtained from numerical simulations. The data can come either as a mesh or as a point cloud. Graph neural networks (GNNs) have proved to be effective tools for reproducing the behavior of simulated data; however, depending on the physical nature of the datasets, variations of vanilla GNNs must be considered to ensure accurate results. Furthermore, when only a point cloud is available, one can also consider a graph-free approach by building a "point network" that doesn't require connectivity information. In this presentation we focus on particle-accelerator simulations, a computationally demanding class of problems for which rapid design and real-time control are challenging. We propose a machine-learning-based surrogate model that leverages both graph and point networks to predict particle-accelerator behavior across different machine settings. Our model is trained on high-fidelity simulations of electron-beam acceleration, capturing complex, nonlinear interactions among macroparticles distributed across several initial-state dimensions and machine parameters. Our initial results show the model's capacity for accurate, one-shot tracking of electron beams at downstream observation points, outperforming baseline graph convolutional networks. The framework also accommodates key symmetries inherent in particle distributions, enhancing stability and interpretability.

Speaker's Biography: Marta D'Elia is the Director of AI and ModSim at Atomic Machines and an Adjunct Professor at Stanford University, ICME. She previously worked at Pasteur Labs, Meta, and Sandia National Laboratories as a Principal Scientist and Tech Lead. She holds a PhD in Applied Mathematics and master's and bachelor's degrees in Mathematical Engineering. Her work deals with the design and analysis of machine-learning models and with optimal design and control for complex industrial applications. She is an expert in nonlocal modeling and simulation, optimal control, and scientific machine learning. She is an Associate Editor of SIAM and Nature journals, a member of the SIAM industry committee, the Vice Chair of the SIAM Northern California section, and a member of the NVIDIA advisory board for scientific machine learning.

All are welcome! Zoom Link is available here: https://lnkd.in/dzdRiFeX
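To make the graph side of the title concrete as well, here is a hedged sketch of one message-passing layer in plain PyTorch (generic textbook GNN material with made-up sizes, not the speaker's model). Note that it needs an explicit edge list, i.e. exactly the connectivity information that a graph-free point network does without.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of message passing on a mesh graph: every node
    aggregates transformed features from its incoming neighbors."""
    def __init__(self, dim=32):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)  # message from (sender, receiver) pair
        self.upd = nn.Linear(2 * dim, dim)  # update from (node, aggregated msgs)

    def forward(self, h, edge_index):
        # h: (n_nodes, dim); edge_index: (2, n_edges) as [senders; receivers]
        src, dst = edge_index
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming messages
        return torch.relu(self.upd(torch.cat([h, agg], dim=-1)))

# usage on a toy 4-node mesh with ring connectivity
h = torch.randn(4, 32)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
print(MessagePassingLayer()(h, edge_index).shape)  # torch.Size([4, 32])
```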
-
Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an AI-driven approach to low-discrepancy sampling using graph neural networks (GNNs). This method improves simulation accuracy by distributing data points more uniformly across multiple dimensions, benefiting fields such as robotics, finance, and computational science. The Message-Passing Monte Carlo (MPMC) framework allows for the generation of uniformly spaced points, emphasizing dimensions crucial for specific applications. The team's work, published in the Proceedings of the National Academy of Sciences, represents a significant advancement in generating high-quality sampling points for various complex systems. The implications of this research extend to computational finance, where MPMC points outperform traditional quasi-random sampling methods, and to robotics, where improved uniformity can enhance path planning and real-time decision-making processes. The team plans to further enhance the accessibility of MPMC points by addressing current training limitations, paving the way for future advancements in numerical computation using neural methods.
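For a hands-on feel of what "low-discrepancy" means, here is a tiny SciPy sketch (the baseline notion only; it does not implement MPMC itself). Discrepancy measures deviation from perfect uniformity: classical quasi-random Sobol' points already score far below i.i.d. random points, and MPMC is reported to push the value lower still.

```python
import numpy as np
from scipy.stats import qmc

d, n = 2, 256
random_pts = np.random.default_rng(0).random((n, d))  # i.i.d. uniform points
sobol_pts = qmc.Sobol(d, seed=0).random(n)            # quasi-random points

# centered L2-discrepancy: lower means more uniformly spread
print("random:", qmc.discrepancy(random_pts))
print("sobol :", qmc.discrepancy(sobol_pts))  # noticeably smaller
```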
-
Constraint Free Physics-Informed Machine Learning for Micromagnetic Energy Minimization
https://lnkd.in/ekv58nvH

Abstract: We introduce a novel method for micromagnetic energy minimization that uses physics-informed neural networks to find a magnetic configuration minimizing the Gibbs free energy functional without the need for any constrained-optimization framework. The Cayley transform is applied to a neural network to ensure that the model output lies on the Lie group of rotation matrices SO(3). For the stray-field computation we use the splitting ansatz of Garcia-Cervera and Roma together with a hard-constraint extreme learning machine, in combination with a Taylor-series approximation, for a very accurate evaluation of the single-layer potential that requires only a very coarse discretization of the surface. Further, we present a modeling framework for constructive solid geometry that uses R-functions to exactly satisfy the essential boundary conditions arising in the course of the stray-field computation. This framework can be applied to many other areas of interest. Our method shows promising results on the NIST μMAG Standard Problem #3 and is also applied to compute the demagnetization process of a hard-magnetic Nd2Fe14B cube.
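The Cayley-transform device is simple enough to show in a few lines (a generic NumPy sketch of the construction, not the authors' code): map an unconstrained 3-vector, e.g. a raw network output, to a skew-symmetric matrix A; then Q = (I - A)^{-1}(I + A) is automatically a rotation in SO(3), so the predicted magnetization stays a unit vector with no constrained-optimization machinery.

```python
import numpy as np

def cayley(v):
    """Cayley transform: unconstrained 3-vector -> rotation matrix in SO(3),
    Q = (I - A)^{-1} (I + A) with A skew-symmetric."""
    a, b, c = v
    A = np.array([[0.0,  -a,  -b],
                  [  a, 0.0,  -c],
                  [  b,   c, 0.0]])
    I = np.eye(3)
    return np.linalg.solve(I - A, I + A)

Q = cayley(np.array([0.3, -1.2, 0.7]))    # any raw output yields a valid Q
print(np.allclose(Q.T @ Q, np.eye(3)))    # True: orthogonal
print(np.isclose(np.linalg.det(Q), 1.0))  # True: proper rotation
m = Q @ np.array([0.0, 0.0, 1.0])         # rotate a reference magnetization
print(np.linalg.norm(m))                  # 1.0: stays on the unit sphere
```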
-
✨ Excited to share our latest publication in Computer Methods in Applied Mechanics and Engineering! Audrey Olivier, Nicholas Casaprima, and I have developed a novel approach for incorporating functional priors into Bayesian neural networks via anchored ensembling, with applications in surrogate modeling for mechanics and materials. It bridges the gap between parameter-space and function-space priors, offering enhanced accuracy and uncertainty quantification in both in-distribution and out-of-distribution data scenarios. I am deeply grateful to my advisor, Dr. Audrey Olivier, for her invaluable guidance, encouragement, and expertise throughout this research.

Key points:
✅ Traditional NNs are powerful but often deterministic and lack robust uncertainty quantification. Our approximate Bayesian approach enables better handling of epistemic uncertainties due to limited training data.
✅ By introducing a novel training scheme, our method leverages low-rank correlations in NN parameters to efficiently transfer knowledge from function-space priors to parameter-space priors in deep ensembling.
✅ We demonstrate that accounting for correlations between NN weights, often overlooked, significantly enhances the transfer of information and the quality of uncertainty estimation.

📊 Results:
- A 1D example illustrates how our method excels in both interpolation and extrapolation scenarios.
- A multi-input–output materials modeling case highlights its superior accuracy and uncertainty quantification for both in-distribution and out-of-distribution data.

📄 Read the paper: https://lnkd.in/gVKwvTnj
💻 Code and data: https://lnkd.in/gKQgDY-8

#BayesianNeuralNetworks #DeepLearning #FunctionalPriors #UncertaintyQuantification #SurrogateModeling
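For readers new to the anchoring idea, here is a bare-bones PyTorch sketch of standard anchored ensembling (the basic scheme in the style of Pearce et al.; the paper itself goes further by exploiting low-rank weight correlations, which this toy does not do, and all hyperparameters below are made up): each ensemble member is regularized toward its own draw from the prior, and the spread of the trained members estimates epistemic uncertainty.

```python
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

# toy 1D data with a gap: epistemic uncertainty should grow inside the gap
x = torch.cat([torch.linspace(-2, -0.5, 40),
               torch.linspace(0.5, 2, 40)]).unsqueeze(-1)
y = torch.sin(3 * x) + 0.05 * torch.randn_like(x)

prior_var, noise_var, ensemble = 1.0, 0.05**2, []
for _ in range(5):
    net = make_net()
    # each member gets its own anchor: a fresh draw from the N(0, prior_var) prior
    anchors = [prior_var**0.5 * torch.randn_like(p) for p in net.parameters()]
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        mse = ((net(x) - y) ** 2).mean()
        # anchored regularizer: pull weights toward this member's prior draw
        reg = sum(((p - a) ** 2).sum() for p, a in zip(net.parameters(), anchors))
        (mse + noise_var / prior_var * reg / len(x)).backward()
        opt.step()
    ensemble.append(net)

x_test = torch.linspace(-3, 3, 7).unsqueeze(-1)
preds = torch.stack([net(x_test).detach() for net in ensemble])
print(preds.std(0).squeeze())  # larger std away from the training data
```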
-
Reflecting on my recent research into the modeling of porous materials and the vast amounts of data generated therein, I have found myself increasingly intrigued by the newly opened frontiers for AI in advancing rapid materials development and engineering for targeted applications.

From an application perspective, what has captured my interest is the research on using Graph Neural Networks (GNNs) to model diverse materials, at both ab-initio and microstructural scales. GNNs are particularly adept at processing the non-Euclidean data typical of material structures, making them ideal for analyzing complex microstructures. They have been used to predict material properties, optimize compositions, and even guide the synthesis of new materials by learning from graph-based representations of molecular or atomic structures. Despite these advancements, however, there remain gaps in the universal application of GNNs across different types of materials. One major challenge is the lack of standardized datasets representing a diverse range of material types and conditions. This limitation hinders the ability of GNNs to learn universally applicable models, confining their use to specific types of materials or experimental conditions.

Another development that is revolutionary in its own right is the combination of informatics, robotics, and materials development: the rise of self-driving laboratories (SDLs). Recent work at top research institutions, e.g., UC Berkeley, and within the MAP projects of the Acceleration Consortium exemplifies how AI-driven systems can autonomously guide experiments, rapidly iterating through possibilities to identify optimal materials. While the ideas and the scientific and engineering lessons from these projects are openly disseminated on various platforms, it remains inherently challenging for small teams to adopt such solutions robustly, owing to the diverse scientific nature of the problem (data engineering, data science, robotics, chemistry).

Inspired by these recent advancements, I plan to explore diverse scientific works that apply ML/AI in materials science through a series of posts. I aim to share insights, foster discussions on best AI practices, and learn from the experiences of others in the field. If you share a passion for the intersection of AI and science, let's connect and explore these exciting possibilities together. #AI4Science #MaterialsDiscovery
-
A key novelty lies in using Graph Neural Networks, which allow points to "communicate" and self-optimize for better uniformity. This approach marks a pivotal enhancement for simulations in robotics, finance, and computational science. https://lnkd.in/gKEaiMn9
-
HEMANTH LINGAMGUNTA

Revolutionizing Antimatter Production with Advanced Technologies

Exciting breakthroughs in antimatter production are on the horizon, leveraging the latest advancements in quantum computing, machine learning, and particle physics. By combining quantum algorithms, neural networks, and deep learning techniques, we can optimize particle-accelerator designs and extraction processes to dramatically increase antimatter yields [1][3].

Key innovations include:
• Quantum-enhanced particle simulations for accelerator optimization
• AI-driven beam control and focusing systems
• Neural interfaces for precise accelerator tuning
• Blockchain-secured data management for experiment integrity
• Cloud-based distributed computing for complex calculations
• Robotic systems for safe handling of antimatter particles
• Advanced cybersecurity protocols to protect sensitive research

These technologies could enable the production of usable quantities of antimatter for revolutionary applications in space propulsion, medical imaging, and fundamental physics research [5][7]. The future of antimatter science is bright. Join the conversation on how we can harness these cutting-edge tools to unlock the potential of antimatter!

#AntimatterTech #QuantumComputing #AIforScience #SpacePropulsion #ParticlePhysics

Citations:
[1] Antimatter Quantum Interferometry - MDPI https://lnkd.in/gBEZfB9P
[2] [PDF] Antimatter Production at a Potential Boundary https://lnkd.in/gzQrukDD
[3] Quantum Computing and Simulations for Energy Applications https://lnkd.in/gzmBBcNU
[4] MiniCERNBot Educational Platform: Antimatter Factory Mock-up ... https://lnkd.in/gx4Pwr7w
[5] Steven Howe Breakthroughs for Antimatter Production and Storage https://lnkd.in/gyMvH5tF
[6] Using AI to unlock the secrets of antimatter | by Ari Joury, PhD https://lnkd.in/g_VrdtMB
[7] AEgIS Experiment Breakthrough: Laser Cooling Opens Door to New ... https://lnkd.in/g42Kjs8b
[8] Artificial Intelligence in the world's largest particle detector https://lnkd.in/gBwdxb2G
-
❓ Is it possible to use Gaussian Processes (GPs) for solving partial differential equations (PDEs)?

Yes! In fact, GPs are naturally suited to solving linear PDEs (since a linear transformation of a GP is still a GP!). Nonlinear PDEs, however, require a bit more work: they need to be linearized before GP regression can be applied. This is where neural networks often have the edge: they can handle nonlinear PDEs directly, without any linearization.

💡 To bridge this gap, in our recent paper we propose a new GP-based framework that combines the best of both worlds: the local generalization power of kernels with the flexibility of deep neural networks.

🗝️ The resulting model is particularly attractive because (1) it automatically satisfies linear constraints (such as boundary and initial conditions) thanks to the inherent structure of kernels, and (2) it can integrate any differentiable function approximator, including neural networks, into the GP's mean function, making it highly adaptable for nonlinear constraints.

Through a diverse set of examples, we found that our approach not only outperforms existing physics-informed machine learning methods that rely solely on neural networks, but can also enhance them. A special thanks to my labmates Amin Yousefpour and Shirin Hosseinmardi, and my advisor Ramin Bostanabad, for their invaluable support!

Check out the full paper and code below:
📄 Paper: https://lnkd.in/gMZpD7sn
💻 Code: https://lnkd.in/gfgrqUuC
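To see why linear PDEs are "native" to GPs, here is a self-contained NumPy sketch of the classic kernel-collocation idea behind this line of work (our illustration, not the paper's hybrid framework; the lengthscale and grid sizes are arbitrary choices). Because differentiation is linear, an RBF GP prior on u makes u'' a GP too, with covariances obtained by differentiating the kernel, so solving u'' = f with boundary conditions reduces to ordinary Gaussian conditioning.

```python
import numpy as np

# Solve u''(x) = f(x) on [0, 1], u(0) = u(1) = 0, by GP conditioning.
# Manufactured solution: u(x) = sin(pi x) for f(x) = -pi^2 sin(pi x).
ell = 0.3  # RBF lengthscale

def k(x, y):       # Cov(u(x), u(y)): RBF prior on u
    r = (x[:, None] - y[None, :]) / ell
    return np.exp(-r**2 / 2)

def k_d2(x, y):    # Cov(u''(x), u(y)) = d^2k/dx^2 (derivative of a GP is a GP)
    r = (x[:, None] - y[None, :]) / ell
    return (r**2 - 1) / ell**2 * np.exp(-r**2 / 2)

def k_d2d2(x, y):  # Cov(u''(x), u''(y)) = d^4k/dx^2 dy^2
    r = (x[:, None] - y[None, :]) / ell
    return (r**4 - 6 * r**2 + 3) / ell**4 * np.exp(-r**2 / 2)

xc = np.linspace(0, 1, 25)   # collocation points for the PDE residual
xb = np.array([0.0, 1.0])    # boundary points
obs = np.concatenate([-np.pi**2 * np.sin(np.pi * xc), [0.0, 0.0]])

# joint covariance of the observed vector [u''(xc); u(xb)]
K = np.block([[k_d2d2(xc, xc), k_d2(xc, xb)],
              [k_d2(xc, xb).T, k(xb, xb)]]) + 1e-8 * np.eye(len(obs))

# posterior mean of u at test points: plain GP regression
xs = np.linspace(0, 1, 11)
K_star = np.hstack([k_d2(xc, xs).T, k(xs, xb)])
u_post = K_star @ np.linalg.solve(K, obs)
print(np.max(np.abs(u_post - np.sin(np.pi * xs))))  # small residual
```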