Halim A.
Mountain View, California, United States
861 followers
500+ connections
Other similar profiles
- Ranjan Sinha, Ph.D. (San Jose, CA)
- Chandra Khatri (Palo Alto, CA)
- Theo Vassilakis (Reno, NV)
- Debajyoti Ray (Miami, FL)
- Gang Hua (Bellevue, WA)
- Gopal Pingali (Raleigh, NC)
- Luciano Pesci, PhD (Salt Lake City, UT)
- Vivienne Ming (Berkeley, CA)
- Hassan Sawaf (San Jose, CA)
- Luyuan Fang, PhD (Seattle, WA)
- Mohammad Sabah (Los Angeles Metropolitan Area)
- Eric Horvitz (Redmond, WA)
- Li Deng (United States)
- Chu-Cheng Hsieh (San Francisco Bay Area)
- Gideon Mann (New York, NY)
- Jake Porway (Brooklyn, NY)
- Neil Daswani (San Jose, CA)
- Keith McFarlane (Punta Gorda, FL)
- Bijal Shah (Denver Metropolitan Area)
Explore more posts
-
Eddie A.
Interesting paper about the limitations of generative AI. Yann LeCun has long described this limitation as a problem inherent in existing models. Knowing the limitation helps align expectations. https://lnkd.in/dCid9B_J #generativeai #llm #artificialintelligence #ai
-
Nick Tarazona, MD
👉🏼 Assessing the Decision-Making Capabilities of Artificial Intelligence Platforms as Institutional Review Board Members 🤓 Kannan Sridharan 👇🏻 https://lnkd.in/eB3EnqJE 🔍 Focus on data insights: - AI platforms identified GCP issues - Offered guidance on GCP violations - Detected conflicts of interest and SOP deficiencies - Recognized vulnerable populations - Suggested expedited review criteria 💡 Main outcomes and implications: - AI tools could aid IRB decision-making - Improve review efficiency - Human oversight remains critical for accuracy 📚 Field significance: - Institutional Review Boards (IRBs) - Artificial Intelligence (AI) - Standard Operating Procedures (SOPs) 🗄️: [#IRBs #AI #SOPs #DataInsights]
-
Abhishek Chaurasia
🚀 We are thrilled to announce the release of our new paper: "The Evolution of Multimodal Model Architectures." In this paper, we provide a bird's-eye view of all the state-of-the-art multimodal (large) models that have emerged since the inception of Transformers. It highlights the pros and cons of each model architecture, grouping them into four broad categories. These categories can serve as a blueprint to help you determine which architecture is best suited for your specific scenario. A huge thank you to all the co-authors and especially to Shakti Wadekar for putting this paper together during his PhD at Purdue University. Paper: 🔗 [https://lnkd.in/gZHQUS6c] Authors: Shakti Wadekar, Abhishek Chaurasia, Aman Chadha, and Eugenio Culurciello #AI #MachineLearning #MultimodalModels #Transformers #Research #PurdueUniversity
-
William W Collins
Efficient Quantum Gradient and Higher-order Derivative Estimation via Generalized Hadamard Test, by Dantong Li, Dikshant Dulal, Mykhailo Ohorodnikov, Hanrui Wang, Yongshan Ding (arXiv:2408.05406v1). URL: https://ift.tt/zTAh6GC Abstract: In the context of Noisy Intermediate-Scale Quantum (NISQ) computing, parameterized quantum circuits (PQCs) represent a promising paradigm for tackling challenges in quantum sensing, optimal control, optimization, and machine learning on near-term quantum hardware. Gradient-based methods are crucial for understanding the behavior of PQCs and have demonstrated substantial advantages in the convergence rates of Variational Quantum Algorithms (VQAs) compared to gradient-free methods. However, existing gradient estimation methods, such as Finite Difference, Parameter Shift Rule, Hadamard Test, and Direct Hadamard Test, often yield suboptimal gradient circuits for certain PQCs. To address these limitations, we introduce the Flexible Hadamard Test, which, when applied to first-order gradient estimation methods, can invert the roles of ansatz generators and observables. This inversion facilitates the use of measurement optimization techniques to efficiently compute PQC gradients. Additionally, to overcome the exponential cost of evaluating higher-order partial derivatives, we propose the $k$-fold Hadamard Test, which computes the $k^{th}$-order partial derivative using a single circuit. Furthermore, we introduce Quantum Automatic Differentiation (QAD), a unified gradient method that adaptively selects the best gradient estimation technique for individual parameters within a PQC. This represents the first implementation, to our knowledge, that departs from the conventional practice of uniformly applying a single method to all parameters.
Through rigorous numerical experiments, we demonstrate the effectiveness of our proposed first-order gradient methods, showing up to an $O(N)$ factor improvement in circuit execution count for real PQC applications. Our research contributes to the acceleration of VQA computations, offering practical utility in the NISQ era of quantum computing.
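For readers unfamiliar with the Parameter Shift Rule the abstract compares against, here is a minimal sketch. It uses a toy one-qubit example: `expval_z` stands in for an actual circuit execution, and the only physics assumed is the closed form ⟨Z⟩ = cos(θ) for RY(θ) applied to |0⟩.

```python
import math

def expval_z(theta):
    # Toy stand-in for a circuit execution: <Z> after RY(theta) on |0> is cos(theta).
    return math.cos(theta)

def parameter_shift_grad(f, theta):
    # Parameter Shift Rule for a gate generated by a Pauli operator:
    # df/dtheta = [f(theta + pi/2) - f(theta - pi/2)] / 2
    # Unlike finite differences, this is exact, not an approximation.
    s = math.pi / 2
    return (f(theta + s) - f(theta - s)) / 2

theta = 0.3
grad = parameter_shift_grad(expval_z, theta)  # matches the analytic -sin(theta)
```

Each gradient component costs two circuit executions per parameter, which is exactly the overhead the paper's Flexible Hadamard Test and QAD aim to reduce for PQCs where this rule is suboptimal.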
-
Thomas Gilbertson
This approach could turn the generative AI industry upside down. Most everyone knows that AI requires computers to do a lot of calculations, especially for Large Language Models (LLMs) like ChatGPT. The majority of these calculations are actually “matrix multiplications.” (You remember: a 3x3 matrix times another 3x3 matrix.) If the researchers in the article are proven correct, they have demonstrated a way that matrix multiplication can be replaced with a more efficient alternative inside the LLM while getting nearly the same results. The article positions these savings as a win for the environment (yay, environment), but I think they are missing the boat. More efficient means less electricity and potentially a smaller need for GPU resources, the bread and butter of stock darling Nvidia. When you combine this cost saving with avoiding the privacy concerns of hosted LLMs (OpenAI) and with innovative LLM licensing schemes (LibreChat - https://lnkd.in/gKQjMfz5), you can see the cracks in the current direction of the LLM industry. Q: If you could run your own LLM in your own private environment with MIT open-source licensing and without expensive GPUs, why exactly do you need Nvidia, OpenAI, and the like? Bold prediction: if the researchers get this working at scale, we could see a rapid shift and commoditization in the LLM and generative AI space. All the pieces are in place. TLDR details from the article, describing the innovation: “The researchers' approach involves two main innovations: first, they created a custom LLM and constrained it to use only ternary values (-1, 0, 1) instead of traditional floating-point numbers, which allows for simpler computations.
Second, the researchers redesigned the computationally expensive self-attention mechanism in traditional language models with a simpler, more efficient unit (that they called a MatMul-free Linear Gated Recurrent Unit—or MLGRU) that processes words sequentially using basic arithmetic operations instead of matrix multiplications.” #AI #GenerativeAI #Innovation
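The ternary-weight idea quoted above can be made concrete in a few lines: once every weight is -1, 0, or +1, a matrix-vector product reduces to additions and subtractions with no multiplications. A minimal sketch (pure Python for clarity; real implementations pack the ternary weights into bits and fuse the kernel):

```python
def ternary_matvec(W, x):
    # W has entries in {-1, 0, +1}; each output element is a sum/difference
    # of selected inputs, so no floating-point multiplications are needed.
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi
            elif w == -1:
                acc -= xi
        out.append(acc)
    return out

W = [[1, 0, -1], [0, 1, 1]]
x = [2.0, 3.0, 4.0]
y = ternary_matvec(W, x)  # -> [-2.0, 7.0], same result a matmul would give
```

Adds are far cheaper than multiplies in silicon, which is where the claimed electricity and GPU savings come from.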
-
Julianna Ianni
Proud today to announce Concentriq Embeddings: Proscia's Concentriq platform now enables generation of foundation model embeddings from pathology images right where your data lives. No learning how to use a bunch of new tools, fumbling with massive infrastructure and compatibility issues every time you want to try out a new model, or wasting precious hours waiting on downloads--a collection of foundation models is available at your fingertips. This launch marks a monumental first step in our vision to make AI for pathology effortless - from proof of concept to development to deployment. Your AI work should happen where the rest of your work does. The future is as bold as the difference between sending a telegraph and sharing video with a smartphone. We can't wait to see what our users will build. 🚀 Congrats to the team that has grown this idea into an epic reality! Special kudos to Kyriakos Toulgaridis Corey Chivers Vaughn Spurrier Jeff Baatz Casey Wahl Nate Langlois Ashley Faber Elbert Basolis III
-
Martin Bednar
I'll do even a braver poll than last, but only because of this Eric Topol, MD article in Science Magazine :) Multimodal large AI models will be used in medical forecasting (disease risk, onset, trajectory) A) now B) soon but at population level only C) sooner than in diagnostics D) later than in diagnostics E) never https://lnkd.in/gGBHSptv
-
Nick Tarazona, MD
👉🏼 How effective are large language models in product risk assessment? 🤓 Zachary A Collier 👇🏻 https://lnkd.in/eZy9nVrh 🔍 Focus on data insights: - Large language models, like ChatGPT, show promise in brainstorming potential failure modes and risk mitigations. - However, there were errors and inconsistencies in some results. - The guidance provided by the models was perceived as overly generic and occasionally outlandish. 💡 Main outcomes and implications: - ChatGPT performed better at divergent thinking tasks but lacked depth of knowledge compared to subject matter experts. - Similar patterns in strengths and weaknesses were observed when comparing ChatGPT to other large language models. - Despite challenges, there may be a role for large language models in assisting with ideation in product risk assessment. 📚 Field significance: - Product safety professionals can leverage large language models for brainstorming and idea generation in risk assessment processes. - Experts should focus on critically reviewing AI-generated content to ensure accuracy and relevance. 🗄️: [#productrisks #AI #riskassessment #ChatGPT]
-
Fouad Bousetouane, Ph.D
Responsible and accountable AI models are essential as generative AI continues to revolutionize industries. The rise of this technology, however, presents challenges in understanding how these models make decisions. Mechanistic interpretability addresses this by going beyond traditional methods that track statistical relationships. Recent research by a team from #Anthropic visualizes which neurons activate in response to specific prompts, advancing monosemanticity: ensuring each neuron's activation corresponds to a single, clear function. This step is crucial for fostering more transparent and safer AI systems in various applications. This is great progress toward interpreting the thinking mechanisms of large language models (LLMs). #GenerativeAI #AI #LLMs #Grainger #MachineLearning #DeepLearning #ExplainableAI #DataScience
-
Jagadish Venkataraman
Kolmogorov–Arnold Networks seem like a paradigm shift in ML architectures. Curious to see how they evolve. "While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. " https://lnkd.in/gdPk_qCQ
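The quoted distinction can be illustrated with a toy sketch: each KAN edge carries a learnable univariate function instead of a scalar weight, and nodes just sum their incoming edges. The paper parametrizes edge functions as B-splines; the Gaussian basis below is a stand-in chosen only to keep the example short.

```python
import math

class KANEdge:
    """One KAN edge: instead of contributing w * x for a scalar weight w,
    the edge applies a learnable univariate function phi(x). KANs use
    B-splines; Gaussian bases stand in here for brevity."""

    def __init__(self, centers, coeffs, width=1.0):
        self.centers = centers  # fixed basis-function centers
        self.coeffs = coeffs    # learnable coefficients (the trainable parameters)
        self.width = width

    def __call__(self, x):
        return sum(c * math.exp(-((x - m) / self.width) ** 2)
                   for c, m in zip(self.coeffs, self.centers))

def kan_node(edges, xs):
    # A KAN node simply sums its incoming edge functions; the node itself
    # has no fixed activation, unlike an MLP neuron.
    return sum(phi(x) for phi, x in zip(edges, xs))
```

Training adjusts the per-edge coefficients, so the "activation functions" themselves are what the network learns.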
-
Svetlana Sicular
Organizations have insufficient technical capabilities for the speed and scale of their ML model implementations. My colleague Sumit Agarwal just published very helpful research with a capability view and sample vendors for defining a modular, flexible, and scalable MLOps solution. https://lnkd.in/gnB_T9bq #MLOps #LLMOps #AI
-
Michael S Carroll, PhD, MEd
Interesting that on the MIMIC IV readmission models, the LLMs get closer to the traditional ML models. Perhaps because LLMs are not great at representing numeric values (especially at the model sizes tested) that are informative of the prediction for acute outcomes, whereas readmission relies more on diagnoses and other more linguistically-coded inputs. (Though that doesn't explain the poor performance on MIMIC-III readmission)
-
Nick Tarazona, MD
👉🏼 Generating synthetic clinical text with local large language models to identify misdiagnosed limb fractures in radiology reports 🤓 Jinghui Liu 👇🏻 https://lnkd.in/entpHxYh 🔍 Focus on data insights: - 🌐 Local LLMs can generate synthetic clinical reports, providing a cost-effective alternative to commercial models. - 📈 Performance of synthetic reports is comparable to using real-world annotated data for training AI systems. - 🧠 Incorporating synthetic data significantly enhances model training, achieving over 90% effectiveness relative to real-world data. 💡 Main outcomes and implications: - 🔑 Open-source LLMs facilitate the generation of essential training data while addressing privacy concerns associated with sensitive clinical information. - ⚙️ The study supports the notion that synthetic data can be reliable and beneficial in healthcare applications, particularly in diagnostic processes. - 🏥 The findings could lead to greater adoption of synthetic data generation techniques in medical AI, improving efficiency and outcomes in diagnostics. 📚 Field significance: - 🔬 Highlights the transformative potential of artificial intelligence in healthcare through effective data utilization. - 💻 Encourages further exploration of open-source tools in medical applications, promoting transparency and accessibility. - 📊 Provides a framework for integrating synthetic data generation in clinical settings, enhancing research and development in medical imaging and diagnostics. 🗄️: [#largeLanguageModels #syntheticData #healthcareAI #radiology #dataPrivacy #machineLearning #diagnostics #clinicalReports]
-
Nick Tarazona, MD
👉🏼 Perils and opportunities in using large language models in psychological research 🤓 Suhaib Abdurahman 👇🏻 https://lnkd.in/e9an387i 🔍 Focus on data insights: - Highlighted the importance of recognizing global psychological diversity - Cautioned against treating LLMs as universal solutions for text analysis - Advocated for developing transparent, open methods to address LLMs' opaque nature 💡 Main outcomes and implications: - Showed concrete impact in various empirical studies - Argued for diversifying human samples and expanding psychology's methodological toolbox - Promoted an inclusive, generalizable science and countered over-reliance on LLMs 📚 Field significance: - Psychological research - Large language models - Text analysis 🗄️: [#psychologicalresearch #largelanguagemodels #textanalysis]
-
Tejas Kulkarni
If you think compute scaling will plateau, think again. We are in the early stages of inference-time compute. VLLMs coupled with computer tools, reward learning, and RL can leverage orders of magnitude more inference-time compute than what we have seen with foundation models. Here's a concrete training architecture (replace \pi with a large pre-trained vLLM and add it to the reward learner too; \R is the space of all computer tools).
-
Rosha Pokharel, Ph.D.
"Why did the neural network go to therapy? It had too many layers of unresolved issues! 😂 On a more serious note, whether you’re dealing with deep learning models or business challenges, it's all about peeling back those layers to find clarity and solutions. #AI #MachineLearning #NeuralNetworks #TechHumor #DataScience"
-
Andrés Corrada-Emmanuel
You had us at bringing up the infinite regress of programs testing programs that test programs! Check out how one can ameliorate that paradox of AI safety using logic and algebra as shown in the ntqr Python package documentation page (https://lnkd.in/e_9fs_Js)
-
Andrés Corrada-Emmanuel
I am working on a paper that explains the logic of unsupervised evaluation for binary classifiers (or takers of binary response tests). Consulting some recent articles from the Journal of Machine Learning Research (Su's "On Truthing Issues in Supervised Classification") I was reminded of a paper co-authored by Usama Fayyad, the director of the Institute for Experiential AI at Northeastern University, back in 1994 on this very topic - "Inferring Ground Truth from Subjective Labelling of Venus Images" (https://lnkd.in/e-wPzDKZ) This topic, unsupervised evaluation, is a fundamental problem in safe AI when we are forced to monitor deployed systems using their own noisy decisions. Nice to see that the director of the Institute has contributed to the literature on this important topic.
-
Nick Tarazona, MD
👉🏼 Integrating Text and Image Analysis: Exploring GPT-4V's Capabilities in Advanced Radiological Applications Across Subspecialties 🤓 Felix Busch 👇🏻 https://lnkd.in/g6Gwkcf8 🔍 Focus on data insights: - GPT-4V demonstrated superior performance compared to GPT-4 in analyzing 207 cases with 1312 images from the Radiological Society of North America Case Collection. - The integration of text and image analysis provided enhanced insights into radiology subspecialties. - Data-driven approaches utilizing GPT-4V showcased promising results for advanced radiological applications. 💡 Main outcomes and implications: - Improved accuracy and efficiency in radiological image analysis. - Enhanced diagnostic capabilities and decision-making support for radiologists. - Potential for increased automation and standardization in radiological practices. 📚 Field significance: - Advancements in artificial intelligence and natural language processing technologies are revolutionizing radiological applications. - Integration of text and image analysis tools can significantly impact the efficiency and accuracy of radiological diagnoses. - Data-driven insights from GPT-4V have the potential to reshape the future of radiology practice. 🗄️: #radiology #AI #textanalysis #imageanalysis #GPT-4V
Others named Halim A.
- Halim A. Sawaya, Director of Operations / General Management / Entrepreneur (Abu Dhabi Emirate, United Arab Emirates)
- Al Halim A, Chief Executive Officer at Abu Dhabi Finance (Ras Al Khaimah)
- Abdul Halim A. Aziz (Kajang)
- Halim A, Finance Broker at Abu Dhabi Finance (Dubai)
- Mohd Halim Hasni A Hasan (Shah Alam)
48 others named Halim A. are on LinkedIn