Harvard Kennedy School’s Shorenstein Center has released a document that can help make AI systems more transparent. The report introduces the CLeAR (Comparable, Legible, Actionable, and Robust) Documentation Framework, which outlines guiding principles for AI documentation. This could be an important strategy for addressing the perception of AI systems as “black boxes,” where people don’t fully understand how the algorithms work or where their data comes from. Note that the document is only a discussion paper, but it still highlights some interesting concepts.
How to make AI systems transparent
More Relevant Posts
-
We have just published the *CLeAR Documentation Framework for #AI Transparency and Governance*, co-authored by experts across academia, industry, and civil society. Drawing on the authors’ combined experience creating documentation for AI, the report shares guiding principles for immediate use in AI documentation, including practical tips for AI practitioners and policymakers.

The CLeAR Framework states that AI documentation, whenever possible, should be:
-> Comparable: Able to be compared with other documentation of similar assets
-> Legible: Able to be read and understood; clear and accessible for the intended audience
-> Actionable: Able to be acted on; having practical value, useful for the intended audience
-> Robust: Able to be sustained over time; up to date

In addition to the CLeAR framework, the report offers recommendations, including:
1. Document throughout the lifecycle
2. Expand the focus of documentation to be context-aware
3. Consider risk and impact assessments in the context of documentation
4. There is an opportunity to drive behavior through documentation requirements

The report also includes tips for practitioners:
- Look at past work on AI documentation
- Be realistic
- Start early and revise often
- Consider your audience
- Document regardless of size or scale

Co-authored by practitioners across domains including S. Newman, Kasia Chmielinski, Chris N. Kranzinger, Michael Hind, Jenn Wortman Vaughan, Kathleen Esfahany, Mary Gray, Julia Stoyanovich, Emily McReynolds, Margaret Mitchell, Angelina McMillan-Major, Maui Hudson.

Find the full report here: https://lnkd.in/ePgdUNnt
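The four principles translate naturally into a structured, machine-readable record. Below is a minimal sketch in Python of what such a record might look like; the field names are hypothetical choices of my own, not the report’s own template, and are only meant to illustrate how each CLeAR property could map to concrete fields.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelDocumentation:
    """Illustrative documentation record loosely mapped to the CLeAR principles.

    Comparable: a fixed, well-defined set of fields shared across models.
    Legible:    plain-language summaries aimed at the intended audience.
    Actionable: intended-use and limitation fields readers can act on.
    Robust:     version and review date so the record can stay current.
    """
    model_name: str
    version: str
    summary: str                        # plain-language description (Legible)
    intended_uses: list[str]            # what readers may rely on (Actionable)
    known_limitations: list[str]        # what readers should not rely on (Actionable)
    training_data_sources: list[str]    # provenance of the data (Comparable)
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    last_reviewed: date = field(default_factory=date.today)  # prompts updates (Robust)


# Hypothetical example record:
doc = ModelDocumentation(
    model_name="ticket-triage-classifier",
    version="1.2.0",
    summary="Classifies incoming support tickets by urgency.",
    intended_uses=["Routing tickets to the right queue"],
    known_limitations=["Not evaluated on non-English tickets"],
    training_data_sources=["Internal ticket archive, 2021-2023"],
    evaluation_metrics={"accuracy": 0.91},
)
print(doc.model_name, doc.last_reviewed)
```

Keeping the field set fixed and discrete is what would make such records comparable across models; the review date is a small nudge toward keeping them robust.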
-
The CLeAR Documentation Framework for AI Transparency offers sound guidance for practitioners developing documentation for datasets, models, and AI systems. An especially compelling idea is "Data Nutrition Labels." https://lnkd.in/eRF4enpz #ArtificialIntelligence #AI
-
#Standards for #AI will require #AIimpactassessments to be #auditable, so if you haven't started thinking about this documentation in your business, now is the time. Some great principles on CLeAR AI documentation, by many authors including Margaret Mitchell:

'At a high level, the CLeAR Principles state that documentation should be:
• Comparable: Able to be compared; having similar components to documentation of other datasets, models, or systems to permit or suggest comparison; enabling comparison by following a discrete, well-defined format in process, content, and presentation.
• Legible: Able to be read and understood; clear and accessible for the intended audience.
• Actionable: Able to be acted on; having practical value, useful for the intended audience.
• Robust: Able to be sustained over time; up to date.'

https://lnkd.in/g-Y8SZ-v
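Auditability ultimately means a reviewer can check that required documentation exists and is current. As a hedged illustration, the required field list and the `audit_documentation` helper below are hypothetical (not drawn from any standard); they simply show what a basic completeness and freshness check could look like.

```python
from datetime import date, timedelta

# Hypothetical required fields for an auditable impact-assessment record.
REQUIRED_FIELDS = [
    "system_name", "owner", "intended_use",
    "affected_groups", "identified_risks", "mitigations", "last_reviewed",
]


def audit_documentation(record: dict, max_age_days: int = 365) -> list[str]:
    """Return a list of audit findings; an empty list means the record passes."""
    findings = [f"missing field: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    last_reviewed = record.get("last_reviewed")
    if isinstance(last_reviewed, date) and date.today() - last_reviewed > timedelta(days=max_age_days):
        findings.append("documentation is stale: last review exceeds the allowed age")
    return findings


# Hypothetical record to audit:
record = {
    "system_name": "loan-scoring",
    "owner": "risk-team",
    "intended_use": "pre-screening applications",
    "affected_groups": ["applicants"],
    "identified_risks": ["disparate error rates"],
    "mitigations": ["threshold review"],
    "last_reviewed": date(2024, 1, 15),
}
print(audit_documentation(record))
```

A check like this is the kind of thing an auditor, or a scheduled CI job, could run against every documented system.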
-
The Coalition for Health AI (CHAI) has released an open-source version of its model card for health care, which can be used to make health care AI more transparent: https://hubs.la/Q032V-V40 #HealthTech #AINutritionLabel #AI
-
In our latest webinar, join Philip Eisenhart from Access to Nutrition Initiative and Frank Meehan, Chairman of Improvability AI, as we dive into the transformative power of AI. Discover how AI is revolutionizing the research sector by increasing efficiency and accuracy, streamlining workflows, and enabling researchers to achieve more with less effort. Don't miss this insightful discussion on the future of AI in research! 📊 #AI #Webinar #Innovation #Increase #Efficiency #Accuracy #Streamline #Workflows #Research #MoreOutput #LessEffort #AutomateReporting
Access to Nutrition Initiative - Transformative Power of AI to Increase Efficiency and Accuracy
https://www.youtube.com/
-
💡🥗 Insightful discussion with Philip Eisenhart from Access to Nutrition Initiative and Frank Meehan, Chairman of Improvability AI (who co-founded the AgTech fund SparkLabs Cultiv8), about #AI in #Research, and how to use it to improve the world's knowledge of #nutrition and #food. Discover how AI is revolutionizing the research sector by increasing efficiency and accuracy, streamlining workflows, and enabling researchers to achieve more with less effort.
💶 Investors will be able to access research faster and more efficiently to make better investment decisions.
📈 Food suppliers and distributors will be able to tap into the latest nutrition and food research from scientists to deliver the highest quality of food to all of us.
👩🏻⚖️ Regulators will be able to use science-backed research to measure companies' compliance faster.
Improvability for all. No business left behind. #AI #Research #AutomateReporting
-
Latest video - Had a very insightful discussion with Philip Eisenhart from Access to Nutrition Initiative about #AI in #Research, and how to use it to improve the world's knowledge of #nutrition and #food. Discover how AI is revolutionizing the research sector by increasing efficiency and accuracy, streamlining workflows, and enabling researchers to achieve more with less effort. #AI #AutomateReporting
-
In Industry 4.0, where the #automation and digitalization of entities and processes are fundamental, #artificialintelligence (AI) is increasingly becoming a pivotal tool, offering #innovative solutions across many domains. In this context, nutrition, a critical aspect of #publichealth, is no exception among the fields influenced by the integration of AI technology. https://lnkd.in/e4XTUFkv
-
🌟 How Well Do AI Chatbots Handle Nutrition Questions? 🌟
A recent study evaluated three cutting-edge language models, GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro, using the Registered Dietitian (RD) exam as a benchmark. While the research sheds light on chatbot performance, it also reveals areas for improvement.

Strengths of the Study
💡 Comprehensive Benchmarking: Using the RD exam ensures diverse nutrition topics and proficiency levels are evaluated, providing a robust testing framework.
💡 Innovative Prompting Techniques: Testing methods like Chain of Thought (CoT) and Retrieval-Augmented Prompting (RAP) highlights how prompt design impacts model performance, a key insight for real-world use.
💡 Focus on Consistency: Measuring variability in chatbot responses (both inter-rater and intra-rater) addresses a critical but often-overlooked aspect of reliability.

Concerns and Limitations
🚩 Prompts Need Realism: While the study explores various techniques, the use of multiple-choice questions falls short of mimicking real-world patient interactions. Open-ended queries, which are more reflective of actual use, were left untested.
🚩 Narrow Metrics: Accuracy and consistency are crucial, but the study doesn't evaluate safety, cultural sensitivity, or patient-centricity, all key factors for deploying chatbots in healthcare.
🚩 AI Evolution Outpacing Findings: With AI models evolving rapidly, results based on 2024 versions may already be outdated. Open-source alternatives were also excluded, raising transparency concerns.
🚩 Retrieval Pitfalls: Retrieval-Augmented Prompting sometimes led to irrelevant or incorrect answers, showing potential risks when relying on external data.
🚩 Limited Error Analysis: While accuracy improves with techniques like CoT, the study doesn't offer actionable solutions to address persistent issues, especially for complex, expert-level questions.

Why it matters: As chatbots increasingly support healthcare, we must evaluate them against real-world challenges. Let's ensure research on AI uses realistic metrics and methods that keep pace with how fast this technology is developing.

What metrics or testing approaches do you think are essential to improve AI in healthcare? #AI #HealthcareInnovation #NutritionTech #PatientCare #Chatbots https://lnkd.in/eaSvueEs
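For readers curious what the accuracy-plus-consistency setup looks like mechanically, here is a minimal sketch, assuming a hypothetical `ask_model` callable standing in for whichever chatbot API is under test. It scores multiple-choice items and measures how often repeated runs of the same question return identical answers, similar in spirit to the intra-rater checks the study describes; it is not the study's actual evaluation code.

```python
from collections import Counter


def evaluate(questions, ask_model, runs: int = 3):
    """Score a chatbot on multiple-choice items and report accuracy and run-to-run consistency.

    `questions` is a list of dicts with "prompt" and "answer" (a letter like "B").
    `ask_model` is a hypothetical callable that returns the model's chosen letter.
    """
    correct, consistent = 0, 0
    for q in questions:
        # Ask the same question several times to probe intra-rater reliability.
        replies = [ask_model(q["prompt"]).strip().upper()[:1] for _ in range(runs)]
        majority, count = Counter(replies).most_common(1)[0]
        if majority == q["answer"]:
            correct += 1
        if count == runs:  # identical answer on every run
            consistent += 1
    n = len(questions)
    return {"accuracy": correct / n, "consistency": consistent / n}


# Example usage with a stand-in model that always answers "A":
sample = [{"prompt": "Which vitamin is fat-soluble? A) A  B) C  C) B12  D) Folate", "answer": "A"}]
print(evaluate(sample, ask_model=lambda prompt: "A"))
```

A fuller evaluation would also include open-ended items and human review, which is exactly the gap the post calls out.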
-
The Coalition for Health AI (CHAI) has launched an open-source AI Applied Model Card on GitHub, designed to act as a 'nutrition label' for healthcare AI systems. By detailing model training, performance, and fairness assessments, it aims to enhance transparency and trust in healthcare AI. #AI #bias #CHAI #fairness #healthcare #modelcard #OpenSource #Transparency
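To make the 'nutrition label' metaphor concrete, a model card is a structured summary of training, performance, and fairness information. The sketch below uses generic, hypothetical field names and values (it does not reproduce CHAI's actual schema) to show how such a card might be stored and rendered as a short, readable label.

```python
# Hypothetical model card content; field names are illustrative, not CHAI's schema.
model_card = {
    "model": {"name": "sepsis-risk-predictor", "version": "0.3.1"},
    "training": {
        "data_description": "De-identified EHR records from partner hospitals",
        "time_range": "2018-2023",
    },
    "performance": {"auroc": 0.87, "sensitivity": 0.81, "specificity": 0.78},
    "fairness": {
        "subgroups_evaluated": ["sex", "age_band"],
        "largest_auroc_gap": 0.04,
    },
    "intended_use": "Decision support only; not a substitute for clinical judgment",
}


def render_label(card: dict) -> str:
    """Render the card as a short, human-readable 'label'."""
    perf = ", ".join(f"{k}={v}" for k, v in card["performance"].items())
    return (
        f"{card['model']['name']} v{card['model']['version']}\n"
        f"Trained on: {card['training']['data_description']}\n"
        f"Performance: {perf}\n"
        f"Fairness gap (AUROC): {card['fairness']['largest_auroc_gap']}\n"
        f"Intended use: {card['intended_use']}"
    )


print(render_label(model_card))
```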
AI Education Policy Consultant · 9mo
Thanks for sharing. Anthropic just released something as well that I haven't had time to look at :)