During San Francisco Climate Week, enthusiasm for applying artificial intelligence (AI) to climate problems was palpable, with numerous events showcasing applications such as flood-risk monitoring and carbon-sequestration modelling. Despite this growing interest, many investors remain cautious. Venture capitalists are adopting a "wait and see" approach, sceptical about the immediate business opportunities AI might unlock in the climate sector. While AI's potential in novel areas like EV battery chemistry or climate-friendly proteins is acknowledged, tangible successes and practical applications are still limited. The same caution surfaces in discussions of generative AI across other sectors: excitement about future possibilities paired with clear limits on practical implementation in climate technologies today. Investors emphasise the need for more evidence of success before committing to large-scale investments, indicating a significant gap between current capabilities and the envisioned future of AI in climate innovation.

Heatmap News - Climate Investors Aren’t Buying Your AI Startup - https://lnkd.in/eG6iKqQX

#responsibleai #climateai #ethicalai #climatecrisis #climatechange #climatesolutions #aiinvestment #investment #startup
About us
RAICA Foundation is a collaborative platform of forward-thinking organisations and individuals facilitating the responsible use of AI in addressing the climate crisis. Our foundation operates on the conviction that AI can be a powerful tool, provided it is used ethically, transparently, and with accountability.
- Website: http://raica.foundation
- Industry: Non-profit Organizations
- Company size: 2-10 employees
- Type: Nonprofit
- Founded: 2024
- Specialties: Generative AI, Artificial Intelligence, Sustainability, Climate Action, and Responsible AI
Updates
-
A study by Carmen Atkins, Gina Girgente, Manoochehr Shirzaei & Junghwan Kim, published in Communications Earth & Environment, evaluated the accuracy and reliability of ChatGPT in identifying climate change-related hazards, with the aim of enhancing climate literacy and informing the responsible use of AI in educational contexts. The study compared ChatGPT's outputs with credible indices from the Intergovernmental Panel on Climate Change (IPCC), centred on three major hazards: floods, droughts, and cyclones.

The study found that ChatGPT, especially the GPT-4 version, showed relatively high accuracy in identifying floods and cyclones: around 80.6% for cyclones and slightly lower, 76.4%, for floods. The tool performed less effectively in recognising droughts, with an accuracy of only 69.1%. These figures were drawn from confusion matrices detailing the counts of true positives, false negatives, and false positives for each hazard. When assessing the consistency of ChatGPT's responses across multiple iterations, the study noted minimal variation in accuracy for floods and cyclones, suggesting reliable performance for those hazards. For droughts, accuracy was less stable, indicating potential areas for improvement in the model's learning and response generation.

The authors speculate that inaccuracies might stem from several sources, including language bias, since the study and the AI's training predominantly involve English. This might limit the AI's effectiveness in regions with non-English languages or diverse dialectal variations. The inherent complexity and variability in defining and understanding droughts, compared with more readily identifiable hazards like cyclones, might also contribute to lower accuracy rates.

Despite some inaccuracies, the authors posit that ChatGPT can still be a valuable tool for enhancing climate literacy, particularly for the more reliably identified hazards of floods and cyclones. Caution is advised, however, when using the tool for educational purposes regarding droughts, where the information might be less accurate. The performance difference between GPT-3.5 and GPT-4 also raises ethical questions about accessibility and the digital divide: higher-performing, more advanced models like GPT-4 may not be as accessible to all users, particularly in less developed regions, potentially exacerbating existing inequalities in digital literacy and access to information. This underscores the importance of ongoing validation and calibration of AI tools used in educational settings, particularly concerning critical issues like climate change.

communications earth & environment - https://lnkd.in/eB-XNGnE

#climatecrisis #generativeai #responsibleai
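As a rough illustration of how per-hazard scores like these can be derived from confusion-matrix counts, here is a minimal Python sketch. The counts are placeholders, not the study's data, and the formula (true positives over all counted cases, a threat-score-style metric) is an assumption; the paper's exact definition may differ.

```python
# Minimal sketch: an accuracy-style score from confusion-matrix counts.
# NOTE: the counts below are illustrative placeholders, not the study's data,
# and TP / (TP + FN + FP) is an assumed, threat-score-style definition.

counts = {
    "cyclones": {"tp": 58, "fn": 8, "fp": 6},   # hypothetical counts
    "floods":   {"tp": 55, "fn": 9, "fp": 8},
    "droughts": {"tp": 47, "fn": 12, "fp": 9},
}

for hazard, c in counts.items():
    score = c["tp"] / (c["tp"] + c["fn"] + c["fp"])
    print(f"{hazard}: {score:.1%}")
```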
-
The California State Senate bill proposing the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act aims to regulate the development and deployment of artificial intelligence (AI) models through a series of safety and compliance measures. The bill focuses on ensuring that AI developers preemptively assess and control the potential hazards of AI models before training, with robust oversight and penalties for non-compliance. Functions of the bill include:
- Developers can determine whether their non-derivative AI models are exempt from certain responsibilities by confirming the models do not possess, and are not likely to develop, hazardous capabilities - a Limited Duty Exemption.
- Before training begins, developers must implement a mechanism to completely shut down the AI model if it does not meet the exemption criteria.
- Developers of non-exempt models must annually certify their compliance with these safety requirements, under penalty of perjury, through their company's senior technology officer, as prescribed by the newly established Frontier Model Division within the Department of Technology.
- Any safety incidents involving AI models must be reported to the Frontier Model Division, which also enforces the regulations and manages a fund specifically for these purposes.
- Operators of computing clusters must evaluate customers' intentions to use such resources for training covered models and ensure appropriate usage.
- Violations of the bill's provisions are subject to civil penalties, enforceable by the Attorney General.
- The Department of Technology will also set up a public cloud computing cluster called CalCompute, focusing on research into and safe deployment of large-scale AI models.

California State Senate - SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - https://lnkd.in/eXirdb_h

#ai #aimodels #airegulation #regulation #legislation #aiethics #responsibleai #ethicalai
-
With the urgent need to combat climate change, the rapidly growing energy demands of AI and data centres have led to speculation and debate about the future of energy supplies. On one side, there is concern that this surge in electricity consumption might impede our environmental goals. On the other, there is optimism that technology could actually accelerate the transition to renewable energy.

On the challenge: AI and data centres consume enormous amounts of power. This has strained electricity grids, with some utilities resorting to burning more fossil fuels or delaying the shutdown of fossil-fuel plants. The CEO of Duke Energy, Lynn Good, highlighted this issue at the Columbia Global Energy Summit, noting how unexpected growth in power consumption is complicating efforts to phase out coal and achieve net-zero emissions by 2050.

On the opportunity: conversely, Bloomberg's Editorial Board presented a more aspirational scenario in which the very creators of this high demand, the major tech companies, also possess the means and motivation to drive significant advancements in renewable energy. These companies are already leading buyers of renewable energy and are investing in innovative solutions like hydrogen storage and small modular nuclear reactors. Such initiatives not only cater to their vast power needs but also help reduce costs and increase the accessibility of clean energy technologies.

On balance, the dichotomy is not just whether AI and technology exacerbate climate challenges, but whether they also hold the keys to their solution. While the immediate impacts on energy grids and fossil fuel use are tangible and concerning, the potential for tech companies to revolutionise and fund the clean energy sector could offset, and possibly exceed, these challenges.

Bloomberg - Opinion: AI Is a Humongous Electricity Hog. That’s Great. - https://lnkd.in/enjKzZmp
Bloomberg Green - Power Demand Surge Is Complicating Carbon Goals, Duke CEO Says - https://lnkd.in/eayNtMFt

#responsibleai #sustainableai #ethicalai #electricity #renewableelectricity #renewables #ethics #ai #generativeai #electricitydemand #electricitysupply
-
Events and historical uses of “AI” suggest that what is often touted as AI might actually involve some level of human intervention. Recent examples span various tech applications: AI-powered voice interfaces in fast-food drive-thrus, which require human oversight for 70% of tasks, and Amazon’s retraction of its automated checkout systems due to excessive human verification needs. Historically, companies have promoted AI chatbots as autonomous entities capable of handling tasks like scheduling meetings or booking flights, yet often these ‘bots’ were actually humans emulating AI behaviour. The practice is not new; it recalls the 18th-century Mechanical Turk, a supposed chess-playing machine that concealed a human player. While AI promises to replicate or surpass human capabilities, the reality often falls short, necessitating human backup in many applications. This calls into question the legitimacy of such applications and claims. Ongoing reliance on humans raises important questions about the nature and definition of AI, alongside the ethical contextualisation and responsibility of AI application creators. In light of these insights: what constitutes AI?

#responsibleai #whatisai #ethicalai #aiethics #ethics #ai #generativeai
-
Prompting a generative AI model with a simple request can help reveal patterns in its underlying training data, particularly for closed/proprietary models such as ChatGPT 3.5 (the free-to-use version) and ChatGPT 4 (the paid version) from OpenAI. The intention with these notes is to surface the go-to terms related to "climate" for the most widely used models. This informs the extent to which you need to provide context and/or direction about the approach you want responses and content generation to take. Read more here: https://lnkd.in/erJwfQhA

#responsibleai #ethicalai #aiethics #generativeai #climateai #climatechange #climatecrisis #climate #greenwashing
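To make the exercise concrete, here is a minimal sketch of one way to run it programmatically: send the same simple "climate" prompt to a model several times and tally the recurring terms. It assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name, prompt wording, and repetition count are illustrative choices, not part of the original notes.

```python
# Probe a model's go-to "climate" vocabulary by repeating a simple prompt
# and counting which terms recur across responses.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "List ten words you most associate with 'climate'."  # illustrative

terms = Counter()
for _ in range(5):  # repeat to see which associations are stable
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = response.choices[0].message.content.lower()
    terms.update(word.strip(".,:;-() ") for word in text.split())

print(terms.most_common(15))  # the model's recurring "climate" terms
```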
-
Google recently published a paper discussing the advances in AI that have enabled large-scale global flood forecasting, previously challenging due to a lack of data in many regions. Floods are the most frequent natural disaster, causing economic and human impact globally, with nearly 1.5 billion people exposed to severe flood risk. Since 2017, efforts have been made to develop a real-time flood forecasting system, which is now operational and provides alerts through various Google platforms.

The paper highlights the use of machine learning (ML) to enhance flood forecasting capabilities, particularly in data-scarce regions, extending reliable flood forecasting from zero days (nowcasts) out to five days and improving forecasts in Africa and Asia to levels comparable to Europe. The research, in collaboration with the European Centre for Medium-Range Weather Forecasts (ECMWF) and other academic partners, has led to ML models capable of predicting extreme flood events even in ungauged watersheds. These models, particularly Long Short-Term Memory (LSTM) neural networks, have shown superior performance in river forecasting by synthesising information from multiple data sources, including publicly available weather and watershed data. The models are trained on a global scale, enabling predictions for any river location, and are designed to produce probabilistic forecasts, which are crucial for managing flood risks effectively. The initiative is part of Google's adaptation and resilience efforts and its commitment to leveraging AI and ML to address climate change and enhance community resilience.

Google - Using AI to expand global access to reliable flood forecasts - https://lnkd.in/ebwanw2e

#climateai #responsibleai #climateaction #climatecrisis #climatechange #floods #forecast #forecasting #weather #weatherforecasting #climateadaptation #climatemitigations
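As a rough illustration of the model family described above (and emphatically not Google's actual system), here is a minimal PyTorch sketch of an LSTM that maps a window of daily weather/watershed inputs to a probabilistic multi-day streamflow forecast; the feature count, hidden size, and five-day horizon are placeholder choices.

```python
# A minimal LSTM river-forecasting sketch: past daily inputs in, a mean and
# variance per lead time out (probabilistic rather than point forecasts).
import torch
import torch.nn as nn

class StreamflowLSTM(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64, horizon: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Two outputs per lead time: a mean and a log-variance, a simple way
        # to produce probabilistic forecasts.
        self.head = nn.Linear(hidden, 2 * horizon)

    def forward(self, x: torch.Tensor):
        # x: (batch, days, n_features) of weather/watershed inputs
        _, (h, _) = self.lstm(x)
        mean, log_var = self.head(h[-1]).chunk(2, dim=-1)
        return mean, log_var  # one (mean, log-variance) pair per lead day

model = StreamflowLSTM()
past = torch.randn(4, 365, 8)   # a year of daily inputs for 4 locations
mean, log_var = model(past)
print(mean.shape)               # torch.Size([4, 5]): 1-5 day lead times
```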
-
The US House has banned the use of Microsoft Copilot, an AI-driven chatbot, by congressional staff due to cybersecurity concerns. The move highlights the federal government's cautious approach to AI usage within its operations even as it works on establishing regulations for the technology. Previously, the House had limited the use of ChatGPT to a subscription-based version, disallowing the free variant. The ban on Microsoft Copilot stems from the potential risk of sensitive House data being exposed to unauthorised cloud services. Microsoft is developing a suite of AI tools tailored for government use, aiming to meet higher security standards, and hopes these will alleviate the House's concerns. The House's administrative body is open to considering the government-specific version of Copilot once it is available, reflecting a broader trend of organisations grappling with the balance between leveraging AI's capabilities and safeguarding sensitive information.

Axios - Scoop: Congress bans staff use of Microsoft's AI Copilot - https://lnkd.in/eJxabW59

#responsibleai #ethicalai #generativeai #genai #ethics #aiethics #datasecurity #cybersecurity #news
-
The Utah Artificial Intelligence Policy Act (UAIP), enacted on March 13, 2024, aims to regulate generative AI, defined as AI systems trained on data to interact through text, audio, or visual communication and produce human-like, non-scripted outputs with minimal human oversight. It excludes non-generative AI tools, like recommendation engines, from its scope. Key obligations under the UAIP include:

Disclosure Requirements
Entities in regulated occupations must prominently inform consumers at the start of any interaction that they are engaging with generative AI. This applies to both oral and written communications. Businesses outside regulated fields but subject to Utah's consumer protection laws must clearly disclose the use of generative AI upon consumer inquiry, though the law does not specify how such disclosures should be made.

Company Responsibility for AI Output
The UAIP holds companies accountable for any violations of consumer protection laws by generative AI, disallowing them from deflecting blame onto the AI. This implies that businesses must treat statements made by AI similarly to those made by employees.

Fines and Penalties
Violations of the UAIP can lead to administrative fines of up to $2,500 per incident, and courts, through actions brought by the Utah Division of Consumer Protection, can impose additional penalties, including injunctions and disgorgement of profits. The Utah Attorney General may seek further penalties of $5,000 per violation.

Encouragement of AI Innovation
Despite the regulatory framework, the UAIP also promotes AI innovation. It establishes an Office of Artificial Intelligence Policy to oversee an "Artificial Intelligence Learning Laboratory Program" (AI Lab), which acts as a regulatory sandbox, offering regulatory relief and guidance for companies developing AI technologies in Utah.

Some implications for businesses to consider:
- Companies within the UAIP's purview must establish a compliant disclosure mechanism by the May 1, 2024 deadline; a sketch of what this could look like follows below. This involves clear communication with consumers about the use of generative AI.
- Entities not directly under the UAIP's scope might still consider adopting similar disclosure practices to enhance transparency and trust with users.
- Businesses are advised to carefully select and train their AI tools to minimise misinformation risks and possibly include disclaimers about AI-generated content being for general informational purposes only.
- The UAIP sets a precedent that might influence other states to enact similar regulations, potentially leading to a diverse regulatory landscape for AI across the U.S. and emphasising the need for robust AI compliance programs within companies.

Utah State Legislature - S.B. 149 Artificial Intelligence Amendments - https://lnkd.in/gRfDvzEN

#responsibleai #ethicalai #airegulation #regulation #statelegislation #generativeai #genai #legislation #guidance
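As a loose, hypothetical sketch of what a session-start disclosure mechanism of the kind the UAIP contemplates could look like in a chatbot flow (the wording and trigger logic are assumptions for illustration, not legal guidance):

```python
# Prepend a prominent generative-AI notice to the first reply of every
# session, so the disclosure appears at the start of the interaction.
from dataclasses import dataclass, field

DISCLOSURE = (
    "Notice: you are interacting with generative AI, not a human. "
    "Responses are for general informational purposes only."
)

@dataclass
class ChatSession:
    history: list[str] = field(default_factory=list)
    disclosed: bool = False

    def reply(self, user_message: str, model_reply: str) -> str:
        if not self.disclosed:  # first exchange of the session
            self.disclosed = True
            model_reply = f"{DISCLOSURE}\n\n{model_reply}"
        self.history.extend([user_message, model_reply])
        return model_reply

session = ChatSession()
print(session.reply("Hi!", "Hello - how can I help?"))
```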
-
The US and UK have signed a Memorandum of Understanding (MOU) to collaborate on developing tests for advanced AI models, continuing their commitment from the AI Safety Summit. The partnership involves aligning scientific approaches and accelerating the development of evaluation suites for AI models and systems. Both nations aim to establish a common approach to AI safety testing, share resources, and possibly conduct joint testing exercises. The agreement emphasises immediate collaboration between the US and UK AI Safety Institutes, focusing on sharing expertise and personnel, and seeks to extend similar partnerships globally to promote AI safety. US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan reinforced the importance of the partnership in addressing AI risks and enhancing the understanding and evaluation of AI systems.

Although the agreement underscores the importance of international collaboration in ensuring AI safety and security, sharing vital information, and establishing a common scientific foundation for AI safety testing, it also signifies a continued trend in the UK and US of favouring guidance over regulation, in the belief that this fosters innovation and positions them as strategic and economic leaders in this space. Where the EU has legislated, the UK and US pursue exploratory, non-enforced engagements with the technology.

US Department of Commerce - US and UK Announce Partnership on Science of AI Safety - https://lnkd.in/erFHk3-K
UK Department for Science, Innovation and Technology - Collaboration on the safety of AI: UK-US memorandum of understanding - https://lnkd.in/eWCkSeDV

#responsibleai #ethicalai #ai #generativeai #aisafety #regulation #guidance #regulatoryframeworks #safety #us #uk #innovation