Three methods will be the foundation for AGI: RGM (Renormalising Generative Models - Friston et al., 2024), JEPA (Joint Embedding Predictive Architecture - LeCun, 2022 onward), and CORTECONs(R) (COntext-Retentive, TEmporally-CONnected neural networks - Maren, 1991-93, 2014-present). #agi #ai #artificialintelligence #artificialgeneralintelligence #CORTECON #JEPA #RGM #activeinference https://lnkd.in/gDuabFdc
Alianna Maren’s Post
More Relevant Posts
-
CORTECONs(R) - using an obscure statistical mechanics method (the cluster variation method, or CVM) - can help us achieve signal-to-symbolic connections in AGIs. A minimal sketch of a pair-level CVM free energy follows after the video link below. #ai #agi #artificialintelligence #artificialgeneralintelligence #JEPA #CORTECON #activeinference #RGM
AGI: Three Foundation Methods - RGM, JEPA, and CORTECONs(R)
https://www.youtube.com/
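For readers curious what the CVM actually computes, here is a minimal sketch of a pair-level cluster variation (Bethe) free energy for a homogeneous Ising-like system. This is an illustrative assumption, not Maren's own CORTECON formulation (which uses additional cluster variables such as triplets and next-nearest neighbours); the coupling `J`, temperature `T`, and coordination number `q` are arbitrary placeholders.

```python
# Minimal sketch: pair-level cluster variation (Bethe) free energy for a
# homogeneous Ising-like system. Illustrative only; the CORTECON work uses
# a richer CVM with additional cluster variables.
import numpy as np

SPINS = np.array([-1.0, 1.0])

def bethe_free_energy(p1, p2, J=1.0, T=1.0, q=4):
    """Free energy per site, F = U - T*S, with the cluster-variation
    (Bethe-level) entropy S = (q/2)*S_pair - (q-1)*S_site for a lattice
    of coordination number q.

    p1 : length-2 single-site marginal over spin states {-1, +1}
    p2 : 2x2 nearest-neighbour pair marginal (rows/cols match p1)
    """
    # Interaction energy per site: q/2 bonds per site, each contributing -J*s*s'
    u = -(q / 2.0) * J * np.einsum("i,j,ij->", SPINS, SPINS, p2)
    s_site = -np.sum(p1 * np.log(p1))
    s_pair = -np.sum(p2 * np.log(p2))
    entropy = (q / 2.0) * s_pair - (q - 1.0) * s_site
    return u - T * entropy

# Quick check: uncorrelated pairs at zero magnetization
p1 = np.array([0.5, 0.5])
p2 = np.outer(p1, p1)
print(bethe_free_energy(p1, p2, J=1.0, T=2.0, q=4))
```

Minimizing this free energy over consistent single-site and pair marginals yields the Bethe self-consistency equations; the CVM generalizes the same construction to larger clusters.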
-
My talk "How Do We Build a General Intelligence?" is now online! This talk was part of the "How Far are We from AGI?" workshop at ICLR 2024.

It was an interesting experience attending this workshop. Many of the participants felt that AGI was just around the corner. By contrast, I argue that we are at least 50-100 years away from automated systems that can propose scientific theories like general relativity or quantum mechanics. On the other hand, I think it is possible to build generally intelligent systems, despite what results like the "no free lunch theorems" might suggest, and we can provably gain some understanding of the principles for creating relatively universal learners.

The talk is essentially in five parts:
1) How do we build systems that learn and generalize, from a perspective of probability and compression? Can we use these principles to resolve mysterious generalization behaviour in deep learning? (A toy illustration of the compression view appears after the video link below.)
2) Is it possible to build general-purpose AI systems in light of results like the no free lunch theorems?
3) What are the prescriptions for general intelligence?
4) What are the demonstrations of those principles in scientific settings?
5) What are we far away from solving?
How Do We Build a General Intelligence?
https://www.youtube.com/
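As a toy illustration of the probability-and-compression view from part 1 (this is not material from the talk; the synthetic data, degrees, and the `bic` helper below are invented for illustration), the snippet selects a polynomial degree by minimizing a two-part description length, approximated here with BIC: the model that compresses the data best also generalizes best.

```python
# Toy illustration of generalization-as-compression: pick the polynomial
# degree that minimizes a two-part description length (model bits plus
# data-fit bits), approximated here with BIC. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.1, x.size)  # true degree: 2

def bic(x, y, degree):
    """BIC ~ n*log(RSS/n) + k*log(n): residual coding cost plus a
    log(n)-per-parameter charge for describing the model itself."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n, k = y.size, degree + 1
    rss = np.sum(residuals**2)
    return n * np.log(rss / n) + k * np.log(n)

scores = {d: bic(x, y, d) for d in range(1, 9)}
best = min(scores, key=scores.get)
print("BIC by degree:", {d: round(s, 1) for d, s in scores.items()})
print("selected degree:", best)  # expect 2: the shortest total code wins
```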
-
Read our new article on AI for hip fracture classification. I’m a proud colleague and co-supervisor of Ehsan Akbarian!
Development and validation of an artificial intelligence model for the classification of hip fractures using the AO-OTA framework Ehsan Akbarian, Mehrgan Mohammadi, Emilia Tiala, Oscar Ljungberg, Ali Sharif Razavian, Martin Magnéli, and Max Gordon https://lnkd.in/dyN9R7Xv
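Purely as a hypothetical sketch of the kind of pipeline such studies commonly build on (this is not the published model; the backbone choice, `NUM_CLASSES`, and hyperparameters below are placeholders), here is a pretrained CNN fine-tuned for multi-class fracture classification:

```python
# Hypothetical sketch of a multi-class radiograph classifier in the spirit
# of AO-OTA hip-fracture classification. NOT the published model: backbone,
# class list, and training details here are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # placeholder: e.g. a subset of AO-OTA fracture groups

# Start from an ImageNet-pretrained backbone, replace the classifier head
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One supervised step: images (B,3,H,W), labels (B,) of class ids."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for radiographs
loss = train_step(torch.randn(4, 3, 224, 224),
                  torch.randint(0, NUM_CLASSES, (4,)))
print(f"loss: {loss:.3f}")
```

In practice, published radiograph classifiers layer class-imbalance handling, augmentation, and clinician-validated labels on top of a skeleton like this.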
-
Extremely interesting workshop on "Human and artificial meaning: On the sense and sense-making of AIs" https://lnkd.in/dtZ275uC My contribution was on "Quality of models for man and machine: A semiotic perspective on the difference between human and artificial meaning" - work in progress :-) #neuroconceptualization
-
Polymathic AI: a multi-domain scientific AI tool: "To usher in a new class of machine learning for scientific data, building models that can leverage shared concepts across disciplines." https://polymathic-ai.org/
-
See what's new in the latest build of GenFEA 2024!
- Improved meshing capabilities
- Improved report generation (with AI assistance)
- Improved DesignMate (AI) functionality and automation
- Improved results visualization
https://lnkd.in/gxcbnqVe #genfea #ai #structuralengineering #structuraldesign
GenFEA - What's New (24 Sep 2024)
https://www.youtube.com/
-
Chain-of-thought (CoT) prompting mirrors human reasoning, facilitating systematic problem-solving through a coherent series of logical deductions. https://lnkd.in/dPZPbAye
Chain of Thought Prompting: The Next Frontier in Generative AI
dmai287.medium.com
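Purely as an illustrative sketch of the prompt shape (the linked article is the actual reference; `call_llm` below is a hypothetical stand-in for whatever model client you use), a few-shot CoT prompt prepends an exemplar whose answer spells out its intermediate steps:

```python
# Minimal sketch of chain-of-thought prompting: a few-shot prompt whose
# exemplar exposes intermediate reasoning steps before the final answer.
# `call_llm` is a hypothetical placeholder, not a real API.

COT_EXEMPLAR = """Q: A cafeteria had 23 apples. They used 20 and bought 6 more.
How many apples do they have?
A: Start with 23 apples. Using 20 leaves 23 - 20 = 3.
Buying 6 more gives 3 + 6 = 9. The answer is 9."""

def build_cot_prompt(question: str) -> str:
    """Prepend a reasoning-annotated exemplar so the model imitates the
    step-by-step format instead of answering in one jump."""
    return f"{COT_EXEMPLAR}\n\nQ: {question}\nA: Let's think step by step."

def call_llm(prompt: str) -> str:
    # Placeholder: plug in your actual model client here.
    raise NotImplementedError

prompt = build_cot_prompt(
    "A farm has 15 cows. It sells 4 and buys 9 more. How many cows now?"
)
print(prompt)
```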
-
I do not believe in the possibility of AGI or ASI unless the definition of intelligence departs from the one typical of human beings. Despite this, I have been following Ben Goertzel's work since the late 1990s. The preprint indicated in this post will be one of my new readings in this field. https://lnkd.in/d6FPQ5Vg
2412.16559
arxiv.org
-
The Characteristics and Personality of Generative Artificial Intelligence Cybernetics Systems.
-
François Chollet introduced the ARC benchmark to test AI on tasks that require core, fundamental knowledge rather than memorization. Here is a summary of Dwarkesh Patel's conversation with François Chollet on why LLMs won’t lead to AGI and why the ARC benchmark is crucial:
▪️ Purpose of the ARC benchmark: It is designed to assess whether AI can handle tasks that require a basic, fundamental understanding of concepts, such as elementary physics and object recognition. The tasks in ARC are novel, demanding genuine reasoning rather than recall.
▪️ Critique of scaling LLMs: Chollet argues that simply increasing the size of LLMs by feeding them more data doesn't boost true intelligence. Scaling up enhances a model's ability to interpolate between known data points, which is essentially sophisticated pattern matching.
▪️ Intelligence vs. memorization: True intelligence is the ability to encounter a new problem and devise a solution without prior experience. In contrast, current AI systems often rely on 'memorizing' solutions from numerous similar training examples.
▪️ The $1 million ARC Prize: To motivate advances toward genuine AI reasoning capabilities, @fchollet and @mikeknoop have announced a million-dollar prize for the first team to achieve a score of 85% on the ARC benchmark. This score marks average human performance.
▪️ Concerns over AI research concentration: Chollet claims that corporations like OpenAI have monopolized AI research and hindered open innovation by restricting access to their latest findings. He believes this centralization has slowed broader AI progress by several years.
▪️ Future of AI development: For AI to achieve human-like general intelligence, models need to go beyond mere data interpolation. This includes developing methods for discrete program synthesis and search that can generalize across diverse tasks with minimal data (a toy sketch of this idea follows after the video link below).
Here's the full video of the conversation. It's worth watching: https://lnkd.in/e6Bczav8
Francois Chollet - Why The Biggest AI Models Can't Solve Simple Puzzles
https://www.youtube.com/
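As a toy sketch of the discrete program synthesis and search idea from the last bullet (this is not the ARC codebase or Chollet's method; the mini-DSL and the task below are invented for illustration), the snippet enumerates a handful of grid transforms, keeps the one consistent with every training pair, and applies it to the test grid:

```python
# Toy sketch of discrete program search in the spirit of ARC: search a tiny
# DSL of grid transforms for one consistent with all training pairs, then
# apply it to the test input. Real ARC tasks need far richer DSLs.
import numpy as np

# Hypothetical mini-DSL: each primitive maps a grid to a grid
DSL = {
    "identity": lambda g: g,
    "flip_lr": np.fliplr,
    "flip_ud": np.flipud,
    "rot90": lambda g: np.rot90(g, 1),
    "rot180": lambda g: np.rot90(g, 2),
    "transpose": lambda g: g.T,
}

def solve(train_pairs, test_input):
    """Return (name, output) for the first DSL program consistent with
    every training (input, output) pair, or None if no program fits."""
    for name, fn in DSL.items():
        if all(np.array_equal(fn(x), y) for x, y in train_pairs):
            return name, fn(test_input)
    return None

# Tiny synthetic task: the hidden rule is a left-right flip
train = [
    (np.array([[1, 0], [2, 0]]), np.array([[0, 1], [0, 2]])),
    (np.array([[3, 3, 0]]), np.array([[0, 3, 3]])),
]
test = np.array([[0, 4], [5, 0]])
print(solve(train, test))  # ('flip_lr', [[4, 0], [0, 5]])
```

Real ARC tasks require a vastly larger program space and guided search, which is exactly why the benchmark resists pure pattern matchers.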