Talk:Theory of everything: Difference between revisions
:This leads to the issue of what makes a mathematical theory "natural" and, in particular, trustworthy. Pondering the issue, it becomes clear that the main reason is psychological: humans are used to considering sets of things; this is a very sensory experience, we visualize, verbalize, or "auditivize" several similar objects, and we pick one from among them. We have cognitive structures, hardwired in our brains, well suited to carry out such tasks, to realize such neural dynamics. Thus it is natural, and easiest, for us to abstract from such "real" mechanisms. We build up our mathematical and modeling abilities throughout life, in introspection/reflection, in interaction with sensory experience (a little), and mostly in interaction with society, in particular with its scientific output and by discussing science with other humans. Society searches for an optimal presentation of its scientific theories through various processes, and so far the first-order theory of sets has established itself as the most efficient way to do mathematics/computations.
:As remarked previously, most mathematics does not require the full power of ZFC, and some set theorists would like to reject choice as a standard axiom, but the mathematical community likes to have it (in Zorn's lemma, algebraic closures, maximal ideals, ...): it is practical, and it makes some results simpler to prove. And there is a sense that ZFC is consistent, which we could reformulate as "it is infinitely counterintuitive that ZFC is inconsistent". Reformulated this way, we see that consistency should really be considered from a quantitative perspective: imagine a theory that is inconsistent but whose shortest proof of a contradiction is <math>10^{100}</math> deductions long. Then two things follow: (1) humans would never find any inconsistency, and all the results they obtained would be "essentially" consistent and as useful as if they were working in a consistent theory; and (2) it is actually probable that such a theory would have to have a rather complicated axiom system, and that the contradiction would rely on some tricky, interesting mathematics. Now I don't know if the question has been researched, but I think that inconsistencies in a theory of "size n", when they exist, can with high probability be reached in "few steps" relative to n.
:Thus, having as a community explored set theory quite thoroughly without finding any inconsistency, we can (could) be highly confident that it is consistent. Of course this raises the issue of how representative or exhaustive human exploration is, but I think it would not be too "nonuniform". But to be honest, even this is overkill: as humans we make innumerable mistakes, but through education we learn to correct them, to check more carefully when necessary, and to make do with approximate results; so if ZFC were inconsistent we would instantly work around it, it would surely be in an interesting way, and the inconsistency would obviously bear very little relevance to building bridges or sending rockets, whose theories rely on much weaker mathematical theories than ZFC, as hinted above. Of course the consistency of any mathematical theory that will be used to formulate a FOP will never be fully established; but this is somewhat misleading: we can actually make sure to a very high degree that it is consistent, by normal mathematical research but also in a more systematic way, and we could probably quantify in a nice way how consistent we have made sure it is, via the hypothesized results on the complexity of inconsistencies as a function of the size of a theory.
:PS: I'm pretty sure I'm forgetting things I thought I would write, but that will be it for now. A big thank you to the admins who allow my commenting here, which I think is still relevant to the page's topic. [[User:Plm203|Plm203]] ([[User talk:Plm203|talk]]) 05:04, 16 August 2023 (UTC)
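A minimal sketch of the quantitative point in the comment above (added here for illustration only; the rule system, the "FALSE" atom standing in for a contradiction, and the step budget are all hypothetical): a bounded forward-chaining search only reports inconsistencies reachable within its budget, so a theory whose shortest refutation is astronomically long is indistinguishable in practice from a consistent one.

<syntaxhighlight lang="python">
# Toy illustration, not part of the discussion above: deriving the hypothetical
# atom "FALSE" plays the role of finding a contradiction in a theory.

def finds_contradiction(facts, rules, max_steps):
    """Forward-chain over Horn-style rules (premises, conclusion) for at most
    max_steps derivations; report whether the contradiction atom "FALSE" is reached."""
    known = set(facts)
    for _ in range(max_steps):
        derived = None
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                derived = conclusion
                break
        if derived is None:
            return False        # closure reached: no contradiction derivable at all
        known.add(derived)
        if derived == "FALSE":
            return True         # inconsistency found within the step budget
    return False                # budget exhausted: the theory *looks* consistent

# A tiny inconsistent "theory" whose shortest refutation is only two steps long:
rules = [(("p",), "q"), (("q",), "FALSE")]
print(finds_contradiction({"p"}, rules, max_steps=10))   # True

# If the shortest refutation were ~10**100 steps instead, any feasible budget
# would return False, and everything proved along the way would be "essentially"
# consistent -- the quantitative point made in the comment above.
</syntaxhighlight>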
== Regarding the incompleteness argument against attempting a TOE ==
Wikipedia talk pages are not a forum, and changes to the article require reliable sources; please see WP:NOTFORUM and WP:RS.
The following discussion has been closed. Please do not modify it.
First of all, the statement « any "theory of everything" will certainly be a consistent non-trivial mathematical theory » is dubious: a physical theory is not a mathematical theory. There are actually several notions of a mathematical theory, the most commonly used being a first-order classical theory with equality, but we could well consider alternatives, like intuitionistic logic or other non-classical logics. Beyond this, when modeling some physics we only attribute a physical interpretation to very few mathematical variables and well-formed formulas, clear from the context. When we work in ZF set theory we construct real numbers, vector spaces, ... many mathematical objects, and then apply them intuitively to some physical situation, assuming a certain level of accuracy and limitations on how faithful our mathematical description is. But there is of course plenty we can express in our ZF theory that is not supposed to model anything physical, whatever the context, whatever the physicists or mathematicians doing the modeling. In fact much weaker theories than ZF probably suffice to prove all rigorous results of current physics, though of course there are many concepts which are not yet rigorous, like solutions of the equations of fluid mechanics, or the quantum field theories of high-energy physics. It seems unrealistic to expect any nontrivial mathematical theory to be a physical theory, in particular a TOE. After all, how could we expect a simple intellectual construct to exactly match some set of physical phenomena? And there is little point in attempting that: it is enough to have a flexible mathematical framework in which to easily model physics of all kinds, which is what usual set theory provides.

Next, it seems to me that one difficulty with the existence of a TOE is not mentioned in that section: proving the existence of some mathematical objects satisfying what we observe in reality may be impossible in usual mathematics, in ZF(C) set theory. For instance it may be that the standard model of particle physics cannot be proved to exist in ZF; proving existence of quantum Yang-Mills theory with the expected properties is one of CMI's Millennium Problems. It would make sense then to add the desired existence to our mathematical theory as an axiom, or some other axioms which imply it, especially if we can prove that those axioms are consistent with ZF. I should mention here that the standard model is known to be only an effective theory, only an approximation of a more accurate theory, just as the Navier-Stokes equations are only an approximation of the evolution of Newtonian fluids; and in both cases it is conceivable that the more accurate models have regularizing effects which allow existence for all (initial, boundary, or other) conditions, while the effective theory provably does not have solutions in some cases. So even though a TOE might be a good framework to study small-scale/high-energy phenomena, it may not provide the nice expected existence results for effective theories, which may be consistent with a given TOE without being provable in the TOE; so we would have useful physical models in ZF, or some extension thereof, that are not strictly consequences of our TOE. This does not seem to be mentioned in the various critiques cited in the page.

Gödel's incompleteness theorem is really a theorem about computability, in particular about self-reference. So ultimately, the question of its applicability to a physical theory depends on what we mean to use our physical theory for.
Perhaps the very concept of a TOE implies that we want to do "everything" with that physical theory: we would use it to model any kind of computer. We could actually do the same with a theory of psychology: after all it is always humans who think about such issues, so a theory describing all their mental processes must describe all the theories they come up with; so perhaps psychology is a TOE. Research and reflection are an inherently circular process, which is well embodied by our brains, with their recursive connections and rhythmic looping activation of diverse pathways. We may consider psychology as part of chemistry (so perhaps chemistry is a TOE), chemistry as part of quantum physics (...), quantum physics as part of a TOE, the TOE as part of set theory, and set theory as part of psychology. But most mathematicians would recognize set theory as a theory of everything, mathematical or not: except for some logicians, set theorists, and "foundationalists" (who may consider higher-order logic or some axiomatization of category theory as an alternative to ZF), sets are more than enough for mathematicians, and for physicists. So arguing that logical incompleteness is a limitation to the existence of a TOE seems to imply that we want to apply our TOE to problems of logic and computability, which are already well understood within logic and computability. This does not make much practical sense, and we already know the conclusion. It is like expecting our TOE to explain to us why planes fly, or why water boils at 100 °C: those questions belong in theories which are accurate enough where they apply, and have well understood answers (though some may argue that water is very complicated and still poorly understood, with papers published in Nature and Science every year). :)

The issue of limits on accuracy, mentioned in the text, is very interesting; I do not know how far it has been explored in the literature, but to the comments and citations in this wiki page I would add considerations of complexity: there may be tradeoffs between the accuracy of a theory and its computational complexity. For instance theory X may in principle be more accurate than theory Y, but the computations in X may be so complex as to make X unwieldy and yield poorer results than Y. This is related to the above: if we try to describe chemical reactions or psychological phenomena with the standard model of particle physics we won't get very far. Such tradeoffs can be observed in finer situations too; for instance in video games we may use ray tracing or lightmaps to render a scene, and although the former is more accurate, it will yield poorer results on a slow machine: not in a given image once rendered, but in the whole animated result being unbearably slow.
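A hedged toy example of the accuracy/complexity tradeoff just described (the two estimators and the budget are arbitrary stand-ins chosen only for illustration): a method that is more accurate in the limit can lose to a cruder fixed approximation when the compute budget is small, much as ray tracing can lose to lightmaps on a slow machine.

<syntaxhighlight lang="python">
# Toy accuracy-versus-cost tradeoff (illustrative only).
import math

def pi_leibniz(terms):
    """Leibniz series for pi: exact in the limit, but error ~1/terms for a finite budget."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

def pi_cheap():
    """A fixed low-cost approximation (355/113), accurate to about 7 digits."""
    return 355 / 113

budget = 1000                                  # the "slow machine": only a few terms allowed
print(abs(math.pi - pi_leibniz(budget)))       # ~1e-3: the "accurate" method, truncated
print(abs(math.pi - pi_cheap()))               # ~2.7e-7: the "crude" method wins under this budget
</syntaxhighlight>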
In conclusion I would say that the "incompleteness critique" is (given present knowledge) either trivially right or trivially wrong, depending on what we mean to use our TOE for. If we mean to use it as a mathematical theory to decide all mathematical statements (say the continuum hypothesis), then the critique is right; but if we mean it, as seems to be the intention of high-energy physicists, to be a physical theory modelling all fundamental physical phenomena, leaving room for other theories at large scales, low energies, or high complexity (low entropy), and thus most probably a theory of quantum gravity, then Gödel's incompleteness is irrelevant, as we would just prove the necessary existence results in ZF, or add them as axioms, and all known mathematical undecidability results would surely be unaffected. Given the remarks above I guess that the name "theory of everything" itself makes little sense: it would be more descriptive and less polemical to call that kind of theory a "fundamental theory of physics", "theory of fundamental phenomena", "theory of fundamental forces", or "foundation of physics". Note also that there would actually be infinitely many such theories, as we could add random axioms which are untestable, for instance large cardinal axioms in set theory (though I think some set theorists believe that some large cardinal axioms could somehow imply down-to-earth existence results in analysis, or more plausibly in computability theory, but anyway...).

PS: I hope wiki's editors will not be annoyed by this lengthy entry. I feel that the wiki discussion page is the ideal place to make such comments, as for now the subject is not difficult, serious, well-defined, or useful enough for physics journals. Yet it is good to have a somewhat centralized discussion in which to gather comments, some of which will turn out useful. Plm203 (talk) 19:02, 11 August 2023 (UTC)
== You idiots deleted the answer ==
It’s still 1=1, still will be in a bajillion years. [[Special:Contributions/2600:1000:B11F:FA79:9C50:7EBD:2422:11EE|2600:1000:B11F:FA79:9C50:7EBD:2422:11EE]] ([[User talk:2600:1000:B11F:FA79:9C50:7EBD:2422:11EE|talk]]) 18:45, 8 December 2024 (UTC)