I'll note that this won't fully answer the question, because I don't know the modern academic scene well enough to provide a "consensus." Maybe that makes for a bad first answer here, and for that, I apologize. However, I'll work from fairly traditional computer science theory. I also can't provide a complete answer, because as you note, it all depends on a key definition, though I can narrow down how that definition needs to look.

# The Basics

First, we classify modern computer hardware as (essentially) [Turing complete](https://en.wikipedia.org/wiki/Turing_completeness). In other words, subject to material limitations such as limited storage, any general-purpose computer can execute any algorithm, which theoreticians represent as [computable functions](https://en.wikipedia.org/wiki/Computable_function). Turing completeness appears to represent an *upper limit* for computational power, in that parallel processors, networked computers, multiple I/O streams, and any other hardware that you might care to add to a system will never increase the classes of algorithms that it can execute, only the efficiency with which it can run them.

In addition, software can't make hardware do anything that it can't do already, because it only kicks off existing instructions. If you don't have circuits that can generate truly random numbers, for a typical example, no amount of pseudo-random algorithmic work will give you actual randomness. Likewise, as much as industry pundits insist that adding enough hardware will provide the opportunity for the [emergence](https://en.wikipedia.org/wiki/Emergence) of intelligence in their system, you (by definition, really) can't predict emergence, and certainly can't predict *what* will emerge. Maybe they'll get the [Frosty the Snowman](https://en.wikipedia.org/wiki/Frosty_the_Snowman) that they all seem to envision, or maybe they'll get a pattern that would make wild-looking wallpaper.

# Complexity

Now we get to the problem with definitions. A language model (as it exists) can't do anything that the hardware running it can't do, and the computer running it only has the capabilities of any Turing complete system. To answer the question, then, we need to know whether "reason" happens algorithmically. Or, to generalize that question, we need to know which [complexity class](https://en.wikipedia.org/wiki/Complexity_class) reasoning falls into, if any.

If reasoning falls into [EXPSPACE](https://en.wikipedia.org/wiki/EXPSPACE) or one of the simpler classes it contains, then computers can reason, meaning that algorithms can reason, meaning that certain language models can reason. If it falls outside EXPSPACE, then it can't, because I *believe* that boundary marks the outer realm of computability. Given the known EXPSPACE-complete problems, and knowing that completeness means you can reduce every problem in the class to any complete problem in it, I have a *feeling* about how that question gets answered, but I don't know of anyone who has answered it with any degree of credibility, in the thousands of years that people have tried to model intelligence and decision-making.
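For readers who want the formal versions of the terms above, here's a compact restatement of the standard textbook definitions; nothing in it is specific to this question, and the reductions are the usual polynomial-time ones:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% EXPSPACE: decision problems solvable by a deterministic machine
% using an exponential amount of work space.
\[
  \mathsf{EXPSPACE} \;=\; \bigcup_{k \ge 1} \mathsf{DSPACE}\!\bigl(2^{n^{k}}\bigr)
\]

% Where it sits relative to the better-known classes (all of these inclusions are known):
\[
  \mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PSPACE}
  \subseteq \mathsf{EXPTIME} \subseteq \mathsf{EXPSPACE}
\]

% Completeness: L is EXPSPACE-complete when L lies in EXPSPACE and every
% problem in EXPSPACE reduces to it in polynomial time.
\[
  L \text{ is } \mathsf{EXPSPACE}\text{-complete}
  \iff
  L \in \mathsf{EXPSPACE}
  \;\text{and}\;
  \forall L' \in \mathsf{EXPSPACE}:\; L' \le_{p} L
\]

\end{document}
```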
# Consequences

As mentioned, I won't say that language models definitely can or can't reason, because of that gap in definitions. But I will say that it'll require a massive leap forward for mathematics to either make it happen or confirm that it happens.

As I say, we have thousands of years of philosophers, logicians, mathematicians, psychologists, and other thinkers and researchers trying to decipher how thinking works, and none of them have come up with a plausible model in all that time. Either we can compute reasoning, in which case all computers have always had the capability to reason at the hardware level, or we can't and they don't. The people arguing for emergent intelligence assume the latter, and suggest that it doesn't matter, because some non-mathematical force will make it happen anyway. (🎶 *There must have been some magic in that old silk hat they found...* 🎶)

However, consider the practical effects beyond an exotic jump forward in math. If a computer can simulate or directly engage in reasoning, then we can write algorithms to do the same, without the overhead of simulating millions or billions of tiny computers, the abstract neurons. That not only means the ability to "outsource" reasoning to software, but the ability to do so *on paper*, because a person can follow an algorithm with a pencil. It may also mean, depending on what reasoning encompasses and whether we humans can do more than that, that we can simulate entire personalities, again in code or on paper, and we would need to deal with the politics of that. And I'd call those the **immediate** effects.
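As a footnote on the "tiny computers" remark: here's a minimal sketch, in plain Python (my own illustration with arbitrary numbers, not tied to any particular model or library), of one abstract neuron, a weighted sum pushed through a squashing function. Every step is ordinary arithmetic, which is exactly what makes the pencil-and-paper version possible in principle.

```python
import math

def neuron(inputs, weights, bias):
    """One abstract neuron: a weighted sum followed by a logistic squashing function.

    Nothing here goes beyond arithmetic that a patient person could
    carry out by hand; the hardware only does it much faster.
    """
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

# A "network," at this level of abstraction, is just many of these
# evaluated in order, which is the overhead described above.
print(neuron([0.5, -1.0, 2.0], [0.1, 0.4, -0.3], bias=0.2))
```

If reasoning does turn out to be computable, the point above is that you could skip this layer entirely and write the algorithm directly.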