Do Large Language Models "reason"?
There is a lot of debate about the "cognitive" capabilities of LLMs and LLM-based chatbots like ChatGPT. It's common to see statements like "these models just apply statistical pattern matching" and "they have no concept of the world." On the other hand, they can clearly follow simple instructions and manipulate things like code very effectively.
Is there currently a scientific consensus on whether large language models are capable of reasoning? I'm looking for hard science, backed up by theory or experiment, not simple assertions. If there is no consensus, what are the main results pointing in the different directions?
The answer most likely depends on how "reasoning" is defined, so I'm interested in answers under any specific definition of reasoning.
It also depends on the model, of course. I'm interested in whether LLMs are capable of reasoning in principle, rather than on average. That is, if most LLMs don't reason, but one particular model does (because of, say, the amount of training data), then the answer is "yes".