Do Large Language Models "reason"?

There is a lot of debate about the "cognitive" capabilities of LLMs and LLM-based chatbots such as ChatGPT. It's common to see statements like "these models just apply statistical pattern matching" and "they have no concept of the world." On the other hand, they are clearly capable of following simple instructions and of manipulating things like code very effectively.

Is there currently a scientific consensus on whether large language models are capable of reasoning? I'm looking for hard science, backed up by theory or experiment, not simple assertions. If there is no consensus, what are the main results pointing in the different directions?

The answer most likely depends on how "reasoning" is defined; in that case, I'm interested in answers for any specific definition of reasoning.

It also depends on the model, of course. I'm interested in whether LLMs are capable of reasoning in principle, rather than on average. That is, if most LLMs don't reason, but one particular model does (because of, say, the amount of training data), then the answer is "yes".

1 comment thread

No. (1 comment)

Posting as a comment because I don't feel like doing a literature review.

They don't reason. They produce text that fits certain patterns. Internally, the model is a neural network, with most nodes dedicated to analyzing and generating semantically coherent text.

A lot of what we humans informally consider "reasoning" is actually just semantics and syntax. LLMs also have a lot of reasoning in their training data (philosophy, math texts, etc.). Together, these two create the illusion of reasoning for uncritical users. The illusion falls apart if you challenge it directly, the way you would on a school exam: it quickly becomes apparent that the only "reasoning" you see is whatever resembles the "reasoning" presented in the source material.
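
To make "produce text that fits certain patterns" concrete, here is a minimal sketch of greedy next-token generation with a causal language model. It assumes the Hugging Face transformers and torch packages are installed and uses the small gpt2 checkpoint purely as a stand-in (not any particular chatbot): at each step the network assigns a score to every vocabulary token, and the highest-scoring token is appended, with no explicit inference step anywhere.

```python
# Minimal sketch of greedy next-token generation.
# Assumed setup: `transformers` and `torch` installed; "gpt2" as a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "All men are mortal. Socrates is a man. Therefore, Socrates is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(5):                            # generate five tokens, one at a time
        logits = model(input_ids).logits          # a score for every vocabulary token
        next_id = logits[0, -1].argmax()          # keep only the single most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
# The output is simply the continuation the model scores as most likely after
# this prompt; there is no separate logical-inference step.
```

Sampling strategies (temperature, top-p, etc.) only change how a token is picked from those scores; the underlying per-token mechanism is the same.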