Incubator Q&A

Post History

2 answers  ·  posted 7mo ago by pbloem‭  ·  last activity 6mo ago by mr Tsjolder‭

#2: Post edited by pbloem‭ · 2024-05-08T16:40:23Z (7 months ago)
#1: Initial revision by pbloem‭ · 2024-05-08T16:31:24Z (7 months ago)
Do Large Language Models "reason"?
There is a lot of debate about the "cognitive" capabilities of LLMs and LLM-based chatbots like ChatGPT. It's common to see statements like "these models just apply statistical pattern matching" and "they have no concept of the world." On the other hand, they are clearly capable of following simple instructions and of manipulating things like code very effectively.

Is there currently a scientific consensus on whether large language models are capable of reasoning? I'm looking for hard science, backed up by theory or experiment, not mere assertions. If there is no consensus, what are the main results pointing in different directions?

The answer most likely depends on how "reasoning" is defined; in that case, I'm interested in answers under any specific definition of reasoning.

It also depends on the model, of course. I'm interested in whether LLMs are capable of reasoning _in principle_, rather than on average. That is, if most LLMs don't reason, but one particular model does (because of, say, the amount of training data), then the answer is "yes".
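
To make "experiment" concrete, here is a minimal sketch of the kind of behavioral probe I have in mind. It is written in Python, and `query_model` is a hypothetical stand-in for whatever LLM client one uses, not a real API. The probe tests two-step deduction over nonsense words, so that memorized training text cannot supply the answer.

```python
import random

# Hypothetical stand-in for an LLM call; plug in any real client here.
def query_model(prompt: str) -> str:
    raise NotImplementedError("connect this to an actual model")

# Nonsense words, so the syllogisms can't be answered from memorized text.
NONSENSE = ["blick", "wug", "dax", "fep", "zorp", "quim"]

def make_item() -> tuple[str, str]:
    """Build one valid or invalid two-premise syllogism."""
    a, b, c = random.sample(NONSENSE, 3)
    if random.random() < 0.5:
        # Valid: all a are b, all b are c  =>  every a is a c.
        premises = f"All {a}s are {b}s. All {b}s are {c}s."
        expected = "yes"
    else:
        # Invalid: the shared term is in the wrong position,
        # so the conclusion does not follow.
        premises = f"All {a}s are {b}s. All {c}s are {b}s."
        expected = "no"
    prompt = (
        f"{premises} Does it follow that every {a} is a {c}? "
        "Answer only yes or no."
    )
    return prompt, expected

def run_probe(n: int = 100) -> float:
    """Return accuracy; chance level is 0.5 for this yes/no task."""
    correct = 0
    for _ in range(n):
        prompt, expected = make_item()
        reply = query_model(prompt).strip().lower()
        correct += reply.startswith(expected)
    return correct / n
```

Accuracy well above chance on items like these would be evidence of (one narrow definition of) reasoning; of course, any single probe only speaks to the particular definition it operationalizes.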