
Comments on Aggregate 404 errors from log file on the Linux command line


Question: Aggregate 404 errors from log file on the Linux command line

My web server is logging in combined log format and my host gives me SSH access to my server where the logs are stored. I see entries in the log file for 404 errors like:

10.10.10.10 - - [10/Jun/2023:10:00:00 +0000] "GET http://example.com/some-page.html HTTP/1.1" 404 - "https://referringsite.example/linking-page.html" "-"

Is there a way to use Linux command line tools to list the 404 URLs on my site having the most hits with referrers?


0 comment threads

Answer

I'm sure there's a program somewhere that parses common log entries, but I don't know what it is. However, the task as stated is pretty simple, so I'd try to hack something together myself. You need to:

  1. Pull out the error code and referrer with a regex
  2. Filter for 404 codes
  3. Filter out empty referrers
  4. Print requests
  5. Count them

Step 1 can be done with a regex and sed, but you need a lot of []{}()+ in the pattern, and sed makes these annoying to type (you have to escape them all). So instead I would read the log into a Python script.

If you use sed for step 1, you would use grep for steps 2 and 3, also with regexes.
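For reference, a rough sketch of what that sed/grep route could look like (this pattern is only a sketch for the combined-log line shown in the question; adjust it for your exact log format):

```shell
# -n together with the /p flag prints only lines where the substitution
# matched, which doubles as the 404 filter (step 2); the trailing grep
# drops entries whose referrer field was "-" (step 3).
sed -nE 's/.*"([^"]+)" 404 [^ ]+ "([^"]+)" "[^"]*"$/\1 => \2/p' web.log \
  | grep -v ' => -$'
```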

If you want to stick with the shell, you can use Python only to dump JSON, CSV, or whatever else you want. You can then filter the JSON with jq and the CSV with csvkit or csvq.
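To illustrate the jq route, here is a sketch assuming a hypothetical variant of the script (call it parse_json.py, not shown here) that emits one JSON object per line with the same field names as the LogEntry tuple:

```shell
# parse_json.py is hypothetical: a version of the parser that prints e.g.
#   {"request": "GET /x HTTP/1.1", "error_code": "404", "referer": "-"}
# jq then takes over the filtering; -r prints raw strings without quotes.
cat web.log | python parse_json.py \
  | jq -r 'select(.error_code == "404" and .referer != "-") | .request'
```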

Since I used Python, it's easier to use Python's syntax for filtering as well, which is what I did:

import re
import sys
from collections import Counter
from typing import NamedTuple

# Three space-separated fields, then [timestamp], "request", status, bytes, "referer"
RE_SYSLOG = re.compile(r'(?:\S+ ){3}\[([^[\]]+)\] "([^"]+)" (\d+) \S+ "([^"]+)"')


class LogEntry(NamedTuple):
    timestamp: str
    request: str
    error_code: str
    referer: str


def main():
    raw = sys.stdin.readlines()

    parsed = [parse_syslog_message(s) for s in raw]
    # A referer of "-" means the request had none
    filtered = [i for i in parsed if i.error_code == "404" and len(i.referer) > 3]

    # Print requests
    for i in filtered:
        print(i.request)


def parse_syslog_message(msg: str) -> LogEntry:
    m = RE_SYSLOG.search(msg)
    return LogEntry(*m.groups())


if __name__ == "__main__":
    main()

You then do cat web.log | python parse.py and you'll get a list of the requests you want (step 4). I assumed you do care about the request type (because it's easier that way), but I'm sure you can see how to get only the URL from i.request.

What remains is to count. You can do this with uniq (which requires pre-sorted input):

cat web.log | python parse.py | sort | uniq -c
      1 GET http://example.com/some-page.html HTTP/1.1
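Since the question asks for the 404s with the most hits, one more numeric sort ranks the counts (head -20 is an arbitrary cutoff):

```shell
# uniq -c prepends a count to each line; sort -n sorts numerically on
# that count, and -r reverses to descending, so the most-hit URLs come first.
cat web.log | python parse.py | sort | uniq -c | sort -rn | head -20
```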
2 comment threads

Syslog =? common log (1 comment)
What version of Python is this for?
Stephen Ostermiller wrote over 1 year ago

With Python 2.7.18 I got "File parse.py, line 10 timestamp: str SyntaxError: invalid syntax."

matthewsnyder wrote over 1 year ago · edited over 1 year ago

I tested this code with Python 3.11. It might work as far back as 3.7 - I think typing.NamedTuple is the newest feature I'm using.

Definitely not Py2.7 though! You should not use Python 2; it's been EOL since 2020: https://www.python.org/doc/sunset-python-2/. I'm guessing you're on an old Debian-family server - consider updating (I think even Debian is defaulting to python3 now).

Stephen Ostermiller wrote over 1 year ago

I'm testing with whatever is installed on my web host. Looks like they have a separate binary for Python 3 installed so I'll try that next.