Or any other log files/output? I’m open to any solution, but what I would like is something where I can just click on a word or select some text and say “filter that out”.
Something that colors different log levels differently, preferably automatically.
Something that can parse the “columns” and give me a nice quick list of values, like different unit names to filter out/solely include.
Something that lets me choose a time and go there. Something that lets me select only a specific timeframe of logs.
I know this can probably be done by going in and out of journalctl, recalling the last command, and adding specific filter options… but it just feels slow. It’s so many keypresses when I could just right-click on the word and choose “Filter out” or “Search for” or something.
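For context, the CLI round trip I mean looks something like this (the unit name, priority, time window, and search term are just made-up examples):

journalctl -u myservice -p warning --since "10:00" --until "11:00" --grep "timeout"

(And --grep only works if journalctl was built with PCRE2 support, as far as I know.)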
Centralized logging like Graylog or Grafana Loki can help with a lot of this.
It might be a bit overkill, but I use Grafana to do this (with Loki). It’s a pretty involved setup as well, but you can filter and search by content or date/time. It’s doable on a desktop, but it’s mainly used on servers.
tbh my go-to command is just… journalctl -fe -u <service>
e.g.:
journalctl -fe -u jellyfin
journalctl -fe -u nordvpnd
I’d also like to know the answer to this question. My other go-to is dumping journalctl to text files, parsing them with grep and awk, and creating my own reports from that parsed information.
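Roughly like this, if anyone’s curious (unit name and pattern are just examples). Since journal output is already chronological, uniq -c alone gives a per-day count of matching lines:

journalctl -u myservice --since "7 days ago" --no-pager > /tmp/myservice.log
grep -i error /tmp/myservice.log | awk '{print $1, $2}' | uniq -c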
Apparently, less also has a built-in feature to filter out lines based on keywords: https://raymii.org/s/snippets/Exclude_lines_in_less_or_journalctl.html (skip the first paragraph, past those three links)
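The short version: once inside less, typing & followed by a pattern shows only the matching lines, and &! followed by a pattern hides them instead. For example (unit name made up):

journalctl -u myservice | less

then type &error to keep only lines containing “error”, or &!audit to drop everything mentioning “audit”. Pressing & with an empty pattern turns filtering off again.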
That’s great to know!
systemctl status <service>
If you are on GNOME, GNOME Logs does most of the things you want (if I recall correctly; it’s been some years since I ran GNOME).
Same for KDE: https://apps.kde.org/fr/kjournaldbrowser/
I don’t know of any graphical tools that let you do this, but generally, if you want to search for specific terms/times/commands or anything of that sort, piping journalctl into grep (and optionally grep into less) is pretty effective at finding stuff.
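Something like this, for example (service name and search term are placeholders):

journalctl -b -u myservice | grep -i -C 3 "connection refused" | less

The -C 3 keeps a few lines of context around each hit, which helps when reconstructing what happened.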
I sometimes pipe journalctl into lnav, but it never works quite as well as I really want…
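Roughly this, for reference (unit name is just an example):

journalctl -b -u myservice --no-pager | lnav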
lnav is pretty cool and does mostly what you are describing.
uuhhh maybe here? https://lnav.org/
I wish there was something nice like that too.
In the server world that would usually involve sending the journal data to Elasticsearch using an Elasticsearch integration. But that means setting up an Elasticsearch server, Kibana, and so on, which is very unwieldy for a desktop computer. It does work pretty well in terms of filtering, though, and it stores the data internally in indexes to speed up search.
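Just to sketch the shape of it (the index name and endpoint are made up, and a real setup would use a proper log shipper rather than a one-off curl):

journalctl -o json -n 1000 --no-pager \
  | jq -c '{"index":{"_index":"journal"}}, .' \
  | curl -s -X POST "http://localhost:9200/_bulk" \
      -H "Content-Type: application/x-ndjson" --data-binary @-

journalctl’s JSON output is one object per line; jq just interleaves the bulk-API action lines, and Elasticsearch indexes each journal entry as a document.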
Of course, journald has a seemingly simple C API, but writing code is a lot of work. There are probably API bindings for various languages.
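Though for quick scripting you can often skip the API entirely, since journalctl can emit JSON itself (the unit and field list here are just examples):

journalctl -u myservice -o json --output-fields=PRIORITY,MESSAGE --no-pager | jq -r '.MESSAGE'

That gives you structured records to feed into jq, Python, or whatever, without linking against libsystemd.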
Sounds like you want a SIEM like Wazuh. Its agent can collect journald logs from any number of systems. It also has a GUI you can interact with to parse logs.
Well, just a monitoring stack, for example Grafana, would probably be more suitable for this specific task (if we’re doing central hosting/collection).
My main recommendation is to use something with OpenTelemetry. It’s pretty much the standard protocol for transferring logs, traces, and metrics, so if you set everything up with that, you can swap out the visualization software with less pain.
Here’s a guide for Grafana + OpenTelemetry Collector: https://grafana.com/docs/loki/latest/send-data/otel/
I’m seconding this recommendation.