Sometimes I talk to friends who need to use the command line, but are intimidated by it. I never really feel like I have good advice (I’ve been using the command line for too long), and so I asked some people on Mastodon:
if you just stopped being scared of the command line in the last year or three — what helped you?
This list is still a bit shorter than I would like, but I’m posting it in the hopes that I can collect some more answers. There obviously isn’t one single thing that works for everyone – different people take different paths.
I think there are three parts to getting comfortable: reducing risks, motivation and resources. I’ll start with risks, then a couple of motivations and then list some resources.
I’d add ImageMagick for image manipulation and conversion to the list. I use it to optimize JPEGs, which led me to learn more about bash scripting.
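For instance, a batch recompression of the kind mentioned here might look like this (a rough sketch; the folder name, quality setting, and use of ImageMagick’s `convert` are just illustrative):

```
# re-encode every JPEG in the current folder at ~85% quality, stripping metadata
mkdir -p optimized
for f in *.jpg; do
  convert "$f" -strip -quality 85 "optimized/$f"
done
```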
Can’t live without oh-my-zsh, powerlevel10k and zsh autocomplete/autosuggestions plugins. It’s the first thing I install whenever I’m on a new computer.
And if I’m constrained to Windows (for work), then posh-git and PSReadLine are the next best thing.
I’ve had ohmyzsh installed for years. TBH, I still don’t know what it gives me over bash. In your experience, what is the “killer feature” of zsh?
Not OP, but I very recently switched from bash. Autocomplete with suggestions is a way better experience on zsh than bash. The way you can choose between options of the autocomplete/suggest interactively feels way better than bash. I set it up to be case-insensitive, so I can type `cd dow` and it will become `cd Downloads`. Getting autocomplete for both `kubectl` and its alias `k` is seamless in zshrc, but requires an extra line with a weird dunder function in bashrc.

This is just what I found in a few days of using it. There was no learning curve at all, everything just felt easier.
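For reference, the “extra line” being alluded to is probably something along these lines in `~/.bashrc` (a sketch based on kubectl’s completion docs; the `k` alias itself is assumed):

```
# load kubectl's bash completion, then attach it to the `k` alias
source <(kubectl completion bash)
alias k=kubectl
complete -o default -F __start_kubectl k
```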
Can’t live without oh-my-zsh, powerlevel10k and zsh autocomplete/autosuggestions plugins. It’s the first thing I install whenever I’m on a new computer.
I run this exact same setup, it’s pretty much a prereq on a fresh install. I wonder if we’ve all been exposed to the same blog articles
And fzf as a drop-in replacement for Ctrl+R.
Fish: look what they need to mimic a fraction of our power
I’ve seen quite a few articles on why you should never install oh-my-…s over the years. I’ve also never bothered to remember anything past “install the plugins and prompt separately or you will suffer”, so someone please link if you know what I’m talking about.
Thanks for naming these, I definitely need to look into them!
That’s a good article. From my observation, there are a few things:
- Necessity. I’m active in communities with people who don’t use the terminal until it’s an absolute necessity. Like people running unraid, docker, or whatever containerized server. Eventually they need to type commands.
- The prettiness. Yeah, I run oh-my-zsh. It’s nice having a nicely set-up, pretty environment. Some people’s only experience might be opening up the default PowerShell display to run one command… And that is a bad experience.
- Niche commands/programs. Take ffmpeg as an example. It’s probably the most powerful media tool that exists, but it has no official GUI. And it’s expansive enough that no GUI really covers what it can do. There are a bunch of other things like this.
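For instance, a conversion that would take several dialogs in most GUIs is a one-liner (filenames and settings here are only illustrative):

```
# re-encode a video to H.264/AAC at a reasonable quality
ffmpeg -i input.mov -c:v libx264 -crf 23 -c:a aac output.mp4
```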
Edit: And yeah, git. I’ve never used a graphical client. Seen a handful in use and don’t like it.
You’ve never used a graphical git client?!
I’m comfortable on the command line but a decent git UI is a way better experience.
`git diff` is so basic that using a GUI makes it far easier to compare changes. Same for merge conflicts: I’m not sure you can even resolve them on the CLI?
Any form of rebase: I think I used the CLI to do an interactive rebase a few times in the early days but I’d never do so without a GUI now.
Managing branches: perhaps I’m a little too OTT, but I keep a lot of branches preserved locally. A GUI provides a decent tree structure for them, whereas I assume on the command line I’d just get a long list.
Managing stashes: unless you just want to apply latest stash (which admittedly is almost always the case) then I’d much rather check what I’m applying through a GUI first.
There are some things I still use the CLI for though:
- `git remote add` / `git remote set-url`, because I’m just too lazy to figure out how to do that in a GUI. It’s usually hidden away somewhere.
- `git push --force`, because every GUI makes it such an effort. C’mon! I know what I’m doing - it’s /probably/ not going to mess things up…

I use git on the CLI exclusively. I almost never rebase, but otherwise get by with about 5-10 commands. One that will totally change your experience is `git add -p`.

I also have my diff/mergetool configured to use Kaleidoscope, but still do everything else in the CLI.
-p, --patch
Interactively choose hunks of patch between the index and the work tree and add them to the index. This gives the user a chance to review the difference before adding modified contents to the index.
This effectively runs add --interactive, but bypasses the initial command menu and directly jumps to the patch subcommand. See “Interactive mode” for details.
The documentation is entirely meaningless? What does it do?
You can stage individual chunks of a file.
Useful if you have a large set of changes you want to make separate commits for. I also just find that it’s a good way to do a review of each chunk before committing changes blindly.
Give it a shot some time, worst case is you stage some stuff that you don’t want to commit, but it’s non-destructive.
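A session looks roughly like this (the exact prompt letters vary a little between git versions):

```
$ git add -p
# git walks through each hunk of the diff and asks, roughly:
#   Stage this hunk [y,n,q,a,d,s,e,?]?
# y = stage it, n = skip it, s = split into smaller hunks, e = edit the hunk by hand
```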
I’ll occasionally
- stash my changes
- unstash them
- revise the file in my editor so only the chunk I want to commit is present
- commit
- unstash the changes again to get back the uncommitted change
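In commands, that workflow might look roughly like this (a sketch only; the file name and commit message are placeholders, and `apply` is used instead of `pop` so the stash entry survives the round trip):

```
git stash                      # park everything
git stash apply                # bring the changes back; the stash entry is kept
# edit the file so only the chunk you want to commit remains
git add the-file
git commit -m "the focused change"
git stash apply                # restore the remaining uncommitted work
git stash drop                 # clean up once you're happy
```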
It’s clunky but it’s robust and safe. It does sound a lot cleaner to just use `commit -p`, though.

Yeah, -p can help with that. I’m not much for “commit grooming” - as long as a branch merges to main cleanly and passes tests, I don’t care about an “ugly” commit history.
`git add -p` is great to know, but IMO one shouldn’t rely on it too much, because one should strive to commit early and often (which eliminates the need for that command). Also, using `git add -p` has the risk of accidentally not adding some code that actually belongs to the change you are trying to commit. That has happened to me sometimes in the past, and only later do I see that the changes I committed are broken because I excluded some code that I thought didn’t belong to that feature.

There are other reasons to use it. A major one is doing a “code review” of changes before committing, or even deciding to drop a chunk of code from a commit entirely (like a debug statement that is no longer necessary).
I’m all about frequent commits (and right-sized commits), but the functionality can still be beneficial even in those scenarios.
I also don’t care if I have a broken commit. This turns up very quickly, and there is zero expectation that feature branches are always in a working/stable state. The expectation is that pending work gets off the local machine on a regular interval.
Same for merge conflicts. I’m not sure you can even resolve them on the CLI
How are they solved when using a GUI? When using the CLI, it simply tells you auto-merging failed, and you can open the conflicting files in a text editor, resolve the conflicts, then add them and continue the merge.
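A minimal sketch of that flow (branch and file names are placeholders):

```
git merge feature-branch     # git reports: CONFLICT (content): Merge conflict in notes.txt
# open notes.txt and resolve the <<<<<<< / ======= / >>>>>>> markers by hand
git add notes.txt
git merge --continue         # or a plain `git commit` to finish the merge
```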
Managing branches: perhaps I’m a little too ott but I keep a lot of branches preserved locally, a GUI provides a decent tree structure for them whereas I assume on the command line I’d just get a long list.
git log --graph --all --oneline
There’s also --pretty, but it uses a lot of screen space.
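If you use it a lot, it’s worth wrapping in an alias (the alias name here is just a suggestion):

```
git config --global alias.tree "log --graph --all --oneline"
git tree
```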
Managing stashes: unless you just want to apply latest stash (which admittedly is almost always the case) then I’d much rather check what I’m applying through a GUI first.
You can attach a message when stashing with `-m`. And you can check them out by doing `git checkout stash@{1}` or similar.
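Putting those together (the message text is just an example):

```
git stash push -m "wip: refactor parser"
git stash list                # lists entries like: stash@{0}: On main: wip: refactor parser
git stash show -p stash@{0}   # inspect the diff before applying
git stash apply stash@{0}
```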
I also exclusively use the git CLI. I have tried to use a graphical client and could never figure out what it was doing and what was going on. I probably picked it up so easily because when I learned git, I was already used to using a CLI version control client. At the time, I was working at a company that heavily used Perforce and had a custom wrapper around the `p4` CLI that injected a bunch of custom configuration.

To be fair, I like to use VSCode for resolving merge conflicts, because it is easy to see the deviations and apply/edit as needed. Still, I use the CLI for everything else, including committing that merge. Plus the gh CLI client when I’m using GitHub, as I can create a repo or push a repo with zero effort.
It is possible to resolve conflicts through any text editor, but not an amazing experience.
Do everything on your command line.
Why click and drag files when you can `mv *.png`
Why use an IDE when you can set up vim (or SpaceVim on Neovim) to load all the plugins your IDE would need, but only for the particular tasks that would leverage all those plugins (saving you overhead when you’re not leveraging those plugins)
Why use Word when you can use pdflatex to turn your .tex files into PDFs, with vim set up to trigger pdflatex on every post-buffer-write event, while zathura renders the PDF automatically
Why manually start your code, when you can set it up to trigger automatically at startup or in response to other events
The answer, of course, is because it’s effort. The payoff is getting marginally better on a skill curve with an infinite ceiling. But the point being: anything your computer does can be interacted with through functions and variables / data structures, and can be automated with shell languages.
For me it was using the command line (linux/vim/sql/powershell) at work, for the same mundane tasks over and over. Due to that, I started remembering commands so I didn’t have to look them up, and was more comfortable trying something I hadn’t done before as well.
I think this is an important lesson in general, and one that applies in other contexts:
You don’t need a “cheatsheet” for most stuff. The things you do all the time will become muscle memory, and the other stuff is easy enough to look up as it’s needed.
You don’t need to memorize the entire class structure of your projects. The “hot paths” get the most attention, and you’ll remember the most critical stuff as you work in a codebase. There’s lots of code that is basically “dark matter” - we know it’s there, and it’s doing something, but because we rarely review/modify it, it’s only important to understand its observable effects, not the precise way that it works.
Your brain is basically like an LRU cache - the stuff that you touch a lot will stay loaded, and the stuff that you rarely use will get dropped. Embrace this property.
What did it for me is I stubbornly refused to use Git via VSCode and stuck with the terminal. I also stubbornly refused to change my default text editor for GIT to something other than VIM. One light bulb moment I had, funnily enough, was when I finally read the VIM docs and learned how to save and close rather than panicking when it popped up (this was early on… but not THAT early on … so still funny). That sparked my curiosity to truly learn VIM.
After that, I realized command line tools could be learned and advantageous and so it just went up from there.
Honestly, I’ve noticed a difference in the confidence level of peers using command line tools based on whether they learned GIT using the command line or jumped straight to just clicking the buttons in VSCode.
Fun little utilities: robotfindskitten, cowsay, ponysay, botsay, sl, aafire, bb, viewing videos in the terminal with `mpv --vo=tct`, and perhaps feedgame that comes with orbiton. htop is also pretty colorful. Lazygit and mc too.
I knew basic CLI commands (such as `cd` and `ls`) for a while, but did not learn much more. Some things have helped me grow my skills:

- Necessity: Sometimes I need to do something on a VM or container that does not have a graphical interface installed. Some utilities only have a command line interface and not a graphical client. My only option is to Google how to do it. The more I do it, the less I have to Google, and the more focused my searches become (instead of searching for “How to do x”, I search for “How to do x in utility”).
- Learning from others: For many tasks, I follow internal or external guides, which typically use CLI commands. Often I look at how my coworkers accomplish tasks and pay attention to what commands they use. Then, when I have time, I look up any new commands I saw and decide if they will be useful for me too. Lately, I have been doing code reviews that involve shell scripts. Those are especially nice, because I can take my time, going line by line, and understand what each command does.
- Keep notes: Every time I find a command that I think I will need again, I copy it into a text file (and I have many such text files). It also makes things easier when I need to run the command with slightly different arguments (a different commit ID or something): I can just edit the command in my editor (with searching and undo) and paste it into my terminal with all the flags and arguments correct.
2nd on the keep notes suggestion. I work on lots of unrelated projects, and each time I end up learning a bunch of new command line utilities, so I try to leave behind a text file describing some of the most useful commands I’d discovered that day. Usually helps me come back to a project and not be back at square one every time.
Eh, none of this is really addressing the fundamentals of getting comfortable figuring out how to do what you wanna do, which is what in my experience leads to people seeing command line use as magic incantations.
Like, if you’re on windows you know how to figure out how to do what you wanna do, right click a file, look for entries in the context menu, look at the properties, open with, etc.
This works because people fundamentally understand the metaphor behind the operating system.
If you’re in bash and don’t know how to do what you wanna do, you don’t need any of this fancy zoomer shit - just use “which”, “man”, whatever your package manager offers, and the other commands that had big O’Reilly books written about ’em.
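For example, a typical poke-around session (using ffmpeg as the guinea pig) is just:

```
which ffmpeg        # is it installed, and where?
man ffmpeg          # the long-form manual
ffmpeg -h           # most tools also take -h/--help for a quick summary
apt search ffmpeg   # or your package manager's equivalent, if it isn't installed yet
```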
People need to develop the command line equivalent of the “click around and see if you learn anything” skills.
E: I gave the linked article another read and it really is about setting up a production environment in the command line and not about getting people comfortable with the command line at all.
Like, if someone needs to cut down a tree in their front yard, they don’t need to know how to operate a feller-buncher; they need to know how to use an axe handle to judge where the tree will fall and what it will fall on.
Maybe see if you can introduce them to a GitHub project or tool that you think they’ll find interesting or useful. I know there are a ton, but I’m not coming up with anything off the top of my head. But if you can give someone a reason to be in the CLI, then they may start to branch out a bit more. I started learning more about the CLI when I started seeing it as cool. Yes, I’m a nerd.
It’s not exactly clear to me who is supposed to be more comfortable.
Anyhow:
- Avoid commands that require their own DSL. Most DSLs are ad hoc garbage, in particular when it comes to UX and UI design.
Arch has by far the best commands. Need updates? Yay! Need to install? Yay -s! Need to remove and nuke all involved? Yay -Rs!
I use Minecraft with my students to introduce them to command-line-like scripting. The WorldEdit plugin especially is fantastic for this. It keeps things light and shows them the power of scripting, in an environment they are familiar and comfortable with. They are much more comfortable with using a terminal/cmd afterwards.
Electroshock. If that is too “harsh” or “inhumane”, then a cheat sheet.
At the end of the day the command line is a tool that you are using to do something. If I have to google “how to commit file changes to bitbucket using the command line”, I’m probably just going to use whatever GUI tool is available. Or I may do something really silly like manually copy the changes into bitbucket’s web interface. If I had a cheat sheet easily available, then I would just look at that. The rest is just practice and repetition.
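For what it’s worth, that particular cheat-sheet entry usually boils down to three commands (the path and branch name are just examples):

```
git add path/to/changed-file
git commit -m "describe the change"
git push origin main
```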
Just throwing this out there: it really helps if everyone on the team is comfortable enough to ask for help. If you are a manager, it’s your job to create this kind of environment. And if you see some newbie data analyst who just learned python and is intimidated by a bunch of software engineers, copying a bunch of changes into bitbucket’s web interface, don’t tell them that they are doing it wrong or that they don’t know what they are doing. Just say “hey, there is a much easier way to do that” and then show them. If a tool makes somebody’s job easier then they will use it.
If their comfort level is limited due to lack of experience, I tend to sandbox them somehow and then walk them through a couple examples of “danger” vs “ok, this definitely won’t be irreversible if it’s wrong, but I think it will do what I want.”
A couple of my go-tos are the obvious `rm -rf /` vs `./`, and a `sed` with and without `-i` on some random text file.
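The `sed` half of that demo can be as small as this (GNU sed assumed; on BSD/macOS, `-i` needs an argument):

```
printf 'hello world\n' > demo.txt
sed 's/hello/goodbye/' demo.txt      # prints the edited text; demo.txt is untouched
sed -i 's/hello/goodbye/' demo.txt   # -i edits demo.txt in place
```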
That naturally segues to “here’s the man page, here’s how to use it and search it.”
That tends to give them some confidence that they won’t accidentally cause real damage, and make it seem like they aren’t just typing arcane magic spells, but actually understanding how to responsibly put the pieces together.
No need, GUIs are better for most tasks.
It really depends. Maybe developing something like a game will require (almost) no CLI.
But do a little bit more server stuff or dev-ops, and you literally cannot even do the job without the CLI.
In general, though, the CLI is often better for the task; it can easily be automated (via scripts etc.), which is relevant for a lot of the tasks programmers do…
deleted by creator
They basically aren’t?
If you’re doing one-off hobbyist stuff, maybe.
But literally anything in a professional setting should be in text that can be committed and searched in a source code repository. If you can’t commit it to git, it didn’t happen.
Logging called, they want their .log files back
I’m not sure if you’re being funny, but of course committing the output of your program isn’t what I was saying.
Sorry I literally misread your comment, let’s say I was trying to be funny lol
You could document the steps required in text and add screenshots.
That’s fine until a UI changes, or the steps to reproduce it are incomplete (or a human doesn’t follow them exactly).
Text commands are unambiguous and precise.
GUIs are easier to learn, but they are not always available. Many services only have a CLI client. If you are connecting to a remote server or, especially, a container to debug it, it may not have a window manager installed. If you know how to do something via the CLI, you can automate it with a shell script.
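Even a tiny hypothetical script makes the point - the same steps you would otherwise click through, captured once and repeatable exactly (the paths here are made up):

```
#!/usr/bin/env bash
set -euo pipefail
# archive a directory with a dated filename - the kind of chore a GUI makes you redo by hand
mkdir -p ~/backups
tar czf ~/backups/site-$(date +%F).tar.gz /var/www/site
```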