Solved: ~/bin vs. ~/.local/bin for user bash scripts?
For one user account, I want to have some bash scripts, which of course would be under version control.
The obvious solution is just to put the scripts in a git repository and make ~/bin a symlink to the scripts directory.
Now, on systemd systems, ~/.local/bin is supposedly the directory for user scripts.
My question is mostly: what are the tradeoffs between using ~/bin and ~/.local/bin as the directory for my own bash scripts?
One simple scenario I can come up with is 3rd-party programs which might modify ~/.local/bin and put their own scripts/starters there, similar to 3rd-party applications which put their *.desktop files in ~/.local/share/applications.
Any advice on this? Is ~/.local/bin safe to use for my scripts, or should I stick to the classic ~/bin? Does anyone have a better convention?
(Btw.: I am running Debian everywhere, so I do not worry about portability to non-systemd Linux systems.)
Solved:
Thanks a lot for all the feedback and for answering my questions! I'll settle on having my bash scripts somewhere under ~/my_git_monorepo and linking them into ~/.local/bin to stick to the XDG standard.
Personally I put scripts in ~/.local/bin/scripts/ instead of just ~/.local/bin/ because I like to keep them separate from other binaries. To note: even though ~/.local/bin/ is in PATH, its subfolders are not, so if you do that you need to add the scripts subfolder to PATH if you want to run the scripts directly.
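A one-line sketch of that PATH addition (assuming bash; the scripts subfolder name follows the setup above):

```shell
# Subfolders of ~/.local/bin are not searched automatically, so the
# scripts subfolder has to be appended to PATH explicitly, e.g. in ~/.bashrc:
export PATH="$PATH:$HOME/.local/bin/scripts"
```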
Well, actually my scripts are in mydotfilesrepo/home/.local/bin/scripts, and I use GNU Stow to symlink mydotfilesrepo/home to /home/myuser/ (same for mydotfilesrepo/etc/ and mydotfilesrepo/usr/, which are symlinked to /etc and /usr), but it's the same result. Stow is pretty cool for centralizing your configs and scripts in one repo!
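For illustration, a sketch that reproduces by hand the kind of symlink Stow creates for such a layout (temp paths and a hypothetical hello script; the real invocation would be something like stow --target="$HOME" home from inside the repo):

```shell
# Build a miniature version of the layout described above in a temp dir.
root=$(mktemp -d)
mkdir -p "$root/mydotfilesrepo/home/.local/bin/scripts"
printf '#!/bin/sh\necho hi\n' > "$root/mydotfilesrepo/home/.local/bin/scripts/hello"
chmod +x "$root/mydotfilesrepo/home/.local/bin/scripts/hello"
mkdir -p "$root/home/myuser/.local/bin"
# Stow would create a symlink like this, pointing back into the repo,
# so the script is reachable from the home tree but lives in git:
ln -s "$root/mydotfilesrepo/home/.local/bin/scripts" \
      "$root/home/myuser/.local/bin/scripts"
```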
I've never seen ~/bin before so I can't comment on whether it's a good idea.
I migrated to fish recently, and at first I was really annoyed that I had to decompose my ~/.bash_aliases into 67 different script files inside ~/.config/fish/functions/, but (a) I was really impressed with the tools fish gives you to quickly craft those script files:
~> function serg
       sed -i -e "s/$argv[1]/$argv[2]/g" (rg -l "$argv[1]")
   end
~> funcsave serg
funcsave: wrote ~/.config/fish/functions/serg.fish
and (b) I realized it was something I ought to have done a while ago anyway.
Anyway, all this to say that fish ships with a lot of cool, sensible & interesting features, and one of those features is a built-in place for where your user scripts should live. (Mine is a symlink to ~/Dropbox/config/fish_functions so that I don't need to migrate them across computers).
And if there are other users on the machine, it doesn't fuck things up for them.
Or if it ends up messing something up, it is user-scoped, so it's a lot easier to fix than a bricked system.
Another follow-up question: is there any documentation for the Linux standard/convention of ~/.local/bin? My initial search turned up nothing I would call authoritative/definitive.
Mostly this, but also: if you're going to manage many scripts on a system for many users, revision control alone doesn't help with that. Either look at packaging them properly for your distro, or use something like Ansible to distribute them and manage their versioning on the system, to make things easier on yourself.
I have ~/.local/bin added to my PATH for things I want in my PATH, and ~/scripts for things I don't want in my PATH. Both are managed by chezmoi. I'd be surprised if anyone wants most of their bash scripts in PATH. I only have about 5 scripts in ~/.local/bin; the others get executed on an automated basis (e.g. on startup or by a cronjob), or so infrequently that I don't want them in my PATH.
Neither ~/bin nor ~/.local/bin is part of most shells' default $PATH, so you're going to have to modify the user's shell profile (or rc file) to include it. It's possible that your favorite distro includes it, but not mine. For example:
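A sketch of such a profile addition (assuming bash; as far as I know this directory-exists guard is the same pattern Debian's stock ~/.profile uses):

```shell
# Add ~/.local/bin to PATH, but only if the directory actually exists.
# This would go in ~/.profile or ~/.bashrc.
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi
```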
I'm not sure why you're bringing the XDG or systemd "standard" into this. The POSIX standard would be more appropriate, but it doesn't say anything on the matter, nor should it, really. The most important thing is: be predictable. If the user has a problem with one of your scripts, what do they do first? Running which wolf_bin will show them the full path to the script. So really, the location does not matter much.
That said I would go with one of these two options:
Make a package for your distro. This may be overkill for a couple of scripts, but you did say they're in a git repository, so you could automate it. The package would install to /usr/bin, which would require sudo or root. If the scripts should only be runnable by one user, set the owner, group, and rwx permissions accordingly.
A pattern I like, especially for lightweight things such as scripts that don't require compiling or OS management, and that are already in git: a "hidden" or "dot" directory in the user's home where the repo lives, e.g. ~/.lemmywolf/. Then add the scripts directory to the user's $PATH, e.g. PATH=$PATH:~/.lemmywolf/scripts. This is what some fairly large projects like pyenv or volta do. You could take it a step further and adapt this installer script to your liking: https://github.com/pyenv/pyenv-installer/blob/master/bin/pyenv-installer
/edit 20 year Linux user (Redhat AS2.1) and 5 years of Unix (HPUX & Solaris) before that.
/edit2 I just noticed the pyenv-installer does not modify the user's shell profile. That could easily be added to the script, though.
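A sketch of that addition, assuming bash and an idempotent append (the profile path and PATH line here are illustrative, not taken from pyenv-installer):

```shell
# Append the PATH line to the user's rc file, but only if it isn't
# there already, so repeat runs of the installer don't duplicate it.
profile="$HOME/.bashrc"
line='export PATH="$HOME/.pyenv/bin:$PATH"'
grep -qxF "$line" "$profile" 2>/dev/null || printf '%s\n' "$line" >> "$profile"
```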
A distro package is way overkill for this, and it's also better not to litter the home directory with yet another dotdir. That's why ~/.local/bin is a good place; it's also recommended by the freedesktop base directory standard: https://specifications.freedesktop.org/basedir-spec/latest/
I’m not sure why you’re bringing the XDG or systemd “standard” into this.
Probably because their "basedir" specification does recommend that ~/.local/bin be in $PATH. I'm sure there's more than one distro following that spec, whether we'd want to consider it standard or not. I also believe some software (like flatpak) may place scripts there too, when configured to offer commands for user-level installations.
Here's a quote from the spec:
User-specific executable files may be stored in $HOME/.local/bin. Distributions should ensure this directory shows up in the UNIX $PATH environment variable, at an appropriate place.
I've tried both, and ~/.local/bin tends to be used by a bunch of tools to install their own binaries/scripts, so depending on what you use it can become very messy (which did happen in my case). I used to have a ~/Documents/Scripts directory in my $PATH, and that was much cleaner than my current setup, so that's what I'd recommend, especially if you want to use Git with it! :)
Thank you very much! Someone telling me that some tools install their own binaries/scripts to ~/.local/bin is exactly what I was looking for.
Most probably I'll just symlink my scripts into ~/.local/bin then; that avoids trouble with 3rd parties, and most of my dotfiles are symlinked anyway, so the infrastructure is there.
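A minimal sketch of that symlink setup (the repo path follows the question above; the hello script is just a hypothetical stand-in):

```shell
# Scripts live in the git repo; only symlinks go into ~/.local/bin, so a
# 3rd-party overwrite of a link never touches the version-controlled original.
repo="$HOME/my_git_monorepo/scripts"
mkdir -p "$repo" "$HOME/.local/bin"
printf '#!/bin/sh\necho ok\n' > "$repo/hello"   # demo script
chmod +x "$repo/hello"
# Link every script; -sf replaces stale links on re-runs.
for script in "$repo"/*; do
    ln -sf "$script" "$HOME/.local/bin/${script##*/}"
done
```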
Personally, I made a ~/.get-going directory (or whatever you want to call it) and put all my scripts in there. Name them with numbers first, like "10-first.sh" and "20-second.sh", and put aliases and any critical stuff last. Then one line in your .bashrc or .zshrc (or whatever you like) can include them all.
I made some bash scripts for distro-hopping that are now [undisclosed] years old, so when I need to reinstall I can basically back up a few folders: one of them is ~/bin, where I put AppImages and stuff, and sometimes the ~/Development folder (I don't always need the dev one, because backups of those exist as repos). A lot of people back up their whole home directory, but I prefer my method, and that's why we use Linux. I don't want my settings for every app coming with me when I go on a new journey. Choose your own adventure.
Thanks, I think I get the idea. I just don't understand the number prefix; why did you start that convention?
(Btw.: For some years now I have stuck to the convention that everything important lives under one subdirectory of my home. As long as I have a tarball backup of that subdirectory, I am good to lose the whole hard disk without fear (e.g. ready for a clean upgrade, a distro hop, or just going traveling without fearing that I forgot to switch off the oven ;-)).
It's just alphabetical, so the scripts run in the right order. The numbers serve like "A" or "B", except you leave room to slot new scripts in between existing ones if it comes up and your "10-whatever" file is a mess. It's sort of a convention on Linux, but not everyone does it.
Then you just add
for FILE in ~/.shellrc.d/*; do
    source "$FILE"
done
To your ~/.bashrc (or your preferred shell's rc file). Replace shellrc.d with whatever you chose. I use shellrc.d on servers and stuff because the ".d" suffix is also kind of a convention for naming such folders. People have their own opinions about that, but don't worry about it until you have strong opinions of your own.