
Am I the only one who cringes at install instructions that require piping some curl output into bash?

curl https://some-url/ | sh

I see this all over the place nowadays, even in communities that, I would think, should be security conscious. How is that safe? What's stopping the downloaded script from wiping my home directory? If you use this, how can you feel comfortable?

I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written in. Don't we have something better than "sh" for this? Something with less power to do harm?

  • It's not much different from downloading and compiling source code, in terms of risk. A typo in the code could easily wipe home or something like that.

    Obviously the package manager repo for your distro is the best option because there's another layer of checking (in theory), but very often things aren't in the repos.

    The solution really is just backups and snapshots, there are a million ways to lose files or corrupt them.
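
    For example, on a btrfs root you can grab a cheap read-only snapshot before trying anything sketchy. The paths here are just an illustration; your subvolume layout will differ:

    # take a read-only snapshot of the root before installing anything untrusted
    sudo btrfs subvolume snapshot -r / /.snapshots/pre-install-$(date +%F)
    # if things go wrong, boot a live USB and roll back to the snapshot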

  • Well yeah ... the native package manager. Has the bonus of the installed files being tracked.

    • And often official package maintainers are a lot more security conscious about how packages are built as well.

    • I agree.

      On the other hand, as a software author, your options are: spend a lot of time maintaining packages for Arch, Alpine, Void, Nix, Gentoo, Gobo, RPM, Debian, and however many other distro package managers; or wait for someone else to do it, which will often be "never".

      The non-rolling distros can take a year to update a package, even if they decide to include it.

      Honestly, it's a mess, and I think we're in that awkward state Linux was in when everyone seemed to collectively realize sysv init sucks, and you saw dinit, runit, OpenRC, s6, systemd, upstart, and initng popping up - although many of these were started after systemd; it's just for illustration. Most distributions settled on systemd, for better or worse. Now we see something similar: the profusion of package managers really is a Problem, and people are trying to address it with solutions like Snap, AppImage, and Flatpak.

      As a software developer, I'd like to see distros standardize on a package manager, but on the other hand, I really dislike systemd and feel as if everyone settling on the wrong package manager (cough Snap) would be worse than the current chaos. I don't know if they're mutually exclusive objectives.

      For my money, I'd go with pacman. It's easy to write PKGBUILDs and to get packages into AUR, but requires users to intentionally use AUR. I wish it had a better migration process (AUR packages promoted to community, for instance). It's fairly trivial for a distribution to "pin" releases so that users aren't using a rolling upgrade.
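
      To give a sense of how little a PKGBUILD asks of you, here's a rough sketch for a made-up tool - the name, URL, and checksum are placeholders, not a real package:

      # Maintainer: you <you@example.com>
      pkgname=mytool
      pkgver=1.0.0
      pkgrel=1
      pkgdesc="A hypothetical CLI tool (placeholder)"
      arch=('x86_64')
      url="https://example.com/mytool"
      license=('MIT')
      source=("$url/releases/$pkgname-$pkgver.tar.gz")
      sha256sums=('SKIP')  # fill in a real checksum before publishing

      build() {
        cd "$pkgname-$pkgver"
        make
      }

      package() {
        cd "$pkgname-$pkgver"
        make DESTDIR="$pkgdir" PREFIX=/usr install
      }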

      Alpine's is also quite nice, and they have a really decent, clearly defined migration path from testing to community; but the barrier to entry for getting packages in is higher, and it clearly requires much more work by a community of volunteers, and it can occasionally be frustrating for everyone: for us contributors who only interact with the process a couple of times a year, it's easy to forget how they require things to be run, causing more work for reviewers; and sometimes an MR will just languish until someone has time to review it. There are some real heroes over there doing some heavy lifting.

      I'm about to start contributing to Void, which I expect to be a similar experience to Alpine.

      Redhat and deb? All I can do is build packages for them and host them myself, and hope users can figure out how to find and install stuff without it being in The Official Repos.

      Oh, Nix. I tried, but the package definitions are a nightmare, and just getting enough of Nix onto your computer to be able to test and submit builds takes GBs of disk space. I actively dislike working with Nix. GUIX is nearly as bad. I used to like Lisp - it's certainly an interesting and educational tool - but I've really started to object to it more and more as I encounter it in projects like Nyxt and GUIX, where you're forced to use it if you want to do any customization.

      But this is the world of OSS: you either labor in obscurity, or you self-promote your software - which I hate: if I wanted to do marketing, I'd be in marketing - or you hope enough users in enough distributions volunteer to manage packages for their distros that people can get to it. And you still have to address the issue of making it easy for people to use your software. curl <URL> | sh is, frankly, a really elegant, easy solution for software developers... if only it weren't for the fact that the world is full of shitty, unethical people forcing us to distrust each other.

      It's all sub-optimal, and needs a solution. I'm not convinced the various containerizations are the right direction; does "rg" really need to be run in a container? Maybe it makes sense for big suites with a lot of dependencies, like Gimp, but even so, what's the solution for the vast majority of OSS software, which is just little CLI or TUI tools?

      Distributions aren't going to standardize on Arch's APKBUILD, or Alpine's almost identical but just slightly different enough to not be compatible PKGBUILD; and Snap, AppImage, and Flatpak don't seem to be gaining broad traction. I'm starting to think something like a yay that installs into $HOME. Most systems are single user, anyway; something that leverages Arch's huge package repository(s), but can be used by any user regardless of distribution. I know Nix can be used like this, but then, it's Nix, so I'd rather not.

      • The non-rolling distros can take a year to update a package, even if they decide to include it.

        There is a reason why they do this. For stable release distros, particularly Debian, they refuse to update packages beyond fixing vulnerabilities, as a way to ensure that the system changes minimally. This means that, for example, if a piece of software depends on a library, it will keep working for the lifecycle of a stable release. Sometimes latest isn't the greatest.

        Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD

        You swapped PKGBUILD and APKBUILD 🙃

        I’m starting to think something like a yay that installs into $HOME.

        Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user's home directory.

      • As an Arch user, yeah, PKGBUILDs are a very good solution, at least specifically for Arch Linux (or other distros following the same directory-tree best practices). I have packaged a dozen or so projects with PKGBUILDs myself, and installed 150 or so from the AUR. It gives users a very easy way to essentially install stuff manually while still keeping control of it. And you can just put it into the AUR, so other users can either just use it, or first read through, understand, maybe adapt, and then use it. It shows that packages don't have to be solely either the author's or the distro maintainers' responsibility.
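
        The user-side workflow is just as transparent - roughly something like this, with the package name being a placeholder:

        git clone https://aur.archlinux.org/some-package.git
        cd some-package
        less PKGBUILD    # read what it actually does before building
        makepkg -si      # build and install through pacman, so files stay tracked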

  • When I modded some subreddits I had an automod rule that would target curl-bash pipes in comments and posts, and remove them. I took a fair bit of heat over that, but I wasn't backing down.

    I had a lot of respect for Tteck and had a couple of discussions with him about that and why I was doing it. I saw that eventually he put a notice up that pretty much said what I did about understanding what a script does, and how the URL you use can be pointed at something else entirely long after the command line is posted.

  • It's convenience over security, something that creeps in anywhere there is popularity. For those who just want x or y to work without needing to spend their day in the terminal, these scripts are great.

    You'd expect these kinds of scripts to be well tested against their targets, and for the user to have/identify the correct target. Their sources should at least point out the security issue and advise grabbing and inspecting the script before straight-up piping it, though. Some I have seen do this.

    Running them like this means you put 100% trust in the author, the source and your DNS. Not a big ask for some. Unthinkable for others.

  • Just use a VM or container for installing software. Then it can go horribly wrong in an isolated place.
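
    For example, something like a throwaway Docker container - just one option among many, and keep in mind containers share the host kernel, so it's isolation, not a guarantee:

    docker run --rm -it debian:stable bash
    # inside the container:
    apt-get update && apt-get install -y curl
    curl https://some-url/ | sh    # worst case, only the disposable container is trashed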

  • I also feel incredibly uncomfortable with this. Ultimately it comes down to whether you trust the application or not. If you do, then this isn't really a problem, as they're getting code execution on your machine regardless. If you don't, well then don't install the application. In general I don't like installing applications that aren't from my distro's official repositories, but mostly because I like knowing at least they trust it and think it's safe, as opposed to any software that isn't, which is more of an unknown.

    Also, it's unlikely for the script to be malicious if the application is not. Further, I'm not sure a manual install really protects anyone from anything. Inexperienced users will go to great lengths and jump through some impressive hoops to try and make something work, to their own detriment sometimes. My favorite example of this is the LTT Linux challenge: apt did EVERYTHING it could think to do to alert him that the Steam package was broken and that he probably didn't want to install it, and instead of reading the error he just blindly typed out the confirmation statement. Nothing will save a user from ruining their system if they're bound and determined to do something.

    • In this case apt should have failed gracefully. There is no reason for it to continue if a package is broken. If you want to force a broken package, that can be its own argument.

      • I'm not sure that would've made a difference. It already makes you go out of your way to force a broken package. This has been discussed in places before, but the simple fact of the matter is that a user who doesn't understand what they're doing will persevere. Putting up barriers is a good thing to do to protect users; spending all your effort to cover every edge case is a waste of time, because users will find ways to shoot themselves in the foot.

  • Just redirect it into a file, read the script, and run it if you're happy. The pipe is just shorthand that skips saving a script you'll only use once.
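
    Concretely, something along these lines, reusing the placeholder URL from the post:

    curl -fsSL https://some-url/ -o install.sh   # save it instead of piping
    less install.sh                              # actually read what it's about to do
    sh install.sh                                # run it only if you're happy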

  • Most package managers can run arbitrary code on install, upgrade, or removal. You are trusting the code you choose to run on your system no matter where you get it from. Remember the old bug in Ubuntu that ran rm -rf / usr/.. instead of rm -rf /usr/... and wiped a load of people's systems?
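
    That whole class of bug usually comes down to a stray space or an unset variable. A rough sketch of the failure mode and a more defensive version - obviously don't run the first two lines (and note modern GNU rm refuses a bare "/" without --no-preserve-root; older versions happily obliged):

    rm -rf / usr/lib/foo    # DO NOT RUN: the stray space means "delete /, then usr/lib/foo"
    rm -rf $PREFIX/usr      # DO NOT RUN: with PREFIX unset, this expands to "rm -rf /usr"

    # defensive version: quote the variable and abort if it's empty
    rm -rf "${PREFIX:?PREFIX is not set}/usr"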

    Flatpaks, AppArmor, and Snaps are better in this regard, as they are somewhat more sandboxed and can restrict what the applications have access to.

    But really, if the install script is from the authors of the package, then it should be just as trustworthy as the package. Still, I generally download and read install scripts, as there is no standard they follow, and I don't want them touching random system files in ways I am not aware of or cannot undo easily. Sometimes they are just detecting the OS and picking the relevant packages to install - maybe with some third-party repos. Other times they mess with your home partition and do a bunch of stuff, including messing with bashrc files to add things to your PATH, which I don't like. I would never run an install script that is not from the author of the application, though, and I'd be very wary of install scripts from a smaller package with fewer users.

    • No serious distro package manager skips cryptographic signatures in 2025.

      Software dependency managers are all rubbish except for Maven.
