curl https://some-url/ | sh
I see this all over the place nowadays, even in communities that, I would think, should be security conscious. How is that safe? What’s stopping the downloaded script from wiping my home directory? If you use this, how can you feel comfortable?
I understand that we have the same problems with the installed application, even if it was downloaded and installed manually. But I feel the bar for making a mistake in a shell script is much lower than in whatever language the main application is written in. Don’t we have something better than “sh” for this? Something with less power to do harm?
For security reasons, I review every line of code before it’s executed on my machine.
Before I die, I hope to take my ’93 Dell OptiPlex out of its box and finally see what this whole internet thing is about.
Not good enough. You should really be inspecting your CPU with a microscope.
| sh
stands for shake head at bad practices
What’s stopping the downloaded script from wiping my home directory?
What’s stopping any Makefile, build script, or executable from running
rm -rf ~
? The correct answer is “nothing”. PPAs are similarly open. Things are a little safer if you only use your distro’s default package sources, but it’s always possible that a program will want to be able to delete something in your home directory, so it always has permission.
Containerized apps are the only way around this, where they get their own home directory.
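For illustration, a minimal sketch of the throwaway-container idea, assuming podman is installed (the image choice and URL are hypothetical):

# run the untrusted installer in a disposable container;
# it can only touch the container's filesystem, not your real home directory
podman run --rm -it debian:stable bash -c 'apt-get update && apt-get install -y curl && curl -fsSL https://example.com/install.sh | sh'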
Don’t forget your package manager, running someone’s installer as root
It’s roughly the same state as when Windows Vista rolled out UAC in 2007 and everything still required admin rights, because that’s just how everything worked… but unlike Microsoft, Linux distros never did the thing of splitting installs into admin vs. unprivileged user installers.
plenty of package managers have.
Flatpak doesn’t require any admin rights to install a new app
NixOS doesn’t run any code at all on your machine just for adding a package, assuming it’s already been cached. If it hasn’t been cached, the build runs in a sandbox. The cases other package managers handle with post-install configuration scripts go through a different mechanism, which may or may not have root access depending on what it is.
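As a concrete example (assuming Nix with the default nixpkgs channel), installing a cached package just downloads a pre-built closure and swaps some symlinks:

# fetched from the binary cache; nothing from the package itself executes locally
nix-env -iA nixpkgs.hello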
Gonna ignore Nix since they have two users, but Flatpak is fair. However, Flatpak is a sandboxing scheme, which is distinct from per-user installs. In many cases it can be the better route, but not always. I think the reason it’s popular on Linux is also the DLL-hell problem.
idk if 2 users is fair, it may just be my circles, but I see NixOS mentioned more than almost anything else on Lemmy/HN/etc in the past couple years
What’s stopping the downloaded script from wiping my home directory?
It isn’t more dangerous than running a binary downloaded from them by any other means. It isn’t more dangerous than the downloaded installer programs common on Windows.
TBH macOS has had the more secure idea of, by default, sandboxing applications downloaded directly without any sort of installer. Linux is starting to head in that direction now with things like Flatpak.
The security concerns are often overblown. The bigger problem for me is I don’t know what kind of mess it’s going to make or whether I can undo it. If it’s a .deb or even a tarball to extract in /usr/local then I know how to uninstall.
I will still use them sometimes, but only for things I know and understand - e.g. rustup will put things in ~/.rustup and update the PATH in my shell profile, and because I know that’s what it does, I’m happy to use the automation on a new system.
Damn, that’s bad misinformation. It’s a security nightmare.
No it isn’t. What could a Bash script do that the executable it downloads couldn’t do?
It’s not just protection against security, but also human error.
https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/issues/123
https://hackaday.com/2024/01/20/how-a-steam-bug-once-deleted-all-of-someones-user-data/
Just because I trust someone to write a program in a modern language they are familiar with doesn’t mean I trust them to write an install script in Bash, especially given how many footguns Bash has.
Hilarious, but not a security issue. Just shitty Bash coding.
And I agree it’s easier to make these mistakes in Bash, but I don’t think anyone here is really making the argument that curl | bash is bad because Bash is a shitty error-prone language (it is).
Definitely the most valid point I’ve read in this thread though. I wish we had a viable alternative. Maybe the Linux community could work on that instead of moaning about it.
Hilarious, but not a security issue. Just shitty Bash coding.
It absolutely is a security issue. I had a little brain fart, but what I meant to say was “Security isn’t just protection from malice, but also protection from mistakes”.
Let’s put it differently:
Hilarious, but not a security issue. Just shitty C coding.
This is a common sentiment people express about C, and I have the same opinion about it. I would rather we have systems in place that don’t give people the opportunity to make mistakes.
I wish we had a viable alternative. Maybe the Linux community could work on that instead of moaning about it.
Viable alternative for what? Packaging.
I personally quite like the systems we have. The “install anything from the internet” model is exactly how Windows ends up with so much malware. The best way to package software for users is via a package manager, which not only puts more eyes on the software; many package managers also have built-in functionality that makes the process more reliable and secure. For example, signatures create a chain of trust. I really like Nix as a distro-agnostic package manager, because due to the unique way they do things, it’s impossible for one package’s build process to interfere with another.
If you want to do “install anything from the internet” it’s best to do it with containers and sandboxing. Docker/podman for services, and Flatpak for desktop apps, where it’s pretty easy to publish to flathub. Both also seem to be pretty easy, and pretty popular — I commonly find niche things I look at ship a docker image.
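As a concrete example of the per-user, sandboxed route (assuming the flathub remote is set up; org.gimp.GIMP is a real Flathub app ID):

# --user keeps everything under ~/.local/share/flatpak; no root required
flatpak install --user flathub org.gimp.GIMP
flatpak run org.gimp.GIMP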
This is a common sentiment people express about C, and I have the same opinion about it. I would rather we have systems in place that don’t give people the opportunity to make mistakes.
The issue with C is it lets you make mistakes that commonly lead to security vulnerabilities - allowing a malicious third party to do bad stuff.
The Bash examples you linked are not security vulnerabilities. They don’t let malicious third parties do anything. They don’t have CVEs; they’re just straight-up data-loss bugs. Bad ones, sure. (And I fully support not using Bash where feasible.)
Viable alternative for what? Packaging.
A viable way to install something that works on all Linux distros (and Mac!), and doesn’t require root.
The reason people use curl | bash is precisely so they don’t have to faff around making a gazillion packages. That’s not a good answer.
A viable way to install something that works on all Linux distros (and Mac!), and doesn’t require root.
Nix portable installations, Soar.
The reason people use curl | bash is precisely so they don’t have to faff around making a gazillion packages.
Developers shouldn’t be making packages. They do things like vendor and pin dependencies, which lead to security and stability issues later down the line. See my other comment where I do a quick look at some of these issues.
You’re telling me that you don’t verify the signatures of the binaries you download before running them too?!? God help you.
I download my binaries with apt, which will refuse to install the binary if the signature doesn’t match.
No because there’s very little point. Checking signatures only makes sense if the signatures are distributed in a more secure channel than the actual software. Basically the only time that happens is when software is distributed via untrusted mirror services.
Most software I install via curl | bash is first-party hosted and signatures don’t add any security.
No publishing infrastructure should be trusted. There are countless historical examples of this.
Use crypto. It works.
Crypto is used. It is called TLS.
You have to have some trust of publishing infrastructure, otherwise how do you know your signatures are correct?
TLS is a joke because of X.509.
We don’t need to trust any publishing infrastructure because the PGP private keys don’t live on the publishing infrastructure. We solved this issue in the ’90s.
So tell me: if I download and run a bash script over https, or a .deb file over https and then install it, why is the former a “security nightmare” and the latter not?
Both are a security nightmare, if you’re not verifying the signature.
You should verify the signature of all things you download before running it. Be it a bash script or a .deb file or a .AppImage or to-be-compiled sourcecode.
Best thing is to just use your repo’s package manager. Apt will not run anything that isn’t properly signed by a package team member’s release PGP key.
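For first-party downloads, the manual equivalent looks roughly like this (the URLs and key ID are hypothetical; the fingerprint must come from a channel other than the download site):

# import the author's signing key, fingerprint obtained out of band
gpg --recv-keys 3AA5C34371567BD2
curl -fsSLO https://example.com/install.sh
curl -fsSLO https://example.com/install.sh.asc
# only run the script if the detached signature checks out
gpg --verify install.sh.asc install.sh && sh install.sh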
I have to assume that we’re in this situation because the app does not exist in our distro’s repo (or homebrew or whatever else). So how do you go about this verification? You need a trusted public key, right? You wouldn’t happen to be downloading that from the same website that you’re worried might be sending you compromised scripts or binaries? You wouldn’t happen to be downloading the key from a public keyserver and assuming it belongs to the person whose name is on it?
This is such a ridiculously high bar to avert a “security nightmare”. Regular users will be better off ignoring such esoteric suggestions and just looking for lots of stars on GitHub.
No, you download the key from many distinct domains and verify it matches before TOFU (trust on first use)
Ah yes, so straightforward.
Fortunately, package managers already do this for you. Open a bug report to add it to apt. Easy.
When I modded some subreddits I had an automod rule that would target curl-bash pipes in comments and posts, and remove them. I took a fair bit of heat over that, but I wasn’t backing down.
I had a lot of respect for Tteck and had a couple discussions with him about that and why I was doing that. I saw that eventually he put a notice up that pretty much said what I did about understanding what a script does, and how the URL you use can be pointed to something else entirely long after the command line is posted.
And don’t forget to
sudo
!
What does curl even do? Unstraighten? Seems like any other command I’d blindly paste from an internet thread into a terminal window to try to get something on Linux to work.
cURL (pronounced “curl”) stands for “client for URL”. It transfers data from a URL, which you can then do things with.
Why would they call it that when it’s not a client for all URLs? It’s more like httpc.
What URLs is it not a client for? As far as I understand, it will pull whatever data is presented by whatever URL. cURL doesn’t really care about the protocol being HTTP; you can use it with FTP as well, and I haven’t tested it yet, but now that I’m curious I wanna see if it works for SMB.
I’m not arguing it should, but an easy example of a scheme it doesn’t support is mailto. However I was surprised at the list it does support including mqtt, imap, and pop3.
I think the safer approach (sketched below) is to:
- Download the script first, review its contents, and then execute.
- Ensure the URL uses HTTPS to reduce the risk of man-in-the-middle attacks
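Something like this, with a hypothetical URL:

curl -fsSL https://example.com/install.sh -o install.sh
less install.sh    # actually read it before running
sh install.sh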
Ah yes for all of the bash experts who understand what they are reading.
Install scripts are bad in general. Ideally, use officially packaged software.
But then they’d have to pay some guy $15 to package it, and that’s like, spending money
Distros do the packaging. Devs cannot be trusted
Loads of distros have user packaging, like Arch and NixOS… also, many distros accept donations to package your software, so my point stands even then.
Meanwhile, Nix install instructions start off with a curl
?
If you’ve downloaded and audited the script, there’s no reason to pipe it from curl to sh, just run it. No https necessary.
The https is to cover the fact that you might have missed something.
I guess I download and skim out of principle, but they might have hidden something in there.
Wat. All HTTPS does is encrypt the connection when downloading. If you’ve already downloaded the file to audit it, then it’s on your drive; no need to use curl to download it again and then pipe it to sh. Just run the thing.
Yeah, https was for downloading it in the first place. My bad, I didn’t get my thoughts out in the right order.
Unpopular opinion: these are handy for quickly installing in a new VM or container (usually throwaway), where one doesn’t have to think much unless the script breaks. People don’t install things on a host or production system multiple times, so anything installed there is usually vetted, and most of the time from trusted sources like distro repos.
For a normal threat model, it is not much different from downloading a compiled binary from somewhere other than well-trusted repos. The Windows software ecosystem is infamous for exactly the same thing, yet it sticks around.
Yeah, and Windows is famous for botnets lol.
Yet most botnets are Linux based.
You have the option of piping it into a file instead, inspecting that file for yourself and then running it, or running it in some sandboxed environment. Ultimately though, if you are downloading software over the internet, you have to place a certain amount of trust in the person you’re downloading the software from. Even if you’re absolutely sure that the download script doesn’t wipe your home directory, you’re going to have to run the program at some point, and it could just as easily wipe your home directory then instead.
You have the option of piping it into a file instead, inspecting that file for yourself and then running it, or running it in some sandboxed environment.
That’s not what projects recommend though. Many recommend piping the output of an HTTP transfer over the public Internet directly into a shell interpreter. Even just
curl https://... > install.sh; sh install.sh
would be one step up. The absolute minimum recommendation IMHO should be
curl https://... > install.sh; less install.sh; sh install.sh
but this is still problematic.
Ultimately, installing software is a laborious process which requires care, attention, and the informed use of GPG. It shouldn’t be simplified for convenience.
Also, FYI, the word “option” implies that I’m somehow restricted to a limited set of options in how I can use my GNU/Linux computer which is not the case.
Showing people that are running curl piped to bash the script they are about to run doesn’t really accomplish anything. If they can read bash and want to review the script then they can by just opening the URL, and the people that aren’t doing that don’t care what’s in the script, so why waste their time with it?
Do you think most users installing software from the AUR are actually reading the PKGBUILDs? I’d guess it’s a pretty small percentage that do.
Showing people that are running curl piped to bash the script they are about to run doesn’t really accomplish anything. If they can read bash and want to review the script then they can by just opening the URL
What it accomplishes is providing the instructions (i.e. an easily copy-and-pastable terminal command) for people to do exactly that.
If you can’t review a bash script before running it without having an unnecessarily complex one-liner provided to you to do so, then it doesn’t matter because you aren’t going to be able to adequately review a bash script anyway.
If you can’t review a bash script before running it without having an unnecessarily complex one-liner provided to you
Providing an easily copy-and-pastable one-liner does not imply that the reader could not themselves write such a one-liner.
Having the capacity to write one’s own commands doesn’t imply that there is no value in having a command provided.
unnecessarily complex
LOL
I don’t think you realize that if your goal is to have a simple install method anyone can use, even redirecting the output to install.sh like in your examples is enough added complexity to make it not work in some cases. Again, those are not made for people that know bash.
even redirecting the output to install.sh like in your examples is enough added complexity to make it not work in some cases
You can’t have an install method that works in all cases.
if your goal is to have a simple install method anyone can use
Similarly, you can’t have an install method anyone can use.
I mean if you think that it’s bad for linux culture because you’re teaching newbies the wrong lessons, fair enough.
My point is that most people can parse that they’re essentially asking you to run some commands at a url, and if you have even a fairly basic grasp of linux it’s easy to do that in whatever way you want. I don’t know if I personally would be any happier if people took the time to lecture me on safety habits, because I can interpret the command for myself.
curl https://some-url/ | sh
is terse and to the point, and I know not to take it completely literally.
linux culture
snigger
you’re teaching newbies the wrong lessons
The problem is not that it’s teaching bad lessons, it’s that it’s actually doing bad things.
most people can parse that they’re essentially asking you to run some commands at a url
I know not to take it completely literally
Then it needn’t be written literally.
I think you’re giving the authors of such installation instructions too much credit. I think they intend people to take it literally. I think this because I’ve argued with many of them.
Who the fuck types out “snigger” haha
Teleports behind you
All the software I have is downloaded from the internet…
You should try downloading the software from your mind brain, like us elite hackers do it. Just dump the binary from memory into a txt file and exe that shit, playa!
Well yeah … the native package manager. Has the bonus of the installed files being tracked.
I agree.
On the other hand, as a software author, your options are: spend a lot of time maintaining packages for Arch, Alpine, Void, Nix, Gentoo, Gobo, RPM, Debian, and however many other distro package managers; or wait for someone else to do it, which will often be “never”.
The non-rolling distros can take a year to update a package, even if they decide to include it.
Honestly, it’s a mess, and I think we’re in that awkward state Linux was in when everyone seemed to collectively realize SysV init sucks, and you saw dinit, runit, OpenRC, s6, systemd, Upstart, and initng popping up - although many of these were started after systemd; it’s just for illustration. Most distributions settled on systemd, for better or worse. Now we see something similar: the profusion of package managers really is a Problem, and people are trying to address it with solutions like Snap, AppImage, and Flatpak.
As a software developer, I’d like to see distros standardize on a package manager, but on the other hand, I really dislike systemd and feel as if everyone settling on the wrong package manager (cough Snap) would be worse than the current chaos. I don’t know if they’re mutually exclusive objectives.
For my money, I’d go with pacman. It’s easy to write PKGBUILDs and to get packages into AUR, but requires users to intentionally use AUR. I wish it had a better migration process (AUR packages promoted to community, for instance). It’s fairly trivial for a distribution to “pin” releases so that users aren’t using a rolling upgrade.
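To give a sense of how low that barrier is, here’s a minimal PKGBUILD sketch; every name, URL, and path below is hypothetical:

# PKGBUILD - roughly the minimum makepkg needs for a prebuilt binary
pkgname=mytool
pkgver=1.0.0
pkgrel=1
pkgdesc="Hypothetical example tool"
arch=('x86_64')
url="https://example.com/mytool"
license=('MIT')
source=("$url/releases/$pkgname-$pkgver-linux-amd64")
sha256sums=('SKIP')   # a real package pins the real checksum here

package() {
  install -Dm755 "$srcdir/$pkgname-$pkgver-linux-amd64" "$pkgdir/usr/bin/$pkgname"
}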
Alpine’s is also nice, and they have a really decent, clearly defined migration path from testing to community; but the barrier to entry for getting packages in is higher, it clearly requires much more work by a community of volunteers, and it can occasionally be frustrating for everyone: for us contributors who only interact with the process a couple of times a year, it’s easy to forget how they require things to be run, causing more work for reviewers; and sometimes an MR will just languish until someone has time to review it. There are some real heroes over there doing some heavy lifting.
I’m about to start contributing to Void, which I expect to be similar to Alpine.
Red Hat and Debian? All I can do is build packages for them and host them myself, and hope users can figure out how to find and install stuff without it being in The Official Repos.
Oh, Nix. I tried, but the package definitions are a nightmare, and just getting enough of Nix on your computer to where you can test and submit builds takes gigabytes of disk space. I actively dislike working with Nix. Guix is nearly as bad. I used to like Lisp - it’s certainly an interesting and educational tool - but I’ve started to object to it more and more as I encounter it in projects like Nyxt and Guix, where you’re forced to use it if you want to do any customization.
But this is the world of OSS: you either labor in obscurity, or you self-promote your software - which I hate: if I wanted to do marketing, I’d be in marketing - or you hope enough users in enough distributions volunteer to manage packages for their distros that people can get to it. And you still have to address the issue of making it easy for people to use your software.
curl <URL> | sh
is, frankly, a really elegant, easy solution for software developers… if only it weren’t for the fact that the world is full of shitty, unethical people forcing us to distrust each other.
It’s all sub-optimal, and needs a solution. I’m not convinced the various containerizations are the right direction; does “rg” really need to be run in a container? Maybe it makes sense for big suites with a lot of dependencies, like Gimp, but even so, what’s the solution for the vast majority of OSS software, which is just little CLI or TUI tools?
Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD; and Snap, AppImage, and Flatpak don’t seem to be gaining broad traction. I’m starting to think of something like a yay that installs into $HOME. Most systems are single-user anyway; something that leverages Arch’s huge package repositories, but can be used by any user regardless of distribution. I know Nix can be used like this, but then, it’s Nix, so I’d rather not.
The non-rolling distros can take a year to update a package, even if they decide to include it.
There is a reason why they do this. For stable release distros, particularly Debian, they refuse to update packages beyond fixing vulnerabilities, as part of ensuring that the system changes minimally. This means that, for example, if a piece of software depends on a library, it will stay working for the lifecycle of a stable release. Sometimes latest isn’t the greatest.
Distributions aren’t going to standardize on Arch’s APKBUILD, or Alpine’s almost identical but just slightly different enough to not be compatible PKGBUILD
You swapped PKGBUILD and APKBUILD 🙃
I’m starting to think something like a yay that installs into $HOME.
Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory
There is a reason why they do this.
Of course. It also prevents people from getting all the improvements that aren’t security fixes. It’s especially bad for software engineers who are developing applications that need a non-security bug fix or new feature. It’s fine if all you need is a box that’s going to run the same version of some software, sitting forgotten in a closet that gets walled in some day. IMO, it’s a crappy system for anything else.
You swapped PKGBUILD and APKBUILD 🙃
I did! I’ve been trying to update packages in both recently. The similarities are utterly frustrating, as they’re almost identical; the biggest difference between Alpine and Arch is the packaging process. If they were the same format - and they’re honestly so close it’s absurd - it’d make packagers’ lives easier.
I may have mentioned I haven’t yet started Void, but I expect it to be similarly frustrating: so very, very similar.
I’m starting to think something like a yay that installs into $HOME.
Homebrew, in theory, could do this. But they insist on creating a separate user and installing to that user’s home directory
Yeah, I got to thinking about this more after I posted, and it’s a horrible idea. It’d guarantee system updates break user installs, and the only way it couldn’t were if system installs knew about user installs and also updated those, which would defeat the whole purpose.
So you end up back with containers, or AppImages, Snap, or Flatpak. Although, of all of these, AppImages and podman are the most sane, since Snap and Flatpak are designed to manage system-level software, which isn’t much of an issue.
It all drives me back to the realization that the best solution is statically compiled binaries, as produced by Go, Rust, Zig, Nim, V. I’d include C, but the temptation to dynamically link is so ingrained in C that I rarely see truly statically linked C projects.
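In Go, for instance, a fully static build is one environment variable away (module and output names hypothetical):

# CGO_ENABLED=0 avoids linking against libc, yielding a fully static binary
CGO_ENABLED=0 go build -o mytool .
file mytool    # should report "statically linked"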
It’s especially bad for software engineers who are developing applications that need a non-security bug fix or new feature
This is what they tell themselves: that they need that fix. So then developers get themselves unstable packages — but wait! If they update just one version further, then compatibility with something will break, and that requires work to fix.
So what happens is they pin and/or vendor dependencies, and don’t update them, even for security updates. I find this quite concerning. For example, take RustDesk, a popular Rust-based remote desktop program. Here’s a quick audit of their libraries using cargo-audit:
[nix-shell:~/vscode/test/rustdesk]$ cargo-audit audit
Fetching advisory database from `https://github.com/RustSec/advisory-db.git`
Loaded 742 security advisories (from /home/moonpie/.cargo/advisory-db)
Updating crates.io index
warning: couldn't update crates.io index: registry: No such file or directory (os error 2)
Scanning Cargo.lock for vulnerabilities (825 crate dependencies)

Crate:     idna
Version:   0.5.0
Title:     `idna` accepts Punycode labels that do not produce any non-ASCII when decoded
Date:      2024-12-09
ID:        RUSTSEC-2024-0421
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0421

Crate:     libgit2-sys
Version:   0.14.2+1.5.1
Title:     Memory corruption, denial of service, and arbitrary code execution in libgit2
Date:      2024-02-06
ID:        RUSTSEC-2024-0013
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0013
Severity:  8.6 (high)
Solution:  Upgrade to >=0.16.2

Crate:     openssl
Version:   0.10.68
Title:     ssl::select_next_proto use after free
Date:      2025-02-02
ID:        RUSTSEC-2025-0004
URL:       https://rustsec.org/advisories/RUSTSEC-2025-0004
Solution:  Upgrade to >=0.10.70

Crate:     protobuf
Version:   3.5.0
Title:     Crash due to uncontrolled recursion in protobuf crate
Date:      2024-12-12
ID:        RUSTSEC-2024-0437
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0437
Solution:  Upgrade to >=3.7.2

Crate:     ring
Version:   0.17.8
Title:     Some AES functions may panic when overflow checking is enabled.
Date:      2025-03-06
ID:        RUSTSEC-2025-0009
URL:       https://rustsec.org/advisories/RUSTSEC-2025-0009
Solution:  Upgrade to >=0.17.12

Crate:     time
Version:   0.1.45
Title:     Potential segfault in the time crate
Date:      2020-11-18
ID:        RUSTSEC-2020-0071
URL:       https://rustsec.org/advisories/RUSTSEC-2020-0071
Severity:  6.2 (medium)
Solution:  Upgrade to >=0.2.23

Crate:     atk
Version:   0.18.0
Warning:   unmaintained
Title:     gtk-rs GTK3 bindings - no longer maintained
Date:      2024-03-04
ID:        RUSTSEC-2024-0413
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0413

Crate:     atk-sys
Version:   0.18.0
Warning:   unmaintained
Title:     gtk-rs GTK3 bindings - no longer maintained
Date:      2024-03-04
ID:        RUSTSEC-2024-0416
URL:       https://rustsec.org/advisories/RUSTSEC-2024-0416
I also checked rustscan and found similar issues.
I’ve pruned the dependency tree and some other unmaintained package issues, but some of these CVEs are bad. Stuff like this is why I don’t trust developers to make packages: they get lazy and sloppy at the cost of security. Stable release distributions, on the other hand, inflict security upgrades on everybody, which is good.
Yeah, I got to thinking about this more after I posted, and it’s a horrible idea. It’d guarantee system updates break user installs, and the only way it couldn’t were if system installs knew about user installs and also updated those
???. This is very incorrect; I don’t know where to start. If a package manager manages its own dependencies/libraries, like Nix portable installs, or is a static binary (e.g., Soar), then system installs will not interfere with the “user” package manager at all. You could also use something like launchd (macOS) or systemd user services (Linux) to update these packages with user-level privileges, in the user’s home directory.
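A rough sketch of the systemd user-service route, with hypothetical unit and updater names:

# everything below runs with plain user privileges, no root
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/mytool-update.service <<'EOF'
[Unit]
Description=Update user-installed mytool

[Service]
Type=oneshot
ExecStart=%h/.local/bin/mytool-update
EOF
cat > ~/.config/systemd/user/mytool-update.timer <<'EOF'
[Timer]
OnCalendar=daily

[Install]
WantedBy=timers.target
EOF
systemctl --user daemon-reload
systemctl --user enable --now mytool-update.timer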
Also, I don’t know where you got the idea that flatpaks manage “system level” software.
It all drives me back to the realization that the best solution is statically compiled binaries, as produced by Go, Rust, Zig, Nim, V.
I dislike these because they commonly also come with version pinning and vendored dependencies. But you should check out Soar and its repository. It also packages AppImages, and “flatimages”, which seem to be similar to Flatpaks but closer to AppImages in distribution.