
I really don't understand this problem at all. I can cargo-deb and get a reasonable package with virtually no effort. What is non-compliant about that?


My understanding (which could be wrong) is that this is an attempt to preserve Debian's "global" view of dependencies, wherein each Rust package has a set of dependencies that's consistent with every other Rust package (or, if not every, as many as possible). This is similar to the C/C++ packaging endeavor, where dependencies on libraries are handled via dynamic linkage to a single packaged version that's compatible with all dependents.

If the above is right this contortion is similar to what's happened with Python packaging in Debian and similar, where distributions tried hard to maintain compatibility inside of a single global environment instead of allowing distinct incompatible resolutions within independent environments (which is what Python encourages).

I think the "problem" with cargo-deb is that it bundles all-in-one, i.e. doesn't devolve the dependencies back to the distribution. In other words, it's technically sound but not philosophically compatible with what Debian wants to do.


Packaging a distribution efficiently requires sharing as many dependencies as possible, and ideally hosting as much of the stuff as possible in an immutable state. I think that's why Debian rejects language-specific package distribution. How bad would it suck if every Python app you installed needed to have its own venv for example? A distro might have hundreds of these applications. As a maintainer you need to try to support installing them all efficiently with as few conflicts as possible. A properly maintained global environment can do that.

Edit: I explained lower down but I also want to mention here, static linkage of binaries is a huge burden and waste of resources for a Linux distro. That's why they all tend to lean heavily on shared libraries unless it is too difficult to do so.


> Packaging a distribution efficiently requires sharing as many dependencies as possible, and ideally hosting as much of the stuff as possible in an immutable state.

I don't think any of this precludes immutability: my understanding is Debian could package every version variant (or find common variants without violating semver) and maintain both immutability and their global view. Or, they could maintain immutability but sacrifice their package-level global view (but not metadata-level view) by having Debian Rust source packages contain their fully vendored dependency set.

The former would be a lot of work, especially given how manual the distribution packaging process is today. The latter seems more tractable, but requires distributions to readjust their approach to dependency tracking in ecosystems that fundamentally don't behave like C or C++ (Rust, Go, Python, etc.).

> How bad would it suck if every Python app you installed needed to have its own venv for example?

Empirically, not that badly. It's what tools like `uv` and `pipx` do by default, and it results in a markedly better net user experience (since Python tools actually behave like hermetic tools, and not implicit modifiers of global resolution state). It's also what Homebrew does -- every packaged Python formula in Homebrew gets shipped in its own virtual environment.

> A properly maintained global environment can do that.

Agreed. The problem is the "properly maintained" part; I would argue that ignoring upstream semver constraints challenges the overall project :-)


>I don't think any of this precludes immutability: my understanding is Debian could package every version variant (or find common variants without violating semver) and maintain both immutability and their global view.

Debian is a binary-first distro so this would obligate them to produce probably 5x the binary packages for the same thing. Then you have higher chances of conflicts, unless I'm missing something. C and C++ shared libraries support coexistence of multiple versions via semver-based name schemes. I don't know if Rust packages are structured that well.

>Empirically, not that badly. It's what tools like `uv` and `pipx` do by default, and it results in a markedly better net user experience (since Python tools actually behave like hermetic tools, and not implicit modifiers of global resolution state). It's also what Homebrew does -- every packaged Python formula in Homebrew gets shipped in its own virtual environment.

These are typically not used to install everything that goes into a whole desktop or server operating system. They're used to install a handful of applications that the user wants. If you want to support as many systems as possible, you need to be mindful of resource usage.

>I would argue that ignoring upstream semver constraints challenges the overall project :-)

Yes it's a horrible idea. "Let's programmatically add a ton of bugs and wait for victims to report the bugs back to us in the future" is what I'm reading. A policy like that can be exploited by malicious actors. At minimum they need to ship the correct required versions of everything, if they ship anything.


> Then you have higher chances of conflicts, unless I'm missing something.

For Python, you could install libraries into versioned directories, create a venv for each program, and then in each venv/lib/pythonX/site-packages/libraryY dir just put a symlink to the appropriate versioned global copy.
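A minimal sketch of that scheme (all paths and package names here are hypothetical; a real implementation would also need to handle bytecode caches, console scripts, and metadata):

```python
import sys
import venv
import pathlib
import tempfile

# Hypothetical shared root: each library version is installed exactly once.
root = pathlib.Path(tempfile.mkdtemp())
shared = root / "python-libs" / "somelib-2.31.0" / "somelib"
shared.mkdir(parents=True)
(shared / "__init__.py").write_text("VERSION = '2.31.0'\n")

# Each program gets its own venv...
venv_dir = root / "apps" / "mytool" / "venv"
venv.create(venv_dir, with_pip=False)

# ...whose site-packages merely symlinks to the shared, versioned copy.
py = f"python{sys.version_info.major}.{sys.version_info.minor}"
site = venv_dir / "lib" / py / "site-packages"
(site / "somelib").symlink_to(shared)
```

Updating the library would then mean retargeting one symlink per program rather than shipping each program its own bundled copy.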


That would make it difficult to tell at a system level what the exact installed dependencies of a program are. It would also require the distro to basically re-invent pip. Want to invoke one venv program from another one? Well, good luck figuring out conflicts in their environments which can be incompatible from the time they are installed. Now you're talking about a wrapper for each program just to load the right settings. This is not even an exhaustive list of all possible complications that are solved by having one global set of packages.


Do you think that's user friendly?


I see it as more user friendly - instead of forgetting to activate the venv and having the program fail to run/be broken/act weird, you run the program and it activates the venv for you so you don't have that problem.


Do you think your software is so important that people will do all of that rather than use something better? (For example your software but patched by a distribution to work easily without doing all of that complication)


I'm talking about a shim that distributions can use to launch python programs, some of which they distribute, rather than software I write. In particular, ML researchers aren't sysadmins, but a lot of their software is in the form of community python programs, which is to say not polished commercial apps with backing like, say, Adobe Photoshop, and this shim solves one of python's pitfalls for users.


> Debian is a binary-first distro so this would obligate them to produce probably 5x the binary packages for the same thing. Then you have higher chances of conflicts, unless I'm missing something.

Ah yeah, this wouldn't work -- instead, Debian would need to bite the bullet on Rust preferring static linkage and accept that each package might have different interior dependencies (still static and known, just not globally consistent). This doesn't represent a conflict risk because of the static linkage, but it's very much against Debian's philosophy (as I understand it).

> I don't know if Rust packages are structured that well.

Rust packages of different versions can gracefully coexist (they do already at the crate resolution level), but static linkage is the norm.

> These are typically not used to install everything that goes into a whole desktop or server operating system. They're used to install a handful of applications that the user wants.

I might not be understanding what you mean, but I don't think the user/machine distinction is super relevant in most deployments: in practice the server's software shouldn't be running as root anyways, so it doesn't matter much that it's installed in a user-held virtual environment.

And with respect to resource consumption: unless I'm missing something, I think the resource difference between installing a stack with `pip` and installing that same stack with `apt` should be pretty marginal -- installers will pay a linear cost for each new virtual environment, but I can't imagine that being a dealbreaker in most setups (already multiple venvs are atypical, and you'd have to be pretty constrained in terms of storage space to have issues with a few duplicate installs of `requests` or similar).


>I might not be understanding what you mean, but I don't think the user/machine distinction is super relevant in most deployments: in practice the server's software shouldn't be running as root anyways, so it doesn't matter much that it's installed in a user-held virtual environment.

Many software packages need root access but that is not what I was talking about. Distro users just want working software with minimal resource usage and incompatibilities.

>Rust packages of different versions can gracefully coexist (they do already at the crate resolution level), but static linkage is the norm.

Static linkage is deliberately avoided as much as possible by distros like Debian due to the additional overhead. It's overhead on the installation side and mega overhead on the server that has to host a download of essentially the same dependency many times for each installation when it could have instead been downloaded once.

>And with respect to resource consumption: unless I'm missing something, I think the resource difference between installing a stack with `pip` and installing that same stack with `apt` should be pretty marginal -- installers will pay a linear cost for each new virtual environment, but I can't imagine that being a dealbreaker in most setups (already multiple venvs are atypical, and you'd have to be pretty constrained in terms of storage space to have issues with a few duplicate installs of `requests` or similar).

If the binary package is a thin wrapper around venv, then you're right. But these packages are usually designed to share dependencies with other packages where possible. So for example, if you had two packages installed using some huge library for example, they only need one copy of that library between them. Updating the library only requires downloading a new version of the library. Updating the library if it is statically linked requires downloading it twice along with the other code it's linked with, potentially using many times the amount of resources on network and disk. Static linking is convenient sometimes but it isn't free.


OSX statically links everything and has for years. When there was a vulnerability in zlib they had to release 3GB of application updates to fix it in all of them. But you know what? It generally just works fine, and I'm not actually convinced they're making the wrong tradeoff.


>OSX statically links everything and has for years. When there was a vulnerability in zlib they had to release 3GB of application updates to fix it in all of them. But you know what? It generally just works fine, and I'm not actually convinced they're making the wrong tradeoff.

Let's see. On one hand there is more compile time, disk usage, bandwidth usage, RAM usage required for static linking. On the other hand we have a different, slightly more involved linking scheme that saves on every hardware resource. It seems to me that static linking is rarely appropriate for most applications and systems.


> On the other hand we have a different, slightly more involved linking scheme that saves on every hardware resource.

But, at least as implemented on mainstream Linux distributions, at the cost of "DLL Hell" that makes releasing reasonably granular libraries on a reasonably granular schedule essentially impossible, as per the article.

I'm all for dynamic linking in theory, but the way the likes of Debian do it makes the costs too high.


>But, at least as implemented on mainstream Linux distributions, at the cost of "DLL Hell" that makes releasing reasonably granular libraries on a reasonably granular schedule essentially impossible, as per the article.

The article is about problems associated with Rust. I think Rust is too new of an ecosystem to have mature developers committing to stable versions of libraries and applications which depend on stable libraries, both of which are essential to making dynamic linkage work.

Linux doesn't have DLL Hell generally. That term comes from the Windows world where there are systems in place to store every version of every DLL ever seen, because even DLLs with the same version may not be interchangeable. That is truly hellish.


> Linux doesn't have DLL Hell generally.

It absolutely does, especially when there are incompatible upgrades. People still talk about libc.so.6 issues. Whenever you upgrade a system library you can get breakage, only last week I had to deal with that kind of problem (on a Debian system no less).

> there are systems in place to store every version of every DLL ever seen, because even DLLs with the same version may not be interchangeable. That is truly hellish.

The fact that they've put those systems in place makes it much better positioned than Linux IMO. The implementation may be ugly, but ultimately windows programs continue to work even in the face of incompatible library changes, and without having abandoned dynamic libraries entirely. I think it's possible to do dynamic linking right (perhaps with something like Nix), but as implemented on traditional Debian-style distributions the cost is too high.


Historically, the main reason why dynamic linking is even a thing is because RAM was too limited to run "heavy" software like, say, an X server.

This hasn't been true for decades now.


This is still true.

Static linking works fine because 99% of what you run is dynamically linked.

Try to statically link your distribution entirely and see how the RAM usage and speed will degrade :)


But there's a pretty significant diminishing return between, say, the top 80 most linked libraries and the rest.


Or using OS IPC processes for every single plugin on something heavy like a DAW.


RAM is still a limited resource. Bloated memory footprints hurt performance even if you technically have the RAM. The disk, bandwidth, and package builder CPU usage involved to statically link everything alone is enough reason not to do it, if possible.


Other OSes got dynamic linking before it became mainstream on UNIX.

Plugins and OS extensions were also a reason why they came to be.


> Debian could package every version variant ... Or, they could maintain immutability ... by having Debian Rust source packages contain their fully vendored dependency set. The former would be a lot of work, especially given how manual the distribution packaging process is today.

That would work for distributions that provide just distributions/builds. But one major advantage of Debian is that it is committed to providing security fixes regardless of upstream availability. So they essentially stand in for maintainers. And maintaining many different versions instead of just the latest one is plenty of redundant work that nobody would want to do.


>... Or, they could maintain immutability ... by having Debian Rust source packages contain their fully vendored dependency set. The former would be a lot of work, especially given how manual the distribution packaging process is today.

That's not reasonable for library packages, because they may have to interact with each other. You're also proposing a scheme that would cause an explosion in resource usage when it comes to compilation and distribution of packages. Packages should be granular unless you are packaging something that is just too difficult to handle at a granular level, and you just want to get it over with. I don't even know if Debian accepts monolithic packages cobbled together that way. I suspect they do, but it certainly isn't ideal.

>And to maintain many different versions instead of just latest one is plenty of redundant work that nobody would want to do.

When this is done, it is likely because updating is riskier and more work than maintaining a few old versions. Library authors that constantly break stuff for no good reason make this work much harder. Some of them only want to use bleeding edge features and have zero interest in supporting any stable version of anything. Package systems that let anyone publish easily lead to a proliferation of unstable dependencies like that. App authors don't necessarily know what trouble they're buying into with any given dependency choice.


> You're also proposing a scheme that would cause

I do not propose that scheme; I just cited the parent comment (from woodruffw).


> my understanding is Debian could package every version variant

unlike pypi, debian patches CVEs, so having 3000 copies of the same vulnerability gets a bit complicated to manage.

Of course if you adopt the pypi/venv scheme where you just ignore them, it's all much simpler :)


This is incorrect on multiple levels:

* Comparing the two in this regard is a category error: Debian offers a curated index, and PyPI doesn't. Debian has a trusted set of packagers and package reviewers; PyPI is open to the public. They're fundamentally different models with different goals.

* PyPI does offer a security feed for packages[1], and there's an official tool[2] that will tell you when an installed version of a package is known to be vulnerable. But this doesn't give PyPI the ability to patch things for you; per above, that's something it fundamentally isn't meant to do.

[1]: https://docs.pypi.org/api/json/#known-vulnerabilities

[2]: https://pypi.org/project/pip-audit/


It's a completely fair comparison if one is assessing which is more secure. The answer is completely straightforward.

One project patches/updates vulnerable software and makes sure everything else works, while the other puts all the effort on the user.


>How bad would it suck if every Python app you installed needed to have its own venv for example?

You just described every python3 project in 2024. Pretty much none will be expected to work with system python. But your point still stands: it's not a good thing that there is no python, only pythons. And it's not a good thing that there is no rustc, only rustcs, etc., let alone trying to deal with cargo.


It’s not that they don’t work with the system Python, it’s that they don’t want to share the same global package namespace as the system Python. If you create a virtual environment with your system Python, it’ll work just fine.

(This is distinct from Rust, where there’s no global package namespace at all.)


> How bad would it suck if every Python app you installed needed to have its own venv for example?

You mean…the way many modern python apps install themselves? By setting up their own venv? Which is a sane and sensible thing to do, given Python's sub-par packaging experience.


Yes, I'm indirectly saying that the way many contemporary apps are managed sucks. Python's packaging experience is fine as far as tools in that category go. The trouble happens when packages are abandoned, or make assumptions that are bad for users. Even if everyone was a good package author, there would be inevitable conflicts.

The problem extends way beyond Python. This is why we have Docker, Snap, Flatpak, etc.: to work around inadequate maintenance and package conflicts without touching any code. These tools make it even easier for package authors to overlook bad habits. "Just use a venv bro" or "Just run it in Docker" is a common response to complaints.


I want to challenge this assumption that “the distro way is the good way” and anything else that people are doing is the “wrong” way.

I want to challenge it, because I’m beginning to be of the opinion that the “distro way” isn’t actually entirely suitable for a lot of software _anymore_.

The fact that running something via docker is easier for people than the distro way indicates that there are significant UX/usability issues that aren’t being addressed. Docker as a mechanism to ship stuff for users does suck, and it’s a workaround for sure, but I’m not convinced that all those developers are “doing things wrong”.


>The fact that running something via docker is easier for people than the distro way indicates that there are significant UX/usability issues that aren’t being addressed.

It's a lack of maintenance that isn't getting addressed. Either some set of dependencies isn't being updated, or the consumers of those dependencies aren't being updated to match changes in those dependencies. In some cases there is no actual work to do, and the container user is just too lazy to test anything newer. Why try to update when you can run the old stuff forever, ignoring even the slow upgrades that come with your Linux distro every few years?

Some people would insist that running old stuff forever in a container is not "doing things wrong" because it works. But most people need to update for security reasons, and to get new features. It would be better to update gradually, and I think containers discourage gradual improvements as they are used in most circumstances. Of course you can use the technology to speed up the work of testing many different configurations as well, so it's not all bad. I fear there are far more lazy developers out there than industrious ones however.


> How bad would it suck if every Python app you installed needed to have its own venv for example?

I would love to have that. Actually that's what I do: I avoid distribution software as much as possible and install it in venvs and similar ways.


Now tell your grandmother to install a software that way and report back with the results please.


Hahaha try asking your grandmother to install with apt and you get the same result.

I'd estimate most Unix distributions are used in one of 3 ways:

- a technical application maintained by a tech-savvy admin or a team of them; this would primarily be server usage.

- desktop usage by a developer.

- a restricted installation on older hardware for a non-tech-savvy person. Their user account probably shouldn't be given permission to install new software, so none of this has any relevance to them.

You average non-technical user certainly isn’t running Debian.


> Hahaha try asking your grandmother to install with apt and you get the same result.

Hahahaha surely you're being daft on purpose pretending you don't know there's a GUI to do this right? https://apps.kde.org/it/discover/

> You average non-technical user certainly isn’t running Debian.

And you know this how? The same way you absolutely knew there's no GUI for apt? :D

Also just fyi, IT employees don't have unlimited time, so they'd rather use apt than whatever weird sequence of steps you devised to install your own application.


> How bad would it suck if every Python app you installed needed to have its own venv for example?

Yeah I hacked together a shim that searches the python program's path for a directory called venv and shoves that into sys.path. Haven't hacked together reusing venv subdirs like pnpm does for JavaScript, but that's on my list.
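Roughly what such a shim could look like (a guess at the behavior described, not the commenter's actual code): walk up from the target script looking for a sibling `venv` directory, prepend its site-packages to `sys.path`, then run the script.

```python
import sys
from pathlib import Path

def find_venv_site_packages(script_path):
    """Walk up from the script looking for a sibling `venv` directory;
    return its site-packages directory if one exists, else None."""
    for parent in Path(script_path).resolve().parents:
        hits = sorted((parent / "venv").glob("lib/python*/site-packages"))
        if hits:
            return hits[0]
    return None

def run_with_venv(script_path):
    """Prepend the discovered site-packages to sys.path, then run the script
    as if it were invoked directly."""
    import runpy
    site = find_venv_site_packages(script_path)
    if site is not None:
        sys.path.insert(0, str(site))
    runpy.run_path(str(script_path), run_name="__main__")
```

This sidesteps the "forgot to activate the venv" failure mode described upthread, at the cost of the shim guessing which venv belongs to which program.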


The sheer amount of useless busywork needed for this, only to end up with worse results and piss off application creators, is peak fucking Linux.

If I was paid to do this kind of useless work for a living I’d probably be well on the way to off myself.


Software bill-of-materials work has become a huge focus in the security/SRE space lately, due to the existence of actually attempted or successful supply chain attacks.

In that context, creating an ever increasing menagerie of dependencies of subtly different versions of the same thing is the pointless busy work because for the tiny bit of developer convenience, you're opening an exponentially growing amount of audit work to prove the dependencies are safe.


Cargo audits are published via cargo vet, once per package version, whereas this distro nonsense will be getting re-done for every distro, has to be updated despite potentially breaking code changes whenever upstream changes, and whenever the distro updates. What a nightmare by comparison.


You make a great point, but I think we're going to see stuff like Microsoft selling 'supported NPM' to corps, rather than a zillion volunteers showing up to do monkey packaging work for Debian. (In fact, inserting some rando to fork the software makes the problem worse.)


Right, but the point is then open source by necessity needs to constrain its dependencies to sensible, auditable sets.

But I'd note that a hypothetical "verified NPM" would also result in the same thing: Microsoft does not have infinite resources for such a thing, so you'd just have a limited set of approved deps yet again (which would in fact make piggy backing them relatively easy for distros).

I can't see a way to slice it where it's reasonable to expect such a world to just support enormous nests of dependency versions.


Agreed, but on the other hand I never understood the business model of the for-profit NPM Corporation, or why Microsoft would buy them out, so there has to be some angle here. Maybe they cannot solve all of "the supply chain", but like 80% of it, yeah sure, sign here. Debian can package that too, whatever, they will get the F500s.


The verified npm will be like the verified pypi: "this thing was built on github, but we actually have no fucking clue if it's a bitcoin miner or a legit library"


Cargo even supports multiple incompatible versions of the same library coexisting within a single binary. E.g. an application I maintain depended on both winit 0.28 and 0.29 for a while, since one of my direct dependencies required winit 0.28, another required winit 0.29, and the API changed significantly between the two versions. So if the set of Rust applications packaged by Debian grows large enough, this "let's just pick one canonical version of each package" approach seems like a complete non-starter.
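To illustrate (the crate names below are made up; only the winit versions come from the example above), a manifest like this resolves to a lock file containing both winit majors, and `cargo tree --duplicates` will list them:

```toml
[dependencies]
# gui-backend-a and gui-backend-b are hypothetical names standing in for
# the two direct dependencies; each pins an incompatible winit major.
gui-backend-a = "0.3"  # internally requires winit = "0.28"
gui-backend-b = "0.4"  # internally requires winit = "0.29"
```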

I'd expect it to be much less effort for Debian to use the dependencies the application says it's compatible with, and build tooling allowing them to track/audit/vendor/upgrade Cargo dependencies as needed; rather than trying to shove the Cargo ecosystem into their existing tooling that is fundamentally different (and thus incompatible).


I think what I’m missing is why all of this is necessary in a world without dynamic linking. We’re talking about purely build time dependencies, who cares about those matching across static binaries? If it’s this much of a headache, I’d rather just containerize the build and call it a day.


This is pretty much my thought as well -- FWICT the ultimate problem here isn't a technical one, but a philosophical disagreement between how Rust tooling expects to be built and how Debian would like to build the world. Debian could adopt Rust's approach with minor technical accommodations (as evidenced by `cargo-deb`), but to do so would be to shed Debian's commitment to global version management.


Because they are also maintaining all those build time dependencies, including dealing with CVE's, back porting patches, etc.


The world without dynamic linking ended eons ago. You are just too young to remember it.

But, if you like it, find like-minded people here on HN and create a successful distro with static linking only. Maybe you will succeed where others failed. Thank you in advance.


I believe that's usually so they can track when a library has a security vulnerability and needs to be updated, regardless of whether the upstream package itself has a version that uses the fixed library.


This is a general Linux distro problem, which is entirely self-inflicted. The distro should only carry the software for the OS itself, and any applications for the user should come with all their dependencies, like on Windows. Yes, it kind of sucks that I have 3 copies of Chrome (Electron) and 7 copies of Qt on my Windows system, but that sure works a hell of a lot better than trying to synchronise the dependencies of a dozen applications.

The precise split between OS service and user application can be argued endlessly, but that's for the maintainers to decide. The OS should not be a vehicle for delivering enduser applications. Yes, some applications should remain as a nice courtesy (e.g. GNU chess), but the rest should be dropped. Basically the split should be "does this project want to commit to working with major distros to keep its dependencies reasonable".

I really hope we can move most Linux software to flatpak and have it updated and maintained separately from the OS. After decades of running both Linux and Windows, the Windows model of an application coming with all its dependencies in a single folder really is a lot better.


> Yes, it kind of sucks that I have 3 copies of Chrome (Electron) and 7 copies of Qt on my Windows system, but that sure works a hell of a lot better than trying to synchronise the dependencies of a dozen applications.

Does it? Even if 2 of those chromiums and all of the QTs have actively exploited vulnerabilities and it's anyone's guess if/when the application authors might bother updating?


A common downside is that the distribution picks the lowest common denominator of some dependency, and all apps that require a newer version are held back. That version may well be out of support and no longer receive fixes at all, which leaves the burden of maintenance on the distribution. Depending on the package maintainer, results may vary. (We sysadmins still remember Debian's effort to backport a fix to OpenSSL, which broke key generation.)

This is clearly a tradeoff with no easy win for either side.


> We sysadmins still remember debians effort to backport a fix to OpenSSL and breaking key generation.

Could you remind those of us who don't remember that? The big one I know about is CVE-2008-0166, but that was 1. not a backport, and 2. was run by upstream before they shipped it.

But yes, agreed that it's painful either way; I think it comes down to that someone has to do the legwork, and who exactly does it has trade-offs either way.


It wasn't run by upstream. It was run by the wrong mailing list, and only for one half of the Debian changes.


You’re correct. It wasn’t a backport, it was an attempt to fix a perceived issue that upstream had not fixed - a read of uninitialized memory. What the maintainer did not understand was that the call was deliberate. With this and subsequent changes, they broke the randomness in the key generation.


Good thing openssl is so good it hasn't had any CVE at all in all the years since that happened right?


So we’re introducing new bugs to add a little extra spice?

Not trying to defend the code quality of OpenSSL, but the fallout of this bug affected everything and everyone and bad keys are still found 16 years later as of 2024.[1] And it was completely and utterly unnecessary - reporting the issue to upstream and fixing it upstream would have prevented the entire disaster.

[1]https://16years.secvuln.info/


Yeah and now they know (like everyone else already knew) that uninitialised memory is a bad idea :D


That is a problem for LTS distributions. Rolling distributions do not have this problem.


I don't think that's true? If packages foo and bar both need libbaz, and foo always uses the latest version of libbaz but bar doesn't, you're going to have a conflict no matter whether you're rolling or not. If anything, a slow-moving release-based distro could have an easier time if they can fudge versions to get an older version of foo that overlaps dependencies with bar.


When the situation you describe happens, the easiest thing for the distro to do is to make the two versions of libbaz coinstallable via different package names, different sonames, etc. This is how every distro, LTS or rolling, handled openssl 1.0 vs 1.1 vs 3 for example.
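As a toy model of that naming scheme (dummy files in a temp directory, not a real install): each incompatible major gets its own soname symlink, so binaries linked against different majors can be installed at once.

```python
import os
import tempfile

libdir = os.path.join(tempfile.mkdtemp(), "lib")
os.makedirs(libdir)

# The real files carry the full version; the soname symlink carries only the
# major, which is what a dynamically linked binary records and loads.
for real, soname in [("libbaz.so.1.0.5", "libbaz.so.1"),
                     ("libbaz.so.2.3.1", "libbaz.so.2")]:
    open(os.path.join(libdir, real), "w").close()
    os.symlink(real, os.path.join(libdir, soname))

# A binary linked against the old ABI loads libbaz.so.1; one linked against
# the new ABI loads libbaz.so.2. Neither conflicts with the other.
```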

Regardless, the original point was:

>>That version may well be out of support and not receive fixes at all any more, which leaves the burden of maintenance on the distribution.

... and my response that this is only a problem for LTS distros stands. A rolling release distro will not get in the business of maintaining packages after upstream dropped support. It is only done by LTS distros, because the whole point of LTS distros is that their packages must be kept maintained and usable for N years even as upstreams lose interest and the world perishes in nuclear fire and zombies walk the Earth feasting on the brains of anyone still using outdated software.

---

Now, to play devil's advocate, here's an OpenSUSE TW (rolling distro) bug that I helped investigate: https://bugzilla.opensuse.org/show_bug.cgi?id=1214003 . The tl;dr is that:

- Chromium upstream vendors all its dependencies, but OpenSUSE forces it to compile against distribution packages.

- Chromium version X compiles against vendored library version A. OpenSUSE updates its Chromium package to version X and also has library version A in its repos, so the Chromium build against the distro library works fine.

- Chromium upstream updates the vendored library to version B, and makes that change as part of Chromium version Y. OpenSUSE hasn't updated yet.

- OpenSUSE updates the library package to version B. Chromium version X's package is automatically rebuilt against the new library, and it compiles fine because the API is the same.

- Disaster! The semantics of the library did change between versions A and B even though the API didn't, so OpenSUSE's Chromium now segfaults due to nullptr deref. Chromium version Y contains Chromium changes to account for this difference, but the OpenSUSE build of Chromium X of course doesn't have them.

You will note that this is caused by a distro compiling a package against a newer version of a dependency than upstream tested it against, not an older one, but you are otherwise welcome to draw parallels from it.

In this case it was fixed by backporting the Chromium version Y change to work with library version B, and eventually the Chromium package was updated to version Y and the patch was dropped. In a hypothetical scenario where Chromium could not be updated nor patched (say the patch was too risky), it could have worked for the distro to make the library coinstallable and then have Chromium use library version A while everything else uses version B.


Another downside might be that the developer simply doesn't test their software against OS-supplied library versions, which can cause all kinds of bugs. There's a reason containers won in server-side development.


That's why distros (at least of the kind that Debian is) aim to do everything themselves; they mirror upstream code ( https://sources.debian.org/ ), they write their own packaging scripts, they test the whole thing together, and then they handle problems in-house ( https://www.debian.org/Bugs/Reporting ). The upstream developer shouldn't need to have done any of this themselves.


MacOS and Windows both seem to do quite well actually on this front. You should have OS-level defense mechanisms rather than trying to keep every single application secure. For example, qBitTorrent didn’t verify HTTPS certs for something like a decade. It’s really difficult to keep everything patched when it’s your full time job. When it’s arbitrary users with a mix of technical abilities and understandings of the issue it’s a much worse problem.


Isn't that an argument that macOS/Windows aren't doing so well? On Debian, I can run `apt upgrade` and patch every single program. On Windows, I have to have a bunch of updater background processes and that still doesn't cover everything.


If you think that gets you the latest patches for all the software you may have installed, you may want to avoid people offering to sell you bridges. It's an illusion: you're relying on the best effort of volunteers "maintaining" an insane number of software packages (for both bugs and security), often ignoring upstream, or even having upstream treat them as outsourced support. I would point you to qBittorrent's ten-year bug of not verifying SSL certs as an example of the kind of bug that needs broader defense-in-depth mechanisms to keep you safe.


Possibly the not trying to keep applications secure explains why those systems get hacked all the time?


A far more reasonable explanation is that the addressable market for Windows and iOS is much larger and more lucrative: iOS exploits go for six figures, whereas Linux exploits generally don't fetch as much. Exploits for Chrome and Safari sell for a lot for similar reasons. No piece of software is likely to be safe from a super determined and well-funded adversary, which is why you need ways of structuring things (e.g. VMs, containers, peer SSL) to make them safer.

If Linux had anywhere near the market share of those OSes you would see much more publicly the problems of how Debian and other distros choose to maintain those OSes.


The problem here is ultimately visibility and actionability: half a dozen vendored binaries with known vulnerabilities aren't much better than a single distribution-packaged one if the distribution doesn't (or can't) provide security updates.

Or, as another framing: everything about user-side packaging cuts both ways: it's both a source of new dependency and vulnerability tracking woes, and it's a significant accelerant to the process of getting patched versions into place. Good and bad.


"The distro should only carry the software for the OS itself and any applications for the user should come with all their dependencies, like on Windows."

You can run a distro/OS that works the way you like. Other people can run OSes that work the way they want.


> any applications for the user should come with all their dependencies, like on Windows

s/dependencies/security holes/g


> any applications for the user should come with all their dependencies, like on Windows

Because that works so well on windows right?


You’re getting downvoted but you’re not wrong. The Linux Distro model of a single global shared library is a bad and wrong design in 2024. In fact it’s so bad and wrong that everyone is forced to use tools like Docker to work around the broken design.


I guess you're running vulnerable software happily :)


This seems like a fool's errand to me. Unlike C/C++, Rust apps typically have many small-ish dependencies, so trying to align them to a distro-global approved version seems pointless and laborious. Pointless because Rust programs will have far fewer CVEs that would warrant such an approach.


Laborious but not pointless.

Rust programs have fewer CVEs for two reasons: its safe design, and its experienced user base. As it grows more widespread, more thoughtless programmers will create insecure programs in Rust. They just won’t often be caused by memory bugs.


I'd think logic bugs are the majority of CVEs, and rust doesn't magically make those go away


The "majority of CVEs" isn't a great metric, since (1) anybody can file a CVE, and (2) CNAs can be tied to vendors, who are incentivized to pre-filter CVEs or not issue CVEs at all for internal incidents.

Thankfully, we have better data sources. Chromium estimates that 70% of serious security bugs in their codebases stem from memory unsafety[1], and MSRC estimates a similar number for Microsoft's codebases[2].

(General purpose programming languages can't prevent logic bugs. However, I would separately argue that idiomatic Rust programs are less likely to experience classes of logic bugs that are common in C and C++ programs, in part because strong type systems can make invalid states unrepresentable.)

[1]: https://www.chromium.org/Home/chromium-security/memory-safet...

[2]: https://msrc.microsoft.com/blog/2019/07/we-need-a-safer-syst...


Bad developers (aka most of the developers) want to be able to break compatibility every 3 days or so, and pinning a precise version lets them do that.

Some users commenting here are employed by python, which also has a policy of breaking compatibility all the time.

It's very fun to develop in this way, but of course completely insecure.

These developers might not care because they aren't in charge of security, while distributions care about security and distro maintainers understand that shipping thousands of copies of the same code means you can't fix vulnerabilities (and it's also terribly inefficient).

My suspicion is that most developers think their own software is a special snowflake, while everyone else's software is still expected to behave normally because who's got the time to deal with that anyway?


AFAICT, nobody in this thread is employed by the PSF (I assume that's what you meant by "python"). And, to the best of my knowledge, there is no "break things just because" policy within Python.


I don't know who is signing your checks, I know what you are paid to work on :)

The distinction between consultant and employee is pretty moot if you're not the tax office.



