
I never understood why "modern" tools like Docker have to provide everything: networking, firewall, repository, you name it...

I understand somebody wanting to type "docker run xxx" and have everything set up automatically, but if you're running anything but default networking and actually care where the xxx image comes from, it's gonna fail miserably. Coming from the VM world, I found it much easier to work with the macvlan interfaces that lxd supports, for example - the container gets its own interface and IP address, and all networking can be prepared and set up on the host instead of some daemon thinking it knows my firewall better than me...
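For instance, in LXD that's roughly this (the image version and host interface here are assumptions):

  # launch a container and give it its own macvlan NIC on the host's eth0
  lxc launch ubuntu:18.04 web
  lxc config device add web eth0 nic nictype=macvlan parent=eth0

The container then picks up an address on the LAN like any other machine, and the host firewall stays entirely yours.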



Three words: "ease of use".

Containerization had obviously been around for ages before Docker came on the scene, yet its adoption has dwarfed that of other similar solutions. Why is that?

People have to start with a technology somewhere. If they get frustrated with the process, they'll often discount it and move on, so that first "ooh, that was easy" moment of success is really important.

Docker has that. Sure, it hides a wide variety of complexity under the skin that, in more complex deployments, can come back and bite you, but for people getting started it's much easier than the alternatives.

The "App store" like nature of Docker Hub is another part of that, the ability to easily find base images that you can use to prototype solutions is super-useful as a beginner.

Of course, once you've been using it a while, you might have questions about image provenance, vulnerability management, etc., but those typically aren't part of the initial evaluation.


>>> Containerization has obviously been around for ages before Docker came on the scene, yet it's adoption has dwarfed that of other similar solutions, why is that?

Containerization was mostly limited to Solaris and BSD. It took a while to get to Linux, in the form of Docker.


Well, Linux had containerization before Docker came along. The initial release of LXC was in 2008, a decent distance after Jails/Zones but still five years before Docker. OpenVZ was even earlier, starting in 2005.


Even earlier was Linux-VServer, from 2001 :) So yeah, Linux containers had been around in one form or another for over a decade before Docker helped popularise things.

This makes some interesting reading: https://blog.aquasec.com/a-brief-history-of-containers-from-...


LXC predates Docker, and other lesser forms of containerization (or perhaps "precursors") like simple chroot have been around much longer.


Docker was originally written to use LXC for containerization. It wasn't until right before 1.0 that Docker switched to its own Go-based library for container handling. https://blog.docker.com/2014/03/docker-0-9-introducing-execu...


What about OpenVZ (2005)? It containerizes a whole VPS rather than a single application, of course, but still.

I've used it for more than a decade myself and I like its simplicity.


Doesn't that require out-of-tree kernel patches?


It sure did back then, that was its biggest downside (not sure if it's still required today).

We used it extensively at my work at a web company to package and deploy our whole stack, as far back as 2006.


Yeah, it feels too monolithic. Just so it can showcase running Hello World with a single "docker run" command, I guess?

Another thing people use Docker for but shouldn't is application packaging. With Docker you build one fossilized fat package with all OS and app dependencies baked in. Then some day, after years of using that Docker image, you need to upgrade the OS version in the image, but you can't replicate the app build, because you didn't pin exact library versions and the global app repository's (pip, npm) later version of some package is no longer compatible with your app.

Application packaging is better done in proper packaging systems like rpm or deb (or other proprietary ones) and stored in your organization's package repository. Then you can install these rpm packages in your Docker images and deploy them into the cloud.

The difference between OS dependencies and app dependencies is clear when looking at the contents of actual Dockerfiles. OS dependencies are installed leveraging the rpm or deb ecosystem. Apps are cobbled together using a bunch of bash glue and remote commands to fetch dependencies. Why not use proper packaging for both OS and apps, and then just assemble the Docker image from that?
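A minimal sketch of what that Dockerfile ends up looking like, assuming the app was already built as a (hypothetical) my_app_exe RPM and copied into the build context:

  cat > Dockerfile <<'EOF'
  FROM centos:7
  # install the pre-built app package instead of running bash build glue
  COPY my_app_exe-2.0-1.noarch.rpm /tmp/
  RUN yum install -y /tmp/my_app_exe-2.0-1.noarch.rpm && rm -f /tmp/*.rpm
  CMD ["/opt/my_app/bin/my_app"]
  EOF
  docker build -t my_app:2.0 .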


> With Docker you build one fossilized fat package with all OS and app dependencies baked in.

Exactly. Most uses of Docker are like a junk drawer: neat on the outside, a total mess on the inside.

People stuff their Python 2 app in there and forget what their dependencies are, or where they got them from.

Good luck upgrading that 2-3 years from now.


To be fair, this is how people build applications: "Oh, there's this library, let me just pull that in."


I LOVE the junk drawer analogy. I will credit you on my next blog ;-)


Well, mostly because some of us are just devs and we don't know about it? How about a blog writeup? I'd read it :)


I don't think I've ever come across a developer who made an RPM/DEB package. I'm not sure I've come across a devops/sysadmin who made an RPM/DEB package in recent years.

I wouldn't waste my time learning that if I were you. Software like Python needs pip to build; it can't be built with the OS package manager alone.


> Software like Python needs pip to build; it can't be built with the OS package manager alone.

Well yeah, OS packaging formats are for putting together your build artifacts. You can use whatever to build the software.

If you want to do it 'right', you would use pip to build packages for any runtime dependencies missing from your OS repos, and then package your application. I swear it's much easier than it sounds.

But nothing says you have to do it like an OS maintainer, though. You can also just vendor all your dependencies, slurp them all up into an RPM, and install to /opt.
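A rough sketch of that vendoring approach, using fpm (the packaging tool mentioned further down the thread); the package name is made up and the app is assumed to have a setup.py:

  # vendor the app and all of its deps into one tree under /opt
  python -m venv /opt/my_app
  /opt/my_app/bin/pip install -r requirements.txt .
  # slurp the whole tree up into a single RPM
  fpm -s dir -t rpm -n my_app -v 1.0 /opt/my_app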


+1. Back in the day, we used to mirror all of CPAN, because even as a young buck I saw the risk of rebuilding apps without a copy of the source...


Well, my team does that, for one. We use the Python packaging ecosystem, specifying Python dependencies using standard tools like setup.py, requirements.txt and pip. All Python dependencies are baked into one fat Python package in the PEX format[1]. We also tried Facebook's XAR format[2], without success yet. What matters is having all dependencies statically packaged in one executable file, like a Windows exe.
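A minimal sketch of that step (the entry point name is made up):

  # bake the app and all pinned Python deps into one executable file
  pex . -r requirements.txt -e my_app.main -o my_app.pex
  ./my_app.pex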

Then you proceed with bundling in higher-level OS dependencies, because each app is not just a Python package but also a collection of shell scripts, configs, configuration of paths for caches, output directories, system variables, etc. For this we throw everything into one directory tree and run the fpm [3] command on it, which turns the directory tree into an RPM package. We use an fpm parameter to specify the installation location of that tree, /opt or /usr or elsewhere.
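Something along these lines, assuming the tree was staged under ./root:

  # turn the staged tree into an RPM that installs under /opt/my_app
  fpm -s dir -t rpm -n my_app_exe -v 2.0 --prefix /opt/my_app -C ./root .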

The way to bundle it properly is to actually use two RPM packages linked together by an RPM dependency: one for the app and the other for the deployment configuration. The reason is that you have only one executable, but many possible ways of deploying it, e.g. depending on the environment (dev, staging, prod), or simply because you want to run the same app in parallel with different configurations.

E.g. one RPM package for the app executable and static files:

  my_app_exe-2.0-1.noarch.rpm

and many other related config RPMs:

  my_app_dev-1.2-1.noarch.rpm   (depends on my_app_exe > 2.0)
  my_app_prod-3.0-1.noarch.rpm  (depends on my_app_exe == 2.0)

You then install only the deployment package, and the rpm system fetches its dependencies for you automatically.
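For instance, with the packages above published to a repository the host is configured to use:

  yum install my_app_prod
  # dependency resolution pulls in my_app_exe-2.0-1.noarch.rpm automatically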

There are other mature places that use similar techniques for deployments, for example [4].

All of this then can be wrapped in even higher level of packaging, namely Docker.

[1] https://github.com/pantsbuild/pex
[2] https://code.fb.com/data-infrastructure/xars-a-more-efficien...
[3] https://github.com/jordansissel/fpm
[4] https://hynek.me/articles/python-app-deployment-with-native-...


Everything at Amazon has been packaged and deployed without containers for decades.

All engineers have to learn how to package their stuff.


Our team still uses debs. In fact I made a tool to greatly simplify the process: https://github.com/hoffa/debpack


I fail to see how RPM/deb would be better at this (and I say this as someone who has a lot of experience with both). You still need to pin dependencies with RPM/deb, you still have to deal with OS release updates, and in the end it's just a matter of making sure you frequently update and test your upstream dependencies.


You want to separate app and OS dependencies because in the future you may need to update the system, which would mean rebuilding the Docker image. By then it can happen that you're no longer able to reproduce your app build. But when you have the app package sitting separately in some repository, you can just create a new Docker image and reuse the old app package without rebuilding it.


Because then you depend on your OS to upgrade your application dependencies.


But you can have a macvlan setup in Docker.

https://docs.docker.com/network/macvlan/

And you can also tell Docker not to touch your iptables rules.

https://docs.docker.com/network/iptables/
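For example (the subnet, gateway and parent interface here are assumptions):

  # macvlan network attached directly to the host's eth0
  docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 lan
  docker run --rm --network lan alpine ip addr

  # and in /etc/docker/daemon.json, to keep Docker out of iptables:
  # { "iptables": false }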

I don't think I've ever run into a container in the wild that is dependent on a specific network plug-in.


> I understand somebody wanting to type "docker run xxx" and have everything set up automatically

Well there's your answer. Simple things gain traction.


You can use macvlan (or ipvlan) with Docker, it's built in.

But yes, I agree (as a former Docker Inc employee and current maintainer of Moby, which is what Docker is built from): the default networking in Docker is often problematic... firewall handling is annoying, at least from a sysadmin perspective.


That's because Docker was really designed to replace Vagrant, and that shows.

The use case is an individual developer who wants to get a test environment up and running quickly without needing to understand how IP routing works. It's great for that use, not so much for workloads in production.


Yeah, exactly... and those developers who just want to run something quickly are not really your ideal customers. People using it for production workloads are...


Because most people don’t know anything about networking or firewalls and don’t want to.


Most people know nothing about technology and don't want to. Not sure where that argument goes, but I'm guessing jobless...


Docker provides NAT, but what do you mean it provides a firewall? I quite often see people deploy Docker without a firewall and at some point notice that they've exposed services to the internet that they didn't want to.


They mean it pokes holes in the firewall that are difficult to filter.
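The classic example: -p publishes on 0.0.0.0, and traffic to published ports is DNATed before it ever hits the host's INPUT chain, so the database below ends up reachable from the internet. Binding to loopback avoids that:

  # exposed to the world, bypassing the usual INPUT-chain rules
  docker run -d -p 5432:5432 postgres

  # reachable only from the host itself
  docker run -d -p 127.0.0.1:5432:5432 postgres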



