Seems the person with the most contributions is Natanael Copa, who is also one of the lead maintainers of the Alpine Linux distribution. Pretty cool! I look forward to the first stable release!
I didn't notice this, and this is a huge factor for raising my own interest. Natanael is prolific.
The project already piqued my interest because I think one of kubernetes' main problems is how hard it is to hold all the actual binaries that need to run in your head, but I think the real test will be how it migrates/upgrades. The "don't break user space" stability of projects like linux and postgres would be a game-changer in the cloud native space.
Is it just me or do other people also read the name as `chaos`?
I guess with k8s one hard part is getting it well configured and up and running, and the other is maintaining it and adjusting it to the project's needs. But it seems the project tries to improve the whole experience. Good luck (honestly)!
Pronouncing kubernetes was a problem for me as a greek (greek way would be kee-vaer-nee-tees and it means governor). Getting my mind to use the english word tripped me out, and using the greek pronunciation didn't make sense to anybody.
Since trying to say "kubernetes" just trips my mind, I say kubernet (coobearnet). And use the same term "kubernet" for both "kubernetes" and the "k8s" shortcut.
Kubernet = "k for graph (eg k-anonymity, k-neighbor search), uber for awesome, net for network!" It's a very descriptive and easy to pronounce/memorize name that frees my mind from the greek pompousness.
PS. k0s, I would pronounce "Cause" or maybe... (to make a silly bloodborne reference) Kos, some say Kosm
As a native Hebrew speaker, when I was first exposed to Kubernetes years ago I could not overlook the resemblance to the Hebrew word "kbarnit", which has the same meaning. I'm sure the Greek word which inspired the authors and the Hebrew word share origins (I don't know which predates the other), but in this case the Hebrew word happens to sound closer to the common pronunciation, which helped me pick it up quite intuitively.
I read it as k-aughts and k8s as k-eights, and all my juniors did at first as well. These aren't great abbreviations; we just get used to them. My main issue with this project is that if you want to use k8s and are using it, you want some measure of control. Unless you are just using it "because", your business goals intersect with the k8s philosophy and you need some element of it. In every case I have deployed it in, there has been a lot of fine-tuning.
k8s always tripped me up, personally. I always mentally read it as "k-ubereats" (kind of like "k-means"), so it always effectively started with a "u" to me.
I don't understand why people seem to want K8s but then find it 'difficult'. It's really not that complex, and when you add stuff on to it, that is your own choice (just like with k3s and k0s). You essentially end up with a kubelet, an apiserver (with etcd) and two or three copy-and-paste YAML files (manifests) to set up a 'full' k8s cluster.
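For a sense of scale, the copy-and-paste YAML in question is on the order of a manifest like this minimal sketch (names and image are placeholders, not from the thread):

```yaml
# Minimal Deployment manifest; name, labels and image are made up
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: nginx:stable
          ports:
            - containerPort: 80
```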
If that is too much, is k8s even relevant to your case?
Say your workload is so small and so simple that having a minimum of 2 servers isn't what you want, then again, is k8s even relevant for your case?
Kubernetes does a whole lot more than just restart jobs when a node fails; in fact, of all its features, this is the one most people probably see in action the least.
Kubernetes centralizes all the traditional technical nonsense related to providing a robust environment to deploy applications to. I want Kubernetes even in a single node scenario because I want Kubernetes-like packaging, deployment and network services for any app I work on, as they are a net simplification of what was previously an ad-hoc world of init scripts, language-specific supervisors, logging, monitoring, etc. etc., and of rearchitecting an app deployment from scratch simply because its resource requirements increased.
If people want to continue trying to scale it down further, where is the harm in this? There are plenty of legitimate cases where it makes sense. There's no real limit to that work either; it's conceivable that, with the right implementation improvements (in k8s and the container runtime), it might eventually be possible to reuse the same deployment model even for extremely small devices.
A lot of people seem to miss that one of the primary points of using K8S is that it’s a PaaS. Instead of relying upon cloud provider based services one could use K8S constructs and make it easier to port applications and services across hosting providers. The lock-in factor is a point of nervousness not because people consider wanting to move from AWS but because it can be hard to port new features supporting providers concurrently if every developer immediately reaches for the provider’s message queue, e-mail service, batch processing pipeline, etc. It takes a lot of effort to learn these various services’ nuances and that’s not necessarily useful for everyone.
And while K8S maintenance is fairly involved, trying to do deployments and system administration in its absence (read: production grade applications packaged prior to containers) is expensive and extremely error-prone as well.
As much as it’s a pain to deal with a mangled K8S setup, it absolutely beats 20+ idiosyncratic applications wired up their own particular way with oddball service hacks on a snowflake server. That is a huge business liability, one that is reduced, at worst, to a few random containers in a container-based ecosystem of applications.
Again, what is it for? If you have a single server, what is K8S going to give you over say, a single static binary or a docker container?
K8S on a single node does nothing for you network-wise: you have no overlay network (because there is nothing to overlay), you have no ingress or egress (there is only 1 node, so no matter what you 'configure', your ingress and egress will be that same node), and it's unlikely to have enough resources to do something like a full application deployment with some big helm chart.
While I agree that you can (ab)use K8S as a runtime and packaging format, all of those big benefits are removed when you are running it on 1 node, except perhaps the fact that you can talk to the apiserver and define your jobs/tasks/pods the same way. But even then you'd only do that locally, because 'testing' in a dev or staging env that doesn't match prod is going to give you non-representative results.
Ever tried to host multiple apps on a single machine? Oh look, a custom Nginx config only one person understands. Oh look, some hacked up letsencrypt config only one person understands, etc etc.
> K8S on a single node does nothing for you network-wise
- Container IP auto-assignment
- Container security policy
- Container DNS management
- Ingress management ("custom Nginx config")
- "Environment that feels like a large network and doesn't change if moved to a large network"
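To make the "Ingress management" point concrete: on k8s the hand-rolled Nginx config becomes a short declarative manifest, along the lines of this sketch (hostname and service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com       # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service  # placeholder backend Service
                port:
                  number: 80
```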
> Ever tried to host multiple apps on a single machine?
Yep works fine. Has been for decades, even before containers existed. Guess what the sites-available and sites-enabled directories for apache and nginx are for.
> Oh look, a custom Nginx config only one person understands.
Just because you put it in a container doesn't mean it's no longer custom or that everyone suddenly understands it.
> Oh look, some hacked up letsencrypt config only one person understands, etc etc.
Plenty of people put their nasty hacks in containers and pod definitions and still nobody (or just one person) understands them. Packaging changes none of this; a dirty pod, container, or VM image is all still dirty.
> K8S on a single node does nothing for you network-wise
> - Container IP auto-assignment
So does docker, or even an uncontainerized bridge interface
> - Container security policy
So does Docker, or a plain cgroup
> - Container DNS management
Yep, that it does. But when you only have 1 node, what is the point?
> - Ingress management ("custom Nginx config")
Great, but besides moving complexity from your app to the infra it doesn't help at all on a single node. It actually gets worse: node goes down, everything goes down (app, fallback, load balancing, routing, security)
> - "Environment that feels like a large network and doesn't change if moved to a large network"
So unless you are doing some local development that you later on push to dev/prod, we're talking about feelings. Not much objective to say about that except that it exists.
> What part of this is difficult to understand?
All of it. Shoving complexity and responsibility around doesn't reduce it, and having people make bad software isn't less bad because of the runtime it runs on.
Kubernetes in prod is great, and the envs that go with it (like development and staging), sure. But when you run something in prod, and you need availability, scalability and a host of standardised facilities, then a single node or some magic 'it works by default' config is very far removed from real-world production.
We could argue this ad infinitum, K8s of course doesn't remove all proprietary elements from a solution but it is a huge step up. Speaking as an ex-Googler, it took over 10 years but I'm so happy the rest of the world finally has a standard like this, the world is a better place for it even though I at one point had to unlearn all my traditional sysadmin habits and immerse myself in an environment practising it successfully to finally understand.
Your original question was what is the point. These are the points. As for why not Docker, k8s network effects and strategy of its sponsors mean Docker is on a lifeline, everyone knows that.
Of course Docker is doomed ;-) We have CRI-O, gvisor etc. showing that it works fine without it as well. Someone implemented an OCI-compatible image runner in bash using standard cgroups; with a bit of luck we'll end up calling containers 'containers' and images 'images' instead of using docker's brand name.
Also, I'm not saying that k8s is bad, or using k8s as a practical API definition of the platform to target when packaging and configuring applications; I was aiming at the 'boo hoo k8s is too hard' tagline every "simple" version seems to hold on to.
One could also install standard K8S and remove the taint and run pods locally, same result.
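A sketch of that approach, assuming a kubeadm-style cluster (on older versions the taint key is `node-role.kubernetes.io/master` rather than `control-plane`):

```shell
# Let the control-plane node schedule ordinary workloads
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-

# From here on, pods land on the single local node
kubectl run hello --image=nginx:stable
```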
Has anyone used K3s or this? Are they a good alternative to Dokku? I'm looking for something to either deploy on a server to host all my side projects (though Dokku has been basically perfect so far), or on my home server for various things like Gitea, Nextcloud and random scripts. Is K3s/K0s a good idea for either of those?
I’m a fan of Hashicorp Nomad (coupled with Vault and Consul). It’s a matter of preference though; if you love the kubernetes abstractions and APIs you’re probably better off with k3s/k0s. I personally very much prefer Nomad's less obfuscating abstraction layers. This is coming from 2-3 years of managed k8s at work.
It does take a bit of effort to set up "properly": ideally each of the vault/consul/nomad servers should run on 3 separate instances for an HA quorum, for a total of 9 machines apart from the "client" machines you're actually running stuff on. For a casual homelab this is obviously severe overkill, and you don't need to do it. You can run everything perfectly fine as a single instance on the same machine.
Personally I went a bit nuts with HA, so I have 9 single-board computers apart from my workload runners. You really don't need to, though, if you're OK with downtime in the event of server restarts and maintenance.
This was one of the most surprising parts about Nomad for me.
I really want to like it, and for the most part I think it's a great experience, but if you're looking for a lightweight single-host PaaS-type thing, the minimum recommendation and default configuration for 3 servers is a bit wild.
I have Nomad running in single host mode on a DO box for personal stuff, and I get the distinct feeling I'm not really meant to be doing it.
Go read the best practices for Vault and the "best practices minimum" is even crazier. IME it's all fine to run all of these on a low-resource single instance in practice; it's just that the target audience for the "minimum configuration" isn't really your homelab tinkerer or even an SMB/small start-up. Consider that Hashicorp's revenue streams are consulting, enterprise licenses and, since recently, a hosted PaaS. If a penny-pinching user comes in with replication issues because of resource contention, they can just point to those docs. RedHat does the same thing with minimum recommendations for a lot of linux software that in practice will work perfectly fine on much smaller machines.
Your single host mode DO box is most likely perfectly fine. You just won't have the availability benefits during downtime of that instance but, well, duh.
I guess the one practical catch is that if you feel strongly about security and noisy-neighbor issues, you really should at least separate the Consul Server/Vault Server/Nomad Server out onto separate machines.
Just whatever you do, make sure to run an odd number of servers (1 is better than 2, 3 is better than 4), or you may run into split-brain grief. Clients are fine to scale as you see fit, as they don't partake in raft consensus. So you can have a single-instance "client/server", but if you add just a single new instance to schedule jobs on, make it a pure client.
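The odd-number advice is just raft quorum arithmetic; a quick illustrative sketch:

```python
def quorum(n: int) -> int:
    """Votes needed for a raft majority among n servers."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """How many servers can fail while a majority survives."""
    return n - quorum(n)

# 2 servers tolerate 0 failures (worse than 1 in practice: two machines
# that can break the quorum), and 4 tolerate only 1, same as 3.
for n in (1, 2, 3, 4, 5):
    print(n, "servers ->", tolerated_failures(n), "failures tolerated")
```

Hence 1 over 2 and 3 over 4: the even member adds cost without adding fault tolerance.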
I have a dozen Nomad clients and about 20 jobs at home. My active Nomad leader is yet to use over 300 MiB RAM, 10% CPU or 0.2 load on RPi-equivalents, even during leader rotation and restarts. It's only when you start to have very high throughput of evaluations and allocations and start to put pressure on raft that you may need to increase your raft_multiplier to make it more relaxed with timeouts.
Tl;dr take the minimum configuration advice with a heap of salt.
For home-use, I think you could run your servers without issues (apart from that SD cards are not great at all for root or raft storage) on $20 RK3328 SBCs like a NanoPi NEO3. I may scale down to something like this at some point, given what I have now (Rpi4B-equivalents) is pretty overkill.
3 servers is always going to be the minimum recommended production configuration for any highly available system (container schedulers, databases, search indexes, etc). You can run Nomad perfectly fine just by itself on a single server, you don't even need Consul in some cases.
I am biased as the original creator, but take a look at OpenFaaS -> https://docs.openfaas.com - there are a few things that people may enjoy who would turn to Dokku otherwise.
Deploys occasionally failing and leaving behind a lock file, requiring manual intervention.
Let’s Encrypt integration not working properly without dokku-event-listener, which must be built & installed from source and is basically undocumented.
At the time I used it, blue/green deployments were not available. (I see this is now supported, great to see that.)
To be fair, I didn’t report any of these issues, so I may have done something wrong or there may have been solutions that I missed. This was also 6+ months ago.
I also think that with all the major cloud providers now supporting Kubernetes as a first-class primitive, and with there being an abundance of CI/CD tools that can trivially deploy to Kubernetes, the PaaS concept is no longer as attractive as it once was.
> Deploys occasionally failing and leaving behind a lock file, requiring manual intervention.
Interesting, would have been good to have these reported. Aside, unlocking an app should just be an `apps:unlock` call.
> Let’s Encrypt integration not working properly without dokku-event-listener, which must be built & installed from source and is basically undocumented.
That shouldn't be the case - the letsencrypt plugin was available long before the dokku-event-listener project was, and the latter has been built/released automatically since April[1] (though it didn't have documentation on its internals till recently, as that wasn't really necessary for end users since there isn't any configuration).
> At the time I used it, blue/green deployments were not available. (I see this is now supported, great to see that.)
Not sure what you're referencing here. We don't have anything specific for blue/green deployments, but we have had zero-downtime checks for a while. If you can point out to me where you found this (or what you were expecting) that would be awesome :)
> To be fair, I didn’t report any of these issues, so I may have done something wrong or there may have been solutions that I missed. This was also 6+ months ago.
Reporting issues is always appreciated, as it's almost certain there were bugs on our end, but it seems you've found your solution :)
> I also think that with all the major cloud providers now supporting Kubernetes as a first-class primitive, and with there being an abundance of CI/CD tools that can trivially deploy to Kubernetes, the PaaS concept is no longer as attractive as it once was.
I don't think this is quite true. Kubernetes is still a large system that doesn't come "batteries included", and every setup is at least subtly different from the next. I think a large number of teams spend a disproportionate amount of time configuring their preferred toolchain - and then maintaining it - once you get to a certain scale. Additionally, a kubernetes manifest is much lower-level than is necessary for the general app developer who just wants to get things done.
I'm personally hoping that Kubernetes distributions begin to mature more (Tanzu and Openshift are examples, but not the only ones) so that folks start worrying less about operating and maintaining clusters and more on either the underlying infra or the end developer experience.
K3s is definitely more lightweight than K8s, but it still requires non-negligible resources. Compared to just a Docker engine, it takes more memory and CPU, which can be noticeable on smaller servers, Raspberry Pi, and such.
K3s needs 500MB of RAM on a server and 50MB on an agent. That's the lowest we've ever seen for Kubernetes and Darren is planning on getting something working on 256MB SoC devices next.
If you want to go ultra-lean, and only need a single node RPi or VPS (many applications don't need massive scale-out) then take a look at what we've been building with "faasd" and containerd - https://github.com/openfaas/faasd
I'd make systemd unit files around the docker-compose deployments. Upload everything with something like Ansible. If you want to go perfect go for NixOS instead, but that might be controversial.
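A hypothetical unit along those lines (unit name, paths and the compose binary location are all placeholders):

```ini
# /etc/systemd/system/myapp.service: wraps a docker-compose project
[Unit]
Description=myapp via docker-compose
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
```

Enable with `systemctl enable --now myapp.service`; Ansible then only needs to upload the compose file and the unit.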
I think the main question is: do you need orchestration? I.e., scheduling workloads dynamically across different machines. If yes, then k8s/Nomad/Swarm. If no, and you want minimal overhead, then the above is the answer.
I’d also suggest Podman as a Docker alternative, especially for these scenarios. Images and Dockerfiles (and many commands) are 1-to-1 compatible but there are some nice things especially with this scenario.
I wouldn’t say NixOS is controversial, at all, but it’s a very steep learning curve.
"restart: always" and "restart: unless-stopped" both work fine after reboot on my system. (Although Docker recommends managing it using systemd or upstart if you want more granular control).
This is for Docker, right? Not Compose? Though I guess Compose would just set the flag in Docker, in which case it'd do the same thing. I don't want more granularity than "unless stopped", basically, so this is ideal, thanks!
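For reference, in a Compose file that's a one-liner per service (service name and image are placeholders):

```yaml
services:
  web:
    image: nginx:stable
    # auto-restarts and survives reboots, but honors a manual `docker stop`
    restart: unless-stopped
```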
Docker compose is certainly enough for personal use, but if you ever want your apps to stay available during the update process (with docker compose you'll have to shut down your app container before you can update it), it's very easy to set up a basic blue/green deployment using the readiness probe option in kubernetes.
Also, the kubernetes load balancer is pretty easy to set up and plays well with multiple services and letsencrypt. With docker compose, I'd have to either use nginx and update it every time I add more apps, or use something like traefik if I want a load balancer that can read its configuration from the apps' labels. On kubernetes you simply specify several lines of ingress definition in your yaml file, which is not too complex for a typical app.
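The readiness-probe part mentioned above is only a few lines in the pod spec; a sketch (image, path and port are placeholders):

```yaml
# Deployment pod-spec excerpt: during a rolling update, traffic only
# shifts to a new pod once this probe starts passing
containers:
  - name: app
    image: example/app:1.2.3   # placeholder image
    readinessProbe:
      httpGet:
        path: /healthz         # placeholder health endpoint
        port: 8080
      initialDelaySeconds: 2
      periodSeconds: 5
```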
I doubt it's going to get traction at this late stage, but I've got a soft spot in my heart for Flockport https://thenewstack.io/flockport-time-to-start-all-over-agai... attempting to use OS-native features instead of reinventing the wheel with another massive abstraction layer (cough Docker cough k8s)
Are you hosting any databases in Dokku, or do you keep data outside? For a very small PaaS keeping the DB on a separate node feels like the most reliable option.
I used to host them in Dokku but switched to outside because it's just so much more convenient to have everything in one database. I keep it on the same node, but yes, definitely outside. I have one redis outside too, rather than one per project.
The name seems to imply this project is 3x smaller than K3S[0]. Does it use less memory or CPU? If this is all about removing external dependencies, I don't think it's worth creating an entire new project to avoid installing something like `socat`.
Technically, neither the limit nor the full progression is defined by the information available. The set {k8, k3, k0} may equally be represented as a decreasing series of container orchestrators with decreasing prime differences.
k3s recently switched to embedded etcd, so if you want something very lightweight out of the box (yet with HA), try Canonical's microk8s; it has dqlite support.
What differentiates this from k3s? Seems like there's a lot of overlap. Would be great if there was a quick summary of why someone would use this instead of k3s/microk8s/minikube/etc.
Based on the medium post: Is FIPS compliance a good thing? I was under the impression that FIPS sometimes... lagged... modern ideas about security, such that if you weren't for some reason mandated to use it then it was either neutral or bad.
I would be interested in hearing more how this works. Is there something like the Cluster Autoscaler and an HPA that scales up control plane pod and nodes?
I'm curious if the author(s) considered leveraging Hyperkube for this project and why or why not if so.
That was there when I added it; I think I assumed it was just a citation for it being referred to as 'k8s'. I left the citation there and added ', a [[numeronym]]' to '(commonly stylized as k8s)'.
:shrug: I thought it was pretty innocuous, succinctly explaining it and linking to a page with more examples for those curious.
I suppose the pace would be that much slower, but I really wish Wikipedia sometimes had more of a 'maintainer'/'change request' model (a la GitHub etc., or even mailing lists), so edits had to actually be justified and approved.
StackExchange has a reasonable compromise, I suppose: you're automatically a 'maintainer' once you reach a certain (quite low) amount of karma, and until then your changes are proposed, can be (but generally aren't) justified, and are approved by someone who does have sufficient points; you then get some points if it's approved.
Yes, I think link [1] is better at showing its history. Personally I was exposed to e10s from Firefox before learning about the whole a11y thing and all the other xNumberx terms.
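The xNumberx pattern is mechanical enough to write down; a toy sketch:

```python
def numeronym(word: str) -> str:
    """First letter + count of middle letters + last letter, e.g. kubernetes -> k8s."""
    if len(word) <= 3:
        return word  # nothing worth abbreviating
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("kubernetes"))     # k8s
print(numeronym("accessibility"))  # a11y
print(numeronym("electrolysis"))   # e10s (Firefox's multi-process project)
```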