Has anyone used K3s or this? Are they a good alternative to Dokku? I'm looking for something to either deploy on a server to host all my sideprojects (though Dokku has been basically perfect so far), or on my home server for various things like Gitea, Nextcloud and random scripts. Is K3s/K0s a good idea for either of those?
I’m a fan of Hashicorp Nomad (coupled with Vault and Consul). It’s a matter of preference though: if you love the Kubernetes abstractions and APIs you’re probably better off with k3s/k0s. I personally much prefer Nomad’s less obfuscating abstraction layers. This is coming from 2-3 years of managed k8s at work.
It does take a bit of effort to set up “properly”: ideally each of the Vault/Consul/Nomad servers should run on 3 separate instances for HA quorum, for a total of 9 machines apart from the instances (“clients”) you’re actually running stuff on. For a casual homelab this is obviously severe overkill, and you don’t need to do it. You can run everything perfectly fine as a single instance on the same machine.
Personally I went a bit nuts with HA, so I have 9 single-board computers apart from my workload runners. You really don’t need to, though, if you’re OK with downtime during server restarts and maintenance.
This was one of the most surprising parts about Nomad for me.
I really want to like it, and for the most part I think it's a great experience, but if you're looking for a lightweight single-host PaaS-type thing, the recommended minimum and default configuration of 3 servers is a bit wild.
I have Nomad running in single host mode on a DO box for personal stuff, and I get the distinct feeling I'm not really meant to be doing it.
Go read the best practices for Vault and the "best practices minimum" is even crazier. IME it's perfectly fine in practice to run all of these on a single low-resource instance; it's just that the target audience for the "minimum configuration" isn't really your homelab tinkerer or even an SMB/small start-up. Consider that Hashicorp's revenue streams are consulting, enterprise licenses and, more recently, a hosted PaaS. If a penny-pinching user comes in with replication issues caused by resource contention, they can just point to those docs. RedHat does the same thing with minimum recommendations for a lot of Linux software that in practice works perfectly fine on much smaller machines.
Your single host mode DO box is most likely perfectly fine. You just won't have the availability benefits during downtime of that instance but, well, duh.
I guess the one practical catch is that if you feel strongly about security and noisy-neighbor issues, you really should at least separate the Consul server, Vault server and Nomad server onto separate machines.
Just whatever you do, make sure to run an odd number of servers (1 is better than 2, 3 is better than 4), or you may run into split-brain grief. Clients are fine to scale as you see fit, since they don't partake in Raft consensus. So you can have a single "client/server" instance, but if you add even one new instance to schedule jobs on, make it a pure client.
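To illustrate the split (hostnames, paths and the server address here are placeholders, not from the thread): a single combined server+client agent config, and a pure-client config for every additional machine, might look roughly like this:

```hcl
# /etc/nomad.d/server.hcl -- the one combined server+client node
datacenter = "home"
data_dir   = "/opt/nomad"

server {
  enabled          = true
  bootstrap_expect = 1   # single-server quorum; use 3 (never 2) for HA
}

client {
  enabled = true
}

# /etc/nomad.d/client.hcl -- every extra machine: pure client, no Raft
# datacenter = "home"
# data_dir   = "/opt/nomad"
#
# client {
#   enabled = true
#   servers = ["10.0.0.10:4647"]  # assumed address of the server node
# }
```

Only agents with `server { enabled = true }` vote in Raft, which is why the odd-number rule applies to servers and not clients.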
I have a dozen Nomad clients and about 20 jobs at home. My active Nomad leader has yet to use over 300 MiB RAM, 10% CPU or 0.2 load on RPi-equivalents, even during leader rotation and restarts. It's only when you have a very high throughput of evaluations and allocations, and start to put pressure on Raft, that you may need to increase your raft_multiplier to make it more relaxed about timeouts.
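For reference, that knob lives in the server stanza of the agent config (the value below is just an example; 1 is the default and most aggressive, 10 the maximum):

```hcl
server {
  enabled          = true
  bootstrap_expect = 3

  # Scales Raft election/heartbeat timeouts up for slow hardware or
  # contended disks. Higher = more tolerant of latency, slower failover.
  raft_multiplier = 5
}
```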
Tl;dr take the minimum configuration advice with a heap of salt.
For home use, I think you could run your servers without issues (apart from the fact that SD cards are not great at all for root or Raft storage) on $20 RK3328 SBCs like a NanoPi NEO3. I may scale down to something like this at some point, given that what I have now (RPi4B-equivalents) is pretty overkill.
3 servers is always going to be the minimum recommended production configuration for any highly available system (container schedulers, databases, search indexes, etc.). You can run Nomad perfectly fine just by itself on a single server; you don't even need Consul in some cases.
I am biased as the original creator, but take a look at OpenFaaS -> https://docs.openfaas.com - there are a few things that people may enjoy who would turn to Dokku otherwise.
Deploys occasionally failing and leaving behind a lock file, requiring manual intervention.
Let’s Encrypt integration not working properly without dokku-event-listener, which must be built & installed from source and is basically undocumented.
At the time I used it, blue/green deployments were not available. (I see this is now supported, great to see that.)
To be fair, I didn’t report any of these issues, so I may have done something wrong or there may have been solutions that I missed. This was also 6+ months ago.
I also think that with all the major cloud providers now supporting Kubernetes as a first-class primitive, and with there being an abundance of CI/CD tools that can trivially deploy to Kubernetes, the PaaS concept is no longer as attractive as it once was.
> Deploys occasionally failing and leaving behind a lock file, requiring manual intervention.
Interesting, would have been good to have these reported. Aside, unlocking an app should just be an `apps:unlock` call.
> Let’s Encrypt integration not working properly without dokku-event-listener, which must be built & installed from source and is basically undocumented.
That shouldn't be the case - the letsencrypt plugin was available long before the dokku-event-listener project was, and the latter has been built/released automatically since April[1] (though it didn't have documentation on its internals till recently, as that wasn't really necessary for end users since there isn't any configuration).
> At the time I used it, blue/green deployments were not available. (I see this is now supported, great to see that.)
Not sure what you're referencing here. We don't have anything specific for blue/green deployments, but we have had zero-downtime checks for a while. If you can point out to me where you found this (or what you were expecting) that would be awesome :)
> To be fair, I didn’t report any of these issues, so I may have done something wrong or there may have been solutions that I missed. This was also 6+ months ago.
Reporting issues is always appreciated, as it's almost certain there were bugs on our end, but it seems you've found your solution :)
> I also think that with all the major cloud providers now supporting Kubernetes as a first-class primitive, and with there being an abundance of CI/CD tools that can trivially deploy to Kubernetes, the PaaS concept is no longer as attractive as it once was.
I don't think this is quite true. Kubernetes is still a large system that doesn't come "batteries included", and every setup is at least subtly different from the next. I think a large number of teams spend a disproportionate amount of time configuring their preferred toolchain - and then maintaining it - once you get to a certain scale. Additionally, a kubernetes manifest is much lower-level than is necessary for the general app developer who just wants to get things done.
I'm personally hoping that Kubernetes distributions begin to mature more (Tanzu and Openshift are examples, but not the only ones) so that folks start worrying less about operating and maintaining clusters and more on either the underlying infra or the end developer experience.
K3s is definitely more lightweight than K8s, but it still requires non-negligible resources. Compared to just a Docker engine, it takes more memory and CPU, which can be noticeable on smaller servers, Raspberry Pi, and such.
K3s needs 500MB of RAM on a server and 50MB on an agent. That's the lowest we've ever seen for Kubernetes and Darren is planning on getting something working on 256MB SoC devices next.
If you want to go ultra-lean, and only need a single node RPi or VPS (many applications don't need massive scale-out) then take a look at what we've been building with "faasd" and containerd - https://github.com/openfaas/faasd
I'd wrap the docker-compose deployments in systemd unit files. Upload everything with something like Ansible. If you want to go perfect, go for NixOS instead, but that might be controversial.
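A minimal sketch of such a unit (the service name, paths and project directory are placeholders; this assumes the standalone `docker-compose` binary — adjust ExecStart/ExecStop if you use the `docker compose` plugin):

```ini
# /etc/systemd/system/myapp.service  (hypothetical name)
[Unit]
Description=myapp via docker-compose
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker-compose up
ExecStop=/usr/bin/docker-compose down
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` gives you start-on-boot and restart-on-failure without any orchestrator.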
I think the main question is: do you need orchestration, i.e. scheduling workloads dynamically across different machines? If yes, then k8s/Nomad/Swarm. If no, and you want minimal overhead, then the above is the answer.
I’d also suggest Podman as a Docker alternative, especially for these scenarios. Images and Dockerfiles (and many commands) are 1-to-1 compatible, but there are some nice extras, especially in this scenario.
I wouldn’t say NixOS is controversial, at all, but it’s a very steep learning curve.
"restart: always" and "restart: unless-stopped" both work fine after reboot on my system. (Although Docker recommends managing it using systemd or upstart if you want more granular control).
This is for Docker, right? Not Compose? Though I guess Compose would just set the flag in Docker, in which case it'd do the same thing. I don't want more granularity than "unless stopped", basically, so this is ideal, thanks!
Docker Compose is certainly enough for personal use, but if you ever want your apps to stay available during updates (with Docker Compose you have to shut down your app container before you can update it), it's very easy to set up a basic blue/green-style deployment using the readiness probe option in Kubernetes.
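A minimal sketch of that (the names, image, port and health path are placeholders): a Deployment whose rolling update keeps old pods serving until new pods pass their readiness probe.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp              # placeholder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0    # never take old pods down before new ones are ready
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest      # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:          # traffic only flows once this passes
            httpGet:
              path: /healthz       # assumed health endpoint
              port: 8080
```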
Also, the Kubernetes load balancer is pretty easy to set up and plays well with multiple services and Let's Encrypt. With Docker Compose I'd have to either use nginx and update it every time I add more apps, or use something like Traefik if I want a load balancer that can read configuration from an app's labels. On Kubernetes you simply specify several lines of Ingress definition in your YAML file, which is not too complex for a typical app.
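Those several lines of Ingress definition might look like this (the hostname, service name and cert-manager annotation are assumptions — the annotation only does anything if you run cert-manager for Let's Encrypt):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt  # assumes cert-manager is installed
spec:
  rules:
    - host: myapp.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp          # placeholder service
                port:
                  number: 8080
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls          # cert stored here once issued
```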
I doubt it's going to get traction at this late stage, but I've got a soft spot in my heart for Flockport https://thenewstack.io/flockport-time-to-start-all-over-agai... attempting to use OS-native features instead of reinventing the wheel with another massive abstraction layer (cough Docker cough k8s)
Are you hosting any databases in Dokku, or do you keep data outside? For a very small PaaS keeping the DB on a separate node feels like the most reliable option.
I used to host them in Dokku but switched to outside because it's just so much more convenient to have everything in one database. I keep it on the same node, but yes, definitely outside. I have one redis outside too, rather than one per project.