AFAIK every NAS just uses unauthenticated connections to pull containers; I’m not sure how many even let you log in (which would raise the limit to a whopping 40 per hour).
So hopefully systems like /r/unRAID handle the throttling gracefully when you click “update all”.
Anyone have ideas on how to set up a local Docker Hub proxy to keep the most common containers on-site instead of hitting Docker Hub every time?
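The client side at least seems straightforward: Docker’s registry-mirrors daemon setting can point Hub pulls at a local pull-through cache first. Rough sketch (the mirror hostname is made up):

```
# Point the Docker daemon's Hub pulls at a local pull-through cache
# ("mirror.lan:5000" is a placeholder for whatever host runs the cache)
# (merge this into an existing daemon.json rather than overwriting it)
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["http://mirror.lan:5000"]
}
EOF
sudo systemctl restart docker
```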
If only they used a distributed protocol like IPFS, we wouldn’t be in this situation.
How long did this take after they got an Oracle CEO?
Did they really? Oh my god, please tell me you’re joking, that a company as modern as Docker got a freaking Oracle CEO. They pulled a Jack Barker. Did he bring his conjoined triangles of success?
A “Jack Barker” 🤣
Forgejo gives you a registry built-in.
Also, is it just me or does the Docker Hub logo look like it’s giving us the middle finger?
https://distribution.github.io/distribution/ is an open-source implementation of a registry.
You could also self-host something like GitLab, which bundles this, or Sonatype Nexus, which can serve as a repository for several kinds of artifacts, including container images.
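For the on-site caching part, the Distribution registry linked above can run in proxy (pull-through cache) mode. A minimal sketch, with the port, storage path, and container name picked arbitrarily:

```
# Run the Distribution registry as a pull-through cache for Docker Hub
docker run -d --name hub-cache --restart=always \
  -p 5000:5000 \
  -v /srv/registry-cache:/var/lib/registry \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Optionally let the cache itself authenticate against Docker Hub:
#   -e REGISTRY_PROXY_USERNAME=<hub user> -e REGISTRY_PROXY_PASSWORD=<access token>
```

Clients then point registry-mirrors at that host. Note that, as far as I know, a registry running in proxy mode is pull-only, so pushing your own images still needs a separate registry.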
Gitea, and therefore Forgejo, also has container registry functionality; I use it for private builds.
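Pushing a build there looks roughly like this (the hostname, user, and image names are placeholders):

```
# Log in to the instance's built-in container registry, then tag and push
docker login forgejo.example.com
docker build -t forgejo.example.com/myuser/myapp:1.0 .
docker push forgejo.example.com/myuser/myapp:1.0
```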
Jumping on the Forgejo love train.
Is there a project that acts like a registry, proxies requests with a TTL, and also lets you push images to it?
Almost all of them. Forgejo already handles containers, for example.