I am confused: Can someone summarize what Linux Docker is?


DrStein99

If it does not exist? I am probably building it.
Feb 3, 2018
New Jersey, USA
I never knew what Docker is or what it does until I followed a confusing tutorial that explained how to set up xmr-stak in Docker on Linux. Unfortunately, that first exposure only confused me. I thought I had to run Docker just to be able to run xmr-stak, and I could not work out what Docker was actually doing or whether it was affecting my performance testing. So I went back to my usual Linux routine and have been running xmr-stak as a system service for the last month.

Can someone explain, in just a few lines, what Docker is generally used for and how (or if) it would make my tasks easier for a miner or any other software? I tried to figure this out on my own, but I was overwhelmed and confused by everything my Google searches have turned up so far.
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
The $0.02 summary is:
Docker lets you package up an application with all the bits n bobs it needs to run.

Why would you care? It lets you avoid installing wonky compilers or support libraries just to run one application, and then lets you deploy that application and all of its supporting pieces far more easily. I can take Patrick's xmr-stak miner, for example, and run it on any of my Linux boxes from the past 4 or 5 years - CentOS 6 and 7, Ubuntu 14 and 16 - nothing special is needed to adapt to the various old library versions on each platform to get it working; just docker pull the image and run it.
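As a quick illustration of that pull-and-run workflow (using the stock hello-world test image from Docker Hub rather than the actual miner image):

  docker pull hello-world        # fetch the image and everything it needs from Docker Hub
  docker run --rm hello-world    # run it; --rm discards the container once it exits

The same two commands work unchanged across distributions, because the image carries its own userland.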

From a practical standpoint it is more helpful for complex applications than for a simple program like most miners, but it is still super easy to build and share Docker images (or other container formats) with a wide audience, with far less work than trying to maintain packages for dozens of distributions and versions of each.
 

Patrick

Administrator
Staff member
Dec 21, 2010
That is a good summary.

Think of Docker as a great way to package an application with dependencies. Our new universal cryptonight container can do Monero, ETN, Turtle, Sumo and Aeon.
https://forums.servethehome.com/index.php?threads/docker-xmrig-cryptonight-universal.18579/
You do not have to worry about compiler versions, libraries that change names over time, and so on. You can run the container on any Linux server and still use gcc 7.x inside it without having to install that version on the host system.
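For instance (using the official gcc image from Docker Hub as a stand-in for the cryptonight container), you get a gcc 7 toolchain without touching the host:

  docker run --rm gcc:7 gcc --version    # gcc 7.x, from inside the container
  gcc --version                          # whatever version (if any) the host itself has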

Docker also allows you to deploy and manage applications on clusters easily. It has logging and other tools built in. When you want to remove the application, you can remove the container and not have to worry about remnants left on the system. You can upgrade the base OS and not worry about breaking the miner.
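A rough sketch of that management side (the container and image names here are just placeholders):

  docker logs -f miner     # built-in log handling
  docker stop miner
  docker rm miner          # removes the container; nothing is left scattered around the host
  docker rmi miner-image   # optionally remove the image itself as well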
 
  • Like
Reactions: DrStein99

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Docker is a container system. Containerisation is a pretty age-old concept, one that existed on x86 long before virtualisation was popular. If you've ever set up software within a chroot jail on a UNIX system, you've used a simple form of container.

chroot jails were initially a good way of adding some extra security to services exposed to the network, such as DNS and mail servers. You would have a directory tree containing only the files needed to get the service to run, and then even if the service was hacked, all the attacker would have access to was the chrooted directory tree.

Docker took the same concept and ran with it, thanks to advances in the Linux kernel allowing wholly separated userspace contexts, called namespaces. Essentially, you can take a bundle of files needed to run one of these services and run it under the kernel in a wholly separated context with no access to any other files or running processes (although of course a privilege escalation attack against the kernel is still a concern).
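You can poke at the namespace machinery without Docker at all, e.g. with util-linux's unshare (run as root; this is just a minimal demonstration):

  sudo unshare --pid --fork --mount-proc bash    # new PID and mount namespaces
  ps aux                                         # only bash and ps are visible from in here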

Practically, this means you can provide relatively complex applications (esp. those that might require bleeding-edge or outdated files to run) in a single bundle, along with completely ringfenced runtime dependencies that will be unaffected by upgrades to the OS. For example, application foo might require the library libbar.so.4, but your OS might have upgraded to libbar.so.9 a long time ago and the newer version is not backwards compatible. In this case, you'd package libbar.so.4 into your container and configure foo to use that instead of the OS version.
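Continuing that (made-up) foo/libbar example, the container build might look something like this - the package names and the base image are purely illustrative:

  cat > Dockerfile <<'EOF'
  FROM debian:8
  # this older release still ships the old library, so foo gets libbar.so.4
  RUN apt-get update && apt-get install -y foo libbar4 && rm -rf /var/lib/apt/lists/*
  CMD ["foo"]
  EOF
  docker build -t foo:oldlibbar .
  docker run --rm foo:oldlibbar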

On the plus side, this means that containerised applications are very unlikely to be broken by OS upgrades (since their files remain untouched within the container); on the downside, it means you need to maintain a separate patching methodology for your containers - and there's a high likelihood that bits of software kept around for backwards-compatibility reasons will never receive an update... anyone who's worked in an enterprise environment will be all too aware of the ancient versions of Apache, Tomcat, WebLogic and Oracle that are "bundled" with applications and can't be upgraded separately without voiding your $upport contract.
 
  • Like
Reactions: leebo_28

i386

Well-Known Member
Mar 18, 2016
Germany
Can someone explain, summarize in just a few lines what the docker is generally used for, how / if it would make my tasks easier for miner or any other software ?
Docker is "virtualization on os level", it gives acces to the linux apis to the applications and doesn't emulate hardware (= almost no overhead & extreme small filesize compared to vms). Downside: you are "limited" to linux.
It can be used to virtualize applications (like databases, (web)servers, and a lot more), usually one application per contianer (the application "vm").
The advantages are the same as with vms (isolation, easier updates/upgrades etc.) + less overhead + smaller filesizes.
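For example, one application per container, using stock images from Docker Hub (the names, tags and password are just placeholders):

  docker run -d --name db  -e POSTGRES_PASSWORD=changeme postgres:10
  docker run -d --name web -p 8080:80 nginx:latest
  docker ps    # two isolated application "VMs", each with its own filesystem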
 

MBastian

Active Member
Jul 17, 2016
Düsseldorf, Germany
Docker is "virtualization on os level" ...
Downside: you are "limited" to linux.
I beg to differ; Docker is little more than process encapsulation.
Downside: it is not possible to migrate a running container to another node. That is trivial to live with for load-balanced, frontend-ish or stateless microservice containers/pods, but a real pain in the posterior for databases and such. Yes, you can virtualize the nodes to get live migration capabilities; personally I'm not a fan, and it won't work in most public clouds anyway.
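You can see the "it's just a process" point on any Docker host (nginx here is only an example workload):

  docker run -d --name demo nginx:latest
  docker top demo         # the container's processes as Docker reports them
  ps aux | grep nginx     # ...and the very same processes in the host's process table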
 

gea

Well-Known Member
Dec 31, 2010
DE
I would describe Docker as a combination of two ideas.

The first is lightweight virtualisation with Linux (LX) containers.
If you install 10 Linux servers for different use cases and applications and compare them, you will find that they are 90% or more identical. The idea is to offer a system where that common 90% is shared, with a container for each application that holds only the differing 10%.

The second is a repository and a way to deploy ready-to-use applications, together with their Linux environment, as a container - and this is Docker.
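A rough sketch of that second idea, assuming a hypothetical image someuser/miner built on top of an Ubuntu base image:

  docker pull ubuntu:16.04        # the shared part: a common base image
  docker pull someuser/miner      # hypothetical app image; if it is built FROM ubuntu:16.04, only its extra layers are fetched
  docker history someuser/miner   # shows the layers the image is stacked from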

Read this nice blog post about the two ideas:
Triton: Docker and the “best of all worlds” | Joyent
 
  • Like
Reactions: lowfat