I am confused: Can someone summarize what Linux Docker is?

Discussion in 'Docker and Containers' started by DrStein99, Feb 11, 2018.

  1. DrStein99

    DrStein99 Member

    Joined:
    Feb 3, 2018
    Messages:
    86
    Likes Received:
    4
    I never knew what Docker was or what it did until I followed a confusing tutorial that explained how to set up xmr-stak in Docker on Linux. Unfortunately it just confused me further. I thought I had to run Docker in order to run xmr-stak at all, and I could not fully understand what Docker was doing or whether it was impacting any of my performance testing. I just fell back to my usual Linux routine and now run xmr-stak as a system service (for the last month).

    Can someone explain or summarize, in just a few lines, what Docker is generally used for and how (or if) it would make my tasks easier for a miner or any other software? I tried to figure this out on my own, but I was overwhelmed and confused by what I found in my Google searches so far.
     
    #1
  2. Blinky 42

    Blinky 42 Active Member

    Joined:
    Aug 6, 2015
    Messages:
    456
    Likes Received:
    152
    The $0.02 summary is:
    Docker lets you package up an application with all the bits n bobs it needs to run.

    Why would you care? It lets you avoid installing wonky compilers or support libraries just to run one application, and then lets you deploy that application and all its supporting pieces much more easily. I can take Patrick's xmr-stak miner, for example, and run it on any of my Linux boxes from the past 4 or 5 years - CentOS 6 and 7, Ubuntu 14.04 and 16.04 - nothing special is needed to adapt to the various old library versions on each platform; just docker pull it and run it.

    From a practical standpoint it is more helpful for complex applications than for a simple program like most miners, but it is still super easy to build and share Docker images (or other container formats) with a wide audience, with less work than trying to maintain packages for dozens of distributions and versions of each.
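    The whole workflow really is that short; a sketch with a placeholder image name (the real repo will differ):

        # grab a prebuilt image and run it - the same two commands work
        # unchanged on CentOS 6/7, Ubuntu 14.04/16.04, whatever
        docker pull example/xmr-stak
        docker run -d --name miner example/xmr-stak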
     
    #2
    DrStein99 and Patrick like this.
  3. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,045
    Likes Received:
    3,996
    That is a good summary.

    Think of Docker as a great way to package an application with its dependencies. Our new universal CryptoNight container can do Monero, ETN, Turtle, Sumo and Aeon.
    https://forums.servethehome.com/index.php?threads/docker-xmrig-cryptonight-universal.18579/
    You do not have to worry about compiler versions, libraries that change names over time, and so on. You can run the containers on any Linux server and still use binaries built with gcc 7.x, without having to install that compiler version on the system.

    Docker also allows you to deploy and manage applications on clusters easily. It has logging and other tools built in. When you want to remove an application, you can remove the container and not have to worry about remnants left in the system. You can upgrade the base OS and not have to worry about breaking the miner.
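    A minimal sketch of that lifecycle (image name is a placeholder for whatever you run):

        # start detached, follow the built-in log capture, then remove cleanly
        docker run -d --name miner example/universal-cryptonight
        docker logs -f miner
        # removing the container leaves no remnants on the host
        docker rm -f miner
        docker rmi example/universal-cryptonight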
     
    #3
    DrStein99 likes this.
  4. DrStein99

    DrStein99 Member

    Joined:
    Feb 3, 2018
    Messages:
    86
    Likes Received:
    4
    Ok, sounds good. I will start learning how to use that, thank you.
     
    #4
  5. EffrafaxOfWug

    EffrafaxOfWug Radioactive Member

    Joined:
    Feb 12, 2015
    Messages:
    669
    Likes Received:
    233
    Docker is a container system. Containerisation is a pretty age-old concept, one that existed on x86 long before virtualisation was popular. If you've ever set up software within a chroot jail on a UNIX system, you've used a simple form of containers.

    chroot jails were initially a good way of adding some extra security to services exposed to the network, such as DNS and mail servers. You would have a directory tree containing only the files needed to get the service to run, and then even if the service was hacked, all the attacker would have access to was the chrooted directory tree.
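    For a feel of what that looks like, a minimal jail sketch (paths are for a typical x86_64 Debian/Ubuntu box; adjust for yours):

        # build a tiny root filesystem containing just a shell and its libraries
        mkdir -p /srv/jail/bin /srv/jail/lib/x86_64-linux-gnu /srv/jail/lib64
        cp /bin/sh /srv/jail/bin/
        cp /lib/x86_64-linux-gnu/libc.so.6 /srv/jail/lib/x86_64-linux-gnu/
        cp /lib64/ld-linux-x86-64.so.2 /srv/jail/lib64/
        # the shell started here can only see files under /srv/jail
        sudo chroot /srv/jail /bin/sh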

    Docker took the same concept and ran with it, thanks to advances in the Linux kernel that allow wholly separated userspace contexts, called namespaces. Essentially, you can take a bundle of files needed to run one of these services and run it under the kernel as a wholly separated user with no access to any other files or running processes (although of course a privilege escalation attack against the kernel is still a concern).
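    You can poke at namespaces directly with util-linux's unshare to see the separation for yourself:

        # start a shell in its own PID namespace (needs root)
        sudo unshare --pid --fork --mount-proc /bin/sh
        # inside, ps sees only this shell - the host's processes are
        # invisible, exactly like a process inside a container
        ps aux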

    Practically, this means you can provide relatively complex applications (especially those that might require bleeding-edge or outdated files to run) in a single bundle, along with completely ringfenced runtime dependencies that will be unaffected by upgrades to the OS. For example, application foo might require the library libbar.so.4, but your OS might have upgraded to libbar.so.9 long ago and the newer version is not backwards compatible. In this case, you'd package libbar.so.4 into your container and configure foo to use that instead of the OS version.
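    A sketch of what bundling that might look like (foo and libbar are hypothetical, as above):

        # Dockerfile - ship the old library inside the image
        FROM ubuntu:16.04
        COPY libbar.so.4 /opt/foo/lib/
        COPY foo /opt/foo/bin/
        # point the loader at the bundled copy, not the OS's libbar.so.9
        ENV LD_LIBRARY_PATH=/opt/foo/lib
        CMD ["/opt/foo/bin/foo"]

    Build it with "docker build -t foo ." and the OS can move on to libbar.so.42 without foo ever noticing.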

    On the plus side, this means that containerised applications are very unlikely to be broken by OS upgrades (since their files remain untouched within the container); on the downside, it means you need to maintain a separate patching methodology for your containers - and there's a high likelihood that bits of software kept around for backwards-compatibility reasons will never receive an update... anyone who's worked in an enterprise environment will be all too aware of the ancient versions of Apache, Tomcat, WebLogic and Oracle that are "bundled" with applications and can't be upgraded separately without voiding your $upport contract.
     
    #5
    leebo_28 likes this.
  6. hweisheimer

    hweisheimer New Member

    Joined:
    Dec 9, 2015
    Messages:
    14
    Likes Received:
    2
    This came up on Hacker News today, and has some good background information on how Linux containers (and Docker) work at a low level: Fewbytes/rubber-docker
     
    #6
    cactus likes this.
  7. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,357
    Likes Received:
    311
    Docker is "virtualization on os level", it gives acces to the linux apis to the applications and doesn't emulate hardware (= almost no overhead & extreme small filesize compared to vms). Downside: you are "limited" to linux.
    It can be used to virtualize applications (like databases, (web)servers, and a lot more), usually one application per contianer (the application "vm").
    The advantages are the same as with vms (isolation, easier updates/upgrades etc.) + less overhead + smaller filesizes.
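    For example, the usual one-application-per-container pattern looks like this (postgres and nginx are just convenient stand-ins):

        # each service gets its own container; nothing is emulated, both
        # share the host kernel, so they start in about a second
        docker run -d --name db -e POSTGRES_PASSWORD=secret postgres:10
        docker run -d --name web -p 8080:80 nginx
        docker ps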
     
    #7
  8. MBastian

    MBastian Member

    Joined:
    Jul 17, 2016
    Messages:
    49
    Likes Received:
    10
    I beg to differ; Docker is little more than process encapsulation.
    Downside: it is not possible to migrate a running process to another node. That is trivial to handle for load-balanced, frontend-ish or non-stateful microservice containers/pods, but a real pain in the posterior for databases and such. Yes, you can virtualize the nodes to get online migration capabilities; personally I'm not a fan, and it won't work in most public clouds anyway.
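    The usual workaround is to keep the state on a volume outside the container, so the container itself stays disposable even though it still can't be live-migrated - a sketch:

        # the database files live in the named volume, not in the container,
        # so the container can be destroyed and recreated on the same node
        docker volume create dbdata
        docker run -d --name db -v dbdata:/var/lib/postgresql/data \
            -e POSTGRES_PASSWORD=secret postgres:10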
     
    #8
  9. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,843
    Likes Received:
    611
    I would describe Docker as a combination of two ideas.

    The first is lightweight virtualisation with Linux (LX) containers.
    If you install 10 Linux servers for different use cases and applications and compare them, you will find that they are 90% or more identical. The idea is to offer a system that shares this common 90%, plus a container for each application that holds the differing 10%.

    The second is a repository and a way to deploy ready-to-use applications together with their Linux environment, based on such a container - and this is Docker.
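    You can see the shared-90% idea directly in how images are layered; each application's image starts from the same base, and Docker stores that base layer only once (app name below is a placeholder):

        # the common ~90%: a base layer shared by every image built FROM it
        FROM ubuntu:16.04
        # the differing ~10%: this application's own files
        COPY app-a /usr/local/bin/app-a
        CMD ["/usr/local/bin/app-a"]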

    Read this nice blog post about the two ideas:
    Triton: Docker and the “best of all worlds” | Joyent
     
    #9
    Last edited: Apr 25, 2018
    lowfat likes this.
