STH xmrig-proxy for Docker

Patrick

Administrator
Staff member
Dec 21, 2010
12,009
4,991
113
Starting a thread. Made a few edits today based on feedback from the beta testers on the new proxy. Once the new version is tested this will be the thread.

Why the proxy:
  • Ability to quickly and easily switch wallets/ coins/ pools
  • Minimizes external connections for easier NAT/ firewall management
  • Allegedly better performance (have not actually seen this)
  • This will work with cryptonight and/or cryptonight-lite
  • Per worker and aggregate stats available
Why Docker - 100x easier than using something like screen to run your proxy. Also, this can be deployed as a service in a Swarm cluster or deployed on VMs which makes it extremely easy to manage.

This is going to be a super-easy to use image, but it is made for larger-scale operations (given it is a proxy.) It has been tested with the servethehome/universal_cryptonight image.

There are four main components:
  1. Docker image
  2. Configuration json file
  3. Hosting the config json
  4. Docker run and port publishing.

1. Docker image
Note: x86 only for now. May make an ARM version later if people use this. The proxy now can re-launch in 2-3 seconds on most x86 systems.
Code:
servethehome/xmrig-proxy:latest
It is based on Ubuntu 16.04 (18.04 was having issues with the proxy.) The benefit here is that you can pull this once and re-launch super fast.

2. Configuration json file

The xmrig-proxy configuration file is really useful since it allows you to set difficulty and to change pools. You can also keep a configuration file for each pool you want to use. The wallet used in the user field will override what is on the workers, and the proxy will direct the hashing power to the pool/ port set in the config.

Note: the below is for Aeon. If you want to do any cryptonight currency, use "coin": "xmr" instead.

Sample config.json for the STH Aeon Pool (change walletID to your ID):
Code:
{
    "background": false,
    "log-file": null,
    "access-log-file": null,
    "retries": 5,
    "retry-pause": 5,
    "coin": "aeon",
    "custom-diff": 20000,
    "syslog": false,
    "verbose": false,
    "colors": true,
    "workers": true,
    "pools": [
        {
            "url": "a.mwork.io:4334",
            "user": "<walletID>",
            "pass": "x"
        }
    ],
    "bind": [
        "0.0.0.0:4333"
    ],
    "api": {
        "port": 0,
        "access-token": null,
        "worker-id": null
    }
}
3. Hosting the config.json
The default name for the file is config.json. These json files tell xmrig-proxy where to connect. When you launch the Docker container, you need to specify a file name and a full URL for the file. Put them on an http(s) server internally and they are pulled via wget.

In the launch command, you use the full URL to where the file is. Note, this only needs to be reachable from the Docker host, not the general public. You can run a local nginx container on the same Docker host and make a directory to host the static files if you want. There are many options; you just need a URL that the proxy can reach. This can even be an S3 bucket or an ownCloud-hosted config.

Something like:
Code:
https://192.168.2.2/config.json
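One way to do the local hosting, sketched below with a throwaway nginx container. The directory layout, container name, and host port here are illustrative assumptions, not something the xmrig-proxy image requires:

```shell
# Sketch: host config files on the Docker host itself via nginx.
# Assumes ./configs/config.json exists; names and ports are illustrative.
mkdir -p ./configs
cp config.json ./configs/

# Serve ./configs read-only on host port 8080.
docker run -d --name config-host \
  -p 8080:80 \
  -v "$(pwd)/configs:/usr/share/nginx/html:ro" \
  nginx:alpine

# The proxy would then pull from something like:
#   http://<docker-host-ip>:8080/config.json
```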
4. Docker run and port publishing
There are two things to do here. First, you need to specify the config file. Second, you need to publish the service on the appropriate port.

Code:
docker run -itd -p 4333:4333 -e filename=config.json -e confurl=https://192.168.2.2/config.json servethehome/xmrig-proxy
The two environment variables are:
-e filename= This lets you specify the file name in the event you have, for example, grape.json. The image defaults to config.json, so you do not need this unless you are using a different file name.
-e confurl= This is the URL the image will pull the configuration file from.

As you can see, the port we are publishing is 4333, the same as in the "bind" field of the config.json example. In theory we could publish port 80 externally and map it to 4333 inside the Docker container. The docker -p flag is in the format -p ip:hostPort:containerPort; you do not need to specify an IP, though it can be useful if you have multiple IPs on a machine.
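For example, both variants of the -p flag would look like this (the IP and URL are illustrative):

```shell
# Publish host port 80 to container port 4333; the containerPort side
# must match the "bind" port in config.json.
docker run -itd -p 80:4333 \
  -e confurl=https://192.168.2.2/config.json servethehome/xmrig-proxy

# Bind only to one specific host IP when the machine has several.
docker run -itd -p 10.0.0.5:4333:4333 \
  -e confurl=https://192.168.2.2/config.json servethehome/xmrig-proxy
```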

A quick note: the first time you run this, try -it instead of -itd. This allows you to verify it is working.

The impact of this image is twofold:

First, if you are mining Aeon and have 100 servers in a location, you can point them directly to the (hopefully STH) Aeon pool. You can then change wallet IDs, set custom difficulties, and so on, all at the proxy level.

Second, if you are mining cryptonight and want to algo/ pool hop, you can now:
  • Use the servethehome/universal_cryptonight Docker image on the workers and point it at the proxy; and
  • Use servethehome/xmrig-proxy to manage which pool they are mining on and the wallet address.
That means you can orchestrate the deployment of miners and leave them alone, only changing them if you swap to Aeon. To switch pools, you just re-launch the container pointing at a different config file URL and you are set.
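A changeover like that can be scripted. A minimal sketch, assuming a named container and a second hosted config (the name "xmrig-proxy" and the pool-b.json URL are my examples, not part of the image):

```shell
#!/bin/sh
# Sketch: swap the proxy over to a different hosted config.
# Container name, file name, and URL are illustrative assumptions.
NEW_FILE="pool-b.json"
NEW_URL="https://192.168.2.2/pool-b.json"

# Remove the running proxy, if any, then launch against the new config.
docker rm -f xmrig-proxy 2>/dev/null || true
docker run -itd --name xmrig-proxy \
  -p 4333:4333 \
  -e filename="$NEW_FILE" \
  -e confurl="$NEW_URL" \
  servethehome/xmrig-proxy
```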
 

azev

Active Member
Jan 18, 2013
740
212
43
Very interested in this, especially if there's a way to gracefully switch pool without causing the workers to disconnect and reconnect to the proxy.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,009
4,991
113
Ok details posted. Enjoy.

@azev - they do reconnect, but it is VERY fast because they look for work and find the proxy down. Re-launching the container is ~1s on a scripted changeover, so by the time they make their next attempt, the new container is online again. Compare that to the fact that machines will be mid-block when the changeover happens anyway (say, if you were mining to pool A but then swap to pool B), so you lose some percentage of work regardless.
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
688
289
63
USA
ioflood.com
It should... but it is not working currently with selling to NH or MRR.
By default it should not work. Xmrig-proxy uses the same proxy method as nicehash, so you are supposed to set your miners to the nicehash-compatible setting to use xmrig-proxy. It is not possible to stack xmrig-proxy and the nicehash proxy.

However, there is at least one other proxy that supports this configuration. The way it does so: when proxying to the nicehash proxy, it drops to supporting just one miner for each connection to nicehash, whereas normally a proxy supports 255 or 256 miner connections for each single connection to the pool.

My understanding is that "stacking proxies" would narrow the nonce range too greatly, so miners could run out of nonces to mine before receiving new work. Limiting the proxy to one miner per pool connection prevents this "nonce narrowing" from occurring. When your proxy connects to another proxy, this is required.

I'm not sure what changes would need to be made to xmrig proxy to allow for this. But hypothetically it should be possible.
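To put rough numbers on the nonce narrowing: the figures below assume a 32-bit nonce and 256-way splits at each proxy level, which matches the 255/256-connection figure above; exact splitting varies by implementation.

```shell
# The block header nonce is 32 bits, so one pool job covers 2^32 nonces.
TOTAL=$(( 1 << 32 ))

# One proxy splitting a job 256 ways leaves each miner:
PER_MINER=$(( TOTAL / 256 ))
echo "one proxy level:  $PER_MINER nonces per miner"    # 16777216

# Stacking a second 256-way proxy narrows it again:
STACKED=$(( PER_MINER / 256 ))
echo "two proxy levels: $STACKED nonces per miner"      # 65536

# At 10 kH/s a miner burns through 65536 nonces in ~6.5 seconds,
# so it can exhaust its range before new work arrives.
```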
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
688
289
63
USA
ioflood.com
Ok details posted. Enjoy.

@azev - they do reconnect, but it is VERY fast because they look for work and find the proxy down. Re-launching the container is ~1s on a scripted changeover, so by the time they make their next attempt, the new container is online again. Compare that to the fact that machines will be mid-block when the changeover happens anyway (say, if you were mining to pool A but then swap to pool B), so you lose some percentage of work regardless.
For "regular mining" this is perfectly fine.

For nicehash arbitrage, it is very undesirable.

That said, nicehash arbitrage is a large problem in any case.

Firstly, stacking proxies generally does not work with nicehash. Possibly a solvable problem but requires code changes.

Secondly, nicehash tends to stall out, letting miners work on old blocks and charge you for the resulting invalid/ rejected shares. To handle this, you would need to detect which workers are misbehaving and disconnect those workers, and only those workers. Alternatively, the nicehash proxy may have ignored a new work update, which could be solved by resending it. Either way, this factor alone can lose you massive amounts of money if you are not watching for it.

Third, disconnecting nicehash from your pool (say, by restarting the pool) is one possible solution to problem two, but nicehash charges you a "disconnection penalty" to account for the time the workers could have spent "working diligently" on your block instead of waiting to reconnect. So disconnecting a large number of nicehash miners at once can be very expensive.

Fourth, if you disconnect all miners, it takes a while for them to reconnect and start providing hash power. If your current bid is profitable, you'd prefer to avoid this.

Solving all this seems outside the scope of helping your miners hop to a profitable coin.
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
688
289
63
USA
ioflood.com
Also, as to the restart delays: these do sound short. However, there are probably some simple hacks/ optimizations you could do to shorten them further.

First, you could run two proxies at once on different ports and use iptables to direct incoming connections to one proxy or the other. To switch mining, update iptables to point to the other proxy. This should invalidate the TCP connections (the other proxy never initiated them and doesn't expect this traffic), causing the miners to reconnect. If the miners tear down the connections right away, this could be faster than restarting the proxy. However, this would require testing, as it is possible the connections would remain open on the client side despite being invalid, which would certainly be worse than restarting the proxy.
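The iptables flip could look something like this. Ports and rule positions here are illustrative assumptions, and as noted this approach is untested:

```shell
# Sketch: two proxies listening locally on 4334 (proxy A) and 4335 (proxy B);
# miners all connect to 4333.

# Initially send incoming 4333 traffic to proxy A:
iptables -t nat -A PREROUTING -p tcp --dport 4333 -j REDIRECT --to-port 4334

# To switch, replace that rule (assumed to be rule 1 in the chain)
# so 4333 now lands on proxy B:
iptables -t nat -R PREROUTING 1 -p tcp --dport 4333 -j REDIRECT --to-port 4335
```

Note that PREROUTING only sees traffic arriving from other hosts; miners running on the Docker host itself would need a matching rule in the OUTPUT chain.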

A more involved solution would have the proxy maintain back-end connections to multiple pools simultaneously. When switching miners from one pool to another, you would just send a work update to the miners so they can work on the other pool's block. This would have the lowest possible delay and would be the most "nicehash compatible", but would also require reprogramming the xmrig proxy.
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
688
289
63
USA
ioflood.com
Also, it is possible to solve the algorithm-switching on the pool side. Have a pool connect to multiple coin daemons at the same time. Switch the work from one daemon to another as needed, pushing new work updates to miners.

This would probably be challenging in a shared-pool environment, as there would need to be changes to the pool to track how much credit each miner has earned, if one pool mines multiple coins.

These solutions are not mutually exclusive. One pool, even for one coin, could be modified to connect to multiple daemons for reliability purposes. For multiple daemons, they will learn of a new block at different times from one another, even if both daemons are working correctly. You want to mine using the latest block at all times, so this could improve your pool luck.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,000
910
113
NYC
I think I posted in the wrong thread. I've got my Aeon miners on this stack now. It works great.
 

freebsdrules

Active Member
Aug 16, 2017
184
26
28
38
I must be doing something wrong. I get the below when I run this on my first test miner. Any thoughts/help?

[2018-02-21 17:13:18] 0.0 kH/s, shares: 0/0 +0, upstreams: 1, miners: 0 (max 0) +0/-0
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,009
4,991
113
@freebsdrules want to shoot me the miner/ proxy configs? I can look at them this afternoon. If you attach to the container, does anything come up when you hit "w"? Is the miner connected?

Most likely it means something between your miner and the proxy is misconfigured, or the port is not exposed correctly on the proxy.
 

freebsdrules

Active Member
Aug 16, 2017
184
26
28
38
No sir, no miners attached. Perhaps I'm not understanding the process completely. I have done the following:

1. hosted a config.json file on an internal webserver. the only thing I changed from your sample file above is the <walletid>. verified I can read the file internally.
2. run the following from my mining machine: docker run -itd -p 4333:4333 -e filename=config.json -e confurl=http://<internalipaddress>/config.json servethehome/xmrig-proxy

Is there anything else I need to be doing on the webserver hosting the json configuration file?
 

ServeTheSam

Member
Dec 10, 2017
38
14
8
Also, it is possible to solve the algorithm-switching on the pool side. Have a pool connect to multiple coin daemons at the same time. Switch the work from one daemon to another as needed, pushing new work updates to miners.

This would probably be challenging in a shared-pool environment, as there would need to be changes to the pool to track how much credit each miner has earned, if one pool mines multiple coins.

These solutions are not mutually exclusive. One pool, even for one coin, could be modified to connect to multiple daemons for reliability purposes. For multiple daemons, they will learn of a new block at different times from one another, even if both daemons are working correctly. You want to mine using the latest block at all times, so this could improve your pool luck.
Am I reading this correctly: it is possible for one pool to mine more than one coin (via connecting to multiple coin daemons simultaneously, as you mentioned)?

If this is the case, that’s awesome! Also, wouldn’t this make it relatively easy to track and credit miners for what they’ve actually earned? In essence, pool decides to switch based on criteria we set, and then all miners that choose our pool switch accordingly?