AEON Mining Performance


jims2321

Active Member
Jul 7, 2013
That's what the calculator tells me too. Fingers crossed it materializes.

I haven't "bet more than I can afford to lose" at this stage, but am getting pretty close to the line of "it would be very unwise to risk more without seeing more returns first".

Ultimately, this is hardware I would buy all day even if real customers were paying less and getting more. Other than the risk of the bubble popping, it's a no-brainer. I'm hoping that I can leverage crypto into refreshing my entire fleet of dual L5639 servers -- I don't expect much demand for those for much longer.

Best case scenario, I'd like to use crypto as a means to set up my own datacenter -- that way I can go through the growing pains on a non-critical workload (mining), and earn some revenue during that "beta" stage, before hosting normal customers there. The two things together would put me into a much stronger competitive position / much lower per-server overhead costs.

One step at a time : )
That's a ballsy plan. I really hope this pans out for you, I am cheering you on!
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
USA
ioflood.com
That's a ballsy plan. I really hope this pans out for you, I am cheering you on!
Thanks! : )

In the back of my head I've wanted to run my own datacenter since I had only 2 racks of servers, but the economics of "setting up one more cage" versus "setting up your own facility" never made sense. You need a certain scale to save any money at all, the up-front costs are large, there's the risk that you won't grow enough to ever reach that scale at the new facility, and you're out the fixed expenses on day 1 with no revenue.

On top of that, "setting up one more cage" can be done in a week, and you add servers (and costs) incrementally as you have signups. If you're drowning in sales to provision, the last thing you want is to get bogged down in setting up a facility.

Finally, when building a datacenter, it's so much work to manage and outfit each facility that you want something big enough to grow with you. And you don't want to learn down the line that they won't renew your lease. That compounds the problem: now you should look at buying a huge building instead of leasing a small one.

Mining offers a unique opportunity here: You can start with a small building for $1000-2000/mo, which is a small commitment in $ terms. It saves you money in month 1 or 2 on power costs alone. And you make money in month 1 or 2 as well. No customers means no specification changes for orders and no urgent repair work, which in turn means no need for full-time staffing at each location. The building is not such a huge cost -- if you outgrow it, get a second one and keep both until the lease is up on the first one. If the crypto merry-go-round lasts long enough to buy one of the buildings you're renting -- great. Once you do that, start bringing it up to real datacenter standards as cash flow permits -- UPSes, generators, etc. If the crypto bubble pops, you have fully paid-off infrastructure in one building, and in the others, power off all the (fully ROI'ed) hardware -- the biggest cost is electricity, and the power company doesn't require a term commitment.

So that's the opportunity -- it gives you a path from "here" to "there" where the steps in between are all good places to be. The normal way requires too big of a leap.

I think the key is to do it in stages to mitigate risk.

The servers at home can be powered off, and then the electric bill goes away. Low commitment.

The servers at the datacenter have a power commitment, but, I have multiple cages with staggered contract end dates. Worst case scenario something can be sorted out there. In the meantime, datacenter mining helps me use any spare power that previously had gone to waste, when servers are idle for periods of time.

Next, the servers themselves have rental value -- these are very high performing machines. As long as I don't have a ridiculous number of them, I could promote them and get them rented.

Importantly, get some of the initial investment back before moving to the next stage. Worst case, the hardware has a liquidation value. If you've gotten back half of what you've put in (and mitigated your other risks), then if the bottom drops out the next day, you've lost a huge amount of time, but not necessarily money.
 

jims2321

Active Member
Jul 7, 2013
I can appreciate that. I used to help design data centers for one of my former employers. It's always easier to build on someone else's dime. Sadly, it comes with the pains of having people higher up who haven't a clue about designing for growth and the future; they're just worried about expenses and long-term costs.
 

Joel

Active Member
Jan 30, 2015
That's what I'm hoping for. We'll see in a few months. Right now my bank balance is far less than before buying the hardware ; )

It's also not easy to find cheap power on short notice. I'm now pulling down 47a of 208v in my spare bedroom. Much more and I'd be worried about tripping the service disconnect. Also, the experience of being in that bedroom is like a cross between standing on an active runway, using a Rube Goldberg device, and watching an episode of Hoarders. Air quality has been bad recently in Phoenix as well, so the servers look like they've been running in a Chinese coal-fired power plant. Not exactly a good long-term solution.

The rest of the hardware is being colocated, which isn't cheap, but is a heck of a lot more professional. Upside is it looks like I'll be meeting my power ramp at the datacenter ahead of time = D

I'd be interested in getting a small evaporatively cooled warehouse in Phoenix, but haven't had any time to look into it. I'd split it with someone if they wanted to do the legwork.
I know you've been talking about ducting the exhaust; have you considered sealing the room and running the intake through a household air filter?

Love what you're doing; I'm taking baby steps in the same direction, though my situation is complicated by an impending cross-country move.

Another question; apologies if you've covered this already (still trying to wrap my head around datacenter power config): Household 120 + 120 = 208v? Or did you get something special wired in?
 

nkw

Active Member
Aug 28, 2017
In the back of my head I've wanted to run my own datacenter since I had only 2 racks of servers, but the economics of "setting up one more cage" versus "setting up your own facility" never made sense.
The only people I have seen be really successful in the datacenter business (and they have been very very successful) have made their best deals by purchasing datacenters others have built and had to sell for one reason or another (run out of money, acquisition, etc.). The cost of acquiring an already fully setup datacenter has for several years now been a fraction of the cost of building one out new, with a few notable exceptions such as the hyperscale providers or certain government-centric facilities. On a small scale I have seen a few instances of launching a new financially successful datacenter, but in those instances the owner/developer also happened to own a very large union electrical contractor operation.
 

jims2321

Active Member
Jul 7, 2013
Very true -- the cost of the last datacenter that I worked on, back in 2010, was about $400M, and that was sans the actual hardware to go in it. It was a lights-out, state-of-the-art facility for its time. They made no provision for any permanent staffing in the datacenter and ended up building a small office next door for storing equipment and staff space. But I still have to give props to Funky for having the dream, the drive, and hopefully the success in getting it done. Always nice to see the small guy succeed.
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
USA
ioflood.com
I know you've been talking about ducting the exhaust; have you considered sealing the room and running the intake through a household air filter?
Yes, have considered that. A household filter + box fan doesn't let much air past. But despite that minimal airflow, it managed to turn the filter completely brown in 3 days (terrible smog those days). I actually have some clean-room-grade HEPA filters (2' x 2' x 1') and now have fans with sufficient pressure and airflow that I could use those filters as well, but haven't built that out yet. Also, I most likely need a cheap prefilter for any large particles if I don't want to immediately ruin the main air filter. Either way, a useful solution is not on the immediate horizon, but suffice it to say I can see why most datacenters don't simply use unfiltered outside air.

It's a toss-up whether you're better off filtering the air, or avoiding the entire problem by using a heat exchanger so you can recycle the same indoor air. Every heat exchanger costs you in efficiency, so you want to minimize how many you have. For every heat exchanger, you need to trade off surface area, airflow and/or water flow, and temperature rise. There are also diminishing returns: you need twice the airflow to hold a 5 degree temperature rise compared to a 10 degree rise. Beyond a certain amount of airflow, additional cooling trails off significantly as you spend more fan power to get very small cooling improvements. So using a free-air heat exchanger (to avoid using dirty and low-humidity outside air), you'll probably need a very large amount of materials (copper, aluminum), and a lot of fan horsepower (both on the indoor side and the outdoor side), to achieve an indoor temperature 10F above outdoor ambient.
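
To put rough numbers on that tradeoff, here's a quick sketch using the standard sensible-heat rule of thumb for air; the 10kw load is just an example figure, not a measurement from my setup:

Code:
# Rough airflow needed to remove a given heat load at a target
# temperature rise, using the sensible-heat rule of thumb:
#   BTU/hr = 1.08 * CFM * dT(F)  ->  CFM = 3.412 * watts / (1.08 * dT)
def cfm_required(load_watts, delta_t_f):
    return 3.412 * load_watts / (1.08 * delta_t_f)

load = 10_000  # example: one 10kw rack
for dt in (20, 10, 5):
    print(f"{dt:>2}F rise: {cfm_required(load, dt):,.0f} CFM")
# 20F rise: 1,580 CFM
# 10F rise: 3,159 CFM
#  5F rise: 6,319 CFM

Halving the temperature rise doubles the airflow, and fan power grows much faster than linearly with airflow -- hence the diminishing returns.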

Right off the bat you're using twice as much power (at least) on fans and handicapping yourself by 10 degrees. Even if the rest of your facility is very efficient (good cold / hot aisle containment, air handlers close to the heat loads, etc), you'll still need a chiller 4-6 months of the year (there goes your PUE) -- evap cooling probably won't cut it.

If things are worse -- say you've got a typical multi-tenant datacenter with 1U servers (very high CPU temperature rise above ambient), sloppy hot/cold containment, inconsistent power density (a row of 10kw racks across from a row of 3kw racks), and so on -- forget about efficiency. At this point you just pass the costs along like everyone else.

Using direct outside air, under full load at a moderate fan rpm, these CPUs run at around 30C above ambient. If you wanted to use Noctua fans or water cooling, that could be reduced to a 15-20C temperature rise, at the expense of buying the coolers and using up space to put them somewhere (the servers would be 4-5U instead of 2U with either of those solutions, and at least $100 more each). Not doing that currently (except for some of my GPU servers, which are 4-5U anyway), but if you had a facility using exclusively that kind of CPU cooling, you wouldn't have to do much to cool the air, even during the Phoenix summer. The capex savings on a datacenter filled with servers where evap was "overkill" is a pretty exciting idea.

Love what you're doing; I'm taking baby steps in the same direction, though my situation is complicated by an impending cross-country move.

Another question; apologies if you've covered this already (still trying to wrap my head around datacenter power config): Household 120 + 120 = 208v? Or did you get something special wired in?
Thanks, it's a lot to learn which I really enjoy.

As to the wiring, it depends how your home is wired and how the transformer servicing your home is wired. Typically, you have 2 "hot" wires, 1 neutral, and 1 ground wire. Wiring an outlet to Hot (either one) + Neutral gives 120v. Wiring up Hot1 + Hot2 gives either 208 or 240v depending on the transformer used by the electric company.

Commercial power will normally be one of the following:

120/208 3-phase, or 277/480 3-phase.

Let's take 120/208. Here it's the same, but you have 3 "hot" wires on 3 phases, not 2.

Connect any phase to neutral, and you get 120v. Connect any phase to any other phase, and you get 208v. The aggregate number of amps you can draw at 208v in this scenario is equal to 1.73 times the rating of any single phase, so long as you load all 3 phases equally.

So, for 200 amp 3-phase service (ignoring 80% de-rating), you could draw 200a of 120v on each of the 3 phases -- total of 72,000VA.

Or, you can draw 3 phases of 208v, but you can only draw a maximum of 1.73 times 200a total, if you balance it equally between each phase. Here again, 200 * 1.73 * 208 = (approximately) 72,000VA.

At home this is similar: on "200a service", you can draw 200a of 120v from Hot1 + Neutral, and 200a of 120v from Hot2 + Neutral, or you can draw 200a from Hot1 + Hot2 (either 208v or 240v depending on your wiring) -- or some combination of the two, as long as you don't exceed 200a on either phase.
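
If it helps to see all that arithmetic in one place, here's a quick sketch (ignoring the 80% continuous-load derating, same as above). The 240v figure is the common residential case; swap in 208v if that's what your transformer delivers:

Code:
import math

# Balanced three-phase capacity: sqrt(3) (~1.73) * V(line-to-line) * A
def three_phase_va(volts_ll, amps):
    return math.sqrt(3) * volts_ll * amps

# Single-phase (or split-phase line-to-line) capacity: V * A
def single_phase_va(volts, amps):
    return volts * amps

# 200a of 120/208 3-phase commercial service:
print(three_phase_va(208, 200))       # ~72,053 VA
print(3 * single_phase_va(120, 200))  # 72,000 VA -- same total as 3x 120v legs

# 200a residential split-phase service (240v here; could be 208v):
print(single_phase_va(240, 200))      # 48,000 VA from Hot1 + Hot2
print(2 * single_phase_va(120, 200))  # 48,000 VA -- same total as 2x 120v legs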
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
USA
ioflood.com
The only people I have seen be really successful in the datacenter business (and they have been very very successful) have made their best deals by purchasing datacenters others have built and had to sell for one reason or another (run out of money, acquisition, etc.). The cost of acquiring an already fully setup datacenter has for several years now been a fraction of the cost of building one out new, with a few notable exceptions such as the hyperscale providers or certain government-centric facilities. On a small scale I have seen a few instances of launching a new financially successful datacenter, but in those instances the owner/developer also happened to own a very large union electrical contractor operation.
Makes total sense to me. There was a large datacenter (a former bank facility) being sold in Phoenix for about what that square footage of unimproved space should cost. Not the best location or the prettiest building, but it was wired for several megawatts, and had power backup and cooling gear sufficient for a decent percentage of what it was wired for.
 

bash

Active Member
Dec 14, 2015
Scottsdale
Not to derail the thread, but what residential power company services your house, and how do you get around APS/SRP's pseudo demand charges?

For example, my delivery service charge was $58 for Nov on 1876 kWh for the month. Kind of scared to see what 24/7 is going to push me to this month.
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
USA
ioflood.com
Not to derail the thread, but what residential power company services your house, and how do you get around APS/SRP's pseudo demand charges?

For example, my delivery service charge was $58 for Nov on 1876 kWh for the month. Kind of scared to see what 24/7 is going to push me to this month.
I'm using APS.

Demand charges are based upon the peak level of demand. Typically, you're only charged for the maximum amount used during on-peak hours. If you use 5kw for 15 minutes (or an hour -- depends on the plan) during the late afternoon, you get charged a 5kw demand fee. If you use 5kw 24/7, you get charged the same 5kw demand fee.

I'm on an old flat-rate plan that's being phased out. Winter is 12 cents/kWh. Summer has tiers of use, with the highest tier being something stupid like 24 cents/kWh. Turns out I could save money on literally any of their other plan options. The "saver max" plan has the largest demand charge but, for using the same power 24/7, is by far the cheapest for residential mining.
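
To illustrate why a demand plan wins for a constant 24/7 load, here's a toy comparison. The 12 cents/kWh flat rate is my old plan from above; the demand plan's $8/kw fee and 5 cents/kWh energy rate are made-up illustration numbers, so check the actual APS tariff sheets:

Code:
# Toy comparison: flat-rate plan vs demand plan for a constant load.
# Flat 12c/kWh is the old plan above; the $8/kw demand fee and 5c/kWh
# energy rate are HYPOTHETICAL numbers for illustration only.
load_kw = 5.0
hours = 24 * 30  # one month of 24/7 operation

flat_bill = load_kw * hours * 0.12
demand_bill = load_kw * 8.00 + load_kw * hours * 0.05

print(f"flat plan:   ${flat_bill:,.2f}")    # $432.00
print(f"demand plan: ${demand_bill:,.2f}")  # $220.00

A 24/7 load pays the demand fee once and then runs on cheap kWh all month; a load that only spiked during on-peak hours would spread that same fee over far fewer kWh.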

Commercial rates are harder to calculate, but are lower than residential in nearly all cases.

Last I checked, SRP commercial rates were substantially higher than APS. Not sure about residential.
 

unclerunkle

Active Member
Mar 2, 2011
Wisconsin
Interesting thread so far. I guess my 10 cents/kWh is pretty good :). I have every capable server in the house mining, pulling ~1300 watts for a ~7 kH/s rate. According to the calculator, that's 0.8 AEON/day.
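
Sketching the arithmetic behind that estimate (the 1300 watts, 10 cents/kWh, and 0.8 AEON/day figures are from above; the rest is just a break-even calculation):

Code:
# Daily power cost and break-even coin price for the numbers above.
watts = 1300
kwh_price = 0.10
aeon_per_day = 0.8

kwh_per_day = watts * 24 / 1000           # 31.2 kWh
cost_per_day = kwh_per_day * kwh_price    # $3.12/day
breakeven = cost_per_day / aeon_per_day   # $3.90 per AEON

print(f"power: ${cost_per_day:.2f}/day, break-even at ${breakeven:.2f}/AEON")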

Debating whether I should go all out like funkywizard, but I tend to be a conservative guy when it comes to real money. I'll play with stocks, but coins carry much larger risk and debatable intrinsic value. My biggest concern is AEON itself and the availability of proper exchanges. Possible future regulations could also hit the market hard, eliminating smaller coins. Just not sure at this point...but I'm no expert either. Just my 2 cents...
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
USA
ioflood.com
Interesting thread so far. I guess my 10 cents/kWh is pretty good :). I have every capable server in the house mining, pulling ~1300 watts for a ~7 kH/s rate. According to the calculator, that's 0.8 AEON/day.

Debating whether I should go all out like funkywizard, but I tend to be a conservative guy when it comes to real money. I'll play with stocks, but coins carry much larger risk and debatable intrinsic value. My biggest concern is AEON itself and the availability of proper exchanges. Possible future regulations could also hit the market hard, eliminating smaller coins. Just not sure at this point...but I'm no expert either. Just my 2 cents...
Those are totally valid concerns. The question is not if the bottom will fall out, but when.

Fundamentally the problem is not "risk", it's uncertainty. It is certain that if you flip a coin enough times, it will come up heads close to half the time, and tails close to half the time. If someone offered to give you $1 for every correctly called toss, how much would you pay for that bet? 40 cents? 45 cents? 49 cents? That is risk.
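
A quick simulation makes the point; any price under the 50-cent expected value is a winning bet, and with enough tosses the profit is all but guaranteed:

Code:
import random

# Pay `price` per toss, win $1 on each correctly called coin flip.
def profit(price, tosses=1_000_000):
    wins = sum(random.random() < 0.5 for _ in range(tosses))  # fair coin
    return wins * 1.00 - tosses * price

for price in (0.40, 0.45, 0.49):
    print(f"pay ${price:.2f}/toss -> ~${profit(price):,.0f} per 1M tosses")
# Expected profit per toss is simply $0.50 - price. That is priced risk.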

Uncertainty is when you don't know the range of possible outcomes and/or what the odds of those outcomes are. Fundamentally, humans avoid uncertainty -- it's hard wired. There are some tricks to help you gauge uncertainty the same way you would gauge other risks.

One way to look at it (Jeff Bezos does this): will I regret not doing this when I'm old? If the answer is no, pretty clear decision there -- don't bother. If the answer is yes, it's worth further consideration.

A couple questions I like to ask myself deal with best case / worst case:

If the best case scenario happened, would I care? You'd be surprised how often the answer is "no". A version of that question: "If all the stuff in this store were free, would I want any of it?" It's remarkable how often an "exciting sale" becomes pretty boring when you realize you wouldn't want any of it even if it were free! It deflates the trickery stores use to sell to you.

The other side, "If the worst case situation happens, would that really matter to me?" If the answer is "no", then all the fuss isn't about the bad outcome, it's about the uncertainty. It's possible you are worrying for no good reason.

A second way I like to frame it, one I use often: "What are the obvious bad/worst outcomes that make this risky? Could I approach this in a way where those bad outcomes are not a serious problem? How?"

Take servers for example -- maybe there's a big customer opportunity, but they might be a little too big -- what if they cancel? One way to mitigate risk is to quote them a product you already own. If half the total cost of their hardware is hard drives, quote them a model you already own plenty of -- this cuts your risk in half and speeds up your ROI -- you own the hard drives right now, whether they order or not. Another way to do it -- is there a processor they will accept, that my other customers regularly want to buy? This way, if they cancel, you know someone else wants the processors. A customer that seemed too risky before might pose an acceptable risk now.

That second one was pretty specific to me, but hopefully you can see ways to make it apply to you. Think of it this way -- arranged properly, you want a series of unlikely things to all need to happen -at the same time- for there to be any serious consequences. And for something where the worst happens anyway, make sure "the worst" is something you can live with. Finally, make sure the upside is meaningful to you -- or make sure the worst case downside is meaningless.

If the opportunity can be structured to meet those criteria, yes, the outcome is still uncertain, but the risk is minimal. If you can't minimize the risk, best to wait for a better opportunity.

One last observation: sometimes there is a larger risk in doing nothing than in doing something. Say an asteroid is hurtling towards the earth and will kill everyone unless something is done. No matter how risky the plan, if it has any hope of success at all, it is less risky than doing nothing. It's unlikely you'll encounter a situation that dire, but bad things do happen when you do nothing. "Not making $10 you could have made" is mathematically the same as losing $10 you already had, but people will avoid the loss more strongly than they will seek out the gain. There's a reason marketers flip their wording around. "I wouldn't want you to miss out on this opportunity to give me money".... Gosh, you might lose an opportunity! Better get right on that. Try phrasing a situation both ways and see if one of them is more strongly motivating than the other. That can help even the playing field.
 

bash

Active Member
Dec 14, 2015
Scottsdale
Single 7401P build running on all 48 threads and currently at:
speed 2.5s/60s/15m 2248.0 2220.9 n/a H/s max: 2292.0 H/s

numthreads=48 --cpuset-cpus="0-47" xxxxxxxxxxxxx/aeon_xmrig:priv

Any tweaks I should be making for EPYC?

About the same hashrate I was seeing with my dual 2697 v2's last night while testing.
 

alex_stief

Well-Known Member
May 31, 2016
Since your CPU "only" has 64MB of L3 cache, running more than 32 threads will do more harm than good.
Each of the four NUMA nodes in your CPU has 16MB of L3 cache, so you are better off running 4 instances of your miner with a maximum of 8 threads each. Pin each miner to the cores of one NUMA node.
That's your starting point; I was able to increase Aeon hashrates even further by running only 7 threads per NUMA node. You could give it a try.
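
A sketch of what that could look like as a launcher, assuming the docker-style invocation from bash's post, that numthreads is passed as an environment variable, and that logical CPUs 0-11, 12-23, 24-35, and 36-47 map to NUMA nodes 0-3 (verify with lscpu or numactl --hardware first); the image name here is a placeholder, not the redacted one above:

Code:
# Launch one pinned miner instance per NUMA node.
# ASSUMPTIONS: 48 logical CPUs enumerated 12 per node in order (check
# with `lscpu`); numthreads is an env var; image name is a placeholder.
import subprocess

THREADS_PER_NODE = 8   # per the advice above; 7 may be worth trying too
CPUS_PER_NODE = 12
IMAGE = "example/aeon_xmrig:priv"  # placeholder image name

for node in range(4):
    first = node * CPUS_PER_NODE
    cpuset = f"{first}-{first + CPUS_PER_NODE - 1}"
    cmd = [
        "docker", "run", "-d",
        f"--cpuset-cpus={cpuset}",
        f"--cpuset-mems={node}",  # keep memory on the same NUMA node
        "-e", f"numthreads={THREADS_PER_NODE}",
        IMAGE,
    ]
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually launch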
 