For Sale: Intel R2308GZ 2U Server

doop

Member
Jan 16, 2015
43
2
8
78
Is RMS25JB080 (LSI2308) the best choice for Linux software RAID (mdadm)?
I want to put 2x2.5" SATA SSDs on the internal SATA and 6x8TB SATA disks in front.
Intel's onboard RAID **supports SATA only** (-$15)
Intel LSI 2308 6Gb/s SAS/SATA RAID Controller - RMS25JB080 (0, 1, 1E, 10, JBOD)
LSI 6Gb/s SAS/SATA Controller (JBOD) (+$35)
LSI 9210-8i 6Gb/s SAS/SATA Controller (0, 1, 1E, 10, JBOD) (+$55)
LSI 9265-8i 6Gb/s SAS/SATA RAID Controller (0, 1, 5, 6, 10, 50, 60) (+$95)
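For the mdadm route, a rough sketch of what array creation could look like (RAID6 for the spinners and a RAID1 mirror for the SSDs are just illustrative choices, the /dev/sdX names are assumed, and any of the JBOD-capable controllers above would simply pass the disks through):

```shell
# Six 8TB front-bay disks in one parity array
# (assumes they enumerate as /dev/sda through /dev/sdf)
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[a-f]

# Mirror the two internal 2.5" SSDs
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdg /dev/sdh

# Record the arrays so they assemble at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```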
 

pricklypunter

Well-Known Member
Nov 10, 2015
1,606
470
83
Canada
You won't go wrong using that card for software raid, although I think I would look at using ZFS or BTRFS before MDRaid, even more so as disk capacities get larger. Really the only benefit I see now of using mdadm, is the ability to add extra disks piecemeal to your storage pool :)
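A sketch of that piecemeal growth, for reference (array and device names are assumed, and the reshape runs in the background):

```shell
# Add a new disk to an existing array as a spare
mdadm --add /dev/md0 /dev/sdh

# Grow the array to use it as an active member (6 -> 7 devices)
mdadm --grow /dev/md0 --raid-devices=7

# Once the reshape finishes, resize the filesystem, e.g. for ext4:
resize2fs /dev/md0
```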
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,484
222
63
Any thoughts on using this with Windows 10 as a workstation?
The Intel 2U's are great servers and aren't too loud, but they are not meant for workstation duty. If you had a server closet and were going to RDP or use Steam In-Home Streaming, maybe.
 

fossxplorer

Active Member
Mar 17, 2016
466
74
28
Oslo, Norway
I briefly looked at the motherboard's tech specs:
One eUSB 2x5 pin connector to support 2mm low-profile eUSB solid state devices
Two 7-pin single port AHCI SATA connectors capable of supporting up to 6 Gb/sec
Two SCU 4-port mini-SAS connectors capable of supporting up to 3 Gb/sec SAS/SATA
o SCU 0 Port (Enabled standard)
o SCU 1 Port (Requires Intel RAID C600 Upgrade Key)
Intel® RAID C600 Upgrade Key support providing optional expanded SCU SATA / SAS RAID

Does it mean that only 4 of the 8 SAS/SATA ports work out of the box, and one must buy the upgrade key in order to get all 8 ports working?

And if we go for the $10 mezzanine card, can we simply ignore the other onboard mini-SAS connector and avoid paying for the upgrade key license?

Also, as I'm used to SM boards with integrated SATA DOM with power connectors, I wonder if the "eUSB 2x5 pin connector to support 2mm low-profile eUSB solid state devices" could be something similar?
The point is to avoid using a bay or the internal SSD ports for the OS!
 

MiniKnight

Well-Known Member
Mar 30, 2012
2,987
891
113
NYC
Intel RAID C600 Upgrade Key support providing optional expanded SCU SATA / SAS RAID

Does it mean that only 4 of the 8 SAS/SATA ports work out of the box, and one must buy the upgrade key in order to get all 8 ports working?
I'm not sure on this one but on the other GZ series I have I thought you just needed the C600 upgrade for enabling RAID on the ports.

On this particular server I'd just get the option for the LSI card since that's a better controller anyway. You can use it for ESXi. At under $20 new I'd get it just to have.

V1/V2 chipsets only had two 6Gb/s ports; the rest were 3Gb/s.
 

iriscloud

SADTech
Jan 13, 2015
37
0
6
29
ID, USA
Would any buyers mind commenting on the noise level of the included power supplies? I'm wondering which generates more noise: the power supply fans or the chassis fans. The Supermicro power supplies for the 836 chassis are far too loud for the living room without the SQ versions or similar.
 

talsit

Member
Aug 8, 2013
112
20
18
Definitely the power supply fans. This one is mid-rack; I have a Supermicro 6026TT-HTRF at the bottom with dual PWS-1K41P-1R Gold-rated power supplies, and the Supermicro is noticeably louder when I'm working behind the rack.

I wouldn't use it in the living room.
 

niftykc

Member
May 18, 2016
33
8
8
39
Here are my thoughts:

So I bought two of them for home use, and I would definitely agree with talsit (and others): It's too noisy to be in the living room.
It's in my basement, and it's working great in there.

It's noisy, but as Patrick and talsit have said before, it's not terribly noisy, except on startup. When it starts up, all of the fans kick in on high, and then they drop to a more reasonable level. I wouldn't want to be close to it for an extended period of time, and it definitely couldn't be in my living room.

That being said, these servers are working great for me. The IPMI is nice (not a fan of Java, but...), and the RAID mezzanine card makes things a lot easier (for me). I had two minor issues with mine - the first one had an extremely old BIOS, so when I connected the onboard NIC to my gigabit switch, it would only link at 100Mb/s. After updating the BIOS, it connected at 1Gb/s. The second one had a newer BIOS, but had odd issues with reading numerous USB sticks. I ended up trying about 4-5 USB sticks before one would work (all different types, and most of them new - the first one didn't have issues like this).
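(For anyone hitting the same symptom, the negotiated link speed is easy to check from Linux before and after a BIOS update; the interface name eno1 below is just an assumption.)

```shell
# Show the negotiated speed/duplex for the onboard NIC
ethtool eno1 | grep -E 'Speed|Duplex'
# A healthy gigabit link reports "Speed: 1000Mb/s"
```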

I'm very happy with my purchase, so thanks theITmart_ak!
 

ccclapp

New Member
Aug 27, 2017
7
0
1
120
Have others been happy with this box?

In my case I am looking to create 2 servers, one of which will replicate to the other for HA and failover (i.e. duplicate servers). I may set up FreeNAS or StarWind VirtualSAN (or some combo) for the storage config. I will likely run free ESXi with FreeNAS as a guest and/or StarWind VirtualSAN in a Win Svr 2016 instance, along with 5-7 virtual Win 10 workstations accessed via RDP over a VPN, a pfSense virtual router, and a Photoshop-oriented VM. I'd likely run 2x E5-2670 v2 CPUs with 128GB RAM. I'll mirror 2x 1-2TB SSDs for all the VM OSes and then 8x 3.5" 8TB SATA drives.

Would someone clarify: are there 3, 4, or 6 PCIe slots? I'm not very experienced with PCIe terminology, especially on servers. In my case I will want (via PCIe or otherwise):

-- GPU to drive 4k monitor for photoshop
-- USB3 card
-- 10GbE or better
-- LSI controller flashed to "IT mode" or equivalent, as required to pass through JBOD disks to FreeNAS as an ESXi guest
-- Some room to spare.
I gather some of the above would run in PCIe slots and some otherwise. Will I be fine with this board or come up short?

Any reason this is not a good box for this config?

Thanks for your help.
 

niftykc

Member
May 18, 2016
33
8
8
39
@ccclapp, I have been very happy with mine. I bought two, and they have been workhorses. I've had three issues with mine, but all fairly minor (one is more annoying than anything).
  1. The BIOS was outdated, which made the NIC ports connect at 100Mb/s. Once I updated the BIOS, the server has been behaving very well and has been rock solid.
  2. Not all memory works on the server. It has some quirks that I found out by trial and error. I'm sure there is better documentation out there, but you can't fill all of the memory slots unless it is a specific type. I was too cheap to look it up and buy exactly that type. I just bought whatever was cheaper (slower) RAM, but it meant I had to fill the memory in specific banks and not all of them. Not too bad, but I have 192GB of RAM in one, and 128GB of RAM in the other, but it's (slightly) slower. It's not a problem for home use though.
  3. The IPMI on both has been flaky. I keep mine up 24/7, and after a few weeks, it seems to stop working. The only way to make it work is by power cycling it. I don't have too much of a problem with it though because once it's running, I don't need the IPMI... but it does get annoying once in a while, when I need it...
Just a bit of warning for you - it does use quite some power, so don't be surprised if you see your electricity bill jump.

As for using it for ESXi... I think it's fine for ESXi. But I'm not sure I would advise it for what you're trying to do... I hope you keep it in another room. Once it's running, it runs fairly quietly... as far as servers go, but the initial startup is like a jet engine (see my previous post about it) and the fans are still fairly noisy.

Hope this helps!
 

ccclapp

New Member
Aug 27, 2017
7
0
1
120
Not all memory works on the server. It has some quirks that I found out by trial and error. I'm sure there is better documentation out there, but you can't fill all of the memory slots unless it is a specific type. I was too cheap to look it up and buy exactly that type. I just bought whatever was cheaper (slower) RAM, but it meant I had to fill the memory in specific banks and not all of them. Not too bad, but I have 192GB of RAM in one, and 128GB of RAM in the other, but it's (slightly) slower. It's not a problem for home use though.
Did you buy from the vendor listing this sale and was the memory from them fully compatible? I was intending to buy 16x8=128GB expecting to be able to add another 8x8 later if desired. Sounds like I need to confirm the RAM in the unit will allow this. Thanks for the heads up!

The IPMI on both has been flaky. I keep mine up 24/7, and after a few weeks, it seems to stop working. The only way to make it work is by power cycling it. I don't have too much of a problem with it though because once it's running, I don't need the IPMI... but it does get annoying once in a while, when I need it...
Do you think this was unique to your unit, or have you heard this is an issue with this model/line?


Just a bit of warning for you - it does use quite some power, so don't be surprised if you see your electricity bill jump.
Is that due to the CPU or the motherboard/box, or all of the above? I'm thinking of a higher-speed v2 CPU, e.g. 2670/2680/2690 v2. I have not run true server configs, just server OSes on workstations. What level of electricity use/bill are we talking? Thanks again!

As for using it for ESXi... I think it's fine for ESXi. But I'm not sure I would advise it for what you're trying to do...
What do you mean by that? What do you advise against? This is fairly new to me...


I hope you keep it in another room. Once it's running, it runs fairly quietly... as far as servers go, but the initial startup is like a jet engine (see my previous post about it) and the fans are still fairly noisy.
It will be in a rack in a closet in my office. I don't mind modest "white noise".


Are you able to tell me the basics on my prior PCIe question? Between what's built in, what's on expansion ports and PCIe, should I be fine with what I'm trying to connect?

Thanks for your help. I appreciate it!
 

Biren78

Active Member
Jan 16, 2013
550
94
28
Why would you leave the Java KVM up for weeks 24/7? That's giving someone local admin to your server.
 

niftykc

Member
May 18, 2016
33
8
8
39
Did you buy from the vendor listing this sale and was the memory from them fully compatible? I was intending to buy 16x8=128GB expecting to be able to add another 8x8 later if desired. Sounds like I need to confirm the RAM in the unit will allow this. Thanks for the heads up!
I bought two units from the vendor here. He was good, and I would buy from him again. The memory that came with the units was fine, but not enough for me. I bought additional memory from another guy here on STH, and it all worked, but not all at once. I was trying to get one server up to 384GB of RAM, but it wouldn't boot with 384GB no matter what I did... I tried multiple slots, BIOS configurations, etc. It just didn't like more than 192GB of RAM. Keep in mind, though, that the memory I was using was not what is considered the "norm" for the server.

Do you think this was unique to your unit, or have you heard this is an issue with this model/line?
I'm not sure, but this has been happening on both of the servers I've bought, so I would guess that it may be related to the server/BIOS. It hasn't happened lately though, so it might have been just a fluke.

Is that due to the CPU or MB/Box or all of the above. Im thinking of a higher speed v2 CPU eg 2670/2680/2690 v2. I have not run true server configs, just server os on Wk Stations. What level elect use/bill are we talking? Thanks again!
It's pretty much all of the above. You're not buying a power-efficient server. You're buying a good, solid workhorse, and it's not going to be super energy efficient. This is the older line, so it tends to be more power hungry, and given it's a server, it has two procs with a higher TDP and fans that are constantly running.

For the past couple of weeks, each server has been using 142W-158W. Keep in mind that both of these servers are on 24/7, have 8 SATA HDDs, and aren't heavily used. I would expect your use case to use more power, since you want to run 10 VMs and a GPU as well.
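To put that draw in perspective, a quick back-of-the-envelope yearly cost for a constant ~150W load (the $0.13/kWh rate is just an assumed US-average figure):

```shell
# 150W running 24/7: kWh per year, then cost at $0.13/kWh
awk 'BEGIN { kwh = 150/1000 * 24 * 365; printf "%.0f kWh/yr, $%.2f/yr\n", kwh, kwh*0.13 }'
# prints: 1314 kWh/yr, $170.82/yr
```

So each box adds very roughly $15/month at that rate; scale by your local rate accordingly.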

What do you mean by that? What do you advise against? This is fairly new to me...
I'm not sure *I* would do what you're doing. I've run workloads like this, and I've noticed that my bottleneck has always been the disk I/O. I don't know if 2 mirrored SSDs will perform well enough for you when you're throwing Photoshop in there. Again, it's a matter of use case.

It will be in a rack in a closet in my office. I don't mind modest "white-noise"

Are you able to tell me the basics on my prior PCIe question? Between what's built in, what's on expansion ports and PCIe, should I be fine with what I'm trying to connect?

Thanks for your help. I appreciate it!
From what I can tell, you should be fine with what you're trying to add via PCIe. There are two riser cards in the server that should be able to handle whatever you put in there, unless you put in a GPU and really push it.

Hope this helps!
 

niftykc

Member
May 18, 2016
33
8
8
39
Why would you leave JAVA KVM up for weeks 24/7? Giving someone local admin to your server?
Ahhh... I need to work on my English...

Oops! I meant that the server runs 24/7, not that I connect to the Java KVM 24/7.

What happens is that the IPMI stops listening on the management IP and port. It's like it just stops. I try to connect, and it times out. I know the address is correct, as I've connected to it before, but it just won't work. The only way to recover the IPMI is to power cycle the server.
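For what it's worth, a wedged BMC can sometimes be revived in-band without a full power cycle (requires ipmitool and the kernel IPMI drivers; whether it helps on this particular board is an open question):

```shell
# Load the local IPMI interface drivers
modprobe ipmi_devintf ipmi_si

# Ask the BMC to cold-reset itself from the host OS
ipmitool mc reset cold
```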
 

Biren78

Active Member
Jan 16, 2013
550
94
28
Are you using the newest fw? I've seen servers do that before. Fixed with new fw.