Fujitsu TX1320 M3 - Cheap low power server (barebone)


Albert67

New Member
Oct 11, 2021
27
3
3
ConnectX-3 is always nice if you are happy with SFP+
I tried installing a Mellanox ConnectX-3 546SFP+, but the power consumption of my server went from 11.8W to 18.0W.
I will try a ConnectX-4, which should have lower power consumption....
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
877
485
63
49
r00t.dk

Albert67

New Member
Oct 11, 2021
27
3
3
I have changed the mainboard (now all the mainboard's Ethernet ports are working and active), the CPU (now a Xeon E3-1240 v6), and increased the RAM (now 64GB). With the backplane and 1 SSD I get 13.6W minimum. I can save 1.4W by removing the backplane, which I don't really need considering that the onboard SATA ports are not hot-swap.
With the X3 I get 19.4W.
With the X4 I get 21.6W.
With a Supermicro AOC-STG-i4S (based on the Intel XL710-AM1) I get 19.6W.
But the Supermicro is a 4-port card (the others have 2 ports), and this is very interesting because this machine will be a Proxmox virtual server, so I can use it to create a virtual 10GbE switch with 5 ports (1 virtual + 4 physical), which is more than enough to connect my workstation to my 3 servers.
I just want to test a few other cards and then I will make a decision... I will keep you posted.
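For what it's worth, that 5-port virtual switch can be sketched as a plain Linux bridge in Proxmox's /etc/network/interfaces; the interface names and addresses below are placeholders and will differ per system:

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp5s0f0 enp5s0f1 enp5s0f2 enp5s0f3
        bridge-stp off
        bridge-fd 0
```

Any VM NIC attached to vmbr0 then shares the bridge with the four physical ports, so the host effectively behaves as a 5-port switch.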
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
877
485
63
49
r00t.dk
You should test your power consumption with the ports active - just having 4 ports doing nothing should not cost more power than 2 ports doing nothing - so try to test with activity on both ports.

But it's great that you got it tested and can see that the ConnectX-4 is not lower power than the ConnectX-3 :)
 

Albert67

New Member
Oct 11, 2021
27
3
3
You should test your power consumption with the ports active - just having 4 ports doing nothing should not cost more power than 2 ports doing nothing - so try to test with activity on both ports.

But it's great that you got it tested and can see that the ConnectX-4 is not lower power than the ConnectX-3 :)
The reason I am testing with no ports connected is.... that I don't care (!) what the power consumption is when the system is working. I am only concerned about the power consumption when the system is idle, because this machine will be on 24/7; the other machines that I will connect to the high-speed network (a workstation and 2 NAS) will be mainly off, and I will switch them on only when I need to use them.
That said, I agree that a "complete" test should be done with the ports connected, and maybe I will do it later (at present it is not possible because I don't have a switch to connect all of them).
 
Last edited:

Bjorn Smith

Well-Known Member
Sep 3, 2019
877
485
63
49
r00t.dk
I am only concerned about the power consumption when the system is idle
That's fine - but network cards use more power when a cable is plugged in, even if very little traffic is sent on the wire, so your tests will be more realistic with cables attached - but whatever makes you happy, I guess :)
 

TomKraut

New Member
Sep 25, 2023
7
0
1
I recently got one of these nice little machines and printed some of the caddies that da8833 so generously provided. However, I do have a serious problem with drive temperatures: after about 20 minutes idle, the four drives I installed in the backplane reached almost 60°C... These are HGST 10K 1.2TB drives with a height of 15mm, so there is practically no space for airflow. I pulled two of the drives for now, leaving one empty slot between them, and they are now idling around 50°C, which still seems like a lot.

I was wondering if there is anything I can do about this. Has anyone faced similar issues?
 

Albert67

New Member
Oct 11, 2021
27
3
3
I recently got one of these nice little machines and printed some of the caddies that da8833 so generously provided. However, I do have a serious problem with drive temperatures: after about 20 minutes idle, the four drives I installed in the backplane reached almost 60°C... These are HGST 10K 1.2TB drives with a height of 15mm, so there is practically no space for airflow. I pulled two of the drives for now, leaving one empty slot between them, and they are now idling around 50°C, which still seems like a lot.

I was wondering if there is anything I can do about this. Has anyone faced similar issues?
With my 7.2K drives and a similar caddy that I got from eBay (a little bit different and maybe with better airflow), I have not had any overheating problems.
If you don't really need the hot-plug capability, you can remove the backplane and use an "SFF-8643 to 4x SFF-8482" cable to connect the drives; this will provide better airflow.
Another possibility is to replace the backplane fan with a more powerful one.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
877
485
63
49
r00t.dk
These are HGST 10K 1.2TB drives with a height of 15mm, so there is practically no space for airflow. I pulled two of the drives for now, leaving one empty slot between them, and they are now idling around 50°C, which still seems like a lot.
I think most 10K 2.5-inch drives are supposed to be run with high airflow, since they run very hot.

So I would do as @Albert67 suggests - put in a more powerful fan, which = more noise - or better, exchange them for SATA SSDs - you can get 1TB drives for around 30-50 EUR if you search a little.

That will give you faster drives and much better airflow, since most SATA SSDs are 7mm.
 

TomKraut

New Member
Sep 25, 2023
7
0
1
I think SSDs are the way to go here, which is a shame, since it means that the eight 10K HDDs I got for this server will be sitting around gathering dust. What frustrates me is that there seems to be no way to control the fans in this server. The SYS2 fan is twiddling its thumbs at 1k rpm while my HDDs are burning up, and it could go up to 4-5k rpm.

Maybe I will go with what Albert67 suggested, get a cable and install four of the drives in alternating slots.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
877
485
63
49
r00t.dk
I think SSDs are the way to go here, which is a shame, since it means that the eight 10K HDDs I got for this server will be sitting around gathering dust.
I agree it's a shame, but to be honest - it's a "tiny" server and I don't think it's really built for high-powered devices like 10K drives.

But have you tried running a fan test to see if calibration can fix your slow-spinning fans?


Another suggestion is to just accept that the drives run hot - perhaps the temperature sensors in the drives are either non-existent, or report the temperature as OK.

If the sensors report the temperature as OK, the server is doing the "right" thing by not spinning up the fans. Take this drive, for example: https://documents.westerndigital.co...-sas-series/data-sheet-ultrastar-c10k1200.pdf

It can run at 55°C ambient, which means the drives themselves will most definitely be hotter - so perhaps the temperatures are OK, and you are just used to drive temperatures needing to be lower?
 
Last edited:

TomKraut

New Member
Sep 25, 2023
7
0
1
so perhaps the temperatures are OK, and you are just used to drive temperatures needing to be lower?
Possible. One of the drives has a trip temperature of 60°C; the others' is 85°C. I replaced the one with the lower trip temperature. There is absolutely nothing important on these disks, and I got them very cheap (10€ each), so I will see how it goes for a while. They are configured as a mirrored ZFS pool, so nothing too bad should happen all at once... I will have to replace my PLA-printed caddies, though...
 

Albert67

New Member
Oct 11, 2021
27
3
3
Possible. One of the drives has a trip temperature of 60°C; the others' is 85°C. I replaced the one with the lower trip temperature. There is absolutely nothing important on these disks, and I got them very cheap (10€ each), so I will see how it goes for a while. They are configured as a mirrored ZFS pool, so nothing too bad should happen all at once... I will have to replace my PLA-printed caddies, though...
For 3D-printed caddies, go with ABS. PETG is OK, but ABS is better if you are "playing" with temperatures this high.
 

homeboy

New Member
Aug 20, 2023
12
1
3
Thanks everyone for providing a lot of info here regarding this hardware. I got interested in building a NAS setup around this small server and found, from a local dealer, a Primergy TX1320 M2 server with a Xeon E3-1230v5 processor, no memory and 4x 300GB SAS drives. I did not need the SAS drives, as I'm going to put SSDs in it anyway. We agreed on an 80€ price without them (but still with the original trays).

The case was rather dusty inside, so I decided to just take everything apart (well, I left the PSU alone - just compressed air from the outside for that) and spent one full evening cleaning it up to pristine condition. I have to say that this case is an engineering marvel: so many well-thought-out details, green-marked levers, and most things can be opened without tools. This is also the first "PC build" I have done in about 15 years, so I got a bit sentimental with all this :)

Now it will take about two weeks for the ECC memory I ordered to arrive, so I just have to wait. But I can do some planning in the meantime.

The setup also included the FBU option with a battery for the RAID controller, which was a nice bonus that I only found out about after buying it. I was originally thinking of dropping the RAID controller from the setup, as I assumed it would be kind of useless with ZFS. But now, reading a bit more about the EP400I's functionality, I'm under the impression that the following features would be gained even in JBOD mode (compared to the motherboard's drive controller):
  1. Hot-pluggable drives
  2. Write caching (?)
  3. With the FBU option, the ability to finish writes to the disks in case of a power failure
Would there be other benefits? The downside, of course, is the extra power consumption, which I believe could be 10+W.

The first one is not really necessary for me, but the 2nd and 3rd are interesting. Does anyone have knowledge or opinions on the pragmatic side of this, when the setup would be ZFS + SSD drives? Does the caching actually give a real, measurable improvement? Does the FBU really add any benefit with ZFS and copy-on-write?
 

TomKraut

New Member
Sep 25, 2023
7
0
1
If you want to build a NAS, using the RAID controller is a bad, bad idea (in fact, using a RAID controller these days is a bad idea, period). There are countless explanations on the internet as to why, but it basically comes down to this: ZFS on a CPU from the last 15 years is much better than anything RAID controllers can do. And ZFS needs direct access to the disks, which is impossible with most RAID controllers, even in JBOD mode.

I don't know if the EP400I can be flashed to HBA mode, but a quick Google search did not turn up anything.

As a side note, I think that while these small servers are awesome, they are a bad choice for a NAS. If you want a lot of capacity, you are out of luck without 3.5" drives. And if you want to use SSDs, there are a lot of smaller, quieter, less power-hungry options available.
 

Albert67

New Member
Oct 11, 2021
27
3
3
If you want to build a NAS, using the RAID controller is a bad, bad idea (in fact, using a RAID controller these days is a bad idea, period). There are countless explanations on the internet as to why, but it basically comes down to this: ZFS on a CPU from the last 15 years is much better than anything RAID controllers can do. And ZFS needs direct access to the disks, which is impossible with most RAID controllers, even in JBOD mode.

I don't know if the EP400I can be flashed to HBA mode, but a quick Google search did not turn up anything.

As a side note, I think that while these small servers are awesome, they are a bad choice for a NAS. If you want a lot of capacity, you are out of luck without 3.5" drives. And if you want to use SSDs, there are a lot of smaller, quieter, less power-hungry options available.
I agree that nowadays RAID controllers are a no-go!
I can confirm that the EP400 can be flashed to HBA mode - I did it with mine.
I disagree that the TX1320 is a bad solution for a NAS (OK, the M1 and M2 probably are, because they are limited in the number of drives). I have 2 NAS running on TX1320 M3s: one is running ZFS and the other mergerfs + SnapRAID. And I am planning to build a third, based on 12 SSDs.
I specifically chose this model because it has the power-out port, to which I have connected a NetApp DS2246 (which I paid about 80 euros for, including shipping). This gives me a compact solution with a maximum of 36 2.5" disks (12 in the TX1320 chassis + 24 in the DS2246).
The NAS (TX1320 + DS2246) can be switched on via Wake-on-LAN.

As an alternative solution there is the bigger brother, the TX1330 M3, which uses the same motherboard and supports "Max. 12x 3.5-inch or 24x 2.5-inch".
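Since Wake-on-LAN came up: the magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast. A minimal sender is sketched below; the MAC in the example is a placeholder:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: 6x 0xFF + the MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network (UDP port 9)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# Example with a placeholder MAC:
# send_wol("00:11:22:33:44:55")
```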
 
Last edited:

homeboy

New Member
Aug 20, 2023
12
1
3
Thanks Tom for your input. The EP400I contains an LSI SAS 3018 chip, and I could find some info about it. Whether this Fujitsu can be reflashed with something smarter, I don't really know. To be honest, my understanding before today was that IT/JBOD mode is basically the same as HBA mode, but I stand corrected. But I'm happy to rip it out of the server and just use the motherboard controller instead, or even get a separate HBA card if needed.

Regarding the choices, my use case is not really a typical DataHoarder scenario. I understand that this server would not fit that kind of purpose very well. However, my wish list is as follows:
  • <50W idle power consumption
  • Fairly quiet
  • Compact, height lower than 40cm
  • Around 3-5TB of storage, with a high level of data integrity and some hardware redundancy. It will still be backed up to the cloud, so I'm thinking RAIDZ1 should be OK. I can lose a few days of (new) data in case my house burns down; the important thing is to have something like weekly backups done offsite.
  • The possibility to run some low-performance tasks (containers or jails) on the server in addition to storage (a git service like Gitea, a CI runner, PostgreSQL, some simple webapp projects of my own, etc.)
  • ECC memory
  • No 3.5" hard drives; those would be a waste of space in the enclosure.
  • Slots for 4x 2.5" SSD drives (no room for expansion needed, this really is enough).
  • Fully headless, with full remote management (including shutdown and bootup of the server)
  • The possibility of adding a 2.5 or 10GbE card later
  • The possibility of adding 2x NVMe SSDs on a PCIe card later for faster block storage. (However, I think that SATA SSDs and enough RAM will be OK for all the use cases I can think of right now.)
I think this wish list will reduce the number of options quite considerably. I was looking at Supermicro X11-based motherboards (like the X11SSH-F), but those are not available at the same price, and I would have had to buy the case, PSU, etc. separately. In any case, I'm interested if there are other options. I think the ECC requirement is the one making this a bit of a challenge, even if I would be OK with fewer PCIe slots. I am planning to have 4x 4TB drives, which is more than the max I actually need (5TB), but 4x 2TB would be too close to the limit considering ZFS recommendations.
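As a back-of-envelope check on the sizing (this ignores ZFS metadata/padding overhead and the usual advice to keep pools well under ~80% full):

```python
def raidz1_usable_tb(num_drives: int, drive_tb: float) -> float:
    """Rough usable capacity of a RAIDZ1 vdev: one drive's worth of
    capacity goes to parity."""
    if num_drives < 3:
        raise ValueError("RAIDZ1 is normally built from 3 or more drives")
    return (num_drives - 1) * drive_tb

print(raidz1_usable_tb(4, 4.0))  # 4x 4TB -> 12.0 TB before overhead
print(raidz1_usable_tb(4, 2.0))  # 4x 2TB -> 6.0 TB, tight against a 5TB target
```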
 

homeboy

New Member
Aug 20, 2023
12
1
3
I can confirm that the EP400 can be flashed to HBA mode - I did it with mine.
Thanks for confirming this. I might still skip the separate controller for my use case, if I can just connect the 4x backplane directly to the motherboard and that works without issues. I think the motherboard controller also contains some RAID functionality - do you know any details of that, Albert, and whether the RAID functionality can be disabled?

Unfortunately, I can't really check anything in my server's BIOS myself yet, until I get the memory for it (early next month).
 

Albert67

New Member
Oct 11, 2021
27
3
3
Thanks for confirming this. I might still skip the separate controller for my use case, if I can just connect the 4x backplane directly to the motherboard and that works without issues. I think the motherboard controller also contains some RAID functionality - do you know any details of that, Albert, and whether the RAID functionality can be disabled?

Unfortunately, I can't really check anything in my server's BIOS myself yet, until I get the memory for it (early next month).
The integrated controller is a normal SATA III (6 Gb/s) controller with soft RAID that can be disabled in the BIOS.
The EP400 is a SAS 12 Gb/s controller.

Which one to use depends on the type of disks you want to use; if you stay with mechanical drives, the integrated one is OK.
If I were you, I would try to get some used 2TB enterprise 2.5" disks; you can find them on eBay at a reasonable price.
With four of them you will have a 6TB RAIDZ system, which should be what you want.
From my personal experience, I would add a small SSD for write cache; it will increase performance without spending too much.
Again, you can look for enterprise SSDs (you can find 120GB for about 30-35 euros) - they are OK for the job - or you can place an NVMe disk in a PCIe slot.
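For the write-cache idea, the relevant zpool commands look roughly like this; the pool name "tank" and the device paths are placeholders, and note that a ZFS log device (SLOG) only accelerates synchronous writes, while an L2ARC cache device only helps reads:

```
# Add a small SSD as a separate log device (SLOG) to a pool named "tank":
zpool add tank log /dev/disk/by-id/ata-EXAMPLE_SSD
# Optionally add another SSD as a read cache (L2ARC):
zpool add tank cache /dev/disk/by-id/ata-OTHER_SSD
zpool status tank
```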
 
Last edited: