Fujitsu TX1320 M3 - Cheap low power server (barebone)


Albert67

New Member
Oct 11, 2021
27
3
3
I forgot to say that RAIDZ needs memory! Count on 1 GB of RAM for each TB of disk (including the parity one).
So in the example above with 4x 2 TB disks you will have 8 TB, so considering your OS you will need 10-12 GB of RAM.
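As a quick sanity check, the arithmetic in Python (the 2 GB OS overhead here is my own assumption, adjust for your setup):

Code:
# Rule of thumb from above: ~1 GB of RAM per TB of raw disk.
disks, size_tb = 4, 2
os_overhead_gb = 2                       # assumed OS/services overhead
ram_gb = disks * size_tb + os_overhead_gb
print(f"Suggested RAM: ~{ram_gb} GB")    # ~10 GB, in line with the 10-12 GB estimate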
 

homeboy

New Member
Aug 20, 2023
12
1
3
The integrated controller is a normal SATA III (6 Gb/s) one with soft RAID that can be disabled in the BIOS.
The EP400 is a SAS 12 Gb/s controller.

Which one to use depends on the type of disk you want; if you stay with mechanical disks the integrated one is fine.
If I were you I would try to get some used 2 TB enterprise 2.5" disks; you can find them on eBay at a reasonable price.
With four of them you will have a 6 TB RAIDZ system, which should be what you want.
From my personal experience I would add a small SSD for write cache; it will increase performance without spending too much.
Again you can look for enterprise SSDs (you can find 120 GB for about 30-35 euros) which are fine for the job, or you can place an NVMe disk in a PCIe slot.
Thanks for confirming the integrated controller setup.

No plans to go with mechanical disks. I've decided to go all in on the SSD-only approach. That is also why I'm not looking at cases with 3.5" slots. The local reseller I bought this from actually also had the larger-case TX1330 model on the shelf. This smaller case was specifically attractive to me because I can set it up in a place where a larger case (over 40 cm tall) will not fit.

Now, what I've understood about ZFS is that there needs to be some overhead reserved, i.e. you should not cross the 80% utilization limit. Using the ZFS Capacity Calculator at WintelGuy.com I get a practical usable space of 4.49 TB / 4.09 TiB for 4x 2 TB drives in RAIDZ1. Now I said that I need 3-5 TB of storage, but in this case I'd err on the side of caution and get at least 5 TB of usable space, as sizing up later is going to be difficult with ZFS. Of course I could set up 5x 2 TB in a RAIDZ1 configuration and get a usable space of 6.18 TB / 5.62 TiB. That would be an alternative, yes, but the 4x 4 TB setup is what I'm leaning towards right now, partially because the price per TB is more attractive at that size. It would give a practical usable size of 8.17 TiB, which is more than I need but leaves some headroom that should also mean less of a performance hit with ZFS and probably have a positive impact on SSD endurance as well.
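For anyone who wants to play with the numbers, here is a rough sketch of the arithmetic. The ~6.5% overhead fudge factor is my own approximation of metadata/padding/slop for a 4-wide pool; the real overhead varies with pool width, ashift and recordsize, which is why WintelGuy's calculator is more accurate:

Code:
# Rough RAIDZ1 sizing: one disk's worth of parity, an assumed fixed
# metadata/padding overhead, and the "stay under 80% full" guideline.
def raidz1_practical_tb(disks, size_tb, overhead=0.0645, max_util=0.80):
    data_tb = (disks - 1) * size_tb        # one disk's capacity goes to parity
    return data_tb * (1 - overhead) * max_util

for disks, size in [(4, 2), (4, 4)]:
    tb = raidz1_practical_tb(disks, size)
    print(f"{disks}x {size} TB RAIDZ1: ~{tb:.2f} TB (~{tb * 1e12 / 2**40:.2f} TiB)")
# -> ~4.49 TB / ~4.08 TiB and ~8.98 TB / ~8.17 TiB, close to the calculator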

With RAM I'm going with the max 64 GB that the board takes, so that should not be an issue.
 

Albert67

New Member
Oct 11, 2021
27
3
3
About " sizing up later is going to be difficult with ZFS. " in practice it's very easy: you just need to backup data, destroy the pull. recreate a bigger pool, restore data.

I don't see how you can get "a positive impact on SSD endurance", considering that every time you make a write it affects all disks, so the "consumption" of SSD cells is the same on all disks. And that is the bigger disadvantage of using SSDs: they are much more likely to all fail at the same time compared to HDDs. This can be mitigated by having a backup, or by replacing the disks with new ones on a regular basis, for example changing one disk in the array every year...
 

homeboy

New Member
Aug 20, 2023
12
1
3
About " sizing up later is going to be difficult with ZFS. " in practice it's very easy: you just need to backup data, destroy the pull. recreate a bigger pool, restore data.

I don't see how you can get "and also probably have a positive impact on the SSD endurance." considering that everytime you make a write this is affecting all disks, so the "consumption" of ssd cells is the same on all disk. And that is the bigger disadvantage of using SSD :they are much more likely to fail all in the same time compared to HDD. This can be mitigate by having a backup or by start to update the disk to new one on a regular basis. For example change one disk in the array every year...
It is easy if you have the parallel hardware. It's a bit of a nuisance if you don't. I plan to have cloud backups, but I would not willingly create a situation where the only remaining copy of my data is in one place, in the cloud only, even if it is temporary. It will also take some time to transfer terabytes of data from the cloud back to the home server. I think a lot of people go with Unraid because of the ease of adding new drives one at a time. Like many, I've decided that for the added benefits it is worth going with ZFS even with the cons. My plan is to run this storage server for the next 5-7 years, if it survives without failures for that long. I would be very happy if I don't need to reconfigure pools and go through any hassle during the server lifecycle. So that is why I'd rather (and it's just a personal preference) get a bit of extra space than run the risk of needing to "repool".

What I meant by the positive impact on endurance is that an SSD's expected TBW endurance goes up with the drive size. As an example, the Samsung 870 QVO 2TB has an expected endurance of 720 TBW and the 870 QVO 4TB has double that at 1440 TBW. Now let's assume that in my use case I would be writing/rewriting 2 TB of data to the NAS each month. With four drives (one being parity), that would mean around 1 TB written per drive per month. Thus the expected endurance life of the 4TB drives would double (1440 weeks) compared to the 2TB drives (720 weeks). Do correct me, anyone, if there is any fault in my logic!
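A back-of-envelope version of that arithmetic (this ignores write amplification from padding, metadata and the SSD's own internals; the TBW figures are the 870 QVO numbers quoted above, and the 2 TB/month workload is my guess):

Code:
# In RAIDZ1 each logical write is striped over all disks (data + parity),
# so each disk sees roughly user_writes / (disks - 1) of raw writes.
def months_to_rated_tbw(tbw_tb, user_tb_per_month, disks):
    per_drive_tb = user_tb_per_month / (disks - 1)   # ~0.67 TB/drive/month here
    return tbw_tb / per_drive_tb

for size, tbw in [(2, 720), (4, 1440)]:
    print(f"{size} TB drive: ~{months_to_rated_tbw(tbw, 2.0, 4):.0f} months to rated TBW")
# -> ~1080 and ~2160 months; even the rougher "1 TB/drive/month" estimate gives
#    720 vs 1440 months (not weeks), so the 4 TB drives should last twice as long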

Albert, good that you brought up the possibility of multiple drives failing at the same time. I was actually under the assumption (from reading elsewhere) that SSDs would statistically not be as likely to fail at the same time as mechanical disks. I think I definitely need to understand this better before making any purchase decisions. If you have any good pointers on this, I would be happy to dig deeper into the subject!
 

homeboy

New Member
Aug 20, 2023
12
1
3
I mixed up months and weeks above, but I think it is still understandable :) In any case, my data on the NAS is going to be fairly static (personal photos, small business paperwork archive, etc.), so looking at the endurance figures for even the lower-end consumer devices, it seems highly unlikely that I will ever reach the write endurance limits. I'm guessing the SSD drives can still fail, but in my case probably for reasons other than reaching their write endurance. Now this is getting a bit off-topic, I would agree...

Back to the subject: getting 4x 16GB UDIMM memory sticks turned out to be even more challenging than I initially believed for these Fujitsus. I wanted to get the Crucial MTA18ASF2G72AZ-3G2R1R memory modules, as these are listed on Crucial's site as compatible specifically with the TX1320 M2. The order I placed for these came back after a few days with "no stock with any of our providers and no idea if there will be any more". Now I have ordered 4x Kingston KSM32ED8/16MR modules, and at least this time I got a "we should probably get these into stock in 10-16 days". These are not specifically listed as compatible with the TX1320, but I'm hoping they will be fine.
 

Albert67

New Member
Oct 11, 2021
27
3
3
The SSD cells will fail after a certain number of writes, so as you have said, in a scenario where you use the NAS more for storing static data (photos/videos/music/PDFs etc.) instead of dynamic data (virtual machine images, transaction databases etc.) you will be fine.
As I wrote in a previous post, I am also going to build a ZFS NAS based on 12 SSDs in a TX1320 M3.

About the memory: I have used some Kingston modules without problems on the TX1320 M3. At present I am not at home; next week I can list the memory types I have successfully used on the TX1320. The only ones I can confirm now are the Samsung M391A2K43BB1-CRCQ, which I got for about 40 euros a module.
 

kgold

New Member
Oct 26, 2023
7
1
3
Hi, new member here just checking in. I don't see any new member intro threads, so I'm just here to say hi while I get caught up on the reading.
I picked up four TX1320 M3 barebones and will hopefully be doing some interesting things with them.
 


3333

New Member
Jul 20, 2023
3
0
1
Hi guys! I have a stupid question.
I plan to replace the 2c/4t G4560 processor with some 4c/8t processor, like the 1245 v6. None of my applications require graphics processing, but it would be nice to preserve the possibility of connecting a monitor to the server once a year for troubleshooting.
I heard the iRMC module is responsible for displaying at least POST and the BIOS. I've also read somewhere (probably in this very thread) that the iGPU is not even connected to the VGA port on the motherboard. So now I'm not sure whether I need a processor with an iGPU or not.
Can anyone enlighten me on how this works?
 

kgold

New Member
Oct 26, 2023
7
1
3
The TX1320 M3 system board doesn't even power an iGPU if one is present on the CPU.
To get a display output (VGA or HDMI) you're going to need a GPU in a PCIe expansion slot. There's a legacy non-PCIe slot if you're desperate; otherwise it's the two x8 slots and the one x4 slot to choose from.
Hopefully that helps.
 

3333

New Member
Jul 20, 2023
3
0
1
I was troubleshooting new RAM issues a couple of weeks ago with a screen connected to the system board. I'm pretty sure it was able to display not only POST/BIOS/iRMC but also the operating system (in text mode). So if it wasn't the iGPU of the G4560, then the theory that the iRMC module is able to display graphics (at least text-based or low resolution) must be valid.

It would be nice if someone else could confirm as well, or if someone with a non-iGPU processor could test whether they can display a Linux prompt. I assume a lot of you have the 1220 v6, since it's the most common.
 

kgold

New Member
Oct 26, 2023
7
1
3
I have a 1270 v6 and the 15-pin display adapter; my RAM is still in transit. Any suggestions for an OS to try?
 

3333

New Member
Jul 20, 2023
3
0
1
I'm using Proxmox, but you can try anything. If it boots up to the OS installer and is able to display it, then that's enough.
 

TomKraut

New Member
Sep 25, 2023
7
0
1
The mainboard contains a G200 GPU. That is an ancient Matrox IP core integrated into the iRMC, afaik. To any OS, that is a basic display adapter, more than enough for a Linux prompt or even a Windows Desktop. You should even be able to install Windows drivers or use an X environment with it, if you want. No need for a dedicated GPU, unless you want to do something like video transcoding.

However, you only have an analog VGA output, so if you happen to have only modern displays, you will need an adapter that might cost more than a GPU...
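If you want to verify this from a running Linux system, something like this works (the exact lspci description string varies by firmware and model, so treat the "Matrox"/"G200" match as an assumption):

Code:
import subprocess

# List PCI display adapters; the iRMC's G200 core should show up here.
out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "VGA" in line or "Display" in line:
        print(line)   # e.g. "... VGA compatible controller: Matrox ... G200 ..."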
 

TomKraut

New Member
Sep 25, 2023
7
0
1
That... seems very cheap. I might be wrong, but I think those adapters are for connecting a PC with a DisplayPort output to a VGA monitor. They probably will not work the other way around.
 

kgold

New Member
Oct 26, 2023
7
1
3
I would think powering a CRT from a DisplayPort would be the "hard way round", and display from VGA should be a step down in voltage and current requirements.
In any case, I don't think the cable is a danger to either device, but I wouldn't try an HDMI cable if one exists. The iRMC should be able to take an ill-advised adapter cable?
 

TomKraut

New Member
Sep 25, 2023
7
0
1
There should be no danger to either device. But remember, you have to encode the analog signal from your PC into a digital signal for your monitor. That is a more complex task than taking a digital signal and converting it into an analog one. Furthermore, there is no power on a VGA connector (unlike DisplayPort or HDMI, which can provide some power), so if your adapters don't have a separate power input (mine is powered off an extra USB port) they are likely not going to work for your use case.
 

kgold

New Member
Oct 26, 2023
7
1
3
I assumed that the analog signal output was providing enough power for a small embedded converter.
 

kgold

New Member
Oct 26, 2023
7
1
3
The connectors seem to be friction-welded together, and I rather like the idea of having a spare if they work. Or if one works.