no VT-d all in one box?

bmacklin

Member
Dec 10, 2013
96
5
8
Hello STH!

I bought a TS140 ThinkServer to get a home lab going. I didn't really research it, but it was a good deal (still is; $233 at Amazon right now). What I wanted was a NAS and a hypervisor all running on the same machine, so I can store all of my files and have the CPU be useful for experimenting with different VM images, etc. I'm on a low budget, and one of my goals was to consolidate all of my external HDDs into this machine and not have to spend any more on HDD storage. I have 2x1TB, 1x1.5TB, 1x500GB, and a 128GB SSD that I could potentially use for this build.

I started evaluating Windows Server 2012 R2, and while I like the ease of it, Storage Spaces performance with parity is awful, and I don't care for the added overhead of the OS itself; I'd like something more minimal, which led me here.

Unfortunately I realized that my i3-4130 CPU does not support VT-d, so I cannot follow the guide for an all-in-one solution. I could try to buy an E5-1220 V3 processor, but I want to keep my costs as low as possible - I still have to spend $200 more to max out the RAM! Not to mention the E5 draws more power than the i3, so I may need to upgrade the PSU as well.

I have been doing some reading on Raw Device Mapping (RDM), and I don't know if I should do that or if I should make each disk a VMDK and make it visible to OmniOS. Is there another way? How safe and easy is it to add or remove disks with either of these two approaches, and how easy is it to recover from errors?

Or should I just return my server and build it myself?
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
I think you will need to look at E3 v3 processors rather than E5 Xeons. E5s are not supported on that model.

As the TS140 appears to have no inbuilt video on the motherboard, you will either need to add a video card if you want to use an E3-1220v3, or get an E3-1225v3, which has inbuilt video like the i3 does (all the E3-12X5 models do). The E3-1225v3 and E3-1220v3 are 4-core with no hyperthreading; for a bit more you can get an E3-1245v3, which has hyperthreading.

The VMDK vs RDM choice really depends on what you want to do with the disks. I use RDM, which is just setting up a mapping file that ESXi understands, holding the physical drive details and pointing to the device. The drive is then formatted in the guest OS using whatever file system you choose that the OS can handle. One big advantage is that the OS can then manage the drive using its own tools without the ESXi layer getting in the way, and the drive is transportable to other machines with the same OS without needing ESXi installed. I can unplug my RDM-mapped Windows server disks from my server and put them in any Windows machine that handles NTFS. VMDKs also had a 2TB size limit, but this may have changed by now, as I am not current with the latest 5.1 updates or what 5.5 brings to the table.
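
For reference, creating the mapping file is a one-liner from the ESXi shell. A rough sketch - the device name and datastore path here are just made-up examples, and you would use -r instead of -z if you want virtual compatibility mode:

# find the device name of the physical disk
ls -l /vmfs/devices/disks/

# create a physical-mode RDM pointer file on a datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST31000528AS______________5VP8XXXX /vmfs/volumes/datastore1/storage-vm/rdm-disk1.vmdk

You then attach the resulting .vmdk to the VM as an existing disk.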

The advantage of using VMDK is that you can use some extra features in ESXi like snapshots, which you cannot with RDM. RDM also forces you to use the whole volume, so you cannot partition it.
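
If you do go VMDK, snapshots can be driven from the shell as well as the GUI. Roughly like this - the VM ID of 42 is just an example, so list yours first:

# list VMs and their IDs
vim-cmd vmsvc/getallvms

# create a snapshot: vmid, name, description, include memory, quiesce
vim-cmd vmsvc/snapshot.create 42 "pre-change" "before disk shuffle" 0 0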

As for getting a new CPU versus starting from scratch and building a custom machine, I think you will be hard-pressed to find a VT-d-capable system with 32GB of RAM for the same price, considering you have already got the core of it at that very cheap price.

RB
 

bmacklin

Member
Dec 10, 2013
96
5
8
I think you will need to look at E3 v3 processors rather than E5 Xeons. E5s are not supported on that model.
Sorry, I meant E3.

The VMDK vs RDM choice really depends on what you want to do with the disks. I use RDM, which is just setting up a mapping file that ESXi understands, holding the physical drive details and pointing to the device. The drive is then formatted in the guest OS using whatever file system you choose that the OS can handle. One big advantage is that the OS can then manage the drive using its own tools without the ESXi layer getting in the way, and the drive is transportable to other machines with the same OS without needing ESXi installed. I can unplug my RDM-mapped Windows server disks from my server and put them in any Windows machine that handles NTFS. VMDKs also had a 2TB size limit, but this may have changed by now, as I am not current with the latest 5.1 updates or what 5.5 brings to the table.
Cool, this is what I wanted to know - that if I got rid of the ESXi hypervisor I could still take my ZFS data with me. Have you actually done this? (Sounds like you have.)
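
If I'm understanding it right, moving the pool later would just be the usual export/import dance; something like this, assuming a pool named tank:

# inside the OmniOS VM, before pulling the disks
zpool export tank

# on the new machine, after moving the disks over
zpool import          # lists importable pools
zpool import tank     # or: zpool import -f tank if it wasn't exported cleanly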

The advantage of using VMDK is that you can use some extra features in ESXi like snapshots, which you cannot with RDM. RDM also forces you to use the whole volume, so you cannot partition it.
That is fine for my purposes.

Thanks!
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
Cool, this is what I wanted to know - that if I got rid of the ESXi hypervisor I could still take my ZFS data with me. Have you actually done this? (Sounds like you have.)
I have taken RDM'd disks out of the server, rebuilt it from scratch (new ESXi install), remounted the disks, recreated the RDM files, and linked them to a VM, and the data was all there. I have also taken a single RDM disk out of the ESXi server, put it in a Windows server, and read the data off it.

I have a non-virtualized Solaris storage server that passes its RAID LUNs to ESXi, which then uses RDM to make them available to my VMs.

RB
 

cptbjorn

Member
Aug 16, 2013
100
19
18
I've rebuilt my ESXi host a number of times and haven't even needed to rebuild the RDMs - I have the files stored in the VM directory and I just had to add the VM with them to inventory after the rebuild and fire it up.
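
For anyone following along, the re-add is just registering the existing .vmx back into inventory; something like this (the path is only an example):

# register the VM without recreating it
vim-cmd solo/registervm /vmfs/volumes/datastore1/storage-vm/storage-vm.vmx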

I have a 4x4TB mdadm RAID5 array that started as a single 2TB disk in an N40L MicroServer with CentOS installed and grew to 4x2TB disks over time; I then reinstalled with ESXi and connected the disks with RDM. These disks were later moved to a Supermicro motherboard on the onboard SATA connectors and ran with RDM on ESXi 5.0, 5.1, and 5.5, were upgraded to 4x4TB disks somewhere in there, and eventually moved to a DL180 G6, where they are running through an M1015 with the expander backplane on ESXi 5.1/5.5.
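
Each time the disks moved, the CentOS guest just reassembled the array from the md superblocks on boot; if it ever needs a nudge, it's roughly:

# scan all disks and assemble any arrays found in their superblocks
mdadm --assemble --scan

# check the result
cat /proc/mdstat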
 

bmacklin

Member
Dec 10, 2013
96
5
8
I've rebuilt my ESXi host a number of times and haven't even needed to rebuild the RDMs - I have the files stored in the VM directory and I just had to add the VM with them to inventory after the rebuild and fire it up.

I have a 4x4TB mdadm RAID5 array that started as a single 2TB disk in an N40L MicroServer with CentOS installed and grew to 4x2TB disks over time; I then reinstalled with ESXi and connected the disks with RDM. These disks were later moved to a Supermicro motherboard on the onboard SATA connectors and ran with RDM on ESXi 5.0, 5.1, and 5.5, were upgraded to 4x4TB disks somewhere in there, and eventually moved to a DL180 G6, where they are running through an M1015 with the expander backplane on ESXi 5.1/5.5.
Oh wow. Thanks for sharing. This gives me so much more confidence to try it this way!
 

Aluminum

Active Member
Sep 7, 2012
431
46
28