Help with Napp-it ZFS setup

Synthetickiller

New Member
Jul 16, 2011
25
0
0
Warning: My experience with linux is less than nothing! Please be kind.

After fighting with ESXi and OpenIndiana 148, I finally got everything installed. I now need to configure my ZFS pool in Napp-it, and I have no idea where to begin. I cannot find a walkthrough or guide for adding drives, or a decent explanation comparing the types of pools.

This is the hardware I have running ESXi:

Mobo: SUPERMICRO MBD-H8SGL-F-O Info
CPU: AMD Opteron 6134 2.3 GHz octo-core w/ Noctua HSF (testing 120mm and 92mm versions; the 120mm blocks the 16x slot)
Ram: DDR3 RDIMM 1066 64 GB (16 GB x4) (all same Samsung chips)
Raid Card: LSI 9211-8i flashed to IT firmware
PSU: 1kW Enermax Galaxy
Case: Silverstone TJ06
Hotswap bay: Supermicro CSE-M35T-1 (houses my storage array)
ESXi drive: Samsung 470 64 GB SSD, holds the VM for OpenIndiana as well.
Storage drives: 3x 3 TB Hitachi 7200 RPM
Other drives: 500 GB and 1 TB Samsung F3
I'm basically lost. Any help is appreciated.
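For three 3 TB disks the usual layouts are RAID-Z1, a three-way mirror, or a two-way mirror with a spare. A hedged sketch of the commands involved (the pool name `tank` and the `c#t#d#` device names are placeholders; run `format` to see your actual disk names):

```shell
# RAID-Z1: ~6 TB usable, survives one disk failure
zpool create tank raidz c3t0d0 c3t1d0 c3t2d0

# Alternative -- 2-way mirror (~3 TB usable, faster rebuilds)
# with the third disk as a hot spare:
# zpool create tank mirror c3t0d0 c3t1d0 spare c3t2d0

# Verify the layout and health of the new pool:
zpool status tank
```

Napp-it's pool creation page drives these same commands through the web UI, so the layout choice (mirror vs. raidz) is the real decision to make first.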
 

Synthetickiller

New Member
Jul 16, 2011
25
0
0
I'm not asking for a walkthrough on how to install ESXi, OpenIndiana, or Napp-it. I'm past that.

These walkthroughs are not at all user friendly for people who are not familiar with Linux, such as myself. They leave out important steps about how to make drives visible to Napp-it. This is my main concern.

I found this, which explains how to log in as root, but it does not work either. The "su-" command returns "bash: su-: command not found." Entering "su" doesn't grant me access either. I am not familiar with the command line, and googling how to format drives with GParted in OpenIndiana / Solaris yields nothing. This is why I am asking here. I'm wasting time on something fundamental.
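As an aside, the usual cause of `bash: su-: command not found` is a missing space: the dash is a separate argument to `su`, not part of the command name.

```shell
# `su-` fails because bash looks for a single command literally named "su-".
# The dash is an argument meaning "start a full login shell as root":
su -          # note the space; prompts for the root password

# Plain `su` switches user but keeps your current environment/PATH,
# which is why some admin commands may still not be found afterwards.
```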

How do I format drives in OpenIndiana? Do I still need to install a driver for my LSI 9211-8i, or was it already set up when I installed the OS?

I can't find answers to these questions.
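For reference, on OpenIndiana there is no GParted-style step: ZFS takes whole disks, and the driver for the 9211-8i (mpt_sas in IT mode) ships with the OS. A sketch of how one might check that the controller and disks are visible (output will vary by system):

```shell
# List the disks the OS can see (interactive menu; Ctrl-C to exit):
format

# Check that the LSI SAS2 driver is loaded:
modinfo | grep mpt

# No pre-formatting is needed -- whole disks are handed
# directly to `zpool create` (or to Napp-it's pool menu).
```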
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
Not trying to be an ass, but if it was really that difficult to get ESXi/OpenIndiana/Napp-it set up, and you can't log in as root, format drives, etc., which as you stated are fundamental tasks, do you think this is still a good solution for you?
What are you going to do when something goes wrong?

I decided that while I can get ZFS variants working, I don't feel comfortable enough with these solutions to want to be anywhere near one when it broke and my data was in jeopardy!

So I use hardware RAID for my production stuff.
 

Synthetickiller

New Member
Jul 16, 2011
25
0
0
I noticed an issue w/ hardware passthrough and started from scratch. All the devices now show up. I have no idea why this happened.

Only starting from scratch worked.


As for having issues getting it to work, well, I'm not going to let anyone discourage me from learning. I know that I don't know enough, but if I never bother to try, I'll never move forward.

I don't trust hardware-based RAID for data integrity. I have 5 more drives I can attach and play with to learn how to expand and back up pools as needed.
 

gea

Well-Known Member
Dec 31, 2010
2,871
1,014
113
DE
Not trying to be an ass, but if it was really that difficult to get ESXi/OpenIndiana/Napp-it set up, and you can't log in as root, format drives, etc., which as you stated are fundamental tasks, do you think this is still a good solution for you?
What are you going to do when something goes wrong?

I decided that while I can get ZFS variants working, I don't feel comfortable enough with these solutions to want to be anywhere near one when it broke and my data was in jeopardy!

So I use hardware RAID for my production stuff.
For production stuff you should always be familiar with your systems, or you need a good and expensive service level.

But regarding hardware RAID, I have a completely different opinion.
ZFS software RAID with a multi-GHz, multi-core CPU and several gigabytes of RAM is always faster than any hardware RAID,
it is controller independent (you can just plug your pool into another server with any controller and import it), and there is no
RAID write hole like with RAID 5/6, where battery packs are needed and may help only a little.

Not to mention real data checksumming, scrubbing against silent data corruption, and other ZFS goodies.
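A sketch of two of the points above, controller independence via export/import and scrubbing (the pool name `tank` is a placeholder):

```shell
# Controller independence: a pool moves with its disks, not its HBA.
zpool export tank          # on the old server, before pulling the disks
zpool import tank          # on the new server; ZFS scans attached disks

# Scrubbing: read and verify every block checksum in the background.
zpool scrub tank
zpool status tank          # shows scrub progress and any repaired errors
```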