I'm currently using FreeNAS to provide iSCSI block storage (backed by ZFS) for my ESXi server. The FreeNAS box is getting old, so I'm looking for a replacement. I'm not using any of FreeNAS's other features besides ZFS and iSCSI.
I was initially planning to upgrade the FreeNAS box to TrueNAS Core or...
So this is fun,
I just woke up to a finished clone job of my laptop; it was kind of an experiment. The laptop runs openSUSE, so it has a Btrfs root with snapshots triggered by the package manager.
I wanted to back it up so I could have a copy of the machine to run elsewhere, here's how I did it...
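The post is truncated before the actual steps, but a backup like this is typically done with `btrfs send`/`btrfs receive`. Here's a minimal sketch; the subvolume and mount paths are assumptions, not the poster's actual layout:

```shell
# A sketch, assuming the root is a Btrfs subvolume and an external
# disk is mounted at /mnt/backup (both paths are placeholders).

# 1. Take a read-only snapshot of the root subvolume
#    (btrfs send requires the source to be read-only).
btrfs subvolume snapshot -r / /.snapshots/backup-root

# 2. Stream the snapshot to the backup disk.
btrfs send /.snapshots/backup-root | btrfs receive /mnt/backup

# 3. Remove the local snapshot once the transfer finishes.
btrfs subvolume delete /.snapshots/backup-root
```

With `btrfs send -p` against an earlier snapshot, subsequent runs can be incremental rather than full copies.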
So I'm wanting to set up some MPIO datastores for ESXi. They'd be for several different workloads: general file storage, video recording on VMs, and the VMs themselves.
I looked long and hard at several cluster file systems, but let's face it: Ceph is complicated, OCFS2 is obscure and...
I just got my first 100G network set up last night, with two ConnectX-5 cards connecting my TrueNAS box to my ESXi 6.5 box. Out of the box it works acceptably (80 Gbit/s via iperf2).
I've seen some articles that cover tweaks that can be made on the TrueNAS (FreeBSD) side to bring down CPU...
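For reference, the 80 Gbit/s figure above can be reproduced with iperf2 roughly like this; the hostname is a placeholder, and a single TCP stream generally can't saturate a 100G link on its own, hence the parallel streams:

```shell
# On the TrueNAS box (server side):
iperf -s

# From the client side (hostname is an assumption):
# 8 parallel streams for 30 seconds with a 1 MB socket buffer.
iperf -c truenas.example.lan -P 8 -t 30 -w 1M
```

Watching per-core CPU usage (e.g. `top -P` on FreeBSD) during the run shows whether the bottleneck is a single interrupt/driver thread, which is what the tuning articles target.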
This TrueNAS CORE box is going to be used exclusively for VMware ESXi datastores via iSCSI (with sync writes enabled).
2 x DDR4-3200 64GB ECC DIMMs (fully populate with 8 eventually)
Noctua NH-U12S TR4-SP3 CPU cooler/fan
Seasonic Prime TX 750W power supply
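The "sync writes enabled" requirement mentioned above can be enforced at the ZFS level rather than relying on the initiator. A sketch, assuming a hypothetical pool/zvol named `tank/esxi`:

```shell
# Force every write to the backing zvol to be synchronous
# (the pool/zvol name 'tank/esxi' is an assumption).
zfs set sync=always tank/esxi

# Verify the property took effect:
zfs get sync tank/esxi
```

With `sync=always`, a fast SLOG device becomes important, since every write then lands in the ZFS intent log before being acknowledged.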
Good day everyone!
I am looking to possibly purchase a new server to replace my Dell R710 running FreeNAS.
Right now all my bays are full, and there isn't much more I can do to expand the storage or add SSDs or NVMe drives for caching, etc.
I am looking at potentially purchasing a SuperMicro server...
I use a VM with OmniOS r151024ap and napp-it v. 18.01b as a storage server for a VMware 6.0u3 cluster. Around 30 VMs use 8 iSCSI LUNs for storage. It is mostly very stable, but about twice a year the storage VM stops responding to iSCSI requests from ESXi hosts. After an incident, vmkernel log...
I have a problem with Windows Server 2019 (and I tried 2016 too) and an iSCSI target.
I'm trying to mount a disk via iSCSI to a blade server as a boot disk, but as soon as even one server connects, the iSCSI interface starts shutting down cyclically.
HP Dl380p Gen8 with QLE8152 as iSCSI target
This is a continuation of previous discussion here:
NFSv3 vs NFSv4 vs iSCSI for ESXI datastores
OK, so there is good news and bad news about ConnectX-3 and iSER on ESXi.
First the bad news.
The bad news is that there are flow-control issues which may or may not be resolvable...
I'm selling a bundle consisting of TWO QLogic BCM57810 dual-port copper Ethernet RJ45 NICs. They have driver support for Windows 10, Windows Server 2008/12/16, Linux, and VMware ESXi as well. The cards are in perfect working condition, but be aware that the fans are not the most silent...
at least I can't get it up and running ;-)
COMSTAR, configured via the good napp-it web GUI without CHAP, works instantly.
But for security reasons one likes to have CHAP enabled...
There isn't any documentation or hint on how to do it, and the obvious way within the GUI:
- edit iscsi...
tl;dr -- I need a user-friendly management system for a homebrewed iSCSI SAN for maybe 12-15 machines, on a budget. Suggestions?
We've got about a dozen machines on a 40GbE network, and have just finished building a new SAN (20x 8TB drives in two RAID6 pools, so I figure about 100TB of usable...
We're setting up a new SAN for our office, running on CentOS 7.
My initial setup is a 29TB RAID0 for the target. The volume group, physical volume, and logical volume on the server all show up as 29TB. The server is running CentOS 7 with netbsd-iscsi, and I'm doing the administration (including the...
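For context, the PV/VG/LV chain described above would typically be built like this; the device node and the names `san_vg`/`target_lv` are assumptions for illustration:

```shell
# /dev/md0 stands in for the 29TB RAID0 array (device name is an assumption).
pvcreate /dev/md0                          # mark the array as an LVM physical volume
vgcreate san_vg /dev/md0                   # create a volume group spanning it
lvcreate -l 100%FREE -n target_lv san_vg   # one logical volume for the iSCSI target

# All three layers should report the same ~29TB size:
pvs; vgs; lvs
```

The logical volume (`/dev/san_vg/target_lv`) is then what gets exported by the iSCSI target daemon.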
I was wondering: is there a way to have a "golden image" and multiple child volumes in napp-it?
In Windows Server I used to make one complete install of the OS and programs, create 10 child disks, and diskless-boot 10 different machines off them using iSCSI.
If not, is there anything similar? :)
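ZFS itself supports exactly this pattern through snapshots and clones: clones are thin, copy-on-write children of a "golden" snapshot. A sketch with assumed names (`tank`, `golden`, `client01`...), not napp-it's actual menu workflow:

```shell
# Assumed names: pool 'tank', golden zvol 'tank/golden'.
# 1. Install the OS onto the golden zvol once, then snapshot it.
zfs snapshot tank/golden@v1

# 2. Create thin copy-on-write child volumes from that snapshot.
zfs clone tank/golden@v1 tank/client01
zfs clone tank/golden@v1 tank/client02

# 3. Expose each clone as its own iSCSI LUN via COMSTAR (OmniOS/Solaris).
sbdadm create-lu /dev/zvol/rdsk/tank/client01
```

Each clone initially consumes almost no space and only grows as that client diverges from the golden image.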
I just started testing iSCSI volumes from my Solaris 11.3 server to my Windows 10 workstation. It's working great, and it was super easy to set up.
But I am seeing one very confusing thing: zfs list always reports far more data used in the ZFS volume than I have actually created.
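A likely explanation (assuming the volumes were created as regular, thick-provisioned zvols) is the `refreservation`: `zfs list`'s USED column includes the reservation for the full volume size plus metadata overhead, regardless of how much data has actually been written. A way to check, with a placeholder zvol name:

```shell
# 'tank/win10' is a placeholder for the actual zvol name.
# For a thick zvol, USED >= refreservation, which covers the full
# volsize plus metadata - often far more than the data written so far.
zfs get volsize,refreservation,used,referenced tank/win10

# Making the zvol sparse (thin-provisioned) drops the reservation,
# at the cost of possible out-of-space errors if the pool fills up:
zfs set refreservation=none tank/win10
```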
I'm in the process of a major upgrade to my home NAS. The server runs Solaris 11.3, with the following hardware: 2 x 2.93GHz hex-core LGA1366 CPUs, 72GB RAM (considering upgrading to 120GB), and 32 ports of LSI 6Gb/s SAS/SATA (2 x 2008, 2 x 2308).
I currently have 27 x 2TB 7.2k SATA drives...
Since I have two unused devices, I got the idea of building a starting point for a small DIY scalable SAN, and I need your help to find the best way to achieve that:
CPU: Celeron J1900 @ 1.99GHz
Motherboard: ASRock Q1900DC-ITX including...