I have multiple ESXi hosts, managed individually because I have yet to buy licensing. Can I tie my iSCSI SAN to these hosts, use it for VM storage, and run VMs from that iSCSI device?
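For reference, attaching the SAN is per-host configuration that needs neither vCenter nor a paid license; a rough sketch from the ESXi shell, where the adapter name vmhba33 and the portal address are example values, might be:

```
# Enable the software iSCSI initiator on the host (no vCenter needed)
esxcli iscsi software set --enabled=true

# Point the initiator at the SAN's discovery portal (adapter name and IP are examples)
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.50:3260

# Rescan so the LUNs show up and can be formatted as VMFS datastores
esxcli storage core adapter rescan --adapter vmhba33
```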
Say I have an ESXi (5.0) host that runs a Linux guest hosting iSCSI targets, which in turn contain the disk images for other VMs that the same host will run. When it's used, I'll start the host first, then the iSCSI server VM, and then rescan all storage targets/HBAs so the presented shares show up as online. I know it's a strange puzzle-box of a solution, but I was told to implement it.
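In case it helps, the rescan step can be scripted from the host's shell rather than clicked through the vSphere Client; a minimal sketch would be:

```
# Rescan all HBAs so the just-started iSCSI target VM's LUNs are (re)discovered
esxcli storage core adapter rescan --all

# Rescan for VMFS volumes on any newly visible devices
vmkfstools -V
```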
I'm trying to set up my Ubuntu 12.04 file server as an iSCSI target to use with my VMware ESXi VM server. I started out by following this tutorial, and other than what I remember from watching an iSCSI SAN being set up on an ESXi server at work three years ago, I don't know much about iSCSI.
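On 12.04 the usual route is the iscsitarget (IET) package; a minimal target definition might look like the sketch below, where the IQN and the backing LVM volume are placeholder names, not anything from the tutorial itself:

```
# /etc/iet/ietd.conf -- minimal IET target definition (names are examples)
Target iqn.2012-12.local.fileserver:esxi-datastore
        # Block-backed LUN pointing at an LVM logical volume on the file server
        Lun 0 Path=/dev/vg0/esxi_lun,Type=blockio
```

After editing the config, the target service has to be restarted (`sudo service iscsitarget restart`) before the ESXi initiator will see the LUN.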
I'm having a very odd issue with a single ESXi host.
I have 2 identical hosts: Core i3, 6 NICs, 16 GB RAM. Four of the NICs are used for management, vMotion, and the VM network, all on different VLANs; they all go to an HP ProCurve 24-port gigabit switch in a static trunk. The other two NICs are for iSCSI.
There are 2 standard vSwitches: one with the 4 NICs, and a second with just the 2 NICs carrying the iSCSI traffic.
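For context, that second vSwitch and its iSCSI port binding are typically built roughly as follows from the ESXi shell; vmnic4/vmnic5, the port group names, vmk1/vmk2, and vmhba33 are all example names, and the vmkernel IPs still have to be set separately:

```
# Create the iSCSI vSwitch and attach the two dedicated NICs (example names)
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic4
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic5

# One port group per path, each pinned to a single active uplink
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-1
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-2
esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic4
esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic5

# One vmkernel port on each port group (IP configuration omitted here)
esxcli network ip interface add -i vmk1 -p iSCSI-1
esxcli network ip interface add -i vmk2 -p iSCSI-2

# Bind both vmkernel ports to the software iSCSI adapter for multipathing
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```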
A popular concept for ZFS lovers is the all-in-one: a server running ESXi plus a storage appliance VM (usually based on some flavor of Solaris). The appliance has an HBA passed in via PCI passthrough and serves the storage back to the other guests by sharing it to ESXi over NFS. Works just fine. I decided to try the same thing with iSCSI. Not so fine.
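For reference, the iSCSI side of such an appliance is usually built with COMSTAR on the Solaris-derived guest; a sketch along these lines, with the pool and volume names as made-up examples and the LU GUID taken from the create-lu output:

```
# Create a zvol to export as a LUN (pool/volume names are examples)
zfs create -V 200G tank/esxi-lun0

# Make sure the iSCSI target service is running, then create a target
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target

# Register the zvol as a COMSTAR logical unit and expose it to all initiators
stmfadm create-lu /dev/zvol/rdsk/tank/esxi-lun0
stmfadm add-view GUID_PRINTED_BY_CREATE_LU   # substitute the GUID from the previous command
```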
Is there anything wrong with using one vmkernel in a multipathed iSCSI setup for vMotion traffic as well as iSCSI?
I have iSCSI storage set up using 2 vmkernels for MPIO. I'd like to use one of those vmkernels for vMotion traffic as well, because the iSCSI traffic is already isolated from the rest of the network. Is this a horrible idea?
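If you do go that route, vMotion is just a tag on the vmkernel interface; on ESXi 5.1 and later it can be flipped from the shell as sketched below (vmk1 is an example name), and on older builds the same thing is the vMotion checkbox on the port group in the vSphere Client:

```
# Tag an existing iSCSI-bound vmkernel port for vMotion as well (ESXi 5.1+)
esxcli network ip interface tag add -i vmk1 -t VMotion

# Verify which tags the interface now carries
esxcli network ip interface tag get -i vmk1
```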
I'm trialing RHEV in a lab that I work on. I usually run VMware on this lab, but since my VMware setup uses Auto Deploy, I can install RHEV on the nodes of my cluster and boot into RHEV, or change the boot order and PXE boot back into ESXi.
So far so good: I've added my hosts to the cluster, and I have an old CX200 direct-attached SAN that still works well and has a couple of terabytes of storage.
I have enabled jumbo frames (MTU 9000) in ESXi for all my vmnics, vmkernel ports, vSwitches, iSCSI bindings, etc.; basically anywhere in ESXi that has an MTU setting, I have put 9000 in it. The ports on the switches (Dell PowerConnects) are all set for jumbo frames. I have a Dell MD3200i with 2 controllers, each with 4 iSCSI ports.
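For what it's worth, the end-to-end jumbo path can be checked from the ESXi shell: list the MTU settings and send a don't-fragment vmkping (8972 bytes of payload leaves room for the IP/ICMP headers inside a 9000-byte frame). The vmkernel name and the array portal IP below are examples:

```
# Confirm the vSwitch and vmkernel MTUs really took effect
esxcli network vswitch standard list
esxcli network ip interface list

# Send a full-size frame with the don't-fragment bit set to one of the MD3200i ports
vmkping -d -s 8972 192.168.130.101
```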