I have multiple ESXi hosts, managed individually because I have yet to buy licensing. Can I tie my iSCSI SAN to these hosts and use it for VM storage, running VMs from that iSCSI device?
Made popular on 12/28/2012
Say I have an ESXi (5.0) host that runs a Linux distribution which hosts iSCSI targets, which in turn contain the images for other VMs that the same host will run. When it's used, I'll start the host first, then the iSCSI server, and then rescan all storage targets/HBAs so the provided shares show up as online. I know it's a strange puzzle-box solution, but I was told to implement it.
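For what it's worth, that rescan step can be scripted from the ESXi shell once the iSCSI-target VM is up. This is a hedged sketch; `vmhba33` is an assumed name for the software iSCSI adapter, so check the adapter list first:

```shell
# Find the software iSCSI adapter's name (assumed vmhba33 below).
esxcli iscsi adapter list

# Rescan just that adapter so the re-exported LUNs come back online...
esxcli storage core adapter rescan --adapter vmhba33

# ...or rescan everything and refresh the storage view.
esxcli storage core adapter rescan --all
vim-cmd hostsvc/storage/refresh
```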
I'm trying to set up my Ubuntu 12.04 file server as an iSCSI target for my VMware ESXi VM server. I started out by following this tutorial; other than what I remember from watching an iSCSI SAN being set up on an ESXi server at work three years ago, I don't know much about iSCSI.
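For reference, a minimal sketch of an Ubuntu 12.04 iSCSI target using the `iscsitarget` (IET) package, which is what most tutorials of that era used. The IQN, backing-file path, and size are assumptions, not values from the question:

```shell
# Install the target daemon and enable it.
sudo apt-get install iscsitarget iscsitarget-dkms
sudo sed -i 's/ISCSITARGET_ENABLE=false/ISCSITARGET_ENABLE=true/' \
     /etc/default/iscsitarget

# Create a file-backed LUN (100 GB sparse file; path is an assumption).
sudo mkdir -p /srv/iscsi
sudo dd if=/dev/zero of=/srv/iscsi/vmstore.img bs=1M count=0 seek=102400

# /etc/iet/ietd.conf -- one target, one LUN:
#   Target iqn.2012-12.local.fileserver:vmstore
#       Lun 0 Path=/srv/iscsi/vmstore.img,Type=fileio

sudo service iscsitarget restart
```

After that, the ESXi host's software iSCSI initiator can be pointed at the file server's IP and rescanned.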
I'm having a very odd issue with a single ESXi host.
I have two identical hosts: Core i3, six NICs, 16 GB of RAM. Four of the NICs are used for management, vMotion, and the VM network, all on different VLANs. They all go to an HP ProCurve 24-port gigabit switch in a static trunk. The other two NICs are for iSCSI.
There are two standard vSwitches (VSS): the one with the four NICs, and a second with just the two NICs carrying the iSCSI traffic.
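In a two-NIC iSCSI setup like this, each NIC usually gets its own vmkernel port bound to the software iSCSI adapter for multipathing. A hedged sketch of the binding step; `vmhba33`, `vmk1`, and `vmk2` are assumed names:

```shell
# Bind both iSCSI vmkernel ports to the software iSCSI adapter for MPIO
# (each vmkernel port must have exactly one active uplink for this to work).
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Verify the bindings.
esxcli iscsi networkportal list --adapter vmhba33
```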
When connecting storage to ESXi hosts, is there a preferred protocol? I know VMware claims that VMFS is best, but NFS seems much easier to manage, especially when it comes to reclaiming space. Unless there is now an easy way to reclaim space on iSCSI LUNs.
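On the reclamation point: since ESXi 5.0 U1 there is a manual way to reclaim dead space on a thin-provisioned VMFS LUN, provided the array supports the SCSI UNMAP primitive. A hedged sketch; the datastore name is an assumption:

```shell
# ESXi 5.0 U1 / 5.1: run from inside the datastore; reclaims dead space
# by temporarily creating a balloon file of up to 60% of the free space.
cd /vmfs/volumes/datastore1
vmkfstools -y 60

# ESXi 5.5 and later replaced this with:
# esxcli storage vmfs unmap --volume-label datastore1
```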
I'm Dimitris from Greece, and this is my first question on the forum.
We are trying to share LUNs from an FC SAN to clients (an ESXi server) via iSCSI, using a Linux server as a proxy.
We can't connect the ESXi server directly to the SAN because our HBAs are all PCI-X while the server is PCI-E.
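One common way to build such a proxy is to re-export the FC LUN's block device over iSCSI with `tgt` on the Linux box. A minimal sketch, assuming the FC LUN shows up as `/dev/sdb` (the device, IQN, and target ID are assumptions):

```shell
sudo apt-get install tgt

# Create an iSCSI target and attach the FC LUN as its backing store.
sudo tgtadm --lld iscsi --op new --mode target --tid 1 \
     --targetname iqn.2012-12.local.proxy:fclun0
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
     --backing-store /dev/sdb

# Allow initiators to connect (restrict by IP in production).
sudo tgtadm --lld iscsi --op bind --mode target --tid 1 \
     --initiator-address ALL
```

The ESXi software iSCSI initiator can then log in to the proxy as if it were an ordinary iSCSI array, at the cost of an extra network hop.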
A popular concept for ZFS lovers is the all-in-one: a server running ESXi plus a storage appliance VM (usually based on some flavor of Solaris) that has an HBA passed in via PCI passthrough and serves the storage back to the other guests by sharing it to ESXi over NFS. Works just fine. I decided to try the same thing with iSCSI. Not so fine.
Is there anything wrong with using one vmkernel port in a multipathed iSCSI setup for vMotion traffic as well as iSCSI?
I have iSCSI storage set up using two vmkernel ports for MPIO. I'd like to use one of those ports for vMotion as well, because the iSCSI traffic is already isolated from the rest of the network. Is this a horrible idea?
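Mechanically, enabling vMotion on an existing vmkernel port is a one-liner from the ESXi shell; whether it's wise on a port bound for iSCSI MPIO is the open question. A sketch, with `vmk1` as an assumed port name:

```shell
# Tag an existing vmkernel port (assumed vmk1) for vMotion traffic.
vim-cmd hostsvc/vmotion/vnic_set vmk1

# To undo it later:
# vim-cmd hostsvc/vmotion/vnic_unset vmk1
```

Note that a vMotion burst can saturate a gigabit link, so sharing it with a path that iSCSI multipathing depends on may cause storage latency spikes during migrations.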