Hi All,
I have some questions regarding the setup and implementation of TMG in a pure ESXi VMware environment. I have been tinkering in a lab / non-production environment to get things situated before proceeding with a full deployment. Here's a basic rundown of the environment:
Server Hardware:
(1) HP DL380 G7, 1 x 6-core Intel CPU, 96 GB RAM, 1 TB internal SAS storage (RAID 6), 10 Broadcom/Intel Gbit NICs
(1) HP DL580 G5, 4 x quad-core Intel CPUs, 96 GB RAM, 500 GB SAS storage (RAID 1+0), 10 Broadcom/Intel Gbit NICs
Core Switch:
(2) Cisco 3750G 48-port. Not stacked; only one is currently in use
Storage:
NetApp FAS2040, 8 x internal 300GB SATA drives, 1 x DS14 MK2 Shelf with 14 x 250GB FC drives
That's a rundown of the hardware involved. ESXi 5.1.0 is installed on both servers. I have installed a vCenter 5 appliance on one host at 10.20.10.13, GW 10.20.10.1. The networking is broken out by VLANs on the core switch; I didn't want to do VLAN tagging from within the ESXi hosts / vCenter server because I've always done VLANs on the core. The VLANs created on the core are:
- VLAN 5 (Perimeter DMZ) 192.168.0.2
- VLAN 10 (INT-LAN) 10.20.10.2
- VLAN 30 (iSCSI) 10.20.30.1
- VLAN 40 (ESX-MGT) 10.20.40.1
- VLAN 50 (VMOTION) 10.20.50.1
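To make the core-side layout concrete, the SVI configuration on the 3750 looks roughly like this (a sketch, not a paste from the running config; the access-port interface number and descriptions are illustrative):

```
! Core 3750 - one SVI per routed VLAN (illustrative sketch)
interface Vlan5
 description Perimeter-DMZ
 ip address 192.168.0.2 255.255.255.0
interface Vlan10
 description INT-LAN
 ip address 10.20.10.2 255.255.255.0
interface Vlan30
 description iSCSI
 ip address 10.20.30.1 255.255.255.0
interface Vlan40
 description ESX-MGT
 ip address 10.20.40.1 255.255.255.0
interface Vlan50
 description VMOTION
 ip address 10.20.50.1 255.255.255.0
!
! Example access port feeding an ESXi uplink, untagged on its VLAN
interface GigabitEthernet1/0/10
 description ESX1-vmnic0-MGT
 switchport mode access
 switchport access vlan 40
```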
The ESXi hosts were assigned management VLAN IPs, 10.20.40.5/6, gateway 10.20.40.1. My internal domain controller has an IP of 10.20.10.9, GW 10.20.10.1. I then installed TMG using the 3-network template (Internal, Perimeter, External); it is a member of the domain.
The networking of the ESXi hosts uses virtual Distributed Switches (vDS) that I created in vCenter. I was meticulous about breaking out NICs for each VLAN and tried to keep the traffic separate. For instance, vmnic0 is on vDS CSI-MGT_VLAN40, connected to the 3750 on VLAN 40, and carries vmk0 (10.20.40.5/6 per respective ESXi host). vmnic1/vmnic2 carry internal traffic on a separate vDS, CSI-INT-LAN_VLAN10, connected to the 3750 on VLAN 10. The vMotion/iSCSI/DMZ vDSes are done this way as well: a physical NIC is connected to the uplink and assigned the appropriate VLAN on the switch. The External side is a single vDS with vmnic6 as its uplink, attached directly to the Internet, and only the TMG VM is connected to it.
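For anyone wanting to double-check the host side of this layout, these are the read-only esxcli commands I run from an ESXi 5.x shell (standard esxcli namespaces; nothing here changes configuration):

```
# Distributed switches and their uplink vmnics as seen by this host
esxcli network vswitch dvs vmware list
# vmkernel interfaces and their IPs (vmk0 = management on VLAN 40)
esxcli network ip interface ipv4 get
# Physical NIC inventory and link state
esxcli network nic list
```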
I have only ever used a physical server for my ISA before; this is my first time testing/configuring in a pure virtual environment. The TMG Internal NIC has IP settings 10.20.10.1 / 255.255.255.0, no gateway, DNS 10.20.10.9. The DMZ NIC has IP settings 192.168.0.1 / 255.255.255.0, no gateway, DNS 192.168.0.9. The External NIC has the Internet provider's IP settings. The thought being that the TMG is basically the router/firewall for all systems. I have the Internal network defined as IP ranges 10.20.10.0-10.20.10.255 and 10.20.40.0-10.20.40.255. I have a route 10.20.40.1 > GW 10.20.10.1, allowing the TMG to act as the route path for traffic from the vCenter server management network (VLAN 10) to the ESXi hosts (VLAN 40). All internal LAN systems use the TMG Internal NIC as their gateway. Management services, i.e. iSCSI / vMotion, do not pass through the TMG; that traffic is routed on the core switch.
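At the OS level on the TMG server, that static route would look something like the line below (the next hop of 10.20.10.2, the core switch's VLAN 10 SVI, is an assumption on my part about how TMG reaches the 40.x network; TMG can also carry this as a topology route):

```
:: Persistent static route on the TMG box (Windows):
:: reach VLAN 40 via the core switch's VLAN 10 SVI (next hop is an assumption)
route add 10.20.40.0 mask 255.255.255.0 10.20.10.2 -p
```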
Now to the only issue I seem to be having: vSphere Client / vCenter Server communications. I have the vCenter appliance on the internal network at 10.20.10.13 and a vSphere Client installed on an internal PC at 10.20.10.17. I have created an access rule allowing the specified protocols for the vCenter appliance to communicate with the ESXi hosts (8080, 443, 902, 512, etc.). The problem comes when I'm on the PC with the vSphere Client installed and try to connect to the vCenter appliance to manage the VMware environment. The client can connect to vCenter, and I was able to add the hosts, but now I seem to be running into issues: it's unable to communicate with the ESXi hosts. When I change the gateway IP of the client PC to 10.20.10.2 (the core switch's VLAN 10 IP), the issue almost goes away completely. I'm not quite sure what the TMG might be blocking or denying from the client PC to the vCenter or the ESXi hosts. I also see connections initiated from the vCenter appliance to the ESXi hosts being blocked. The most repetitive error is a port 902 TCP_SYNC_DROPPED_PACKET from the PC's vSphere Client to the ESXi hosts (10.20.40.5/6). I created a rule allowing port 902 from the vCenter appliance and the client PC (both INT LAN) to the ESXi hosts, but I'm still having some issues.
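To narrow down which ports are actually being dropped, I've been running a quick TCP connect check from the client PC. This is just a minimal sketch using Python's standard library; the host/port values I feed it are my lab addresses:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a full TCP handshake to host:port completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timed out, and unreachable cases
        return False
```

From the client PC (10.20.10.17) I check `tcp_port_open("10.20.40.5", 902)` and `tcp_port_open("10.20.10.13", 443)` and compare the results against the TMG live log, which makes it easier to tell whether a drop is happening on the firewall or further along the path.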
So from all the information above, can anyone see any configuration issues? Should I change some things around, e.g. do the VLAN tagging in vCenter instead of on the core switch? Or put the vCenter appliance on the same subnet as the ESXi hosts (VLAN 40) rather than having the hosts in the 40.x subnet and the appliance in the 10.x subnet? Any suggestions anyone could offer would be appreciated.
-Slevin