SRKegger Posted September 7, 2014

I have a question about setting up a heartbeat between two nodes in a cluster (in this case Windows Server 2012) when the nodes are on separate physical sites connected over a WAN. When the public NICs can ping each other but the heartbeat cannot, what is usually the issue? When I run the cluster validation test I get the following result:

Network interfaces SERVER01 - Heartbeat and SERVER02 - Heartbeat are on the same cluster network, yet address 10.0.36.20 is not reachable from 10.0.36.15 using UDP on port 3343.

I've checked the firewall on each server and port 3343 is open. I've asked the netadmin who oversees the Cisco router configurations whether port 3343 is blocked, and he assures me it is not. Any ideas and suggestions will be appreciated.
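For what it's worth, a quick way to test UDP reachability on 3343 independently of the cluster is a tiny sender/listener pair. Below is a minimal sketch in Python using the addresses from the validation message above; note that if the Cluster service already has port 3343 bound, substitute a spare UDP port on both sides to isolate the network path.

```python
# udp_probe.py -- minimal UDP reachability check for the heartbeat port.
# A sketch only: run "listen" on one node and "send" on the other.
# If the Cluster service already owns 3343, change PORT on both sides.
import socket
import sys

PORT = 3343  # Failover Clustering heartbeat port (UDP), from the validation message

def listen(bind_ip: str) -> None:
    """Wait for one datagram on the test port and report the sender."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind((bind_ip, PORT))
        s.settimeout(30)
        try:
            data, addr = s.recvfrom(1024)
            print(f"received {data!r} from {addr[0]}:{addr[1]}")
        except socket.timeout:
            print(f"timed out: no UDP datagram arrived on port {PORT}")

def send(dest_ip: str) -> None:
    """Fire a single test datagram at the listener."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(b"heartbeat-test", (dest_ip, PORT))
        print(f"sent test datagram to {dest_ip}:{PORT}")

if __name__ == "__main__":
    # e.g. on SERVER02:  python udp_probe.py listen 10.0.36.20
    #      on SERVER01:  python udp_probe.py send   10.0.36.20
    mode, ip = sys.argv[1], sys.argv[2]
    listen(ip) if mode == "listen" else send(ip)
```

If ICMP pings succeed but this datagram never arrives, the drop is specific to UDP 3343 somewhere in the path, which is exactly what the validation report is complaining about.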
anyweb Posted September 7, 2014

Have you tried using Wireshark to confirm whether the port is blocked?
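The capture filter to watch for on the receiving node is udp port 3343. A scripted equivalent of that capture, as a sketch, assuming the scapy package is installed and the capture runs with administrator rights:

```python
# Mirror the Wireshark capture filter "udp port 3343" on the receiving node.
# If the sender transmits but nothing shows up here, the drop is in the
# network path, not on the hosts.
from scapy.all import sniff

sniff(filter="udp port 3343", prn=lambda pkt: pkt.summary(), count=10, timeout=60)
```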
SRKegger Posted September 9, 2014

Thank you for the suggestion. Indeed, Wireshark shows port 3343 is open between the two nodes.

A new twist: the netadmin mentioned above got on the server over the weekend and reset the TCP/IP stack. The interesting thing is that the heartbeat network now pings, but other networks that I set up for iSCSI fail, and only from one node, not the other. Here is a sample from the network validation report:

Network interfaces node01 - ISCSI 1 and node02 - iSCSI 1 are on the same cluster network, yet address 10.0.32.20 is not reachable from 10.0.32.15 using UDP on port 3343.

Network interfaces node02 - iSCSI 1 and node01 - ISCSI 1 are on the same cluster network, yet address 10.0.32.15 is not reachable from 10.0.32.20 using UDP on port 3343.

For example, from node02 I can ping 10.0.32.15, the iSCSI interface on node01, fine. From node01, pinging 10.0.32.20, the iSCSI interface on node02, fails. In Failover Cluster Manager the status for this network is "Up", but under its Network Connections tab the status for this interface is "Unreachable".

I have the adapters set up using the iSCSI suggestions found here: http://www.server-log.com/blog/2011/8/4/hyper-v-cluster-cluster-network-settings-overview.html

So now each node has an interface for heartbeat, which pings fine each way, and an interface for live migration, which pings fine each way. Three interfaces are set up for iSCSI, which ping fine from node02 but fail from node01. Why would some interfaces fail while others succeed?

(Note: the only commonality is that each adapter that fails is an HP NC382i DP Multifunction Gigabit Server Adapter, while the other adapters are HP NC375 PCI Express. I have applied all firmware and driver updates from Broadcom and HP.)
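One way to map exactly which directions fail is to fire the UDP probe from each local interface toward its peer while the listener from the earlier sketch runs on the other node. A sketch, using the address pairs quoted in the validation report above; adjust the pairs to match your own cluster networks:

```python
# Map which directions fail: from this node, send one datagram per interface
# pair and watch what arrives with the listener from the earlier sketch.
# Address pairs below are the ones quoted in this thread (run from node01).
import socket

PORT = 3343  # assumed free for the test; see the note in the earlier sketch
PAIRS = [
    ("10.0.32.15", "10.0.32.20"),  # node01 iSCSI 1  -> node02 iSCSI 1
    ("10.0.36.15", "10.0.36.20"),  # node01 heartbeat -> node02 heartbeat
]

for local_ip, peer_ip in PAIRS:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind((local_ip, 0))  # pin the source address to this interface
        s.sendto(b"probe", (peer_ip, PORT))
        print(f"sent {local_ip} -> {peer_ip}:{PORT}")
```

If the probes pinned to the NC382i interfaces never arrive while the others do, that narrows the fault to those adapters or to the path their traffic takes.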
SRKegger Posted September 9, 2014

This turned out to be an easy one. Layer 2 needed to be extended across the WAN for the networks that were failing. Basically, the Microsoft guy needed to explain to the Cisco guy how the traffic flows. Walking through how the Microsoft Failover Cluster Virtual Adapter uses UDP traffic got a response of, "Oh yeah. We still need to open up layer 2 traffic for those networks. I'll get on that." All is now right in the world.
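For anyone who lands here with the same symptom later: a quick way to confirm whether layer 2 actually spans the link is to ARP for the peer's address. A sketch with scapy (an assumption: scapy is installed and the script runs elevated), using the iSCSI address from this thread:

```python
# Layer-2 adjacency check: broadcast an ARP request for the peer's address.
# No reply means the two NICs are not in the same extended broadcast domain,
# which is exactly the condition the WAN fix above addressed.
from scapy.all import srp, Ether, ARP

ans, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="10.0.32.20"),
    timeout=2,
    verbose=False,
)
for _, reply in ans:
    print(f"{reply.psrc} is layer-2 adjacent at {reply.hwsrc}")
if not ans:
    print("no ARP reply: 10.0.32.20 is not in this broadcast domain")
```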