In my last post I talked about Microsoft’s implementation of Software-Defined Networking with Hyper-V virtual networking. Hyper-V virtual networking can dramatically improve an organization’s agility, allowing for the quick provisioning of supporting network resources in a virtualized infrastructure. In addition, virtual networking overcomes the constraints imposed by VLANs, allowing large enterprises and hosting providers to scale far beyond what is possible with conventional networking solutions. However, Hyper-V virtual networking does have some limitations when it comes to providing connectivity to non-virtualized resources. In this post I’ll explain those restrictions in more detail and describe how to overcome them using Iron Networks solutions.
While Microsoft network virtualization sounds wonderful (and it is, trust me!), one small detail that seems to go unnoticed until you’ve started to implement the solution is that hosts on a virtual network can only communicate with other hosts on the same virtual network. They cannot, in a default configuration, communicate with hosts located outside of their virtual network. In some cases this is a desirable configuration. For example, a completely isolated virtual network can allow multiple environments to co-exist side-by-side, even with overlapping or identical IP subnets. This is a common scenario not only for hosting providers, but for enterprise development and QA environments as well. However, a more common scenario occurs when an organization has both physical and virtual networks that need to communicate with each other. A typical use case is when servers that have been virtualized (e.g. front-end web servers) need to communicate with servers that have not been virtualized (e.g. back-end database servers). Because network traffic on a virtual network is encapsulated using Network Virtualization with Generic Routing Encapsulation (NVGRE), no communication is possible in this deployment model without additional components to translate NVGRE on behalf of non-virtualized hosts.
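To make the encapsulation concrete: NVGRE wraps each guest Ethernet frame in a GRE header whose key field carries a 24-bit Virtual Subnet ID (VSID), which is how traffic from overlapping customer subnets stays isolated on the wire. The following is a minimal Python sketch of just that GRE/NVGRE header (protocol type 0x6558, key bit set, VSID plus an 8-bit flow ID in the key field); the function names are my own, and a real implementation would of course also build the outer Ethernet/IP headers around this.

```python
import struct

GRE_PROTO_TEB = 0x6558  # Transparent Ethernet Bridging: inner payload is an Ethernet frame
GRE_FLAG_KEY = 0x2000   # K bit set: the 4-byte key field (VSID + flow ID) is present

def nvgre_encap(inner_frame: bytes, vsid: int, flow_id: int = 0) -> bytes:
    """Prepend an NVGRE header carrying the 24-bit Virtual Subnet ID (VSID)."""
    if not 0 <= vsid < 2**24:
        raise ValueError("VSID must fit in 24 bits")
    # Key field layout: high 24 bits = VSID, low 8 bits = flow ID
    key = (vsid << 8) | (flow_id & 0xFF)
    header = struct.pack("!HHI", GRE_FLAG_KEY, GRE_PROTO_TEB, key)
    return header + inner_frame

def nvgre_decap(packet: bytes):
    """Strip the NVGRE header; return (vsid, flow_id, inner_frame)."""
    flags, proto, key = struct.unpack("!HHI", packet[:8])
    if proto != GRE_PROTO_TEB or not (flags & GRE_FLAG_KEY):
        raise ValueError("not an NVGRE packet")
    return key >> 8, key & 0xFF, packet[8:]
```

This is also why a non-virtualized host can’t simply join the conversation: without something that can strip and apply this header, it only ever sees opaque GRE traffic addressed to the Hyper-V hosts.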
To address the issue, Iron Networks created the Hyper-V Network Virtualization (HNV) Gateway. The HNV gateway functions as a routing gateway that sits between your physical and virtual networks and brokers communication between the two. Any time a host needs to communicate with another host outside of its defined virtual network, the traffic is routed to the HNV gateway, where it is decapsulated and forwarded to the resources located on the physical network. Return traffic is then encapsulated accordingly and delivered to the Hyper-V host where the guest currently resides. Traffic originating from the physical network and bound for a virtual host can also be routed through the HNV gateway.
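Conceptually, a gateway like this has to track which physical Hyper-V host (the provider address) currently holds each guest (the customer address) on each virtual subnet, so that return traffic can be re-encapsulated and sent to the right place. Here is a small Python sketch of that lookup table; the class and method names are hypothetical illustrations, not the product’s actual API.

```python
class HnvGatewayTable:
    """Maps (virtual subnet ID, customer IP) to the provider address of the
    Hyper-V host currently running that guest. A gateway consults this table
    to re-encapsulate return traffic bound for a virtual network."""

    def __init__(self):
        self._map = {}  # (vsid, customer_ip) -> provider_ip

    def learn(self, vsid: int, customer_ip: str, provider_ip: str) -> None:
        # Re-learning the same (vsid, customer_ip) overwrites the old entry,
        # which is exactly what happens when a guest live-migrates.
        self._map[(vsid, customer_ip)] = provider_ip

    def resolve(self, vsid: int, customer_ip: str):
        # Returns None when the destination is unknown on that virtual subnet.
        return self._map.get((vsid, customer_ip))
```

The key design point, described in the next paragraph, is that this mapping can’t be static: because guests move between hosts, something authoritative (SCVMM, in this solution) has to keep the gateway’s table current.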
The Iron Networks HNV Gateway integrates tightly with System Center Virtual Machine Manager (SCVMM) 2012 SP1 and can provide gateway functionality for multiple Hyper-V virtual networks. The HNV gateway’s virtual networking configuration is managed with SCVMM and is automatically updated as guest machines are migrated from one virtual host to another. The HNV gateway also includes built-in support for site-to-site networking, allowing for remote network connectivity between datacenters, customer networks, or other hosting providers such as Windows Azure.
As you can see, the Iron Networks HNV gateway for Hyper-V enables the full realization of the hybrid cloud by allowing seamless network communication between Hyper-V virtual networks, physical networks, and remote datacenters and hosting providers. For more information on the HNV gateway, click here.