Thursday, September 26, 2013

The Wolf's Adventure in OpenStack Land, Part 2: Tangled in the Network

Like I said in the last post, getting OpenStack to work is fairly easy if you're running RHEL/CentOS/Oracle/Scientific Linux. Aside from the automagic script, packstack, OpenStack also provides step-by-step instructions for manual installation. Personally, I'd recommend you at least go through them once, because they'll give you a much better understanding of how each component of OpenStack is connected.

The advantage of using packstack, though, is that it'll take care of setting up the database (MySQL) and the AMQP broker (Qpid) and hooking all the components up to them correctly, so you don't have to do that manually (or write your own script to do it).
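For example, you can have packstack generate an answer file first, review what it's going to set up, and then run the install from it. The toggles below are from memory of a Grizzly-era answer file, so treat the exact key names as assumptions that may differ in your release:

```ini
# Generate the answer file, review/edit it, then install from it:
#   packstack --gen-answer-file=my_answers.txt
#   packstack --answer-file=my_answers.txt
#
# Relevant toggles inside the answer file (key names may vary by release):
CONFIG_MYSQL_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_QUANTUM_INSTALL=n
```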

Now that the foundation is set, here comes the interesting part: networking.

Unless you told packstack otherwise, the default network manager it sets up will be Nova-Network's FlatDHCPManager in a single-host setup. If you have chosen to install Quantum (or Neutron, depending on the release you got), then Quantum/Neutron will be your network manager instead. While you can build much more complex network topologies with Quantum/Neutron, there are several reasons you might not want to use it in production:

  • Lack of Active/Active High Availability - As of the Grizzly release, there is no equivalent of multi_host mode (we'll cover this network option later). So you either run the network node on a compute node (which defeats the purpose of a dedicated network node), put in extra hardware that sits idle as a standby whose power and bandwidth you can't use (there's no active/active support for Quantum/Neutron just yet), or accept that your entire infrastructure depends on a single point of network failure: when your network node goes down, all your VMs lose access to the public network. I for one am not comfortable with that.
  • Available Bandwidth - With Quantum/Neutron, your VMs' traffic to the public network is routed through your network node. So if you have VMs with heavy traffic to the outside network, you can't use the bandwidth of each compute node's own network connection; everything funnels through the network node instead. That may not be a big deal in most cases, but I just don't like the idea that my packets have to take an unnecessary hop before they go onto the network.

So for those reasons, I would personally recommend against running Quantum/Neutron in a production environment and sticking with Nova-Network, at least for now.

Nova-Network ships with three network managers:

  • Flat Network Manager (FlatManager)
  • Flat DHCP Network Manager (FlatDHCPManager)
  • Vlan Network Manager (VlanManager)

Rather than trying to explain it myself, the folks at Mirantis have some very good blog articles about how the different network managers work and how to set them up: FlatManager and FlatDHCPManager, Single Host FlatDHCPManager Network, and VlanManager for larger deployments.
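For quick reference, the manager is chosen with the `network_manager` option in `/etc/nova/nova.conf`. The class paths below are the Grizzly-era names, so double-check them against your own release:

```ini
# /etc/nova/nova.conf -- pick exactly one:
network_manager=nova.network.manager.FlatManager
#network_manager=nova.network.manager.FlatDHCPManager
#network_manager=nova.network.manager.VlanManager
```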

The articles' coverage ranges from small, simple networks to large deployments using VLANs. But they miss the one case that applies to my situation: my environment is best described as a small-to-medium private cloud, not large enough to worry about broadcast domain size or tenant count, yet I'd like to configure multi_host mode to eliminate the single point of failure. What I need is a Flat DHCP network with multi_host enabled.
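As a rough sketch of where that setup ends up, here is the kind of nova.conf fragment involved. These are Grizzly-era option names, and `eth0`/`eth1`/`br100` are placeholders for your own interfaces and bridge, not values from this deployment:

```ini
# /etc/nova/nova.conf fragment, on every compute node
network_manager=nova.network.manager.FlatDHCPManager
multi_host=True
flat_interface=eth1          # private interface carrying VM traffic
public_interface=eth0        # interface with public connectivity
flat_network_bridge=br100    # bridge the VMs attach to
```

With multi_host enabled, each compute node runs its own nova-network service (and the DHCP/NAT plumbing) for the VMs it hosts, which is exactly what removes the single point of failure.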

The official documentation is rather brief and has some errors that you need to correct by trial and error, and documentation elsewhere on the Internet is surprisingly nonexistent. So, to save you a good 30 minutes of that trial-and-error process, I'll walk through how to go from a default packstack install to multi_host Flat DHCP mode in my next post. Stay tuned. ;-)
