Thursday, September 26, 2013

The Wolf's Adventure in OpenStack Land, Part 2: Tangled in the Network

Like I said in the last post, getting OpenStack to work is fairly easy if you're running RHEL/CentOS/Oracle/Scientific Linux. Aside from the automagic script, packstack, OpenStack also provides step-by-step instructions for manual installation. Personally, I'd recommend you go through them at least once, because it'll give you a much better understanding of how the components of OpenStack are connected.
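
For reference, here's a minimal sketch of how you'd bootstrap packstack from RDO on a fresh box. The repo RPM URL below follows the Grizzly-era RDO naming, so treat it as an assumption and check the Quick Start Guide for the current one:

 # Enable the RDO repository (URL follows the Grizzly-era pattern; verify against the RDO site)
 yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.noarch.rpm
 # Install the installer itself
 yum install -y openstack-packstack
 # All-in-one install on the local machine, good enough for a first look
 packstack --allinone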

The advantage of using packstack, though, is that it takes care of setting up the database (MySQL) and the AMQP broker (Qpid) and hooking all the components up to them correctly, so you don't have to do that manually (or write your own script to do it).
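
If you want to see (or tweak) exactly what packstack will wire up before it runs, you can generate an answer file first. A quick sketch using flags packstack ships with (the file path is just my choice):

 # Dump every configurable option into a file you can edit
 packstack --gen-answer-file=/root/answers.txt
 # Inspect how the database and the AMQP broker will be configured
 grep -E 'CONFIG_MYSQL|CONFIG_QPID' /root/answers.txt
 # Run the install from the (possibly edited) answer file
 packstack --answer-file=/root/answers.txt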

Now that the foundation is set, here comes the interesting part: networking.

Unless you omitted the

 --os-quantum-install=n  

flag during the packstack run, the default network manager it sets up will be Nova-Network's FlatDHCPManager in a single-host setup. If you chose to install Quantum (or Neutron, depending on which release you got), then Quantum/Neutron will be your default network manager. While you can achieve much more complex network topologies with Quantum/Neutron, there are several reasons you might not want to use it in production:
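
You can verify which manager packstack actually configured by peeking at nova.conf on a compute node. A quick check, using the standard Grizzly-era nova-network option names:

 grep -E 'network_manager|multi_host|flat_interface|flat_network_bridge' /etc/nova/nova.conf
 # With the default packstack install you should see something like:
 #   network_manager=nova.network.manager.FlatDHCPManager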

  • Lack of Active/Active High Availability - As of the Grizzly release, there is no equivalent of multi_host mode (we'll cover this network option later). So you either have to run the network node on a compute node (which defeats the purpose of a dedicated network node), or put in extra hardware just to stand by without being able to utilize its power or bandwidth (there's no active/active support for Quantum/Neutron just yet), or run the risk of your entire infrastructure depending on a single point of network failure, where all your VMs lose access to the public network when your network node goes down. I for one am not so comfortable with this.
  • Available Bandwidth - When using Quantum/Neutron, your VMs' traffic to the public network is routed through your network node. So if you have VMs with high network traffic to the outside, you can't utilize the bandwidth of each compute node's network connection; instead, it all funnels through your network node. It may not be a big deal in most cases, but I just don't like the idea that my network packets have to take an unnecessary hop before they go onto the network.

So for those reasons, I would personally recommend against running Quantum/Neutron in a production environment; stick with Nova-Network, at least for now.

Nova-Network ships with three network managers:

  • Flat Network Manager (FlatManager)
  • Flat DHCP Network Manager (FlatDHCPManager)
  • Vlan Network Manager (VlanManager)
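
Which manager you get is controlled by a single option in /etc/nova/nova.conf. A sketch of the three choices (Grizzly-era option values; only one line is active at a time):

 # In /etc/nova/nova.conf -- pick exactly one:
 network_manager=nova.network.manager.FlatManager
 # network_manager=nova.network.manager.FlatDHCPManager
 # network_manager=nova.network.manager.VlanManager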

Rather than trying to explain them myself, the folks at Mirantis have some very good blog articles about how these different network managers work and how to set them up: FlatManager and FlatDHCPManager, Single Host FlatDHCPManager Network, VlanManager for larger deployments.

The articles' coverage ranges from small, simple networks to large deployments utilizing VLANs. But they're missing the very thing that applies to my situation: my environment is probably best described as a small to medium sized private cloud deployment, where I'm not at a large enough scale to worry about broadcast domain size or tenant numbers, yet I would like to configure my system with multi_host mode to eliminate the single point of failure. What I need is a Flat DHCP network setup with multi_host enabled.
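
As a preview of what that looks like (the full walkthrough comes in the next post), the key pieces are a couple of nova.conf options plus creating the fixed network with multi-host turned on. A sketch with Grizzly-era syntax and made-up addresses:

 # /etc/nova/nova.conf on every compute node
 # (each compute node runs its own nova-network and metadata service)
 network_manager=nova.network.manager.FlatDHCPManager
 multi_host=True

 # Create the fixed network once, from the controller (addresses are examples)
 nova-manage network create --label=private --fixed_range_v4=192.168.32.0/22 \
     --bridge=br100 --bridge_interface=eth0 --multi_host=T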

The official documentation is rather brief and has some errors that you need to correct by trial and error, and documentation on the Internet about how to set this up is surprisingly nonexistent. So to save you a good 30 minutes of trial and error, I'll walk through the process of setting up your system in multi_host mode, more specifically, how to go from the default packstack install to multi_host DHCP mode, in my next post. So, stay tuned. ;-)

Tuesday, September 24, 2013

The Wolf's Adventure in OpenStack Land, Part 1: Meet the Grizzly

Prologue

Times are changing, and so is technology. Infrastructure management these days is nothing like it was just a decade ago. Every institution moving along with the current wave of technological revolution is looking to get involved with the five-letter word: CLOUD.

Being someone who's in charge of a moderately sized infrastructure inside an institution, I too am caught in the tidal wave. After studying the use cases that could benefit my infrastructure, I decided to roll it over to an OpenStack-based environment.

OpenStack, the most popular open source cloud software suite, has tons of features and great community support. Even though it provides detailed and thorough documentation, it can still be a bit overwhelming and chaotic, even for veteran sysadmins coming from traditional infrastructure management.

This is the record of my adventure, along with the lessons I've learned, during the quest of rolling my existing infrastructure from the old brick-and-mortar environment over to OpenStack. Getting OpenStack up and running is actually pretty easy. But getting OpenStack to run the way you want it to takes some effort and an understanding of how OpenStack works. Having gone through that entire process myself, I hope this adventure log will be of some use to admins who are new to OpenStack and in the same boat as I was, trying to roll an entire infrastructure over to an OpenStack-based environment.

Adventure 1

First things first: as much as I like Ubuntu-based systems, most of my servers are equipped with LSI controllers, and it's well known that LSI doesn't play well with Ubuntu/Debian. Since I was rolling my infrastructure over from a completely different platform (Solaris 10/11), I didn't have any particular urge to marry one distro or another. After a brief uphill battle trying to get LSI working on Ubuntu, I settled on CentOS/RHEL/Oracle/Scientific Linux. (Yes, I could probably have gotten LSI working on Ubuntu if I'd spent a little more time, but Ubuntu doesn't give me any significant advantage, nor do I want to run my entire production environment on a "hack".) So this adventure is all about running OpenStack on RHEL-based systems.

Getting OpenStack up and running is fairly easy to do, thanks to RDO, the OpenStack distribution built specifically for RedHat-based distros. Simply follow the Quick Start Guide and you'll have your all-in-one setup of OpenStack up and running in no time. You can play around with it to get familiar with its interface, API, and concepts in general. When it comes to a production environment, however, the all-in-one mode is definitely not the preferred choice. Instead, you'd want to separate your controller node from your compute node, so you'd probably want to do something like:

 packstack --install-hosts=10.0.0.2,10.0.0.3 --os-quantum-install=n  

Assuming:
  • Controller node IP 10.0.0.2
  • Compute node IP 10.0.0.3
Packstack will ssh to each IP as root (I'm a little annoyed by this, but since these machines will never host applications, only VMs, I'll let it slide. If you disable ssh root login by default like I do, you can always temporarily enable it and disable it again later.)
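
For that temporary enable/disable dance, a quick sketch on RHEL/CentOS 6, assuming you have an explicit "PermitRootLogin no" line in sshd_config like I do:

 # On each target host, temporarily allow root logins
 sed -i 's/^PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config
 service sshd restart
 # ... run packstack from wherever you kicked it off ...
 # Then lock it back down
 sed -i 's/^PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
 service sshd restart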

While we're at it, we might as well throw in the NTP setting:

 packstack --install-hosts=10.0.0.2,10.0.0.3 --os-quantum-install=n --ntp-servers=pool.ntp.org  

Allow the command to run to completion, and it should finish the single-host FlatDHCP setup (I'll explain that later) for one controller node (10.0.0.2) and one compute node (10.0.0.3).

It's preferable to run this on a minimally installed CentOS/RHEL/SL; a previously installed Qpid or MySQL on the controller could cause parts of the Puppet run to fail.

After the installation completes, you'll get a file named keystonerc_admin in /root which contains your admin password. You can now use it to log into the web interface running on your controller node's port 80, or just source it to play with the command-line clients.
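
A quick sanity check that everything came up, sketched with the standard Grizzly-era clients:

 # Load the admin credentials
 source /root/keystonerc_admin
 # All nova services (scheduler, compute, network, ...) should show as enabled and up
 nova-manage service list
 # Talk to the APIs as the admin user
 keystone user-list
 nova list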

Now the introduction to Grizzly (the codename for the current stable release as of the time this post is written) is complete. Next time we'll talk a little more about the networking elements in OpenStack so we can finalize our packstack command.