<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <meta http-equiv="Content-Style-Type" content="text/css" /> <meta name="generator" content="pandoc" /> <meta name="author" content="Anton Beloglazov" /> <meta name="author" content="Sareh Fotuhi Piraghaj" /> <meta name="author" content="Mohammed Alrokayan" /> <meta name="author" content="Rajkumar Buyya" /> <title>Deploying OpenStack on CentOS Using the KVM Hypervisor and GlusterFS Distributed File System</title> <style type="text/css"> table.sourceCode, tr.sourceCode, td.lineNumbers, td.sourceCode { margin: 0; padding: 0; vertical-align: baseline; border: none; } table.sourceCode { width: 100%; } td.lineNumbers { text-align: right; padding-right: 4px; padding-left: 4px; color: #aaaaaa; border-right: 1px solid #aaaaaa; } td.sourceCode { padding-left: 5px; } code > span.kw { color: #007020; font-weight: bold; } code > span.dt { color: #902000; } code > span.dv { color: #40a070; } code > span.bn { color: #40a070; } code > span.fl { color: #40a070; } code > span.ch { color: #4070a0; } code > span.st { color: #4070a0; } code > span.co { color: #60a0b0; font-style: italic; } code > span.ot { color: #007020; } code > span.al { color: #ff0000; font-weight: bold; } code > span.fu { color: #06287e; } code > span.er { color: #ff0000; font-weight: bold; } </style> </head> <body> <div id="header"> <h1 class="title">Deploying OpenStack on CentOS Using the KVM Hypervisor and GlusterFS Distributed File System</h1> <h2 class="author">Anton Beloglazov</h2> <h2 class="author">Sareh Fotuhi Piraghaj</h2> <h2 class="author">Mohammed Alrokayan</h2> <h2 class="author">Rajkumar Buyya</h2> <h3 class="date">14th of August 2012</h3> </div> <div id="TOC"> <ul> <li><a href="#introduction"><span class="toc-section-number">1</span> Introduction</a></li> <li><a href="#overview-of-the-openstack-cloud-platform"><span class="toc-section-number">2</span> Overview of the OpenStack Cloud Platform</a></li> <li><a href="#comparison-of-open-source-cloud-platforms"><span class="toc-section-number">3</span> Comparison of Open Source Cloud Platforms</a></li> <li><a href="#existing-openstack-installation-tools"><span class="toc-section-number">4</span> Existing OpenStack Installation Tools</a></li> <li><a href="#step-by-step-openstack-deployment"><span class="toc-section-number">5</span> Step-by-Step OpenStack Deployment</a><ul> <li><a href="#hardware-setup"><span class="toc-section-number">5.1</span> Hardware Setup</a></li> <li><a href="#organization-of-the-installation-package"><span class="toc-section-number">5.2</span> Organization of the Installation Package</a></li> <li><a href="#configuration-files"><span class="toc-section-number">5.3</span> Configuration Files</a></li> <li><a href="#installation-procedure"><span class="toc-section-number">5.4</span> Installation Procedure</a><ul> <li><a href="#centos"><span class="toc-section-number">5.4.1</span> CentOS</a><ul> <li><a href="#network-configuration."><span class="toc-section-number">5.4.1.1</span> Network Configuration.</a></li> <li><a href="#hard-drive-partitioning."><span class="toc-section-number">5.4.1.2</span> Hard Drive Partitioning.</a></li> </ul></li> <li><a href="#network-gateway"><span class="toc-section-number">5.4.2</span> Network Gateway</a></li> <li><a href="#glusterfs-distributed-replicated-storage"><span 
class="toc-section-number">5.4.3</span> GlusterFS Distributed Replicated Storage</a><ul> <li><a href="#glusterfs-all-all-nodes."><span class="toc-section-number">5.4.3.1</span> 02-glusterfs-all (all nodes).</a></li> <li><a href="#glusterfs-controller-controller."><span class="toc-section-number">5.4.3.2</span> 03-glusterfs-controller (controller).</a></li> <li><a href="#glusterfs-all-all-nodes.-1"><span class="toc-section-number">5.4.3.3</span> 04-glusterfs-all (all nodes).</a></li> </ul></li> <li><a href="#kvm"><span class="toc-section-number">5.4.4</span> KVM</a></li> <li><a href="#openstack"><span class="toc-section-number">5.4.5</span> OpenStack</a><ul> <li><a href="#openstack-all-all-nodes."><span class="toc-section-number">5.4.5.1</span> 06-openstack-all (all nodes).</a></li> <li><a href="#openstack-controller-controller."><span class="toc-section-number">5.4.5.2</span> 07-openstack-controller (controller).</a></li> <li><a href="#openstack-compute-compute-nodes."><span class="toc-section-number">5.4.5.3</span> 08-openstack-compute (compute nodes).</a></li> <li><a href="#openstack-gateway-network-gateway."><span class="toc-section-number">5.4.5.4</span> 09-openstack-gateway (network gateway).</a></li> <li><a href="#openstack-controller-controller.-1"><span class="toc-section-number">5.4.5.5</span> 10-openstack-controller (controller).</a></li> </ul></li> </ul></li> <li><a href="#openstack-troubleshooting"><span class="toc-section-number">5.5</span> OpenStack Troubleshooting</a><ul> <li><a href="#glance"><span class="toc-section-number">5.5.1</span> Glance</a></li> <li><a href="#nova-compute"><span class="toc-section-number">5.5.2</span> Nova Compute</a></li> <li><a href="#nova-network"><span class="toc-section-number">5.5.3</span> Nova Network</a></li> </ul></li> </ul></li> <li><a href="#conclusions"><span class="toc-section-number">6</span> Conclusions</a></li> <li><a href="#references"><span class="toc-section-number">7</span> References</a></li> </ul> </div> <p></p> <h1 id="introduction"><a href="#TOC"><span class="header-section-number">1</span> Introduction</a></h1> <p>The Cloud computing model leverages virtualization to deliver computing resources to users on-demand on a pay-per-use basis <span class="citation">[1], [2]</span>. It provides the properties of self-service and elasticity enabling users to dynamically and flexibly adjust their resource consumption according to the current workload. These properties of the Cloud computing model allow one to avoid high upfront investments in a computing infrastructure, thus reducing the time to market and facilitating a higher pace of innovation.</p> <p>Cloud computing resources are delivered to users through three major service models:</p> <ul> <li><em>Infrastructure as a Service (IaaS)</em>: computing resources are delivered in the form of Virtual Machines (VMs). A VM provides to the user a view of a dedicated server. The user is capable of managing the system within a VM and deploying the required software. Examples of IaaS are Amazon EC2<sup><a href="#fn1" class="footnoteRef" id="fnref1">1</a></sup> and Google Compute Engine<sup><a href="#fn2" class="footnoteRef" id="fnref2">2</a></sup>.</li> <li><em>Platform as a Service (PaaS)</em>: the access to the resources is provided in the form of an Application Programming Interface (API) that is used for application development and deployment. 
In this model, the user does not have direct access to the system resources; rather, the resource allocation to applications is automatically managed by the platform. Examples of PaaS are Google App Engine<sup><a href="#fn3" class="footnoteRef" id="fnref3">3</a></sup> and Microsoft Azure<sup><a href="#fn4" class="footnoteRef" id="fnref4">4</a></sup>.</li> <li><em>Software as a Service (SaaS)</em>: application-level software services are provided to the users on a subscription basis over the Internet. Examples of SaaS are Salesforce.com<sup><a href="#fn5" class="footnoteRef" id="fnref5">5</a></sup> and applications from the Amazon Web Services Marketplace<sup><a href="#fn6" class="footnoteRef" id="fnref6">6</a></sup>.</li> </ul> <p>In this work, we focus on the lowest-level service model, IaaS. Apart from the service models, Cloud computing services are distinguished according to their deployment models. There are three basic deployment models:</p> <ul> <li><em>Public Cloud</em>: computing resources are provided publicly over the Internet based on a pay-per-use model.</li> <li><em>Private Cloud</em>: the Cloud infrastructure is owned by an organization, and hosted and operated internally.</li> <li><em>Hybrid Cloud</em>: computing resources are provided by a composition of private and public Clouds.</li> </ul> <p>Public Clouds, such as Amazon EC2, have initiated and driven the industrial adoption of the Cloud computing model. However, the software platforms utilized by public Cloud providers are usually proprietary, disallowing their deployment on-premises. In other words, due to closed-source software, it is not possible to deploy the same software platform used, for example, by Amazon EC2 on a private computing infrastructure. Fortunately, there exist several open source Cloud platforms striving to address this issue, such as OpenStack, Eucalyptus, OpenNebula, and CloudStack. These projects allow anyone not only to deploy a private Cloud environment free of charge, but also to contribute back to the development of the platform.</p> <p>The aim of this work is to facilitate further development and adoption of open source Cloud computing software by providing a step-by-step guide to installing OpenStack on multiple compute nodes of a real-world testbed using a set of shell scripts. The difference from the existing tools for automated installation of OpenStack is that the purpose of this work is not only obtaining a fully operational OpenStack Cloud environment, but also learning the steps required to perform the installation from the ground up and understanding the responsibilities and interaction of the OpenStack components. This is achieved by splitting the installation process into multiple logical steps and implementing each step as a separate shell script. In this paper, we go through and discuss each step of the complete sequence required to install OpenStack on top of CentOS 6.3 using the Kernel-based Virtual Machine (KVM) as a hypervisor and GlusterFS as a distributed replicated file system to enable live migration and provide fault tolerance. 
The source code described in this paper is released under the Apache 2.0 License and is publicly available online<sup><a href="#fn7" class="footnoteRef" id="fnref7">7</a></sup>.</p> <p>In summary, this paper discusses and guides the reader through the installation of the following software:</p> <ul> <li>CentOS<sup><a href="#fn8" class="footnoteRef" id="fnref8">8</a></sup>: a free Linux Operating System (OS) distribution derived from the Red Hat Enterprise Linux (RHEL) distribution.</li> <li>GlusterFS<sup><a href="#fn9" class="footnoteRef" id="fnref9">9</a></sup>: a distributed file system providing shared replicated storage across multiple servers over Ethernet or InfiniBand. Having a storage system shared between the compute nodes is a requirement for enabling live migration of VM instances. However, a centralized shared storage service, such as NAS, limits the scalability and leads to a single point of failure. In contrast, the advantages of a distributed file system solution, such as GlusterFS, are: (1) no single point of failure, which means that even if a server fails, the storage and data remain available due to automatic replication over multiple servers; (2) higher scalability, as Input/Output (I/O) operations are distributed across multiple servers; and (3) due to the data replication over multiple servers, if a data replica is available on the host, VM instances access the data locally rather than remotely over the network, improving the I/O performance.</li> <li>KVM<sup><a href="#fn10" class="footnoteRef" id="fnref10">10</a></sup>: a hypervisor providing full virtualization for Linux, leveraging the hardware-assisted virtualization support of the Intel VT and AMD-V chipsets. The kernel component of KVM has been included in the Linux kernel since version 2.6.20.</li> <li>OpenStack<sup><a href="#fn11" class="footnoteRef" id="fnref11">11</a></sup>: free open source IaaS Cloud computing software originally released by Rackspace and NASA under the Apache 2.0 License in July 2010. The OpenStack project is currently led and managed by the OpenStack Foundation, which is “an independent body providing shared resources to help achieve the OpenStack Mission by Protecting, Empowering, and Promoting OpenStack software and the community around it, including users, developers and the entire ecosystem”.<sup><a href="#fn12" class="footnoteRef" id="fnref12">12</a></sup></li> </ul> <p>In the next section we give an overview of the OpenStack software, its features, main components, and their interaction. In Section 3, we briefly compare 4 open source Cloud computing platforms, namely OpenStack, Eucalyptus, CloudStack, and OpenNebula. In Section 4, we discuss the existing tools for automated installation of OpenStack and the differences from our approach. In Section 5 we provide a detailed description and discussion of the steps required to install OpenStack on top of CentOS using KVM and GlusterFS. 
In Section 6, we conclude the paper with a summary and discussion of future directions.</p> <h1 id="overview-of-the-openstack-cloud-platform"><a href="#TOC"><span class="header-section-number">2</span> Overview of the OpenStack Cloud Platform</a></h1> <div class="figure"> <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAuwAAAE2CAMAAAANszcaAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAwBQTFRFttvjyOLs4/r/nbzEcXFx8Kypi4yM19fX7u7v+vr69PT13vr+6tht9tfW8+mr5fHU0Pb9vefyzOSs1Pn/17Tq8PDwztDQ3t7eoKCg7Nv1xOfznre+i5yh+P3/3e3I6urr7fDx7/z/lJSU7vT2vLy9ysrK9vjp5eXlqNLjRkZG1Oi5+/v7vN3q1uvy5tBO7Pv/xdTW7+GPo9HjsLGymqiswcHBm8vd4+zv1Nnax5ji5/r/Z29xt8rQ6HFrrKysQXB86/X49Or55fH2y+Dn4uLi7ZGNnc7h3e702trazfb+tN3pfYSGyfX9xfH5zrvZnJyc3eHju8THXWRlvtbbi6as6fz/V1la7e3u0efvrt7s7OztsNXkzfT78fHyzNve8vP06Ojo8vj7yp7k1/j/Z2dn8fn8x/H52OXpzbHp9fv91K+Ly5/IzqXlz8vYwNTb3L1TxMTE0tLT+Pn62LZt4OTlxvT8+Pz9xfL79/7/zM7Pos7g6u3u9fj50PL44vP51+Pm+/T0z6eq+enp09bX7/X33fL5wsXGp6eo2urw3vf/wdvl9OvVobzd8/j50/f+yMnK1NTU5mFayvT8rL7Ex/T/6PDy1t7g9/f3+v7/////lcfb8/b29MTC76Cc/v//xe72+/7/we33+/z8/P7/+f7/k8zgyPD35urs/f//0NDQ9/3/9f7/9v3/9f3/8/z/+fr68vz/8Pz/8fz/vtyU5Ojp9P3/7vv/4ccu0dHR/f7/7/v/6+vs/f39/v7+9Pz/+P7/+fn5/Pz89vb28/Pz+Pj4+Pj58vLy8vLzy9LTuNrf9fX29vb35FZP8/3/5M3x6oB7psjQ7fXipLO49/DH/P//+/348v3/+PL7/Pn99/vyxN+e3sLucJSd1/P80qvnr9Pc48s7/fzy+fXYgpGVzuz3lLC3e5yld3t8ZIuVWIKM2+v78f3/2eP53PD2+Pn53fX8u6Tg9fz/+v3/38Q53Nzct7i44+Tkvtff7/3/p9bmw8PDx/P74dvk4Pb83Nne////+1n2nAAAAQB0Uk5T////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////AFP3ByUAADglSURBVHja7J0LVFNnuvflYuBAaaHtNFRCRyMWaJPSEhsYiH4MolyCi5YPqZZKcXTaca1ZrvqtAl46R5el0sxa40ShNTu1MQ5M5XiqB5lRo0i5ON+Z71gsDGVuHR07RUg7ndZ1XOip4+rye9733bmSezZkJ/v9K7nsW3b2/u1n/5/nfffOvLtUVALRPN6v4dNUPBGFffZhP0jFC1HYKewUdgo7hZ3CTmGnsFPYKewUdgo7hZ2Kwk5hp6KwU9ipKOwUdioKO4WdisJOYfdFmdOWh5mK3zpj0HTrwfiZ01HYKexhAXssoGrWuhx30zzjINgqOpjq+K1gktSnKewU9jCA/TdVioNPi1yPUytaXcDuZQCFncLO39BujjdnHlwiEqU+d1AEvgX+RE/HkjGZZoR8q1Q9bXkCthWKg/GtItGSzFSRKH5aHSuahiHTUpGUTCTKPBgvFSko7BR2HkqhViDiD95MtcKeSlxM628A799sunlwOtbyRGBvbSVOHWbRQmSHIWB4ps0w0ZIhRSrMSCM7hZ2Xit8EoKNIHGuFnSSm6ptaKWC7CV6KptknArsZp6RPp6pF2MYoFNMi60QwYNqsiKewU9j5KCDUFeyZZoWi1ewSdpTUHrxZlal1DfvBTKmIwk5h5yns2Ma0HkxVDGXGWmDHZgTsyaatyKGwTwT2VDRKtOTgTYBdbbUx6oMs7PEH42Mp7BR2nsKOE9T4g5lqUVWqBXaUlSJ3vokkqJvsEtTnwMAs0aqlCtHB34hEmbYElcB+UyR6msJOYQ9DbXJ4oi2oFHYKO4Wdwh7+Ujg8Udgp7BEMeySIwk5hp7BT2CnsFHYKO4Wdwk5hp7BT2CnsVBR2CjsVhT3CYKeid/EVCuxUVBR2KioKOxUVhZ0H2kA3AYVdICraUUQ3AoVdGKzr9UpKO4VdEKwrAXZKO4VdAGpSAuxFSmUT3RQU9khnPUsJNgbCexalncIe4SrVl94F2PEzFYU9srXhLoadlh8p7IIQhp2Kwk5hp6KwU9ipKOwUdioKO4WdisJOYaewU1HYKexUFHYKOxWFncJORWHnM+xNMqWeiiqSpZQ1YdibsuYVMVRzIr2eboOQqGheVhOCXTavj24MCnuka54Mwa6kcZ3CLoDYrkSw0+1PYRfEtqewU9iFBbuBaq6kpxs7dNuewk5hp7BTUdgp7FQUdgo7FYWdwk5FYaewU9ipKOwUdioKO4Wdip+w/0fD1u236DamsEc67PHin99sBaUuF9OtTGHnEexdnOo/bm3NWI5Br6qSSqVLuqhs0uvpNgjZtucY9vj8nysw56mIcywKO4U94mB/8db2jFZbQCdSVFHYKeyRBXtD/j+dA7o09eazW2RZCgo7hT1iYH9RvP3mzIC+V/XTOnydK4Wdwh4ZsDds/edy54DemgEBfYf1om4KO4U97GG3lhbtOK9a/uwWNqDbwf5zgW1SuRwemtZGyYvg9Q8p7HyD/axf+j8NltKifUD/p0pVsWPm7ToA9rPCUtYOeCjOKpYrS8/WFXc4jdXrzwpQSSv3Dw8vO3Qevzm0cmRkZCV5c2iECE0zsnI3OyzJAE+7Vy4bHh451HX23RGLDg3C8JUj8w2W5e6ePzJMJjo78i4eMDKyx+1aBAJ75isZCoeAfvNZVVTWTM7rooQIe+m8eaVnO4rr0KuOuuKzFPaz5w8N7393/vxlw8t2w7vBkWHgdj9+Mzh/eBkLe1fS8PB8NDUMS4K3u5cNwzwjMAjBvgxPdw
hG74GjxrLcpP3DK+fPn78SzTeMDhiY6dBZTmG/tWRJhjUTzXjyf+bNvJ3YDtnaIoYxCBF2mVwug8he1wGwo8gur6vb0CFs2A8B2BB7z68cXolC98gwGghvYBgGm43+w8uGcUiHYectj2A+8Egy3Xk8fCWZDM+x51/QEtFEGPaVw/PPn/cCe4dfeo7ATkqLM28zVldX2mS9lB7B3iEoKTe3KDuMxXU3irLkRoBdudlhtF7fITSdX7Z/N3kxMpwETyPD8DAOQ2HIfDwEj0wCikc6rMPwVNZFWKcbX7Ysafhd9uX+PYPWSWDeLjgQPK1IILCvx7CvqJsZ0LNkcgjoUXb3jRAc7KU7oqKUpR3FyjqZvANsjFEGf8KG/RCEW/IKeB63YoyfUPzusIyc/+7w/C7LsJW2MfbT7Rl+F3h/nJ3BbgqA/dDwyOD4rMBe5xzQo4rZgC5k2M/KZMXFMpkRefaODhTZO+TKYkHDPv6uNSqP719mjdm70WvHyD5/9344B5BhjycN70+aGdm7EPXsAm3zEtj37F+2u6NjtmHPipJvsLsXjaAju3JDR8cGJSSoOJxDVN/cIVsr7Mg+Mjw+bn15noV9N9jrcQQsTlDfZQP1oeGVj7NRHPhG9RvnyL5sWcc4OUHYLxfDvowMnj3Yd8yLKmYMBocbLwkZ9tIs9LijFBLUDhzZN9fVzdssdNgdX7LVmJUEYgvsKLKfHwRXz0bx8fGklYD7bofI3oX8+ngHsvtOrr4Dpl3mEOo5h73U1V3GBB3ZvUmQsFvdxbJl51nY5yd1jTtaEWzBk4aXDVqi+PnBPSRltYvs7+Ja47LhQ+PI1du7FnA9e+DYOD97sBfPRH2DfB6FncLeYZ9cHmJf7cbhnFRjxh2rLDiyw9D5w/Pthp0fGd5jP93gsmEiWM6gbbnExozDkHc75gz24rXytcVeIvvmUgq7sJRkLQgCno872g/Hagy83L1s/yG7OoxlPPvMuvXdw/vH0ZsRB8+OzhwOJZxZg71ILpeXbvBuYzZn6TdQ2AVVjRkfISG4K2k/Lrjbwe4c2WHqQ8MjNmszvtIhso+/S5YEgw/ZlmuDHSy9p4IMF7A3FcvXyot98+xFSiWN7ELTnv3D84HBQ/v347jrGNnnJyHttpbNAWQcnucfwsUaNnizkX3ZfmJ/DuEID8tdmdQFc+55nI3s4IJWPu4FdqM/0tnBHlW0FgJ6k+cE1W5WYL3IKGjp9cL7zuNJI9hojyThtyPDtlHzWQ+eZLwBsKMhNwBhmG6cuPOR3XiyG/PxMHAxZIHj+/ej591kucPvwuvhETwGclf3295/2I0OsBu8V2NscxbRH23jneaE9vGk+fPnJ8EzepeUZBuzJ4kImN6dtIcMw++MeJY9j1un240f94zbTwNDDuEF4yHj7ALHOYRd57kawzTJ3MK+g7IlVNqNLOm2Z6Ibv0C64TSt5dUvPCyPnf0G+9r14rmM7M6wg6spLXUPO3IxOmH7Br7ZGCHZKu4ie1NpqXxt0QwbkxFqz05hp7AHHdmj7GDH5Zgml559iT3sOLaX6ijsdH3CK7Jj2OXMhlJ5qbzIfYLqBLtxc5b+hxR2ga3PrvqWljz0Iq++Bd6hITAMvSZjjcYW9A4e6mGSejQQ5oB39XkwITtLXv0usoi8PFgWDIUJ0SS7YEl5RjRTXt6sRvZ5a4u9VGOcYTduppFdYOvTcjktTVWfBq/SJtMu7zJehleXjTDwsgqhMAUvWuovp13Og4c0Y1oNvDXmfaHKS6u5nFYPE07eQLNM1qTBC+OTU2lteV8YWz6HpU7mpaV9kaaCueqNXzx5wzg5eWM2I3uxy45gRZ5gFzhdwoNdV4MiMoI9rwb+piyww+PnMFalgr82GF2fRg6Ieviva2ubMurQe5gQ5r+sU6XhGKm7jE4HaU/C0TOJl34Z/9ftqqmBxV+eC89u1w2sNKrONhTDfvMGpUvQkf1zHQt7Wz28socdwIb3KLrD6EkViextW2p0xs8hdBtZ2PPQxFOs53nyCzgdtEzVdBo/r8ljYYfIXp92ubNtyyzAvtAN7MVR7KXXjrArKF3Chh0jCODq2pCxtoP9xuQuK+xfXFYZ62vq63VpNZOT4GLS0HsCO/CPHm9MflFv1Ola0qbg5RaI5vWX6wns4Nnr0yafJAeQV9h1fskF7EVy+3tpFLPDZAT2dF0opdfr+CSerc5crM9UHjzUp+nAa/Tr8qZ0NXmdeTX9l4Fb+NO1qTqxjcHTdOp0afX9lztVafWqmn408DL8033+OZ4M7A3SZR37ioxH/+HlrqnJ9M+9fNegYW8qjnK+9Lq0CZkZIoCd0iVs2PM+RwkqmI28msm0qbxOlIzu0sH7zxGyKH2d6iSwo2mA47T6qRZd5+c6K+z1/0vX8jkktzB9WtrkJIZ9F7zPI7DD4tO26L7YpftiVmHf4eJeGg73SfppBo3sQocdYCWFxMX9u3BEzqtvgTBfX28ZuwuQR9EfTdOS1wJv0KhdLWjgLvinI3/1ZBr0Og8voIWMgSXtggG7yGSzB7sHZcm2PHszFd1HidIldNj58l1nA3Zl3f88maGw3jRMgJHdr4SQwh6usGdFqZ69ab0PZGrGQ0cFEbo4rH5Q2MMB9h3zVKpXWq23O1X8/NXXQv31NpO9uTnUiFtajwnsm31tjgnzzROpsINxeVZRZftBgteP8uDLjRfp5bA30+X6ohBTjnXjh3o5wH5D7mfvoPDbPJEL+455W1QZtoB+8+ev8ufrRenlevgfFVrKrbKsTmDN7mGweSIa9rqfqvYqbL+k9PTrCzs7efX9ULuWXhZ6zFmR1QmqpwlPN0/4wN7plwjscpnq2YxU20/jbX+1k4+C3SkLfG60iTg9fGWcsGVZsVBvnjBT4LDbBfSfv72Qv99QJgsc9Fk4Tck4i6PsGoZq8wgKdmtpkdffT9exeXN/YKDPUgGE29JH0MBv7qawe9LPxAj2KgjoP+P71xtnmK8BXT9Bn928o6OD4wWSVQ4sFnQxDIXdSzh49fWjvP9qxq7XvnuLGRJrGwwGY8gjujXPNTDMazrOSyuBRHijsWknw6wv22noMlLYw1oQszJNpjiTyVQyyFx4p9OoCz3oXV0LSwYGe01lXzNd3JcS/QMejrohkymzDLYRujdEp47CHpYydjELxSUDmXGmuMyE+DKTtqTMNM0wXSH0Lph1hhkkh59piGF2Nhk7uOfdN0ejGzcubDAMxpuIynrjTeKvDeMU9nAT7OxfMEwJ7ENtyXp8FclAL4SvsgTmxe++Nm50Q/psg95h+MWtuIH1YpOFLzjrXGCYjln4KO+8dzAMcI7WRZs5NJSJDz/TIMMs7NTpKOzho3EDc0tb0ltmIf3RR+77PeIdRfk4fMIOAek6XROsgjWMJrB4NQwwDQuNs+BnPPFu7HrnlpjBR50lGDCEd3GDSXuBMRgp7GER042diweYwTK04+IhUBkefeSTX4Eew7yjPaptGBgS7/yFLR+bZdJRk2eHgXmxzJSQqUV8x/UO2OFlglWNB3ulm7v4DplML2COMhlMepFMh
n8qaH0JXr9ehvnuO7oOHYWd56gbGCbOJBazpDN/JqRjffLY7wzsHoV9Cq90s0G6i8b9DmPTi9gzYNIzwaozf3r4Lw9j3glesLIlt94xnJ2FzgBOvOu6mJ1lYjaK401UFIXvNKuUFVl515Zg4o0RDHt/uMto/NmtQWY9gh3Hzj/fd/+vHAW8kz0qhj09pC2BfMyf+BWgUQfOtQ3oVAMUIdKZhx/8HugHhHc2nIKhYZhf+JGE+im0gTr7N3+Ny1PofIJJ37DW7p7KO/C9fgbjcU4hTmAyj77WZeyPNLmF/Wh+i9uZ8p+ZMajVMuiZGSPJqGda3a+EKMgv0YkdcZkW/DB2Cb9zJp3l/ZFHDVYLARO+5ltzU6BlRt3C7zCDBC/WHT/8lx98z6If/+X7mHcxO74kbifT1TFLxKNMxiRGeXpcCd5EG9bOuHZ4R9QGA+I9U4wrRhDeuyIT9nRntUhFCnNrukvtVStSt6cfXewwUJSPn/oVIkWq04xkVL4o3a02pQcuXUfT0bLBIVRSz0S78feY9sc+ccE68TKWkFoSb4p7EfKxGUsMHnRQFyoIxZHiixiT/n070u15Z8MpmGg4ZP1qbrJfUfcb6IYxfSFkMmJipHAw2DDPzVVmpdatgyYtES9kOnTpkSM3sLdmbEtPl+51OUtsix3dTrCnKxT96YtF8DBHsBsZ1DSiZf0wc99jFr/izDshHVJWq58h+djC124YXYIeTJnxnVvxzPoyk8nqjr//lx9/z5X++tEfMe9lBEWwWP7mq96At2Yy7CZqQj/y1iR3gbuSpKpFthOOKRPsVcTDvjgWxe04dXp+aqpaujh9capa9Ey/QiFVZ6Snm7dDBG+NVbemp0rVzyCMRVIg+kX1XhTZFTDjM+b07akwPn2vSHQ0XQTLyIepFGrpURT6RQp4TIWp+0WtovTtImlrgLAbDF+LtQO4zJgAO8hg5Rv8CorxQL6FdFSPsRRnLONRSI3LxMQbOhxJDyovhEWLTVotKTMi0v/40V+/516Ed9wWgOYYYAZ2NnUYAwTeMRB0dLxzawBlMlYjVSpT6rPWYt6djAwhvRjG2/v3sl6x9hZsHmMEw86G4U038jc9k54h7U/NQIMU6hstsUcBZRSr0d9ihPVR81EcvkV7LZEdherF+MSAjxmI8/nmdHZJe6X9aMTi9BaYLTbj2FFzS//2gGBH1ZcEiJzsbnSK5J888mfMO7j3+3Hl8c/3uRiP92g8vOjggHSj0XjWwCws6x3KZElHnuFPHkknevDhPyHeG0g5UozqRQZGFyzvZ2GZZaY4rZV0RDLrV9Y2ORj3efImW3EG+Xd8PLAnHDjZMF0CgJ28iE03KxSKTThqY0+yN3Y7fpHfKtqUvhc7dJFaSjy7Ap8Y0hdnSM2KY6nsYXHDsqR+KZpNmn5UIYrNh+Ufy1AEbmM6EsSYVLuYba/770M8//nPLosz1vHM+gbYm+MBg25faDTgoI7wspYZH/yebyK8s+V31PbFBFBqdOL9LHJ469dn4jYHG8l2fBs2wNAsTPoGp/Ek/g/09pZgaxXJnj0WRWuIwizssfkgHLWJAd8uQi+eAROzKT2jtR+Hb3O+NbLDjNKMxejlXkAazWI9bAjs/WZ8tADkGWj6QD27SYvsC/P7R37lRo+QfPQ+t+P70OiEeAJ78JfpdSC8wK432JUZfRZbfm8YgmMYInFAl+E58H7juTgclmeSbOPdYGhyXZxB11zio2AwIa4komEHT7342HeR045dfCwj9RjYGDZqA6JHIR63pou29yta05/ZlH40ltiYfORZ8DT5KGgv7pcq0hf3t2ZYYMdLguMgvV+6HbKBFjOG/ah5cfregGFHFerfO9pzu5QU2xcsl+M/IZGd6TXFPedP85KnC0QbwDMMMs5lRl/Flt+ZIa3JCrvfzNvx/mtTJl6ZYne/UigjPw1U7O4ehnX4J8sHxA2R7NkhQqtF6oeAWzUkqC2QoIqkGcdIZF8M+Wjq4vTtasVRyC3NEObVImknEN0qgiCtMIOhAboz1NLlihaYtMUCe6sULSkdJaj96anqVCmGPX2vWgq5QIA2pgxgf4TYkd87OhWLUX+k73esnXEcT2rucBA8wgCgDZ3BYs6qwVSG8frTg98LTA/+CeNVJu4I4mpriyeDyA5HjaMddzIyDBnvKrSziSxKjEyRVI05NkNoHDwh83HMpWzDnaYgM6anv/++4wzb+tPdzO3uI7ypCwwywM6i3Wfj2Q7v+5nf/WpGjoqqMQYD5LTY6TTgyB5gO5KTjblQgrvvohKMm2KjJ/34L3/E8yK8jAEW3B0u02pAsOvnlcK3BdOunEEyMI7GY/vueDzsiCIlyqw6vDbHIkLuYGeVL+Lxuttgt5oWVEdn6zDEuBDYHcY/9jtDn8FavUGwNzDjfpNudO3ZgYwE5kGSbn7fL95JExOktA8aUJGpw79o7gl2toRuMJSy5RhSbAGS4TUeD7z3MQYr7zvwD5eTYjyCXcx0CQH2o3t5vO7pYq0NdtxuhOK1o0W3wm4/3r564wh7kF6GwN73oMV+933fR+fOunVs9B9EC3H27MYAgDcOlbCw23p7lcpIGd3arGQbj+I/Gi8rRlupVMbaduHAzm/hBPUR5x4BDhV3e9jZ8Y51SgR7yYDRF9J9uIXRc2U4slsB7uvzJVMlpFsOjAf7XMHuG/KOtE+bbLBbAzZB2jrQbrxSZvkhlVJrRZ7CzhN1ip1hBzGMYzHdAfaZ4zHspl97I93X23XBskhkt7cmfR5rkDMsD47sCYZAbwrmHnbMe5Nd25IT7GxvMMc6pYA8Oy/0vhsxDIewc9Pr0T6yO1lx130FMOl//Mje3OPIbgqm16Ol37O9jbH93lWdw3vn8VGM0x1rMex2W53CPquoH//AtTiB/T5kY77udH04+X2usXr2mb1f/vTwX132ipnRc+ZBn2Pp++51HKSzJqhBwh7XkG7d6MfDGHe+w45QP9HTc3KRCxkTOID9d7ik0/mHGboUgLax1Rg3vb3suSZlRlcRn/XsPn7kH9zpyy/7uYJdzBjJFj/Z0/3Bf4cv7TyH/f33//vEyZycNQUuZeIM9n9fE5xy1qz5Bh6++brXRWS39+bEsXj0Nmw1Zk3wetvEFeyd7BZfk7Oo+4OwpZ3vsP9396I1BdGJrsUV7HF57j7BX71sch3ZHXLRjx722J+AjewcrE3+Vq5szBrLIqMLcnrClnZ+w/7+8e5FgPrGffuSXegUVzbGtGufn3rAtfa9bHIb2e3r6R7r78Sz2y81QCXuFHOVoLJbfN/GjYlhTDuB/X1+6jhiHVAvP1B5wIU4SVAR7CVvJb/AiTQ/y/QU2QnvD/c97LFllVRjvH3UEe9rU8CVZ9eK97GbvLwc037i+PvhKH7D/kFPTvTG5PLK2y775HEGe4ITAAErCuHlKbIjfcR85LkvGIqlDVEcrQ03nr3Y8jY3GdHefeo4hZ3rwH4CAvu+8sYU5RWNCxWIOYO9QONdeu+TnMOwM0HCTjy7l4/KWuB1bQq4sjFi5hi7yCu565I3RhcsOhGWtPMa
9g9OroneWH7gpVyZ673JVYJasjCaG9g1eXEcRXavsMsqvK9NPkfVmLJ3eiyLvKZMAdrXLDpx/DiFnUvWT3XnFCQmH7jjJrBzF9lNL2s4gt1zNcaPyN5wzssnXcvyAfatpjimiYMENd+2zAXK1cmJ0TknT1DYOYX9BA7slS8p3ZyxOfPs3MHOWWQ3efukK3qfTJWd3Q4c9q12sGsqVpXvC9Mklb+wHz8F2WnivgMvVazSBA77Y+Hn2T9CC/EKuyb32pzBbmdj0AevewGS1EVhmKTyFnaUna5JBMd+WOlur55r8A77I17GzwbsfRzA7tWza+pS5gx2W4JKzinhatv5C/sJXHaEwO7enLpIUO9nHg0h7DmvlXAU2b3DnpLli2efBdg11/RHWNt+nMLOZdnxzhHlNQ+w9w443TTG0aOji1GdbirjAPsnjz3KMAklHCeoHER2MdPp7aOurfKlGiNmirzBrvQI+zzUqPSqY7VKtupOWNp2Avtx3ukUmBicnd52X2IDzx7Xy5BLpz+ZCTu5GNVpvB3slhud9sbxqxrjo2fX6K94r7OXxTvf2NEBdnJZdan91RwOsONrmwYz4/Kdi/y3yx9AtF86dTycxFPYT13qRtlp5Z0UD7sUJahi5rOnHiW3v3OAndzOkbn3njNP3POx3Xgr7Pj2pn2P3osWwlmdndiYvrmB3WuzEnh2cTy5W4aNdxvs5GJUrD7beBvsSnIxKjPYW+YU2TVXlOtwv4Hu8KKdr7CjwP4ABHZ37Uk22D8+c+aJe603MsWwf0Iurf74nk/PgD4988RTn1kvw8awk1sN/A0OhTNoIRx49n9wW42Ja8ixLdWNKmS+pMsQuMlNwdg7w7Cwo+tNEcoborLIlaeW44GFnb2lL9NUPNOzs7YdtaR2nzpFYQ86sJ+EwJ5c6bY9yZqgYtjPIN7/hnl+xPAoidmfEdIteuIpcjzczzD320g/ww3sf/8RMPqjf3BYjUF4/QMv9e9uP3VBlncbg1tQLeBi3hHsDgNsMR7f2BfDzt4/CR0AdS5hv54Cth2XZCjswbJ+CvV2fKD8wHtKT6fqUjvYQfdg3rE+e+qJMzNE4j/Rx/dYhnJgY/5Ouuv+4B/ceXbA6x/sUt3SfkXpQzWG7S6gJDcWYIrgOUrOMDPuWU1+dwPwlzOl7J3CiJWfWWdnK5+3K/ehkkz3KQp7cLB3n8RlxwMVHisORQ0OsCPesT1nnvr0jEt9ei9Lut14BHtwCWrij1hKf8RdNQZsjN1S3WjVNe/VGFvfGBy+Dexvj7j6NQJykzBW1lsQzGhBtXUJC7sklY+wn8KdYvaV3zms9Lg7rZ7dIXx/3OfMs8OR8Nm9jmOCh13j503uvMHuLPfNSgt88ux24ZvFucnNz8zos4pn3PTXuW+MxrFLWFjZdl7CzpYdV1d4dqWuYMf2/DO2EGNPOvY4LuxN8J7976GC3VvHx3Mu+rNnbWCsd76b+RNi2Lr3zfPcXcDWJcxi209R2IMJ7LjseMRzYHdhYxx5t+SgLOmP3uvCyHORoIYK9ms7vBRC33F535goyz1NHX9RCdw8mBz5vD7GWwuqtRyUG2a2nY+wk04xjZ7ak1wlqK7SUQDc+sL1dMHDbufZoxeKOavGePfsXiv/LvuzG+osPzNg+yVUpayU3N9ROaP/gAfYr+wIM9tOYD/FI136g63seD1w2NlyJDLwfX9ztDQzPfubGp5VY8oSerxWY7x3fPR0dwG7n1GaR34lj01JXcDu3F3AVg9KIdek/uHSqXAQD2HvxoHdc3uSrYtv371nPOiee53Nu0vYS4PrLmCts3NWjUEtqN7q7BrNOlkgNqbO7gfyIJ4XOd2c3QXsrqoxrG0/gq9J7bl0icIeUGBfhMqOlS95bk+y9Wd/6swZzzB7G28y9f4w6O4CHLegst0F/uF5jRbU+Z2gOvaNIa1LDj/s7gr2uJ3ubF7Fqju4u283hT0Q1i/15KD2pDue25O4hZ1//dnBxviwRl46Pp7z5e4Cdc59gH337NhJ3T6Auvsu6rlEYfcf9h5Udkz2cH2So2fnI+zctKD64qw8n/yiX42bddivKNcloyT1ZDjYdp7BjgI7Ljt6aU/iFvayYv71Z/cJdi8dH325u4AvsMftzHF/dlEeKQ8X28432PEtwMobX6rw4eL5ogaOYA82QeW+P7tW7EsW4aVZKfpVLTewm/I9rYPVtl+isPuXnZLA3nzEh8DOnWfnLLJzVmf3pT+7146PxxgxN7Bv9QS75vbtSrDta8LAthPYP+CJTnSfRoH9jtf2JF4nqHMX2T3fTwPBXsZsmFXPzrYtoav0znWf+IDf4hXsJ7rP4d6OzQu8XnKGVCyO2GqMj57dc8fHnnfKvF9wHTzs6EoOYtv5TjufYD9xoof0dnzPa3sStwlq72aOLssrwDZm7qoxmrqU60HeXYAD2DUpymbUtnS658SJMID9BC/UffJ0Ab7I2nt7EmeR/dNZSFDnrM6uuZ5S5/XuAsWz7dnRMZdbibr7otjOZ/EI9m5kYlCnGB/ak+w8+6NPPREw7Kg3AeewDzFzlqB67vjImY0xeYP9Su66StTd9/TJbgq7b7AjEwPZ6R0f2pOssJcMMn19n3ng3T3spOPvUCaHNoa0WXr+3VPPsP8Y/dKY77B7vJ8GSlDFzIZ5wcC+Yy2yMTu95TTXlCnlqLvvOV7Hdv7A3t1zbk30xn0Hmn1pT8LajG4Aqi1Zjzo2uro0yRPs5ALtgd4yk4nrBHXAw++eeoad/K7YYHyZrzZGo8m65g12p0s1/IGddJ1JgNOnt3sKoy5hOEk9191NYfcxsD9Qfme1L+1JbIIqRqgC7wmMm0vxzjzB/M1dX3eWdC5hz3kHBeWy3gFXv2/qBfYfYNKZ9SUm3HPZtwTV4/00cKMSWRnbpRq+w05uQZBQovUFdk3FqgOsbe+msPsW2JMP3Dnia2DXvPGqYaCX4BqXOWR/aZKdV+n7eMZVTEB632A8OVBM2sx40zPfcvVjBPlaLV5qWfygW95dwc7+sNj6EjK777B7bFZ6Ox4fOmK8MuyFpT7CTrq7D2XGQSiJE7/lQzDYsQ7RzmvbzhfYUXZakLjvwJ33fGpPYit9Xcw7JhPsEMx7w4AL3u9xvLjjiac+6+tjrKTjviPMv/poj31JUJ9j1rPLJoh9/y8/9g77g5j0oUwL6fElr+dF+7ZSnu+n0cWAk9FaVgb3WvcF9ixM+kAD2rCZTN/ir3xr4MK2fQ2PbTtvYD95eo2l7Hhd4zvuBdffBExYcMkp2+EKvCfsYP/0HkS6LXyWrF//+q7NnTka7mDXnGCYElOchXeUTszg3Ql28uOoQ+whi/FimnzfAh6rtNHnjr359nMoCUDfFt1Jo2iDd9g3WDyeOD5uax7j6wZCth11gOQv7TyBnZQd91X62p5kv0OPM035pl6W4On4QeDZVo60RvZPyZ00rKTHmeLAzUT78UF631YnR1P6FjOYyZoknD47/sCvPezkZ96t2QMcrSVvlxp7/Fgt7z8kdhwdf9oyy8pAfFe6hZ2kpOyZbz3TVwR
fyOdVka1q5neSyhfY2bLjah/bk5wA07wBu8gSGol/YMuR9/z+Y2tB3RY+wVK/+PabJ875w7qPsOM81cjkabXxZSzvQ468W2HHZUZ031DL4QeZIHPcz+/uww+JRZ/49s2dhoFMnKEgO2Owu2+vPezkYtQBID2ud0D79ltdPX5tIE3WbX63LfED9m7SKaay+SUf25Nm8tXZuev1eItfRiGsD9/uEUV2S0HdQnqZqYxhjH5/hN6vw09TDKcRk336bClHEtgtZUZL9gCHxKsv/+uJAn/X6ppvtatzXcxRQD3OVEZWhimVOcFuSUnL4nDDmCHH/51wRcnvtiUCe3eI1XMOlx2bV/vanuQ2Hyszaa3+AfzMx08xf/ubQ5lRPMAsfvO6fzHdf9hxPO16623tUIN9+ox5B9gtZUZLRajEZOplmJxAvvMVX1erQPPGW31MvO3gI7fAI7BnrW2ybKWSocH8l4uOFwSyMteURw6gJPX0yZ5uHooXsPfg7DTZx27sHvjqOY7ysd44u3hqT3ocRLZXma5zgS1dH8gJBz7d5JA+//Gjh5k/OmQPpnjG8OuvonMC/NK5vm+yguPMW/km/MFxeGWa5HXzmGJyk0dkp5Czh+HRge6A6ynK1WyS2kNh9xjY77zkc3uSBxlxua2MpXsAW1AiCJ6L39AUBLpgfUD26tQPX84fHCqx8D5Ijj9bmbHBtHUx01kQ+Pet8DOjLzKwxx85+IB4i52Ki2cGdn2VkxPM1q+73YhvrsFL2vkAOxvYD6wOMrBb4teJ628uZIYs9j2BdTVxr+cxx4OAKjDY8QkHYAK2tNb0eb21zAjsl2q+jQ5ira6n3PZzbXI6f7jr9QGcLJC2Lwa9FotNW7uY9OggN/6VVcS255yksLtkvQeXHVF7Up2GI51jmF9DXEct3Q2kisYUBbtMfeCz5uR8tWvAynh8CQms8aa3f2bsCXa1vN3x0XVAIMcfblTrRZ0K4Az4piYnOvgtf02fUslX284H2E/mkFuABVR2dMsX5GOwIxuY9fGDr+8q6gx+R+qDm72T6dqK4qcpAV0aiqr8zLcaDujS6APZatE95958e2CwV8swcb0lW99muno42vALsG2PzjnHP9pDD3sP6e0YSHuSt/DVyby1E52mczhZXJCwa6ILNG9+zQyWJTDxQ+vzXz52ooCbr5m1INAVOsswWuzar3Ny1FmSiNzGcn7adh7AzmanqzkN7NZy91c5PRztSD0Hy+gxMm/jLpocfsmKwGNEQc71fy0OLpNxYdtz11WySSqF3dnEnCNlx/cCbU+aK+k5Ov6uf3Uih8v1WpDFsw11TZnSmMzHJDX0sFvLjrkaIcDOvbz/kNhcC3UJ42OSSmDvCZlOLsopwIH9MCdlx/CCfYuck8Ws4t2Wq1jViJPU0ydP9vBIIYb9JCqxo7LjYQ7ak1Yslah4AnthITxIJN6mwqtbuCLIFavjn//LvV1JklRe0R5q2Bfh3o6NqxcEn51OFI6pJN5ipXzF3MBe2wawV7Hv2sY8wb4l2C8uW+fP1G2FhYXy2YYddQnDHSAXUdgdAvtGdPcMP65PcqOlZl+m8hptOYJ9LFuOP2sLnGxUhYWSFfBaBWwvlWtWjLUBbGPypQj2tjbN2Ar4vxQdD/KxMfnYLDcrraiVa+DjJVtml/ZryiP4DpC8oj3EsFt6O3JQdqwmFkY+AWgBbUvNtWMTZrAIEom52kzIK1SN1dYWAnzVE7MNu0o9gSL7UrPKPLZCrVZpslcgbyPPTlQVji2thTfmCbQ+1XKEfLZaBW9X1I61mQtnec22VKPHsVo1HFWqpYD8CpVKJW+TrMA+sG2FCj2NBU87upIDJ6k8oj2ksLOBvbH5JQ7ak7JZM7NUozGPaQD4LdkqjUT9rQRCGUT9Kgw7Yn4FACZZOtuwf5Wtgs+qhaOvFn1otHpMXlgtH1NrCgGwpQD6GDoikaFAsEO0zcYr1RYI7Ln+ZKiFyDxtqQbOzRNtcIjBeUfVNtYGx19bddtYtQQGtEnUHLR13cZJ6hoeJamhhR11inmg8s7qlFXBtydlr8VPgLamzYztcPZ12HHEuGQnSljYqzRjhRKJunC2YdcsLZRUqbLBIBdi8z6mHls6MaZuwwclrBZawcLaWhb262ht0SBVILD716w0Bic39KEq5Puyv1LV4oCvluDDcKIKJ8zVK4LeH8i2s0kqhR1npySwc9GedN2Mz72JiCUC+/VszA4K6ZpajcQa2Zeq4cS9ZdZhhxAqkddaEwV5tXrFFjV4iGr4aMkEmUI1YUaPaE3hDzmLQCL79QV+dqCDswrZFOjD0eE1Vj0Bp8Jssqr4AOXA0l/TH+FZkhpS2Nm7Z6zmpD1JBa4FUi+IUPLCNvvIDq5lqVozFgtOQYUwQynarCeoyCpkSzQTZjiwgCUVqUZWgz1oqx6TEKcOf/LqpbbIvgU8e2Egkf2an9d3SUhuDAeaHCJ7IT4dwtkHnIxmQsJFVCdKUa7GHSB5Q3sIYSftSeWVqzlqT1KZqwvNGrm6sHCpxi6yS8wThWrA21yN8sQVtWbYy4Vm1ezCPoaAGUPEL0VJ4NhSUotBQyzVmBX4b4tEDo8oxkpwZhiQZ9f4kd6rJiBLh5ig3qKpnmiDTYVgr146ViuBY7INPYF1V7VxQvttntl2AvvJEGgRMjH4pr0Vs9q5QyL5Nly6C6BDcmkgM/rR8VGukqBjTb60zVKNGcPvUBlmi6QNpRcrxjhqnLuSuw51gIzOOb1o0UkeKKSw4/YkH39mI2BVSTRhAvsK8MoTAVksWQVHx5q8ltP6+zVlCukAKXDYF1k6xXDQnuT5vK0KF9gDFzcdH1Hb6lKOV0zfzCapiwQN+2nyu3iz0Y19dsRj2K/wdt3qchvLSZK6SLiws4G9sZnz65OECDsPOz5abfvtSpyk8oH2kMGOeztWNjfLVl2hsAcfQFN4e9JRpuCr9FC/AYHCzman4NhXLdBQ2IMvadfxdtUWKI808sS2hwZ2UnZEF1nPzvVJV/QudC3ksF9zsVacnNeuXwt6My7genNZVbFqNbqVDA9oDxnspBv7LF2fdC33gRkKOg8OHvYryn3Oa3WboxNb8AeN8gWHFeOwboC6hPEiSQ0J7Nay4+FZak9acHsm7EGzyoGN0c9YqwqO6q5ZQQeNuhTHNVvH2a65oiRtS2tC3bYUGtjZ7HT1bLUnySpmUPXeKh7Avuo959U6zJGNqwi6prUg1/G0k7yKszLZNWzbQ9+2RGBfNLfKYdtOZ609qWIm7IezeAB71mHn1UrmKO3loFnJycc88BJ3tl0Gth0nqTmLQqnQwM4G9tv62dLhmX5BzwfNPAhz9bxRyr7Z22SrsG0PMe2hgJ0N7KtTlMkPUHGkF4Jewr59s7d2ybm3WdpDD/u5OdTp0yg7hcD+Xm4FZVQoSl61DtOek3P6XMgUAthzcqITN1Y2NlesogwIR+8p2TtAhpD2OYcdBfZEcguwFIqAgJSiZK/SCx3tIYC9gGSn625TAISkfSn61ZXJGqD99GmBwM4G9sbmI/qfUACEpXXoR4E1Ib
Ttcw57DtuNPWsd3ftCi+232R8FDlVon3vY0UXWjc207CjEkkzu7ZBWZAjsp+dKOZbATrNTIeol5bpyUpEJieYcdnJ90rrc2WzCoOKrDuuP4NCeIxjYyxuP6KmEqpdI+VEYsCduLL+zWt9OJUzpy4UFO7oYL4XudkEqZVUITXsIYN9X3vxeLoVdoLDnVu6jsFMJBPZyQcEOnv2lrHWVdMcLFXYBeXZUjVldsY7ud0Fq3TpBJaiozk5hFy7syaGGPWfuhG8rsHodhV3YsIdEIYA9MbmyOSVXSHu4osLh7ah7R3s10jdFbkpyYmhhV74xh5+I+8aENewp6sJqsxssYwoLC2dUmiQS8nxgqrBQ3d6e7XbJo4Xw0JhlLiyceUBMxEQG7KS7QCj0hhLBLqujsPvBeu1kY/tk9hGXMbw2pR1YlzgeCpIp8jxpbkRjs5s9wt5ee7WxcWZptiIlUmAPVWCvkyHYm7Lq5i62FwDsq48ow3d/1ajRo7qmfTKlZhIRGCMBmzKZMlmT0n61GkNdq55sv1ozifmtiamcksA7NOcEGtKcnTKJRo1KIHyPVkxWtFfgSWFoDIK9MptgfdUyvgYt/uoovKuIQa/x51XUhGWkV64OVWR/oy6rCcF+t0mmnLueQL/85S9v5/4yfGE3xzRiv9KeXRgjqU5pN0/FFF5tLzTDm/Z2bECuVtdcbZ8YVU80tpvVozVgY67Wolifkq1G7j1bPWpWN16tGa2OaZcUqitGYTnqxorayZhqHNkltYj9GvOoeRKPh+OqPbsCvNBo9WTMKPm8lNqYyXDceGjv/zIkHdCUsqa7GPa5k2F82zfRD1T+JHx7gjUSPw2OI7uivVEdMwqBPkbdWAiBtnoURXUAFk+SMpoNU8GB0Sy5XEh8zRFJtgRF9sYU5Nsragrbp2DuQlhOdcXEJD6CyLJrK9prwcvU4vGj1e0x5nY4PeDFks+7WlsRlltP/0D0om3jhrsh0pzDvih6Y/JPlEfClnYUaLElyQaQpyRTtZCTXsaHADkMCifRi5RC81R2M3HrkuzsukbW8MMRgjx7dvPVarW6sFEy1diOf2X3KpqZeHY0h3oUD8Xj22srwBZJpirRB0LcR593ZzJ7Igy33RGlEGEP484xo5CEQpY62oy8tXo0Blv4divsjZIa9GJyorE5m4Rq8OwxtZbvW3gVBkN0h3NCM8R9MCfN1RDZGxvtInt7o94MUb2xsR2Nb2+cqKlFr5rRmaMRziKN7XfQGoRhnTIld6OwYP8mzGFvV1fHxFRLUEQelVQ3tldPjcZUsLCPToxOAtdqdVZM9agaQng18eyNajNE5ZqaUQn4GhLZJ9QxELmnYDkx1TGjMe1Wz56ihsWPNk6YRyEnxaeGimx1I7IxMbXIs+PPuzppO37CDPZvKOzh5NqvSlBRBFVVcEVkVDJZ0T6JSiYVKaOSGBT3a2LaR2sqIFY3opoLQJsydRUVU1CBpVmC/hobJydTJtEoGD6JhqOaDFvAmUSx/moNpLl4fDv6OFyNmUQfgz6vYrImHE270GDv/xLBXhcB/QXc18up3Gjduo3RX/YLBvauzi8LNia/EAE9wZqzKbx+w16xseDLzi6BwH63S3dpTeK+F2i3R4HCnrjmkq7rrmBgP5WTuK9cdpvueQHqdkpizinhwG4wbstJ3JgsrG6PVKxyDyfmbDMahAR7NIVdsLBHCwp2XHssT1lF97wAtepwSCuPoYE9+QC9TZIQpd8oMNj7vyxITH6B3gdOkEqM/kO/gGDv0l1CGWpyefILyUgvvDePKkL1X4eTrXoB9njyRlx5FAzsd7uM275Zk5i40SrZf1FFrH6y0V6JiWu+2WYMXeVxzmGH0P7v36wpiLZI/r+pIlf/8220nQrWfPPvoQzscw47uHbdtktffvPNN+SXWOufpIpg1Vt/chf2+JeXtulC6NhDADvEdqOufxurX/8/qojWq9ts6tcZQxnXQwE70N41zqpJ+xBVZGvnuE1doWU9FLAD7qy6GigMka6tQwab7t4VHuwWDfwbVcQrbvAuXxRS2EsoCgLQNIUdNERBEIQyKex3705TDoSh71DYL1AKBKKtQ4KHvYxSIBSVDQoc9ucoA8KRWOCwx1EEBKQGQcPeQAGgSapAYB/cSvc/TVIFAvvTVEKTgGH/LpWwRGGnorALAPZeKmFJyLA30N0vKDUIGPbtJc9TCUkl24ULu6nkOSohqcREYaeisEc+7LfiqYSkWxR2Kgq7EGD/DpWQRGGnorBT2Kko7JFUjVlPJSQJuRozXfafkaIliiXfdT0m1un9Vul6eHy6xPLgadpIU9m0cGEXl/02MrRepDAt/+1vS0pcjIt1em+SohmkW9kHj9NGmsrEQob9QmRoq/Q/0ZN0q4txsS6nxZPOmD72QmSLwh4B6lVrL1z47XKzaPlvq6pE0xcuiFqr1PC0RFS1PPbFkiqptPeCdrlUcUEhqkqVwgy/lWrJg6JK9E948U8zmfbC8uUXLpi1F7aK2AVJW6VweIiWR8Z2mhYy7NMJEaIlaqk2IQH9PZ8wrU5IiNUmaNUJmerehCWxMCghI/Wi1qxN+GdVQsJyKZpBKpJKpTDk+YTn1ZkXpKnP3VI/j6ZdIoL/yxOWZ7ALMisu3hL1JigyImIzCTuyX4wYvWK+dVGqvXgRAnjsxYvwfyj2YobiIn75SqpIelErvXixCk0gRZOjSdFDpgKQRy8s05qfS4XDRHTLtqBXpApFqjQiNpKwI3vkwH5xueJDYHZaNH2RMHoRwX4QP6Vmat3Brl6CHjHseNqLqdvVu6XTIrsFZaRqtdrI2FKCjuzTQ5Gh+PihoSrtkHT7QcXyIXHs0BD8h78GdfzQK7EweChDOqSVDg29UjV0sVWK5pBq8YNWNBSv1qJ37LQwSStMvdy2oItoTIRILGgb82FkSCuVVikOfrhdfTNeWrVcffDDWBgIf6/AYNGHYlGVQgrTwLDl0qpXpEPwQqolD8shldXid2TaD+NjxUOZseIP7Ra0vUpapY2I7SRkG/P8QwcjS0NDB6k86KHnhQt7/EO/oRKSHooXMuz/l0pIEjbsA1RCEoWdisIuCNj/hUpIEjbsj1MJSYKG/d/o/heU/k3IsG8fpBKShAz73afPUwlJTz9OYacSCux3hQx7H5WQRGGnorALAfY4hkpIiuMB7P9fgAEAdQJN2jyvHVsAAAAASUVORK5CYII=" alt="A high level view of the OpenStack service interaction [3]" /><p class="caption">A high level view of the OpenStack service interaction <span class="citation">[3]</span></p> </div> <p>OpenStack is a free open source IaaS Cloud platform originally released by Rackspace and NASA under the Apache 2.0 License in July 2010. 
OpenStack controls and manages compute, storage, and network resources aggregated from multiple servers in a data center. The system provides administrators and users with a web interface (dashboard) and APIs compatible with Amazon EC2 that allow flexible on-demand provisioning of resources. OpenStack also supports the Open Cloud Computing Interface (OCCI)<sup><a href="#fn13" class="footnoteRef" id="fnref13">13</a></sup>, which is an emerging standard defining IaaS APIs, delivered through the Open Grid Forum (OGF)<sup><a href="#fn14" class="footnoteRef" id="fnref14">14</a></sup>.</p> <p>In April 2012, the project lead and management functions were transferred to a newly formed OpenStack Foundation. The goals of the foundation are to support an open development process and community building, drive awareness and adoption, and encourage and maintain an ecosystem of companies powered by the OpenStack software. The OpenStack project is currently supported by more than 150 companies including AMD, Intel, Canonical, SUSE Linux, Red Hat, Cisco, Dell, HP, IBM and Yahoo!.</p> <p>The OpenStack software is divided into several services, shown in Figure 1, that through their interaction provide the overall system management capabilities. The main services include the following:</p> <ul> <li><em>OpenStack Compute (Nova)</em>: manages the life cycle of VM instances from scheduling and resource provisioning to live migration and security rules. By leveraging the virtualization API provided by Libvirt<sup><a href="#fn15" class="footnoteRef" id="fnref15">15</a></sup>, OpenStack Compute supports multiple hypervisors, such as KVM and Xen.</li> <li><em>OpenStack Storage</em>: provides block and object storage for use by VM instances. The block storage system allows the users to create block storage devices and dynamically attach and detach them from VM instances using the dashboard or API. In addition to block storage, OpenStack provides a scalable distributed object storage called Swift, which is also accessible through an API.</li> <li><em>OpenStack Networking</em>: provides API-driven network and IP address management capabilities. The system allows the users to create their own networks and assign static, floating, or dynamic IP addresses to VM instances.</li> <li><em>OpenStack Dashboard (Horizon)</em>: provides the administrators and users with a web interface to the system management capabilities, such as VM image management, VM instance life cycle management, and storage management.</li> <li><em>OpenStack Identity (Keystone)</em>: a centralized user account management service acting as an authentication and access control system. In addition, the service provides access to a registry of the OpenStack services deployed in the data center and their communication endpoints.</li> <li><em>OpenStack Image (Glance)</em>: provides various VM image management capabilities, such as registration, delivery, and snapshotting. The service supports multiple VM image formats including Raw, AMI, VHD, VDI, qcow2, VMDK, and OVF.</li> </ul> <p>The OpenStack software is architected with the aim of decoupling the services constituting the system. The services interact with each other through the public APIs they provide, using Keystone as a registry for obtaining the information about the communication endpoints. The OpenStack Compute service, also referred to as Nova, is built on a shared-nothing messaging-based architecture, which allows running the services on multiple servers.</p>
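<p>In practice, this means that the individual services can be started independently on different hosts. As a rough illustration only (the service names are those of the standard OpenStack packages for CentOS, and the host assignments in the comments follow the testbed layout described in Section 5), the Nova daemons are managed as regular system services:</p>
<pre class="sourceCode bash"><code class="sourceCode bash"># Illustration: each Nova service is a separate daemon and may run on a different host
service openstack-nova-api start        # API front end (controller)
service openstack-nova-scheduler start  # instance scheduling (controller)
service openstack-nova-compute start    # VM management (each compute host)
service openstack-nova-network start    # networking (network gateway)</code></pre>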
<p>The services composing Nova communicate via the Advanced Message Queuing Protocol (AMQP) using asynchronous calls to avoid blocking. More detailed information on the installation and administration of OpenStack is given in the official manuals <span class="citation">[4], [5]</span>. In the next section we compare OpenStack with the other major open source Cloud platforms.</p> <h1 id="comparison-of-open-source-cloud-platforms"><a href="#TOC"><span class="header-section-number">3</span> Comparison of Open Source Cloud Platforms</a></h1> <p>In this section, we briefly discuss and compare OpenStack with three other major open source Cloud platforms, namely Eucalyptus, OpenNebula, and CloudStack.</p> <p>Eucalyptus<sup><a href="#fn16" class="footnoteRef" id="fnref16">16</a></sup> is an open source IaaS Cloud platform developed by Eucalyptus Systems and released in March 2008 under the GPL v3 license. Eucalyptus is an acronym for “Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems”. Prior to version 3.1, Eucalyptus had two editions: open source and enterprise, the latter of which included extra features and commercial support. As of version 3.1, both editions have been merged into a single open source project. In March 2012, Eucalyptus and Amazon Web Services (AWS) announced a partnership aimed at bringing and maintaining additional API compatibility between the Eucalyptus platform and AWS, which will enable simpler workload migration and deployment of hybrid Cloud environments<sup><a href="#fn17" class="footnoteRef" id="fnref17">17</a></sup>. The Eucalyptus platform is composed of the following 5 high-level components, each of which is implemented as a standalone web service:</p> <ul> <li><em>Cloud Controller</em>: manages the underlying virtualized resources (servers, network, and storage) and provides a web interface and API compatible with Amazon EC2.</li> <li><em>Cluster Controller</em>: controls VMs running on multiple physical nodes and manages the virtual networking between VMs, and between VMs and external users.</li> <li><em>Walrus</em>: implements object storage accessible through an API compatible with Amazon S3.</li> <li><em>Storage Controller</em>: provides block storage that can be dynamically attached to VMs, which is managed via an API compatible with Amazon Elastic Block Storage (EBS).</li> <li><em>Node Controller</em>: controls the life cycle of VMs within a physical node using the functionality provided by the hypervisor.</li> </ul> <p>OpenNebula<sup><a href="#fn18" class="footnoteRef" id="fnref18">18</a></sup> is an open source IaaS Cloud platform originally established as a research project back in 2005 by Ignacio M. Llorente and Rubén S. Montero. The software was first publicly released in March 2008 under the Apache 2.0 license. In March 2010, the authors of OpenNebula founded C12G Labs, an organization aiming to provide commercial support and services for the OpenNebula software. Currently, the OpenNebula project is managed by C12G Labs. OpenNebula supports several standard APIs, such as EC2 Query, OGF OCCI, and vCloud. 
OpenNebula provides the following features and components:</p> <ul> <li><em>Users and Groups</em>: OpenNebula supports multiple user accounts and groups, various authentication and authorization mechanisms, as well as Access Control Lists (ACLs), allowing fine-grained permission management.</li> <li><em>Virtualization Subsystem</em>: communicates with the hypervisor installed on a physical host, enabling the management and monitoring of the life cycle of VMs.</li> <li><em>Network Subsystem</em>: manages the virtual networking provided to interconnect VMs; supports VLANs and Open vSwitch.</li> <li><em>Storage Subsystem</em>: supports several types of data stores for storing VM images.</li> <li><em>Clusters</em>: are pools of hosts that share data stores and virtual networks; they can be used for load balancing, high availability, and high performance computing.</li> </ul> <p>CloudStack<sup><a href="#fn19" class="footnoteRef" id="fnref19">19</a></sup> is an open source IaaS Cloud platform originally developed by Cloud.com. In May 2010, most of the software was released under the GPL v3 license, while 5% of the code was kept proprietary. In July 2011, Citrix purchased Cloud.com and in August 2011 released the remaining code of CloudStack under the GPL v3 license. In April 2012, Citrix donated CloudStack to the Apache Software Foundation, while changing the license to Apache 2.0. CloudStack implements the Amazon EC2 and S3 APIs, as well as the vCloud API, in addition to its own API. CloudStack has a hierarchical structure, which enables management of multiple physical hosts from a single interface. The structure includes the following components:</p> <ul> <li><em>Availability Zones</em>: represent geographical locations, which are used in the allocation of VM instances and data storage. An Availability Zone consists of at least one Pod, and Secondary Storage, which is shared by all Pods in the Zone.</li> <li><em>Pods</em>: are collections of hardware configured to form Clusters. A Pod can contain one or more Clusters, and a Layer 2 switch architecture, which is shared by all Clusters in that Pod.</li> <li><em>Clusters</em>: are groups of identical physical hosts running the same hypervisor. 
A Cluster has a dedicated Primary Storage device, where the VM instances are hosted.</li> <li><em>Primary Storage</em>: is unique to each Cluster and is used to host VM instances.</li> <li><em>Secondary Storage</em>: is used to store VM images and snapshots.</li> </ul> <p>A comparison of the discussed Cloud platforms is summarized in Table 1.</p> <table> <caption>Comparison of OpenStack, Eucalyptus, OpenNebula, and CloudStack</caption> <col width="19%"></col> <col width="18%"></col> <col width="15%"></col> <col width="15%"></col> <col width="15%"></col> <thead> <tr class="header"> <th align="left"></th> <th align="left">OpenStack</th> <th align="left">Eucalyptus</th> <th align="left">OpenNebula</th> <th align="left">CloudStack</th> </tr> </thead> <tbody> <tr class="odd"> <td align="left">Managed By</td> <td align="left">OpenStack Foundation</td> <td align="left">Eucalyptus Systems</td> <td align="left">C12G Labs</td> <td align="left">Apache Software Foundation</td> </tr> <tr class="even"> <td align="left">License</td> <td align="left">Apache 2.0</td> <td align="left">GPL v3</td> <td align="left">Apache 2.0</td> <td align="left">Apache 2.0</td> </tr> <tr class="odd"> <td align="left">Initial Release</td> <td align="left">October 2010</td> <td align="left">May 2010</td> <td align="left">March 2008</td> <td align="left">May 2010</td> </tr> <tr class="even"> <td align="left">OCCI Compatibility</td> <td align="left">Yes</td> <td align="left">No</td> <td align="left">Yes</td> <td align="left">No</td> </tr> <tr class="odd"> <td align="left">AWS Compatibility</td> <td align="left">Yes</td> <td align="left">Yes</td> <td align="left">Yes</td> <td align="left">Yes</td> </tr> <tr class="even"> <td align="left">Hypervisors</td> <td align="left">Xen, KVM, VMware</td> <td align="left">Xen, KVM, VMware</td> <td align="left">Xen, KVM, VMware</td> <td align="left">Xen, KVM, VMware, Oracle VM</td> </tr> <tr class="odd"> <td align="left">Programming Language</td> <td align="left">Python</td> <td align="left">Java, C</td> <td align="left">C, C++, Ruby, Java</td> <td align="left">Java</td> </tr> </tbody> </table> <h1 id="existing-openstack-installation-tools"><a href="#TOC"><span class="header-section-number">4</span> Existing OpenStack Installation Tools</a></h1> <p>There are several official OpenStack installation and administration guides <span class="citation">[5]</span>. These are invaluable sources of information about OpenStack; however, the official guides mainly focus on the configuration in Ubuntu, while the documentation for other Linux distributions, such as CentOS, is incomplete or missing. In this work, we aim to close the gap by providing a step-by-step guide to installing OpenStack on CentOS. Another difference of the current guide from the official documentation is that rather than describing a general installation procedure, we focus on concrete and tested steps required to obtain an operational OpenStack installation for our testbed. In other words, this guide can be considered an example of how OpenStack can be deployed on a real-world multi-node testbed.</p> <p>One of the existing tools for automated installation of OpenStack is DevStack<sup><a href="#fn20" class="footnoteRef" id="fnref20">20</a></sup>. DevStack is distributed in the form of a single shell script, which installs a complete OpenStack development environment. The officially supported Linux distributions are Ubuntu 12.04 (Precise) and Fedora 16.</p>
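<p>For reference, DevStack is typically used by cloning its repository and running the main script directly on the target machine; a minimal sketch (the repository location is the one publicly used by the project at the time of writing, and the exact behavior may change between releases):</p>
<pre class="sourceCode bash"><code class="sourceCode bash"># Sketch of a typical DevStack run on a supported distribution
git clone https://github.com/openstack-dev/devstack.git
cd devstack
./stack.sh   # prompts for passwords and installs all the services on the local machine</code></pre>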
<p>DevStack also comes with guides to installing OpenStack in a VM and on real hardware. The guides to installing OpenStack on hardware include both single-node and multi-node installations. One of the drawbacks of the approach taken by DevStack is that, in case of an error during the installation process, it is necessary to restart the installation from the beginning instead of just fixing the current step.</p> <p>Another tool for automated installation of OpenStack is dodai-deploy<sup><a href="#fn21" class="footnoteRef" id="fnref21">21</a></sup>, which is described in the OpenStack Compute Administration Manual <span class="citation">[4]</span>. dodai-deploy is a Puppet<sup><a href="#fn22" class="footnoteRef" id="fnref22">22</a></sup> service running on all the nodes and providing a web interface for automated installation of OpenStack. The service is developed and maintained to be run on Ubuntu. Several steps are required to install and configure the dodai-deploy service on the nodes. Once the service is started on the head and compute nodes, it is possible to install and configure OpenStack using the provided web interface or REST API.</p> <p>The difference of our approach from both DevStack and dodai-deploy is that instead of adding an abstraction layer and minimizing the number of steps required to be followed by the user to obtain an operational OpenStack installation, we aim to explicitly describe and perform every installation step in the form of a separate shell script. This allows the user to proceed slowly and customize individual steps when necessary. The purpose of our approach is not just obtaining an up and running OpenStack installation, but also learning the steps required to perform the installation from the ground up and understanding the responsibilities and interaction of the OpenStack components. Our installation scripts have been developed and tested on CentOS, which is a widely used server Linux distribution. Another difference of our approach from both DevStack and dodai-deploy is that we also set up GlusterFS to provide distributed shared storage, which enables fault tolerance and efficient live migration of VM instances.</p> <p>Red Hat, a platinum member of the OpenStack Foundation, has announced its commercial offering of OpenStack starting from the Folsom release, with availability in 2013<sup><a href="#fn23" class="footnoteRef" id="fnref23">23</a></sup>. From the announcement it appears that the product will be delivered through the official repositories for Red Hat Enterprise Linux 6.3 or higher, and will contain Red Hat’s proprietary code providing integration with other Red Hat products, such as Red Hat Enterprise Virtualization for managing virtualized data centers and Red Hat Enterprise Linux. This announcement is a solid step towards the adoption of OpenStack in enterprises requiring commercial services and support.</p> <h1 id="step-by-step-openstack-deployment"><a href="#TOC"><span class="header-section-number">5</span> Step-by-Step OpenStack Deployment</a></h1> <p>As mentioned earlier, the aim of this work is to detail the steps required to perform a complete installation of OpenStack on multiple nodes. We split the installation process into multiple subsequent logical steps and provide a shell script for each of the steps. In this section, we explain and discuss every step that needs to be followed to obtain a fully operational OpenStack installation on our testbed consisting of 1 controller and 4 compute nodes. 
The source code of the shell scripts described in this paper is released under the Apache 2.0 License and is publicly available online<sup><a href="#fn24" class="footnoteRef" id="fnref24">24</a></sup>.</p> <h2 id="hardware-setup"><a href="#TOC"><span class="header-section-number">5.1</span> Hardware Setup</a></h2> <p>The testbed used for testing the installation scripts consists of the following hardware:</p> <ul> <li>1 x Dell Optiplex 745 <ul> <li>Intel(R) Core(TM) 2 CPU (2 cores, 2 threads) 6600 @ 2.40GHz</li> <li>2GB DDR2-667</li> <li>Seagate Barracuda 80GB, 7200 RPM SATA II (ST3808110AS)</li> <li>Broadcom 5751 NetXtreme Gigabit Controller</li> </ul></li> <li>4 x IBM System x3200 M3 <ul> <li>Intel(R) Xeon(R) CPU (4 cores, 8 threads), X3460 @ 2.80GHz</li> <li>4GB DDR3-1333</li> <li>Western Digital 250 GB, 7200 RPM SATA II (WD2502ABYS-23B7A)</li> <li>Dual Gigabit Ethernet (2 x Intel 82574L Ethernet Controller)</li> </ul></li> <li>1 x Netgear ProSafe 16-Port 10/100 Desktop Switch FS116</li> </ul> <p>The Dell Optiplex 745 machine has been chosen to serve as a management host running all the major OpenStack services. The management host is referred to as the <em>controller</em> further in the text. The 4 IBM System x3200 M3 servers are used as <em>compute hosts</em>, i.e., for hosting VM instances.</p> <p>Due to the specifics of our setup, the only machine connected to the public network and the Internet is one of the IBM System x3200 M3 servers. This server is referred to as the <em>gateway</em>. The gateway is connected to the public network via the <code>eth0</code> network interface.</p> <p>All the machines form a local network connected via the Netgear FS116 network switch. The compute hosts are connected to the local network through their <code>eth1</code> network interfaces. The controller is connected to the local network through its <code>eth0</code> interface. To provide access to the public network and the Internet, the gateway performs Network Address Translation (NAT) for the hosts from the local network.</p> <h2 id="organization-of-the-installation-package"><a href="#TOC"><span class="header-section-number">5.2</span> Organization of the Installation Package</a></h2> <p>The project contains a number of directories, whose organization is explained in this section. The <code>config</code> directory includes configuration files, which are used by the installation scripts and should be modified prior to the installation. The <code>lib</code> directory contains utility scripts that are shared by the other installation scripts. The <code>doc</code> directory comprises the source and compiled versions of the documentation.</p> <p>The remaining directories directly include the step-by-step installation scripts. The names of these directories have a specific format. The prefix (before the first dash) is the number denoting the order of execution. For example, the scripts from the directory with the prefix <em>01</em> must be executed first, followed by the scripts from the directory with the prefix <em>02</em>, etc. The middle part of a directory name denotes the purpose of the scripts in this directory. The suffix (after the last dash) specifies the host, on which the scripts from this directory should be executed. 
There are 4 possible values of the target host suffix:</p> <ul> <li><em>all</em> – execute the scripts on all the hosts;</li> <li><em>compute</em> – execute the scripts on all the compute hosts;</li> <li><em>controller</em> – execute the scripts on the controller;</li> <li><em>gateway</em> – execute the scripts on the gateway.</li> </ul> <p>For example, the first directory is named <code>01-network-gateway</code>, which means that (1) the scripts from this directory must be executed in the first place; (2) the scripts perform the network setup; and (3) the scripts must be executed only on the gateway. The name <code>02-glusterfs-all</code> means: (1) the scripts from this directory must be executed after the scripts from <code>01-network-gateway</code>; (2) the scripts set up GlusterFS; and (3) the scripts must be executed on all the hosts.</p> <p>The names of the installation scripts themselves follow a similar convention. The prefix denotes the order, in which the scripts should be run, while the remaining part of the name describes the purpose of the script.</p> <h2 id="configuration-files"><a href="#TOC"><span class="header-section-number">5.3</span> Configuration Files</a></h2> <p>The <code>config</code> directory contains the configuration files used by the installation scripts. These configuration files should be modified prior to running the installation scripts. The configuration files are described below.</p> <dl> <dt><code>configrc:</code></dt> <dd><p>This file contains a number of environment variables defining various aspects of OpenStack’s configuration, such as administration and service account credentials, as well as access points. The file must be “sourced” to export the variables into the current shell session. The file can be sourced directly by running: <code>. configrc</code>, or using the scripts described later. A simple test to check whether the variables have been correctly exported is to <code>echo</code> any of the variables. For example, <code>echo $OS_USERNAME</code> must output <code>admin</code> for the default configuration.</p> </dd> <dt><code>hosts:</code></dt> <dd><p>This file contains a mapping between the IP addresses of the hosts in the local network and their host names. We apply the following host name convention: the compute hosts are named <em>computeX</em>, where <em>X</em> is replaced by the number of the host. According to the described hardware setup, the default configuration defines 4 compute hosts: <code>compute1</code> (192.168.0.1), <code>compute2</code> (192.168.0.2), <code>compute3</code> (192.168.0.3), <code>compute4</code> (192.168.0.4); and 1 <code>controller</code> (192.168.0.5). As mentioned above, in our setup one of the compute hosts is connected to the public network and acts as a gateway. We assign the host name <code>compute1</code> to this host, and also alias it as <code>gateway</code>.</p> </dd> <dt><code>ntp.conf:</code></dt> <dd><p>This file contains a list of Network Time Protocol (NTP) servers to be used by all the hosts. It is important to specify accessible servers, since time synchronization is essential for the OpenStack services to interact correctly. By default, this file defines servers used within the University of Melbourne.</p>
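<p>For instance, a generic configuration pointing at the public NTP pool has the following form (an illustration only, not the provided default):</p>
<pre><code>server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org</code></pre>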
<p>It is advised to replace the default configuration with a list of preferred servers.</p> </dd> </dl> <p>It is important to replace the default configuration defined in the described configuration files, since the default configuration is tailored to the specific setup of our testbed.</p> <h2 id="installation-procedure"><a href="#TOC"><span class="header-section-number">5.4</span> Installation Procedure</a></h2> <h3 id="centos"><a href="#TOC"><span class="header-section-number">5.4.1</span> CentOS</a></h3> <p>The installation scripts have been tested with CentOS 6.3, which has been installed on all the hosts. The CentOS installation mainly follows the standard process described in detail in the Red Hat Enterprise Linux 6 Installation Guide <span class="citation">[6]</span>. The minimal configuration option is sufficient, since all the required packages can be installed later when needed. The steps of the installation process that differ from the default are discussed in this section.</p> <h4 id="network-configuration."><a href="#TOC"><span class="header-section-number">5.4.1.1</span> Network Configuration.</a></h4> <p>The simplest way to configure the network is during the OS installation process. As mentioned above, in our setup, the gateway is connected to two networks: to the public network through the <code>eth0</code> interface; and to the local network through the <code>eth1</code> interface. Since in our setup the public network configuration can be obtained from a DHCP server, in the configuration of the <code>eth0</code> interface it is only required to enable the “Connect Automatically” option. We use static configuration for the local network; therefore, <code>eth1</code> has to be configured manually. Apart from enabling the “Connect Automatically” option, it is necessary to configure IPv4 by adding an IP address and netmask. According to the configuration defined in the <code>hosts</code> file described above, we assign 192.168.0.1/24 to the gateway.</p> <p>One of the differences in the network configuration of the other compute hosts (<code>compute2</code>, <code>compute3</code>, and <code>compute4</code>) from the gateway is that <code>eth0</code> should be kept disabled, as it is unused. The <code>eth1</code> interface should be enabled by turning on the “Connect Automatically” option. The IP address and netmask for <code>eth1</code> should be set to 192.168.0.<em>X</em>/24, where <em>X</em> is replaced by the compute host number. The gateway for the compute hosts should be set to 192.168.0.1, which is the IP address of the gateway. The controller is configured similarly to the compute hosts, with the only difference being that the configuration should be done for <code>eth0</code> instead of <code>eth1</code>, since the controller has only one network interface.</p> <h4 id="hard-drive-partitioning."><a href="#TOC"><span class="header-section-number">5.4.1.2</span> Hard Drive Partitioning.</a></h4> <p>The hard drive partitioning scheme is the same for all the compute hosts, but differs for the controller. Table 2 shows the partitioning scheme for the compute hosts. <code>vg_base</code> is a volume group comprising the standard OS partitions: <code>lv_root</code>, <code>lv_home</code> and <code>lv_swap</code>. <code>vg_gluster</code> is a special volume group containing a single <code>lv_gluster</code> partition, which is dedicated to serve as a GlusterFS brick.</p>
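<p>Although these partitions are normally created by the CentOS installer during the OS installation, the brick volume could equally be prepared by hand with standard LVM and XFS tools; a minimal sketch, assuming the device names from Table 2 and the commonly recommended 512-byte inode size for GlusterFS bricks:</p>
<pre class="sourceCode bash"><code class="sourceCode bash"># Illustration only: manual preparation of the GlusterFS brick volume from Table 2
pvcreate /dev/sda3                                # physical volume for the brick
vgcreate vg_gluster /dev/sda3                     # dedicated volume group
lvcreate -l 100%FREE -n lv_gluster vg_gluster     # single logical volume spanning the group
mkfs.xfs -i size=512 /dev/vg_gluster/lv_gluster   # XFS with 512-byte inodes
mkdir -p /export/gluster
mount /dev/vg_gluster/lv_gluster /export/gluster  # mount point used as the brick</code></pre>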
The <code>lv_gluster</code> logical volume is formatted using the XFS<sup><a href="#fn25" class="footnoteRef" id="fnref25">25</a></sup> file system, as recommended for GlusterFS bricks.</p> <table> <caption>The partitioning scheme for the compute hosts</caption> <col width="30%"></col> <col width="15%"></col> <col width="29%"></col> <col width="13%"></col> <thead> <tr class="header"> <th align="left">Device</th> <th align="left">Size (MB)</th> <th align="left">Mount Point / Volume</th> <th align="left">Type</th> </tr> </thead> <tbody> <tr class="odd"> <td align="left"><em>LVM Volume Groups</em></td> <td align="left"> </td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="even"> <td align="left"> vg_base</td> <td align="left">20996</td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="odd"> <td align="left"> lv_root</td> <td align="left">10000</td> <td align="left">/</td> <td align="left">ext4</td> </tr> <tr class="even"> <td align="left"> lv_swap</td> <td align="left">6000</td> <td align="left"> </td> <td align="left">swap</td> </tr> <tr class="odd"> <td align="left"> lv_home</td> <td align="left">4996</td> <td align="left">/home</td> <td align="left">ext4</td> </tr> <tr class="even"> <td align="left"> vg_gluster</td> <td align="left">216972</td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="odd"> <td align="left"> lv_gluster</td> <td align="left">216972</td> <td align="left">/export/gluster</td> <td align="left">xfs</td> </tr> <tr class="even"> <td align="left"><em>Hard Drives</em></td> <td align="left"> </td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="odd"> <td align="left"> sda</td> <td align="left"> </td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="even"> <td align="left"> sda1</td> <td align="left">500</td> <td align="left">/boot</td> <td align="left">ext4</td> </tr> <tr class="odd"> <td align="left"> sda2</td> <td align="left">21000</td> <td align="left">vg_base</td> <td align="left">PV (LVM)</td> </tr> <tr class="even"> <td align="left"> sda3</td> <td align="left">216974</td> <td align="left">vg_gluster</td> <td align="left">PV (LVM)</td> </tr> </tbody> </table> <p>Table 3 shows the partitioning scheme for the controller. It does not include a <code>vg_gluster</code> volume group since the controller is not going to be a part of the GlusterFS volume. However, the scheme includes two new important volume groups: <code>nova-volumes</code> and <code>vg_images</code>. The <code>nova-volumes</code> volume group is used by OpenStack Nova to allocate volumes for VM instances. This volume group is managed by Nova; therefore, there is no need to create logical volumes manually. The <code>vg_images</code> volume group and its <code>lv_images</code> logical volume are dedicated to storing VM images managed by OpenStack Glance.
The mount point for <code>lv_images</code> is <code>/var/lib/glance/images</code>, which is the default directory used by Glance to store VM image files.</p> <table> <caption>The partitioning scheme for the controller</caption> <col width="27%"></col> <col width="15%"></col> <col width="31%"></col> <col width="13%"></col> <thead> <tr class="header"> <th align="left">Device</th> <th align="left">Size (MB)</th> <th align="left">Mount Point / Volume</th> <th align="left">Type</th> </tr> </thead> <tbody> <tr class="odd"> <td align="left"><em>LVM Volume Groups</em></td> <td align="left"> </td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="even"> <td align="left"> nova-volumes</td> <td align="left">29996</td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="odd"> <td align="left"> Free</td> <td align="left">29996</td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="even"> <td align="left"> vg_base</td> <td align="left">16996</td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="odd"> <td align="left"> lv_root</td> <td align="left">10000</td> <td align="left">/</td> <td align="left">ext4</td> </tr> <tr class="even"> <td align="left"> lv_swap</td> <td align="left">2000</td> <td align="left"> </td> <td align="left">swap</td> </tr> <tr class="odd"> <td align="left"> lv_home</td> <td align="left">4996</td> <td align="left">/home</td> <td align="left">ext4</td> </tr> <tr class="even"> <td align="left"> vg_images</td> <td align="left">28788</td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="odd"> <td align="left"> lv_images</td> <td align="left">28788</td> <td align="left">/var/lib/glance/images</td> <td align="left">ext4</td> </tr> <tr class="even"> <td align="left"><em>Hard Drives</em></td> <td align="left"> </td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="odd"> <td align="left"> sda</td> <td align="left"> </td> <td align="left"> </td> <td align="left"> </td> </tr> <tr class="even"> <td align="left"> sda1</td> <td align="left">500</td> <td align="left">/boot</td> <td align="left">ext4</td> </tr> <tr class="odd"> <td align="left"> sda2</td> <td align="left">17000</td> <td align="left">vg_base</td> <td align="left">PV (LVM)</td> </tr> <tr class="even"> <td align="left"> sda3</td> <td align="left">30000</td> <td align="left">nova-volumes</td> <td align="left">PV (LVM)</td> </tr> <tr class="odd"> <td align="left"> sda4</td> <td align="left">28792</td> <td align="left"> </td> <td align="left">Extended</td> </tr> <tr class="even"> <td align="left"> sda5</td> <td align="left">28788</td> <td align="left">vg_images</td> <td align="left">PV (LVM)</td> </tr> </tbody> </table> <h3 id="network-gateway"><a href="#TOC"><span class="header-section-number">5.4.2</span> Network Gateway</a></h3> <p>Once CentOS is installed on all the machines, the next step is to configure NAT on the gateway to enable the Internet access on all the hosts. First, it is necessary to check whether the Internet is available on the gateway itself. If the Internet is not available, the problem might be in the configuration of <code>eth0</code>, the network interface connected to the public network in our setup.</p> <p>In all the following steps, it is assumed that the user logged in is <code>root</code>. 
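<p>As a quick check, connectivity and the <code>eth0</code> configuration on the gateway can be verified, for example, as follows (8.8.8.8 is used here only as an arbitrary reachable public address; any external host will do):</p> <pre class="sourceCode Bash"><code class="sourceCode bash"># Check that an external host is reachable from the gateway
ping -c 3 8.8.8.8
# Inspect the state and configuration of the public network interface
ip addr show eth0
cat /etc/sysconfig/network-scripts/ifcfg-eth0</code></pre>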
If the Internet is available on the gateway, it is necessary to install the Git<sup><a href="#fn26" class="footnoteRef" id="fnref26">26</a></sup> version control client to be able to clone the repository containing the installation scripts. This can be done using <code>yum</code>, the default package manager in CentOS, as follows:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">yum <span class="kw">install</span> -y git</code></pre> <p>Next, the repository can be cloned using the following command:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">git clone <span class="kw">\</span> https://github.com/beloglazov/openstack-centos-kvm-glusterfs.git</code></pre> <p>Now, we can continue the installation using the scripts contained in the cloned Git repository. As described above, the starting point is the directory called <code>01-network-gateway</code>.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">cd</span> openstack-centos-kvm-glusterfs/01-network-gateway</code></pre> <p>All the scripts described below can be run by executing <code>./<script name>.sh</code> in the command line.</p> <ol style="list-style-type: example"> <li><code>01-iptables-nat.sh</code></li> </ol> <p>This script flushes all the default <code>iptables</code> rules to open all the ports. This is acceptable for testing; however, it is not recommended for production environments due to security concerns. Then, the script sets up NAT using <code>iptables</code> by forwarding packets from <code>eth1</code> (the local network interface) through <code>eth0</code>. The last stage is saving the defined <code>iptables</code> rules and restarting the service.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Flush the iptables rules.</span> iptables -F iptables -t nat -F iptables -t mangle -F <span class="co"># Set up packet forwarding for NAT</span> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE iptables -A FORWARD -i eth1 -j ACCEPT iptables -A FORWARD -o eth1 -j ACCEPT <span class="co"># Save the iptables configuration into a file and restart iptables</span> service iptables save service iptables restart</code></pre> <ol start="2" style="list-style-type: example"> <li><code>02-ip-forward.sh</code></li> </ol> <p>By default, IP packet forwarding is disabled in CentOS; therefore, it is necessary to enable it by modifying the <code>/etc/sysctl.conf</code> configuration file. This is done by the <code>02-ip-forward.sh</code> script as follows:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Enable IP packet forwarding</span> <span class="kw">sed</span> -i <span class="st">'s/net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/g'</span> <span class="kw">\</span> /etc/sysctl.conf <span class="co"># Restart the network service</span> service network restart</code></pre> <ol start="3" style="list-style-type: example"> <li><code>03-copy-hosts.sh</code></li> </ol> <p>This script copies the <code>hosts</code> file from the <code>config</code> directory to <code>/etc</code> locally, as well to all the other hosts: the remaining compute hosts and the controller. The <code>hosts</code> file defines a mapping between the IP addresses of the hosts and host names. For convenience, prior to copying you may use the <code>ssh-copy-id</code> program to copy the public key to the other hosts for password-less SSH connections. 
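<p>A minimal sketch of preparing password-less SSH from the gateway, assuming the <code>root</code> account is used on all the hosts; the IP addresses from the <code>hosts</code> file are used here, since the host names are not resolvable until the file is in place:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"># Generate an SSH key pair, unless one already exists
ssh-keygen -t rsa
# Copy the public key to the other compute hosts and the controller
ssh-copy-id root@192.168.0.2
ssh-copy-id root@192.168.0.3
ssh-copy-id root@192.168.0.4
ssh-copy-id root@192.168.0.5</code></pre>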
Once the <code>hosts</code> file is copied to all the hosts, they can be accessed by using their respective host names instead of the IP addresses.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Copy the hosts file into the local configuration</span> <span class="kw">cp</span> ../config/hosts /etc/ <span class="co"># Copy the hosts file to the other nodes.</span> <span class="kw">scp</span> ../config/hosts root@compute2:/etc/ <span class="kw">scp</span> ../config/hosts root@compute3:/etc/ <span class="kw">scp</span> ../config/hosts root@compute4:/etc/ <span class="kw">scp</span> ../config/hosts root@controller:/etc/</code></pre> <p>From this point, all the installation steps on any host can be performed remotely over SSH.</p> <h3 id="glusterfs-distributed-replicated-storage"><a href="#TOC"><span class="header-section-number">5.4.3</span> GlusterFS Distributed Replicated Storage</a></h3> <p>In this section, we describe how to set up distributed replicated storage using GlusterFS.</p> <h4 id="glusterfs-all-all-nodes."><a href="#TOC"><span class="header-section-number">5.4.3.1</span> 02-glusterfs-all (all nodes).</a></h4> <p>The steps discussed in this section need to be run on all the hosts. The easiest way to manage multi-node installation is to SSH into all the hosts from another machine using separate terminals. This way the hosts can be conveniently managed from a single machine simultaneously. Before applying further installation scripts, it is necessary to run the following commands:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Update the OS packages</span> yum update -y <span class="co"># Install Git</span> yum <span class="kw">install</span> -y git <span class="co"># Clone the repository</span> git clone <span class="kw">\</span> https://github.com/beloglazov/openstack-centos-kvm-glusterfs.git</code></pre> <p>It is optional but might be useful to install other programs on all the hosts, such as <code>man</code>, <code>nano</code>, or <code>emacs</code> for reading manuals and editing files.</p> <ol start="4" style="list-style-type: example"> <li><code>01-iptables-flush.sh</code></li> </ol> <p>This script flushes all the default <code>iptables</code> rules to allow connections through all the ports. As mentioned above, this is insecure and not recommended for production environments. For production it is recommended to open only the required ports.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Flush the iptables rules.</span> iptables -F <span class="co"># Save the configuration and restart iptables</span> service iptables save service iptables restart</code></pre> <ol start="5" style="list-style-type: example"> <li><code>02-selinux-permissive.sh</code></li> </ol> <p>This script switches SELinux<sup><a href="#fn27" class="footnoteRef" id="fnref27">27</a></sup> into the permissive mode. By default, SELinux blocks certain operations, such as VM migrations. 
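<p>The current SELinux mode can be inspected at any time with the standard <code>getenforce</code> utility; after this script has been applied, it should report <code>Permissive</code>:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"># Print the current SELinux mode (Enforcing, Permissive, or Disabled)
getenforce</code></pre>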
Switching SELinux into the permissive mode is not recommended for production environments, but is acceptable for testing purposes.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Set SELinux into the permissive mode</span> <span class="kw">sed</span> -i <span class="st">'s/SELINUX=enforcing/SELINUX=permissive/g'</span> /etc/selinux/config <span class="kw">echo</span> 0 <span class="kw">></span> /selinux/enforce</code></pre> <ol start="6" style="list-style-type: example"> <li><code>03-glusterfs-install.sh</code></li> </ol> <p>This script installs GlusterFS services and their dependencies.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install GlusterFS and its dependencies</span> yum -y <span class="kw">install</span> <span class="kw">\</span> openssh-server <span class="kw">wget</span> fuse fuse-libs openib libibverbs <span class="kw">\</span> http://download.gluster.org/pub/gluster/glusterfs/LATEST/<span class="kw">\</span> CentOS/glusterfs-3.3.0-1.el6.x86_64.rpm <span class="kw">\</span> http://download.gluster.org/pub/gluster/glusterfs/LATEST/<span class="kw">\</span> CentOS/glusterfs-fuse-3.3.0-1.el6.x86_64.rpm <span class="kw">\</span> http://download.gluster.org/pub/gluster/glusterfs/LATEST/<span class="kw">\</span> CentOS/glusterfs-server-3.3.0-1.el6.x86_64.rpm</code></pre> <ol start="7" style="list-style-type: example"> <li><code>04-glusterfs-start.sh</code></li> </ol> <p>This script starts the GlusterFS service, and sets the service to start during the system start up.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the GlusterFS service</span> service glusterd restart chkconfig glusterd on</code></pre> <h4 id="glusterfs-controller-controller."><a href="#TOC"><span class="header-section-number">5.4.3.2</span> 03-glusterfs-controller (controller).</a></h4> <p>The scripts described in this section need to be run only on the controller.</p> <ol start="8" style="list-style-type: example"> <li><code>01-glusterfs-probe.sh</code></li> </ol> <p>This script probes the compute hosts to add them to a GlusterFS cluster.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Probe GlusterFS peer hosts</span> gluster peer probe compute1 gluster peer probe compute2 gluster peer probe compute3 gluster peer probe compute4</code></pre> <ol start="9" style="list-style-type: example"> <li><code>02-glusterfs-create-volume.sh</code></li> </ol> <p>This scripts creates a GlusterFS volume out of the bricks exported by the compute hosts mounted to <code>/export/gluster</code> for storing VM instances. The created GlusterFS volume is replicated across all the 4 compute hosts. Such replication provides fault tolerance, as if any of the compute hosts fail, the VM instance data will be available from the remaining replicas. Compared to a Network File System (NFS) exported by a single server, the complete replication provided by GlusterFS improves the read performance, since all the read operations are local. 
This is important to enable efficient live migration of VMs.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a GlusterFS volume replicated over 4 gluster hosts</span> gluster volume create vm-instances replica 4 <span class="kw">\</span> compute1:/export/gluster compute2:/export/gluster <span class="kw">\</span> compute3:/export/gluster compute4:/export/gluster <span class="co"># Start the created volume</span> gluster volume start vm-instances</code></pre> <h4 id="glusterfs-all-all-nodes.-1"><a href="#TOC"><span class="header-section-number">5.4.3.3</span> 04-glusterfs-all (all nodes).</a></h4> <p>The script described in this section needs to be run on all the hosts.</p> <ol start="10" style="list-style-type: example"> <li><code>01-glusterfs-mount.sh</code></li> </ol> <p>This scripts adds a line to the <code>/etc/fstab</code> configuration file to automatically mount the GlusterFS volume during the system start up to the <code>/var/lib/nova/instances</code> directory. The <code>/var/lib/nova/instances</code> directory is the default location where OpenStack Nova stores the VM instance related data. This directory must be mounted and shared by the controller and all the compute hosts to enable live migration of VMs. Even though the controller does not manage the data of VM instances, it is still necessary for it to have the access to the VM instance data directory to enable live migration. The reason is that the controller coordinates live migration by writing some temporary data to the shared directory. The <code>mount -a</code> command re-mounts everything from the config file after it has been modified.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Mount the GlusterFS volume</span> <span class="kw">mkdir</span> -p /var/lib/nova/instances <span class="kw">echo</span> <span class="st">"localhost:/vm-instances /var/lib/nova/instances \</span> <span class="st"> glusterfs defaults 0 0"</span> <span class="kw">>></span> /etc/fstab <span class="kw">mount</span> -a</code></pre> <h3 id="kvm"><a href="#TOC"><span class="header-section-number">5.4.4</span> KVM</a></h3> <p>The scripts included in the <code>05-kvm-compute</code> directory need to be run on the compute hosts. KVM is not required on the controller, since it is not going to be used to host VM instances.</p> <p>Prior to enabling KVM on a machine, it is important to make sure that the machine uses either Intel VT or AMD-V chipsets that support hardware-assisted virtualization. This feature might be disabled in the Basic Input Output System (BIOS); therefore, it is necessary to verify that it is enabled. To check whether hardware-assisted virtualization is supported by the hardware, the following Linux command can be used:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">grep</span> -E <span class="st">'vmx|svm'</span> /proc/cpuinfo</code></pre> <p>If the command returns any output, it means that the system supports hardware-assisted virtualization. The <code>vmx</code> processor feature flag represents an Intel VT chipset, whereas the <code>svm</code> flag represents AMD-V. 
Depending on the flag returned, you need to modify the <code>02-kvm-modprobe.sh</code> script.</p> <ol start="11" style="list-style-type: example"> <li><code>01-kvm-install.sh</code></li> </ol> <p>This script installs KVM and the related tools.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install KVM and the related tools</span> yum -y <span class="kw">install</span> kvm qemu-kvm qemu-kvm-tools</code></pre> <ol start="12" style="list-style-type: example"> <li><code>02-kvm-modprobe.sh</code></li> </ol> <p>This script enables KVM in the OS. If the <code>grep -E 'vmx|svm' /proc/cpuinfo</code> command described above returned <code>vmx</code>, there is no need to modify this script, as it enables the Intel KVM module by default. If the command returned <code>svm</code>, it is necessary to comment the <code>modprobe kvm-intel</code> line and uncomment the <code>modprobe kvm-amd</code> line.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a script for enabling the KVM kernel module</span> <span class="kw">echo</span> <span class="st">"</span> <span class="st">modprobe kvm</span> <span class="st"># Uncomment this line if the host has an AMD CPU</span> <span class="st">#modprobe kvm-amd</span> <span class="st"># Uncomment this line if the host has an Intel CPU</span> <span class="st">modprobe kvm-intel</span> <span class="st">"</span> <span class="kw">></span> /etc/sysconfig/modules/kvm.modules <span class="kw">chmod</span> +x /etc/sysconfig/modules/kvm.modules <span class="co"># Enable KVM</span> /etc/sysconfig/modules/kvm.modules</code></pre> <ol start="13" style="list-style-type: example"> <li><code>03-libvirt-install.sh</code></li> </ol> <p>This script installs Libvirt<sup><a href="#fn28" class="footnoteRef" id="fnref28">28</a></sup>, its dependencies and the related tools. Libvirt provides an abstraction and a common Application Programming Interface (API) over various hypervisors. It is used by OpenStack to provide support for multiple hypervisors including KVM and Xen. After the installation, the script starts the <code>messagebus</code> and <code>avahi-daemon</code> services, which are prerequisites of Libvirt.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install Libvirt and its dependencies</span> yum -y <span class="kw">install</span> libvirt libvirt-python python-virtinst avahi dmidecode <span class="co"># Start the services required by Libvirt</span> service messagebus restart service avahi-daemon restart <span class="co"># Start the service during the system start up</span> chkconfig messagebus on chkconfig avahi-daemon on</code></pre> <ol start="14" style="list-style-type: example"> <li><code>04-libvirt-config.sh</code></li> </ol> <p>This script modifies the Libvirt configuration to enable communication over TCP without authentication. 
This is required by OpenStack to enable live migration of VM instances.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Enable the communication with Libvirt</span> <span class="co"># over TCP without authentication.</span> <span class="kw">sed</span> -i <span class="st">'s/#listen_tls = 0/listen_tls = 0/g'</span> <span class="kw">\</span> /etc/libvirt/libvirtd.conf <span class="kw">sed</span> -i <span class="st">'s/#listen_tcp = 1/listen_tcp = 1/g'</span> <span class="kw">\</span> /etc/libvirt/libvirtd.conf <span class="kw">sed</span> -i <span class="st">'s/#auth_tcp = "sasl"/auth_tcp = "none"/g'</span> <span class="kw">\</span> /etc/libvirt/libvirtd.conf <span class="kw">sed</span> -i <span class="st">'s/#LIBVIRTD_ARGS="--listen"/LIBVIRTD_ARGS="--listen"/g'</span> <span class="kw">\</span> /etc/sysconfig/libvirtd</code></pre> <ol start="15" style="list-style-type: example"> <li><code>05-libvirt-start.sh</code></li> </ol> <p>This script starts the <code>libvirtd</code> service and sets it to automatically start during the system start up.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the Libvirt service</span> service libvirtd restart chkconfig libvirtd on</code></pre> <h3 id="openstack"><a href="#TOC"><span class="header-section-number">5.4.5</span> OpenStack</a></h3> <p>This section contains a few subsection describing different phases of OpenStack installation.</p> <h4 id="openstack-all-all-nodes."><a href="#TOC"><span class="header-section-number">5.4.5.1</span> 06-openstack-all (all nodes).</a></h4> <p>The scripts described in this section need to be executed on all the hosts.</p> <ol start="16" style="list-style-type: example"> <li><code>01-epel-add-repo.sh</code></li> </ol> <p>This scripts adds the Extra Packages for Enterprise Linux<sup><a href="#fn29" class="footnoteRef" id="fnref29">29</a></sup> (EPEL) repository, which contains the OpenStack related packages.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Add the EPEL repo: http://fedoraproject.org/wiki/EPEL</span> yum <span class="kw">install</span> -y http://dl.fedoraproject.org/pub/epel/6/i386/<span class="kw">\</span> epel-release-6-7.noarch.rpm</code></pre> <ol start="17" style="list-style-type: example"> <li><code>02-ntp-install.sh</code></li> </ol> <p>This script install the NTP service, which is required to automatically synchronize the time with external NTP servers.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install NTP</span> yum <span class="kw">install</span> -y ntp</code></pre> <ol start="18" style="list-style-type: example"> <li><code>03-ntp-config.sh</code></li> </ol> <p>This script replaces the default servers specified in the <code>/etc/ntp.conf</code> configuration file with the servers specified in the <code>config/ntp.conf</code> file described above. 
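<p>Since the script shown below substitutes the first three lines of <code>config/ntp.conf</code> for the default <code>pool.ntp.org</code> entries, each of those lines should be a complete <code>server</code> directive. An illustrative example of the file contents (the host names are placeholders, not the defaults shipped with the package):</p> <pre class="sourceCode Bash"><code class="sourceCode bash"># Example contents of config/ntp.conf: one NTP server per line
server ntp1.example.org
server ntp2.example.org
server ntp3.example.org</code></pre>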
If the default set of servers is satisfactory, then the execution of this script is not required.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Fetch the NTP servers specified in ../config/ntp.conf</span> <span class="ot">SERVER1=</span><span class="kw">`cat</span> ../config/ntp.conf <span class="kw">|</span> <span class="kw">sed</span> <span class="st">'1!d;q'</span><span class="kw">`</span> <span class="ot">SERVER2=</span><span class="kw">`cat</span> ../config/ntp.conf <span class="kw">|</span> <span class="kw">sed</span> <span class="st">'2!d;q'</span><span class="kw">`</span> <span class="ot">SERVER3=</span><span class="kw">`cat</span> ../config/ntp.conf <span class="kw">|</span> <span class="kw">sed</span> <span class="st">'3!d;q'</span><span class="kw">`</span> <span class="co"># Replace the default NTP servers with the above</span> <span class="kw">sed</span> -i <span class="st">"s/server 0.*pool.ntp.org/</span><span class="ot">$SERVER1</span><span class="st">/g"</span> /etc/ntp.conf <span class="kw">sed</span> -i <span class="st">"s/server 1.*pool.ntp.org/</span><span class="ot">$SERVER2</span><span class="st">/g"</span> /etc/ntp.conf <span class="kw">sed</span> -i <span class="st">"s/server 2.*pool.ntp.org/</span><span class="ot">$SERVER3</span><span class="st">/g"</span> /etc/ntp.conf</code></pre> <ol start="19" style="list-style-type: example"> <li><code>04-ntp-start.sh</code></li> </ol> <p>This script starts the <code>ntpdate</code> service and sets it to start during the system start up. Upon the start, the <code>ntpdate</code> service synchronizes the time with the servers specified in the <code>/etc/ntp.conf</code> configuration file.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the NTP service</span> service ntpdate restart chkconfig ntpdate on</code></pre> <h4 id="openstack-controller-controller."><a href="#TOC"><span class="header-section-number">5.4.5.2</span> 07-openstack-controller (controller).</a></h4> <p>The scripts described in this section need to be run only on the controller host.</p> <ol start="20" style="list-style-type: example"> <li><code>01-source-configrc.sh</code></li> </ol> <p>This scripts is mainly used to remind of the necessity to “source” the <code>configrc</code> file prior to continuing, since some scripts in this directory use the environmental variable defined in <code>configrc</code>. To source the file, it is necessary to run the following command: <code>. 01-source-configrc.sh</code>.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">echo</span> <span class="st">"To make the environmental variables available \</span> <span class="st"> in the current session, run: "</span> <span class="kw">echo</span> <span class="st">". 
01-source-configrc.sh"</span> <span class="co"># Export the variables defined in ../config/configrc</span> <span class="kw">.</span> ../config/configrc</code></pre> <ol start="21" style="list-style-type: example"> <li><code>02-mysql-install.sh</code></li> </ol> <p>This script installs the MySQL server, which is required to host the databases used by the OpenStack services.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install the MySQL server</span> yum <span class="kw">install</span> -y mysql mysql-server</code></pre> <ol start="22" style="list-style-type: example"> <li><code>03-mysql-start.sh</code></li> </ol> <p>This script starts the MySQL service and initializes the password of the <code>root</code> MySQL user using a variable from the <code>configrc</code> file called <code>$MYSQL_ROOT_PASSWORD</code>.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the MySQL service</span> service mysqld start chkconfig mysqld on <span class="co"># Initialize the MySQL root password</span> mysqladmin -u root password <span class="ot">$MYSQL_ROOT_PASSWORD</span> <span class="kw">echo</span> <span class="st">""</span> <span class="kw">echo</span> <span class="st">"The MySQL root password has been set \</span> <span class="st"> to the value of </span><span class="ot">$MYSQL_ROOT_PASSWORD</span><span class="st">: </span><span class="dt">\"</span><span class="ot">$MYSQL_ROOT_PASSWORD</span><span class="dt">\"</span><span class="st">"</span></code></pre> <ol start="23" style="list-style-type: example"> <li><code>04-keystone-install.sh</code></li> </ol> <p>This script installs Keystone – the OpenStack identity management service, as well as other OpenStack command line utilities.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install OpenStack utils and Keystone, the identity management service</span> yum <span class="kw">install</span> -y openstack-utils openstack-keystone</code></pre> <ol start="24" style="list-style-type: example"> <li><code>05-keystone-create-db.sh</code></li> </ol> <p>This script creates a MySQL database for Keystone called <code>keystone</code>, which is used to store various identity data.
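<p>The database manipulation in this and the following scripts is done through the <code>../lib/mysqlq.sh</code> helper, which passes a single SQL statement to the MySQL server as the <code>root</code> user. A minimal sketch of such a helper, assuming <code>$MYSQL_ROOT_PASSWORD</code> has been exported from <code>configrc</code> (the actual script in the repository may differ):</p> <pre class="sourceCode Bash"><code class="sourceCode bash">#!/bin/sh
# Hypothetical sketch of lib/mysqlq.sh: execute the SQL statement
# passed as the first argument using the MySQL root account
mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "$1"</code></pre>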
The script also creates a <code>keystone</code> user and grants the user with full permissions to the <code>keystone</code> database.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a database for Keystone</span> ../lib/mysqlq.sh <span class="st">"CREATE DATABASE keystone;"</span> <span class="co"># Create a keystone user and grant all privileges</span> <span class="co"># to the keystone database</span> ../lib/mysqlq.sh <span class="st">"GRANT ALL ON keystone.* TO 'keystone'@'controller' \</span> <span class="st"> IDENTIFIED BY '</span><span class="ot">$KEYSTONE_MYSQL_PASSWORD</span><span class="st">';"</span></code></pre> <ol start="25" style="list-style-type: example"> <li><code>06-keystone-generate-admin-token.sh</code></li> </ol> <p>Keystone allows two types of authentication in its command line interface for administrative actions like creating users, tenants, etc:</p> <ol style="list-style-type: decimal"> <li><p>Using an admin token and <code>admin_port</code> (35357), e.g.:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">keystone <span class="kw">\</span> --token=<span class="kw"><</span>admin token<span class="kw">></span> <span class="kw">\</span> --endpoint=http://controller:35357/v2.0 user-list</code></pre></li> <li><p>Using an admin user and <code>public_port</code> (5000), e.g.:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">keystone <span class="kw">\</span> --os_username=<span class="kw"><</span>username<span class="kw">></span> <span class="kw">\</span> --os_tenant_name=<span class="kw"><</span>tenant<span class="kw">></span> <span class="kw">\</span> --os_password=<span class="kw"><</span>password<span class="kw">></span> <span class="kw">\</span> --os_auth_url=http://controller:5000/v2.0 user-list</code></pre></li> </ol> <p>Where <code><admin token></code> should be replaced by an actual value of the admin token described next, and <code><username></code>, <code><tenant></code>, and <code><password></code> should be replaced by the corresponding values of an administrative user account created in Keystone. The process of setting up a user account is discussed in the following steps.</p> <p>Apart from authenticating in Keystone as a user, OpenStack services, such as Glance and Nova, can also authenticate in Keystone using either of the two mentioned authentication methods. One way is to share the admin token among the services and authenticate using the token. The other way is to use special users created in Keystone for each service. By default, these users are <code>nova</code>, <code>glance</code>, etc. The service users are assigned to the <code>service</code> tenant and <code>admin</code> role in that tenant.</p> <p>In this work, we use password-based authentication. It uses Keystone’s database backend to store user credentials; and therefore, it is possible to update user credentials, for example, using Keystone’s command line tools without the necessity to re-generate the admin token and update the configuration files. However, since both methods can coexist, the installation scripts set up the token-based authentication as well.</p> <p>The <code>06-keystone-generate-admin-token.sh</code> script generates a random token used to authorize the Keystone admin account. 
The generated token is stored in the <code>./keystone-admin-token</code> file, which is later used to configure Keystone.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Generate an admin token for Keystone and save it into</span> <span class="co"># ./keystone-admin-token</span> openssl rand -hex 10 <span class="kw">></span> keystone-admin-token</code></pre> <ol start="26" style="list-style-type: example"> <li><code>07-keystone-config.sh</code></li> </ol> <p>This script modifies the configuration file of Keystone, <code>/etc/keystone/keystone.conf</code>. It sets the generated admin token and the MySQL connection configuration using the variables defined in <code>configrc</code>.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Set the generated admin token in the Keystone configuration</span> openstack-config --set /etc/keystone/keystone.conf DEFAULT <span class="kw">\</span> admin_token <span class="kw">`cat</span> keystone-admin-token<span class="kw">`</span> <span class="co"># Set the connection to the MySQL server</span> openstack-config --set /etc/keystone/keystone.conf sql connection <span class="kw">\</span> mysql://keystone:<span class="ot">$KEYSTONE_MYSQL_PASSWORD</span>@controller/keystone</code></pre> <ol start="27" style="list-style-type: example"> <li><code>08-keystone-init-db.sh</code></li> </ol> <p>This script initializes the <code>keystone</code> database using the <code>keystone-manage</code> command line tool. The executed command creates tables in the database.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Initialize the database for Keystone</span> keystone-manage db_sync</code></pre> <ol start="28" style="list-style-type: example"> <li><code>09-keystone-permissions.sh</code></li> </ol> <p>This script sets restrictive permissions (640) on the Keystone configuration file, since it contains the MySQL account credentials and the admin token. Then, the scripts sets the ownership of the Keystone related directories to the <code>keystone</code> user and <code>keystone</code> group.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Set restrictive permissions on the Keystone config file</span> <span class="kw">chmod</span> 640 /etc/keystone/keystone.conf <span class="co"># Set the ownership for the Keystone related directories</span> <span class="kw">chown</span> -R keystone:keystone /var/log/keystone <span class="kw">chown</span> -R keystone:keystone /var/lib/keystone</code></pre> <ol start="29" style="list-style-type: example"> <li><code>10-keystone-start.sh</code></li> </ol> <p>This script starts the Keystone service and sets it to automatically start during the system start up.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the Keystone service</span> service openstack-keystone restart chkconfig openstack-keystone on</code></pre> <ol start="30" style="list-style-type: example"> <li><code>11-keystone-create-users.sh</code></li> </ol> <p>The purpose of this script is to create user accounts, roles and tenants in Keystone for the admin user and service accounts for each OpenStack service: Keystone, Glance, and Nova. Since the process is complicated when done manually (it is necessary to define relations between database records), we use the <em>keystone-init</em> project<sup><a href="#fn30" class="footnoteRef" id="fnref30">30</a></sup> to automate the process. 
The <em>keystone-init</em> project allows one to create a configuration file in the “YAML Ain’t Markup Language”<sup><a href="#fn31" class="footnoteRef" id="fnref31">31</a></sup> (YAML) data format defining the required OpenStack user accounts. Then, according the defined configuration, the required database records are automatically created.</p> <p>Our script first installs a dependency of <em>keystone-init</em> and clones the project’s repository. Then, the script modifies the default configuration file provided with the <em>keystone-init</em> project by populating it with the values defined by the environmental variables defined in <code>configrc</code>. The last step of the script is to invoke <em>keystone-init</em>. The script does not remove the <em>keystone-init</em> repository to allow one to browse the generated configuration file, e.g. to check the correctness. When the repository is not required anymore, it can be removed by executing <code>rm -rf keystone-init</code>.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install PyYAML, a YAML Python library</span> yum <span class="kw">install</span> -y PyYAML <span class="co"># Clone a repository with Keystone initialization scripts</span> git clone https://github.com/nimbis/keystone-init.git <span class="co"># Replace the default configuration with the values defined be the</span> <span class="co"># environmental variables in configrc</span> <span class="kw">sed</span> -i <span class="st">"s/192.168.206.130/controller/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="kw">sed</span> -i <span class="st">"s/012345SECRET99TOKEN012345/</span><span class="kw">`cat</span> keystone-admin-token<span class="kw">`</span><span class="st">/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="kw">sed</span> -i <span class="st">"s/name: openstackDemo/name: </span><span class="ot">$OS_TENANT_NAME</span><span class="st">/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="kw">sed</span> -i <span class="st">"s/name: adminUser/name: </span><span class="ot">$OS_USERNAME</span><span class="st">/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="kw">sed</span> -i <span class="st">"s/password: secretword/password: </span><span class="ot">$OS_PASSWORD</span><span class="st">/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="kw">sed</span> -i <span class="st">"s/name: glance/name: </span><span class="ot">$GLANCE_SERVICE_USERNAME</span><span class="st">/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="kw">sed</span> -i <span class="st">"s/password: glance/password: </span><span class="ot">$GLANCE_SERVICE_PASSWORD</span><span class="st">/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="kw">sed</span> -i <span class="st">"s/name: nova/name: </span><span class="ot">$NOVA_SERVICE_USERNAME</span><span class="st">/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="kw">sed</span> -i <span class="st">"s/password: nova/password: </span><span class="ot">$NOVA_SERVICE_PASSWORD</span><span class="st">/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="kw">sed</span> -i <span class="st">"s/RegionOne/</span><span class="ot">$OS_REGION_NAME</span><span class="st">/g"</span> <span class="kw">\</span> keystone-init/config.yaml <span class="co"># Run the Keystone initialization script</span> ./keystone-init/keystone-init.py 
./keystone-init/config.yaml <span class="kw">echo</span> <span class="st">""</span> <span class="kw">echo</span> <span class="st">"The applied config file is keystone-init/config.yaml"</span> <span class="kw">echo</span> <span class="st">"You may do 'rm -rf keystone-init' to remove \</span> <span class="st"> the no more needed keystone-init directory"</span></code></pre> <ol start="31" style="list-style-type: example"> <li><code>12-glance-install.sh</code></li> </ol> <p>This script install Glance – the OpenStack VM image management service.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install OpenStack Glance, an image management service</span> yum <span class="kw">install</span> -y openstack-glance</code></pre> <ol start="32" style="list-style-type: example"> <li><code>13-glance-create-db.sh</code></li> </ol> <p>This script creates a MySQL database for Glance called <code>glance</code>, which is used to store VM image metadata. The script also creates a <code>glance</code> user and grants full permissions to the <code>glance</code> database to this user.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a database for Glance</span> ../lib/mysqlq.sh <span class="st">"CREATE DATABASE glance;"</span> <span class="co"># Create a glance user and grant all privileges</span> <span class="co"># to the glance database</span> ../lib/mysqlq.sh <span class="st">"GRANT ALL ON glance.* TO 'glance'@'controller' \</span> <span class="st"> IDENTIFIED BY '</span><span class="ot">$GLANCE_MYSQL_PASSWORD</span><span class="st">';"</span></code></pre> <ol start="33" style="list-style-type: example"> <li><code>14-glance-config.sh</code></li> </ol> <p>This scripts modifies the configuration files of the Glance services, which include the API and Registry services. 
Apart from various credentials, the script also sets Keystone as the identity management service used by Glance.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Make Glance API use Keystone as the identity management service</span> openstack-config --set /etc/glance/glance-api.conf <span class="kw">\</span> paste_deploy flavor keystone <span class="co"># Set Glance API user credentials</span> openstack-config --set /etc/glance/glance-api-paste.ini <span class="kw">\</span> filter:authtoken admin_tenant_name <span class="ot">$GLANCE_SERVICE_TENANT</span> openstack-config --set /etc/glance/glance-api-paste.ini <span class="kw">\</span> filter:authtoken admin_user <span class="ot">$GLANCE_SERVICE_USERNAME</span> openstack-config --set /etc/glance/glance-api-paste.ini <span class="kw">\</span> filter:authtoken admin_password <span class="ot">$GLANCE_SERVICE_PASSWORD</span> <span class="co"># Set Glance Cache user credentials</span> openstack-config --set /etc/glance/glance-cache.conf <span class="kw">\</span> DEFAULT admin_tenant_name <span class="ot">$GLANCE_SERVICE_TENANT</span> openstack-config --set /etc/glance/glance-cache.conf <span class="kw">\</span> DEFAULT admin_user <span class="ot">$GLANCE_SERVICE_USERNAME</span> openstack-config --set /etc/glance/glance-cache.conf <span class="kw">\</span> DEFAULT admin_password <span class="ot">$GLANCE_SERVICE_PASSWORD</span> <span class="co"># Set Glance Registry to use Keystone</span> <span class="co"># as the identity management service</span> openstack-config --set /etc/glance/glance-registry.conf <span class="kw">\</span> paste_deploy flavor keystone <span class="co"># Set the connection to the MySQL server</span> openstack-config --set /etc/glance/glance-registry.conf <span class="kw">\</span> DEFAULT sql_connection <span class="kw">\</span> mysql://glance:<span class="ot">$GLANCE_MYSQL_PASSWORD</span>@controller/glance <span class="co"># In Folsom, the sql_connection option has been moved</span> <span class="co"># from glance-registry.conf to glance-api.conf</span> openstack-config --set /etc/glance/glance-api.conf <span class="kw">\</span> DEFAULT sql_connection <span class="kw">\</span> mysql://glance:<span class="ot">$GLANCE_MYSQL_PASSWORD</span>@controller/glance <span class="co"># Set Glance Registry user credentials</span> openstack-config --set /etc/glance/glance-registry-paste.ini <span class="kw">\</span> filter:authtoken admin_tenant_name <span class="ot">$GLANCE_SERVICE_TENANT</span> openstack-config --set /etc/glance/glance-registry-paste.ini <span class="kw">\</span> filter:authtoken admin_user <span class="ot">$GLANCE_SERVICE_USERNAME</span> openstack-config --set /etc/glance/glance-registry-paste.ini <span class="kw">\</span> filter:authtoken admin_password <span class="ot">$GLANCE_SERVICE_PASSWORD</span></code></pre> <ol start="34" style="list-style-type: example"> <li><code>15-glance-init-db.sh</code></li> </ol> <p>This scripts initializes the <code>glance</code> database using the <code>glance-manage</code> command line tool.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Initialize the database for Glance</span> glance-manage db_sync</code></pre> <ol start="35" style="list-style-type: example"> <li><code>16-glance-permissions.sh</code></li> </ol> <p>This scripts sets restrictive permissions (640) on the Glance configuration files, since they contain sensitive information. 
The script also set the ownership of the Glance related directories to the <code>glance</code> user and <code>glance</code> group.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Set restrictive permissions for the Glance config files</span> <span class="kw">chmod</span> 640 /etc/glance/*.conf <span class="kw">chmod</span> 640 /etc/glance/*.ini <span class="co"># Set the ownership for the Glance related directories</span> <span class="kw">chown</span> -R glance:glance /var/log/glance <span class="kw">chown</span> -R glance:glance /var/lib/glance</code></pre> <ol start="36" style="list-style-type: example"> <li><code>17-glance-start.sh</code></li> </ol> <p>This script starts the Glance services: API and Registry. The script sets the services to automatically start during the system start up.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the Glance Registry and API services</span> service openstack-glance-registry restart service openstack-glance-api restart chkconfig openstack-glance-registry on chkconfig openstack-glance-api on</code></pre> <ol start="37" style="list-style-type: example"> <li><code>18-add-cirros.sh</code></li> </ol> <p>This script downloads the CirrOS VM image<sup><a href="#fn32" class="footnoteRef" id="fnref32">32</a></sup> and imports it into Glance. This image contains a pre-installed CirrOS, a Tiny OS specialized for running in a Cloud. The image is very simplistic: its size is just 9.4 MB. However, it is sufficient for testing OpenStack.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Download the CirrOS VM image</span> <span class="kw">mkdir</span> /tmp/images <span class="kw">cd</span> /tmp/images <span class="kw">wget</span> https://launchpad.net/cirros/trunk/0.3.0/+download/<span class="kw">\</span> cirros-0.3.0-x86_64-disk.img <span class="co"># Add the downloaded image to Glance</span> glance add <span class="ot">name=</span><span class="st">"cirros-0.3.0-x86_64"</span> is_public=true <span class="kw">\</span> <span class="ot">disk_format=</span>qcow2 container_format=bare <span class="kw">\</span> <span class="kw"><</span> cirros-0.3.0-x86_64-disk.img <span class="co"># Remove the temporary directory</span> <span class="kw">rm</span> -rf /tmp/images</code></pre> <ol start="38" style="list-style-type: example"> <li><code>19-add-ubuntu.sh</code></li> </ol> <p>This script downloads the Ubuntu Cloud Image<sup><a href="#fn33" class="footnoteRef" id="fnref33">33</a></sup> and imports it into Glance. 
This is a VM image with a pre-installed version of Ubuntu that is customized by Ubuntu engineering to run on Cloud platforms such as Openstack, Amazon EC2, and LXC.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Download an Ubuntu Cloud image</span> <span class="kw">mkdir</span> /tmp/images <span class="kw">cd</span> /tmp/images <span class="kw">wget</span> http://uec-images.ubuntu.com/precise/current/<span class="kw">\</span> precise-server-cloudimg-amd64-disk1.img <span class="co"># Add the downloaded image to Glance</span> glance add <span class="ot">name=</span><span class="st">"ubuntu"</span> is_public=true disk_format=qcow2 <span class="kw">\</span> <span class="ot">container_format=</span>bare <span class="kw"><</span> precise-server-cloudimg-amd64-disk1.img <span class="co"># Remove the temporary directory</span> <span class="kw">rm</span> -rf /tmp/images</code></pre> <ol start="39" style="list-style-type: example"> <li><code>20-nova-install.sh</code></li> </ol> <p>This script installs Nova – the OpenStack compute service, as well as the Qpid AMQP message broker. The message broker is required by the OpenStack services to communicate with each other.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install OpenStack Nova (compute service)</span> <span class="co"># and the Qpid AMQP message broker</span> yum <span class="kw">install</span> -y openstack-nova* qpid-cpp-server</code></pre> <ol start="40" style="list-style-type: example"> <li><code>21-nova-create-db.sh</code></li> </ol> <p>This script creates a MySQL database for Nova called <code>nova</code>, which is used to store VM instance metadata. The script also creates a <code>nova</code> user and grants full permissions to the <code>nova</code> database to this user. The script also enables the access to the database from hosts other than controller.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a database for Nova</span> ../lib/mysqlq.sh <span class="st">"CREATE DATABASE nova;"</span> <span class="co"># Create a nova user and grant all privileges</span> <span class="co"># to the nova database</span> ../lib/mysqlq.sh <span class="st">"GRANT ALL ON nova.* TO 'nova'@'controller' \</span> <span class="st"> IDENTIFIED BY '</span><span class="ot">$NOVA_MYSQL_PASSWORD</span><span class="st">';"</span> <span class="co"># The following is need to allow access</span> <span class="co"># from Nova services running on other hosts</span> ../lib/mysqlq.sh <span class="st">"GRANT ALL ON nova.* TO 'nova'@'%' \</span> <span class="st"> IDENTIFIED BY '</span><span class="ot">$NOVA_MYSQL_PASSWORD</span><span class="st">';"</span></code></pre> <ol start="41" style="list-style-type: example"> <li><code>22-nova-permissions.sh</code></li> </ol> <p>This script sets restrictive permissions on the Nova configuration file, since it contains sensitive information, such as user credentials. 
The script also sets the ownership of the Nova related directories to the <code>nova</code> group.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Set restrictive permissions for the Nova config file</span> <span class="kw">chmod</span> 640 /etc/nova/nova.conf <span class="co"># Set the ownership for the Nova related directories</span> <span class="kw">chown</span> -R root:nova /etc/nova <span class="kw">chown</span> -R nova:nova /var/lib/nova</code></pre> <ol start="42" style="list-style-type: example"> <li><code>23-nova-config.sh</code></li> </ol> <p>The <code>/etc/nova/nova.conf</code> configuration file should be present on all the compute hosts running Nova Compute, as well as on the controller, which runs the other Nova services. Moreover, the content of the configuration file should be the same on the controller and compute hosts. Therefore, a script that modifies the Nova configuration is placed in the <code>lib</code> directory and is shared by the corresponding installation scripts of the controller and compute hosts. The <code>23-nova-config.sh</code> script invokes the Nova configuration script provided in the <code>lib</code> directory.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Run the Nova configuration script</span> <span class="co"># defined in ../lib/nova-config.sh</span> ../lib/nova-config.sh</code></pre> <p>Among other configuration options, the <code>nova-config.sh</code> script sets up password-based authentication of Nova in Keystone and other OpenStack services. One of two sets of authentication parameters is required to be specified in <code>/etc/nova/api-paste.ini</code> according to the selected authentication method, whether it is token-based or password-based authentication. The first option is to set up the token-based authentication, like the following:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">auth_host = controller auth_protocol = http admin_token = <span class="kw"><</span>admin token<span class="kw">></span></code></pre> <p>The second option is to set up password-based authentication, as follows:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">auth_uri = http://controller:5000/v2.0/ admin_tenant_name = service admin_user = nova admin_password = <span class="kw"><</span>password<span class="kw">></span></code></pre> <p>In this work, we use password-based authentication. Even though, the user name and password are specified in the config file, it is still necessary to provide these data when using the command line tools. One way to do this is to directly provide the credentials in the form of command line arguments, as following:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"> nova <span class="kw">\</span> --os_username=nova <span class="kw">\</span> --os_password=<span class="kw"><</span>password<span class="kw">></span> <span class="kw">\</span> --os_tenant_name=service <span class="kw">\</span> --os_auth_url=http://controller:5000/v2.0 list</code></pre> <p>Another approach, which we apply in this work, is to set corresponding environmental variables that will be automatically used by the command line tools. In this case, all the <code>--os-*</code> options can be omitted. 
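<p>In this setup, sourcing <code>configrc</code> already exports the corresponding variables for the <code>admin</code> account (as noted earlier, <code>echo $OS_USERNAME</code> prints <code>admin</code>). For illustration, the variables could also be set manually as follows; the values are placeholders and should match the accounts defined in Keystone:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"># Credentials picked up automatically by the OpenStack command line tools
export OS_USERNAME=admin
export OS_TENANT_NAME=demo
export OS_PASSWORD=secret
export OS_AUTH_URL=http://controller:5000/v2.0
export OS_REGION_NAME=RegionOne</code></pre>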
The required configuration is done by the <code>nova-config.sh</code> script shown below:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># This is a Nova configuration shared</span> <span class="co"># by the compute hosts, gateway and controller</span> <span class="co"># Enable verbose output</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT verbose True <span class="co"># Set the connection to the MySQL server</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT sql_connection <span class="kw">\</span> mysql://nova:<span class="ot">$NOVA_MYSQL_PASSWORD</span>@controller/nova <span class="co"># Make Nova use Keystone as the identity management service</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT auth_strategy keystone <span class="co"># Set the host name of the Qpid AMQP message broker</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT qpid_hostname controller <span class="co"># Set Nova user credentials</span> openstack-config --set /etc/nova/api-paste.ini <span class="kw">\</span> filter:authtoken admin_tenant_name <span class="ot">$NOVA_SERVICE_TENANT</span> openstack-config --set /etc/nova/api-paste.ini <span class="kw">\</span> filter:authtoken admin_user <span class="ot">$NOVA_SERVICE_USERNAME</span> openstack-config --set /etc/nova/api-paste.ini <span class="kw">\</span> filter:authtoken admin_password <span class="ot">$NOVA_SERVICE_PASSWORD</span> openstack-config --set /etc/nova/api-paste.ini <span class="kw">\</span> filter:authtoken auth_uri <span class="ot">$NOVA_OS_AUTH_URL</span> <span class="co"># Set the network configuration</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT network_host compute1 openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT fixed_range 10.0.0.0/24 openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT flat_interface eth1 openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT flat_network_bridge br100 openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT public_interface eth1 openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT force_dhcp_release False <span class="co"># Set the Glance host name</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT glance_host controller <span class="co"># Set the VNC configuration</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT vncserver_listen 0.0.0.0 openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT vncserver_proxyclient_address controller <span class="co"># This is the host accessible from outside,</span> <span class="co"># where novncproxy is running on</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT novncproxy_base_url <span class="kw">\</span> http://<span class="ot">$PUBLIC_IP_ADDRESS</span>:6080/vnc_auto.html <span class="co"># This is the host accessible from outside,</span> <span class="co"># where xvpvncproxy is running on</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT xvpvncproxy_base_url <span class="kw">\</span> http://<span class="ot">$PUBLIC_IP_ADDRESS</span>:6081/console <span class="co"># Set the host name of the metadata service</span> openstack-config --set /etc/nova/nova.conf <span class="kw">\</span> DEFAULT 
metadata_host <span class="ot">$METADATA_HOST</span></code></pre> <p>Apart from user credentials, the script configures a few other important options:</p> <ul> <li>the identity management service – Keystone;</li> <li>the Qpid server host name – controller;</li> <li>the host running the Nova network service – compute1 (i.e. gateway);</li> <li>the network used for VMs – 10.0.0.0/24;</li> <li>the network interface used to bridge VMs to – <code>eth1</code>;</li> <li>the Linux bridge used by VMs – br100;</li> <li>the public network interface – <code>eth1</code>;</li> <li>the Glance service host name – controller;</li> <li>the VNC server host name – controller;</li> <li>the IP address of the host running VNC proxies (they must be run on the host that can be accessed from outside; in our setup it is the gateway) – <code>$PUBLIC_IP_ADDRESS</code>;</li> <li>the Nova metadata service host name – controller.</li> </ul> <ol start="43" style="list-style-type: example"> <li><code>24-nova-init-db.sh</code></li> </ol>
<p>This script initializes the <code>nova</code> database using the <code>nova-manage</code> command line tool.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Initialize the database for Nova</span> nova-manage db <span class="kw">sync</span></code></pre> <ol start="44" style="list-style-type: example"> <li><code>25-nova-start.sh</code></li> </ol>
<p>This script starts various Nova services, as well as their dependencies: the Qpid AMQP message broker and the iSCSI target daemon used by the <code>nova-volume</code> service.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the Qpid AMQP message broker</span> service qpidd restart <span class="co"># iSCSI target daemon for nova-volume</span> service tgtd restart <span class="co"># Start OpenStack Nova services</span> service openstack-nova-api restart service openstack-nova-cert restart service openstack-nova-consoleauth restart service openstack-nova-direct-api restart service openstack-nova-metadata-api restart service openstack-nova-scheduler restart service openstack-nova-volume restart <span class="co"># Make the services start on the system startup</span> chkconfig qpidd on chkconfig tgtd on chkconfig openstack-nova-api on chkconfig openstack-nova-cert on chkconfig openstack-nova-consoleauth on chkconfig openstack-nova-direct-api on chkconfig openstack-nova-metadata-api on chkconfig openstack-nova-scheduler on chkconfig openstack-nova-volume on</code></pre>
<h4 id="openstack-compute-compute-nodes."><a href="#TOC"><span class="header-section-number">5.4.5.3</span> 08-openstack-compute (compute nodes).</a></h4> <p>The scripts described in this section should be run on the compute hosts.</p> <ol start="45" style="list-style-type: example"> <li><code>01-source-configrc.sh</code></li> </ol>
<p>This script is mainly used as a reminder of the necessity to “source” the <code>configrc</code> file before continuing, since some scripts in this directory use the environmental variables defined in <code>configrc</code>. To source the file, it is necessary to run the following command: <code>. 01-source-configrc.sh</code>.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">echo</span> <span class="st">"To make the environmental variables available \</span> <span class="st"> in the current session, run: "</span> <span class="kw">echo</span> <span class="st">". 01-source-configrc.sh"</span> <span class="co"># Export the variables defined in ../config/configrc</span> <span class="kw">.</span> ../config/configrc</code></pre>
<ol start="46" style="list-style-type: example"> <li><code>02-install-nova.sh</code></li> </ol> <p>This script installs OpenStack Nova and OpenStack utilities.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install OpenStack Nova and utils</span> yum <span class="kw">install</span> -y openstack-nova* openstack-utils</code></pre> <ol start="47" style="list-style-type: example"> <li><code>03-nova-permissions.sh</code></li> </ol>
<p>This script sets restrictive permissions (640) on the Nova configuration file, since it contains sensitive information, such as user credentials. Then, the script sets the ownership on the Nova and Libvirt related directories to the <code>nova</code> user and <code>nova</code> group. The script also sets the user and group used by the Quick EMUlator<sup><a href="#fn34" class="footnoteRef" id="fnref34">34</a></sup> (QEMU) service to <code>nova</code>. This is required since a number of directories need to be accessed by both Nova, using the <code>nova</code> user and <code>nova</code> group, and QEMU.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Set restrictive permissions for the Nova config file</span> <span class="kw">chmod</span> 640 /etc/nova/nova.conf <span class="co"># Set the ownership for the Nova related directories</span> <span class="kw">chown</span> -R root:nova /etc/nova <span class="kw">chown</span> -R nova:nova /var/lib/nova <span class="kw">chown</span> -R nova:nova /var/cache/libvirt <span class="kw">chown</span> -R nova:nova /var/run/libvirt <span class="kw">chown</span> -R nova:nova /var/lib/libvirt <span class="co"># Make Qemu run under the nova user and group</span> <span class="kw">sed</span> -i <span class="st">'s/#user = "root"/user = "nova"/g'</span> /etc/libvirt/qemu.conf <span class="kw">sed</span> -i <span class="st">'s/#group = "root"/group = "nova"/g'</span> /etc/libvirt/qemu.conf</code></pre> <ol start="48" style="list-style-type: example"> <li><code>04-nova-config.sh</code></li> </ol>
<p>This script invokes the Nova configuration script provided in the <code>lib</code> directory, which has been detailed above.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Run the Nova configuration script</span> <span class="co"># defined in ../lib/nova-config.sh</span> ../lib/nova-config.sh</code></pre> <ol start="49" style="list-style-type: example"> <li><code>05-nova-compute-start.sh</code></li> </ol>
<p>First, this script restarts the Libvirt service since its configuration has been modified. Then, the script starts the Nova Compute service and sets it to start automatically on system start up.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the Libvirt and Nova services</span> service libvirtd restart service openstack-nova-compute restart chkconfig openstack-nova-compute on</code></pre>
<h4 id="openstack-gateway-network-gateway."><a href="#TOC"><span class="header-section-number">5.4.5.4</span> 09-openstack-gateway (network gateway).</a></h4> <p>The scripts described in this section need to be run only on the gateway.</p> <p>Nova supports three network configuration modes:</p> <ol style="list-style-type: decimal"> <li><p>Flat Mode: public IP addresses from a specified range are assigned and injected into VM instances on launch.
This only works on Linux systems that keep their network configuration in <code>/etc/network/interfaces</code>. To enable this mode, the following option should be specified in <code>nova.conf</code>:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="ot">network_manager=</span>nova.network.manager.FlatManager</code></pre></li>
<li><p>Flat DHCP Mode: Nova runs a Dnsmasq<sup><a href="#fn35" class="footnoteRef" id="fnref35">35</a></sup> server that listens on a created network bridge and assigns public IP addresses to VM instances. This is the mode we use in this work. There must be only one host running the <code>openstack-nova-network</code> service. The <code>network_host</code> option in <code>nova.conf</code> specifies which host the <code>openstack-nova-network</code> service is running on. The network bridge name is specified using the <code>flat_network_bridge</code> option. To enable this mode, the following option should be specified in <code>nova.conf</code>:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="ot">network_manager=</span>nova.network.manager.FlatDHCPManager</code></pre></li>
<li><p>VLAN Mode: VM instances are assigned private IP addresses from networks created for each tenant / project. Instances are accessed through a special VPN VM instance. To enable this mode, the following option should be specified in <code>nova.conf</code>:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="ot">network_manager=</span>nova.network.manager.VlanManager</code></pre></li> </ol>
<p>Nova runs a metadata service on http://169.254.169.254 that is queried by VM instances to obtain SSH keys and other user data. The <code>openstack-nova-network</code> service automatically configures <code>iptables</code> to NAT port 80 of 169.254.169.254 to the IP address specified in the <code>metadata_host</code> option and the port specified in the <code>metadata_port</code> option configured in <code>nova.conf</code> (the defaults are the IP address of the <code>openstack-nova-network</code> service and 8775). If the <code>openstack-nova-metadata-api</code> and <code>openstack-nova-network</code> services are running on different hosts, the <code>metadata_host</code> option should point to the IP address of <code>openstack-nova-metadata-api</code>.</p> <ol start="50" style="list-style-type: example"> <li><code>01-source-configrc.sh</code></li> </ol>
<p>This script is mainly used as a reminder of the necessity to “source” the <code>configrc</code> file before continuing, since some scripts in this directory use the environmental variables defined in <code>configrc</code>. To source the file, it is necessary to run the following command: <code>. 01-source-configrc.sh</code>.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">echo</span> <span class="st">"To make the environmental variables available \</span> <span class="st"> in the current session, run: "</span> <span class="kw">echo</span> <span class="st">". 01-source-configrc.sh"</span> <span class="co"># Export the variables defined in ../config/configrc</span> <span class="kw">.</span> ../config/configrc</code></pre> <ol start="51" style="list-style-type: example"> <li><code>02-nova-start.sh</code></li> </ol>
<p>It is assumed that the gateway host is one of the compute hosts; therefore, the OpenStack compute service has already been configured and is running.
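</p> <p>Before proceeding, it may be worth verifying that assumption; for example, the state of the compute service on the gateway host can be checked as follows:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"># Check that the Nova Compute service is running on this host
service openstack-nova-compute status</code></pre> <p>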
This script starts three additional Nova services that are specific to the gateway host: <code>openstack-nova-network</code>, <code>openstack-nova-novncproxy</code>, and <code>openstack-nova-xvpvncproxy</code>. The <code>openstack-nova-network</code> service is responsible for bridging VM instances into the physical network, and configuring the Dnsmasq service for assigning IP addresses to the VMs. The VNC proxy services enable VNC connections to VM instances from the outside network; therefore, they must be run on a machine that has access to the public network, which is the gateway in our case.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the Libvirt and Nova services</span> <span class="co"># (network, and VNC proxies)</span> service libvirtd restart service openstack-nova-network restart service openstack-nova-novncproxy restart service openstack-nova-xvpvncproxy restart <span class="co"># Make the services start on the system start up</span> chkconfig openstack-nova-network on chkconfig openstack-nova-novncproxy on chkconfig openstack-nova-xvpvncproxy on</code></pre> <ol start="52" style="list-style-type: example"> <li><code>03-nova-network-create.sh</code></li> </ol>
<p>This script creates a Nova network, 10.0.0.0/24, from which Dnsmasq allocates IP addresses to VM instances. The created network is configured to use the <code>br100</code> Linux bridge to connect VM instances to the physical network.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a Nova network for VM instances: 10.0.0.0/24</span> nova-manage network create --label=public <span class="kw">\</span> --fixed_range_v4=10.0.0.0/24 --num_networks=1 <span class="kw">\</span> --network_size=256 --bridge=br100</code></pre> <ol start="53" style="list-style-type: example"> <li><code>04-nova-secgroup-add.sh</code></li> </ol>
<p>This script adds two rules to the default OpenStack security group. The first rule enables the Internet Control Message Protocol (ICMP) for VM instances (the ping command). The second rule enables TCP connections via port 22, which is used by SSH.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Enable ping for VMs</span> nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 <span class="co"># Enable SSH for VMs</span> nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</code></pre> <ol start="54" style="list-style-type: example"> <li><code>05-dashboard-install.sh</code></li> </ol>
<p>This script installs the OpenStack dashboard. The OpenStack dashboard provides a web interface for managing an OpenStack environment. Since the dashboard is supposed to be accessed from outside, this service must be installed on a host that has access to the public network, which is the gateway in our setup.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Install OpenStack Dashboard</span> yum <span class="kw">install</span> -y openstack-dashboard</code></pre> <ol start="55" style="list-style-type: example"> <li><code>06-dashboard-config.sh</code></li> </ol>
<p>This script configures the OpenStack dashboard. In particular, the script sets the <code>OPENSTACK_HOST</code> configuration option denoting the host name of the management host to <code>controller</code>.
The script also sets the default Keystone role to the value of the <code>$OS_TENANT_NAME</code> environmental variable.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Set the OpenStack management host</span> <span class="kw">sed</span> -i <span class="st">'s/OPENSTACK_HOST = "127.0.0.1"/\</span> <span class="st"> OPENSTACK_HOST = "controller"/g'</span> <span class="kw">\</span> /etc/openstack-dashboard/local_settings <span class="co"># Set the Keystone default role</span> <span class="kw">sed</span> -i <span class="st">"s/OPENSTACK_KEYSTONE_DEFAULT_ROLE = </span><span class="dt">\"</span><span class="st">Member</span><span class="dt">\"</span><span class="st">/\</span> <span class="st"> OPENSTACK_KEYSTONE_DEFAULT_ROLE = </span><span class="dt">\"</span><span class="ot">$OS_TENANT_NAME</span><span class="dt">\"</span><span class="st">/g"</span> <span class="kw">\</span> /etc/openstack-dashboard/local_settings</code></pre> <ol start="56" style="list-style-type: example"> <li><code>07-dashboard-start.sh</code></li> </ol>
<p>This script starts the <code>httpd</code> service, which is a web server configured to serve the OpenStack dashboard. The script also sets the <code>httpd</code> service to start automatically during the system start up. Once the service is started, the dashboard will be available at <code>http://localhost/dashboard</code>, where ‘localhost’ should be replaced by the public IP address of the gateway host for accessing the dashboard from the outside network.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Start the httpd service.</span> service httpd restart chkconfig httpd on</code></pre>
<p>At this point, the installation of OpenStack can be considered complete. The next steps are only intended for testing the environment.</p> <h4 id="openstack-controller-controller.-1"><a href="#TOC"><span class="header-section-number">5.4.5.5</span> 10-openstack-controller (controller).</a></h4>
<p>This section describes commands and scripts that can be used to test the OpenStack installation obtained by following the steps above. The testing should start from the identity management service, Keystone, since it coordinates all the other OpenStack services. To use the command line programs provided by OpenStack, it is necessary to “source” the <code>configrc</code> file. This can be done by executing the following command: <code>. config/configrc</code>. To check whether Keystone is properly initialized and authorization works, the following command can be used:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">keystone user-list</code></pre>
<p>If everything is configured correctly, the command should output a table with a list of user accounts, such as <code>admin</code>, <code>nova</code>, <code>glance</code>, etc.</p> <p>The next service to test is Glance.
In the previous steps, we have already imported VM images into Glance; therefore, it is possible to output a list of them:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">glance index</code></pre> <p>The command should output a list of two VM images: <code>cirros-0.3.0-x86_64</code> and <code>ubuntu</code>.</p>
<p>A list of active OpenStack services spanning all the hosts can be output using the following command:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">nova-manage service list</code></pre> <p>The command should output approximately the following table:</p>
<table> <caption>The expected output of the <code>nova-manage service list</code> command</caption> <col width="23%"></col> <col width="15%"></col> <col width="6%"></col> <col width="11%"></col> <col width="8%"></col> <col width="12%"></col> <thead> <tr class="header"> <th align="left">Binary</th> <th align="left">Host</th> <th align="left">Zone</th> <th align="left">Status</th> <th align="left">State</th> <th align="left">Updated</th> </tr> </thead> <tbody> <tr class="odd"> <td align="left">nova-consoleauth</td> <td align="left">controller</td> <td align="left">nova</td> <td align="left">enabled</td> <td align="left">:-)</td> <td align="left"><date></td> </tr> <tr class="even"> <td align="left">nova-cert</td> <td align="left">controller</td> <td align="left">nova</td> <td align="left">enabled</td> <td align="left">:-)</td> <td align="left"><date></td> </tr> <tr class="odd"> <td align="left">nova-scheduler</td> <td align="left">controller</td> <td align="left">nova</td> <td align="left">enabled</td> <td align="left">:-)</td> <td align="left"><date></td> </tr> <tr class="even"> <td align="left">nova-volume</td> <td align="left">controller</td> <td align="left">nova</td> <td align="left">enabled</td> <td align="left">:-)</td> <td align="left"><date></td> </tr> <tr class="odd"> <td align="left">nova-compute</td> <td align="left">compute1</td> <td align="left">nova</td> <td align="left">enabled</td> <td align="left">:-)</td> <td align="left"><date></td> </tr> <tr class="even"> <td align="left">nova-compute</td> <td align="left">compute2</td> <td align="left">nova</td> <td align="left">enabled</td> <td align="left">:-)</td> <td align="left"><date></td> </tr> <tr class="odd"> <td align="left">nova-compute</td> <td align="left">compute3</td> <td align="left">nova</td> <td align="left">enabled</td> <td align="left">:-)</td> <td align="left"><date></td> </tr> <tr class="even"> <td align="left">nova-compute</td> <td align="left">compute4</td> <td align="left">nova</td> <td align="left">enabled</td> <td align="left">:-)</td> <td align="left"><date></td> </tr> <tr class="odd"> <td align="left">nova-network</td> <td align="left">controller</td> <td align="left">nova</td> <td align="left">enabled</td> <td align="left">:-)</td> <td align="left"><date></td> </tr> </tbody> </table>
<p>If the value of any cell in the <code>State</code> column is <code>XXX</code> instead of <code>:-)</code>, it means that the corresponding service failed to start. The first place to start troubleshooting is the log files of the failed service. The log files are located in the <code>/var/log/<service></code> directory, where <code><service></code> is replaced with the name of the service.</p>
<p>Another service to test is the OpenStack dashboard, which should be available at <code>http://$PUBLIC_IP_ADDRESS/dashboard</code>. This URL should open a login page prompting the user to enter a user name and password.
The values of the <code>$OS_USERNAME</code> and <code>$OS_PASSWORD</code> variables defined in <code>configrc</code> can be used to log in as the admin user. The dashboard provides a web interface to all the main functionality of OpenStack, such as managing VM instances, VM images, security rules, key pairs, etc.</p>
<p>Once the initial testing steps are successfully passed, we can go on to test the actual instantiation of VMs using the OpenStack command line tools, as shown by the scripts from the <code>10-openstack-controller</code> directory.</p> <ol start="57" style="list-style-type: example"> <li><code>01-source-configrc.sh</code></li> </ol>
<p>This script is mainly used as a reminder of the necessity to “source” the <code>configrc</code> file before continuing, since some scripts in this directory use the environmental variables defined in <code>configrc</code>. To source the file, it is necessary to run the following command: <code>. 01-source-configrc.sh</code>.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">echo</span> <span class="st">"To make the environmental variables available \</span> <span class="st"> in the current session, run: "</span> <span class="kw">echo</span> <span class="st">". 01-source-configrc.sh"</span> <span class="co"># Export the variables defined in ../config/configrc</span> <span class="kw">.</span> ../config/configrc</code></pre> <ol start="58" style="list-style-type: example"> <li><code>02-boot-cirros.sh</code></li> </ol>
<p>This script creates a VM instance using the CirrOS image added to Glance previously.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a VM instance from the CirrOS image</span> nova boot --image cirros-0.3.0-x86_64 --flavor m1.small cirros</code></pre>
<p>Depending on the hardware, the instantiation process may take from a few seconds to a few minutes. The status of a VM instance can be checked using the following command:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">nova show cirros</code></pre>
<p>This command shows detailed information about the VM instance, such as the host the VM has been allocated to, the instance name, current state, flavor, image name, IP address of the VM, etc. Once the state of the VM changes to <code>ACTIVE</code>, it means that the VM has started booting. It may take some more time before the VM is ready to accept SSH connections. The CirrOS VM image has a default user <code>cirros</code> with the password <code>cubswin:)</code>. The following command can be used to SSH into the VM instance once it is booted:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">ssh</span> cirros@<span class="kw"><</span>ip address<span class="kw">></span></code></pre>
<p>Where <code><ip address></code> is replaced with the actual IP address of the VM instance. The following command can be used to delete the VM instance:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">nova delete cirros</code></pre> <ol start="59" style="list-style-type: example"> <li><code>03-keypair-add.sh</code></li> </ol>
<p>Nova supports injection of SSH keys into VM instances for password-less authentication. This script creates a key pair that Nova can inject into VMs. The generated public key is stored internally by Nova, whereas the private key is saved to the specified <code>../config/test.pem</code> file.</p>
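<p>Once the key pair has been created by the script below, it should appear in the output of the following command, which can serve as a quick check:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"># List the key pairs registered with Nova
nova keypair-list</code></pre>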
<pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a key pair</span> nova keypair-add <span class="kw">test</span> <span class="kw">></span> ../config/test.pem <span class="kw">chmod</span> 600 ../config/test.pem</code></pre> <ol start="60" style="list-style-type: example"> <li><code>04-boot-ubuntu.sh</code></li> </ol>
<p>This script creates a VM instance using the Ubuntu Cloud image added to Glance previously. The executed command instructs OpenStack to inject the previously generated public key called <code>test</code> to allow password-less SSH connections.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a VM instance from the Ubuntu Cloud image</span> nova boot --image ubuntu --flavor m1.small --key_name <span class="kw">test</span> ubuntu</code></pre> <ol start="61" style="list-style-type: example"> <li><code>05-ssh-into-vm.sh</code></li> </ol>
<p>This script shows how to SSH into a VM instance, which has been injected with the previously generated <code>test</code> key. The script accepts two arguments: the IP address of the VM instance, and the user name. To connect to an instance of the Ubuntu Cloud image, the user name should be set to <code>ubuntu</code>.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># SSH into a VM instance using the generated test.pem key.</span> <span class="kw">if </span>[ <span class="ot">$#</span> -ne 2 ] <span class="kw">then</span> <span class="kw">echo</span> <span class="st">"You must specify two arguments:"</span> <span class="kw">echo</span> <span class="st">"(1) the IP address of the VM instance"</span> <span class="kw">echo</span> <span class="st">"(2) the user name"</span> <span class="kw">exit</span> 1 <span class="kw">fi</span> <span class="kw">ssh</span> -i ../config/test.pem -l <span class="ot">$2</span> <span class="ot">$1</span></code></pre> <ol start="62" style="list-style-type: example"> <li><code>06-nova-volume-create.sh</code></li> </ol>
<p>This script shows how to create a 2 GB Nova volume called <code>myvolume</code>. Once created, the volume can be dynamically attached to a VM instance, as shown in the next script. A volume can only be attached to one instance at a time.</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Create a 2GB volume called myvolume</span> nova volume-create --display_name myvolume 2</code></pre> <ol start="63" style="list-style-type: example"> <li><code>07-nova-volume-attach.sh</code></li> </ol>
<p>This script shows how to attach a volume to a VM instance. The script accepts two arguments: (1) the name of the VM instance to attach the volume to; and (2) the ID of the volume to attach to the VM instance. Once attached, the volume will be available inside the VM instance as the <code>/dev/vdc</code> device. The volume is provided as block storage, which means it has to be formatted before it can be used.</p>
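<p>Once the volume has been attached using the script below, it can be prepared from inside the VM instance. The following is a minimal sketch, assuming an ext4 file system and a hypothetical <code>/mnt/myvolume</code> mount point:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"># Inside the VM instance: create a file system on the attached volume
sudo mkfs.ext4 /dev/vdc
# Mount the volume at a hypothetical mount point
sudo mkdir -p /mnt/myvolume
sudo mount /dev/vdc /mnt/myvolume</code></pre>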
<pre class="sourceCode Bash"><code class="sourceCode bash"><span class="co"># Attach the created volume to a VM instance as /dev/vdc.</span> <span class="kw">if </span>[ <span class="ot">$#</span> -ne 2 ] <span class="kw">then</span> <span class="kw">echo</span> <span class="st">"You must specify two arguments:"</span> <span class="kw">echo</span> <span class="st">"(1) the name of the VM instance"</span> <span class="kw">echo</span> <span class="st">"(2) the ID of the volume to attach"</span> <span class="kw">exit</span> 1 <span class="kw">fi</span> nova volume-attach <span class="ot">$1</span> <span class="ot">$2</span> /dev/vdc</code></pre>
<h2 id="openstack-troubleshooting"><a href="#TOC"><span class="header-section-number">5.5</span> OpenStack Troubleshooting</a></h2> <p>This section lists some of the problems encountered by the authors during the installation process and their solutions. The following general procedure can be used to resolve problems with OpenStack:</p>
<ol style="list-style-type: decimal"> <li>Run the <code>nova-manage service list</code> command to find out if any of the services failed. A service has failed if, in the corresponding row of the table, the <code>State</code> column contains <code>XXX</code> instead of <code>:-)</code>.</li> <li>From the same service status table, the host running the failed service can be identified by looking at the <code>Host</code> column.</li> <li>Once the problematic service and host are determined, the respective log files should be examined. To do this, it is necessary to open an SSH connection to the host and find the log file that corresponds to the failed service. The default location of the log files is <code>/var/log/<service name></code>, where <code><service name></code> is one of: <code>keystone</code>, <code>glance</code>, <code>nova</code>, etc.</li> </ol>
<h3 id="glance"><a href="#TOC"><span class="header-section-number">5.5.1</span> Glance</a></h3> <p>Sometimes the Glance Registry service fails to start during the OS start up. This results in failures of various requests from other OpenStack services to Glance. The problem can be identified by running the <code>glance index</code> command, which should not fail in a normal case. The reason for the failure might be that the Glance Registry service starts before the MySQL server.
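</p> <p>One way to confirm this is to check the status of the service and inspect its log; note that the exact log file name is an assumption and may differ between releases:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"># Check the status of the Glance Registry service
service openstack-glance-registry status
# Inspect the most recent log entries
tail -n 50 /var/log/glance/registry.log</code></pre> <p>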
The solution to this problem is to restart the Glance services as follows:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">service openstack-glance-registry restart service openstack-glance-api restart</code></pre>
<h3 id="nova-compute"><a href="#TOC"><span class="header-section-number">5.5.2</span> Nova Compute</a></h3> <p>The <code>libvirtd</code> service may fail with errors such as the following:</p> <pre><code>15391: error : qemuProcessReadLogOutput:1005 : \ internal error Process exited while reading console \ log output: chardev: opening backend "file" failed</code></pre> <p>Or the following:</p> <pre><code>error : qemuProcessReadLogOutput:1005 : internal error \ Process exited while reading console log output: \ char device redirected to /dev/pts/3 qemu-kvm: -drive file=/var/lib/nova/instances/instance-00000015/ \ disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none: \ could not open disk image /var/lib/nova/instances/ \ instance-00000015/disk: Permission denied</code></pre>
<p>Both problems can be resolved by setting the user and group in the <code>/etc/libvirt/qemu.conf</code> configuration file as follows:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">user = <span class="st">"nova"</span> group = <span class="st">"nova"</span></code></pre> <p>And also changing the ownership as follows:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">chown</span> -R nova:nova /var/cache/libvirt <span class="kw">chown</span> -R nova:nova /var/run/libvirt <span class="kw">chown</span> -R nova:nova /var/lib/libvirt</code></pre>
<p>Another potential problem is hitting the limit on the maximum number of VM instances, which results in the following error:</p> <pre><code>ERROR: Quota exceeded: code=InstanceLimitExceeded (HTTP 413)</code></pre> <p>The solution is to increase the quota by executing the following command:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">nova quota-update --instances <span class="kw"><</span>number of instances<span class="kw">></span> <span class="kw"><</span>project ID<span class="kw">></span></code></pre>
<p>Where <code><project ID></code> is the UUID of the project to increase the quota for, and <code><number of instances></code> is the new limit that you want to set on the maximum allowed number of VM instances.</p>
<p>Another potential problem is getting the following error message when running any command of the <code>nova</code> client, such as <code>nova list</code>:</p> <pre><code>ERROR: ConnectionRefused: '[Errno 111] Connection refused'</code></pre> <p>This may happen because the <code>openstack-nova-api</code> service is not running on the controller. The following command can be used to check the status of the service on the controller host:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">service openstack-nova-api status</code></pre> <p>Getting an error message like “openstack-nova-api dead but pid file exists” or “openstack-nova-api dead but subsys locked” means that the service has failed.
A quick solution is to simply remove the pid and lock files:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">rm</span> -f /var/lock/subsys/openstack-nova-api <span class="kw">rm</span> -f /var/run/nova/nova-api.pid</code></pre> <p>Another solution is to re-install the service as follows:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">yum remove -y openstack-nova-api yum <span class="kw">install</span> -y openstack-nova-api service openstack-nova-api start</code></pre>
<h3 id="nova-network"><a href="#TOC"><span class="header-section-number">5.5.3</span> Nova Network</a></h3> <p>If, after a start up, the <code>openstack-nova-network</code> service hangs with the following as the last message in the log file: ‘Attempting to grab file lock “iptables” for method “apply”’, the solution is the following<sup><a href="#fn36" class="footnoteRef" id="fnref36">36</a></sup>:</p> <pre class="sourceCode Bash"><code class="sourceCode bash"><span class="kw">rm</span> /var/lib/nova/tmp/nova-iptables.lock</code></pre>
<p>Another problem is that sometimes a VM instance cannot be deleted and gets stuck in the <code>deleting</code> state with a message in <code>/var/log/nova/compute.log</code> similar to the following:</p> <pre><code>nova.rpc.amqp RemoteError: Remote error: ProcessExecutionError \ Unexpected error while running command. nova.rpc.amqp Command: sudo nova-rootwrap dhcp_release br100 \ 10.0.0.2 fa:16:3e:6b:f5:72 nova.rpc.amqp Exit code: 1</code></pre> <p>The solution to this problem is to modify <code>/etc/nova/nova.conf</code> and set:</p> <pre class="sourceCode Bash"><code class="sourceCode bash">force_dhcp_release = False</code></pre>
<h1 id="conclusions"><a href="#TOC"><span class="header-section-number">6</span> Conclusions</a></h1> <p>We have gone through and discussed all the steps required to get from bare hardware to a fully operational OpenStack infrastructure. We started with notes on installing CentOS on the nodes, and continued through setting up a network gateway, distributed replicated storage using GlusterFS, the KVM hypervisor, and all the main OpenStack services. We concluded with steps to test the OpenStack installation, suggestions on ways of finding and resolving the sources of problems, and a discussion of solutions to a number of problems that may be encountered during the installation process.</p>
<p>In our opinion, the availability of step-by-step installation and configuration guides, such as this one, is very important for lowering the barrier to entry into the real-world application of open source Cloud platforms for a wider audience. The task of providing such guidance lies with both the official documentation and the tutorials and materials developed by the project community. It is hard to overestimate the role of community support in facilitating the adoption of open source software. We believe that the OpenStack project has attracted a large, active, and growing community of people, who will undoubtedly contribute greatly to further advancements of both the software and documentation of OpenStack, leading to a significant impact on the adoption of free open source software and Cloud computing.</p>
<h1 id="references"><a href="#TOC"><span class="header-section-number">7</span> References</a></h1> <p>[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and others, “A view of cloud computing,” <em>Communications of the ACM</em>, vol. 53, pp. 50–58, 2010.</p> <p>[2] R.
Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, “Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility,” <em>Future Generation computer systems</em>, vol. 25, pp. 599–616, 2009.</p> <p>[3] OpenStack LLC, “OpenStack: The Open Source Cloud Operating System,” 21-Jul-2012. [Online]. Available: <a href="http://www.openstack.org/software/" title="http://www.openstack.org/software/">http://www.openstack.org/software/</a>.</p> <p>[4] OpenStack LLC, “OpenStack Compute Administration Manual,” 2012.</p> <p>[5] OpenStack LLC, “OpenStack Install and Deploy Manual,” 2012.</p> <p>[6] R. Landmann, J. Reed, D. Cantrell, H. D. Goede, and J. Masters, “Red Hat Enterprise Linux 6 Installation Guide,” 2012.</p> <div class="footnotes"> <hr /> <ol> <li id="fn1"><p>Amazon EC2. <a href="http://aws.amazon.com/ec2/">http://aws.amazon.com/ec2/</a>.<a href="#fnref1">↩</a></p></li> <li id="fn2"><p>Google Compute Engine. <a href="http://cloud.google.com/products/compute-engine.html">http://cloud.google.com/products/compute-engine.html</a>.<a href="#fnref2">↩</a></p></li> <li id="fn3"><p>Google App Engine. <a href="http://cloud.google.com/products/">http://cloud.google.com/products/</a>.<a href="#fnref3">↩</a></p></li> <li id="fn4"><p>Microsoft Azure. <a href="http://www.windowsazure.com/">http://www.windowsazure.com/</a>.<a href="#fnref4">↩</a></p></li> <li id="fn5"><p>Salesforce.com. <a href="http://www.salesforce.com/">http://www.salesforce.com/</a>.<a href="#fnref5">↩</a></p></li> <li id="fn6"><p>Amazon Web Services Marketplace. <a href="https://aws.amazon.com/marketplace/">https://aws.amazon.com/marketplace/</a>.<a href="#fnref6">↩</a></p></li> <li id="fn7"><p>The project repository. <a href="https://github.com/beloglazov/openstack-centos-kvm-glusterfs">https://github.com/beloglazov/openstack-centos-kvm-glusterfs</a>.<a href="#fnref7">↩</a></p></li> <li id="fn8"><p>CentOS. <a href="http://centos.org/">http://centos.org/</a>.<a href="#fnref8">↩</a></p></li> <li id="fn9"><p>GlusterFS. <a href="http://gluster.org/">http://gluster.org/</a>.<a href="#fnref9">↩</a></p></li> <li id="fn10"><p>KVM. <a href="http://www.linux-kvm.org/">http://www.linux-kvm.org/</a>.<a href="#fnref10">↩</a></p></li> <li id="fn11"><p>OpenStack. <a href="http://openstack.org/">http://openstack.org/</a>.<a href="#fnref11">↩</a></p></li> <li id="fn12"><p>The OpenStack Foundation. <a href="http://wiki.openstack.org/Governance/Foundation/Structure">http://wiki.openstack.org/Governance/Foundation/Structure</a>.<a href="#fnref12">↩</a></p></li> <li id="fn13"><p>Open Cloud Computing Interface. <a href="http://occi-wg.org/">http://occi-wg.org/</a>.<a href="#fnref13">↩</a></p></li> <li id="fn14"><p>Open Grid Forum. <a href="http://www.ogf.org/">http://www.ogf.org/</a>.<a href="#fnref14">↩</a></p></li> <li id="fn15"><p>Libvirt. <a href="http://libvirt.org/">http://libvirt.org/</a>.<a href="#fnref15">↩</a></p></li> <li id="fn16"><p>Eucalyptus. <a href="http://www.eucalyptus.com/">http://www.eucalyptus.com/</a>.<a href="#fnref16">↩</a></p></li> <li id="fn17"><p><a href="http://www.eucalyptus.com/news/amazon-web-services-and-eucalyptus-partner">Http://www.eucalyptus.com/news/amazon-web-services-and-eucalyptus-partner</a>.<a href="#fnref17">↩</a></p></li> <li id="fn18"><p>OpenNebula. <a href="http://opennebula.org/">http://opennebula.org/</a>.<a href="#fnref18">↩</a></p></li> <li id="fn19"><p>CloudStack. 
<a href="http://cloudstack.org/">http://cloudstack.org/</a>.<a href="#fnref19">↩</a></p></li> <li id="fn20"><p>DevStack. <a href="http://devstack.org/">http://devstack.org/</a>.<a href="#fnref20">↩</a></p></li> <li id="fn21"><p>Dodai-deploy. <a href="https://github.com/nii-cloud/dodai-deploy">https://github.com/nii-cloud/dodai-deploy</a>.<a href="#fnref21">↩</a></p></li> <li id="fn22"><p>Puppet. <a href="http://puppetlabs.com/">http://puppetlabs.com/</a>.<a href="#fnref22">↩</a></p></li> <li id="fn23"><p>Red Hat OpenStack. <a href="http://www.redhat.com/openstack/">http://www.redhat.com/openstack/</a>.<a href="#fnref23">↩</a></p></li> <li id="fn24"><p>The project repository. <a href="https://github.com/beloglazov/openstack-centos-kvm-glusterfs">https://github.com/beloglazov/openstack-centos-kvm-glusterfs</a>.<a href="#fnref24">↩</a></p></li> <li id="fn25"><p>XFS. <a href="http://en.wikipedia.org/wiki/XFS">http://en.wikipedia.org/wiki/XFS</a>.<a href="#fnref25">↩</a></p></li> <li id="fn26"><p>Git. <a href="http://git-scm.com/">http://git-scm.com/</a>.<a href="#fnref26">↩</a></p></li> <li id="fn27"><p>SELinux. <a href="http://en.wikipedia.org/wiki/Security-Enhanced_Linux">http://en.wikipedia.org/wiki/Security-Enhanced_Linux</a>.<a href="#fnref27">↩</a></p></li> <li id="fn28"><p>Libvirt. <a href="http://libvirt.org/">http://libvirt.org/</a>.<a href="#fnref28">↩</a></p></li> <li id="fn29"><p>The EPEL repository. <a href="http://fedoraproject.org/wiki/EPEL">http://fedoraproject.org/wiki/EPEL</a>.<a href="#fnref29">↩</a></p></li> <li id="fn30"><p>The <em>keystone-init</em> project. <a href="https://github.com/nimbis/keystone-init">https://github.com/nimbis/keystone-init</a>.<a href="#fnref30">↩</a></p></li> <li id="fn31"><p>YAML. <a href="http://en.wikipedia.org/wiki/YAML">http://en.wikipedia.org/wiki/YAML</a>.<a href="#fnref31">↩</a></p></li> <li id="fn32"><p>CirrOS. <a href="https://launchpad.net/cirros/">https://launchpad.net/cirros/</a>.<a href="#fnref32">↩</a></p></li> <li id="fn33"><p>Ubuntu Cloud Images. <a href="http://uec-images.ubuntu.com/">http://uec-images.ubuntu.com/</a>.<a href="#fnref33">↩</a></p></li> <li id="fn34"><p>QEMU. <a href="http://en.wikipedia.org/wiki/QEMU">http://en.wikipedia.org/wiki/QEMU</a>.<a href="#fnref34">↩</a></p></li> <li id="fn35"><p>Dnsmasq. <a href="http://en.wikipedia.org/wiki/Dnsmasq">http://en.wikipedia.org/wiki/Dnsmasq</a>.<a href="#fnref35">↩</a></p></li> <li id="fn36"><p>OpenStack Compute Questions. <a href="https://answers.launchpad.net/nova/+question/200985">https://answers.launchpad.net/nova/+question/200985</a>.<a href="#fnref36">↩</a></p></li> </ol> </div> </body> </html>