Playing with GlusterFS

Posted on: 2017-04-15

GlusterFS is a really cool software-defined storage solution. It lets you build RAID-like setups out of disks in different servers, and it is MUCH cheaper, more flexible, and sometimes more performant than the typical EMC or NetApp head-plus-shelves setup.

This post will set up two shares: a RAID0-like distributed share, where disk capacity from two servers is added together and presented as one big export with no redundancy, and a RAID1-like replicated share, where disks from two servers are mirrored. We will also go over how to use these shares on a client system.

Note that this barely scratches the surface of the product. You can mix volume types, replicate across data centers for DR, offer shares via Samba, and more. I encourage you to explore further.

Start by setting up two minimal CentOS 7 VMs with static IPs. Each VM should have three disks: vda for the OS, and vdb and vdc as 10GB blanks used for Gluster.

On both VMs: format the drives, install the software, disable the firewall, and configure /etc/hosts using your info.

fdisk /dev/vdb
n
p (keep default)
1 (keep default)
2048 (keep default)
20971519 (keep default)
w

fdisk /dev/vdc
n
p (keep default)
1 (keep default)
2048 (keep default)
20971519 (keep default)
w
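
If you'd rather script the partitioning than answer the fdisk prompts by hand, parted can lay down the same single-partition table (a sketch, assuming the same blank 10GB disks):

parted -s /dev/vdb mklabel msdos mkpart primary 2048s 100%
parted -s /dev/vdc mklabel msdos mkpart primary 2048s 100%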

mkfs.xfs -i size=512 /dev/vdb1
mkfs.xfs -i size=512 /dev/vdc1
mkdir -p /data/brick1 /data/brick2
echo '/dev/vdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
echo '/dev/vdc1 /data/brick2 xfs defaults 1 2' >> /etc/fstab
mount -a
#validate
df -h

mkdir /data/brick1/repl0
mkdir /data/brick2/dist0

yum -y update
yum -y install centos-release-gluster39 vim
yum -y install glusterfs-server
systemctl start glusterd 
systemctl enable glusterd

systemctl disable firewalld
systemctl stop firewalld
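
Disabling the firewall keeps this walkthrough simple. If you'd rather keep firewalld running, open Gluster's management ports plus one port per brick instead (a sketch; the 49152+ brick range is an assumption based on Gluster's defaults):

firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49156/tcp
firewall-cmd --reload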

vim /etc/hosts
192.168.1.68 gs1.lan 
192.168.1.69 gs2.lan

Now that the servers are configured, we need them to talk to each other.

On gs1.lan:
gluster peer probe gs2.lan

On gs2.lan
gluster peer probe gs1.lan
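
Verify that the nodes see each other before moving on:

gluster peer status
gluster pool list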

Now let's create the replicated, RAID1-like volume. On either VM:

gluster volume create repl0 replica 2 transport tcp gs1.lan:/data/brick1/repl0 gs2.lan:/data/brick1/repl0
gluster volume start repl0
gluster volume info repl0

Now let's create the distributed, RAID0-like volume. On either VM:

gluster volume create dist0 gs1.lan:/data/brick2/dist0 gs2.lan:/data/brick2/dist0
gluster volume start dist0
gluster volume info dist0
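
Both volumes should now report Started. For a quick view of the bricks, ports, and self-heal daemons behind them:

gluster volume status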

Finally, let's pretend gs1.lan is a client and mount the shares there.

mkdir -p /mnt/dist0 /mnt/repl0
mount -t glusterfs gs1.lan:/dist0 /mnt/dist0
mount -t glusterfs gs1.lan:/repl0 /mnt/repl0
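
If you want the mounts to survive a reboot, add fstab entries with _netdev so they wait for networking (a sketch using the same mount points):

echo 'gs1.lan:/dist0 /mnt/dist0 glusterfs defaults,_netdev 0 0' >> /etc/fstab
echo 'gs1.lan:/repl0 /mnt/repl0 glusterfs defaults,_netdev 0 0' >> /etc/fstab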

Checking it

df -h
for i in `seq -w 1 100`; do cp /etc/hosts /mnt/dist0/hosts-$i; done
for i in `seq -w 1 100`; do cp /etc/hosts /mnt/repl0/hosts-$i; done
#on both nodes do the following and look at file distribution
ls -lha /data/brick1/repl0 /data/brick2/dist0
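
A quick count makes the difference between the two volume types obvious: dist0 should hold roughly half the files on each node, while repl0 holds all 100 on both.

ls /data/brick2/dist0 | wc -l
ls /data/brick1/repl0 | wc -l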

Bonus: Mount a share via NFS

mkdir -p /mnt/dist0_nfs
gluster volume set dist0 nfs.disable false
mount -t nfs -o vers=3 gs1.lan:/dist0 /mnt/dist0_nfs
ls -lha /mnt/dist0_nfs
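
If that mount fails, make sure rpcbind is running on the server and that the volume shows up in its NFS export list (Gluster's built-in NFS speaks v3 only):

showmount -e gs1.lan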

References: http://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Quickstart/


Single-box oVirt 4 setup

Posted on: 2017-04-15

I like to spin up VMs for testing stuff. My home lab has a small box I use for that. On it I run oVirt, Red Hat's answer to vSphere. It works well and has a decent web UI that makes management easy.

Do a fresh CentOS 7 minimal install on physical hardware.

yum update
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum install ovirt-hosted-engine-setup screen vim wget

Configure /etc/hosts on this machine and your client machine for testing, or set up DNS:

192.168.1.66 ovirt.lan
192.168.1.67 ovirtm.lan

Configure the NFS shares:

[root@ovirt ~]# mkdir -p /hosted_storage /storage /isos
[root@ovirt ~]# chmod 0755 /hosted_storage /storage /isos 
[root@ovirt ~]# chown 36:36 /hosted_storage /storage /isos 
[root@ovirt ~]# vi /etc/exports
/hosted_storage        192.168.0.0/16(rw,no_root_squash,anonuid=36,anongid=36,no_subtree_check,sync)
/storage        192.168.0.0/16(rw,no_root_squash,anonuid=36,anongid=36,no_subtree_check,sync)
/isos        192.168.0.0/16(rw,no_root_squash,anonuid=36,anongid=36,no_subtree_check,sync)
[root@ovirt ~]# systemctl start nfs-server
[root@ovirt ~]# systemctl enable nfs-server
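
Double-check that the shares are actually exported before starting the deploy:

[root@ovirt ~]# exportfs -v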

Note that the install of the software and the start of the web interface can take a LONG time on slow hardware. Be patient.

[root@ovirt ~]# screen
[root@ovirt ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
      During customization use CTRL-D to abort.
      Continuing will configure this host for serving as hypervisor and create a VM where you have to install the engine afterwards.
      Are you sure you want to continue? (Yes, No)[Yes]: 
[ INFO  ] Hardware supports virtualization
      Configuration files: []
      Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170210180401-5tl1oa.log
      Version: otopi-1.6.0 (otopi-1.6.0-1.el7.centos)
[ INFO  ] Detecting available oVirt engine appliances
[ ERROR ] No engine appliance image is available on your system.
      The oVirt engine appliance is now required to deploy hosted-engine.
      You could get oVirt engine appliance installing ovirt-engine-appliance rpm.
      Do you want to install ovirt-engine-appliance rpm? (Yes, No) [Yes]: 
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Installing the oVirt engine appliance
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Installing the oVirt engine appliance
[ INFO  ] Yum Status: Downloading Packages
[ INFO  ] Yum Downloading: ovirt-engine-appliance-4.1-20170201.1.el7.centos.noarch.rpm 170 M(18%)
[ INFO  ] Yum Downloading: ovirt-engine-appliance-4.1-20170201.1.el7.centos.noarch.rpm 456 M(48%)
[ INFO  ] Yum Downloading: ovirt-engine-appliance-4.1-20170201.1.el7.centos.noarch.rpm 637 M(67%)
[ INFO  ] Yum Downloading: ovirt-engine-appliance-4.1-20170201.1.el7.centos.noarch.rpm 822 M(87%)
[ INFO  ] Yum Download/Verify: ovirt-engine-appliance-4.1-20170201.1.el7.centos.noarch
[ INFO  ] Yum Status: Check Package Signatures
[ INFO  ] Yum Status: Running Test Transaction
[ INFO  ] Yum Status: Running Transaction
[ INFO  ] Yum install: 1/1: ovirt-engine-appliance-4.1-20170201.1.el7.centos.noarch
[ INFO  ] Yum Verify: 1/1: ovirt-engine-appliance.noarch 0:4.1-20170201.1.el7.centos - u
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[WARNING] Cannot locate gluster packages, Hyper Converged setup support will be disabled.
[ INFO  ] Please abort the setup and install vdsm-gluster, glusterfs-server >= 3.7.2 and restart vdsmd service in order to gain Hyper Converged setup support.
[ INFO  ] Stage: Environment customization

      --== STORAGE CONFIGURATION ==--

      Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: 
      Please specify the full shared storage connection path to use (example: host:/path): ovirt.lan:/hosted_storage

      --== HOST NETWORK CONFIGURATION ==--

      iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]: 
      Please indicate a pingable gateway IP address [192.168.1.1]: 
      Please indicate a nic to set ovirtmgmt bridge on: (enp2s0, enp3s0) [enp2s0]: enp3s0 
      #choose the NIC that is connected to your network^

      --== VM CONFIGURATION ==--

      The following appliance have been found on your system:
            [1] - The oVirt Engine Appliance image (OVA) - 4.1-20170201.1.el7.centos
            [2] - Directly select an OVA file
      Please select an appliance (1, 2) [1]: 
[ INFO  ] Verifying its sha1sum
[ INFO  ] Checking OVF archive content (could take a few minutes depending on archive size)
[ INFO  ] Checking OVF XML content (could take a few minutes depending on archive size)
      Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]:
[ INFO  ] Detecting host timezone.
      Would you like to use cloud-init to customize the appliance on the first boot (Yes, No)[Yes]?
      Would you like to generate on-fly a cloud-init ISO image (of no-cloud type)
      or do you have an existing one (Generate, Existing)[Generate]?
      Please provide the FQDN you would like to use for the engine appliance.
      Note: This will be the FQDN of the engine VM you are now going to launch,
      it should not point to the base host or to any other existing machine.
      Engine VM FQDN: (leave it empty to skip):  []: ovirtm.lan
      #ovirtm for management vs ovirt for the host machine
      Please provide the domain name you would like to use for the engine appliance.
      Engine VM domain: [lan]
      Automatically execute engine-setup on the engine appliance on first boot (Yes, No)[Yes]? 
      Automatically restart the engine VM as a monitored service after engine-setup (Yes, No)[Yes]? 
      Enter root password that will be used for the engine appliance (leave it empty to skip): 
      Confirm appliance root password: 
      Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip): 
[WARNING] Skipping appliance root ssh public key
      Do you want to enable ssh access for the root user (yes, no, without-password) [yes]: 
      Please specify the size of the VM disk in GB: [25]: 
      Please specify the memory size of the VM in MB (Defaults to maximum available): [7039]: 4096
      The following CPU types are supported by this host:
             - model_Nehalem: Intel Nehalem Family
             - model_Penryn: Intel Penryn Family
             - model_Conroe: Intel Conroe Family
      Please specify the CPU type to be used by the VM [model_Nehalem]: 
      Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]: 2
      You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:67:3f:d8]: 
      How should the engine VM network be configured (DHCP, Static)[DHCP]? Static
      Please enter the IP address to be used for the engine VM [192.168.1.2]: 192.168.1.67
[ INFO  ] The engine VM will be configured to use 192.168.1.67/24
      Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM
      Engine VM DNS (leave it empty to skip) [192.168.1.1]: 
      Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
      Note: ensuring that this host could resolve the engine VM hostname is still up to you
      (Yes, No)[No] Yes

      --== HOSTED ENGINE CONFIGURATION ==--

      Enter engine admin password: 
      Confirm engine admin password: 
      Please provide the name of the SMTP server through which we will send notifications [localhost]: 
      Please provide the TCP port number of the SMTP server [25]: 
      Please provide the email address from which notifications will be sent [root@localhost]: 
      Please provide a comma-separated list of email addresses which will get notifications [root@localhost]: 
[ INFO  ] Stage: Setup validation
[WARNING] Failed to resolve ovirt.lan using DNS, it can be resolved only locally
      --== CONFIGURATION PREVIEW ==--

      Bridge interface                   : enp3s0
      Engine FQDN                        : ovirtm.lan
      Bridge name                        : ovirtmgmt
      Host address                       : ovirt.lan
      SSH daemon port                    : 22
      Firewall manager                   : iptables
      Gateway address                    : 192.168.1.1
      Storage Domain type                : nfs3
      Image size GB                      : 25
      Host ID                            : 1
      Storage connection                 : ovirt.lan:/hosted_storage
      Console type                       : vnc
      Memory size MB                     : 4096
      MAC address                        : 00:16:3e:67:3f:d8
      Number of CPUs                     : 2
      OVF archive (for disk boot)        : /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.1-20170201.1.el7.centos.ova
      Appliance version                  : 4.1-20170201.1.el7.centos
      Restart engine VM after engine-setup: True
      Engine VM timezone                 : America/New_York
      CPU Type                           : model_Nehalem
      Please confirm installation settings (Yes, No)[Yes]:                                                                                                                                                                                                     
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Configuring the management bridge
[ INFO  ] Creating Storage Domain
[ INFO  ] Creating Storage Pool
[ INFO  ] Connecting Storage Pool
[ INFO  ] Verifying sanlock lockspace initialization
[ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO  ] Image for 'hosted-engine.lockspace' created successfully
[ INFO  ] Creating Image for 'hosted-engine.metadata' ...
[ INFO  ] Image for 'hosted-engine.metadata' created successfully
[ INFO  ] Creating VM Image
[ INFO  ] Extracting disk image from OVF archive (could take a few minutes depending on archive size)
[ INFO  ] Validating pre-allocated volume size
[ INFO  ] Uploading volume to data domain (could take a few minutes depending on archive size)
[ INFO  ] Image successfully imported from OVF
[ INFO  ] Destroying Storage Pool
[ INFO  ] Start monitoring domain
[ INFO  ] Configuring VM
[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
[ INFO  ] Creating VM
      You can now connect to the VM with the following command:
            hosted-engine --console
      You can also graphically connect to the VM from your system with the following command:
            remote-viewer vnc://ovirt.lan:5900
      Use temporary password "6465WJGH" to connect to vnc console.
      Please ensure that your Guest OS is properly configured to support serial console according to your distro documentation.
      Follow http://www.ovirt.org/Serial_Console_Setup#I_need_to_access_the_console_the_old_way for more info.
      If you need to reboot the VM you will need to start it manually using the command:
      hosted-engine --vm-start
      You can then set a temporary password using the command:
      hosted-engine --add-console-password
[ INFO  ] Running engine-setup on the appliance
      |- [ INFO  ] Stage: Initializing
      |- [ INFO  ] Stage: Environment setup
      |-           Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/root/ovirt-engine-answers', '/root/heanswers.conf']
      |-           Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20170210184819-egk9f2.log
      |-           Version: otopi-1.6.0 (otopi-1.6.0-1.el7.centos)
      |- [ INFO  ] Stage: Environment packages setup
      |- [ INFO  ] Stage: Programs detection
      |- [ INFO  ] Stage: Environment setup
      |- [ INFO  ] Stage: Environment customization
      |-
      |-           --== PRODUCT OPTIONS ==--
      |-
      |-           Configure Image I/O Proxy on this host? (Yes, No) [Yes]:
      |-
      |-           --== PACKAGES ==--
      |-
      |-
      |-           --== NETWORK CONFIGURATION ==--
      |-
      |- [WARNING] Failed to resolve ovirtm.lan using DNS, it can be resolved only locally
      |- [ INFO  ] firewalld will be configured as firewall manager.
      |-           --== DATABASE CONFIGURATION ==--
      |-
      |-
      |-           --== OVIRT ENGINE CONFIGURATION ==--
      |-
      |-
      |-           --== STORAGE CONFIGURATION ==--
      |-
      |-
      |-           --== PKI CONFIGURATION ==--
      |-
      |-
      |-           --== APACHE CONFIGURATION ==--
      |-
      |-
      |-           --== SYSTEM CONFIGURATION ==--
      |-
      |-
      |-           --== MISC CONFIGURATION ==--
      |-
      |-           Please choose Data Warehouse sampling scale:
      |-           (1) Basic
      |-           (2) Full
      |-           (1, 2)[1]:
      |-
      |-           --== END OF CONFIGURATION ==--
      |-
      |- [ INFO  ] Stage: Setup validation
      |- [WARNING] Less than 16384MB of memory is available
      |-
      |-           --== CONFIGURATION PREVIEW ==--
      |-
      |-           Application mode                        : both
      |-           Default SAN wipe after delete           : False
      |-           Firewall manager                        : firewalld
      |-           Update Firewall                         : True
      |-           Host FQDN                               : ovirtm.lan
      |-           Configure local Engine database         : True
      |-           Set application as default page         : True
      |-           Configure Apache SSL                    : True
      |-           Engine database secured connection      : False
      |-           Engine database user name               : engine
      |-           Engine database name                    : engine
      |-           Engine database host                    : localhost
      |-           Engine database port                    : 5432
      |-           Engine database host name validation    : False
      |-           Engine installation                     : True
      |-           PKI organization                        : lan
      |-           DWH installation                        : True
      |-           DWH database secured connection         : False
      |-           DWH database host                       : localhost
      |-           DWH database user name                  : ovirt_engine_history
      |-           DWH database name                       : ovirt_engine_history
      |-           DWH database port                       : 5432
      |-           DWH database host name validation       : False
      |-           Configure local DWH database            : True
      |-           Configure Image I/O Proxy               : True
      |-           Configure VMConsole Proxy               : True
      |-           Configure WebSocket Proxy               : True
      |- [ INFO  ] Stage: Transaction setup
      |- [ INFO  ] Stopping engine service
      |- [ INFO  ] Stopping ovirt-fence-kdump-listener service
      |- [ INFO  ] Stopping dwh service
      |- [ INFO  ] Stopping Image I/O Proxy service
      |- [ INFO  ] Stopping websocket-proxy service
      |- [ INFO  ] Stage: Misc configuration
      |- [ INFO  ] Stage: Package installation
      |- [ INFO  ] Stage: Misc configuration
      |- [ INFO  ] Upgrading CA
      |- [ INFO  ] Initializing PostgreSQL
      |- [ INFO  ] Creating PostgreSQL 'engine' database
      |- [ INFO  ] Configuring PostgreSQL
      |- [ INFO  ] Creating PostgreSQL 'ovirt_engine_history' database
      |- [ INFO  ] Configuring PostgreSQL
      |- [ INFO  ] Creating/refreshing Engine database schema
      |- [ INFO  ] Creating/refreshing DWH database schema
      |- [ INFO  ] Configuring Image I/O Proxy
      |- [ INFO  ] Setting up ovirt-vmconsole proxy helper PKI artifacts
      |- [ INFO  ] Setting up ovirt-vmconsole SSH PKI artifacts
      |- [ INFO  ] Configuring WebSocket Proxy
      |- [ INFO  ] Creating/refreshing Engine 'internal' domain database schema
      |- [ INFO  ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
      |- [ INFO  ] Stage: Transaction commit
      |- [ INFO  ] Stage: Closing up
      |- [ INFO  ] Starting engine service
      |- [ INFO  ] Starting dwh service
      |- [ INFO  ] Restarting ovirt-vmconsole proxy service
      |-          
      |-           --== SUMMARY ==--
      |-          
      |- [ INFO  ] Restarting httpd
      |-           Please use the user 'admin@internal' and password specified in order to login
      |-           Web access is enabled at:
      |-               http://ovirtm.lan:80/ovirt-engine
      |-               https://ovirtm.lan:443/ovirt-engine
      |-           Internal CA D1:4E:13:33:5C:F8:52:92:BB:A2:63:16:30:31:26:4D:7F:9E:BA:E8
      |-           SSH fingerprint: e2:fc:64:88:51:07:41:94:03:09:d4:1c:05:1a:87:ad
      |- [WARNING] Less than 16384MB of memory is available
      |-          
      |-           --== END OF SUMMARY ==--
      |-          
      |- [ INFO  ] Stage: Clean up
      |-           Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20170210184819-egk9f2.log
      |- [ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20170210185850-setup.conf'
      |- [ INFO  ] Stage: Pre-termination
      |- [ INFO  ] Stage: Termination
      |- [ INFO  ] Execution of setup completed successfully
      |- HE_APPLIANCE_ENGINE_SETUP_SUCCESS
        [ INFO  ] Engine-setup successfully completed 
        [ INFO  ] Engine is still unreachable
        [ INFO  ] Engine is still not reachable, waiting...
        [ INFO  ] Engine is still unreachable
        [ INFO  ] Engine is still not reachable, waiting...
        [ INFO  ] Engine is still unreachable
        [ INFO  ] Engine is still not reachable, waiting...
        [ INFO  ] Engine replied: DB Up!Welcome to Health Status!
        [ INFO  ] Acquiring internal CA cert from the engine
        [ INFO  ] The following CA certificate is going to be used, please immediately interrupt if not correct:
        [ INFO  ] Issuer: C=US, O=lan, CN=ovirtm.lan.20449, Subject: C=US, O=lan, CN=ovirtm.lan.20449, Fingerprint (SHA-1): D14E13335CF85292BBA263163031264D7F9EBAE8
        [ INFO  ] Connecting to the Engine
        [ INFO  ] Waiting for the host to become operational in the engine. This may take several minutes...
        [ INFO  ] Still waiting for VDSM host to become operational...
        [ INFO  ] The VDSM Host is now operational
        [ INFO  ] Saving hosted-engine configuration on the shared storage domain
        [ INFO  ] Shutting down the engine VM
        [ INFO  ] Enabling and starting HA services
        [ INFO  ] Stage: Clean up
        [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170210190327.conf'
        [ INFO  ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
        [ INFO  ] Stage: Pre-termination
        [ INFO  ] Stage: Termination
        [ INFO  ] Hosted Engine successfully deployed

Alright, now that it's installed, let's log in. Again, it may take a while to come up and may 404 at first. Be patient.

https://ovirtm.lan/ovirt-engine/
Admin Portal
admin / password you set above

Configure some storage.

System > Storage Tab > New Domain
    name: storage
    export path: ovirt.lan:/storage
System > Storage Tab > New Domain
     name: isos
     export path: ovirt.lan:/isos
     function: iso

Wait for isos, storage, and hosted_storage to go green. Click the up arrow at the bottom of the screen next to Events to watch progress. Click on Data Centers and Hosts to make sure everything is green. Virtual Machines should now show HostedEngine.

Now let's stand up a VM.

wget https://nl.alpinelinux.org/alpine/v3.5/releases/x86_64/alpine-virt-3.5.1-x86_64.iso 
cp alpine-virt-3.5.1-x86_64.iso /isos/5949ed8e-3966-4641-8b86-142a699684cf/images/11111111-1111-1111-1111-111111111111/ 
chown 36:36 /isos/5949ed8e-3966-4641-8b86-142a699684cf/images/11111111-1111-1111-1111-111111111111/alpine-virt-3.5.1-x86_64.iso
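
Hand-copying into the ISO domain's UUID path works since the NFS share is local, but oVirt also ships an uploader that finds the right path for you (hedged: ovirt-iso-uploader typically lives on the engine VM and will prompt for the admin password):

ovirt-iso-uploader --iso-domain=isos upload alpine-virt-3.5.1-x86_64.iso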

Virtual Machines > New VM  
    os: linux
    server
    name: testvm
    image > create > 5G
    nic1: ovirtmgmt
    ok
click it > run once > boot options > attach cd > alpine > ok
click console icon to the right of run once
install the virt-viewer or spice plugin if necessary
once you are at a console, run setup-alpine to install to disk (choose sys mode; most other answers should be obvious)
when the install is complete, right click > poweroff
right click > run. It will boot off the hard disk this time.
pull up the console and see that it booted without the CD. You'll need to configure another account, or configure SSH to allow remote root logins, if you want to ssh into this VM.

One final note: I've seen it take 40+ minutes for my machine to reboot and bring the management VM fully back up. So, if you are on crap hardware, be patient.
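
While you wait, the host can report the engine VM's state:

hosted-engine --vm-status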

All done. Enjoy your new hypervisor and management console.


Playing with OpenStack

Posted on: 2017-04-15

OpenStack is the premier open source alternative to Amazon's AWS cloud infrastructure. This post will go over installing its most critical services on a single machine, then spinning up a VM inside it and exposing that VM to the rest of your network. You wouldn't want to run like this in production, but it's good for learning or home lab use.

Do a fresh CentOS 7 minimal install on physical hardware and assign a static IP during the install. Then do the config and install:

yum -y update
yum install -y centos-release-openstack-newton 
yum -y update
yum install -y openstack-packstack vim wget

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network

packstack --gen-answer-file=answers.txt
vim answers.txt
#vms in my lab are small and throwaway. no fancy data storage needed.
CONFIG_CINDER_INSTALL=n
CONFIG_SWIFT_INSTALL=n
CONFIG_MANILA_INSTALL=n
#my hardware is crap. I'd rather skip the charts and usage monitoring than spend cycles collecting them.
CONFIG_CEILOMETER_INSTALL=n
CONFIG_AODH_INSTALL=n
CONFIG_GNOCCHI_INSTALL=n
CONFIG_NAGIOS_INSTALL=n
#run lots of vms on crap hardware
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=20
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=20
#make sure networking looks like this if you want your normal network to reach the VMs
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex
#use your ethernet adapter device
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:enp3s0
#don't need orchestration, demo data, hadoop, DB as a service, bare metal management, or fancy load balancers
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_PROVISION_DEMO=n
CONFIG_SAHARA_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_LBAAS_INSTALL=n
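
If you'd rather script these edits than hunt through the file, sed works fine since each key appears once in the generated answer file (a sketch covering a few of the settings above; repeat for the rest):

sed -i 's/^CONFIG_CINDER_INSTALL=.*/CONFIG_CINDER_INSTALL=n/' answers.txt
sed -i 's/^CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=.*/CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=20/' answers.txt
sed -i 's/^CONFIG_NEUTRON_OVS_BRIDGE_IFACES=.*/CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:enp3s0/' answers.txt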

[root@openstack ~]# packstack --answer-file=answers.txt
Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries                        [ DONE ]
Setting up CACERT                                    [ DONE ]
Preparing AMQP entries                               [ DONE ]
Preparing MariaDB entries                            [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Preparing Keystone entries                           [ DONE ]
Preparing Glance entries                             [ DONE ]
Preparing Nova API entries                           [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Preparing Nova Compute entries                       [ DONE ]
Preparing Nova Scheduler entries                     [ DONE ]
Preparing Nova VNC Proxy entries                     [ DONE ]
Preparing OpenStack Network-related Nova entries     [ DONE ]
Preparing Nova Common entries                        [ DONE ]
Preparing Neutron LBaaS Agent entries                [ DONE ]
Preparing Neutron API entries                        [ DONE ]
Preparing Neutron L3 entries                         [ DONE ]
Preparing Neutron L2 Agent entries                   [ DONE ]
Preparing Neutron DHCP Agent entries                 [ DONE ]
Preparing Neutron Metering Agent entries             [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Preparing OpenStack Client entries                   [ DONE ]
Preparing Horizon entries                            [ DONE ]
Preparing Puppet manifests                           [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.1.66_controller.pp
192.168.1.66_controller.pp:                          [ DONE ]         
Applying 192.168.1.66_network.pp
192.168.1.66_network.pp:                             [ DONE ]      
Applying 192.168.1.66_compute.pp
192.168.1.66_compute.pp:                             [ DONE ]      
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Now lets create the "external" network and give it an IP range to use. The IP range should be a small set of unused IPs in your real LAN network preferably outside of your DHCP range. This allows OpenStack to assign a floating IP to a VM and then that VM will be accessible to the rest of your network.

. keystonerc_admin
neutron net-create external_network --provider:network_type flat --provider:physical_network extnet  --router:external
#update the IPs to your environment
neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.1.70,end=192.168.1.80 --gateway=192.168.1.1 --dns-nameserver=192.168.1.1 external_network 192.168.1.0/24
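
Sanity-check what Neutron built:

neutron net-list
neutron subnet-list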

Now let's download a VM hard disk image we can use. See http://docs.openstack.org/image-guide/obtain-images.html for options.

wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1701.qcow2
glance image-create --name=centos7 --visibility=public --disk-format=qcow2 --container-format=bare --file=CentOS-7-x86_64-GenericCloud-1701.qcow2 --progress

Now we need a user:

openstack project create --enable internal
#use your info
openstack user create --project internal --password test1234 --email dustin@localhost --enable dustin
cp keystonerc_admin keystonerc_dustin

vim keystonerc_dustin
unset OS_SERVICE_TOKEN
export OS_USERNAME=dustin
export OS_PASSWORD=test1234
export OS_AUTH_URL=http://192.168.1.66:5000/v2.0
export PS1='[\u@\h \W(keystone_dustin)]\$ '
export OS_TENANT_NAME=internal
export OS_REGION_NAME=RegionOne

. keystonerc_dustin
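
To confirm the new credentials actually authenticate, ask Keystone for a token:

openstack token issue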

Finally, the new user/tenant needs a private subnet and the ability to route out to the internet. The private subnet should not already be in use on your LAN.

neutron router-create router1
neutron router-gateway-set router1 external_network
neutron net-create private_network
neutron subnet-create --name private_subnet --dns-nameserver=192.168.1.1 private_network 192.168.100.0/24
neutron router-interface-add router1 private_subnet

Before we spin up a VM, you may want to browse the web GUI as an admin to see everything you can do.

`cat keystonerc_admin` and grab the OS_PASSWORD value. Then browse to http://192.168.1.66 and log in as admin with that password.

Alright. Now that you've seen the GUI with all the options, log out and log back in as your regular user. That is dustin/test1234 in my example.

Before we launch that VM, we need to configure a security group to allow SSH, upload our SSH key, and allocate a floating IP for it to use.

Project > Compute > Access & Security > Security Groups > default > Manage Rules > Add Rule > Rule > SSH > Add
#on your client machine. do a ssh-keygen first if necessary
cat ~/.ssh/id_rsa.pub  
Project > Compute > Access & Security > Key Pairs > Import Key Pair > Name: Me > Public Key: data from cat above > Import Key Pair
Project > Compute > Access & Security > Floating IPs > Allocate IP to Project > Allocate IP
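
The same three steps can also be done from the CLI while sourced as the dustin user (a sketch using newton-era openstack client subcommands):

openstack security group rule create --protocol tcp --dst-port 22 default
openstack keypair create --public-key ~/.ssh/id_rsa.pub Me
openstack floating ip create external_network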

Now let's launch the VM, give it a "public" IP, and SSH into it.

Project > Compute > Instances > Launch Instance
name: testvm
source > centos7 > +
flavor > m1.small > +
Make sure security groups has default
Make sure Key Pair has Me
Launch Instance

Once it's done spawning, click the dropdown > Associate Floating IP > Choose IP > Associate. Now, on your client machine where your SSH key lives, ssh centos@FLOATING_IP. Run a sudo yum update to make sure networking works and to update the VM.

OpenStack is insanely powerful, and we haven't even scratched the surface. Enjoy digging in and playing around with it. Once you get bored with the installed components, go back, enable some of the cool stuff I disabled for my needs, and play with that as well.
