Setting up Razor on CentOS 7


This guide will walk you through setting up Razor, a next-generation provisioning solution for bare-metal and virtual servers. Razor puts an API on top of bare-metal and virtual server provisioning, making it extremely easy to provision one, two, or two hundred servers very quickly. The software was co-developed by EMC and Puppet Labs, and the entire Razor project is open source and freely available to anyone who wants to use it.

This tutorial was written using CentOS 7.

STEP 1. Install Postgres
To set up Razor you will first need the PostgreSQL database installed. Postgres is used to store facts about the nodes being provisioned through Razor.

Install the Postgres database server and initialize the database:

$ sudo yum install postgresql-server postgresql-contrib

$ sudo postgresql-setup initdb

Configure Postgres to allow remote access by adding the subnet or IP address of the Razor server that will connect to Postgres:

$ sudo vim /var/lib/pgsql/data/pg_hba.conf
host     all      all      172.16.1.0/24     trust

$ sudo vim /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*'
$ sudo systemctl start postgresql
su - postgres
psql
CREATE USER razoruser WITH PASSWORD 'password';

CREATE DATABASE razor_prd OWNER razoruser;

# exit the psql prompt
\q

STEP 2. Install Razor Server and Client

$ sudo yum install http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
$ sudo yum install razor-server

Install the Razor client

$ sudo yum install ruby
$ sudo gem install razor-client

STEP 3. Configure Razor

Configure the Razor server's database connection string by editing the Razor config file /etc/razor/config.yaml:

production:
  database_url: 'jdbc:postgresql:razor_prd?user=razoruser&password=password'

Start the razor-server service and verify connectivity to the Razor API:

$ sudo systemctl start razor-server
$ razor -u http://localhost:8150/api -v

STEP 4. Download and extract the Razor microkernel

$ cd /var/lib/razor/repo-store/
$ wget http://links.puppetlabs.com/razor-microkernel-latest.tar
$ tar -xvf ./razor-microkernel-latest.tar

STEP 5. Install the tftp server

yum install tftp tftp-server xinetd

# enable tftp in xinetd by setting disable to no
vim /etc/xinetd.d/tftp
disable = no

service xinetd start
service xinetd status

cd /var/lib/tftpboot/
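For reference, the stock /etc/xinetd.d/tftp that ships with CentOS 7 looks roughly like the fragment below; disable is the only line you need to change, and the server_args path already points at /var/lib/tftpboot:

```
service tftp
{
        socket_type             = dgram
        protocol                = udp
        wait                    = yes
        user                    = root
        server                  = /usr/sbin/in.tftpd
        server_args             = -s /var/lib/tftpboot
        disable                 = no
        per_source              = 11
        cps                     = 100 2
        flags                   = IPv4
}
```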

Install the iPXE boot image

cd /var/lib/tftpboot
wget http://boot.ipxe.org/undionly.kpxe
chmod 644 ./undionly.kpxe

Save the Razor iPXE bootstrap script as bootstrap.ipxe

cd /var/lib/tftpboot
curl "http://172.16.1.75:8150/api/microkernel/bootstrap?nic_max=4" > ./bootstrap.ipxe
chmod 644 ./bootstrap.ipxe

STEP 6. Install the CentOS ISO

This is the ISO which we will install when we PXE boot servers with Razor. Install nginx to serve the ISO over HTTP, then download the CentOS ISO:

yum install nginx
systemctl start nginx
systemctl enable nginx

cd /usr/share/nginx/html/
wget http://mirrors.centos.webair.com/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1511.iso

chmod 644 ./CentOS-7-x86_64-DVD-1511.iso

# Verify the ISO is reachable over HTTP (spider mode avoids re-downloading the full ISO)
wget --spider http://localhost/CentOS-7-x86_64-DVD-1511.iso

STEP 7. Configure the DHCP server on your network so clients can PXE boot.

Getting the PXE boot to work can be a bit tricky. You must define an iPXE class in the DHCP server.

Create a Policy / Class

Configure the DHCP Scope Options for the iPXE class

When you have finished configuring the iPXE DHCP class, double-check that the scope options match the class settings above.
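The policy and class configuration here assumes a Windows DHCP server. If your network uses ISC dhcpd instead, an equivalent class can be expressed directly in dhcpd.conf; the following is a sketch, and the subnet, range, and Razor/TFTP host (172.16.1.75 here) should be adjusted to your environment:

```
# /etc/dhcp/dhcpd.conf (fragment) -- 172.16.1.75 is the Razor/TFTP host
subnet 172.16.1.0 netmask 255.255.255.0 {
  range 172.16.1.100 172.16.1.200;
  next-server 172.16.1.75;

  if exists user-class and option user-class = "iPXE" {
    # Client is already running iPXE: hand it the Razor bootstrap script
    filename "bootstrap.ipxe";
  } else {
    # Plain PXE firmware: chainload the iPXE binary first
    filename "undionly.kpxe";
  }
}
```

The user-class check prevents an infinite loop: without it, iPXE would be told to download undionly.kpxe again on every boot.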

STEP 8. Use the Razor Client to create a CentOS 7 Repo

$ razor create-repo --name centos70 \
--iso-url http://chef02.lab.net/CentOS-7.0-1406-x86_64-DVD.iso \
--task centos/7

STEP 9. Use the Razor Client to create a tag. In this example I create a very specific tag: if a node boots on the network with this MAC address, then install CentOS 7.

$ razor create-tag --name node01 --rule '["in", ["fact", "macaddress"], "00:0c:29:49:92:71"]'

Here is an example tag which includes two MAC addresses

razor create-tag --name twoNodes --force --rule '["in", ["fact", "macaddress"], "00:0c:29:49:92:71", "00:0c:29:6a:d1:74"]'
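The rule syntax is plain JSON, so a small shell helper (a hypothetical sketch; the MAC list below is just an example) can build the "in" array for any number of MAC addresses:

```shell
# Build a razor "in" rule matching any MAC address in the list
macs="00:0c:29:49:92:71 00:0c:29:6a:d1:74"
rule='["in", ["fact", "macaddress"]'
for m in $macs; do
  rule="$rule, \"$m\""
done
rule="$rule]"
echo "$rule"
```

The resulting string can then be passed straight to the client: razor create-tag --name twoNodes --rule "$rule".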

STEP 10. Define your policy

$ vim policy.json
{
  "name": "centos-for-small",
  "repo": "centos70",
  "task": "centos/7",
  "broker": "noop",
  "enabled": true,
  "hostname": "host${id}.lab.net",
  "root_password": "password",
  "max_count": 20,
  "tags": ["node01"]
}
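A malformed policy file produces an unhelpful error from the client, so it is worth validating the JSON before running create-policy. A quick sketch using Python's built-in json.tool (shown with python3; on stock CentOS 7 the interpreter is python):

```shell
# Write the policy file and sanity-check the JSON before handing it to razor
cat > policy.json <<'EOF'
{
  "name": "centos-for-small",
  "repo": "centos70",
  "task": "centos/7",
  "broker": "noop",
  "enabled": true,
  "hostname": "host${id}.lab.net",
  "root_password": "password",
  "max_count": 20,
  "tags": ["node01"]
}
EOF
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid JSON"
```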

STEP 11. Create the Policy

razor create-policy --json policy.json

STEP 12. At this point you should be able to boot the server with the MAC address specified above, and CentOS should begin installing.

References
Allow remote Postgres connections
http://www.thegeekstuff.com/2014/02/enable-remote-postgresql-connection/

Install Postgres on CentOS 7
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-centos-7

Install Razor
https://github.com/puppetlabs/razor-server/wiki/Installation#installing-packages

https://github.com/puppetlabs/razor-server/wiki/Getting-started

http://technodrone.blogspot.com/2013/11/razor-dhcp-and-tftp.html


Setup Mesos-DNS

Over the last month I have been evaluating container clustering software. I started with Kubernetes, then Rancher (which uses Swarm), and Mesos. I am going through these evaluations to determine which container clustering software will fit my employer's needs best.

ENVIRONMENT: CentOS 7.0 running three Mesos masters and two Mesos slaves

NAME: mesos01.lab.net
IP 172.16.1.80
services: zookeeper, marathon, mesos-master

NAME: mesos02.lab.net
IP 172.16.1.81
services: zookeeper, marathon, mesos-master

NAME: mesos03.lab.net
IP 172.16.1.82
services: zookeeper, marathon, mesos-master

NAME: mesos04.lab.net
IP 172.16.1.83
services: mesos-slave

NAME: mesos05.lab.net
IP 172.16.1.84
services: mesos-slave

STEP 1. Prerequisites: install Go and Git

$ yum install golang git
$ export GOPATH=$HOME/go
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/tools/godep

$ go get github.com/mesosphere/mesos-dns/logging
$ go get github.com/mesosphere/mesos-dns/records
$ go get github.com/mesosphere/mesos-dns/resolver

STEP 2. Clone the mesos-dns repository and build the mesos-dns binary.

$ git clone https://github.com/mesosphere/mesos-dns.git
$ cd ./mesos-dns
$ go build -o mesos-dns

After building mesos-dns you should have a mesos-dns binary file in your
./mesos-dns directory

STEP 3. In the ./mesos-dns directory there is a config.json.sample example file.
Copy this file and edit it for your own environment.

$ cp config.json.sample config.json

This link describes each of the fields in the config.json file.

{
  "zk": "zk://172.16.1.80:2181,172.16.1.81:2181,172.16.1.82:2181/mesos",
  "masters": ["172.16.1.80:5050","172.16.1.81:5050","172.16.1.82:5050"],
  "stateTimeoutSeconds": 300,
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "mesos",
  "ns": "ns1",
  "port": 53,
  "resolvers": ["172.16.1.21"],
  "timeout": 5,
  "listener": "0.0.0.0",
  "SOAMname": "root.ns1.mesos",
  "SOARname": "ns1.mesos",
  "SOARefresh": 60,
  "SOARetry":   600,
  "SOAExpire":  86400,
  "SOAMinttl": 60,
  "dnson": true,
  "httpon": true,
  "httpport": 8123,
  "externalon": true,
  "recurseon": true,
  "IPSources": ["mesos", "host"],
  "EnforceRFC952": false
}

STEP 4. Run the mesos-dns with the config.json file to verify it is properly formatted.

$ ./mesos-dns -config=config.json

On the mesos slave, create a directory for the config.json file. I have designated mesos04.lab.net as the mesos-dns server for my cluster.

$ mkdir /etc/mesos-dns

STEP 5. Copy the mesos-dns binary to the mesos slave which you have designated as the mesos-dns server. In this example I copy the mesos-dns binary to mesos slave mesos04.

$ scp ./mesos-dns/mesos-dns root@mesos04.lab.net:/usr/local/bin/mesos-dns

STEP 6. Configure the constraints for the mesos-dns service. This essentially tells Marathon to constrain the mesos-dns service to host mesos04.lab.net. For example, you may want to designate two nodes in your cluster to run mesos-dns. The constraints directive ensures that mesos-dns does not try to run on other hosts.
Constraints: hostname:CLUSTER:mesos04.lab.net
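A Marathon app definition carrying that constraint might look roughly like this (a sketch: the id and resource sizes are assumptions, while the binary and config paths come from the earlier steps):

```json
{
  "id": "mesos-dns",
  "cmd": "/usr/local/bin/mesos-dns -config=/etc/mesos-dns/config.json",
  "cpus": 0.1,
  "mem": 128,
  "instances": 1,
  "constraints": [["hostname", "CLUSTER", "mesos04.lab.net"]]
}
```

Note that binding port 53 generally requires the task to run with root privileges on the slave.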


STEP 7. Update the network-scripts file with the IP address of the host running mesos-dns.

$ vim /etc/sysconfig/network-scripts/ifcfg-ens160
DNS1="172.16.1.83"
DNS2="172.16.1.21"

STEP 8. After updating the network-scripts file, restart the network service:

systemctl restart network

STEP 9. If you have any applications running in Marathon you should be able to look them up using mesos-dns. For example, I had an application named nodehello2 and was able to resolve it using mesos-dns.

$ nslookup nodehello2.marathon.mesos
Server:         172.16.1.83
Address:        172.16.1.83#53

Name:   nodehello2.marathon.mesos
Address: 172.16.1.84
Name:   nodehello2.marathon.mesos
Address: 172.16.1.83


STEP 10. Additional verification can be done by hitting the node hello world app endpoint with curl, using the application name http://nodehello2.marathon.mesos.

[root@mesos04 mesos-dns]$ docker ps
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS              PORTS                     NAMES
2f6d8a4f99fd        172.16.1.60:5000/node_hello:2.0   "/bin/sh -c '/node/bi"   35 hours ago        Up 35 hours         0.0.0.0:31495->8081/tcp   mesos-a78b235a-8427-4743-9bcc-5d6aed338412-S3.3698d9f9-a25a-457a-8602-50d9c26e70a7
38ca56e041f3        172.16.1.60:5000/node_hello:1.0   "/bin/sh -c '/node/bi"   35 hours ago        Up 35 hours         0.0.0.0:31884->8081/tcp   mesos-a78b235a-8427-4743-9bcc-5d6aed338412-S3.e16a1e3e-a662-40da-b353-318de55178dc

[root@mesos04 mesos-dns]$ curl http://nodehello2.marathon.mesos:31495
Version 2.0
Hello World
[root@mesos04 mesos-dns]$ curl http://nodehello2.marathon.mesos:31884
Version 1.0
Hello World

STEP 11. You can also look up the ports an application is listening on via SRV records. For example, nodehello2 is running on port 31472 on s2.marathon.slave.mesos and port 31495 on s3.marathon.slave.mesos.

[root@mesos04 mesos-dns]$ dig _nodehello2._tcp.marathon.mesos SRV

;; ANSWER SECTION:
_nodehello2._tcp.marathon.mesos. 60 IN  SRV     0 0 31472 nodehello2-uhq4s-s2.marathon.slave.mesos.
_nodehello2._tcp.marathon.mesos. 60 IN  SRV     0 0 31495 nodehello2-sbk5j-s3.marathon.slave.mesos.