Copy a VM between two ESXi servers without shared storage

The VMware ovftool utility can be used to copy a VM between two ESXi servers that are not connected via shared storage. This comes in handy in a home lab environment. In the example below I am copying the VM “WIN10” to another ESXi host on my home network.

[root@mysql04p ~] ovftool -ds=datastore1 vi://root@172.16.1.11/WIN10 vi://root@172.16.1.12

Enter login information for source vi://172.16.1.11/
Username: root
Password: ********
Opening VI source: vi://root@172.16.1.11:443/WIN10
Enter login information for target vi://172.16.1.12/
Username: root
Password: ********
Opening VI target: vi://root@172.16.1.12:443/
Deploying to VI: vi://root@172.16.1.12:443/
Transfer Completed
Completed successfully
[root@mysql04p ~]

In my home lab I’m not running a full VMware vSphere cluster, and the free version of ESXi does not offer the clone feature. When testing various applications I often run into the need to clone VMs on the same ESXi host. This can easily be accomplished with ovftool. Below I clone the VM “KVM01” to “KVM01v2.”

[root@mysql04p ~] ovftool -ds=datastore1 --name=KVM01v2 --diskMode=thin vi://root@172.16.1.11/KVM01 vi://root@172.16.1.11/

Enter login information for source vi://172.16.1.11/
Username: root
Password: ********
Opening VI source: vi://root@172.16.1.11:443/KVM01
Enter login information for target vi://172.16.1.11/
Username: root
Password: ********
Opening VI target: vi://root@172.16.1.11:443/
Deploying to VI: vi://root@172.16.1.11:443/

Quickly create SELinux policies using audit2allow

Recently I was setting up MySQL in a high availability configuration when I ran into problems getting my keepalived health check script to work.

I have two MySQL servers configured in master/master replication with a VIP (managed by keepalived) that floats between the two servers. We only write to one of the masters, using the VIP. The goal is to have the VIP fail over if the primary server becomes unreachable.

I created my health check script and configured keepalived to use it to check on MySQL. Below is a snippet from my keepalived.conf file. I tested by shutting down MySQL to force a failover of the VIP, but the failover was not occurring. When I ran keepalived as root from the console the VIP failover worked, so I started to suspect a permissions or SELinux issue.

vrrp_script check_mysql {
    script "/opt/mysql/check.py"
    interval 2
    timeout 3
}

track_script {
    check_mysql
}
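
The contents of /opt/mysql/check.py are not shown here. As a rough illustration only, a minimal shell equivalent of the kind of check it performs might look like the following (any credentials are assumed to come from a ~/.my.cnf file):

#!/bin/sh
# Hypothetical MySQL health check for keepalived (illustration only).
# keepalived treats a non-zero exit status as a failed check and
# fails the VIP over to the backup node.
if /usr/bin/mysqladmin ping > /dev/null 2>&1; then
    exit 0
else
    exit 1
fi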

Enter audit2allow: this tool reads the audit logs and creates SELinux allow policies from the denied entries.

yum install /usr/bin/audit2allow 
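
On CentOS/RHEL 7 (an assumption about the release in use here) that file path resolves to the policycoreutils-python package, so installing the package directly works as well:

yum -y install policycoreutils-python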

I grepped the audit.log file to find the failures, then noted each context that was being denied.

grep check.py /var/log/audit/audit.log 

After finding all the denied contexts I used audit2allow to create allow policies.

grep keepalived_t /var/log/audit/audit.log | audit2allow -M keepalived_t
grep root_t /var/log/audit/audit.log | audit2allow -M root_t
grep tmp_t /var/log/audit/audit.log | audit2allow -M tmp_t
grep mysqld_port_t /var/log/audit/audit.log | audit2allow -M mysqld_port_t

semodule -i keepalived_t.pp
semodule -i root_t.pp
semodule -i tmp_t.pp
semodule -i mysqld_port_t.pp
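
To confirm the modules were loaded, list the installed SELinux modules and grep for the names generated above:

semodule -l | grep -E 'keepalived_t|root_t|tmp_t|mysqld_port_t'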

After loading the allow policies the health check script ran successfully, and a VIP failover occurred when MySQL went down.

Create a SQL Server linked server to Hadoop

I recently attended a SQL Saturday precon in Minneapolis. The precon was an introduction to Hadoop for SQL users, and it got me interested enough to give Hadoop another try. In my spare time between last weekend and this one I have been installing, configuring, and playing around with Hadoop. My initial impression is that Hadoop is definitely production ready, despite what you might read from some analysts. Hortonworks Ambari made installing the Hortonworks nodes painless.

You may not be aware of this, but it is possible to query Hadoop right from SQL Server using a linked server. In this tutorial I go through the steps needed to set up a linked server between Hadoop and SQL Server.

This tutorial was written using SQL Server 2012 and a three-node Hortonworks cluster running HDFS 2.7, MapReduce2 2.7, YARN 2.7, and Hive 1.2. The Hortonworks cluster is running on CentOS 7.1.

Let’s get started. Log into the Hadoop cluster via SSH. On the Linux cluster, create a new user and add that user to the hadoop group.

shell> adduser sqlserver
shell> passwd sqlserver

Add the sqlserver user to the hadoop group

shell> usermod -a -G hadoop sqlserver

I ran through part of the Hive tutorial and used the GroupLens data set for sample data.

SSH into one of the Hadoop nodes and perform the following steps to load the data into Hadoop and create a table.

shell> su hdfs
shell> wget http://files.grouplens.org/datasets/movielens/ml-100k.zip
shell> unzip ml-100k.zip
shell> hive
hive>CREATE TABLE u_data (
  userid INT,
  movieid INT,
  rating INT,
  unixtime STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

hive>LOAD DATA LOCAL INPATH '/home/hdfs/ml-100k/u.data' INTO TABLE u_data;
hive> exit;

You should now have data loaded into the Hive table we just created in Hadoop.
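
As a quick sanity check you can count the rows from the shell; the ml-100k u.data file contains 100,000 ratings, so the count should come back as 100000 (this assumes the hive CLI is on your PATH):

shell> hive -e "SELECT COUNT(*) FROM u_data;"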

Next, download the Microsoft Hive ODBC Driver and install it on your SQL Server machine. Select the 32-bit or 64-bit driver, whichever is appropriate for your SQL Server installation.

Configure an ODBC DSN for the driver using the “sqlserver” username and password you created earlier.

Log into SQL Server Management Studio and configure the linked server, using the DSN you just created as the data source.

Configure the Hadoop username and password on the linked server’s security page (the “sqlserver” account created earlier).
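
If you prefer to script the linked server rather than click through SSMS, the same setup can be done with sqlcmd. This is only a sketch: the linked server name HIVE, the DSN name HiveDSN, and the local instance name are assumptions you would adjust to match your environment.

sqlcmd -S localhost -E -Q "EXEC master.dbo.sp_addlinkedserver @server=N'HIVE', @srvproduct=N'HIVE', @provider=N'MSDASQL', @datasrc=N'HiveDSN';"
sqlcmd -S localhost -E -Q "EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'HIVE', @useself=N'False', @locallogin=NULL, @rmtuser=N'sqlserver', @rmtpassword=N'YourPassword';"

With the linked server in place, test it with an OPENQUERY call: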

select *
from openquery (HIVE, 'select userid, movieid, rating from default.u_data')
where userid = 196;

If everything was configured correctly you should be able to query Hadoop from SQL Server.

CentOS 7 Join Active Directory Domain

Before you begin, ensure that DNS on the Linux computer you wish to join to the domain is pointed at the Active Directory server. Active Directory relies heavily on DNS to function.
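
A quick way to verify DNS is pointed at the domain controllers is to look up the domain’s LDAP SRV records (using the LAB.NET domain from the steps below):

[root@test02 ~] dig -t SRV _ldap._tcp.lab.net +short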

STEP 1. Ensure the following packages are installed

yum -y install realmd sssd oddjob \
  oddjob-mkhomedir adcli samba-common

STEP 2. From the computer you will join to the domain, run realm discover to verify connectivity to the domain controllers.

[root@test02 ~] realm discover LAB.NET
lab.net
  type: kerberos
  realm-name: LAB.NET
  domain-name: lab.net
  configured: kerberos-member
  server-software: active-directory
  client-software: sssd
  required-package: oddjob
  required-package: oddjob-mkhomedir
  required-package: sssd
  required-package: adcli
  required-package: samba-common
  login-formats: %U
  login-policy: allow-realm-logins

STEP 3. Join the Active Directory domain. You must use an account that has privileges to join a computer to the domain.

[root@test02 ~] realm join -U adminuser LAB.NET
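
You can confirm the join with realm list, which should now report lab.net as a configured realm:

[root@test02 ~] realm list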

STEP 4. Verify you can retrieve directory information for an Active Directory user

[root@test02 ~] id LAB\\ktest
uid=522401118(ktest) gid=522400513(domain users) 
groups=522400513(domain users)

STEP 5. Verify that you can su to an Active Directory user

[root@test02 ~] su - ktest
Last login: Sun Sep 20 05:21:42 CDT 2015 on pts/0
[ktest@test02 ~]$

STEP 6. To remove the requirement of fully qualifying the Active Directory username, edit the sssd.conf file. After this change you will no longer need to prefix DOMAIN\\ when logging in as an Active Directory user.

[root@test02 ~] vi /etc/sssd/sssd.conf
use_fully_qualified_names = False
[root@test02 ~] systemctl restart sssd 
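
After the restart, the short username should resolve on its own:

[root@test02 ~] id ktest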

SQL Server query tuning

Recently I encountered an interesting issue with the SQL Server query engine. I had received a high CPU alert from a SQL Server 2008 server, so I logged in and looked at the query plan cache to see which queries were causing load. The query below quickly caught my attention: each time it ran it was performing 207,651 logical reads. I found this interesting because the WHERE clause seemed very selective. I also noticed that there were extra parentheses around the filters in the WHERE clause, and upon removing them the query went from 207,651 logical reads to 4.
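
For reference, queries like this can be found by ordering the plan cache execution statistics DMV by logical reads. A minimal sketch using sqlcmd with a trusted connection (the server name SQLHOST is a placeholder):

sqlcmd -S SQLHOST -E -Q "SELECT TOP 10 qs.total_logical_reads, qs.execution_count, SUBSTRING(st.text, 1, 100) AS query_text FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st ORDER BY qs.total_logical_reads DESC;"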

Original query:

set statistics io on
SELECT table1.brcd, table2.Pkt_x, table2.PKT_NBR 
FROM table1 WITH(NOLOCK) INNER JOIN table2 WITH(NOLOCK) ON table1.PKT_NB = table2.PKT_NB 
WHERE (((table2.Pkt_x)=1) AND ((table2.PKT_NBR)=5630));
(1 row(s) affected)
Table 'table2'. Scan count 9, logical reads 207651, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'table1'. Scan count 1, logical reads 4, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

I gathered the following information and forwarded it to the customer. They replied saying that the query was generated by a Microsoft Access database and that they had refactored the query, removing the extra parentheses.

Refactored query:

SELECT table1.brcd, table2.Pkt_x, table2.PKT_NBR
FROM table1 WITH(NOLOCK) INNER JOIN table2 WITH(NOLOCK) ON table1.PKT_NB = table2.PKT_NB
WHERE table2.Pkt_x=1 and table2.PKT_NBR='5630'
Table 'table1'. Scan count 1, logical reads 4, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'table2'. Scan count 1, logical reads 4, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SolarWinds SDK script to unmanage nodes

SolarWinds offers a PowerShell SDK to manipulate nodes programmatically. The SolarWinds SDK can be downloaded here.

I have created a demonstration script that will unmanage a node for 2 hours so that maintenance can be performed on it.

Set up the connection to the SolarWinds application server.

$secpasswd = ConvertTo-SecureString "password" -AsPlainText -Force
$mycreds = New-Object System.Management.Automation.PSCredential("LAB\username01", $secpasswd)

Search for the node named ‘SQL01.lab.net’ and set it as unmanaged for 2 hours. This is very useful if you have planned maintenance that happens once per week and want to programmatically unmanage the node each week.

$swis = Connect-Swis -Hostname swserver.lab.net -Credential $mycreds
$uris = Get-SwisData $swis "SELECT Uri FROM Orion.Nodes where Caption='SQL01.lab.net'"
$uris | ForEach-Object { Set-SwisObject $swis $_ @{Status=9;Unmanaged=$true;UnmanageFrom=[DateTime]::UtcNow;UnmanageUntil=[DateTime]::UtcNow.AddHours(2)} }

Adding nodes to Rundeck

I am still gaining operational knowledge of Rundeck, which is an awesome job scheduling tool. Recently I needed to set up a job that is scheduled to run on a remote node. To do this you must edit the resources.xml file under the project directory. For this to work you must also set up SSH key pairs between the Rundeck server and the remote node; check out this guide from DigitalOcean on setting up SSH key pairs, or see the sketch below.
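
As a sketch of that key pair setup (the rundeck home directory /var/lib/rundeck and the remote account userAccount@servername are assumptions based on a standard RPM install and the sample node below), generate a key for the account Rundeck runs jobs as and copy it to the remote node:

# create an SSH key for the rundeck service account (paths are assumptions)
sudo -u rundeck mkdir -p /var/lib/rundeck/.ssh
sudo -u rundeck ssh-keygen -t rsa -b 4096 -f /var/lib/rundeck/.ssh/id_rsa -N ""
# copy the public key to the remote node's authorized_keys
sudo -u rundeck ssh-copy-id -i /var/lib/rundeck/.ssh/id_rsa.pub userAccount@servername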

/var/rundeck/projects/[projectname]/etc/resources.xml

Sample node added to the resources.xml file

<project>
  <node name="servername" description="Dev MySQL" tags="" hostname="servername" osArch="amd64" osFamily="unix" osName="Linux" osVersion="2.6.32-504.8.1.el6.x86_64" username="userAccount"/>
</project>

After adding the node to Rundeck you must restart the service for the node to be recognized.

service rundeckd restart