Quickly create SELinux policies using audit2allow

Recently I was setting up MySQL for high availability when I ran into problems getting my Keepalived health check script to work.

I have two MySQL servers configured in master/master replication with a virtual IP (VIP), managed by Keepalived, that floats between the two servers. We only write to one of the masters, using the VIP. The goal is for the VIP to fail over if the primary server becomes unreachable.

I created my health check script and configured Keepalived to use it to check on MySQL. Below is a snippet from my keepalived.conf (in the full config, the track_script block lives inside the vrrp_instance definition). I tested by shutting down MySQL to force a failover of the VIP, but the failover never happened. When I ran keepalived as root from the console, the VIP failover worked, so I started to suspect a permissions or SELinux issue.

vrrp_script check_mysql {
    script "/opt/mysql/check.py"
    interval 2
    timeout 3
}

track_script {
    check_mysql
}
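The check script itself isn't shown above; as a minimal sketch, a health check can be as simple as asking mysqld whether it is alive. This hypothetical stand-in assumes client credentials are available (for example via a .my.cnf file):

#!/bin/sh
# Hypothetical stand-in for /opt/mysql/check.py.
# mysqladmin ping exits 0 when the server is reachable; Keepalived
# treats any non-zero exit status as a failed check.
exec /usr/bin/mysqladmin ping --connect_timeout=2 > /dev/null 2>&1

Keepalived runs the tracked script every interval seconds and fails the instance over once the script starts returning non-zero.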

Enter audit2allow. This tool reads the audit log and generates SELinux allow policies from the denied entries it finds. On CentOS it ships in the policycoreutils-python package:

yum install /usr/bin/audit2allow 

I grepped the audit.log file for failures and noted each context that was being denied.

grep check.py /var/log/audit/audit.log 
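ausearch and audit2allow can also surface the denials directly, which saves some manual grepping:

# Pull AVC denial events for the check script by command name
ausearch -m avc -c 'check.py'

# Or have audit2allow read the whole audit log and explain each denial
audit2allow -w -a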

After finding all the denied contexts, I used audit2allow to create allow policies.

grep keepalived_t /var/log/audit/audit.log | audit2allow -M keepalived_t
grep root_t /var/log/audit/audit.log | audit2allow -M root_t
grep tmp_t /var/log/audit/audit.log | audit2allow -M tmp_t
grep mysqld_port_t /var/log/audit/audit.log | audit2allow -M mysqld_port_t

semodule -i keepalived_t.pp
semodule -i root_t.pp
semodule -i tmp_t.pp
semodule -i mysqld_port_t.pp
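audit2allow -M writes both a compiled .pp module and a human-readable .te source file, so it's worth reviewing the generated rules before loading them, and semodule -l will confirm the modules took:

# Review the generated type-enforcement rules before installing
cat keepalived_t.te

# Confirm the modules are loaded
semodule -l | grep -E 'keepalived_t|root_t|tmp_t|mysqld_port_t'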

After loading the allow policies, the health check script ran successfully and the VIP failed over whenever MySQL went down.

Create a SQL Server linked server to Hadoop

I recently attended a SQL Saturday precon in Minneapolis, an introduction to Hadoop for SQL users. It got me interested enough to give Hadoop another try, and in my spare time over the past week I have been installing, configuring, and playing around with it. My initial impression is that Hadoop is definitely production ready, despite what you might read from some analysts. Ambari made installing the Hortonworks nodes painless.

You may not be aware of it, but it is possible to query Hadoop right from SQL Server using a linked server. In this tutorial I go through the steps needed to set up a linked server between SQL Server and Hadoop.

This tutorial was written using SQL Server 2012 and a three-node Hortonworks cluster running HDFS 2.7, MapReduce2 2.7, YARN 2.7, and Hive 1.2. The Hortonworks cluster runs on CentOS 7.1.

Let’s get started. Log into the Hadoop cluster via SSH. On the Linux side, create a new user and add that user to the hadoop group.

shell> adduser sqlserver
shell> passwd sqlserver

Add the sqlserver user to the hadoop group:

shell> usermod -a -G hadoop sqlserver
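You can confirm the group membership took effect:

shell> id sqlserver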

I ran through part of the Hive tutorial and used the GroupLens MovieLens data set for the sample data.

SSH into one of the Hadoop nodes and perform the following steps to create a table and load the data into it.

shell> su - hdfs
shell> wget http://files.grouplens.org/datasets/movielens/ml-100k.zip
shell> unzip ml-100k.zip
shell> hive

hive> CREATE TABLE u_data (
        userid INT,
        movieid INT,
        rating INT,
        unixtime STRING)
      ROW FORMAT DELIMITED
      FIELDS TERMINATED BY '\t'
      STORED AS TEXTFILE;

hive> LOAD DATA LOCAL INPATH '/home/hdfs/ml-100k/u.data' INTO TABLE u_data;
hive> exit;

You should now have data loaded into the Hive table we just created in Hadoop.
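To sanity-check the load, run a quick count non-interactively with hive -e; the ml-100k set contains 100,000 ratings, so the count should come back as 100000:

shell> hive -e 'SELECT COUNT(*) FROM u_data;'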

Next, download the Microsoft Hive ODBC Driver and install it on your SQL Server machine. Select the 32-bit or 64-bit driver, whichever is appropriate for your SQL Server installation.

Configure an ODBC data source (DSN) using the “sqlserver” username and password you created earlier.
[Screenshot: HIVE_ODBC]

Log into SQL Server Management Studio and create the linked server, pointing its data source at the ODBC DSN you just configured.
[Screenshot: linked-server-1]
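If you prefer T-SQL to the GUI, the setup looks roughly like this. It is a sketch that assumes both the linked server and the ODBC DSN are named HIVE, going through MSDASQL, the OLE DB provider for ODBC data sources:

EXEC master.dbo.sp_addlinkedserver
    @server     = N'HIVE',      -- name used in OPENQUERY references
    @srvproduct = N'HIVE',
    @provider   = N'MSDASQL',   -- OLE DB provider for ODBC data sources
    @datasrc    = N'HIVE';      -- the ODBC DSN configured above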

Configure the Hadoop username and password (the sqlserver account created earlier).
[Screenshot: linked-server-2]
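The login mapping can also be scripted; this sketch maps all local logins to the sqlserver account (the password is a placeholder, substitute the one set on the cluster):

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname  = N'HIVE',
    @useself     = 'FALSE',
    @locallogin  = NULL,          -- NULL applies the mapping to all local logins
    @rmtuser     = N'sqlserver',
    @rmtpassword = N'<password>'; -- placeholder; use your own password

With the login mapped, test the linked server with a pass-through query: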

select *
from openquery (HIVE, 'select userid, movieid, rating from default.u_data')
where userid = 196;

If everything is configured correctly, you should be able to query Hadoop right from SQL Server. Note that the outer WHERE clause above is applied by SQL Server after the pass-through query returns; any filter you want Hive itself to evaluate should go inside the OPENQUERY string.
[Screenshot: HIVE_query]