SQL Server query tuning

Recently I encountered an interesting issue with the SQL Server query engine. I had received a high-CPU alert from a SQL Server 2008 server, so I logged in and looked at the plan cache to see which queries were causing load. The query below quickly caught my attention: each time it ran it performed 207,651 logical reads. That seemed odd, because the WHERE clause looked very selective. I also noticed the extra parentheses wrapped around the filters in the WHERE clause. Upon removing these parentheses the query went from 207,651 logical reads to 4.
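To find statements like this in the plan cache, a query along these lines works (a sketch using the standard execution-stats DMVs; the TOP count and sort column are arbitrary choices, and it requires VIEW SERVER STATE permission):

```sql
-- Top cached statements by total logical reads
SELECT TOP 10
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC;
```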

Original query:

set statistics io on
SELECT table1.brcd, table2.Pkt_x, table2.PKT_NBR 
FROM table1 WITH(NOLOCK) INNER JOIN table2 WITH(NOLOCK) ON table1.PKT_NB = table2.PKT_NB 
WHERE (((table2.Pkt_x)=1) AND ((table2.PKT_NBR)=5630));
(1 row(s) affected)
Table 'table2'. Scan count 9, logical reads 207651, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'table1'. Scan count 1, logical reads 4, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

I gathered this information and forwarded it to the customer. They replied that the query was generated by a Microsoft Access database, and they refactored it to remove the extra parentheses.

Refactored query:

SELECT table1.brcd, table2.Pkt_x, table2.PKT_NBR
FROM table1 WITH(NOLOCK) INNER JOIN table2 WITH(NOLOCK) ON table1.PKT_NB = table2.PKT_NB
WHERE table2.Pkt_x=1 and table2.PKT_NBR='5630'
Table 'table1'. Scan count 1, logical reads 4, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'table2'. Scan count 1, logical reads 4, physical reads 0, 
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

MySQL Performance Schema

The MySQL Performance Schema is an extremely valuable feature in MySQL 5.6 for finding queries that have a performance impact on the database. While the Performance Schema itself is on by default in 5.6, most of its consumers and instruments are not, and they can be enabled through several different methods. The main reason to hesitate is overhead: Percona has a good write-up on the performance impact of enabling the Performance Schema. Be careful when enabling it in a production environment, as it can add a significant amount of overhead.

Setting the performance_schema directives in the my.cnf file ensures that the Performance Schema is enabled at startup. You can also enable most of it by running SQL statements after MySQL is up and running. This is useful if you want to turn instrumentation on to troubleshoot a performance issue and then turn it back off without restarting MySQL.
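For example, the statement-history consumers can be toggled at runtime with a plain UPDATE (a sketch; the LIKE pattern here is just an illustration):

```sql
-- Enable the statement history consumers without a restart
UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME LIKE 'events_statements_history%';

-- Confirm the change
SELECT * FROM performance_schema.setup_consumers;
```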

vim /etc/my.cnf
performance_schema_consumer_events_statements_history_long=ON
performance_schema_consumer_events_statements_history=ON
performance_schema_consumer_events_stages_current=ON
performance_schema_consumer_events_stages_history=ON
performance_schema_consumer_events_stages_history_long=ON
performance_schema_consumer_events_waits_current=ON
performance_schema_consumer_events_waits_history=ON
performance_schema_consumer_events_waits_history_long=ON

performance_schema=ON

performance_schema_instrument='%=ON'

Consumers are configured in the setup_consumers table, which specifies where the collected data goes.

select * from performance_schema.setup_consumers;

[Screenshot: performance_schema.setup_consumers output in MySQL Workbench]

To view which objects the Performance Schema monitors, query the setup_objects table.
The screenshot below shows that performance statistics will be collected for all databases except mysql, performance_schema, and information_schema.
[Screenshot: performance_schema.setup_objects]

The setup_actors table determines who is monitored; you can filter by host, user, or role.
[Screenshot: performance_schema.setup_actors]

To view which instruments are currently enabled, run the following query.

select * from performance_schema.setup_instruments;

Instruments can be enabled by running an UPDATE statement against the table.

update performance_schema.setup_instruments
set enabled = 'YES', timed = 'YES'
where name = 'statement/sql/lock_tables';

[Screenshot: performance_schema.setup_instruments]

After the Performance Schema has been configured, you can query the sys schema views for the statistics collected on queries. (The sys schema ships with MySQL 5.7 and later; on 5.6 it must be installed separately.)

select * 
from sys.`x$statements_with_full_table_scans`

Here it is important to note the difference between the average rows sent and the average rows examined. A large gap between the two means the optimizer is examining many rows but sending only a few back to the client, which may indicate that an index is needed.
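You can surface the worst offenders directly; a sketch against the same view (column names as exposed by the sys schema, ratio computed ad hoc):

```sql
-- Statements examining far more rows than they return (likely index candidates)
SELECT query,
       exec_count,
       rows_sent_avg,
       rows_examined_avg,
       rows_examined_avg / NULLIF(rows_sent_avg, 0) AS examine_to_send_ratio
FROM sys.`x$statements_with_full_table_scans`
ORDER BY rows_examined_avg DESC
LIMIT 10;
```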

[Screenshot: sys.x$statements_with_full_table_scans output in MySQL Workbench]

Additional sys views present statistics such as the number of temporary tables created.

select * 
from sys.`x$statements_with_temp_tables`;

select * 
from sys.`x$statement_analysis`;

Reference: for more in-depth information on the Performance Schema, see A Visual Guide to the MySQL Performance Schema.

How to handle bulk UPDATE statements in large tables

Sometimes we are required to do mass database updates. On very large tables this can become a problem. For example, I recently had a table with 80 million records that required a mass UPDATE. Instead of running one large UPDATE statement, it is better to break the work into smaller batches. This reduces the locking incurred on the table in question.

WHILE (2 > 1)
  BEGIN
    BEGIN TRANSACTION
     UPDATE TOP(1000) dbo.tbl_shipments
     SET ship_zone = 99999
     WHERE ship_zone = 0;
    
    IF @@ROWCOUNT = 0
      BEGIN
        COMMIT TRANSACTION
         BREAK
      END
    COMMIT TRANSACTION
    -- 200 ms delay between batches
    WAITFOR DELAY '00:00:00.200'
  END -- WHILE
GO

There is a really good article at http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes which goes into depth about the issue. Although this article is focused on deletes it also applies to UPDATE statements.
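The same chunking pattern carries over to deletes; a minimal sketch (the table name and date filter here are hypothetical):

```sql
-- Delete in 1000-row chunks until nothing matches the filter
WHILE (1 = 1)
BEGIN
    DELETE TOP (1000) FROM dbo.tbl_shipments
    WHERE ship_date < '2010-01-01';

    IF @@ROWCOUNT = 0 BREAK;

    -- brief pause so other sessions can acquire locks
    WAITFOR DELAY '00:00:00.200';
END
```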

SolarWinds SAM AppInsight for SQL Server causing deadlocks

I recently found a bug in the way SolarWinds monitors our databases. I am running SolarWinds Server & Application Monitor 11.1 with the AppInsight for SQL feature. Ever since I turned on AppInsight I started seeing deadlocks occurring in my SQL Server environment. Upon further investigation I noticed that the AppInsight for SQL monitoring queries were performing read committed operations on system tables in all of my databases.

I sent the email below to SolarWinds support and suggested that the developers change the query to read uncommitted. Several days later a SolarWinds application engineer replied with a script for me to run. The script updated the monitoring query to use read uncommitted, and I saw an immediate drop in the number of deadlock alerts coming from my SQL Server. The moral of the story is to test monitoring tools thoroughly before using them in production; in this case the monitoring server was at times causing transactions to be terminated unexpectedly.


-----Original Message-----
From: myname@mycompany.com
Sent: Monday, January 12, 2015 12:31 pm GMT (GMT+00:00)
Subject: appinsight for sql is causing deadlocks

The issue I am seeing is that solarwinds is locking system tables when its
app insight sql monitor runs. Here is one example where app insight
deadlocked on tempdb.sys.sysidxstats. Might I suggest that solarwinds
change the software so it does a “dirty read” or in other words read
uncommitted. This would prevent the deadlocks from happening.

Here is an excerpt from the deadlock XML file. The SAM AppInsight monitor is running at isolation level read committed.

<process id="process7481048" taskpriority="0" logused="0" waitresource="KEY: 2:281474980249600 (45001d8e01fc)" waittime="1219" ownerId="825842724" transactionname="SELECT" lasttranstarted="2015-01-11T13:36:28.200" XDES="0x80039960" lockMode="S" schedulerid="6" kpid="2680" status="suspended" spid="143" sbid="0" ecid="0" priority="0" trancount="0" lastbatchstarted="2015-01-11T13:36:28.133" lastbatchcompleted="2015-01-11T13:36:28.090" clientapp=".Net SqlClient Data Provider" hostname="SW01" hostpid="16352" loginname="DOMAIN\sa-monitor" isolationlevel="read committed (2)" xactid="825842724" currentdb="2" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
    <executionStack>
     <frame procname="adhoc" line="1" stmtstart="26" sqlhandle="0x02000000b680b70233f6dc8a15fa42960c50200f89756a25">
SELECT TOP 10
	t.NAME AS TableName,
	t.object_id,
	mainIndex.data_space_id,
	p.rows AS RowCounts,
	SUM(a.total_pages) * 8 AS TotalSpaceKB,
	SUM(CASE i.[type]
	WHEN 2 THEN 8 * a.used_pages
	ELSE 0
	END) AS IndexSpaceKB
FROM [sys].[tables] t
INNER JOIN sys.indexes mainIndex ON (t.object_id = mainIndex.object_id AND mainIndex.[type] IN (0,1))
INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN [sys].[partitions] p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN [sys].[allocation_units] a ON p.partition_id = a.container_id
WHERE t.NAME NOT LIKE 'dt%' AND t.is_ms_shipped = 0 AND i.OBJECT_ID &gt; 255
GROUP BY t.Name, t.object_id, mainIndex.data_space_id, p.Rows
ORDER BY TotalSpaceKB DESC     </frame>
     <frame procname="adhoc" line="21" stmtstart="1748" stmtend="1786" sqlhandle="0x02000000db583517807077e95c2490bb681fca4f9e78724f">
EXECUTE (@sqlQuery)     </frame>
     <frame procname="unknown" line="1" sqlhandle="0x000000000000000000000000000000000000000000000000">
unknown     </frame>
    </executionStack>
    <inputbuf>


<keylock hobtid="281474980249600" dbid="2" objectname="tempdb.sys.sysidxstats" indexname="clst" id="lock30e2f1d00" mode="X" associatedObjectId="281474980249600">
    <owner-list>
     <owner id="process39d30c088" mode="X"/>
    </owner-list>
    <waiter-list>
     <waiter id="process7481048" mode="S" requestType="wait"/>
    </waiter-list>
   </keylock>

SolarWinds support provided the following SQL to change the isolation level of the monitoring query to READ UNCOMMITTED. Since applying it, I have not seen any further deadlocks.

-- FB395905 AppInsight for SQL is causing SQL Deadlocks
-- specifying isolation level for polling
UPDATE cts SET Value = 'SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

DECLARE @TimeStamp datetime
SET @TimeStamp = GETUTCDATE()
SELECT
	name as dbName,
	d.database_id,
	[compatibility_level],
	collation_name,
	[state],
	recovery_model,
	DATEADD(second, DATEDIFF(second, GETDATE(), GETUTCDATE()), bk.last_backup) as last_backup,
	avg_read_latency,
	avg_write_latency,
	avg_bytes_per_read,
	avg_bytes_per_write,
	num_of_reads,
	num_of_writes,
	@TimeStamp as TimeStamp
FROM sys.databases d
LEFT JOIN
(
SELECT bs.database_name, MAX(bs.backup_finish_date) last_backup
FROM msdb.dbo.backupset bs
WHERE server_name = SERVERPROPERTY(''ServerName'') GROUP BY bs.database_name
) AS bk ON d.name = bk.database_name
LEFT JOIN
(
SELECT  database_id,
	SUM(io_stall_read_ms / NULLIF(num_of_reads, 0)) AS avg_read_latency,
	SUM(io_stall_write_ms / NULLIF(num_of_writes, 0)) AS avg_write_latency,
	SUM(num_of_bytes_read / NULLIF(num_of_reads, 0)) AS avg_bytes_per_read,
	SUM(num_of_bytes_written / NULLIF(num_of_writes, 0)) AS avg_bytes_per_write,
	SUM(num_of_reads) AS num_of_reads,
	SUM(num_of_writes) AS num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id
)AS [io] ON d.database_id = [io].database_id
WHERE name NOT IN (${UnmanagedDatabases})'
FROM APM_ComponentTemplateSetting cts
INNER JOIN APM_ComponentTemplate ct ON cts.ComponentTemplateID = ct.ID
WHERE ct.UniqueId = '15868507-6248-4DE6-9A83-561535EBC058' AND cts.[Key] = 'SqlQuery'

-- Enforce job rescheduling for ABSA applications
UPDATE a SET LastModified = GETUTCDATE()
FROM APM_Application a INNER JOIN APM_ApplicationTemplate at ON a.TemplateID = at.ID
WHERE at.CustomApplicationType = 'ABSA'

SQL Server deadlock logging with email alerts

This configuration was set up on Windows 2012 and SQL Server 2012, and it was also tested on SQL Server 2008.

STEP 1.
Create a table named DeadlockEvents to hold the captured deadlocks. One column stores the date and time at which the deadlock occurred, and the other the XML generated by the deadlock event.


/** 
Create Table to hold deadlocks
**/

USE DBA ;
GO

IF OBJECT_ID('DeadlockEvents', 'U') IS NOT NULL
BEGIN
    DROP TABLE DeadlockEvents ;
END ;
GO

CREATE TABLE DeadlockEvents
    (AlertTime DATETIME, DeadlockGraph XML) ;
GO

STEP 2.
Add a SQL Agent job to trigger an insert of a deadlock record when a deadlock event occurs.

The following fields will need to be populated with your environment specific information.
@owner_login_name=N’DOMAIN\sa-job-owner’


/** Add JOB**/

USE [msdb]
GO

/****** Object:  Job [DBM2_Capture_Deadlock_Graph]    Script Date: 11/13/2014 13:54:07 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [[Uncategorized (Local)]]]    Script Date: 11/13/2014 13:54:07 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'DBM2_Capture_Deadlock_Graph', 
		@enabled=1, 
		@notify_level_eventlog=2, 
		@notify_level_email=0, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=0, 
		@description=N'Job for responding to DEADLOCK_GRAPH events', 
		@category_name=N'[Uncategorized (Local)]', 
		@owner_login_name=N'DOMAIN\sa-job-owner', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [Insert graph into LogEvents]    Script Date: 11/13/2014 13:54:08 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Insert graph into LogEvents', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=3, 
		@on_success_step_id=0, 
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'INSERT INTO DeadlockEvents
                (AlertTime, DeadlockGraph)
                VALUES (getdate(), N''$(ESCAPE_SQUOTE(WMI(TextData)))'')', 
		@database_name=N'DBA', 
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [send email]    Script Date: 11/13/2014 13:54:08 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'send email', 
		@step_id=2, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0, 
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'exec sp_deadlock_alert', 
		@database_name=N'DBA', 
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

STEP 3.
This stored procedure is run when the SQL Agent job is triggered. It sends a copy of the deadlock XML to the email address specified as the recipient.


/** Add stored proc **/

use DBA;
GO
create procedure sp_deadlock_alert
AS

DECLARE @profile varchar(30)
set @profile = (select top 1 name from msdb.dbo.sysmail_profile)

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = @profile,
    @recipients = 'ctatro@lab.net',
    @body = 'DeadLock Graph',
    @subject = 'A deadlock has occurred, please review the provided attachment...',
    @body_format = 'TEXT',
    @query_attachment_filename = 'Deadlockgraph.xml',
    @attach_query_result_as_file = 1,
    @query_no_truncate = 1,
     --@query_result_width = 512,
     --@query_result_no_padding = 1,
    @query = 'SELECT TOP 1 [DeadlockGraph]
  FROM [DBA].[dbo].[DeadlockEvents]
  order by alerttime desc
    for xml PATH(''ROW''), root(''ROOT''), TYPE';
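With the procedure in place you can fire a test email by hand (this assumes Database Mail is already configured and the DeadlockEvents table contains at least one row):

```sql
-- Manually send the most recent captured deadlock graph
USE DBA;
EXEC dbo.sp_deadlock_alert;
```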
    

STEP 4.
Add an alert to fire the SQL Agent job when a deadlock event occurs.
The job named DBM2_Capture_Deadlock_Graph will fire whenever a deadlock event occurs.
For this to work you must have token replacement enabled on the SQL Agent:
right-click SQL Server Agent in Object Explorer, select Properties, and on the Alert System page select "Replace tokens for all job responses to alerts" to enable these tokens.

/** Add Alert **/
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'Respond to DEADLOCK_GRAPH', 
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER', 
    @wmi_query=N'SELECT * FROM DEADLOCK_GRAPH', 
    @job_name='DBM2_Capture_Deadlock_Graph' ;
GO

STEP 5.
Testing the deadlock alerting

Open an SSMS query window and run the BEGIN TRAN and the first UPDATE below, leaving the transaction open.

use dba;
BEGIN TRAN
update table_a set ID=ID where ID = 100;

update table_b set ID=ID where ID = 100;

Open a second SSMS query window, run its BEGIN TRAN and first UPDATE, then go back and run the remaining UPDATE in each window. The sessions now block each other, and one will be chosen as the deadlock victim. This inserts a record into the DeadlockEvents table and sends an email alerting you of the deadlock.

use dba;
BEGIN TRAN
update table_b set ID=ID where ID = 100;

update table_a set ID=ID where ID = 100;
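To confirm the capture worked, you can also query the table from step 1 directly:

```sql
-- Most recently captured deadlock graphs
SELECT TOP 5 AlertTime, DeadlockGraph
FROM DBA.dbo.DeadlockEvents
ORDER BY AlertTime DESC;
```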

SQL Server PowerShell fails with "set-executionpolicy RemoteSigned -scope process -Force"

This is the second or third time I have run into the message below when trying to run a PowerShell script from a SQL Server Agent job, even after setting the execution policy to something other than RemoteSigned.
The issue occurred on Microsoft Windows 2012 and SQL Server 2012, and I was running the SQL Agent step using a proxy account.

Executed as user: DOMAIN\testuser01. A job step received an error at line 1 in a PowerShell script. 
The corresponding line is 'set-executionpolicy RemoteSigned -scope process -Force'. 
Correct the script and reschedule the job. The error information returned by PowerShell is: 'Security error.  '.  Process Exit Code -1.  
The step failed.

Step 1. Create a credential.
Step 2. Create a proxy from the credential.
Step 3. Assign the proxy account to the job step.
Step 4. Paste the PowerShell code into the job step and run the SQL Agent job; you receive the message above.
Step 5. In an attempt to remediate the issue I performed the following steps.
Start SQL Server Management Studio as Administrator, or you will not be able to change the PowerShell execution policy.

Right-click the SQL Server instance in SQL Server Management Studio > Start PowerShell

PS SQLSERVER:\SQL\SQLSERVER01\INSTANCE01> set-executionpolicy -scope Process unrestricted
PS SQLSERVER:\SQL\SQLSERVER01\INSTANCE01> get-executionpolicy -list

                                  Scope                         ExecutionPolicy
                                  -----                         ---------------
                          MachinePolicy                            Unrestricted
                             UserPolicy                               Undefined
                                Process                            Unrestricted
                            CurrentUser                               Undefined
                           LocalMachine                            Unrestricted

Even after setting the execution policy to Unrestricted, the PowerShell script still would not execute. Note that I do not recommend leaving the execution policy set to Unrestricted; this post is about figuring out why PowerShell was not honoring the set-executionpolicy cmdlet. The fix was to modify the registry value below, setting it to the appropriate execution policy.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.SqlServer.Management.PowerShell.sqlps110
ExecutionPolicy    REG_SZ    Unrestricted

Creating scheduled events in MySQL

Unknown to many, MySQL has the ability to schedule events. This feature can be useful for scheduling activities like maintenance and batch processes. To get started, the MySQL user creating the event needs the EVENT privilege.

1. The following grants user jsmith the ability to create events on all databases in MySQL.

GRANT EVENT ON *.* TO jsmith;

2. Grant the EVENT privilege on the myschema database only.

GRANT EVENT ON myschema.* TO jsmith;

3. Enter the following code to create a scheduled event that runs every hour, starting at 2014-01-07 13:00:00.

CREATE EVENT evntTruncate -- event name
ON SCHEDULE EVERY 1 HOUR -- run every hour
STARTS '2014-01-07 13:00:00' -- should be in the future
DO
TRUNCATE TABLE City; -- SQL statement to execute
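Note that events only run while the event scheduler thread is enabled. If SELECT @@event_scheduler returns OFF, turn it on at runtime (and set event_scheduler=ON in my.cnf to make it permanent across restarts):

```sql
-- Enable the event scheduler thread at runtime
SET GLOBAL event_scheduler = ON;

-- The scheduler should now appear as the event_scheduler user
SHOW PROCESSLIST;
```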

4. Run the following query to verify that the event has been scheduled.

SELECT * FROM information_schema.events;