SQL Server Error Log Reader

Reading the SQL Server Error Log is miserable.  It contains very useful information that you should address as soon as possible, or at least know is happening.  However, it’s hidden among so many informational messages that it’s hard to find, and then it’s spread across multiple files, with a new file for every server reboot or automated file rollover event you may have set up.

Many DBAs skim these files, but when there’s a single login failure mixed in with log backups running every 5 minutes for 100 databases, they’re just happy to have found something.  That login failure tells you nothing, just that someone should have been more careful typing in their password, right?  When you’re just happy you were able to find anything at all, you’re almost certainly not going to see it clearly enough to notice a trend, such as that login failure happening every Sunday between 10:00 PM and 10:15 PM.  However, if you knew that, you could tell someone there’s an automated job that’s failing; it’s obviously part of a bigger process because the time varies a little, but it’s consistent enough to say it’s definitely a process.

So, the trick is to get past the junk and to the useful information.  You can listen to Warner Chaves (b|t) in his Most Important Trace Flags post and turn on trace flag 3226 to stop backup information from going to the logs, but I’m always paranoid (it’s part of the job) that it just may come in useful some day.  I know it never has, but I leave it in there anyways.
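If you do decide to follow Warner’s advice, it only takes one command; note that the DBCC version only lasts until the next restart, and the startup parameter is what makes it stick.

--Stop writing successful backup messages to the error log (lasts until the next service restart)
DBCC TRACEON (3226, -1);

--To make it permanent, add -T3226 as a SQL Server startup parameter instead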

Even if you do take out information from the logs like that, it’s still a flat file that’s difficult to analyze for any number of reasons.  We’re just a T-SQL kind of group, and flat files just fall flat.

As with everything in SQL Server, I cheat my way through it.  This time I pull it into a temp table, delete the stuff I’m ignoring (please be very, very careful about what you ignore because you’ll never know it happened), then look at the results.  If there’s a login failure then I’ll uncomment the section that deletes everything except a single error and trends will pop out at me.  If I wanted to do more advanced analysis I would run queries doing aggregates of any kind against the temp table that was created.  Everything’s in the format you’re used to analyzing, so you can do crazy things without going crazy.

DECLARE @dStart DateTime 
	, @dEnd DateTime
	, @MaxLogFiles Int 

SELECT @dStart = GetDate()-30
	, @dEnd = GetDate()-0
	, @MaxLogFiles = 5

--Pulls into #TempLog because an empty log file causes errors in the temp table
--If there are no records, don't pass the issues onto your #Log table and return the results
IF OBJECT_ID('tempdb..#TempLog') IS NOT NULL BEGIN
	DROP TABLE #TempLog
END

IF OBJECT_ID('tempdb..#Log') IS NOT NULL BEGIN
	DROP TABLE #Log
END

CREATE TABLE #TempLog 
(
	LogDate DateTime
	, ProcessInfo NVarChar(50)
	, LogText NVarChar(1000)
)

CREATE TABLE #Log 
(
	LogDate DateTime
	, ProcessInfo NVarChar(50)
	, LogText NVarChar(1000)
)

DECLARE @Num int
SELECT @Num = 0

WHILE @Num < @MaxLogFiles BEGIN
	TRUNCATE TABLE #TempLog

	INSERT INTO #TempLog 
	exec xp_readerrorlog @Num, 1, null, null, @dStart, @dEnd

	IF @@ROWCOUNT > 0 BEGIN
		INSERT INTO #Log 
		SELECT *
		FROM #TempLog
	END ELSE BEGIN
		--An empty log file means there's nothing left to read; stop looping
		SELECT @Num = @MaxLogFiles
	END

	SELECT @Num = @Num + 1
END

/*
--Uncomment to trend out a specific message and ignore the rest
DELETE #Log
WHERE LogText NOT LIKE 'Login failed for user ''WhatAreYouDoingToMe?!?!?''%'
--*/

--Ignore most of the log file rollover process
--Keep "Attempting to cycle" and "The error log has been reinitialized" if you want to confirm it happened and succeeded
DELETE #Log
WHERE LogText LIKE '%(c) Microsoft Corporation%'
	OR LogText LIKE 'Logging SQL Server messages in file %'
	OR LogText LIKE 'Authentication mode is MIXED.'
	OR LogText LIKE 'System Manufacturer: %'
	OR LogText LIKE 'Server process ID %'
	OR LogText LIKE 'All rights reserved.'
	OR LogText LIKE 'Default collation: %'
	OR LogText LIKE 'The service account is %'
	OR LogText LIKE 'UTC adjustment: %'
	OR LogText LIKE '(c) 2005 Microsoft Corporation.'--Should I be ignoring this or fixing it?
	OR LogText LIKE 'Microsoft SQL Server % on Windows NT %'
	OR LogText LIKE 'The error log has been reinitialized. See the previous log for older entries.'
	OR LogText LIKE 'Attempting to cycle error log.%'

--Ignore databases being backed up and integrity checks running, assuming you verify this some other way.
--I don't want to complain to try to have these removed because I may need that info someday; today isn't that day.
DELETE #Log
WHERE LogText LIKE 'Log was backed up%'
	OR LogText LIKE 'Database differential changes were backed up%'
	OR LogText LIKE 'Backup database with differential successfully %'
	OR LogText LIKE 'Backup database successfully %'
	OR LogText LIKE 'Database backed up%'
	OR LogText LIKE 'DBCC CHECK% found 0 errors %'
	OR LogText LIKE 'CHECKDB for database % finished without errors %'

--We all have vendor databases...
--Ignore the stuff where it keeps making sure the setting is where the setting was.
DELETE #Log
WHERE LogText LIKE 'Configuration option % changed from 30 to 30.%'
	OR LogText LIKE 'Configuration option % changed from 5 to 5.%'
	OR LogText LIKE 'Setting database option COMPATIBILITY_LEVEL to 100 for database ReportServer%'
	OR LogText LIKE 'Configuration option ''user options'' changed from 0 to 0. Run the RECONFIGURE statement to install.'

--Now your own custom ones
--Just be careful.  You'll rarely read logs without this script once you see how easy it is.
--If you put it on the ignore list, you won't see it again.
--I have starting and stopping traces on mine, because my monitoring software likes to start and stop them a lot
----I'm accepting the risk that I won't see other people starting and stopping traces.
DELETE #Log
WHERE LogText LIKE 'Know what risk you''re taking on by putting stuff in here'
	OR LogText LIKE 'You will rarely read logs without this, so you won''t see these ever again'
	OR LogText LIKE 'DBCC TRACEON 3004,%'
	OR LogText LIKE 'DBCC TRACEON 3014,%'
	OR LogText LIKE 'DBCC TRACEON 3604,%'
	OR LogText LIKE 'DBCC TRACEON 3605,%'
	OR LogText LIKE 'Error: %, Severity:%'--They give the english version next
	OR LogText LIKE 'SQL Trace ID % was started by %'
	OR LogText LIKE 'SQL Trace stopped.%'
	OR LogText LIKE 'Changing the status to % for full-text catalog %'
	OR LogText LIKE 'I/O was resumed on database %'
	OR LogText LIKE 'I/O is frozen on database %' 

/*
--When mirroring gives me trouble it lets me know by flooding the logs
--I uncomment this to see if there were other issues in the middle of all that.
DELETE #Log
WHERE LogText LIKE 'Database mirroring is inactive for database%'
	OR LogText LIKE 'The mirroring connection to%has timed out%'
	OR LogText LIKE 'Database mirroring is active with database%'
--*/

--This is only useful if you're using the trace flag 1222
--Only show the line that says 'deadlock-list'.  Remove this if you need to see the deadlock details.
--Note, only use this when needed.  It will give you a 1 second blind spot for every deadlock found.
--Why aren't you using extended events anyways?
DELETE L
FROM #Log L
	INNER JOIN #Log L2 ON L.LogDate BETWEEN L2.LogDate AND DateAdd(second, 1, L2.LogDate) AND L.ProcessInfo = L2.ProcessInfo 
WHERE L2.LogText = 'deadlock-list'
	AND L.LogText <> 'deadlock-list'

--Whatever's left is what you read through
SELECT *
FROM #Log
ORDER BY LogDate
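Once everything is sitting in #Log, the trending I mentioned earlier is just normal T-SQL.  Here’s a minimal sketch of the kind of aggregate I’d run when chasing something like that Sunday night login failure; the message filter and grouping are only examples, so slice it however makes the pattern obvious.

--Example only: count a specific message by day of week and hour to spot a schedule
SELECT DayOfWeek = DateName(weekday, LogDate)
	, HourOfDay = DatePart(hour, LogDate)
	, Occurrences = COUNT(*)
FROM #Log
WHERE LogText LIKE 'Login failed for user%'
GROUP BY DateName(weekday, LogDate)
	, DatePart(hour, LogDate)
ORDER BY Occurrences DESC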


Don’t Ignore Me

Anything you ignore you won’t see here again. It’s still in the logs, but not in what you’re reading on your screen when you mentally check the logs off as being read through.  If you’re ignoring anything make sure it either doesn’t matter or you’re watching for it another way.

Backups are the first thing to be ignored.  Yes, yes, they ran successfully, they do that a lot, don’t tell me about them.  That can be good advice gone horribly wrong.  Do you have another way of saying I absolutely know I have backups taken of everything?

“DBCC CheckDB ran successfully” is next on the list.  Same thing goes for it, except more DBAs miss verifying that this is running, and some miss running it at all.  If you ignore it, how are you verifying that it ran?
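If you want a starting point for verifying backups another way, here’s a rough sketch that pulls the most recent full backup for each database out of msdb; the 7 day threshold is arbitrary, and keep in mind msdb only knows about backups taken on that instance.  For CHECKDB, the last known good date is available through DBCC DBINFO WITH TABLERESULTS (the dbi_dbccLastKnownGood field) if you want to verify that the same way.

--Databases missing a full backup in the last 7 days (threshold is an example, not a rule)
SELECT d.name
	, LastFullBackup = MAX(b.backup_finish_date)
FROM sys.databases d
	LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name
		AND b.type = 'D' --Full backups only
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
	OR MAX(b.backup_finish_date) < DateAdd(day, -7, GetDate())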

I don’t care how you do it.  Do what works best for you, just do something.

Be Careful

I’ll just end by saying be careful again. This code is a life saver when it’s not shooting you in the foot.

Give More Feedback

I write on my blog and get a couple comments at best. I talk at conferences and a large part of the audience fills out evals at the time.  Then I often wonder if I’m making a difference while rarely, if ever, knowing if anyone actually used what they learned at work.  There are some confidence issues at hand here, which I was convinced were just limited to me.  However, talking to a couple others in the SQL community, it’s not just me.

In fact, I told one of the leading respondents to SQL Server Central forum questions that he really made a difference in my career with the quantity and quality of the knowledge he shares.  His response was a lot like I’d imagine mine to be: a humble and sincere thank you, with a follow up that it’s very difficult to know if the work you put into the community really makes a difference.  That’s saying something considering that he’s in a more personal position than I am with his online presence, because he’s often answering user questions while I offer unfocused, unsolicited advice.

The blog posts, articles, and forum help you see online is all done on a volunteer basis.  Sure, some people get paid $25 for an article, which is a lower hourly rate than an entry-level DBA if you think about how much work they put into them.  Some people write blog posts to promote their business or help their careers while knowing without a doubt that well over 99% of people who read what they have to say will never hire them.  Yes, there are ways you can say it’s not on a volunteer basis, but if any of us were really in it for the money then almost all of us would opt for something more lucrative such as setting up a lemonade stand on the corner with our kids.

That wasn’t even getting into in-person events like SQL Saturdays, where volunteers put everything together then ask for volunteers to come speak, many of whom pay their own travel expenses.  Full disclosure, it’s not completely unpaid; we get invited to a free dinner the night before and typically get a free shirt.  Both of these are amazing benefits because I’m eating dinner with people I look up to, then I have a shirt that I wear to work every time I need a good confidence boost, so I can’t say it’s really free.  By the way, every SQL Saturday is numbered, and they all have an email address of SQLSaturday###@SQLSaturday.com.  Help keep the motivation going and force me to update that masked email address to have four digits!

So, you may notice that I write at length about not getting much feedback on my only post that does not allow comments.  I even went out of my way and deleted the name of the guy whose answers I liked so much on SQL Server Central.  Why?!?!  It’s not as counter-productive as it seems.  I know this post didn’t make your career better, I know it’s not likely to change your life, and this isn’t a plea for you to comment on THIS post or for people who inspired ME.

This post is a challenge to you.  Think back to blog posts that helped make you awesome at work.  Think about conferences that were put together so well that all you noticed were the great learning opportunities.  Then go out and comment as publicly as possible about how they helped, adding as many details as you can.  After that, keep this in mind when you’re learning in the future, be it free or paid events, reading blog posts, or using #sqlhelp on twitter.  Say it all, constructive criticism, compliments, offers to buy someone a beer, don’t hold back!  If you want more of something, speak up.  We have an amazing community, and I want more of it!

By the way, the guy from SQL Server Central likes plain old Budweiser.

Extended Events Intro

Extended Events is supposed to be taking over most of the Profiler and server-side tracing functionality, but there were people like me out there who took their time making the switch. For me the excuses were that SQL Batch Completed isn’t in 2008 / 2008 R2, most databases are slow to move off of 2008 because it’s hard to say 2008 can’t do everything you want, and, well, I really like SQL Batch Completed!!!

Now I’m losing my excuses. It’s been three years since 2012 came out and all the new servers I’m installing are 2012 or 2014, so I lost that excuse. Then I came up with “I don’t know how to use them, but I know how to use something that does almost the same thing”; that’s great logic during an outage, but terrible logic for the long term. So, I’m learning it, still learning it, and I have something to help out those who made it to 2012 but are still stuck on the “I don’t know how to use them” phase of denial.

In several of my blog posts I referenced my favorite server-side trace. Capture everything on the server that takes over X seconds, with 5 being my favorite starting place. Yes, this adds about 1ms to anything that takes over 5 seconds, which adds up to…insignificant. Seriously, if this kills your server then it was dead already, but you will see arguments against creating this for long-term runs.  I don’t agree with those arguments, but they’re your servers and I want you to make an informed decision on what you run.

Anyways, here’s how I set it up with Extended Events, for which I used Jonathan Kehayias’s (b|t) Trace to XE Converter to get started.

IF EXISTS (SELECT 1 FROM sys.server_event_sessions WHERE name = 'DurationOver5Sec')
	DROP EVENT SESSION DurationOver5Sec ON SERVER;
GO

CREATE EVENT SESSION DurationOver5Sec
ON SERVER
ADD EVENT sqlserver.rpc_completed(
	ACTION 
	(
		  sqlserver.client_app_name	-- ApplicationName from SQLTrace
		, sqlserver.client_hostname	-- HostName from SQLTrace
		, sqlserver.client_pid	-- ClientProcessID from SQLTrace
		, sqlserver.database_id	-- DatabaseID from SQLTrace
		, sqlserver.request_id	-- RequestID from SQLTrace
		, sqlserver.server_principal_name	-- LoginName from SQLTrace
		, sqlserver.session_id	-- SPID from SQLTrace
	)
	WHERE 
	(
		duration >= 5000000	--Microseconds; 5 seconds
	)
),
ADD EVENT sqlserver.sql_batch_completed(
	ACTION 
	(
		  sqlserver.client_app_name	-- ApplicationName from SQLTrace
		, sqlserver.client_hostname	-- HostName from SQLTrace
		, sqlserver.client_pid	-- ClientProcessID from SQLTrace
		, sqlserver.database_id	-- DatabaseID from SQLTrace
		, sqlserver.request_id	-- RequestID from SQLTrace
		, sqlserver.server_principal_name	-- LoginName from SQLTrace
		, sqlserver.session_id	-- SPID from SQLTrace
	)
	WHERE 
	(
		duration >= 5000000	--Microseconds; 5 seconds
	)
),
ADD EVENT sqlos.wait_info(
	ACTION 
	(
		  sqlserver.client_app_name	-- ApplicationName from SQLTrace
		, sqlserver.client_hostname	-- HostName from SQLTrace
		, sqlserver.client_pid	-- ClientProcessID from SQLTrace
		, sqlserver.database_id	-- DatabaseID from SQLTrace
		, sqlserver.request_id	-- RequestID from SQLTrace
		, sqlserver.server_principal_name	-- LoginName from SQLTrace
		, sqlserver.session_id	-- SPID from SQLTrace
	)
	WHERE 
	(
		duration > 5000 --This one is in milliseconds, and I'm not happy about that
            AND ((wait_type > 0 AND wait_type < 22) -- LCK_ waits
                    OR (wait_type > 31 AND wait_type < 38) -- LATCH_ waits
                    OR (wait_type > 47 AND wait_type < 54) -- PAGELATCH_ waits
                    OR (wait_type > 63 AND wait_type < 70) -- PAGEIOLATCH_ waits
                    OR (wait_type > 96 AND wait_type < 100) -- IO (Disk/Network) waits
                    OR (wait_type = 107) -- RESOURCE_SEMAPHORE waits
                    OR (wait_type = 113) -- SOS_WORKER waits
                    OR (wait_type = 120) -- SOS_SCHEDULER_YIELD waits
                    OR (wait_type = 178) -- WRITELOG waits
                    OR (wait_type > 174 AND wait_type < 177) -- FCB_REPLICA_ waits
                    OR (wait_type = 186) -- CMEMTHREAD waits
                    OR (wait_type = 187) -- CXPACKET waits
                    OR (wait_type = 207) -- TRACEWRITE waits
                    OR (wait_type = 269) -- RESOURCE_SEMAPHORE_MUTEX waits
                    OR (wait_type = 283) -- RESOURCE_SEMAPHORE_QUERY_COMPILE waits
                    OR (wait_type = 284) -- RESOURCE_SEMAPHORE_SMALL_QUERY waits
                )
	--OR (wait_type = 195) -- WAITFOR
	)
)
ADD TARGET package0.event_file
(
	SET filename = 'DurationOver5Sec.xel',
		max_file_size = 10,
		max_rollover_files = 5
)
WITH 
(
	MAX_DISPATCH_LATENCY = 5 SECONDS --How long events can sit in memory before being written to the file; adjust to taste
	, EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS --Let SQL Server skip events rather than slow down
	, STARTUP_STATE = ON --Start automatically when the services start
);
GO

--Startup State only kicks in on a service restart, so start it manually now
ALTER EVENT SESSION DurationOver5Sec
ON SERVER
STATE = START;
GO


What’s it all mean?

This captures all SQL Batch Completed and RPC Completed events that took over 5 seconds, along with any waits that took over 5 seconds. Seems kind of simple and much easier to read than the script to create a trace, but there are some things to point out here.

  • Duration – Milliseconds or Microseconds
  • File Location
  • Restart on server reboot

First, duration, is it milliseconds or microseconds? Trick question, it’s both!!! Ola Hallengren (b|t) opened This Connect Item which resulted in Microsoft apologizing and saying they’ll fix it in SQL 2016. They can “fix” it one of two ways. Either they’ll document them as being different scales and your script will still work without any modifications, or they’ll make them all be the same scale and the script I have here will behave differently on different versions of SQL Server. Anyways, the script I have here is 5 seconds all around, which is 5,000,000 microseconds for the SQL and 5,000 milliseconds for the waits.

Continuing on the duration confusion, the wait info’s duration still isn’t what you might think it is.  This is the duration of a single instance of a wait, not the cumulative waits for the duration of a specific query or batch.  If you made a loop that executed 10 times calling a wait for delay of 1 second each time, those are 10 separate 1 second waits.  More important, if your query is doing physical reads from disk then it’s a ton of small PageIOLatch waits, not one large wait that this session will capture.  I understand why it has to be this way, but understanding and enjoying are two different things.

The rest of this isn’t nearly as confusing, I promise.

The file location I’m using is just a filename without a path, which will default to where your error log files are. It’s going to be a much more complex name once SQL Server is done with it, and not just an underscore with a rollover count like server-side traces.  However, it will start how it’s written here, and you can use wildcards to say which files you want to read.

Now when you restarted the server, or just SQL services, with server-side traces you’d have to manually start that trace again if you wanted it to run all the time. This is exactly how the script I have here works, too. However, Extended Events also added “Startup State” which means it will start when the server starts. The only issue I have with this is that it will ONLY automatically start when the services start. That’s all fine and good if you’re a patient person….it’s not all fine and good for me, so I manually start it when it’s created.
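Starting it yourself is just one ALTER statement, the same one the session script above ends with, and stopping it is the mirror image.

ALTER EVENT SESSION DurationOver5Sec ON SERVER STATE = START;

--And when you want it off
ALTER EVENT SESSION DurationOver5Sec ON SERVER STATE = STOP;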

The other minor details I can see having questions on are on the Max Dispatch Latency and Event Retention Mode. These are limits on how it writes to the file location I’m using. Max Dispatch means that SQL Server will write it to the output file within that many seconds after the event actually happens, so it’s obviously an asynchronous action. Event Retention Mode can give SQL Server permission to say it’s too overworked to write the entries and skip them if needed, and I chose to tell SQL Server to drop as many as it wants to if I’d be slowing it down.  It’s all documented in BOL, and it’d be good for you to read through that instead of just listening to me picking out select details.

So, how do you read this? It’s going to put it all in XML, isn’t it!?!?

If you can read this, Kendra Little (b|t) may suggest you’re a demon robot! (While her comment makes me laugh, that post is a great XE reference or it wouldn’t be on here.) Well, she’s probably right in calling this next script names because friendly robots don’t treat your CPU like this. In fact, if you’re on a stressed server or want to do a lot of analysis on this, it’s going to be best to copy the .xel files to a server you care a little less about and shred the XML there.

Unfortunately, I wrote my script to automatically find the files and add a wildcard character, so you’d have to modify my script to run it on another server. The point is getting you started with as little frustration as possible, and this is perfect for that reason. The biggest catch is that the session has to be running for this to work without changes because it’s grabbing the full file name with path from the DMVs for running sessions.

Anyways, here goes….

DECLARE @SessionName SysName 
	, @TopCount Int = 1000

--SELECT @SessionName = 'UserErrors'
SELECT @SessionName = 'DurationOver5Sec'
--SELECT @SessionName = 'system_health'
SELECT * FROM sys.traces

--Check whether the running sessions have blocked or dropped any events
SELECT  Session_Name = s.name, s.blocked_event_fire_time, s.dropped_buffer_count, s.dropped_event_count, s.pending_buffers
FROM sys.dm_xe_session_targets t
	INNER JOIN sys.dm_xe_sessions s ON s.address = t.event_session_address
WHERE target_name = 'event_file'



IF OBJECT_ID('tempdb..#Events') IS NOT NULL BEGIN
	DROP TABLE #Events 
END

IF OBJECT_ID('tempdb..#Queries') IS NOT NULL BEGIN
	DROP TABLE #Queries 
END

DECLARE @Target_File NVarChar(1000)
	, @Target_Dir NVarChar(1000)
	, @Target_File_WildCard NVarChar(1000)

SELECT @Target_File = CAST(t.target_data as XML).value('EventFileTarget[1]/File[1]/@name', 'NVARCHAR(256)')
FROM sys.dm_xe_session_targets t
	INNER JOIN sys.dm_xe_sessions s ON s.address = t.event_session_address
WHERE s.name = @SessionName
	AND t.target_name = 'event_file'

SELECT @Target_Dir = LEFT(@Target_File, Len(@Target_File) - CHARINDEX('\', REVERSE(@Target_File))) 

SELECT @Target_File_WildCard = @Target_Dir + '\'  + @SessionName + '_*.xel'

--SELECT @Target_File_WildCard

SELECT TOP (@TopCount) CAST(event_data AS XML) AS event_data_XML
INTO #Events
FROM sys.fn_xe_file_target_read_file(@Target_File_WildCard, null, null, null) AS F
ORDER BY file_name DESC
	, file_offset DESC 

SELECT  EventType = event_data_XML.value('(event/@name)[1]', 'varchar(50)')
	, Duration_sec = CAST(event_data_XML.value ('(/event/data[@name=''duration'']/value)[1]', 'BIGINT')/CASE WHEN event_data_XML.value('(event/@name)[1]', 'varchar(50)') LIKE 'wait%' THEN 1000.0 ELSE 1000000.0 END as DEC(20,3)) 
	, CPU_sec = CAST(event_data_XML.value ('(/event/data[@name=''cpu_time'']/value)[1]', 'BIGINT')/1000000.0 as DEC(20,3))
	, physical_reads_k = CAST(event_data_XML.value ('(/event/data  [@name=''physical_reads'']/value)[1]', 'BIGINT')/1000.0 as DEC(20,3))
	, logical_reads_k = CAST(event_data_XML.value ('(/event/data  [@name=''logical_reads'']/value)[1]', 'BIGINT') /1000.0 as DEC(20,3))
	, writes_k = CAST(event_data_XML.value ('(/event/data  [@name=''writes'']/value)[1]', 'BIGINT')/1000.0 as DEC(20,3))
	, row_count = event_data_XML.value ('(/event/data  [@name=''row_count'']/value)[1]', 'BIGINT')
	, Statement_Text = ISNULL(event_data_XML.value ('(/event/data  [@name=''statement'']/value)[1]', 'NVARCHAR(4000)'), event_data_XML.value ('(/event/data  [@name=''batch_text''     ]/value)[1]', 'NVARCHAR(4000)')) 
	, TimeStamp = DateAdd(Hour, DateDiff(Hour, GetUTCDate(), GetDate()) , CAST(event_data_XML.value('(event/@timestamp)[1]', 'varchar(50)') as DateTime2))
	, SPID = event_data_XML.value ('(/event/action  [@name=''session_id'']/value)[1]', 'BIGINT')
	, Username = event_data_XML.value ('(/event/action  [@name=''server_principal_name'']/value)[1]', 'NVARCHAR(256)')
	, Database_Name = DB_Name(event_data_XML.value ('(/event/action  [@name=''database_id'']/value)[1]', 'BIGINT'))
	, client_app_name = event_data_XML.value ('(/event/action  [@name=''client_app_name'']/value)[1]', 'NVARCHAR(256)')
	, client_hostname = event_data_XML.value ('(/event/action  [@name=''client_hostname'']/value)[1]', 'NVARCHAR(256)')
	, result = ISNULL(event_data_XML.value('(/event/data  [@name=''result'']/text)[1]', 'NVARCHAR(256)'),event_data_XML.value('(/event/data  [@name=''message'']/value)[1]', 'NVARCHAR(256)'))
	, Error = event_data_XML.value ('(/event/data  [@name=''error_number'']/value)[1]', 'BIGINT')
	, Severity = event_data_XML.value ('(/event/data  [@name=''severity'']/value)[1]', 'BIGINT')
	, EventDetails = event_data_XML 
INTO #Queries
FROM #Events

SELECT q.EventType
	, q.Duration_sec
	, q.CPU_sec
	, q.physical_reads_k
	, q.logical_reads_k
	, q.writes_k
	, q.row_count
	, q.Statement_Text
	, q.TimeStamp
	, q.SPID
	, q.Username
	, q.Database_Name
	, client_app_name = CASE LEFT(q.client_app_name, 29)
					WHEN 'SQLAgent - TSQL JobStep (Job '
						THEN 'SQLAgent Job: ' + (SELECT name FROM msdb..sysjobs sj WHERE substring(q.client_app_name,32,32)=(substring(sys.fn_varbintohexstr(sj.job_id),3,100))) + ' - ' + SUBSTRING(q.client_app_name, 67, len(q.client_app_name)-67)
					ELSE q.client_app_name
				END
	, q.client_hostname
	, q.result
	, q.Error
	, q.Severity
	, q.EventDetails
FROM #Queries q
--WHERE eventtype NOT IN /*rather typical filtering*/ ('security_error_ring_buffer_recorded', 'sp_server_diagnostics_component_result', 'scheduler_monitor_system_health_ring_buffer_record')
	--AND eventtype NOT IN /*specific troubleshooting filtering*/ ('connectivity_ring_buffer_recorded', 'wait_info')

The top is all I typically edit. The Session Name is ‘DurationOver5Sec’ for this purpose, but as you can see it can also read some of the information from the system_health session. The TopCount is just saying I’m only going to look at the most recent 1,000 records BEFORE I shred and filter it, mostly because I love my CPU and this changes how much XML I’m throwing at it.

No matter how I write this to filter before playing with XML or try to be 100% safe, my scripts will not be perfect for you.  In fact they may crash your server. Unfortunately, I’m only half kidding, because nothing is 100%. The space is limited to 50 MB, the overhead of writing to this file should be fine since we’re only working with events that took over 5 seconds asynchronously with data loss allowed, so I don’t foresee any issues in most cases unless you change these numbers. However, reading them gets expensive and you have to use the most caution here.  Keep in mind that you can put the .xel files on any server with SQL installed, even developer edition, and query it.
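If you do copy the .xel files to another server, you can skip the DMV lookup in my script entirely and point the function at the files yourself; the path below is made up, so change it to wherever you dropped the copies.

--Read copied .xel files directly; this path is hypothetical
SELECT CAST(event_data AS XML) AS event_data_XML
FROM sys.fn_xe_file_target_read_file('D:\XECopies\DurationOver5Sec*.xel', null, null, null);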

Why’d I write this and what else is going through my head?

The inspiration for this post actually came from a question a former coworker asked me quite a while ago, just seeing if I knew a lot about Extended Events.  At the time the technically correct answer was “I don’t know”, but instead I told him that although I didn’t have experience with Extended Events, he could look at Jonathan Kehayias’s (b|t) blog, specifically his XEvent A Day series.  However, I also warned him that Jonathan’s work tends to go deeper than a beginner is prepared for, and that he should start with his coworker Erin Stellato’s (b|t) version, which is easier to start with, specifically her Making the Leap From Profilers to Extended Events session she made available on YouTube. This actually inspired two blog posts; I just haven’t written the How to Say “I don’t know” Better post yet.

Have fun, experiment (with queries), and ask questions.

Technical Interviews – How to Thrive

There’s a trick to technical interviews. Every question is looking for integrity first, and intelligence and energy second. This is paraphrasing Warren Buffet, who became the most successful investor out there by becoming the best at interviewing the management of the companies he was investing in.

The Warren Buffet Way

“In evaluating people you look for three qualities: integrity, intelligence, and energy. And if you don’t have the first, the other two will kill you.” -Warren Buffet pp. 172-173

To understand what answers are right you need to understand the motives of the interviewer. Why are they asking these questions, what are they looking for? Yes, you can cram for an interview and hope you get more questions right, but that isn’t what’s going to make or break you. It may help you represent yourself as more of a subject-matter expert than you are, which can be both good and very, very bad. What they’re looking for is someone who can be a good employee a lot more than someone who can currently pass a written exam.

In the end, if you don’t know how to answer a question I know you can go look it up. However, what integrity do you have when you don’t know the answer? This is where all those years you spent in school are going to fail you, miserably. You’ve been taught from a young age that you need to have an answer for every question on the test, even if it’s because you’ll get 25% of them right on multiple choice or 10% partial credit on something else. Interviewers, like your teachers, know the answers they’re looking for before the question was asked. However, interviewers are different in that they give negative credit instead of partial credit.

So, what is this negative credit and why are they giving it to you? Let’s look at it from the context of being on the job. I’m a Database Administrator, and I have developers ask me questions from time to time. Many times I know the answers and can help them out off the top of my head; we’re not going to worry about those times. The problems come from when I don’t know or I’m unsure of the answer. Do I have the integrity to tell that developer I don’t know? If I don’t, will they build the world upon inaccurate information and end up with a world that needs to be torn down and rebuilt? One of these answers costs a little bit of time to look it up. I’m not going to try to put an estimate on the cost of the other answer.

Interviewers don’t like it when you know all the answers because of this, and a good interviewer is going to have a question or two they can bring up that no one would know the answer to off the top of their head for this very reason. Yes, they do like it when you know a lot of answers, but more importantly they want to know what type of integrity and mindset you have when you don’t know. The right answer if you’re uncertain is to say you would need to verify it, but here’s my answer and here’s how I’d check it. The right answer if you have no clue is “I don’t know”, hopefully followed up by where you would start to look for answers.

In both cases, you write down the question, look it up after the interview, and be ready to answer it next time. If you want bonus points, and who can’t use bonus points in this situation, you’ll email your interviewer the next day and say that you looked into the question and here’s the right answer. Now you’re acting like the ideal employee. You maintained integrity by saying you either didn’t know or weren’t certain of the answer, which couldn’t have done a better job showing you had the integrity Warren Buffet was referring to. Then you followed up with the energy to gain the intelligence to answer it better.

On the answers you know, be it ones you knew all along or ones you crammed for, you fulfilled one of the lesser criteria that Warren Buffet would look for. On the answers you don’t know, you instantly fulfilled the most important criteria if you admitted you didn’t know, then you had the opportunity to throw in the other two criteria on a follow up.

Here’s the problem. Every good employee is a teacher, and you’re better off without a teacher if their lessons are wrong. A good interviewer is looking for a good teacher more than a know-it-all.

Speaking of being a teacher, how do you answer the questions you do know very well? Chances are you know how to answer them in a way that someone else who knows the answer would understand. However, they want to know if you could teach someone else the answer, even if that person doesn’t have your specialty. Make sure you can teach them the right answer in an interview when you understand it well enough, especially if it’s a common question they’ll ask at every interview.

Back to my Database Administrator background: in every interview I’m in, someone will ask what the primary types of indexes are in SQL. The “right” answer is that a clustered index is the table itself in a sorted order, and a nonclustered index is a subset of that data which is also sorted. This is good; they understood my answer and know that indexes aren’t something new to me. However, if you’re talking to someone on the job who doesn’t specialize in SQL Server, how well are they going to understand and remember this answer?

The answer that will blow them away is that a telephone book is a clustered index which is sorted by the key fields “Last Name” then “First Name”, and has all the relevant information clustered together in that single index. A nonclustered index is the index at the back of a book where it’s sorted by the key field “key word”, and it includes the page number, which is the clustered index key, so you can look up the rest of the information if the index didn’t tell you everything you needed to know. It’s not uncommon to get the response “Do you teach?” from your interviewer after an answer like that, to which you respond “every chance I get.”
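If you want to see the phone book analogy in code, here’s a minimal sketch; the table and index names are made up for the example.

CREATE TABLE dbo.Person
(
	LastName VarChar(50) NOT NULL
	, FirstName VarChar(50) NOT NULL
	, PhoneNumber VarChar(20) NOT NULL
);

--The "phone book": the table itself, physically ordered by Last Name then First Name
CREATE CLUSTERED INDEX IX_Person_Name ON dbo.Person (LastName, FirstName);

--The "index in the back of the book": a smaller sorted copy that points back to the clustered key
CREATE NONCLUSTERED INDEX IX_Person_Phone ON dbo.Person (PhoneNumber);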

There are other questions you can expect in interviews, too. “Tell me about yourself”, “Where do you expect to be in 5 years”, and so on. I’m not going to pretend to be a good list of all the common questions, but I can tell you how to prep for them. Again, think about it from the interviewer’s perspective.

“Tell me about yourself” is obviously a very vague question. I hate this question, but it still gets asked. I view it from the standpoint that they want to see how you handle vague questions. Do you just ramble on about your kids and dog, or do you clarify what they’re asking? Do you have the ability to break this down into something specific and get to the right answer? On my job people will often come to me and say “the server is slow”, which I have to break down into pieces until I can start asking the right questions. Now I’m taking a “why would you ask this” question and demonstrating what they want to see in an employee.

“Where do you expect to be in 5 years” is also very common, and they aren’t expecting you to be a fortune teller here. Also, I wouldn’t expect someone to try to commit to being with my company for that length of time before they’re even offered the job. What else could an interviewer be hoping for? They just want a direction. Are you excited about what you’re doing, and do you have the drive to keep learning? Do you want to make the jump from technical employee to manager? That would start another conversation, with the stated goal still being to look for the energy to improve yourself and be a continually better asset to the company. If you don’t know what career you want to have in 5 years then you can’t be expected to have the energy to improve your abilities at your current career.

Here’s some more you should know before an interview, but I’ll be brief since it goes beyond the scope of this post.  When people ask you questions in an interview, do you exude the following?

  • Humble – If someone asks you to rate yourself on anything from 1 to 10 then 10 is not an option.  Even 9 is arrogant unless you’re known around the world for your skill.  Too high and you’re too confident, won’t take criticism, and, my personal favorite, you’re done learning because you think you know it all.
  • Hunger or Drive – You want to keep bettering yourself, both in work and out.  You want something a little above your skill level because you’ll be there soon.
  • Energy – If the interviewer is good, they’ll ask you questions you don’t know no matter how good you were.  Did you come back with a solution after the interview?
  • Communication – How were you able to answer the questions you got right?  Were they just technically right, or did you put off the impression that you could help others get to your level?
  • Leadership – Chances are you aren’t interviewing for a management position, but management isn’t leadership.  Are you showing that you can take an idea and run with it, asking others for help as you go, and making sure it’s right without someone looking over your shoulder the whole time?  Managers want to do more than stand around holding a bullwhip, they want to watch people fly on their own with ground control giving a little direction.

These questions are being asked now in the context of an interview, but you should be asking yourself these things in your continual career planning phase as well. “Tell me about yourself”, can you break down large issues and find the root of the problem, and do you care enough about what you’re doing to get down into the details and root causes? “Where do you expect to be in 5 years”, do you care enough about this career path to make yourself a better you? If you don’t have good answers to these questions for yourself then where do you want to be? Figure that part out and you’ll have a better chance at finding the career you want to do. And don’t forget to consider the integrity of your answers when coworkers look towards you for guidance.  Do you give each answer your best shot off the top of your head, or do you tell them you would have to look it up and follow up with the right answer after you do?  Following this advice you’ll be a lot closer to the ideal employee the interviewers are looking for, then all you have to do is walk into the interview and be yourself.

This post was already in the works before I found out it was the topic for this month’s T-SQL Tuesday led by Boris Hristov (b|t) on Interviews and Hiring. This is an amazing coincidence for both of us. For me, it gives me more visibility on this post, which I think a lot of people could benefit from. For you, it gives you a couple links in this paragraph to exercise your hunger to better yourself. Just remember that a lot of what you see is going to be how to get a job, but you can skew it to how to be better at your job and evaluate where you want to be.

Optional Parameters Causing Index Scans

Optional parameters in a stored procedure often lead to scans in the execution plan, reading through the entire table, even if it’s obvious that the right index is in place. This is extremely common in procs that are used behind search screens where, for example, you could search for a person according to their first name, last name, city, state, zip code, gender, etc.

To take a simplified look at this, make a quick and easy table consisting of an indexed identity column for us to search by and fluff to make the row size bigger.

CREATE TABLE Test 
(
    ID Int NOT NULL Identity(1,1) PRIMARY KEY
    , Value CHAR(500) NOT NULL
)
GO

--The value itself doesn't matter; it's just fluff to pad the row size
INSERT INTO Test (Value)
SELECT 'Fluff'
GO 250

INSERT INTO Test (Value)
SELECT TOP (1000) Value 
FROM Test 
GO 100

Now we’ll create a proc to get data. In this proc we’ll get the data several different ways, showing the advantages and disadvantages of each method.  You’ll notice that I put my comments as PRINT statements, which will make it easier for you to read through the messages tab and see what’s going on.

CREATE PROC proc_Testing 
    @ID int = 125
    , @MinID int = 100
    , @MaxID int = 150
AS 
BEGIN

SET STATISTICS IO ON

PRINT 'Clustered index scan, because the query optimizer has to be prepared for both null and non-null values'

SELECT *
FROM Test 
WHERE @ID = Test.ID OR @ID IS NULL

PRINT 'It''s a small mess, but the query optimizer knows exactly what to expect for each query.'

IF @ID IS NOT NULL BEGIN
    SELECT *
    FROM Test 
    WHERE @ID = Test.ID 
END ELSE BEGIN
    SELECT * 
    FROM Test
END

PRINT 'Expand that to two possible parameters and it still looks simple right now, when it''s doing a scan'

SELECT *
FROM Test 
WHERE (ID >= @MinID OR @MinID IS NULL)
    AND (ID <= @MaxID OR @MaxID IS NULL)

PRINT 'Make it optimized with IF statements and it quickly becomes a big mess.'  
PRINT 'Many "search screen" queries will have 20 or more optional parameters, making this unreasonable'

IF @MinID IS NOT NULL BEGIN
    IF @MaxID IS NOT NULL BEGIN
        SELECT * 
        FROM Test 
        WHERE ID BETWEEN @MinID and @MaxID 
    END ELSE BEGIN
        SELECT *
        FROM Test 
        WHERE ID >= @MinID 
    END
END ELSE BEGIN
    IF @MaxID IS NOT NULL BEGIN
        SELECT * 
        FROM Test 
        WHERE ID <= @MaxID 
    END ELSE BEGIN
        SELECT *
        FROM Test 
    END
END

PRINT 'However, the query optimizer can get around that if it''s making a one-time use plan'

SELECT *
FROM Test 
WHERE (ID >= @MinID OR @MinID IS NULL)
    AND (ID <= @MaxID OR @MaxID IS NULL)
OPTION (RECOMPILE)

PRINT 'And again with the single parameter'

SELECT *
FROM Test 
WHERE @ID = Test.ID OR @ID IS NULL
OPTION (RECOMPILE)

PRINT 'However, this leaves two nasty side effects.'
PRINT 'First, you create a new plan each time, using CPU to do so.  Practically unnoticed in this example, but it could be in the real world.'
PRINT 'Second, you don''t have stats in your cache saying how indexes are used or what your most expensive queries are.'
PRINT 'Another option is dynamic SQL, which still has most of the first flaw, and can also bloat the cache'

DECLARE @Cmd NVarChar(4000)

SELECT @Cmd = 'SELECT * FROM Test WHERE 1 = 1'

IF @ID IS NOT NULL
    SELECT @Cmd = @Cmd + ' AND ID = ' + CAST(@ID AS VarChar(100))

IF @MinID IS NOT NULL
    SELECT @Cmd = @Cmd + ' AND ID >= ' + CAST(@MinID AS VarChar(100))

IF @MaxID IS NOT NULL
    SELECT @Cmd = @Cmd + ' AND ID <= ' + CAST(@MaxID AS VarChar(100))

EXEC (@Cmd) 

PRINT 'Yes, you can do sp_executesql which can be parameterized, but it gets more difficult.'

END
GO
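That last PRINT statement deserves a quick illustration. This isn’t part of the original proc, just a rough sketch of what the parameterized sp_executesql version could look like, assuming the same @ID, @MinID, and @MaxID parameters are in scope.

--Sketch only; assumes @ID, @MinID, and @MaxID are already declared (e.g., inside proc_Testing)
DECLARE @Cmd NVarChar(4000) = N'SELECT * FROM Test WHERE 1 = 1'
	, @ParamList NVarChar(500) = N'@ID int, @MinID int, @MaxID int'

IF @ID IS NOT NULL
	SELECT @Cmd = @Cmd + N' AND ID = @ID'
IF @MinID IS NOT NULL
	SELECT @Cmd = @Cmd + N' AND ID >= @MinID'
IF @MaxID IS NOT NULL
	SELECT @Cmd = @Cmd + N' AND ID <= @MaxID'

--Parameterized, so each combination of filters compiles one reusable plan instead of one plan per value
EXEC sp_executesql @Cmd, @ParamList, @ID = @ID, @MinID = @MinID, @MaxID = @MaxID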

Now that we have the data and proc, just execute it. It will be best if you add the actual execution plans to this, which you can get to on your menu going to Query / Include Actual Execution Plan. If you don’t know how to read execution plans then this is even better! No experience necessary with plans this simple, and you’ll start to feel more comfortable around them.

EXEC Proc_Testing

Even with the default values for the parameters, the query optimizer knows that you could have easily passed in NULL for the values, and it has to plan for everything. Let’s step through the results to see what this all means.  Part of that planning for everything is allowing for @ID IS NULL to be true instead of ID = @ID, so the query optimizer can’t say for sure that it can just perform a seek on that column.

In the messages tab we have the number of reads done by each statement, courtesy of me adding SET STATISTICS IO ON in the proc itself (bad idea in prod, by the way). For the first one we have 7,272 logical reads, saying we read 7,272 pages of 8 KB of data, so we had to ask the CPU to play around with over 56 MB of data to find records that matched.

Looking at the execution plan we can see why that happened. The Predicate you can think about as the “Scan Predicate”, since it’s saying what you couldn’t seek for and had to scan instead. In this case it’s showing the entire “@ID = Test.ID OR @ID IS NULL” in there, because it compiled a reusable plan that might be used with a NULL.

Execution Plan of Query 1

In the next statement we were able to do a seek, doing only 3 reads. This is because there are two levels of the B+tree on this index, so it read the root level, then the next level of the B+tree, and finally the single leaf level of the index that contained the data we wanted.

Take a look at the execution plan here to see the difference with not only the Seek itself, but the Seek Predicate in the details. You’ll see that it uses @ID in there still showing that it’s a reusable plan ready for any ID to be passed in, but the WHERE clause is telling the query optimizer that it can always use a seek to find this data.

Execution Plan of Query 2

Now that I know you can read through a couple yourself, I’ll skip ahead to the 6th query where we used the same “@ID = Test.ID OR @ID IS NULL” that caused us trouble before, but added OPTION (RECOMPILE) at the end of the statement. Again it did the same 3 reads I just mentioned, still using a seek, but the details of the execution plan look different. The seek predicate used to say @ID so it could be reused, but the query optimizer knows nothing with OPTION (RECOMPILE) will get reused. Knowing that, it simply converted @ID into a constant value of 125 and was able to eliminate the possibility of 125 being NULL.

Execution Plan of Query 6

If you want to get more into the details, you can look at the XML version of the execution plan by right-clicking on the plan and selecting “Show Execution Plan XML…”. Here’s what you’ll find for the parameter’s value in the first two queries. It shows that the value that was sniffed when it was compiled was 125, and also that it was run this time with a value of 125. If you ran this again passing 130 in then you’d see the sniffed value staying at 125 because it’s not going to use CPU resources to recompile it, but the runtime value will be 130 for that run.

  <ColumnReference Column="@ID" ParameterCompiledValue="(125)" ParameterRuntimeValue="(125)" />

The OPTION(RECOMPILE) versions don’t look like this though, they’re changed to a constant value even at this level.

  <ColumnReference Database="[tempdb]" Schema="[dbo]" Table="[Test]" Column="ID" />
  <ScalarOperator ScalarString="(125)">
    <Const ConstValue="(125)" />
  </ScalarOperator>

I hope I left you with a better understanding of what happens when you write code for parameters that could be null. We’ve all done this, and many of us (me especially) have been frustrated with the performance. This is a change I put into production not too long ago that ended scans on a rather large table, and the results were noticed in query execution time, the page life expectancy counter since the entire table was no longer being read into memory, and I/O activity levels since that index and the indexes it kicked out of memory didn’t have to be read off of disk nearly as often.

Let me know what you think, and especially if you have a better way of handling this issue.

Why worry about CXPACKET

Many people see CXPACKET at the top of their waits and start trying to fix it. I know this topic’s been done before, but people ask me about it enough that it should be done again, and again. In fact, Brent Ozar wrote an excellent article about it, and Jonathan Kehayias wrote an article about tweaking it using the cost threshold of parallelism. Even with those, it took Grant Fritchey warning us to be careful about giving guidance to lead me to Bob Ward’s take on it.  After all of these people and more saying CXPACKET isn’t what you think it is, enough people have it all wrong to the point that I felt this was the biggest piece of public opinion to speak out against for Michael Swart’s T-SQL Tuesday #052.

BOL still isn’t completely clear when it defines it as “Occurs with parallel query plans when trying to synchronize the query processor exchange iterator. If waiting is excessive and cannot be reduced by tuning the query (such as adding indexes), consider adjusting the cost threshold for parallelism or lowering the degree of parallelism.” The true meaning of this wait is much simpler, with Bob Ward’s explanation of “This wait type only means a parallel query is executing.”

Yes, that’s it, it’s not really all that complex. If you have a MAXDOP of 4 then you could end up with 5 threads. Think of it as having one manager who can delegate work out to 4 subordinates. The subordinates do their pieces of the job and the manager marks down CXPACKET on his time sheet while he waits for them. When they finish, the manager compiles their work (repartitions the streams if you want). There was a little extra work done here because the manager wouldn’t have to combine all the work if he did it all on his own (MAXDOP = 1), and it’s possible the subordinates did more work than they had to. However, in most cases, it does get done faster, and duration is what all your users talk about.

The advice in BOL above saying “consider adjusting the cost threshold for parallelism or lowering the degree of parallelism” is incomplete at best. Upping the cost threshold is exactly what it sounds like, making it so only more expensive tasks can be assigned to those subordinates. Then lowering the degree of parallelism is just dropping how many subordinates you can use.

Cost Threshold for Parallelism is something Brent Ozar makes fun of a bit, because it’s a default of 5, measured in units that aren’t really defined anymore. Jeremiah Peschka recommends setting this to 50 and tweaking it from there. I’d recommend 25 for most OLTP because I like parallelism and my procs aren’t stressed. However, the point is that you should only tweak what you understand, and now I hope you have a decent understanding of this. If you have a lot of moderately large queries you wish could break out the work to get it done faster, drop it. If all of your CPUs are running hot and you wish less work was being done repartitioning streams or compiling more complex plans, raise it.

Max Degree of Parallelism is what people really like to tweak, and it’s usually a bad idea. My personal recommendation is to set it to 4 in OLTP (higher in OLAP, very dependent on your load) and forget it unless you have a really, really good reason to change it. The default of 0 is absolutely horrible for OLTP on larger servers, where this could mean that you’re breaking out a task and assigning it to 80 or more subordinates, then trying to put all of their work back together, and killing yourself doing it. This shouldn’t be above 8 unless you really like to be overworked putting your subordinates’ work back together and dealing with those performance problems. However, it’s great to have subordinates who can help you out when needed, so please don’t drop this below 4 unless you have an amazing reason to do so, and only on the query level if you only have one amazing reason.
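Both settings live in sp_configure if you’d rather script them than click through Server Properties; here’s a quick sketch using the numbers above, which you should adjust to your own environment.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

--Queries cheaper than this stay serial
EXEC sp_configure 'cost threshold for parallelism', 25;

--How many "subordinates" a single query can use
EXEC sp_configure 'max degree of parallelism', 4;

RECONFIGURE;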

So, what do you do when CXPACKET is taking over? The trick is to think of it in terms of how a business would work. You have a ton of work to do, enough where your managers are going crazy trying to find subordinates to do it for them. Do you tell your managers to do it themselves OPTION(MAXDOP=1), do you tell them to go out and find as many subordinates as they can OPTION(MAXDOP=0), or do you tell them you’ll see if you can help them find a more efficient way to do their jobs?

If you have the Cost Threshold for Parallelism set to 25 or 50 you have a decent limit before you consider spreading the load around. So, what takes that long for your server to handle? A good portion of the time you’ll find this when you’re scanning a large index or table, or even doing a very expensive seek. If you look at your execution plan, is it doing a scan? If it’s a seek that’s starting your parallelism, does it have a small “Seek Predicate” and a lot to do in the “Predicate” in the properties for that operator? It could be that you could use better indexes, it could be that you could find a more SARGable approach, or a variety of other performance tuning opportunities.

The thing is that parallelism isn’t bad and CXPACKET is saying you’re using a good thing. A high CXPACKET could mean you’re trying to use too much of a good thing because you’re “overworking your managers”, but the workload is typically more of an issue than the fact that you let your manager delegate their work when there’s enough to go around. A CXPACKET a lot higher than your baseline could mean that you have poor statistics or parameter sniffing that’s causing a bad execution plan, or it could mean you have a new query that could use some attention.  When you’re talking about parallelism it’s also possible that the statistics were off and one thread (subordinate) ends up doing 99% of the work, making CXPACKET jump up, and that can be found in the actual execution plans; read more about that in Grant Fritchey’s book.
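If you want numbers to compare against your baseline, they come from sys.dm_os_wait_stats; a minimal look at it might be:

--Cumulative since the last restart (or since the wait stats were cleared)
SELECT wait_type
	, waiting_tasks_count
	, wait_time_ms
	, avg_wait_ms = wait_time_ms / NULLIF(waiting_tasks_count, 0)
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET';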

If you have a MAXDOP around 4 (between 2 and 8, reading these recommendations) and a Cost Threshold of 25 or higher, then I’d be looking into the workload more than the fact that you’re using parallelism. Sure, some tasks just don’t take well to parallelism and there are reasons to use a MAXDOP=1 hint, but it’s rare to need to take it down to 2 on the server level and even more rare to turn it off by taking it down to 1.

Whatever you do, when you have “a big query using parallelism”, try to focus on “a big query” first; it’s usually the problematic part of that statement.


Free SQL Server Training (not a sales pitch)

My company cut the training budget this year, and it’s not that uncommon for that to happen anymore. However, I’m still getting my week of training in, and more, without paying much.

Blogs of people who talk too much – FREE

There’s so much for free online, but you’re reading my blog so I don’t have to dive too deep here. Follow a couple blogs in an RSS reader, and make sure you sign up for freebies they may offer like Paul Randal giving away training!

Personally, I try to read a majority of everything written by:

You can include me on this list if you’re feeling generous. However, they’re all smarter than me, and I’ll understand if you don’t.

Local User Groups – FREE

I know most DBAs don’t go to the local user groups. Some never go, some go when the topic is right, but you can benefit from these every time.

Even the presentations that are outside of your normal job duties are valuable. If someone’s teaching SSIS and you don’t use it, knowing what you can learn from an hour-long presentation will put you in a situation to have more intelligent conversations when these topics come up. Read over job descriptions, they expect DBAs to know everything, and here’s your chance to know a little about it.

Also, you can learn from every presentation. I don’t care if you could give that presentation, people in a field known for introverts could use practice and examples of how to present ideas to others. You’ll see good, you’ll see bad, and you’ll be better because of it. Also, one or two small details you might pick up on a topic you know extremely well will make a big difference.

Speaking of our field being known as introverts, this is your opportunity to step out of your comfort zone and present. That’s what I did. Some present because they were born to do it, and I started presenting because I was born to blend into a cube farm! I’ll be presenting at my local user group in April, have already done a SQL Saturday this year, and would like to do at least two more SQL Saturdays, which drags me (kicking and screaming) into improving my soft skills where a typical DBA, like me, is behind the curve.

Our last presentation by Bill Wolf was an excellent presentation on avoiding common coding mistakes. It wouldn’t have been too much of a stretch for me to make a similar presentation, but I still learned a couple things and met some good people. The next meeting on March 25th will be about the interdependency between SQL Server and .NET, which would be amazing for any DBA or .NET developer to go to, even if Dell wasn’t giving away free laptops as door prizes!

Finally, the presentations aren’t the real reason you’re going there, are they? You’re getting to know the other DBAs in the area, and you’re sitting there talking person-to-person to the ones that want to share everything they know with you. You get to know them, they get to know you, and when one of you wants a new job or new coworker then you have a better chance at finding a good fit that you can be comfortable with from the start.

If you’re not sure where your local user group is or if you even have one, start by looking at this link.  Most groups are affiliated with PASS, and that means they’ll be listed there.

SQL Saturday – $10 Lunch

SQL Saturday is a cross between the local user group and the PASS Summit. In fact, the last one I was at has been referred to as a mini PASS Summit because of the number of high-quality speakers there, including multiple names from the list of bloggers I mentioned above.

The cities that host these events typically do so as an annual event, and there’s probably one not too far from you each year. I live in Pittsburgh, PA, which hosts one just a couple minutes from my house. I’m also trying to make it to ones reasonably close to me such as Cleveland, OH a couple weeks ago that’s about 2 hours away, Philadelphia, PA coming up in June that’s a 5 hour drive where I have family I can stay with. I think I’ll skip Columbus, OH because it’s the week after Philly and I have three small kids. Although I won’t make it this year, I may go to Richmond, VA, which is a 6 hour drive where I also have family to stay with.

It’s been a cold winter here in Pittsburgh, and now I’m starting to think about how much I miss my relatives in Texas, California, Florida and South Carolina. It knocks this idea out of the free or low cost training blog post, but that doesn’t mean it’s a bad idea.

While my company did cut the budget, they still paid mileage and gave me Friday off as a travel day. Even without a budget, still ask to see what you can get.

Virtual PASS Chapters – FREE

PASS also does virtual online events. I know I’m risking making this sound like a PASS commercial by covering the local PASS chapter, the PASS SQL Saturdays, and now the Virtual PASS chapters, but it’s hard to talk about free or low cost SQL Server training without talking about these people. So, why would I avoid this sounding like a commercial for them? They deserve to be known, and with all this free junk they can’t have much of an advertising budget!

Anyways, the virtual chapters each have their own focus, and I’m a member of the PowerShell, Performance, Database Administration, and I think a couple others. They do presentations about once a month, and I try to watch them when I have time. Worst case, they usually make the recordings available later.

There are also a couple “24 hours of PASS” events throughout the year where there are 24 back-to-back sessions. The next one on the schedule is the Russian version, which I’ll pass on.

BrentOzar.com – FREE

In addition to their blogs, the Brent Ozar Unlimited team (Brent’s just a member, not their leader!!!!) does weekly webinars that are usually very useful to me. I’d say I watch 3 of them a month, missing one either due to my schedule or because it’s a rare one that I don’t feel bad about missing. They put up the recordings later, although I have to admit I usually don’t watch them if they aren’t live. I know I don’t make any sense, but you’ll get used to it.

Twitter – FREE

Seriously, if you can fit your question and #sqlhelp in 160 characters or less, you’ll have someone pointing you in the right direction within minutes. They’ll point you to the posts that you didn’t know what key words to Google to find, but keep in mind that they’re doing this for free. Always be grateful, and understand when they say that digging into the issue would mean a trip to the consultant.

The End – Sad, but still free

So, yeah, my company cut my training budget. That means that so far this year (March 3rd here) I had to stick to learning from Steve Jones, Grant Fritchey, Erin Stellato, Stacia Misner, Tom LaRock, and others in-person. Then I watched free videos by Jes Borland, Jeremiah Peschka, Brent Ozar, Kendra Little, and Doug Lane. I have a free month of the SQL Skills Pluralsight classes with a code that Paul Randal personally emailed me, with which I’m in the middle of a class by Jonathan Kehayias right now. I’m planning on taking full advantage of this trial with classes by Joe Sack, Erin Stellato, Paul Randal, Bob Beauchemin, and Kimberly Tripp, although I’ll end up running out of time and buying a subscription for $30 / month.

So, is a lack of a training budget stopping you? You can say it is, but it’s not.

I gave you a lot and I missed a lot. This is not all-inclusive, and a quick look around the internet will prove that point. There are still Free e-Books and many other resources out there. I have to start skipping stuff or this post will never make it out to you, and what good would that be?

Hey, look!!! There’s a section right below here marked “Comments” that’s perfect for your addition to the list. Maybe it can be all-inclusive!!!