I get a lot more out of Idera Diagnostic Manager because I query the information it collects. That information could be used for too many things to fit into a single GUI, so it's no fault of theirs that someone can do more with it. In fact, it's to Idera's credit that this information is stored in a way that is easy to understand and easy to query.
Querying this information started because I moved from one company that did a lot of manual monitoring to another that used Idera. A lot of the information I used to manually monitor was already being captured, and it’s best to use their data instead of collecting duplicate data.
Sharing this all started with a tweet. Someone went out of their way to create a twitter account just to send me this tweet. How can you say no to that?
With set-based logic deeply embedded in my train of thought, I made this page and shared the scripts all at once. Knowing that more needed to be said about each one, I'm slowly blogging them and creating others to build on them.
As time goes on, the links to blog posts will grow and the scripts below will disappear. Each time, a script will get a closer review from me and a thorough description of why I feel it's useful. Comments now will help me write those posts better from the start, and your involvement later will help me improve them. Us talking to each other is a good thing.
Blog Posts (This section will grow)
Wait Stats Baseline Comparison – Idera Diagnostic Manager. It’s great to know you’re waiting on something now. Is that normal for your server, or is this an incident you need to solve now? Also, if it’s always been that way, you know where to focus your efforts. This is where you start when someone says “I’m not sure, but I think the database is slower today.”
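If you don't have the repository handy, the "now" side of that comparison can be sketched directly against SQL Server. This is a minimal sketch, not Idera's method: it samples sys.dm_os_wait_stats twice and diffs the cumulative counters to show what the server is waiting on during the sample window.

```sql
--Minimal sketch: what is the server waiting on right now?
--sys.dm_os_wait_stats is cumulative since restart (or clear), so take two
--samples and compare the difference.
IF OBJECT_ID('tempdb..#WaitBaseline') IS NOT NULL DROP TABLE #WaitBaseline;

SELECT wait_type, wait_time_ms, waiting_tasks_count
INTO #WaitBaseline
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:01:00'; --sample window; pick what fits your situation

SELECT TOP 10 ws.wait_type
    , WaitTime_ms = ws.wait_time_ms - b.wait_time_ms
    , WaitCount   = ws.waiting_tasks_count - b.waiting_tasks_count
FROM sys.dm_os_wait_stats ws
    INNER JOIN #WaitBaseline b ON b.wait_type = ws.wait_type
WHERE ws.wait_time_ms > b.wait_time_ms
ORDER BY ws.wait_time_ms - b.wait_time_ms DESC;
```

The point of the blog post above is the other half: comparing that snapshot to what Idera has been collecting for weeks, which tells you whether today is actually abnormal.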
I/O Baseline Comparison – Idera Diagnostic Manager. Beyond just looking at PLE to try to guess how much physical I/O is being done on each server, look at the actual counters for pages read (Reads and Read Ahead) and compare that to pages used by queries (Lookups) over periods of time. Then compare your recent results to your historical values to determine if something is different now, for better or worse.
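The counters mentioned here can also be read straight from SQL Server while you wait on the repository numbers. A minimal sketch, not the repository query itself: these Buffer Manager counters in sys.dm_os_performance_counters are cumulative, so diffing two samples gives you the activity in between.

```sql
--Minimal sketch: physical reads vs. logical lookups over a short window.
--These counters are cumulative, so sample twice and diff.
DECLARE @Reads BIGINT, @ReadAhead BIGINT, @Lookups BIGINT;

SELECT @Reads     = MAX(CASE WHEN counter_name = 'Page reads/sec'      THEN cntr_value END)
     , @ReadAhead = MAX(CASE WHEN counter_name = 'Readahead pages/sec' THEN cntr_value END)
     , @Lookups   = MAX(CASE WHEN counter_name = 'Page lookups/sec'    THEN cntr_value END)
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%';

WAITFOR DELAY '00:00:30';

SELECT PagesRead      = MAX(CASE WHEN counter_name = 'Page reads/sec'      THEN cntr_value END) - @Reads
     , ReadAheadPages = MAX(CASE WHEN counter_name = 'Readahead pages/sec' THEN cntr_value END) - @ReadAhead
     , PageLookups    = MAX(CASE WHEN counter_name = 'Page lookups/sec'    THEN cntr_value END) - @Lookups
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%';
```

A low ratio of reads to lookups means most pages were already in cache; Idera's history lets you see whether that ratio is drifting.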
Database Growth Trends – Idera Diagnostic Manager. When will your database grow, and when will it run out of space? Based on how large your database was in the past, how much free space you have on disk, and your growth rates, you can get a decent guess as to when these things will happen. While the past doesn't predict the future perfectly, it gets you close more often than not.
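The arithmetic behind that guess is simple enough to show on its own. A back-of-the-envelope sketch with made-up numbers (the sizes and free space here are purely illustrative; the real script pulls them from the repository and the OS):

```sql
--Back-of-the-envelope projection: given two size samples and the free space
--on the drive, estimate the days until you run out.
--All three inputs below are made-up numbers for illustration.
DECLARE @SizeMB_Now    INT = 520000  --database size today
      , @SizeMB_91Days INT = 475000  --database size 91 days ago
      , @FreeMB_OnDisk INT = 180000; --free space left on the drive

DECLARE @GrowthMB_PerDay DECIMAL(18,2) = (@SizeMB_Now - @SizeMB_91Days) / 91.0;

SELECT GrowthMB_PerDay = @GrowthMB_PerDay
     , DaysUntilFull   = CASE WHEN @GrowthMB_PerDay > 0
                              THEN CAST(@FreeMB_OnDisk / @GrowthMB_PerDay AS INT)
                              ELSE NULL END; --not growing, so no projected fill date
```

Linear projection is crude, but it catches the "we have six weeks, not six months" cases early.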
Original Page (This section will shrink)
As I blog each script, I will improve upon it and delete it from this section. I only keep one copy of each script on my blog to keep it normalized. 🙂
These scripts work for me, in my environment, on Idera Diagnostic Manager 9.0. Remember: trust, but verify; after all, this is the internet.
I plan on blogging each one of these scripts in the long run. Right now there's a reason each hasn't been blogged, ranging from "needs to be refined and tested more" to "haven't run any of these against DM 10 yet". Even more so than with any other scripts you find on my blog or elsewhere on the internet… don't trust them until you've verified them.
More should be said about each one of these and how I use them, but that takes time. Until then, here’s a quick sentence or two of “oh, that’s what he’s going for here”, and I expect to write a proper blog post on most of these eventually. Hopefully, I’ll even have scripts from other monitoring software vendors as well.
As I blog each one, I’ll probably replace the script on this page with a link to the blog post which contains the script. This way I’m keeping one post consistent as I make changes due to comments and other reasons. This paragraph should also be read as “your comments make a difference” and “blogger doesn’t like talking to a wall”. 🙂
I do not work for Idera, I do not receive compensation for this post or anything else from Idera, my scripts were not verified by Idera, and this page is not an official endorsement for Idera. I’m just a DBA that wanted to see more stats without duplicating the data capture that was already being done.
Sorry for the mess, and enjoy!
View recent deadlocks captured by Idera, optionally filtered by a specific instance.
DECLARE @InstanceName SYSNAME = '' --Not required
      , @Hours Int = 72;

WITH CTE AS
(
    SELECT RowID = ROW_NUMBER() OVER (ORDER BY D.UTCCollectionDateTime) --assign a row number to each deadlock
        , DeadlockGraph = CAST(D.XDLData AS XML)
        , D.UTCCollectionDateTime
        , SS.InstanceName
    FROM SQLdmRepository.dbo.Deadlocks D
        INNER JOIN SQLdmRepository.dbo.MonitoredSQLServers SS ON D.SQLServerID = SS.SQLServerID
    WHERE UTCCollectionDateTime > DATEADD(Hour, -1 * @Hours, GETUTCDATE())
        AND (UPPER(SS.InstanceName) = UPPER(@InstanceName) OR ISNULL(@InstanceName, '') = '')
)
SELECT *
FROM
(
    SELECT InstanceName = CTE.InstanceName
        , TimeoutID = CTE.RowID
        , DeadlockTime = DATEADD(Minute, DATEDIFF(Minute, GETUTCDATE(), GETDATE()), CTE.UTCCollectionDateTime) --UTC to local time
        , CTE.DeadlockGraph
        --The (...)[1] singletons are required by the xml value() method
        , PagelockObject = CTE.DeadlockGraph.value('(/deadlock-list/deadlock/resource-list/pagelock/@objectname)[1]', 'varchar(200)')
        , DeadlockObject = CTE.DeadlockGraph.value('(/deadlock-list/deadlock/resource-list/objectlock/@objectname)[1]', 'varchar(200)')
        , Victim = CASE WHEN Deadlock.Process.value('@id', 'varchar(50)') = CTE.DeadlockGraph.value('(/deadlock-list/deadlock/@victim)[1]', 'varchar(50)') THEN 1 ELSE 0 END
        , DatabaseName = Deadlock.Process.value('@databaseName', 'sysname')
        , SPID = Deadlock.Process.value('@spid', 'int')
        , ProcedureName = Deadlock.Process.value('(executionStack/frame/@procname)[1]', 'varchar(200)')
        , LockMode = Deadlock.Process.value('@lockMode', 'char(1)')
        , Code = Deadlock.Process.value('(executionStack/frame)[1]', 'varchar(1000)')
        , ClientApp = CASE LEFT(Deadlock.Process.value('@clientapp', 'varchar(100)'), 29)
            WHEN 'SQLAgent - TSQL JobStep (Job ' THEN
                'SQLAgent Job: '
                + (SELECT name
                   FROM msdb..sysjobs sj
                   WHERE SUBSTRING(Deadlock.Process.value('@clientapp', 'varchar(100)'), 32, 32) = SUBSTRING(sys.fn_varbintohexstr(sj.job_id), 3, 100))
                + ' - ' + SUBSTRING(Deadlock.Process.value('@clientapp', 'varchar(100)'), 67, LEN(Deadlock.Process.value('@clientapp', 'varchar(100)')) - 67)
            ELSE Deadlock.Process.value('@clientapp', 'varchar(100)')
          END
        , HostName = Deadlock.Process.value('@hostname', 'varchar(20)')
        , LoginName = Deadlock.Process.value('@loginname', 'varchar(20)')
        , TransactionTime = Deadlock.Process.value('@lasttranstarted', 'datetime')
        , InputBuffer = Deadlock.Process.value('(inputbuf)[1]', 'varchar(1000)')
    FROM CTE
        CROSS APPLY CTE.DeadlockGraph.nodes('/deadlock-list/deadlock/process-list/process') AS Deadlock(Process)
) X --In a subquery to make filtering easier (use column names, not XML parsing), no other reason
ORDER BY TimeoutID DESC;
What tables are growing and how big were they before?
DECLARE @Instance SYSNAME = NULL
      , @DatabaseName SYSNAME = 'SQLdmRepository'
      , @TableName SYSNAME = NULL
      , @DateStart DateTime = GETUTCDATE();

--Because the database is case sensitive, but my brain isn't.
SELECT @Instance = UPPER(@Instance)
    , @DatabaseName = UPPER(@DatabaseName)
    , @TableName = UPPER(@TableName);

--The day counts are all divisible by 7 because data is only collected on weekdays by default
WITH d7 AS
(
    SELECT tg.*
    FROM SQLdmRepository..TableGrowth tg
    WHERE tg.UTCCollectionDateTime BETWEEN @DateStart-8 AND @DateStart-7
)
, d28 AS
(
    SELECT tg.*
    FROM SQLdmRepository..TableGrowth tg
    WHERE tg.UTCCollectionDateTime BETWEEN @DateStart-29 AND @DateStart-28
)
, d56 AS
(
    SELECT tg.*
    FROM SQLdmRepository..TableGrowth tg
    WHERE tg.UTCCollectionDateTime BETWEEN @DateStart-57 AND @DateStart-56
)
, d91 AS
(
    SELECT tg.*
    FROM SQLdmRepository..TableGrowth tg
    WHERE tg.UTCCollectionDateTime BETWEEN @DateStart-92 AND @DateStart-91
)
, d182 AS
(
    SELECT tg.*
    FROM SQLdmRepository..TableGrowth tg
    WHERE tg.UTCCollectionDateTime BETWEEN @DateStart-183 AND @DateStart-182
)
, d364 AS
(
    SELECT tg.*
    FROM SQLdmRepository..TableGrowth tg
    WHERE tg.UTCCollectionDateTime BETWEEN @DateStart-365 AND @DateStart-364
)
SELECT TOP 100 s.InstanceName
    , d.DatabaseName
    , t.SchemaName
    , t.TableName
    , DataSize_MB = CAST(tg.DataSize / 1024 AS INT)
    , DataSize_MB_7days = CAST(d7.DataSize / 1024 AS INT)
    , DataSize_MB_28days = CAST(d28.DataSize / 1024 AS INT)
    , DataSize_MB_56days = CAST(d56.DataSize / 1024 AS INT)
    , DataSize_MB_91days = CAST(d91.DataSize / 1024 AS INT)
    , DataSize_MB_182days = CAST(d182.DataSize / 1024 AS INT)
    , DataSize_MB_364days = CAST(d364.DataSize / 1024 AS INT)
    , TotalSize_MB = CAST((tg.DataSize + tg.[TextSize] + tg.IndexSize) / 1024 AS INT)
    , TotalSize_MB_7days = CAST((d7.DataSize + d7.[TextSize] + d7.IndexSize) / 1024 AS INT)
    , TotalSize_MB_28days = CAST((d28.DataSize + d28.[TextSize] + d28.IndexSize) / 1024 AS INT)
    , TotalSize_MB_56days = CAST((d56.DataSize + d56.[TextSize] + d56.IndexSize) / 1024 AS INT)
    , TotalSize_MB_91days = CAST((d91.DataSize + d91.[TextSize] + d91.IndexSize) / 1024 AS INT)
    , TotalSize_MB_182days = CAST((d182.DataSize + d182.[TextSize] + d182.IndexSize) / 1024 AS INT)
    , TotalSize_MB_364days = CAST((d364.DataSize + d364.[TextSize] + d364.IndexSize) / 1024 AS INT)
    , NumRows_k = tg.NumberOfRows / 1000
    , NumRows_k_7days = d7.NumberOfRows / 1000
    , NumRows_k_28days = d28.NumberOfRows / 1000
    , NumRows_k_56days = d56.NumberOfRows / 1000
    , NumRows_k_91days = d91.NumberOfRows / 1000
    , NumRows_k_182days = d182.NumberOfRows / 1000
    , NumRows_k_364days = d364.NumberOfRows / 1000
FROM SQLdmRepository..TableGrowth tg
    INNER JOIN SQLdmRepository..SQLServerTableNames t ON t.TableID = tg.TableID
    INNER JOIN SQLdmRepository..SQLServerDatabaseNames d ON t.DatabaseID = d.DatabaseID
    INNER JOIN SQLdmRepository..MonitoredSQLServers s ON d.SQLServerID = s.SQLServerID
    LEFT JOIN d7 ON d7.TableID = tg.TableID
    LEFT JOIN d28 ON d28.TableID = tg.TableID
    LEFT JOIN d56 ON d56.TableID = tg.TableID
    LEFT JOIN d91 ON d91.TableID = tg.TableID
    LEFT JOIN d182 ON d182.TableID = tg.TableID
    LEFT JOIN d364 ON d364.TableID = tg.TableID
WHERE tg.UTCCollectionDateTime BETWEEN @DateStart-1 AND @DateStart
    AND (UPPER(s.InstanceName) = @Instance OR @Instance IS NULL)
    AND (UPPER(d.DatabaseName) = @DatabaseName OR @DatabaseName IS NULL)
    AND (UPPER(t.TableName) = @TableName OR @TableName IS NULL)
ORDER BY tg.DataSize DESC
OPTION (RECOMPILE) --Due to UPPER(x) = @x OR @x IS NULL