File Growths

I hinted at file growths in my previous posts about shrinking data and log files.  Then I talked about growing log files in my post Database Log VLFs.  However, there are still some unanswered questions about growing data files.

Should I use autogrowth?  What should I set it to grow by?

Why do manual growths, and how do I know how much to grow by or when a growth is needed?

Can I and should I use Instant File Initialization?  If I can and should, how do I even know if it’s turned on?

Leave Autogrowth On?

Yes, absolutely.  This isn’t a question.

This could be worded as “Do you want a slight pause or do you want your application to fail?”  The answer is a little more obvious when you word it that way.  The only time I’ve seen an appropriate answer of “No” is when you have multiple data files in a filegroup and only want one to grow.

To answer how long this pause will be, you have to ask two questions.  Are you able to use Instant File Initialization?  If you can’t, how much are you growing by?  I’ll discuss Instant File Initialization later in this post, but it’s basically the difference between SQL Server saying “this space is yours immediately” and it first writing zeros it doesn’t care about just to be paranoid about security.

There are competing factors in deciding how large your autogrowth should be.  The only real consensus is that the defaults are horrible.  You need to balance how long a pause each growth will take, how long of a pause your users will tolerate, how fast your database is growing, how much fragmentation smaller growths will cause, and how much fragmentation you will tolerate.

By default, data files grow by 1 MB and log files grow by 10%.  I covered log file growths rather well in my post Database Log VLFs, so we’ll focus on the data files here.  Think about it: if you have a 100 GB database that got there by the defaults, it got there in 102,400 pieces.  Not good.

If the defaults are horrible, what’s good?  Tiny is bad, huge is bad, and percentage-based growth just lets you automatically drift from tiny to huge.  Do your balancing act I talked about above.  If growing by 64 MB will keep you good for several weeks, set it to 64 MB.  If 1 GB goes by rather quickly, you can’t use Instant File Initialization, and the users can tolerate the pause it takes to grow by 1 GB, then this might be the best growth rate for you.  No single answer is right, but at least you have a better idea of what the variables are.
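
If you’d rather set the growth rate by hand than click through SSMS, it’s a one-line change.  Here’s a minimal sketch, assuming a hypothetical database named MyDB with a data file whose logical name is MyDB_Data:

--Hypothetical names; sets a fixed 256 MB autogrowth increment instead of the default
ALTER DATABASE [MyDB] MODIFY FILE (NAME = N'MyDB_Data', FILEGROWTH = 256MB)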

There are also cases where there are no good answers.  Say you have an app where it wouldn’t be uncommon to grow by a couple GB a day, you can’t use Instant File Initialization, and pauses of more than a second wouldn’t go over very well.  You don’t want the fragmentation of setting the growth to 64 MB, but you don’t want the pause of setting it to 1 GB.  Maybe you’ll settle for 256 MB, but you’re not happy about it.  Manual growths are the real answer, but you want a safety net that won’t kill you with pauses or fragmentation.  Not everyone is in that situation, but some of us are.

If I’m going to suggest leaving Autogrowth on and suggest that you be aware of the size of these growths, then the least I can do is give you a script for that.  This will tell you the size of every database file and the autogrowth rate.  I’ll often use it to tweak the growth rates, so I script that change out in the results as well.  If you read the post Database Log VLFs then it will make more sense why I use 1,000 MB here instead of 1,024 MB; it wasn’t a typo.

DECLARE @FilterByDB VarChar(100)
	, @Cmd NVarChar(4000)

SELECT @FilterByDB = ''
--SELECT @FilterByDB = 'tempdb'

IF Object_ID('TempDB..##FileSize') IS NOT NULL BEGIN
	DROP TABLE ##FileSize
END

CREATE TABLE ##FileSize
(	  DB VarChar(128)
	, FileLogicalName VarChar(128)
	, FilePhysicalName VarChar(512)
	, FileGroup VarChar(128)
	, Used_MB VarChar(128)
	, Free_MB VarChar(128)
	, Size_MB VarChar(128)
	, GrowthRate VarChar(128)
	, MaxSize VarChar(128)
)


SELECT @Cmd =  N'use [?]; 

IF ''?'' like ''' + @FilterByDB + '%'' OR ''' + @FilterByDB + ''' = '''' BEGIN 
INSERT INTO ##FileSize
SELECT DB = db_name()
	, FileLogicalName = f.name
	, FilePhysicalName = f.physical_name
	, FileGroup = ISNULL(g.name, f.Type_Desc) 
	, Used_MB = REPLACE(CONVERT(varchar(100), (CAST((FileProperty(replace(f.name, '''', ''''''''), ''SpaceUsed'')) / 128 AS money)), 1), ''.00'', '''')
	, Free_MB = REPLACE(CONVERT(varchar(100), (CAST((f.size - FileProperty(f.name, ''SpaceUsed'')) / 128 AS money)), 1), ''.00'', '''')
	, Size_MB = REPLACE(CONVERT(varchar(100), (CAST((f.size) / 128 AS money)), 1), ''.00'', '''')
	, GrowthRate = Case Is_Percent_Growth
					WHEN 1 THEN '''' + Cast(Growth as VarChar(100)) + ''%''
					ELSE REPLACE(CONVERT(varchar(100), (CAST(Growth/128 AS money)), 1), ''.00'', '''') + '' MB''
					END 
	, MaxSize = Case Max_Size
					WHEN -1 THEN ''---''
					WHEN 268435456 THEN ''---''
					ELSE REPLACE(CONVERT(varchar(100), (CAST(Max_Size/128 AS money)), 1), ''.00'', '''') + '' MB''
					END
FROM sys.database_files f
	LEFT JOIN sys.filegroups g on f.data_space_id = g.data_space_id

END
' 
IF @FilterByDB <> '' AND 100 < (SELECT COUNT(1) FROM sys.databases) BEGIN
	SELECT @Cmd = REPLACE(@Cmd, '?', @FilterByDB)

	EXEC (@Cmd)
END ELSE BEGIN
	exec sp_MSforeachdb @Cmd 
END

SELECT *
	, PctUsed = Cast(100 * replace(Used_MB, ',', '') / (Cast(replace(Size_MB,',', '') as Dec(20,2)) + .01) as Dec(20,2))
	, SetGrowth64MB = 'ALTER DATABASE [' + db + '] MODIFY FILE ( NAME = N''' + FileLogicalName + ''', /*Size = MB,*/ FILEGROWTH = 64MB )'
	, SetGrowthTo256MB = 'ALTER DATABASE [' + db + '] MODIFY FILE ( NAME = N''' + FileLogicalName + ''', /*Size = MB,*/ FILEGROWTH = 256MB )'
	, SetGrowthTo1000MB = 'ALTER DATABASE [' + db + '] MODIFY FILE ( NAME = N''' + FileLogicalName + ''', /*Size = MB,*/ FILEGROWTH = 1000MB )'
FROM ##FileSize
ORDER BY 1, 4, /*cast(replace(size_mb, ',', '') as int) DESC,*/ FilePhysicalName

DROP TABLE ##FileSize

Manual Growths

If you want to be extreme about it, autogrowth is like asking someone about wearing a seatbelt.  Do you want it to hurt a little or a lot if you “crash” by running out of allocated space?  Well, you want it to hurt…wait…why do I have to crash?

This may seem over-the-top, but some applications may have databases like the last one I talked about in the autogrowth section.  Typically we’d prefer not to have autogrowth handling everything, simply to avoid the pause while it grows, but there are cases like that where it’s more than a slight preference to avoid the pause.

So the autogrowths are the seatbelts we put on just in case, while catching everything with manual growths is our real goal of avoiding that “crash”.  However, how are you supposed to know when to grow a file or how much to grow it by?  Seeing how large your files are now doesn’t tell you how much they’ll grow in the next couple months; they could be static or growing exponentially.

The partial answer is to look at how much data was used before and how much is being used now, then predict future usage from that trend.  A better answer is to capture the peak usage over a week and compare that to past weeks, which accounts for things that cause spikes in file usage such as index rebuilds or data loads.
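
If you don’t have monitoring software capturing this for you yet, here’s a rough sketch of one way to eyeball the trend.  This isn’t the method from the posts below; it just assumes your weekly full backup sizes roughly track how much data each database holds, which is close enough for spotting a trend.

SELECT database_name
	, WeekStart = DATEADD(WEEK, DATEDIFF(WEEK, 0, backup_start_date), 0)
	, MaxFullBackup_MB = MAX(backup_size) / 1024 / 1024 --backup_size is in bytes
FROM msdb.dbo.backupset
WHERE type = 'D' --Full backups only
GROUP BY database_name, DATEADD(WEEK, DATEDIFF(WEEK, 0, backup_start_date), 0)
ORDER BY database_name, WeekStart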

If you have monitoring software, it may have that answer just waiting for you to query it.  That’s the situation I find myself in right now, and I wrote Database Growth Trends – Idera Diagnostic Manager talking about how I handle it right now.  However, I used to work with monitoring software that pretty much only told me OS-level info, so we watched this data manually as was discussed in my post Monitoring Database and Table Sizes.

To be clear, many databases are fine with autogrowth handling everything.  Do what’s best for you for each individual database.  I personally have a mix of databases where some I’d be upset if autogrowth kicked in because it would cause timeouts that the app handles extremely poorly, and others, such as my DBA database, where a pause would only affect my collection jobs or a query I’m personally running.  Autogrowth is not inherently evil, so only be paranoid where paranoia is called for.

Performing Manual Growths

Manually growing the file is just setting a new initial size.  To do that in SSMS, right-click on a database, properties, files, then change the initial size to how large you want it to be.

I’m not a big fan of having SSMS do things for me, so I’ll use “Script Action to New Query Window” and run it from there.  This will give me a script like this:

USE [master]
GO
ALTER DATABASE [DBA] MODIFY FILE ( NAME = N'DBA_Data', SIZE = 256000KB )
GO

One thing I wish I had known years earlier is that although SSMS scripts the size in KB, the SIZE parameter also accepts MB.  I write just about everything in MB, so it’s more natural for me to write that script with Size = 250MB instead.  I know I’m being petty, but I’m petty and comfortable.
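
Here’s the same growth written the way I’d type it:

USE [master]
GO
ALTER DATABASE [DBA] MODIFY FILE ( NAME = N'DBA_Data', SIZE = 250MB )
GO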

This script could take a couple milliseconds or several seconds, even for 250MB.  It all depends on if you have Instant File Initialization turned on and if you’re able to use it for this growth.

Instant File Initialization – What is it?

Short answer, this makes growths exponentially quicker by opening an often-acceptable small security hole.

Of course, I always have a long answer…

By default, all space used by SQL Server is zeroed out before control is handed over.  This makes sure that it’s clean space for two basic reasons.  First, the database requires that space to be clean because of how it plans to use it.  Second, we don’t want to risk the possibility of special commands being able to pull back unencrypted sensitive data that used to be there.

Now, the first reason is one we don’t have much control or say over at this point in the conversation.  If you’re growing a log file then it needs to be zeroed out, and SQL Server will zero out each VLF when it’s marked for reuse, too.  For data files, if you’re using Transparent Data Encryption (TDE) then the way SQL Server stores the data also requires the space to be zeroed out.  These things are outside the conversation on file growths; it’s just what we have to deal with, and SQL Server won’t use IFI for them even if it’s turned on.

The one thing that will be pulled into this discussion is the security aspect of using uninitialized space.  Kimberly Tripp demonstrates this the best in her post Instant Initialization – What, Why and How? in the “Why isn’t this on by default?” section.

I’ll try to make it short and sweet in case you don’t want to read yet another post (I recommend you read it).  You can ask SQL Server to read specific pages off the disk.  If SQL Server never actually used a page, it still contains whatever was there when the space was handed to SQL Server.  Zero-initialized space is very boring, just “0000000000….”.  Not much fun.

Uninitialized space is more fun (depending on your meaning of the word) and could contain old data.  If you had unencrypted sensitive data sitting there before, it could contain enough to steal someone’s identity.  You never know.  These commands will only work if you have the permissions to run them, but who has those permissions and how much do you trust them?  Most organizations say it’s fine, but some won’t.
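
If you want to see what I’m talking about, here’s a minimal sketch using the undocumented DBCC PAGE command.  The database name, file id, and page id are placeholders; point it at a test instance you’re allowed to poke at.

DBCC TRACEON (3604) --Send DBCC PAGE output to the results window instead of the error log
DBCC PAGE ('DBA', 1, 100, 2) --Database, file id, page id; dump style 2 is a full hex dump of the page
DBCC TRACEOFF (3604)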

Instant File Initialization – How do I use it?

This is an odd feature in that you can’t just see it in a table in SQL Server.  The easiest way to check is to run an xp_cmdshell command and look for it, but you can optionally turn on a couple trace flags, grow or create a database, turn off the trace flags, then look in your error log.

Paul Randal gets into both methods in his post How to tell if you have instant file initialization enabled?.  I don’t have any issues with xp_cmdshell, so I’ll let you go to Paul’s blog if your company insists on the more difficult method.
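
If you just want to see the shape of that trace flag method, here’s a rough sketch.  The throwaway database name is made up, and I’m assuming trace flags 3004 and 3605, which are the ones Paul’s post walks through.

DBCC TRACEON (3004, 3605, -1) --3004 logs file zeroing activity, 3605 sends it to the error log
GO
CREATE DATABASE IFI_Test --Throwaway database just to trigger data and log file creation
GO
DROP DATABASE IFI_Test
GO
DBCC TRACEOFF (3004, 3605, -1)
GO
--If the new data file shows up in the "Zeroing" messages, IFI was NOT used for it.
--The log file always gets zeroed, so a message for the log file alone is expected either way.
EXEC sp_readerrorlog 0, 1, 'Zeroing'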

As for me, I’ll run the query below as a multiserver query in SSMS and look at the results.  If that sounds foreign to you, check out my Multiserver Queries post; it’s a very simple and useful technique for specific tasks such as this one.  In fact, I wrote that post so I could link to it in this post.

However, even if IFI is turned on, it’s only going to work for data files that aren’t encrypted with TDE.  Logs and TDE data files need the zeros.  Also, if any database on your instance is using TDE then so is TempDB, which means its data files can’t use IFI either.
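
If you’re not sure whether TDE is in play, this quick side check will tell you before you bother with the main script; is_encrypted comes straight from sys.databases.

SELECT name
	, is_encrypted
FROM sys.databases
WHERE is_encrypted = 1

Now, on to the script I actually run: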

CREATE TABLE #Output
(
	DataOut VarChar(2000)
)

DECLARE @XP_CmdShell_Enabled int

SELECT @XP_CmdShell_Enabled = CONVERT(INT, ISNULL(value, value_in_use))
FROM master.sys.configurations
WHERE name = 'xp_cmdshell'

IF @XP_CmdShell_Enabled = 0 BEGIN
	EXEC sp_configure 'show advanced options', 1
	RECONFIGURE WITH OVERRIDE 
	EXEC sp_configure 'xp_cmdshell', 1
	RECONFIGURE WITH OVERRIDE 
END 

INSERT INTO #Output 
EXEC xp_cmdshell 'whoami /priv'

IF @XP_CmdShell_Enabled = 0 BEGIN
	EXEC sp_configure 'xp_cmdshell', 0
	RECONFIGURE WITH OVERRIDE 
END 

--SeManageVolumePrivilege is the "Perform volume maintenance tasks" right that allows IFI
SELECT IFI_Enabled = COUNT(1) 
FROM #Output 
WHERE DataOut LIKE '%SeManageVolumePrivilege%Enabled%'

DROP TABLE #Output

If IFI is turned off and it’s proper to turn it on for this server, here are your steps to turn it on:

  1. Open Administrative Tools / Local Security Policy
  2. Local Policies
  3. User Rights Assignment
  4. Open “Perform volume maintenance tasks”
  5. Add the user for the SQL Server service account

If IFI can’t be used, here are your steps (which are also good even if IFI is being used):

  1. Monitor database sizes for manual growths
  2. Grow files manually

The SQL Server service account, if you don’t know which account it is, can be found in Administrative Tools / Services / SQL Server (InstanceName) under “Log On As”.  There are easier ways to find the service account name in bulk if you’re doing this a lot, such as a slight variation of the query above.
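
For example, swapping ‘whoami /priv’ for plain ‘whoami’ returns the account name itself, since xp_cmdshell runs under the service account.  This assumes xp_cmdshell is enabled; wrap it in the same enable/disable logic as the script above if it isn’t.

EXEC xp_cmdshell 'whoami' --Returns the account the SQL Server service (and therefore xp_cmdshell) runs as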

xp_cmdshell Rant

Now, to be clear on xp_cmdshell which I used in the script above…. I do believe in security, but I don’t believe disabling this feature is the security you’re looking for.  When it’s set up properly, disabling xp_cmdshell is like locking the door to your data center’s supply closet with a keyless lock instead of limiting access to the data center.

I do have issue with granting execute permissions on this to non-sysadmin accounts.  I also have issue with granting sysadmin to anyone who asks, including vendors who insist that’s the only way their app will work.  This means the only users that can use it are the very, very few with sysadmin.  Also, any sysadmin can turn it on anyways, so the only extra security you get is a message in the error log saying it was turned on, but not even who did it.

My recommendation is that if you need it, leave it on.  If you don’t, leave it off.  If you have a one-time script that needs it, such as checking for IFI, turn it on just long enough to do what you need to do then automate turning it off.  The script follows this recommendation; it’s left on if it was already on, but turns it off if it was off.

It’s time for me to admit I’m going off on a tangent, so I’ll pass you off to Sean McCown’s (b|t) Security Theater post if you’re too fired up to stop now.

In Summary

Autogrowth – Yes.  Just as a fail-safe if performance is important to you.  Limit the size to limit the performance impact, but don’t cause undue fragmentation.

Manual Growth – Yes.  Trend how fast you’re growing so you know how to stay ahead of it.

Instant File Initialization – Usually.  Understand the limitations and security risks, and turn it on if it’s the right thing to do.

Entry-Level Content Challenge

This is my fourth post in Tim Ford’s Entry Level Content Challenge.  I went through much of my career with people saying to manually grow files, but it was left at that.  I had no clue how to see how much space was used without clicking through the GUI, and passed it off as being ridiculous.  Now I’ve found ways to make it easy, and I’m betting there are a lot of people out there doing too much work or skipping this simply for not knowing how.

Read over Tim’s challenge and consider joining in.  It’s a great way to help people get past barriers you had in the past while learning how to clear them a little more efficiently yourself.  Anyone who’s reading this post already knows how to clear some barriers that others are viewing as an impassible brick wall, so you can make a difference for them.


Multiserver Queries

You can run queries against multiple servers at once, and it’s quite useful for a number of reasons.  I use it to check settings, verify backups and DBCC ran recently on unmonitored servers, make sure all servers rebooted during the maintenance window, and many other reasons.

This is all done through registering servers on SQL Server and opening a new query for the group.  I’ll walk you through that then run a couple queries that I find useful.

Unfortunately, this method is so easy that you’ll only ever have to see it once and won’t be back to visit this blog post again.  Hopefully the scripts below will inspire a return visit.

Setting It Up

Registered servers are local to your SSMS install.  You’re not changing server settings by setting this up, and this will be lost if you reinstall SSMS without exporting your settings.

First, open the Registered Servers pane by going to View / Registered Servers.

[Screenshot: View Registered Servers]

Add a new server group and call it anything you want.  I’ll call mine blog because that’s what I’m using it for.  I love the idea of having Prod and Non-Prod groups, especially since I can run a query against all my Non-Prod servers that aren’t monitored to verify backups and DBCC checks are being done.

It’s important to note at this point that you can have a server in more than one group and groups can be nested.  So in my prod group I have groups for each data center, then the servers are at that level.  I could also have the same servers in functional groups, such as Finance, IT, and Why_Me.

[Screenshot: Create Registered Servers Group]

Right-click and do a New Server Registration, and the options should be pretty natural to you.  For this example, I used aliases for “Registered Server Name”, but I stick to the default when doing this for myself.

At the most basic level, it should look like this.

[Screenshot: Registered Server Group]

That’s it, you’re set up.

Running Queries

This is easier than the setup.

Right-click on a group and click on New Query.

[Screenshot: Multiserver New Query]

It opens a new query window, with the only oddity being that instead of a yellowish bar saying “Connected. (1/1)”, you now have a pink bar saying “Connected. (2/2)” along with the group name.

[Screenshot: Multiserver Connected]

This will be connected to all servers directly in the group and in groups nested within that group.  There will be a long delay if one of the servers isn’t available, so it’s worthwhile to keep these groups cleaned up.

Now you run a query.  Here’s how the results are grouped by default.  Although they showed up in order for me, that was a coincidence.  They’ll actually be in the order they came back, so a server that returned results in 1 second will come before a server that returned results in 2 seconds.

[Screenshot: Multiserver Results]

You can go to Tools / Options and change a couple things, but the defaults tend to work great.

[Screenshot: Multiserver Query Options]

Now that it’s set up, right-click on your group and go to Tasks / Export… to save this off somewhere.  The only place this is saved right now is on your workstation, and those words will make any DBA cringe.

Practical Uses

Doing “SELECT 1” like I did above is always a blast, but it’s not all that productive.  As always, I encourage you to play around with it yourself, but here are some things to get you started.

Server (services) last started

When I do Windows Updates I reboot all the database servers every month, even if the updates don’t say they need a reboot at that time.  However, it can be difficult to tell which servers I rebooted and which ones just aren’t begging for a reboot.  Since TempDB is recreated each time the services start, I check to make sure that happened during the WSUS window.

SELECT create_date FROM sys.databases WHERE database_id = 2

If I was doing this for hundreds of servers, I’d simply change the query to say AND create_date < GetDate()-1 then only my problem servers would return results.
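
Here’s that modified version; only servers whose TempDB is more than a day old (meaning they haven’t restarted recently) will return a row.

SELECT create_date FROM sys.databases WHERE database_id = 2 AND create_date < GetDate()-1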

DBCC and Backups up-to-date

My non-prod servers aren’t monitored, so things go missed sometimes.  Backup jobs may be failing and I don’t know until I have time to check everything on all my non-prod servers.  Unfortunately, that doesn’t happen very often.  As with everything else in life, I cheat as much as possible to make it easier.  This time by running this as a multiserver query:

--Leaving some overlap to avoid false alarms due to timing issues
--We want to know if there hasn't been:
--	Full backup in the last 8 days (should happen every 7)
--	Full or diff backup in the last 2 days (should happen every 1)
--  Log backup in the last 4 hours (should happen every 1 or more often)
--  DBCC CheckDB run successfully in the last 8 days (should happen every 7)

SET NOCOUNT ON

DECLARE @FullDays Int
SELECT @FullDays = 8

DECLARE @FullDiffDays Int
SELECT @FullDiffDays = 2

DECLARE @LogHours Int
SELECT @LogHours = 4

DECLARE @DBCCDays Int
SELECT @DBCCDays = 8

DECLARE @SQLText NVarChar(4000)

IF OBJECT_ID('tempdb..#Results') IS NOT NULL BEGIN
	DROP TABLE #Results
END

CREATE TABLE #Results 
(
	ResultText NVarChar(2000)
)

IF Object_ID('TempDB..#DBCC_Temp') IS NOT NULL BEGIN
	DROP TABLE #DBCC_Temp
END

CREATE TABLE #DBCC_Temp 
(
	ParentObject VARCHAR(255),
	[Object] VARCHAR(255),
	Field VARCHAR(255),
	[Value] VARCHAR(255)
)

--When was the last full backup?
INSERT INTO #Results (ResultText)
SELECT CAST(serverproperty('ComputerNamePhysicalNetBIOS') as sysname) 
	+ ISNULL(NULLIF('\' + CAST(serverproperty('InstanceName') as sysname),'\MSSQLSERVER'),'') 
	+ '.' 
	+ d.name 
	+ ' hasn''t had a full backup in over ' 
	+ Cast(@FullDays as VarChar(2)) 
	+ ' day(s).  Last full backup was ' 
	+ ISNULL('on ' + Convert(VarChar(16), max(b.backup_finish_date), 120), 'NEVER') + '.'
FROM sys.databases d 
	LEFT JOIN msdb..backupset b ON d.name = b.database_name AND b.type = 'D'
WHERE d.name <> 'tempdb'
	AND d.state = 0
GROUP BY d.name 
HAVING ISNULL(max(b.backup_finish_date), '2000-01-01') < GetDate()-@FullDays 

--When was the last full or differential backup?
INSERT INTO #Results (ResultText)
SELECT CAST(serverproperty('ComputerNamePhysicalNetBIOS') as sysname) 
	+ ISNULL(NULLIF('\' + CAST(serverproperty('InstanceName') as sysname),'\MSSQLSERVER'),'') 
	+ '.' 
	+ d.name 
	+ ' hasn''t had a full or diff backup in over ' 
	+ Cast(@FullDiffDays as VarChar(2)) 
	+ ' day(s).  Last full or diff backup was ' 
	+ ISNULL('on ' + Convert(VarChar(16), max(b.backup_finish_date), 120), 'NEVER') + '.'
FROM sys.databases d
	LEFT JOIN msdb..backupset b ON d.name = b.database_name AND b.type IN ('D', 'I')
WHERE d.name <> 'tempdb'
	AND d.state = 0
GROUP BY d.name  
HAVING ISNULL(max(b.backup_finish_date), '2000-01-01') < GetDate()-@FullDiffDays 

--When was the last log backup?
INSERT INTO #Results (ResultText)
SELECT CAST(serverproperty('ComputerNamePhysicalNetBIOS') as sysname) 
	+ ISNULL(NULLIF('\' + CAST(serverproperty('InstanceName') as sysname),'\MSSQLSERVER'),'') 
	+ '.' 
	+ d.name 
	+ ' hasn''t had a log backup in over ' 
	+ Cast(@LogHours as VarChar(3)) 
	+ ' hour(s).  Last log backup was ' 
	+ ISNULL('on ' + Convert(VarChar(16), max(b.backup_finish_date), 120), 'NEVER') + '.'
FROM sys.databases d
	LEFT JOIN msdb..backupset b ON d.name = b.database_name AND b.type = 'L'
WHERE d.name NOT IN ('tempdb', 'model')--Model is by default in full, rarely backed up, and not modified enough to complain about.
	AND d.recovery_model_desc <> 'SIMPLE'
	AND d.state = 0
GROUP BY d.name  
HAVING ISNULL(max(b.backup_finish_date), '2000-01-01') < DateAdd(Hour, -1 * @LogHours, GetDate()) 

SELECT @SQLText = N'use [?]; 

DECLARE @DBName sysname
DECLARE @HasRetried bit 

SELECT @DBName = CAST(''?'' as VarChar(256)) 
SELECT @HasRetried = 0

TRUNCATE TABLE #DBCC_Temp

IF EXISTS (SELECT d.state FROM sys.databases d WHERE d.name = @DBName AND d.State = 0 /*Online*/ AND d.is_read_only = 0 AND d.create_date < GetDate()-7) AND ''tempdb'' <> @DBName BEGIN
	INSERT INTO #DBCC_Temp
	EXECUTE(''DBCC PAGE (['' + @DBName + ''], 1, 9, 3)WITH TABLERESULTS, NO_INFOMSGS'')

	INSERT INTO #Results (ResultText)
	SELECT CAST(serverproperty(''ComputerNamePhysicalNetBIOS'') as sysname) 
		+ ISNULL(NULLIF(''\'' + CAST(serverproperty(''InstanceName'') as sysname),''\MSSQLSERVER''),'''') 
		+ ''.''
		+ @DBName
		+ '' hasn''''t had a successful DBCC check in the last ' + CAST(@DBCCDays as VARCHAR(2)) + ' day(s).  Last successful check was ''
		+ ISNULL(NULLIF(MAX([Value]),''1900-01-01 00:00:00.000''), ''NEVER'')
	FROM #DBCC_Temp
	WHERE Field = ''dbi_dbccLastKnownGood''
	HAVING MAX([Value]) < GetDate()-' + CAST(@DBCCDays as VARCHAR(2)) + '
END
'

exec sp_MSforeachdb @SQLText 

SELECT ResultText 
FROM #Results

DROP TABLE #Results
DROP TABLE #DBCC_Temp

If I get any results from this, I have something to fix.  This also works for daily full backups because it checks for a diff or full, but you may want to change it around to fit your maintenance schedules.

Version Check

If you’re evaluating what version all of your servers are on so you can make plans to patch them, just run this:

SELECT @@VERSION

You have everything you need to compare it to SQLServerBuilds.BlogSpot.com.  The only thing is that I tend to copy/paste the results into Excel so I can sort them there.
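
If you want the version broken into columns that paste into Excel more cleanly than the full @@VERSION text, SERVERPROPERTY will give you the pieces:

SELECT ProductVersion = SERVERPROPERTY('ProductVersion')
	, ProductLevel = SERVERPROPERTY('ProductLevel')
	, Edition = SERVERPROPERTY('Edition')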

More Uses

I use registered servers and multiserver queries for more than this.  In fact, I wrote this post so I could link to it in a future post on Instant File Initialization.  Once I have that one here, I’ll have four examples of uses.

Other readers and I would benefit from hearing more uses of this feature.  Let us hear the best uses you have, preferably with a script.

Entry-Level Content Challenge

This is my third post in Tim Ford’s Entry Level Content Challenge.  I was tempted to write a post and just tell people to run a script as a multiserver query.  After all, it’s very simple after you see it for the first time.  However, you have to see it for the first time before you get to that point.

This is the exact purpose of his challenge.  Read over it and consider joining in.

 

Tim Ford’s Entry-Level Content Challenge

All bloggers started as beginners.  We picked things up, learned them, and got to the point where we wanted to start sharing our knowledge.  The problem is that most of us moved on to talking about advanced topics and left people to fend for themselves on the difficult entry-level learning curve.  My blog, Simple SQL Server, was actually created with me saying I was going to fill that gap, right up until the time I figured out the best way to learn advanced topics was to write things down as I learned.

Entry-Level Content Challenge

Tim Ford (b|t) noticed how we moved to advanced topics and asked us to remember where we came from in his Entry-Level Content Challenge.

It’s very simple.  Create one blog post a month that someone just learning your job could understand and benefit from, use the #EntryLevel tag on the post and in your Twitter notification, and link to his original post.

This is for everyone, I don’t care what job you have.  I’m a SQL Server DBA, Tim’s a SQL Server DBA, but every profession has beginners that could use some help.

We’re Outnumbered

One of the things, if not THE biggest thing, you hope to do with a blog is to make a difference.  Then you talk about advanced topics to help other people near your level keep moving forward.  You also realize that teaching the details on these topics accelerates your learning and confidence faster than you thought possible.  It’s great; I recommend everyone does it.

However, there are a couple problems with this logic.  By the time people feel confident enough to blog, they aren’t talking about the fundamentals anymore.  Also, for every advanced person in any field, there are many, many more entry-level people who could benefit from learning the fundamentals properly from the start.

Bloggers aren’t talking to the largest portion of their potential audience, who just happen to be going through the most difficult part of the learning curve.

It’s the Right Thing To Do

In no specific order, here’s why it’s the right thing to do.  This list contains selfish reasons, moral reasons, and everything in-between.

  • You want to reach a larger audience, and there’s more people who need this content.
  • You were a beginner once and learned off of others.
  • You wish you learned the fundamentals right from the start, but you didn’t always.
  • You learn everything you teach better, even if you consider it simple.
  • Tim challenged you, and it’s hard to say no to a challenge.
  • You always wanted to make a difference.

New Bloggers

This is also a great opportunity for new bloggers.  Talk about subjects you know well, even if you’re not at the level you feel comfortable teaching advanced topics.

The content is the only difficult part of blogging.  Setting up a blog and creating a post is practically identical to setting up a new email account and sending an email.  If you want to get into blogging and need help, let me know.

If Einstein Can Do It…

Einstein said “If you can’t explain it simply, you don’t understand it well enough.”  I took this quote to heart enough to name my blog after it.  Try explaining the fundamentals simply, you’ll see how much your understanding of the topic increases to accomplish that feat.


Shrinking Database Data Files

Normal DBAs will sum up shrinking a database data file in one word: NO!

I’ll talk to you about this subject because I’m not normal.  Still, my goal is that you’ll go from “why can’t I shrink my files” to “I don’t want to shrink my files.”

Truth be told, even when I HAVE to shrink files, I don’t want to.

Basically, you needed the space at one point in time, so you’ll probably need it again.  Unless you changed a process or something extremely rare happened, you’ll use the space again, and there’s no reason to take on the issues involved with shrinking just to have a file grow again.  While this advice is mostly the same for log files, it’s different enough that I’m making a separate post for shrinking database log files.

When should you shrink data files

First, a process changed where you’ll never need that amount of space again, such as the database used to keep 10 years of history but now an automated process keeps it pruned to 6 months.  This change also left you with a very significant amount of free space that you need for another purpose.

Really, that’s about it.

What happens when you shrink data files

The most common way to shrink a file is to have it reorganize pages before releasing free space, so I’ll cover that.  It basically has two steps:

  1. Fragment the file as much as possible (killing performance)
  2. Truncate the end of the file

By “reorganize” it means SQL Server takes the tail end of the file, beyond the size you said you’d like the file shrunk to, finds every populated page of data after that point, then moves those pages to the first available spot it finds earlier in the file.  When the end of the file is empty, it truncates it.

It’s an expensive process that leaves a big mess behind, but there are times you’ll need to go through with it (kicking and screaming).  If you do, schedule an index maintenance job immediately after this to undo most of the damage you just did and make sure the server still loves you.

Think about this, you’re considering kicking off a process where you have to tell the server you’re sorry afterwards.  You better have a good reason for this.  A 100 GB database with only 2 GB used may be a good reason to shrink.  A 100 GB database with 75 GB used is normal and healthy.

If you are going to shrink, make sure the database won’t be using that space again.  If you have a 100 GB database with only 2 GB used, does it populate, process, and purge 90 GB in an overnight process?  Find out before you shrink.

Manual Shrinking

If I haven’t scared you off yet (and I hope I did) then here’s how you shrink.

Here’s the link for how to shrink a file using T-SQL: DBCC ShrinkFile

Here’s the link for how to shrink a file using the GUI: SSMS Shrink File
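
If you just want to see the shape of the T-SQL, here’s a minimal sketch.  The database name, logical file name, target size, and table are all made up; the target size for DBCC SHRINKFILE is in MB, and the index rebuild afterward is the apology to the server I mentioned above.

USE [MyDB]
GO
DBCC SHRINKFILE (N'MyDB_Data', 2048) --Reorganize pages, then truncate the file down to roughly 2,048 MB
GO
--Follow this with index maintenance to undo the fragmentation the shrink just caused,
--whether that's your normal index maintenance job or rebuilding the worst offenders by hand.
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD
GO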

There are Shrink Database options as well.  I don’t use them and don’t recommend them.  You know which files you want to shrink; don’t shrink them all.  When this gets used, it’s typically because people want to shrink their data files, but they mess up their log files while they’re at it.  Be nice to yourself, don’t use shrink database.

What about the Auto-Shrink option

No.

Just no.

It’s not a feature, it’s a threat.

If you have a certain amount of free space it will automatically shrink your database causing all the problems I just discussed, but it will probably kick in during your busiest time of day.  Then it won’t follow that up with index maintenance, so it just left you hopping all over your disk to find data.  The best part is that it didn’t check to see if you would need that space again, so it’ll probably grow tomorrow to get that space back.

In the example above of a 100 GB database that only uses 2 GB during the day but populates 90 GB for overnight processing, those 2 GB will be horribly fragmented, you’ll spend the resources shrinking, then you’ll take the performance hit of growing the database every night as it puts the growth anywhere it can, which typically means physical file fragmentation as well.  In the meantime your monitoring software will show that you have tons of free space, right up until you get the disk full error.  You can auto-shrink that database and get all those issues, or you can just let it do its job as you do yours without a maintenance nightmare.

The Connect Item on this is marked as fixed, stating that they’ll consider removing it in future versions.  This wasn’t a joke opened by some wannabe DBA.  This was serious and opened by someone who went on to become the president of the Professional Association for SQL Server.

To be fair, we all word our feelings on this differently.  I say no, Tom LaRock says never, Paul Randal says Turn It Off!, and Brent Ozar is colorful enough to come up with the term Hamster Wheel of Death.

As of the time of this writing, no one has used the term “Magical Unicorn” to describe this feature.

Sum it up

  1. Don’t shrink data files.
  2. If you didn’t listen to #1, why?
  3. Follow it up with index maintenance.
  4. Every time you try to automate this process a unicorn dies.

Entry-Level Content Challenge

I’m taking Tim Ford’s (b|t) Entry-Level Content Challenge, and this is my first post with the #EntryLevel tag.  The challenge is to have one post that earns that tag each month, but I hope to exceed that.