Normal DBAs will sum up shrinking a database data file in one word: NO!
I’ll talk to you about this subject because I’m not normal. Still, my goal is that you’ll go from “why can’t I shrink my files” to “I don’t want to shrink my files.”
Truth be told, even when I HAVE to shrink files, I don’t want to.
Basically, you needed the space at one point in time, so you'll probably need it again. Unless you changed a process or something extremely rare happened, you'll use that space again, and there's no reason to take on the issues involved with shrinking just to watch the file grow back. While this advice is mostly the same for log files, it's different enough that I'm writing a separate post on shrinking database log files.
When should you shrink data files
First, shrink when a process changed so that you'll never need that amount of space again. For example, the database used to keep 10 years of history, but an automated process now keeps it pruned to 6 months. That change also left you with a very significant amount of free space that you need for another purpose.
Really, that’s about it.
What happens when you shrink data files
The most common way to shrink a file is to have it reorganize pages before releasing free space, so I’ll cover that. It basically has two steps:
- Fragment the file as much as possible (killing performance)
- Truncate the end of the file
By "reorganize," I mean it takes the tail end of the file, past the size you asked the file to be shrunk to, finds every populated page of data after that point, and moves each of those pages to the first available spot earlier in the file. Once the end of the file is empty, it truncates it.
It’s an expensive process that leaves a big mess behind, but there are times you’ll need to go through with it (kicking and screaming). If you do, schedule an index maintenance job immediately after this to undo most of the damage you just did and make sure the server still loves you.
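If you do go through with it, a sketch of that follow-up index maintenance might look like this. The table name here is a placeholder, and the 30% fragmentation threshold is a common rule of thumb, not a magic number:

```sql
-- Hypothetical example: check which indexes the shrink fragmented,
-- then rebuild them. Replace dbo.YourTable with your own table name.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;  -- common (not universal) threshold

-- Rebuild every index on a single table:
ALTER INDEX ALL ON dbo.YourTable REBUILD;
```

In practice most people point their existing index maintenance job at the database rather than rebuilding table by table; the point is that something has to clean up after the shrink.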
Think about it: you're considering kicking off a process you'll have to apologize to the server for afterwards. You'd better have a good reason. A 100 GB database with only 2 GB used may be a good reason to shrink. A 100 GB database with 75 GB used is normal and healthy.
If you are going to shrink, make sure the database won’t be using that space again. If you have a 100 GB database with only 2 GB used, does it populate, process, and purge 90 GB in an overnight process? Find out before you shrink.
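Before you decide, it helps to see how much of each data file is actually in use. A quick check against the current database:

```sql
-- How much of each data file is actually used?
-- size and FILEPROPERTY(..., 'SpaceUsed') are in 8 KB pages, so / 128 gives MB.
SELECT name,
       size / 128 AS SizeMB,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS SpaceUsedMB,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS FreeMB
FROM sys.database_files
WHERE type_desc = 'ROWS';  -- data files only; log files are a separate topic
```

Run it at different times of day: a file that looks 98% free at noon may be nearly full at 3 AM.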
If I haven’t scared you off yet (and I hope I did) then here’s how you shrink.
Here's the link for shrinking a file using T-SQL: DBCC ShrinkFile
Here's the link for shrinking a file using the GUI: SSMS Shrink File
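For reference, the T-SQL version is a one-liner. The database and file names below are placeholders, and the target size is specified in MB:

```sql
-- Shrink the data file to a 10 GB target, reorganizing pages first
-- (the default behavior described above). Names are placeholders.
USE MyDatabase;
DBCC SHRINKFILE (N'MyDatabase_Data', 10240);  -- target size in MB
```

Note that the file's logical name (shown in sys.database_files) is often not the same as the database name, so check before you run it.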
There are Shrink Database options as well. I don't use them and don't recommend them. You know which files you want to shrink, so don't shrink them all. When Shrink Database is used, it's typically because people want to shrink their data files, but they mess up their log files while they're at it. Be nice to yourself; don't use Shrink Database.
What about the Auto-Shrink option
It’s not a feature, it’s a threat.
If you have a certain amount of free space, it will automatically shrink your database, causing all the problems I just discussed, and it will probably kick in during your busiest time of day. It won't follow that up with index maintenance, so it leaves you hopping all over your disk to find data. The best part is that it didn't check whether you'd need that space again, so the database will probably grow tomorrow to get it back.
Take the example above of a 100 GB database that only uses 2 GB during the day but populates 90 GB for overnight processing. Those 2 GB will be horribly fragmented, you'll spend resources shrinking, and then you'll take the performance hit of growing the database every night as it puts the growth anywhere it can, which typically means physical file fragmentation as well. In the meantime, your monitoring software will show that you have tons of free space, right up until you get a disk-full error. You can auto-shrink that database and get all of those issues, or you can just let it do its job while you do yours, without the maintenance nightmare.
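If you want to confirm auto-shrink is off (it normally is by default), a quick check and fix, with the database name as a placeholder:

```sql
-- See which databases have auto-shrink enabled:
SELECT name, is_auto_shrink_on
FROM sys.databases;

-- Turn it off where you find it (database name is a placeholder):
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;
```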
The Connect Item on this is marked as fixed, stating that they’ll consider removing it in future versions. This wasn’t a joke opened by some wannabe DBA. This was serious and opened by someone who went on to become the president of the Professional Association for SQL Server.
To be fair, we all word our feelings on this differently. I say no, Tom LaRock says never, Paul Randal says Turn It Off!, and Brent Ozar is colorful enough to come up with the term Hamster Wheel of Death.
As of the time of this writing, no one has used the term “Magical Unicorn” to describe this feature.
Sum it up
- Don’t shrink data files.
- If you didn’t listen to #1, why?
- Follow it up with index maintenance.
- Every time you try to automate this process a unicorn dies.
Entry-Level Content Challenge
I’m taking Tim Ford’s (b|t) Entry-Level Content Challenge, and this is my first post with the #EntryLevel tag. The challenge is to have one post that earns that tag each month, but I hope to exceed that.
Another (maybe better) alternative to shrinking and rebuilding the indexes (again, only in the case of a 100 GB database with only 2 GB of data) is to add a second file to the filegroup and shrink the old file using the EMPTYFILE parameter. This moves all the remaining data to the new file, and you can drop the old one afterwards.
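A sketch of that empty-file approach, with all file names, paths, and sizes as placeholders. One caveat: you can't remove the primary data file this way, since it holds system objects, so this works for secondary files:

```sql
-- 1. Add a new, right-sized file to the filegroup (names/paths are placeholders):
ALTER DATABASE MyDatabase
ADD FILE (
    NAME = N'MyDatabase_New',
    FILENAME = N'D:\Data\MyDatabase_New.ndf',
    SIZE = 4GB
);

-- 2. Migrate all data out of the old file to other files in the same filegroup:
DBCC SHRINKFILE (N'MyDatabase_Old', EMPTYFILE);

-- 3. Drop the now-empty old file:
ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_Old;
```

The data lands in the new file contiguously as it's written, which is why this can leave you in better shape than a shrink followed by rebuilds.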