Is it necessary to defragment an SSD, and will it improve performance?


Author: Veronica and Vlad
Date: Sep 25, 2016

The popularity of solid-state drives (SSDs) keeps growing. Classic magnetic HDDs are rapidly becoming a thing of the past or, more often, serve as bulk file storage, while modern operating systems are best installed on an SSD.


  • Is it possible to defragment an SSD?
  • What operating systems are best to use for an SSD drive?
  • What happens if there is little disk space left?
  • Is it possible to store large files on an SSD?
  • What else can’t be done?
  • Results

Compared to a traditional HDD, an SSD has a number of advantages:

  1. Accelerated loading and access to data.
  2. Quiet operation.
  3. Increased reliability.

But at the same time, solid-state drives raise many questions. The most common are:

  • Is it possible and necessary to defragment an SSD drive?
  • Is frequent rewriting harmful?
  • Is it possible to format an SSD drive?
  • Is it possible to fill an SSD with files to capacity?
  • How to extend the life of an SSD drive in Windows 7?

And so on. There are plenty of other questions, topics and myths around solid-state drives. Let's sort out, point by point, what is true, what is fiction, and how to extend the life of an SSD.

How to protect an SSD from harmful influences and not kill the drive in two weeks?

I just received my first SSD. And I have SSDLife monitoring running in the background. After that I installed all the software and tested the SSD. The SSDLife program said that “Total Data written, GB” = 52.1 (40GB used space, 70GB free).

That is, there is about 40 GB of data on the SSD, with 52.1 GB written?

A feature of solid-state drives is that data is written in blocks. A block can be, for example, 256 KB, roughly 256 * 1000 * 8, or about 2 million bits. To change even a single one of those bits, the entire block must be rewritten. In other words, your operating system sees a 1-bit change, but the wear on the SSD is equivalent to 256 KB: a difference of about two million times.

This gives a rough formula: (SSD capacity) * (program/erase cycles per cell) = total data that can be written to the SSD before failure.

That is the best case, with cells typically rated for somewhere between 1,000 and 1,000,000 program/erase cycles. In the worst case, a stream of small scattered writes consumes those cycles far faster than the formula suggests.
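
As a rough, hedged illustration of that formula (the capacity and cycle count below are hypothetical, not taken from any particular drive):

# Hypothetical numbers: a 120 GB drive whose cells endure about 3,000 program/erase cycles
$capacityGB  = 120
$peCycles    = 3000
$enduranceTB = ($capacityGB * $peCycles) / 1000
"Theoretical ceiling: about $enduranceTB TB written before wear-out"

Real drives reserve spare area and the controller amplifies writes, so the endurance a manufacturer actually quotes (see the TBW section below) is usually well below this theoretical ceiling.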

Background

My only English-language post on the topic received 60 thousand views in a year and a half (the Russian one got only about 15 thousand more sessions). Links to it popped up on some large forums, but without much effect. And then Hanselman showed up.

Who is Scott Hanselman

Scott Hanselman is a Microsoft development evangelist, teacher, speaker, programmer, and blogger. I've been reading his posts about programs and efficient work in the OS for a long time, although there are fewer and fewer of them lately.

In my understanding, Scott is for developers what Mark Russinovich is for IT specialists (perhaps a little smaller in caliber). By the way, we crossed paths at the DevCon 2012 summer conference, where he was invited as a speaker. He willingly signed autographs and posed for photos with the participants and MVP.

What Microsoft Claimed

And now, almost a year after the publication of the article about defrag, Scott came to my blog!

(Translation) Windows does not defragment SSDs. Period. If the disk is detected as an SSD, it will not be defragmented under any circumstances. It is just a misleading message. There is no bug here, sorry.

"Oh, really!" I thought, noting at the same time which way the wind was blowing. Indeed, someone on Twitter had asked Scott to report the problem to the product group, but he decided to simply dot the i's without digging into the article. When I asked him on the blog and on Twitter to look at the facts, I received in response a link to SuperUser repeating the same mantra.

However, Scott did raise the question internally and published a response the same day.

@mdsharpe confirmed just now.
Windows DOES NOT Defrag SSDs. Period. — Scott Hanselman (@shanselman) February 4, 2014

(Translation) Just received confirmation. Windows does NOT defragment SSDs. Period.

@mdsharpe I just talked to that team.
Bad message but no actual defragging happens. — Scott Hanselman (@shanselman) February 4, 2014

(Translation) I just spoke with that team. The message is misleading, but no actual defragmentation happens.

Continuing the Twitter thread, Scott referred me to a specific representative of the Windows team (evidently the person he had been talking to). That representative did not respond to my cheerful letter (copied, by the way, to one of the blog's readers), even though he hosts the fairly popular show Defrag (!!!) on Channel 9, where he solves three Windows problems per episode.

So what to do with them? I added four questions for the product group to the English post and suggested that people stop bothering me with links to other sites restating the same outdated truisms.

Water wears away stones

However, there was another stubborn reader who had followed the exchange on Twitter, decided to check everything himself, and 10 months later caught Windows 8.1 red-handed. He again turned to Hanselman, publishing a screenshot of a defrag in progress along with the telltale load on the disk.

@shanselman @0utsideTheBox finally I've caught my up2date Windows 8.1 doing defrag on SSD (with proof attached).. pic.twitter.com/lPXdpfciHf

— Patrick Heyer (@patthemav) November 11, 2014

Perhaps Patrick was not the only one, because this time Scott decided to talk to a different team, the file system team. Soon he reported that secret knowledge had been revealed to him, that there was nothing to be afraid of, and that we should wait for a blog article. Finally, it appeared.

What is the lifespan of an SSD drive?

SSDs are more durable than hard drives and should last up to 20 years, at least without noticeable performance degradation.

And that is what we might call an average. You can work out worst-case SSD lifespan estimates if you want, but I can assure you they do not look nearly as optimistic!

Let's maximize the lifespan of our SSD through wear leveling and by minimizing all those little write cycles, using both simple and advanced techniques.

Discussion and poll

How do you like the story? In my opinion, this is a very interesting case of the type where the right hand does not know what the left hand is doing, but thinks it does :)

The problem here is that the words of a Microsoft employee (especially a well-known technical specialist) automatically receive the status of truth and transfer opposing statements to the category of erroneous. It becomes almost impossible to prove otherwise, regardless of the amount of evidence presented.

It is curious that to some extent this also applies to my statements, since in a number of cases they are referred to, considering the source to be quite authoritative. I don’t pretend to have the ultimate truth (that’s Microsoft’s lot), but in any case I don’t refuse to delve into the technical component of an interesting question.

The survey is designed to reveal your attitude toward defragmentation now that its cause is known. "Defrag disabled" covers both disabling the job in the Task Scheduler and disabling system protection. Write in the comments how you voted and share the data you collected, if your configuration meets the criteria above.

Blog readers' reaction to the news that SSD defragmentation in Windows 8+ is not a bug https://t.co/Qd9st6jnNc pic.twitter.com/JprVH9iRQM

— Windows Outsiders (@0utsidethebox) January 31, 2015

Make sure TRIM is enabled

First of all, there is no point in checking or trying to enable TRIM if your SSD does not support the technology. How do you know whether your SSD supports TRIM? The easiest way is to check with the CrystalDiskInfo program.

In the Supported Features field you can see whether the SSD supports TRIM:

The next step is to check whether your operating system actually uses TRIM. On Windows 7 and later you can find out with the command fsutil behavior query DisableDeleteNotify. If the result is zero, the operating system is using TRIM.

If the system does not recognize your drive as an SSD, you need to troubleshoot and fix the problem. Start with the information in Device Manager and the drive's properties. You may need to update your disk controller drivers so that the operating system treats the drive as an SSD.
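
A minimal check from an elevated PowerShell window (fsutil is a standard Windows tool; 0 means TRIM is enabled, 1 means it is disabled):

# Query whether delete notifications (TRIM) are enabled: 0 = enabled, 1 = disabled
fsutil behavior query DisableDeleteNotify

# If it reports 1 and your SSD does support TRIM, you can enable it:
fsutil behavior set DisableDeleteNotify 0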

What operating systems are best to use for an SSD drive?

The TRIM command is a way of notifying the solid-state drive that it can physically erase blocks of data that are no longer referenced by the file system. For installation on an SSD, choose an OS that supports this command; in other words, the operating system should be modern. Windows 7, 8, 8.1 and 10 are ideal.

The TRIM command itself appeared alongside the spread of SSDs, so that the new storage technology could compete with HDDs. Accordingly, Windows Vista, XP and earlier operating systems are not well suited to installation on an SSD. You can use them, but without TRIM the drive will be slower and will wear out faster.

Disable SSD swap file

The paging file (swap) is necessary to improve the performance of the operating system in resource-intensive applications (graphics packages, video editors, games). In addition, if many “heavy” programs are running and the RAM cannot cope with the amount of data, unused applications are temporarily stored in swap.

The optimal paging file size is approximately 1.5 times the size of RAM. If you have more than 8 GB of RAM, there is hardly any need for a paging file on the SSD. Try disabling it and using the computer for a while; you are unlikely to notice any performance problems.

You can check the amount of RAM and turn off the paging file on the SSD from the System Properties window (Win+Pause/Break); a quick PowerShell check is also shown below.

  1. Open the Performance Options dialog (My Computer -> System Properties -> Performance Options, see the previous topic).
  2. On the Advanced tab, click the Change button.
  3. In the Virtual Memory window, the paging file size is shown next to each drive. Select the SSD, choose the "No paging file" option, and click the Set button to apply the change.
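
If you first want to see the current situation without changing anything, here is a read-only sketch using the standard CIM classes:

# Installed RAM in GB
[math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB, 1)

# Where the paging file currently lives and how large it is (sizes in MB)
Get-CimInstance Win32_PageFileUsage | Select-Object Name, AllocatedBaseSize, CurrentUsage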

Is it possible to store large files on an SSD?

In most cases, the SSD is used for the operating system and applications.

A program launched from a solid-state drive will run faster than from an HDD, and the OS will load faster.

To store user files, it is better to use a regular HDD working alongside the SSD. Why?

Firstly, because the capacity of a solid-state drive is usually small; secondly, an SSD costs as much as an HDD of far larger capacity. The SSD speeds up loading and operation of the operating system and programs, and its capacity is really only enough for them. But it is not always possible to install an HDD as well (for example, in ultrabooks).


In this case, it is worth buying an external HDD; it is better suited for storing movies, music and other large files. In fact, nothing bad will happen to the SSD if you store large files on it, but as long as SSD capacities are small and prices high, it is better to use them where they give a real performance boost.

Sleep mode (hibernation)

Another feature that can cause extra writes is hibernation. Every time the computer hibernates, the OS writes the contents of RAM to the hibernation file. If you do not really need this feature, consider using sleep or simply shutting down instead. You can disable hibernation with the command

powercfg /hibernate off

run as administrator. This disables hibernation and deletes the hibernation file. The hibernation file cannot be moved to another drive.
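
To double-check the result afterwards (a small sketch, assuming Windows is installed on drive C:):

# Run from an elevated PowerShell prompt
powercfg /hibernate off
Get-Item C:\hiberfil.sys -Force -ErrorAction SilentlyContinue   # returns nothing once hibernation is off
# powercfg /hibernate on   # re-enable hibernation later if needed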

Search indexing

Most people believe that a search indexer is essential because it greatly speeds up searching for data on your hard drive.

If you only have an SSD, you can safely disable the Search Indexer. If you have an SSD and an HDD, you should move the search indexer cache to your HDD. This will avoid multiple writes to disk every time a file is stored in the search cache.

Another way to tame the indexer is to reduce the indexed locations to a minimum, limiting them to folders you actually search.
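
If you have only an SSD and decide to switch the indexer off entirely, the service behind it is called WSearch. A sketch, run as administrator:

# Check the Windows Search (indexer) service
Get-Service WSearch

# Stop it and prevent it from starting automatically
Stop-Service WSearch
Set-Service WSearch -StartupType Disabled

Moving the index cache to another drive, by contrast, is done through Indexing Options -> Advanced in the Control Panel rather than from the command line.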

Wear leveling

But don't be upset: the number of erase/write cycles an SSD can endure is not that small. In addition, modern SSDs have technologies that increase write efficiency and reduce the load on memory cells. The most important of these is wear leveling, an algorithm that spreads writes evenly across the entire drive, thereby maximizing its service life. Moreover, larger SSDs last longer than smaller ones.

How long is the service life of an SSD? To let users estimate it, most manufacturers express it as the total amount of data that can be written to the drive over its lifetime. This value is given as TBW (Total Bytes Written, in practice terabytes written).

For example, a rating of 220 TBW means that 220 terabytes of data can be written to the drive before it becomes unreliable. If a user writes 50 GB of data to the drive every day, the drive will last about 12 years.
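
The arithmetic behind that estimate, as a quick sketch (220 TBW and 50 GB/day are simply the example figures from the paragraph above):

$tbw      = 220                        # endurance rating, terabytes written
$perDayGB = 50                         # assumed daily writes, GB
$days     = ($tbw * 1000) / $perDayGB  # 4400 days
[math]::Round($days / 365, 1)          # roughly 12 years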

Most users rarely write as much as 50 GB to disk in a day, and on most days far less (or nothing at all) is written. Moreover, reading documents or viewing photos is not writing; reading does not affect the drive's lifespan. Only copying files from another drive, downloading files, or editing documents actually writes data to the SSD.

This suggests that if you use an SSD drive in the same mode as an HDD drive, then their service life should be comparable.

Temporary files, cache and logs

A huge number of temporary files, caches and logs are stored on your computer, and they produce a lot of redundant writes to the SSD. How many depends on which browser and other software you use.

For example, Google Earth stores a cache of imagery for the places you have visited, so every time you use Google Earth, images are written to the SSD. In the following sections we will see how to find the culprits and how to use NTFS junction points when a cache cannot be moved or disabled.

How to clear the cache?
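
The article does not spell out the steps here, so here is only an illustrative sketch: before clearing or relocating a cache, it helps to see how large it actually is. The path below is an example (a typical Chrome cache location) and may differ on your system:

# Measure the size of a cache folder before deciding to clear or relocate it
$cache = "$env:LOCALAPPDATA\Google\Chrome\User Data\Default\Cache"
if (Test-Path $cache) {
    $bytes = (Get-ChildItem $cache -Recurse -File -ErrorAction SilentlyContinue |
              Measure-Object Length -Sum).Sum
    "{0:N0} MB used by {1}" -f ($bytes / 1MB), $cache
}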

How to disable automatic defragmentation of an SSD drive?

With everything you've learned today, you may be wondering, “Do I have automatic SSD defragmentation enabled? And how can I turn it off?” I hasten to reassure you. If you are using Windows 7 or higher, the system itself disables automatic defragmentation as soon as it sees an SSD drive on board. So there is nothing to worry about.

As for Windows XP or Vista, using SSDs with those versions is strongly discouraged: they have no TRIM support or SSD awareness, which leads to much faster wear of the drive.
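
If you want to see for yourself what the scheduler does on Windows 8 or later, the standard tools below will show it (the task name is the usual one, though it may vary slightly between builds):

# Show the scheduled "Optimize Drives" task (run PowerShell as administrator)
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Defrag\" -TaskName ScheduledDefrag

# Analyze drive C: without actually defragmenting or retrimming it
defrag C: /A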

Windows Resource Monitor

Let's take a look at the built-in resource monitor in newer versions of Windows:

  1. Type 'Resource Monitor' in the Start menu and run it (or launch resmon.exe via Start - Run).
  2. Go to the Disk tab.
  3. Sort the 'Processes with disk activity' list by the 'Write (B/sec)' column. This lets you estimate how much each process writes to disk on your system (a command-line alternative is sketched below).
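
If you prefer the command line, roughly the same picture is available from the standard performance counters (a sketch; counter names are localized on non-English systems):

# Total disk write throughput, sampled every 5 seconds for one minute
Get-Counter '\PhysicalDisk(_Total)\Disk Write Bytes/sec' -SampleInterval 5 -MaxSamples 12

# The ten processes writing the most right now
# (this counter includes network and device I/O as well, so treat it as a rough guide)
Get-Counter '\Process(*)\IO Write Bytes/sec' |
    Select-Object -ExpandProperty CounterSamples |
    Sort-Object CookedValue -Descending |
    Select-Object -First 10 InstanceName, CookedValue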

If you want to get more data, you will need the Process Monitor utility.

Process Monitor utility

Let's download Process Monitor from Microsoft Sysinternals, run it, and set up a filter on write operations:

  1. Click the 'Reset' button to reset the filter.
  2. Set the filter to 'Operation' 'contains' 'WRITE' then 'Include', and click the Add button.
  3. Click the Apply button and then OK.
  4. Additionally, you can filter the list down to your SSD drive.

You will now see write operations happening in real time, and you can select an individual entry to view its details. The Tools menu also has a 'File Summary' command that aggregates all recorded operations across several tabs.
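
Process Monitor can also capture unattended from the command line, which is handy for catching writes over a longer period. A sketch, assuming procmon.exe is in the current folder and C:\temp exists:

# Capture 60 seconds of activity to a log file, then open it for analysis
.\Procmon.exe /AcceptEula /Quiet /Minimized /BackingFile C:\temp\ssd-writes.pml
Start-Sleep -Seconds 60
.\Procmon.exe /Terminate
.\Procmon.exe /OpenLog C:\temp\ssd-writes.pml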

What's the best way to copy to an SSD?

You should put files on the SSD that really require fast performance. This is mainly true for programs and games. Placing video files on an SSD will not provide noticeable speedup compared to a hard drive. The same applies to various documents.

Images and photos will load faster in programs such as Adobe Lightroom, and music will be analyzed faster in DJ programs like Traktor Studio. However, current SSD capacities are often too small for such libraries, which is the main limitation.

Photographs and music, on the other hand, are good examples of write-once data: if you save them once and do not plan to edit them, you can move them to the SSD without worrying about wear.

Another look at the question “is defragmentation necessary for SSDs”

Undoubtedly, the question posed in the title of the article is not new, it has been raised more than once and a consensus has been reached on it: “it is not particularly necessary, and may even be harmful.” However, a recent discussion in the comments got me thinking again.

Over time, any SSD still becomes highly fragmented internally, in the FTL. A freshly written SSD gives high linear read speeds, but a drive that has seen real use gives much lower ones, because the read is "linear" only from your point of view, not inside the drive.

Yes, usually this should not happen: either we write "little by little" into small files or small blocks of FS metadata (whose linear read speed we do not particularly care about), or we write "a lot" into large files and everything is fine. It also happens that small blocks are appended to large files, logs for example, but those are relatively short-lived and I see no particular problem there. However, it is easy to imagine a very real scenario in which internal SSD fragmentation does show up: a database file receiving fairly intensive random writes. Over time it will remain unfragmented at the file system level yet become physically very fragmented, which can noticeably slow down sequential scans, backups, and so on.

To test it, I wrote a script and ran tests.

Spoiler: the problem is present (and noticeably affects performance) on only one of the models that came to hand, and that one is positioned by the manufacturer not as a datacenter drive but as a desktop/laptop one.

What are we even talking about here?

What other fragmentation is there inside an SSD? In a nutshell, SSDs are complicated. NAND flash can only be written (or rather erased) in large blocks, while the operating system sees the SSD as a collection of 512-byte (or 4096-byte) sectors, each of which can be addressed independently. To reconcile the two, a layer called the FTL (flash translation layer) was invented: data in flash memory is laid out not by logical address but (very roughly) in the order in which it was written, somewhat like a log-structured file system. Such structures handle random writes very well, turning them into sequential ones, but alas, nothing comes for free: as a result, sequential reads often turn into random ones.

The algorithms by which FTL operates are closed, however, as far as we can tell, they can vary dramatically between different manufacturers. Accordingly, the behavior of drives under load can differ radically. This is exactly what we will explore.

The idea of the script: we create a file of several gigabytes filled with random data and measure the sequential read speed. Then, using random access, we rewrite part of the test file and measure the linear read speed again. If our suspicion is correct, reading the file will now be slower. After each write we perform three read passes with a delay between them, in case the drive defragments itself in the background, in which case the read speed would recover over time.

A little about why you need to fill the SSD before testing

More than once I have come across reviews that measure reads on a brand-new drive, get fantastic numbers, and publish them without a second thought. After a while the test is repeated on a less pristine disk, and suddenly access times have grown and the speed has dropped accordingly. The reason is TRIM support: the controller inside the SSD can "know" that a particular block holds no useful data (this information is kept in the FTL), and when asked to read such a block it does not touch the slow NAND flash at all but immediately returns zeros. On a new drive all blocks are marked as unused, so it is ready to set records in read tests. But we are interested in how fast the SSD can deliver data, not zeros.

In addition, some drives are able to compress data, and on well-compressed test data they may not show exactly the same results as they will in real life.

Therefore, before testing, it is worth filling the SSD with incompressible data (on Linux, /dev/urandom can be a good source).

Shell script

The test file is created in the current directory.

I tested only under Linux with dash, coreutils and fio from Debian buster; with other distributions there are unlikely to be problems, but under FreeBSD and other operating systems the script will most likely need some adjustment.

echo preparing...
dd if=/dev/urandom of=testfile bs=1M count=4096 status=none
sync

for A in 1 2 3; do
  sleep 10
  dd if=testfile of=/dev/null bs=1M iflag=direct
done

for A in 50 200 800 4000; do
  echo fio: write ${A}M...
  fio --name=test1 --filename=testfile --bs=4k --iodepth=1 --numjobs=1 --rw=randwrite --io_size=${A}M --randrepeat=0 --direct=1 --size=4096M > /dev/null
  sync

  for B in 1 2 3; do
    echo sleep ${B}0
    sleep ${B}0
    dd if=testfile of=/dev/null bs=1M iflag=direct
  done
  echo done
done

sleep 3600
sleep 3600
dd if=testfile of=/dev/null bs=1M iflag=direct

It turned out that the only NVMe drives I currently have are Intel ones in servers running Windows, so with the help of Google, Stack Exchange and a fair amount of improvisation I put together a Windows version of the script.

PowerShell version

The only external dependency is fio; the paths to the fio executable and to the temporary file are set in the first lines of the script.

$testfile = "c:\temp\testfile"
$fio = "c:\temp\fio-3.18-x64\fio"

echo "preparing..."
$filestream = New-Object System.IO.FileStream($testfile, "Create")
$binarywriter = New-Object System.IO.BinaryWriter($filestream)
$out = new-object byte[] 1048576
For ($i=1; $i -le 4096; $i++) {
    (new-object Random).NextBytes($out)
    $binarywriter.write($out)
}
$binarywriter.Close()

For ($i=1; $i -le 3; $i++) {
    sleep 10
    $time = Measure-Command {
        Invoke-Expression "$fio --name=test1 --filename=$testfile --bs=1M --iodepth=1 --numjobs=1 --rw=read --direct=1 --size=4096M" *>$null
    }
    $seconds = $time.Minutes*60 + $time.Seconds + $time.Milliseconds/1000
    echo "read in $seconds"
}

foreach ($A in 50,200,800,4000) {
    echo "fio: write ${A}M..."
    Invoke-Expression "$fio --name=test1 --filename=$testfile --bs=4k --iodepth=1 --numjobs=1 --rw=randwrite --io_size=${A}M --randrepeat=0 --direct=1 --size=4096M" *>$null
    For ($i=10; $i -le 30; $i+=10) {
        echo "sleep $i"
        sleep $i
        $time = Measure-Command {
            Invoke-Expression "$fio --name=test1 --filename=$testfile --bs=1M --iodepth=1 --numjobs=1 --rw=read --direct=1 --size=4096M" *>$null
        }
        $seconds = $time.Minutes*60 + $time.Seconds + $time.Milliseconds/1000
        echo "read in $seconds"
    }
}

rm $testfile

Got the following results:

  • No background defragmentation was detected in any of the tested models: the read speed does not improve some time after writing, and even a long idle period (an hour or even more than a day) changes nothing, so the table below simply shows the best result of three runs.
  • For some reason the read times under Windows were less stable and higher than expected (though this may simply be because those servers were more heavily loaded).
  • Continuing to write beyond what the script specifies (overwriting the file more than once) does not affect performance.

Reading time (in seconds) of a 4GB file for different disks:

Disk                           | First read after sequential fill | After +50 MB | +200 MB | +800 MB | +4000 MB of random writes
-------------------------------|----------------------------------|--------------|---------|---------|---------------------------
Intel S3510 SSDSC2BB480G6 *    | 10.7                             | 10.7         | 10.8    | 10.8    | 10.8
Toshiba XG5 KXG50ZNV512G       | 1.9                              | 2.9          | 3.7     | 4.8     | 6.8
Samsung PM963 MZQLW960HMJP *   | 2.8                              | 3.2          | 3.5     | 3.7     | 4.2
Samsung PM983 MZQLB960HAJR *   | 3.3                              | 3.6          | 3.4     | 3.4     | 3.4
Samsung PM981 MZVLB1T0HALR     | 1.8                              | 1.8          | 2.1     | 2.5     | 3.5
Samsung PM1725b MZPLL1T6HAJQ * | 1.8                              | 1.9          | 2.0     | 2.3     | 2.9
Micron 5200 ECO *              | 9.3                              | 9.8          | 10.4    | 12.2    | 10.7
Samsung PM883 MZ7LH1T9HMLT *   | 7.9                              | 7.9          | 8.1     | 8.1     | 8.0
Intel P3520 (win) *            | 5.8                              | 5.9          | 6.0     | 6.1     | 5.8
Intel P4500 (win) *            | 4.2                              | 4.2          | 4.3     | 4.4     | 4.3

Datacenter models are marked with an asterisk (the rest are desktop/laptop drives); which ones are SATA and which are NVMe should, I think, be clear without explanation.

We can see that as the Samsung PM981 received random writes to the file, its read speed dropped, eventually by half (though it still remained quite good), while the only Toshiba in the table slowed down by a factor of 3.5, effectively down to SATA-device levels. For most of the other drives, random writes either did not affect read performance at all or affected it only slightly.

My interpretation of these results: the linear read speed of an SSD can indeed degrade over time, but the degradation caused by internal fragmentation is far from fatal on most drives (on the Intel drives, for example, it is completely unnoticeable; on the Samsung drives, even where noticeable, the read speed remains quite acceptable).

The question remains open whether read speed degrades over time for other reasons (for example, NAND flash wear). About the Toshiba XG5 I can say that I noticed no difference in behavior between a drive that, according to SMART, had well over 150 TB written and a new one: either 300-400 rewrites are not enough for flash wear to become noticeable, or it does not affect SSD performance at all.

Regarding the drop in performance after random writes: I have a fairly loaded MySQL database of about 100 GB on just such a Toshiba. Indeed, in full accordance with the theory and the measurements above, the read speed of the "hot" MySQL tables turned out to be quite low (about 600 MB/s), while the read speed of other large files on the same file system is much higher (>2 GB/s).

How to deal with internal SSD fragmentation

If you want to fight it, you can use one of the oldest defragmentation methods: make a backup, delete the files, and restore them from the backup. The disadvantage is that this is slow and involves downtime (and after a while the data in flash will become fragmented again and the whole exercise will have to be repeated). So it is easier either to live with it or to choose drives that are not prone to the problem. I did come up with a relatively quick way to get rid of internal (and only internal) SSD fragmentation:

sync
fsfreeze -f /mountpoint
dd if=/dev/nvme0n1p2 of=/dev/nvme0n1p2 bs=512M iflag=direct oflag=direct status=progress
fsfreeze -u /mountpoint

It can be run on a live system without unmounting the file system or stopping services. It does cause some downtime because the file system is frozen, but if you wish you can split it into several iterations to shorten each freeze. There is one more "but": I am not 100% sure that every SSD correctly handles writing data back over an area that was previously TRIMed (that is, from the drive's point of view, areas of the file system that had been TRIMed may now be considered occupied rather than free). In general, the recommendation "live with it or choose different drives" still stands.

Summary: Defragmentation can be useful for some SSDs, but it is not quite the same (not at all?) as for HDDs. It is important for us not only that the file is located in a continuous chain of sectors, but also that writing to these sectors occurred sequentially.

PS I would be grateful if readers would run the script for themselves and provide the numbers for their SSDs, since my selection of drives is quite one-sided.
