Disk Speed Test (Read/Write): HDD, SSD Performance in Linux
From this article you’ll learn how to measure the input/output performance of a file system on devices such as an HDD, SSD, or USB flash drive.
I’ll show how to test the read/write speed of a disk from the Linux command line using the dd command.
I’ll also show how to install and use the hdparm utility for measuring the read speed of a disk on Linux Mint, Ubuntu, Debian, CentOS, RHEL.
To get accurate read/write speeds, you should repeat the tests below several times (usually 3-5) and take the average result.
Cool Tip: How to choose SSD with the best quality/price relation! Read more →
dd: TEST Disk WRITE Speed
Run the following command to test the WRITE speed of a disk:
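A typical invocation looks like this (the file name tempfile and the 1 GB test size are just examples, adjust them to your needs; the sync calls flush pending writes so that caching skews the result less):
sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync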
dd: TEST Disk READ Speed
To get the real speed, we have to clear the cache.
Run the following command to find out the READ speed from buffer:
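For example, re-reading the tempfile created by the write test above (most of it will be served from the page cache):
dd if=tempfile of=/dev/null bs=1M count=1024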
Clear the cache and accurately measure the real READ speed directly from the disk:
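One way to do this is to drop the kernel caches before re-reading the file (requires root; drop_caches=3 frees the page cache as well as dentries and inodes):
sudo sysctl -w vm.drop_caches=3
dd if=tempfile of=/dev/null bs=1M count=1024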
dd: TEST Read/Write Speed of an External Drive
Cool Tip: Have added a new drive to /etc/fstab ? No need to reboot! Mount it with one command! Read more →
To check the performance of an external HDD, SSD, USB flash drive or any other removable device or remote file system, simply change to its mount point and repeat the above commands.
Or you can replace tempfile with the path to your mount point, e.g.:
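For example, assuming the drive is mounted at /media/user/MyUSB:
sync; dd if=/dev/zero of=/media/user/MyUSB/tempfile bs=1M count=1024; sync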
Reminder: All the above commands use the temporary file tempfile. Don’t forget to delete it when you complete the tests.
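To remove it, run the following from the directory where you ran the tests:
rm tempfile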
hdparm: Test HDD, SSD, USB Flash Drive’s Performance
hdparm is a command-line utility for viewing and setting hard drive parameters, and it can also be used as a simple benchmarking tool that allows you to quickly find out the READ speed of a disk.
hdparm is available from the standard repositories on most Linux distributions.
Install hdparm depending on your Linux distribution.
Cool Tip: Troubleshooting an issue with a hard drive performance? It will be a good idea also to test download/upload Internet speed. It can be easily done from the Linux command line! Read more →
On Linux Mint, Ubuntu, Debian:
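sudo apt-get install hdparm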
On CentOS, RHEL:
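sudo yum install hdparm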
Run hdparm as follows to measure the READ speed of the storage device /dev/sda:
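sudo hdparm -Tt /dev/sda
The -T option times cached reads (essentially memory/system throughput), while -t times buffered reads from the device itself.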
17 Replies to “Disk Speed Test (Read/Write): HDD, SSD Performance in Linux”
Does anyone have an hdparm version for Android?
“Reminder: All the above commands use the temporary file tempfile. Don’t forget to delete it when you complete the tests.”
I can not find any place where you instruct as to how to delete the tempfile. How is this done safely?
Go to the directory where you executed the command and run, in a terminal:
rm tempfile
Or, in a GUI, select the file and delete it.
I must have done something wrong. I tested first with bs=4k and count=256k.
It finished quickly.
Afterwards I decided to alter the parameters like so: bs=1M and count=256k.
I didn’t know exactly what I was doing. I left it running without the slightest hunch whether it was wrong to interrupt it via Ctrl-C. It ran for approximately 1000 seconds and wrote almost 100 GB of the 150 GB free on the SSD. Only then did I read the man pages searching for clues, but still didn’t find any. So I have a couple of questions, if kindly allowed. That ‘k’ at the end of count: I am not sure of its meaning, or even whether it makes sense. I also have to ask what would have happened if the command had filled the whole free space? Would it have stopped by itself with a message/error? Was doing this dangerous for an SSD? Does the fact that I performed it from sysresccd, on an SSD with Windows installed, have any effect on the outcome?
I mean the if= source: is it read from RAM? I specified an of= on the SSD after mounting it, like /mnt/windows/some.output.file. Is the way I did it significant for the results?
> bs=4k and count=256k
k means what it always means: about 1,000, but in the case of computers (here), usually 1024. “bs” means block size, “count” means number of blocks. So this means write 4k x 256k bytes. 1k x 1k = 1 megabyte (about 1,000 x about 1,000 = about 1,000,000). How many megabytes? Since we already took care of the ‘k’s: 4 x 256 = 1024 (aka about 1,000, or 1k again). What’s 1k x 1k x 1k? 1 gigabyte (about 1,000,000,000). You wrote 1 gigabyte of zeros.
> bs=1M and count=256k
1M = (1k x 1k)
(1k x 1k) x 1k(the k from “count”) = 1 gigabyte
1 gigabyte x 256 = 256 gigabytes.
You were writing 256 gigabytes of zeros. Your drive is only 150 gigabytes in size. It won’t hurt your drive; since you wrote to a file on the mounted file system, it just fills up all the free space. When it fills your drive, dd will stop with a “No space left on device” error.
The “if” is not read from RAM: /dev/zero is a special device file in your system that simply returns endless zeros whenever it is read.
How to check sdb drive?
Should I use /dev/sdb instead of /dev/zero here:
sync; dd if=/dev/zero of=/media/user/MyUSB/tempfile bs=1M count=1024; sync
?
I think you missed the best software package for this kind of test. It’s called fio:
https://github.com/axboe/fio/
It’s not accurate. The second sync does not influence the measurement (it is run after dd reports its results), and thus the result is affected by caching. If you try the same test with 4096 or 8192 megs, you’ll get worse results (but closer to reality).
One way to correct for this is measuring the whole process with the time command and then doing the division manually. E.g.:
# time (sync; dd if=/dev/zero of=tempfile bs=1M count=8192; sync)
You’ll see that dd will report a higher throughput, but you can then divide 8192 by the number of seconds that time reports.
You need `conv=fdatasync` in your dd commands to include flush and sync time. Otherwise the results will be way too high, as others have mentioned.
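For example, the write test above becomes (conv=fdatasync makes dd physically flush the data before it exits and reports the throughput):
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync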
/dev/sdb2:
Timing cached reads: 16830 MB in 1.99 seconds = 8454.99 MB/sec
Timing buffered disk reads: 434 MB in 3.01 seconds = 144.27 MB/sec
great post:) keep simple
Hello, after doing some tests with different sizes, my NVMe storage is now 7% full (56 GB). Can I delete those test files, or do they stay there permanently?
How to check hard disk performance
How can I check the performance of a hard drive (either via terminal or GUI)? The write speed. The read speed. Cache size and speed. Random speed.
8 Answers
Terminal method
hdparm is a good place to start.
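For a quick read benchmark (a minimal example, assuming the disk is /dev/sda):
sudo hdparm -Tt /dev/sda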
sudo hdparm -v /dev/sda will give information as well.
dd will give you information on write speed.
If the drive doesn’t have a file system (and only then), use of=/dev/sda .
Otherwise, mount it on /tmp and write then delete the test output file.
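For example, a rough write test against a mounted file system might look like this (the path and size here are arbitrary; conv=fdatasync is explained further below):
dd if=/dev/zero of=/tmp/output.img bs=1M count=1024 conv=fdatasync
rm -f /tmp/output.img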
Graphical method
- Go to System -> Administration -> Disk Utility.
- Alternatively, launch the Gnome disk utility from the command line by running gnome-disks
- Select your hard disk in the left pane.
- Now click the “Benchmark – Measure Drive Performance” button in the right pane.
- A new window with charts opens. You will find two buttons: one for “Start Read Only Benchmark” and the other for “Start Read/Write Benchmark”. When you click either button, it starts benchmarking the hard disk.
How to benchmark disk I/O
Is there something more you want?
Suominen is right, we should use some kind of sync; but there is a simpler method: conv=fdatasync will do the job:
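For example (same file name and size as in the earlier dd examples):
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync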
If you want accuracy, you should use fio. It requires reading the manual (man fio), but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:
Sequential READ speed with big blocks (this should be near the number you see in the specifications for your drive):
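One possible invocation (a sketch; the job name, the test file name fio-test.tmp and the size/runtime values are arbitrary choices, not anything mandated by fio):
fio --name=seq-read --filename=fio-test.tmp --rw=read --blocksize=1M --iodepth=32 --direct=1 --ioengine=libaio --size=1G --io_size=10G --runtime=60 --numjobs=1 --group_reporting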
Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):
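Same shape as the read test, just with --rw=write (again, the names and sizes are arbitrary examples):
fio --name=seq-write --filename=fio-test.tmp --rw=write --blocksize=1M --iodepth=32 --direct=1 --ioengine=libaio --fsync=10000 --size=1G --io_size=10G --runtime=60 --numjobs=1 --group_reporting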
Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure):
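For example, 4 KiB blocks at queue depth 1 (iodepth=1):
fio --name=rand-read-4k --filename=fio-test.tmp --rw=randread --blocksize=4k --iodepth=1 --direct=1 --ioengine=libaio --size=1G --io_size=10G --runtime=60 --numjobs=1 --group_reporting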
Mixed random 4K read and write QD1 with sync (this is worst case number you should ever expect from your drive, usually less than 1% of the numbers listed in the spec sheet):
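For example, with --fsync=1 so every write is forced to stable storage before the next one:
fio --name=rand-rw-4k-sync --filename=fio-test.tmp --rw=randrw --blocksize=4k --iodepth=1 --direct=1 --ioengine=libaio --fsync=1 --size=1G --io_size=10G --runtime=60 --numjobs=1 --group_reporting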
Increase the --size argument to increase the file size. Using bigger files may reduce the numbers you get, depending on drive technology and firmware. Small files will give “too good” results for rotational media because the read head does not need to move that much. If your device is near empty, using a file big enough to almost fill the drive will get you the worst-case behavior for each test. In the case of an SSD, the file size does not matter that much.
However, note that for some storage media the size of the file is not as important as the total bytes written during a short time period. For example, some SSDs have significantly faster performance with pre-erased blocks, or they may have a small SLC flash area that’s used as a write cache, and the performance changes once the SLC cache is full (e.g. the Samsung EVO series, which have a 20-50 GB SLC cache). As another example, Seagate SMR HDDs have about a 20 GB PMR cache area that has pretty high performance, but once it gets full, writing directly to the SMR area may cut the performance to 10% of the original. And the only way to see this performance degradation is to first write 20+ GB as fast as possible and continue with the real test immediately afterwards. Of course, this all depends on your workload: if your write access is bursty, with longish delays that allow the device to clean its internal cache, shorter test sequences will reflect your real-world performance better. If you need to do lots of IO, you need to increase both the --io_size and --runtime parameters. Note that some media (e.g. most cheap flash devices) will suffer from such testing because the flash chips are poor enough to wear down very quickly. In my opinion, if a device is poor enough not to handle this kind of testing, it should not be used to hold any valuable data in any case. That said, do not repeat big write tests thousands of times, because all flash cells will have some level of wear from writing.
In addition, some high-quality SSD devices may have even more intelligent wear-leveling algorithms, where the internal SLC cache has enough smarts to replace data in place if it is re-written during the test and hits the same address space (that is, if the test file is smaller than the total SLC cache, the device always writes to the SLC cache only). For such devices, the file size starts to matter again. If you care about your actual workload, it’s best to test with the file sizes you’ll actually see in real life. Otherwise your numbers may look too good.