Flash drive life-span and test script
Hi all,
driven by the recent discussion thread (http://trisquel.info/de/forum/install-microsd) I decided to hack together a script which writes randomly generated data to any flash-based media. It is meant to be run from either an installed GNU/Linux system or a GNU/Linux live distribution. In order to save the HDD from wearing out, I decided to put the source file on a RAM disk.
Mine is mounted at /opt/ramdisk, as you can see in the script below. I created a 2 GB RAM disk with this line in my fstab:
tmpfs /opt/ramdisk tmpfs nodev,nosuid,size=2G 0 0
Besides the RAM disk, you need to make sure that the flash media you want to test is mounted somewhere in your file system. Adjust the value of the bash variable OUTFILE accordingly.
The script is rather bare-bones, without much error checking. Feel free to improve it.
You also need to adjust the size of the source file to the space available on your flash drive. If you do not have enough RAM to store a source file big enough to fill the flash drive, I would add a count loop and write the source file multiple times (e.g. 4 x 4 GB on a 16 GB flash drive). Of course, you then have to adjust the line that prints the MD5 checksum.
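The count-loop idea can be sketched like this. Note this is only a demo: a small temporary file and directory stand in for the RAM-disk source file and the mounted flash drive, and INFILE, OUTDIR and COUNT are illustrative names you would adjust for real use.

```shell
#!/bin/sh
# Write the source file COUNT times so a small RAM-disk file can still
# fill a larger drive. Temporary paths make the sketch safe to run;
# substitute your RAM-disk file and flash mount point in practice.
INFILE=$(mktemp)          # stands in for /opt/ramdisk/sourcefile
OUTDIR=$(mktemp -d)       # stands in for the mounted flash drive
COUNT=4                   # e.g. 4 x 4 GB to fill a 16 GB drive
dd if=/dev/urandom of="$INFILE" bs=1M count=2 2>/dev/null
i=1
while [ "$i" -le "$COUNT" ]
do
cp "$INFILE" "$OUTDIR/outfile.$i"
md5sum "$OUTDIR/outfile.$i" | cut -d " " -f 1   # one checksum per copy
i=$(expr "$i" + 1)
done
sync
```

Each copy should print the same checksum, since every outfile is a byte-for-byte copy of the same source file.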
To log all script output to a file, simply redirect the output like this:
./test_flash >> test_flash.log
Magic banana questioned whether the test is valid, because you simply overwrite each block with identical data on each run. If you agree with this, then simply fill up the flash drive with zeros prior to the next run. This would also double the write load.
I mean something like dd if=/dev/zero of=$OUTFILE bs=1M count=$SIZE
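Zero-filling between runs only resets the bits that were 1. A hypothetical variant that forces every bit to flip between passes alternates an all-zeros pass with an all-ones pass ('\377' is octal for 0xFF). Demo on a small temporary file; in practice the target would be the outfile on the mounted flash media:

```shell
#!/bin/sh
# Alternate an all-zeros and an all-ones pass so every bit changes
# between iterations. A temporary file stands in for the flash target.
TARGET=$(mktemp)
BYTES=1048576   # 1 MB demo size; use your real file size in practice
head -c "$BYTES" /dev/zero > "$TARGET"                       # 0x00 pass
tr '\0' '\377' < /dev/zero | head -c "$BYTES" > "$TARGET"    # 0xFF pass
od -An -tx1 -N4 "$TARGET"   # after the second pass: ff ff ff ff
```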
BTW: My crappy 1 GB no-name USB stick just passed something like its 1700th write pass and is still alive. That translates to roughly 1.5 - 1.6 TB written to the stick. If someone is willing to send me a 16 GB micro SD card plus adapter, I can easily set up a similar test on my Linux box at work, which runs 24/7 anyhow.
Have fun,
Holger
#!/bin/bash
# Repeatedly copy a random source file from the RAM disk to the flash
# media, verifying the MD5 checksum after every pass.
INFILE=/opt/ramdisk/sourcefile
OUTFILE=/outfile   # adjust: path on the mounted flash media
SIZE=920           # source-file size in MB
dd if=/dev/urandom of="$INFILE" bs=1M count=$SIZE
md5sum "$INFILE"
COUNTER=0
MD5_SUM=$(md5sum "$INFILE" | cut -d " " -f 1)
SIZE=$(stat -c %s "$INFILE")
echo $COUNTER MD5:$MD5_SUM Size:$SIZE Bytes File:$INFILE
while true; do
let COUNTER=COUNTER+1
cp "$INFILE" "$OUTFILE"
sync   # force the data out of the page cache onto the media
MD5_SUM=$(md5sum "$OUTFILE" | cut -d " " -f 1)
SIZE=$(stat -c %s "$OUTFILE")
echo $COUNTER MD5:$MD5_SUM Size:$SIZE Bytes File:$OUTFILE
cat /dev/null > "$OUTFILE"   # truncate before the next pass
done
You seem to know much more than I do about how flash drives break... but since you are continuing with this topic, I feel like arguing some more (even though I am probably wrong):
Why write to a file system (and not directly to /dev/sdb, or whatever the device is) when you want to test the hardware? The file system may always write to the same place or, on the contrary, prefer to write to blocks that have been used less in the past (for instance, I believe Btrfs does that to lengthen SSDs' lives). Here are the consequences of these two extreme writing strategies:
- If the file system maximizes the number of written blocks, more iterations will pass on a larger drive. To avoid that, the whole drive should be filled.
- If the file system always writes to the same blocks, it may be that some of those blocks live longer than others (I really doubt it is the case with flash drives, but it must be so with hard disks, where writing at the beginning or the end of the disk means more or less spinning).
- If the file system always writes to the same blocks and the failure mode is a difficulty in changing the polarization of a bit, then the drive will never fail, because a new write only confirms what was previously on the drive. Using /dev/zero between the writes helps... but only for the bits at 1.
All in all, the following (simpler) script seems to avoid these pitfalls. It must be run with administrator privileges. It performs far fewer iterations, since the whole drive is used every time. I have not tested it, since I do not have any drive to destroy!
#!/bin/sh
DEVICE=/dev/sdaX
while true
do
if [ `cat /dev/urandom | tee $DEVICE | md5sum` = `cat $DEVICE | md5sum` ]
then
counter=`expr $counter + 1`
echo $counter successful iterations
fi
done
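One caveat with the (untested) script above: `md5sum` prints two words, the hash and the input name ("-" for stdin), so the unquoted backtick expansions hand `[` four words instead of two operands, and it may also choke on an uninitialized `$counter`. Quoting the expansions, as the revised version later in the thread does with "$md5", avoids the first problem. A quick demonstration:

```shell
#!/bin/sh
# md5sum's output is two words: the hash and the input name ("-").
out=$(echo test | md5sum)
words=$(echo "$out" | wc -w)
echo "md5sum printed $words words: $out"
# Unquoted, $out would expand to two arguments inside [ ... ];
# quoted, the comparison is a single-string test and behaves:
[ "$out" = "$(echo test | md5sum)" ] && echo "quoted comparison matches"
```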
> Why write to a file system (and not directly to /dev/sdb, or whatever
> the device is) when you want to test the hardware? The file system
> may always write to the same place or, on the contrary, prefer to
> write to blocks that have been used less in the past (for instance, I
> believe Btrfs does that to lengthen SSDs' lives). Here are the
> consequences of these two extreme writing strategies:
I think caching should also affect the results.
> while true
> do
> if [ `cat /dev/urandom | tee $DEVICE | md5sum` = `cat $DEVICE | md5sum` ]
> then
> counter=`expr $counter + 1`
> echo $counter successful iterations
> fi
> done
"cat $DEVICE" might get data from before the write. I would serialize it and find a way to keep the data from being cached.
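One way to reduce the caching problem (an assumption-laden sketch for GNU coreutils dd on Linux) is to sync the writes and then invalidate the kernel's cached pages for the target before reading it back, either with dd's nocache flag or, as root, via /proc/sys/vm/drop_caches. Demo on a temporary file standing in for the device:

```shell
#!/bin/sh
# Sketch: flush the writes, ask the kernel to drop its cached pages for
# the target, then read it back for the checksum. A temporary file
# stands in for the device; as root one could instead run:
#   echo 3 > /proc/sys/vm/drop_caches
FILE=$(mktemp)
dd if=/dev/urandom of="$FILE" bs=1M count=2 2>/dev/null
sync                                                   # force data to the medium
dd if="$FILE" of=/dev/null iflag=nocache 2>/dev/null   # discard cached pages
readback=$(md5sum "$FILE" | cut -d " " -f 1)
echo "read-back checksum: $readback"
```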
You are right. Here is a new version with a call to 'sync'... an 'exit' after a corrupted write (the previous version would keep on iterating)... the device specified as an argument to the script... and a usage message printed when no argument is given:
#!/bin/sh
if [ -z "$1" ]
then
echo "Usage: $0 storage-device-file
Example: sudo $0 /dev/sdb
Warning: the device will be used until it fails!"
exit 1
fi
counter=0
while true
do
counter=$((counter + 1))
md5=$(cat /dev/urandom | tee "$1" | md5sum)
sync
if [ "$md5" = "$(cat "$1" | md5sum)" ]
then
echo $counter successful iterations
else
echo iteration $counter was a failure
exit 1
fi
done
I did not really want to continue with this topic, but I will post my "results" as well as the test script I used.
You can bet that there are at least two dozen variants of how to execute such a "stress test".
No matter what testing approach you choose, the weak point is still how to match the gathered data with real life. My 1 GB stick got written with a data volume of around 1.7 TB. I cannot really estimate what lifetime to expect from a medium that achieves at least, let's say, 2 TB of written data. My 16 GB SanDisk stick that I use with Trisquel 5.5 is still happy. So is the 4 GB CF card in my Debian box.
Thanks for thinking this through yourself and posting your script. I chose to write to a file system rather than to the block device because I wanted to make my test as close to daily use as possible.
I will end my experiments now. If Chris wants to step in with further evaluation of compact SD cards, I would be interested in the results. For now I will simply trust my USB stick as I would any HDD.
To write to a (formatted) partition, I believe the (untested) script I proposed could be called with a partition (e.g., /dev/sdb1) instead of a device (e.g., /dev/sdb) as an argument.
:) Maybe instead of writing scripts we should copy several ISO images to the Trisquel flash drive. That is all I had to do to get mine to fail (the goal was not to make it fail; it just happened after I had copied quite a few ISO images to it while testing various images).
Well then, maybe the ISOs you used contained some secret flash suicide code that made every memory cell commit hara-kiri as soon as the content was stored in it?
;-)
Really, Chris, how much more proof does one need, besides running the script oneself on any flash media of choice to find out when it breaks?
My budget is way too limited to buy a higher-quality 16 or 32 GB micro SD card simply to try to shred it. Otherwise I would just step across the street to the PC store, grab a quality micro SD card, and try to break it.
In the end it does not matter whether you fill up the flash media with data from /dev/zero, /dev/urandom, or an ISO. What matters is that you do a full write to the flash media and verify the written data every time. I think the real art is telling how long any flash-based device out there will really last, based on observations like "16k full write cycles killed the media".
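The back-of-envelope arithmetic is simple once you have such an observation. All the numbers below are illustrative assumptions (the 1700-pass, 1 GB figure from earlier in the thread, plus a made-up daily write volume), not measurements:

```shell
#!/bin/sh
# Lifetime estimate from an observed endurance figure. Every number
# here is an assumption for illustration only.
endurance_cycles=1700    # full write passes the 1 GB stick survived
capacity_gb=1
daily_writes_gb=1        # assumed daily write volume
total_tb=$(awk -v c="$endurance_cycles" -v g="$capacity_gb" 'BEGIN{print c*g/1000}')
lifetime_days=$(awk -v c="$endurance_cycles" -v g="$capacity_gb" -v d="$daily_writes_gb" 'BEGIN{print c*g/d}')
echo "total data written before failure: about $total_tb TB"
echo "estimated lifetime at that rate: $lifetime_days days"
```

With these numbers the stick would last about 1700 days, i.e. well over four years, at one full-capacity write per day.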