Install on microSD

23 replies [Latest post]
apvp
Offline
Joined: 12/10/2011

Hi everyone! I'm trying to install (I do mean install, not make a live USB) Trisquel 5.5 mini on a 4 GB microSD so that I can make several partitions and mount them with different mount options, just as I would on a regular HDD.

I've formatted all partitions as ext2, and I intend to mount all of them with the noatime mount option, send all logs to /dev/null, and mount everything I can as tmpfs to reduce writes to the medium and preserve its lifespan.

The installation worked fine, but when I boot from the microSD I just get BusyBox's command line. So my question is: can Trisquel really be installed on a microSD (any tips or workarounds would be much appreciated), or will I have to content myself with making a live USB and using its persistent storage option?

For the kind of thing I have in mind (proxy server/residential gateway) I would really like a proper install rather than a live USB.

Thanks!

apvp
Offline
Joined: 12/10/2011

[I forgot to mention]

The microSD has a USB adapter, so it's just like a USB thumb drive. I've tried Trisquel as a live USB on that medium and everything was OK, so the problem isn't the device itself.

Darksoul71
Offline
Joined: 01/04/2012

Normally, most flash devices I know (SD card, microSD card, USB stick, CF card in a reader) behave like a regular HDD. Of course you get a much lower write speed, but when using Trisquel you will not notice anything. I run Trisquel 5.5 from a 16 GB USB stick without issues.

The only modifications compared to a HDD installation so far:
- noatime option for mounted filesystems
- No swap partition
- Moved some directories to tmpfs
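For reference, the noatime and tmpfs items above usually amount to a handful of /etc/fstab entries. A sketch, with the device name and tmpfs sizes as assumptions to adjust for your layout (note that logs kept on tmpfs are lost at every reboot):

```
# /etc/fstab excerpt -- hypothetical device name and sizes
/dev/sda1  /         ext2   noatime,errors=remount-ro  0  1
tmpfs      /tmp      tmpfs  defaults,noatime           0  0
tmpfs      /var/tmp  tmpfs  defaults,noatime           0  0
tmpfs      /var/log  tmpfs  defaults,noatime,size=50m  0  0
# no swap entry at all: swap on flash is deliberately omitted
```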

I cannot comment on the average lifespan of a USB stick in daily usage, but so far I haven't experienced any issues.

Your hardware must of course support booting from USB devices, but most modern hardware does.

Chris

I am a member!

Offline
Joined: 04/23/2011

It'll fail. I guarantee it. Any serious use of flash media will cause it to fail. I'd be shocked if you got six months out of it. Mind you, I am assuming that you are applying security updates and the like.

yeehi
Offline
Joined: 06/02/2012

"It'll fail. I guarantee it. Any serious use of flash media will cause it to fail"

Does this mean that you are not a fan of solid state hard drives, Chris?

Chris

I am a member!

Offline
Joined: 04/23/2011

You have to back up a moment. I'm mainly referring to USB flash, Micro SDHC, Mini SDHC, SDHC, and similar.

However :) SATA SSDs have too high a failure rate too. If speed is taken into consideration and you have a backup, or your data isn't that important (gaming, web browsing, etc.), then sure, they are great... but SDHC, and microSDHC in particular, are horrendous. It doesn't matter the brand or price. SLC SDHC cards are slightly better, although not by much; still not "good enough" to replace a hard drive (SSD or platter).

I've done considerable testing and a whole lot of research. SDHC/flash tends to work well if you don't write data to it. For instance, if you use a script that turns the drive into a read-only medium, stores everything else in memory (AUFS), and you don't update it, you will be fine. You can even store files on it. Just don't plan on installing software on it or updating it; for whatever reason it isn't reliable enough for that. You should also expect to see data corruption.
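The read-only-plus-RAM idea described here can be sketched with overlayfs (a modern stand-in for AUFS). This is a conceptual fragment only: the mount points and device name are assumptions, it requires root and an overlay-capable kernel, and on a real system this is done from the initramfs rather than an interactive shell.

```
# Conceptual sketch only, not a turnkey setup.
mount -o ro /dev/sdb1 /mnt/ro        # the flash medium, never written to
mount -t tmpfs tmpfs /mnt/rw         # all writes land in RAM instead
mkdir -p /mnt/rw/upper /mnt/rw/work
mount -t overlay overlay \
    -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
    /mnt/union                       # reads fall through to flash, writes stay in RAM
```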

Darksoul71
Offline
Joined: 01/04/2012

@apvp: The behaviour of your installation is quite strange, though. You should check your system logs (dmesg, Xorg.0.log, etc.).

@Chris: I disagree on this one. I have been using a 4 GB CF card in my Shuttle barebone for over a year, running vanilla Debian 6.0 with the modifications described above, which mostly limit unnecessary write access to the medium itself. The CF card still runs fine.

The German computer magazine c't did a test over six years ago in which they tried to kill a 2 GB USB stick by writing to the flash medium incredibly often, but even after 16,000,000 write cycles the stick didn't die. Unfortunately I did not find the article itself, only a reference to it:
http://www.heise.de/ct/artikel/Ueberflieger-291740.html

Flash durability (translated from the German)

About two years ago we already tried to destroy a USB stick by continuously writing to a single block [4]. Back then our victim did not complain even after 16,000,000 cycles. In view of modern wear-leveling algorithms we changed our test scenario: this time we write a 2 GB USB stick from front to back with (more or less) random data in each cycle. Every 50 write passes, a script checks via MD5 checksums that the data is actually stored correctly on the stick. At a write rate of around 7 MB/s, one write cycle takes just under 5 minutes. So far, in more than a month of continuous testing, the stick has swallowed over 23.5 TB of data without complaint, and even after these 12,240 complete write cycles it reads back without errors.

The original article was:
Boi Feddern, Speicherschwarm, 58 USB-Sticks mit zwei, vier und acht GByte, c't 18/06, p. 168

I could try to find the article on my c't ROMs, which store all articles of a year as HTML, if this is of interest. They are in German, though.

Given a high-quality USB stick / CF card / SD card, it is pretty unlikely that it will die within 2-3 years of normal use. By "normal use" I mean a Linux system tailored to flash usage plus everyday stuff (updating your system, writing mail, surfing the web, etc.). I would say that quickly failing flash media are a myth of the past. I have tons of USB sticks from 1 to 16 GB in daily use, and the only one that died was a really cheap 1 GB stick I got from a friend. Using a standard Trisquel on a cheap USB stick without any modification might kill the stick quite quickly. The big benefit of USB sticks, as I see it, is simply the price: even a quality USB stick costs only half the price of a cheap SSD.

HTH,
Holger

P.S.: The Arch wiki has several interesting articles on installing Linux to flash:
https://wiki.archlinux.org/index.php/Install_to_usb#Optimizing_for_the_lifespan_of_flash_memory
https://wiki.archlinux.org/index.php/SSD#Tips_for_Minimizing_SSD_Read.2FWrites

P.P.S.: One important modification I forgot is related to the profile directory of any Mozilla-based web browser.
Simply check out this: http://www.verot.net/firefox_tmpfs.htm

malberts

I am a member!

I am a translator!

Offline
Joined: 04/19/2011

> @Chris: I disagree on this one. I have been using a 4 GB CF card in my
> Shuttle barebone for over a year, running vanilla Debian 6.0 with the
> modifications described above, which mostly limit unnecessary write
> access to the medium itself. The CF card still runs fine.
>
> The German computer magazine c't did a test over six years ago in which
> they tried to kill a 2 GB USB stick by writing to the flash medium
> incredibly often, but even after 16,000,000 write cycles the stick
> didn't die. Unfortunately I did not find the article itself, only a
> reference to it:
> http://www.heise.de/ct/artikel/Ueberflieger-291740.html

I don't have any experience and I don't know how microSD compares to CF,
but K.Mandla has also used CF cards as "hard drives" with success:
http://kmandla.wordpress.com/2010/07/22/poor-mans-ssd/
http://kmandla.wordpress.com/2010/07/28/poor-mans-ssd-one-week-later/
http://kmandla.wordpress.com/2010/08/25/poor-mans-ssd-of-course-you-know-this-means-war/
http://kmandla.wordpress.com/2010/09/18/poor-mans-ssd-test-results/

--
Morne Alberts

Darksoul71
Offline
Joined: 01/04/2012

@malberts: Thanks for posting this!

It essentially supports my experience of running Debian from a 4 GB CF card and an IDE-to-CF adapter. Mind you, it is still possible to ruin a CF card in a minimal amount of time. Our apprentice at work had a small VIA Eden system running IPFire from a CF card and unfortunately did not use the flash-specific version of IPFire. I have no real clue what happened, but after a few weeks of using IPFire's IDS the CF card "blew up".

Unfortunately, manufacturers of flash media (e.g. SanDisk) do not provide any technical data; at least I never found specific MTBF or MTTF values for flash memory.

Given the links to the Arch wiki, my observations, and the c't test, it is not really possible to break a good-quality flash medium. At least not if you use it with precaution, do your homework, and do not do strange things.

Of course a flash medium might break, but so does any HDD.

Chris

I am a member!

Offline
Joined: 04/23/2011

How you use it makes a huge difference. Don't use it for databases is all I can say. Using it for routers and similar devices where nothing gets written to the flash should be fine.

Chris

I am a member!

Offline
Joined: 04/23/2011

Compact flash may work better than the media I've tested. I've tested the best-quality parts with the highest reviews, among other factors. There may be an issue related to the connector that is ultimately the real problem; I'm still working out where the issue is. My tests have not included compact flash, so I can't confirm it is a flash issue specifically. It may be related to the USB and/or SDHC connection. Ultimately, though, I would not suggest using SDHC, USB flash, microSDHC, miniSDHC, or similar unless you set it up for read-only operation with all other writes going to memory. That does work better in my experience. The constant reads/writes from updating these cards appear to wear them out quickly regardless of quality or price. This holds even when you use read-only mode for regular use and boot into read/write mode only for updates.

I should also mention that I find the 16,000,000-write-cycle test hard to believe, mainly because I have never found a card that would allow anywhere near that number of writes. I believe I've tested just about every brand and type of microSDHC card on the market. Unless I'm misunderstanding what that figure means, my experience is that if you fill any flash disk ten times with random data, chances are it will fail. SLC flash is not much better. Although it really shouldn't be too surprising, considering the nature of the technology.

I'll also add that I doubt many people have used USB, SDHC, or other types of flash for "regular" daily use while keeping up with security updates. Updating flash is unbearably slow. While the read times may be acceptable, particularly when writing to RAM, the write times are awful.

SSDs are much better in terms of write performance.

Chris

I am a member!

Offline
Joined: 04/23/2011

Darksoul71:

The way it's used seems to be a factor, although the connection (USB vs. IDE) may be an issue too. I think writing is also a major issue, though.

I haven't been able to duplicate these tests, and I find them hard to believe; either that, or I am not interpreting the numbers correctly.

The other thing is that the example you gave makes me think you didn't apply security updates frequently. "Real world" to me means taking a disk (USB/SDHC/etc.) in and out daily and applying any new security updates at least once a week. Not to mention storing movies, documents, pictures, and other data on the card.

I should also mention that I've tested (probably low-quality) mini hard drives over USB and seen similar or worse results than flash.

Darksoul71
Offline
Joined: 01/04/2012

Chris,

It seems logical to me that one would not use any flash-based storage for heavy random write access.
I do not think the interface (USB vs. SATA vs. IDE) plays a big part here; a CF card inside a USB card reader should hardly have a different lifespan from the same card in a Flash2IDE adapter.

The German part of the c't article I posted says this:
Their script wrote more or less random data to a 2 GB USB stick (filling it completely). Every 50 writes, the script checked via MD5 checksum whether the data on the stick had been written correctly. After a month they had performed more than 16 million (!) write accesses, which translates to a volume of 23.5 TB written to the USB stick.

To quote one of the Arch Linux wiki entries I mentioned:
Note: A 32 GB SSD with a mediocre 10x write amplification factor, a standard 10,000 write/erase cycle, and 10 GB of data written per day would get an 8-year life expectancy. It gets better with bigger SSDs and modern controllers with less write amplification.

OK: even with standard 10k write/erase cycles for the flash cells and a data volume of 10 GB written to the SSD per day (which of course may have a technically different layout and different write algorithms), they expect a life expectancy of 8 years.
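The wiki's estimate is easy to check with back-of-the-envelope arithmetic; all figures below are taken from the quote:

```shell
# Life expectancy = (capacity x P/E cycles) / (host writes per day x write amplification)
capacity_gb=32              # drive capacity
pe_cycles=10000             # write/erase cycles per cell
write_amp=10                # write amplification factor
host_gb_per_day=10          # data written by the host per day

raw_endurance_gb=$((capacity_gb * pe_cycles))       # 320000 GB may be written in total
flash_gb_per_day=$((host_gb_per_day * write_amp))   # 100 GB actually hit the flash daily
days=$((raw_endurance_gb / flash_gb_per_day))       # 3200 days
echo "$days days, roughly $((days / 365)) years"    # 3200 days, roughly 8 years
```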

Unfortunately I do not have the c't magazine DVD (http://www.heise.de/ct/) from 2006, so I cannot find out which script they used, but this is no problem.

I will do the following (maybe this weekend):
Write a script that fills up a USB device with random data and compares the written file with the source file via MD5 checksum. Once the file written to the USB device differs from the source file, I will stop the test. Then I will publish the results here. I have both an old 1 GB USB stick and several smaller CF cards lying around which I do not mind trashing.

A rough outline of the test scenario is this:
1) Create a RAM disk (tmpfs) on my Linux box to hold the source file, to minimize read access to my HDD
2) Create a file inside the RAM disk, roughly the size of the target device, via dd from /dev/random
3) Generate the MD5 checksum of the source file in the RAM disk
4) Wipe the target device via rm -rf *
5) Copy the source file to the target device
6) Generate the MD5 checksum of the file on the target device
7) Write the loop count and MD5 checksum to a log file
8) If the source and target files differ, stop the loop
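The steps above can be sketched as a small shell function. This is an assumed implementation, not the c't script or Holger's: the names and the termination cap are mine, and /dev/urandom is used since /dev/random would block for files of this size. Note also that for a real endurance test you would drop the page cache before re-reading, otherwise the checksum may be computed from RAM rather than the medium.

```shell
# Hypothetical helper implementing steps 1-8 above.
flash_endurance_test() {
    target=$1       # mount point of the flash device under test
    size_mb=$2      # size of the random source file in MB
    max_cycles=$3   # safety cap so the sketch terminates
    ramdisk=$(mktemp -d)   # step 1: staging dir (use a real tmpfs for long runs)
    # step 2: random source file (urandom, since /dev/random would block)
    dd if=/dev/urandom of="$ramdisk/source.bin" bs=1M count="$size_mb" 2>/dev/null
    src_md5=$(md5sum "$ramdisk/source.bin" | cut -d' ' -f1)   # step 3
    i=0
    while [ "$i" -lt "$max_cycles" ]; do
        i=$((i + 1))
        rm -f "$target/source.bin"                        # step 4
        cp "$ramdisk/source.bin" "$target/source.bin"     # step 5
        sync   # flush to the medium; a real test would also drop caches here
        dst_md5=$(md5sum "$target/source.bin" | cut -d' ' -f1)  # step 6
        echo "cycle $i: $dst_md5"                         # step 7: log line
        if [ "$src_md5" != "$dst_md5" ]; then             # step 8
            echo "corruption detected at cycle $i"
            return 1
        fi
    done
    return 0
}

# Example run against a real device (values are placeholders):
# flash_endurance_test /mnt/stick 940 1000000 >> flashtest.log
```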

Unfortunately I neither have enough RAM in my system to set up a 16 GB RAM disk, nor am I willing to shell out €10 for a 16 GB USB stick simply to trash it. This is up to someone else with a bigger wallet and a beefier system.

I will publish both the bash script and interim results to the mailing list, as I guess this is interesting for everyone running Trisquel from flash. With the script posted, everyone can try for themselves how flash stands up to intense use. We will see. Being an engineer, I trust what I can see and try out myself more than anything someone else posted.

Oh yes, and my Debian system on the 4 GB CF received the usual apt-get update / apt-get upgrade as soon as I was aware of any security fixes; this means at least once a week. As for the lower write speed of USB flash devices compared to an SSD or even a normal HDD, you will most likely only notice it during installation, updates, and of course when saving bigger files. Normal use (writing mail, surfing the web, watching video, listening to podcasts, etc.) usually means much lower write access.

HTH,
Holger

satellit
Offline
Joined: 12/16/2010

On 06/07/2012 02:45 AM, name at domain wrote:
> [snip]
FYI:
I have been using USB sticks and SDXC cards (in a Lexar USB adapter)
to test installs of Fedora 17 from the USB/SDXC. I have never had a
failure. All tests required an F17 Disk Utility reformat to GPT with a
FAT /dev/sdb1. I was able to create persistent USBs from the live ISOs,
build USBs that install the DVD contents to HD, and do dd writes and
liveusb-creator (with persistence) with no failures. When formatting, I
recommend a custom layout of BIOS boot + ext4 and no swap. I have not
tried LVM on a USB.

I have seen NO failures in the 2-3 years that I have been writing to
these USBs, including some $4.95 EMtec 4 GB sticks. (I normally use
HP 125, Toshiba 8 and 16 GB, and Lexar Firefly 2 GB and 4 GB.) Newly
arrived: SanDisk Ultra SDXC 30x 64 GB cards.

Tom Gilliard
satellit_

I also successfully use the Ubuntu/Trisquel USB Startup Disk Creator
(usb-creator-gtk) with these sticks.

Chris

I am a member!

Offline
Joined: 04/23/2011

One of the things I've always wondered is whether the density of the flash could be negatively impacting the results. If 64 GB is the largest available microSDHC, then maybe 16 or 32 GB would work better. I've always tended to use the highest density available, given the limited space. However, in my experience the results seem to affect all flash media.

Chris

I am a member!

Offline
Joined: 04/23/2011

Maybe I should have rephrased it. The problems I see may not be the result of write access to the card; there could be other explanations.

I have done a lot of testing, though, with solid setups (ext2 rather than ext3/ext4, read-only setups with AUFS and RAM, etc.). In my experience, the updating part kills it. However, it could be that taking the card in and out frequently (daily) for months causes the problem. It could also be that data isn't being properly flushed to the disk on shutdown, or something similar.

That said, I can think of other scenarios where writing lots of data does appear to have killed USB flash sticks. I'll give you examples. I used the Trisquel flash drive to write bootable ISO images to it. I got maybe 5-10 images written before I started seeing disk errors, and in that situation I didn't take the drive in and out more than maybe a dozen times. Now, these are cheap flash drives, not SLC or anything like that, and it is pretty well known that the cheap stuff doesn't last.

However, I have seen similar results with highly rated "never fails" MLC microSD cards that cost significantly more. I was testing with 16 GB cards.

I've also seen problems with 32 GB SLC SDHC cards.

For those who don't know, SLC is a higher-quality type of flash that allows more writes before failing.

Some of my setups included real-world testing with things like RAID: software RAID between two 32 GB SLC SDHC cards in USB card readers. Ultimately one drive failed in a rather short time. And yes, this was with a properly set-up card. It was used in a read-only setup (and verified) except during the update process. The only other time it got written to was when files were saved to a second partition; this second partition was not for swap, it was for data, did not hold a home directory, and didn't get mounted until I mounted it manually. No data would have been written to it without my manually mounting it and then saving a file.

And the reason for RAIDing the SDHC cards and using SLC was that flash is terribly unreliable in the real world.

Now, I'm not trying to dispute your results. There could be an explanation for this that has nothing to do with the reads/writes or the connection. I haven't figured it out, though, and I'm doubtful, given all the evidence I've seen.

I think that when SD, microSDHC, and similar cards are used to store files infrequently, or used in cameras, phones, and the like, they tend to last. That has been my experience, and it matches what I've seen in reviews where hundreds of people reviewed the same card.

More data would be useful. Mine comes from many years of testing. I'm actually working on testing a type of mini non-removable SSD card that fits into the Mini PCIe slot (if I recall correctly, this is a project that has spanned 10+ years and which I only infrequently get to work on; the slots have to support a USB function to work with the Mini PCIe SSD cards).

I've tried compact flash with CF-to-IDE adapters in the past. I always seemed to run into problems with the adapters, though: you can get an OS loaded, it boots, and shortly thereafter it fails. I probably haven't done that in 10+ years, though; I last tested it on a 200 MHz Pentium system.

Darksoul71
Offline
Joined: 01/04/2012

OK, I have found my first victim for the test: a 1 GB USB stick I once received at a Red Hat event. The device is listed as:
ID 0204:6025 Chipsbank Microelectronics Co., Ltd CBM2080 Flash drive controller

I hacked together a crude bash script that writes roughly 940 MB of data generated from /dev/urandom from a RAM disk to the USB stick. It then verifies the MD5 checksum, deletes the file on the USB stick, and starts again. Since hacking the script together this noon I have completed around 160 write cycles, so we are talking about 150-160 GB of data written to the USB stick, and still no sign of corruption. The test will continue for at least another day.

After some polishing I will publish the script here, and anyone can then evaluate the durability of flash media for themselves.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

You are not always writing the same random file, are you? If you are, I believe the result does not tell us much, since the new write merely confirms what was already on the drive ('rm' does *not* remove the data, just the inode).

Darksoul71
Offline
Joined: 01/04/2012

Magic Banana,

It is valid to question any test setup :)

Right now I have hit the 500th write cycle without issues. We are approaching the 0.5 TB mark.
Anyone in doubt about the approach the script takes is free to add another write cycle from /dev/zero to wipe the filesystem on the USB stick.

I am unsure what you mean by "...the new write confirms what was on the drive...". Sure, every write cycle overwrites the data from the previous run with identical data, but this doesn't matter. To my understanding, a typical failure of the flash medium will lead to an I/O error for the corresponding inode. Thus the written random data file will not be identical to the complete file in the RAM disk:

1) The file will be much smaller than the source file.
2) Even if the rest of the USB stick is still filled with data from the previous run, that data does not belong to the target file; an I/O error almost always stops the copying process immediately.

Both of the above lead to MD5 checksums that differ from the previous runs, so the stick's corruption can be detected.

In theory, one could simply fill up the flash drive with data from /dev/zero and calculate the MD5 checksum on each run. Any corruption in the filesystem would then lead to MD5 checksums diverging from the previous run.

BTW: I switched away from deleting the target file and now simply do a cat /dev/null > targetfile to "reset" it.
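The difference matters at the filesystem level: rm frees the inode, while truncating keeps the same inode (and so largely the same block-allocation pattern on the next write). A small demonstration, with an arbitrary temp file standing in for the target:

```shell
# Truncation keeps the inode; rm would free it.
f=$(mktemp)
printf 'payload\n' > "$f"
inode_before=$(ls -i "$f" | awk '{print $1}')

: > "$f"    # same effect as: cat /dev/null > "$f" (truncate in place)
inode_after=$(ls -i "$f" | awk '{print $1}')

[ "$inode_before" = "$inode_after" ] && [ ! -s "$f" ] && echo "same inode, now empty"
```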

apvp
Offline
Joined: 12/10/2011

I've got it. Apparently I was setting the /boot partition too small (50 MB), and as ubiquity didn't warn me the way it usually does about / and /usr partition sizes, I just kept doing it again and again :] I've set it to 100 MB and everything's OK now.

Thanks everyone!

Darksoul71
Offline
Joined: 01/04/2012

Great to hear you solved it. Someone (maybe me :-) ) should sum up the modifications necessary to run Trisquel stable from flash-based media in the wiki.

lembas
Offline
Joined: 05/13/2010

Glad to hear you got it sorted, apvp. Silly ubiquity bug!

I wonder how a flash thumb drive would work if used as a btrfs seed. It would then be a read-only drive in a copy-on-write setup.

I think this could be interesting, as most files probably remain unchanged. Of course, flash is slower at sequential operations, many files do change (some fairly rapidly), and btrfs is still in testing, so no magic bullet here. :)

Sachin
Offline
Joined: 06/02/2012

I have installed Trisquel on a SanDisk mm2 card using a USB adapter, and it boots and runs faster than a live disc.

Chris

I am a member!

Offline
Joined: 04/23/2011

There are definitely benefits like this to running from flash-based media rather than a CD. Not to mention the size.