Bootup won't progress past the login step

39 replies [Latest content]
amenex
Offline
Joined: 01/03/2015

Trying to bring an old hard drive back to life so I can upgrade its operating system,
I've found that the present Trisquel_8 won't accept my login credentials and simply
repeats the login window. It doesn't report a wrong password, which would tell me that
I goofed; it simply presents the same old password window again and again.

The /etc/fstab file lists all the external HDDs' UUIDs appropriately, and I've made
sure that there aren't any obsolete entries.

I can log in via Guest mode. Is there anything I can do from there?

amenex
Offline
Joined: 01/03/2015

Followup: These two errors appear during bootup:
[ 1.731246] [drm:intel_set_pch_fifo_underrun_reporting [i915]] *ERROR* uncleared pch fifo underrun on pch transcoder A
[ 1.731272] [drm:intel_pch_fifo_underrun_irq_handler [i915]] *ERROR* PCH transcoder A FIFO underrun

Might be an old bug:
https://www.reddit.com/r/archlinux/comments/3r4rv2/error_uncleared_pch_fifo_underrun_on_pch
https://bugzilla.redhat.com/show_bug.cgi?id=1289997

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

From a live system, try to rename the directory .config (hidden, since its name starts with a dot) in your home folder on the installed system. Can you then log in?
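A minimal sketch of how that could be done from a live session, assuming the installed system's root partition is /dev/sda1 and the user name is george (both placeholders):
$ sudo mount /dev/sda1 /mnt
$ sudo mv /mnt/home/george/.config /mnt/home/george/.config.bak
$ sudo umount /mnt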

amenex
Offline
Joined: 01/03/2015

Sage advice. However, after ascertaining that the app information that I wanted to protect
is in a data-dedicated partition, I decided to overwrite the Tq_8 operating system with
the current Tq_10.

After pinning down the locations of all the external HDDs in fstab by their UUIDs, I was
forced to rely on the live DVD version of Tq_10, because the flash-based version is not
recognized on startup. The DVD is a lot slower than the flash. What will force the live
flash drive to be recognized?

After refreshing the installations of two other HDDs' operating systems with Tq_10, the
primary HDD's Tq_10 operating systems are now broken, whereas one of them was fully
operational and booted up properly before. I suspect that HDD, as hw-probe says it is
malfunctioning and should be replaced. Another drive, an SSD, is on its way. From all I
can find in the Trisquel forum, are there no longer any special considerations regarding
partitioning and space requirements for SSDs vs. HDDs?

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

After refreshing the installations of two other HDDs' operating systems with Tq_10, now the primary HDD's Tq_10 operating systems (...)

Why on earth do you need several Trisquel 10 systems on the same machine? To multiply the administration work?

Trisquel forum, are there no longer any special considerations regarding partitioning and space requirements for SSDs vs. HDDs?

The root partition, /, on an SSD (for a faster bootup and faster application startup as well), and /home as a single partition on a large disk, taking it all. Trisquel's default types of filesystem are OK: ext4 for / and XFS for /home. One single swap partition is needed, even if you have several GNU/Linux systems. It can be on the SSD with /. The swap partition must be at least as large as your RAM if you want to be able to hibernate any non-swapping system. Define one single partition, for data, on any other disk you have, taking it all. It can use XFS.
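A minimal sketch of the /etc/fstab such a layout could end up with (the UUIDs and the data mount point are placeholders, not values from this thread):
UUID=<uuid-of-ssd-root-partition>  /          ext4  errors=remount-ro  0  1
UUID=<uuid-of-home-partition>      /home      xfs   defaults           0  2
UUID=<uuid-of-swap-partition>      none       swap  sw                 0  0
UUID=<uuid-of-data-partition>      /mnt/data  xfs   defaults           0  2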

amenex
Offline
Joined: 01/03/2015

Magic Banana inquires: Why on earth do you need several Trisquel 10 systems on the
same machine? To multiply the administration work?

Mainly for redundancy. For example, the icedove data is on the most recently installed
4TB HDD, on its own partition. Two other HDDs with Trisquel_10 point their icedove logins
to that one place on the Data partition of a separate HDD, so it doesn't matter which
instance of Trisquel I boot up. During recent troubles with the increasingly flaky main
HDD (soon to be replaced with the aforementioned SSD), that redundancy has saved me from
more frequent resorts to the Live DVD ... or the live flash Trisquel.

Regarding XFS ... I've been thinking of that as the workaround to deal with too many
partitions ... all the rest of the many attached HDDs are formatted ext4, which has
been a reliable format for all my purposes. Does XFS have another quality of which
I am unaware? My new 240GB SSD will not have to store any data, so I'll format the
/ partition ext4 and the /home partition XFS, which is a new arrangement for me.

Thanks to Magic Banana for his very prompt & thoughtful reply.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

Mainly for redundancy.

You need no redundancy for the system. If your system cannot be repaired (for instance, it is on a drive that died), you reinstall it from the ISO (in the case I gave, on a different drive).

For example, the icedove data is on the most recently installed 4TB HDD on its own partition.

That is data. Data need to be backed up, preferably on a disk outside the computer. In this way, the entire machine can die (for instance in a fire) and you do not lose the backup.

How large is your .icedove? Mine is 13 GB, about 0.3% of your 4 TB disk. Essentially all emails I received/sent in the past 14 years or so. I use emails a lot. I doubt you have a significantly larger .icedove. It does not need its own separate disk: keep it in your home folder.

so it doesn't matter which instance of Trisquel that I boot up.

You only need one single Trisquel system.

Regarding XFS ... I've been thinking of that as the workaround to deal with too many partitions ...

I believe you are confusing XFS (a type of filesystem) with LVM, a method of transparently storing data spread over several partitions. You do not need "too many partitions". As I wrote:

  • on the 240 GB SSD: one for swap (as much as your RAM) and one for / (the rest);
  • on the 4 TB disk: one single partition for /home;
  • on any other disk: one single partition for data.

the rest of the many attached HDDs are formatted ext4, which has been a reliable format for all my purposes. Does XFS have another quality of which I am unaware?

You can use ext4 everywhere if you want to maximize reliability: it has been the most tested type of filesystem. XFS has been around for almost 30 years and provides slightly faster access to large files.

It is uncommon for individuals to need as many disks as you have, except for those who work with pictures/videos/music, I guess. As far as I understand from your previous threads, you fill your disks with uncompressed plain-text data. First of all, identify what takes most of the space. You can use the GNOME disk usage analyzer. The package, named "baobab", is in Trisquel's repository. Then, remove useless data files. Finally, compress the remaining plain-text data files with zstd --rm, or sets of files (for instance a directory) with tar --zstd --remove-files -cf archive.zstd. ZSTD is fast, in Trisquel's repository (eponymous package), and may divide the disk space requirements by more than 13. For instance, here, zstd takes ~2 seconds to do so on a 1.3-GB uncompressed data file the GNOME disk usage analyzer helped me find in my "data" folder:
$ zstd --rm ordered-sessions
ordered-sessions : 7.28% (1311663566 => 95522067 bytes, ordered-sessions.zst)

You will probably discover that your 4 TB disk hosting /home has enough space for all your data! You then only need another (preferably external) disk for backups. To read a zstd-compressed file, here the "ordered-sessions.zst" file I created above, start the command line with:
$ zstdcat ordered-sessions.zst | ...
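For instance, a quick sketch that counts the lines of that compressed file without ever writing the uncompressed data back to disk (wc -l stands in for whatever command actually processes the data):
$ zstdcat ordered-sessions.zst | wc -l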

amenex
Offline
Joined: 01/03/2015

By way of response, the computer (a Lenovo T430 running Trisquel nabia) just experienced a grub
failure, after an error in a script file meant to keep a Seagate hybrid SSHD from restarting
and operating on its data partition Data-2 instead turned off that drive (i.e., made it enter
sleep mode).

I expected I could turn it back on by rebooting, but instead got a grub error saying that it
could not recognize another hard drive.

I've identified that hard drive as one of the three drives with a Trisquel operating system
(all nabia), but I suspect that the sleeping drive had its device ID changed by booting up
more slowly than another drive. Its UUID is in fstab, and it usually gets the same device ID
every time I restart the computer. There are a dozen data partitions in fstab, every one
listed by its UUID. Those always boot up to their usual device IDs, but I can't be sure that
simply booting up again without the Live DVD will get things right again.

Here is the script that I had just started when the drive it was attempting to keep alive
abruptly went into sleep mode and dropped off the desktop:
# Write the current time to a file on Data-2 every 10 minutes (and sync)
# so the drive never idles long enough to spin down.
while :
do
    date +%s > /media/george/Data-2/george/KeepAliveLinux.txt
    sync
    sleep 10m
done

The identical script (but with Expansion replacing Data-2) works fine to keep the 4.0TB Seagate
HDD from periodically restarting. I was hoping to keep the other Seagate SSHD from doing about
the same, albeit on a different schedule of unknown period. Expansion is one of the five data
partitions on the 4.0 TB Seagate drive.

Can grub be made to operate on UUIDs instead of device IDs? Otherwise, grub will be a
continuing weak link in my setup.
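For what it's worth, the grub.cfg that update-grub generates on Ubuntu-derived systems such as Trisquel normally already locates filesystems by UUID rather than by device name (as long as GRUB_DISABLE_LINUX_UUID is not set in /etc/default/grub); a typical generated menu entry contains lines like this sketch, where the UUID and kernel version are placeholders:
search --no-floppy --fs-uuid --set=root 1111aaaa-2222-3333-4444-555566667777
linux /boot/vmlinuz-<version> root=UUID=1111aaaa-2222-3333-4444-555566667777 ro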

After restarting and finding all twelve partitions mounted in their intended places, I added
a line to the KeepAliveLinux script:
# Touch the keep-alive file on both Data-2 and Expansion every 10 minutes.
while :
do
    date +%s > /media/george/Data-2/george/KeepAliveLinux.txt
    date +%s > /media/george/Expansion/george/KeepAliveLinux.txt
    sync
    sleep 10m
done
Now we'll see whether those annoying HDD restarts have been put to rest.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

I've identified that hard drive as one of the three drives with a Trisquel operating system (all nabia) (...) There are a dozen data partitions in fstab, every one with its UUID. (...) Expansion is one of the five data partitions on the 4.0 TB Seagate drive.

Just stop with all that craziness. Do what I explained in my previous post.

amenex
Offline
Joined: 01/03/2015

Magic Banana ventured: You will probably discover that your 4 TB disk hosting /home has
enough space for all your data! You then only need another (preferably external) disk for backups.

Exactly; on both counts. However, I've got 300 thousand files in one partition of that 4 TB disk
(also external) that have to be squeezed into another 1 TB external disk, and we're addressing
that farther along in this thread.

amenex
Offline
Joined: 01/03/2015

That "craziness" was put to rest when the consolidation of scattered data completed.
There are still several instances of the operating system available for portability
of the saved user data and as a faster substitute for the Live DVD to fix fstab
errors.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

There are still several instances of the operating system available for portability of the saved user data

What do you mean?

amenex
Offline
Joined: 01/03/2015

All four of the USB-attached external drives are mounted on a 4336 Lenovo Port
Replicator, and the Lenovo T430 laptop drops onto that. I have a second laptop,
a Lenovo T420 in working condition, which also fits the Port Replicator. That's
one version of redundancy. Three of the external drives are 1.0TB each, and
two of those have Trisquel operating systems. The fourth drive is 4.0TB and is
my primary Data drive. The other three are intended for backup of three of
the five partitions on the fourth drive that are 1.0TB each.

Icedove is on the fourth partition, and I've copied that onto one of the two
external drives that has Trisquel; I'll be setting up that one to run Icedove
independently of the main Icedove that's on the third of the three 1.0TB
partitions. There's a third Lenovo laptop, a T420, that's in need of maintenance.

A couple of local municipal libraries also provide redundancy, in that I can carry
one laptop and the external Data drive to one of those. There are four USB ports
on each of the Lenovo laptops, so the mouse and the ThinkPenguin WiFi dongle
can occupy the 2nd & 3rd USB ports, leaving a fourth USB port for one of the
backup 1.0TB drives. I've replaced the three HDDs that S.M.A.R.T. didn't like.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

Are you actually saying that external drives host systems you boot? The USB transfers must make them quite inefficient. Having one single system installed on an internal drive in each machine would be better, performance-wise.

amenex
Offline
Joined: 01/03/2015

Until I installed the 240 GB SSD as the T430's main drive, I could not discern
any difference in performance between the various Trisquel operating systems.
I could make transfers between any pair of drives at about the same rate.

Tiny files, like those in eBay item pages, do slow things down, but that's a
rare occurrence nowadays, as I've dropped out of the antique tools market.

To maintain redundancy, each of the Lenovo laptops that I maintain could use
its own 4.0TB SSD, as it's not a trivial task to swap out the main drive
unless I put that drive in a caddy in the DVD slot. Not the present 4.0TB
drive, though, as that has the next-larger form factor. One that would fit is
about US$400.

Elimination of the craziness has reduced the attached storage capacity from
about 12.0 TB to 7.25 TB.

I've been starting up my nMap searches again, and those may go faster with
that solid state drive; I forgot to time the one I ran this morning.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

One that would fit is about US$400.

You certainly have enough disk space: compress your text files, as I explained. If your objective is faster systems, then it indeed makes sense to add an SSD to a machine that does not have one. But you can buy $20 SSDs of 120GB (more than enough for a root partition and a swap partition) and keep the HDDs already in the machines for /home (on partitions taking the whole disks).

Elimination of the craziness has reduced the attached storage capacity from about 12.0 TB to 7.25 TB.

You should actually gain a little space with fewer partitions. Haven't you expanded the remaining partitions into the free space the deleted partitions left? You can do so from a live system such as Trisquel's ISO, which includes GParted.

I've been starting up my nMap searches again, and those may go faster with that solid state drive

That should make no significant difference: accessing data through the network is much slower than writing it to a disk. Also, compressing with ZSTD the plain text you write (piping to zstd before redirecting to a file) would probably speed things up more than switching to an SSD, because that is possibly much less data to write.
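As a sketch of that piping, assuming a hypothetical host-discovery run whose plain-text output goes straight through zstd (the target network and file names are made up for illustration):
$ nmap -sn 192.0.2.0/24 | zstd > scan-2022-10-04.txt.zst
$ zstdcat scan-2022-10-04.txt.zst | grep 'Nmap scan report'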

amenex
Offline
Joined: 01/03/2015

Big drives for the drive caddy that fits the Lenovo T430's DVD slot are either too dear or too thick.
Therefore, on to the task of learning zstd syntax. The source directory has to be compressed
from 788 GB to below 756 GB. There are 28 subdirectories, not all of which must be compressed.
My first attempt to compress one of those 28 directories read:
/media/george/Data-A/george/Georgesbasement.com.A/Thumb256E$ zstd December2020 -rz /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/
but that dropped the compressed files right next to the source files in the source directories.
I found that out with my second attempt:
/media/george/Data-A/george/Georgesbasement.com.A/Thumb256E$ zstd December2020 -T0 -r -z /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/
which tried to do the same thing again. Apparently zstd missed the difference between the
source partition and the target partition. The syntax of using the cp command and a pipe to zstd
as you suggest escapes me so far.

Will the following syntax undo my errors in the first two scripts?
/media/george/Data-2/george/Georgesbasement.com.A/Thumb256E$ unzstd December2020 -rz /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/

prospero
Offline
Joined: 05/20/2022

> basement

Are you doing all this craziness from your basement? Or just the antique tools business?

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

As I wrote in https://trisquel.info/forum/bootup-wont-progress-past-login-step#comment-168567 :
sets of files (for instance a directory) with tar --zstd --remove-files -cf archive.zstd

I should actually have written "archive.tar.zstd". Anyway, to archive the directory "December2020", compress that archive with ZSTD, save it as /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zstd and remove the directory, execute that command:
$ tar --zstd --remove-files -cf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zstd December2020
To extract in the working directory the content of the archive (while keeping it):
$ tar --zstd -xf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zstd

The syntax of using the cp command and a pipe to zstd as you suggest escapes me so far.

I have never mentioned cp in this thread. In https://trisquel.info/forum/bootup-wont-progress-past-login-step#comment-168642 I wrote:
Also, compressing with ZSTD the plain text you write (piping to zstd before redirecting to a file) would probably speed up things more than switching to a SSD.

I mean that, instead of writing a lot of uncompressed plain-text data to "out" like this:
$ my_command > out
You can first compress those data with ZSTD to not only save disk space but also time (because writing much data to a disk is slow and ZSTD is fast):
$ my_command | zstd > out.zstd

amenex
Offline
Joined: 01/03/2015

Half a lifetime ago, all my oldtools acquisitions resided in my basement.
Then, malware and covid intervened. My basement ended up online, and malware
came to be addressed in pinthetaleonthedonkey.com, with technical discussions
in the present forum. Now it's all squeezed onto my dining room table.

amenex
Offline
Joined: 01/03/2015

Continuing where Magic Banana left off on Wed, 10/05/2022 - 17:40: before proceeding with
the archiving of the directory December2020 from the partition Data-A to the partition
Data-2, the misplaced .zst.zst and .zst files have to be dealt with, as in the directory
Malware, the first directory under December2020:
/media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/December2020/Malware/2.61.132.185.Malware.nMap.txt
/media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/December2020/Malware/2.61.132.185.Malware.nMap.txt.zst
/media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/December2020/Malware/2.61.132.185.Malware.nMap.txt.zst.zst

There are a couple thousand of each variety in the Malware directory. I removed the misplaced .zst files with:
ls /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/December2020/Malware | mv *.zst.zst /media/george/Expansion/george/Junk
ls /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/December2020/Malware | mv *.zst /media/george/Expansion/george/Junk
I applied the same approach to the four thousand files in a second directory under January2020. Then I deleted the contents of the Junk folder.
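Note that the `ls ... |` in front of each mv has no effect, since mv ignores its standard input; the globs are expanded in whatever directory the shell happens to be in. A sketch of a more direct clean-up, assuming the same Malware and Junk paths (the pattern *.zst also matches the *.zst.zst files, so one pass suffices):
$ cd /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/December2020/Malware
$ find . -maxdepth 1 -name '*.zst' -exec mv -t /media/george/Expansion/george/Junk {} +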

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

Before proceeding with the archiving of the directory December2020 from the partition Data-A to the partition Data-2, the misplaced .zst.zst and .zst files have to be dealt with

I have no idea how you ended up with that. There is a single command to execute to archive the directory "December2020", compress that archive with ZSTD, save it as /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zstd, and remove the directory (one single command for all that):
$ tar --zstd --remove-files -cf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zstd December2020

amenex
Offline
Joined: 01/03/2015

Presuming that Magic Banana expected me to execute his command from the folder containing the source directory,
/media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/, I executed:
tar --zstd --remove-files -cf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zstd December2020
but that removed the original December2020 folder from partition Data-A, which isn't my
intention, because I was expecting to continue my nMap project in the plain-text environment of the 4.0 TB HDD.

Now Trisquel is expecting an additional application to view the contents of the archive
/media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zstd. That would create an
impossible problem, as the entire contents of /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E
are bigger than the available capacity of /media/george/Data-2, so an archived
/media/george/Data-2/george/Georgesbasement.com.A.tar.zstd would be impossible to open unless I could
view the internal directory structure. I hope that the missing application might so enable Trisquel.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

but that removed the original December2020 folder from partition Data-A

It does, as I specifically wrote twice.

To create the compressed archive without removing the files, do not use the option --remove-files:
$ tar -caf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst December2020

... but you are then using more space instead of saving some!

(Observation: .zst is the "proper" extension, which allows the use of option -a, which selects the desired compression program from the extension, instead of --zstd.)

Out of curiosity: how many times smaller is the compressed archive?

Now trisquel is expecting an additional application to view the contents of the folder /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zstd.

tar can do that:
$ tar -tf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst

Trisquel's default graphical archive manager, Engrampa, can too... as long as the extension is .zst (and not .zstd as I wrote in my previous messages: sorry about that).

That would create an impossible problem, as the entire contents of /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E are bigger than the available capacity of /media/george/Data-2, so an archived /media/george/Data-2/george/Georgesbasement.com.A.tar.zstd would be impossible to open unless I could view the internal directory structure.

I have just given you the command to "view" ("list" is the proper term) the contents. tar can extract specific files/directories from the archive, specifying their names as output by that command. If no files/directories are specified, the whole content of the archive is extracted *in the working directory* (not necessarily on the same disk) with the command I have already given you:
$ tar -xf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst
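For instance, to pull a single member back out, assuming it was archived under the December2020/ prefix (the file name is one that appeared earlier in this thread):
$ tar -xf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst December2020/Malware/2.61.132.185.Malware.nMap.txt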

Please read the "Tutorial Introduction to tar": https://www.gnu.org/software/tar/manual/html_node/Tutorial.html

Or you can list and extract with Engrampa or any other archive manager.

amenex
Offline
Joined: 01/03/2015

In order to preserve the original folder and its contents, Magic Banana modified the command
$ tar --zstd --remove-files -cf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zstd December2020
to read:
$ tar -caf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst December2020
where -caf tells tar to select the zstd compression program from the .zst extension of the output archive. I propose changing that to
$ tar -caf8 /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/December2020
which I think would put the archived version of the December2020 folder contained in the partition Data-A into the separate partition Data-2 with greater compression, according to prospero's suggestion.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

$ tar -caf8 /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/December2020

That will create an archive named "8", in the working directory. I repeat:
Please read the "Tutorial Introduction to tar": https://www.gnu.org/software/tar/manual/html_node/Tutorial.html

In that tutorial, https://www.gnu.org/software/tar/manual/html_node/short-create.html#short-create explains your error.

It is of course possible to specify options for the compression program. The tar manual explains how. But you have more important issues to deal with. And you should definitely start reading documentation instead of making up commands that create many of your problems.

amenex
Offline
Joined: 01/03/2015

OK; not:
$ tar -caf8 /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/December2020
I cannot alter tar's 50% compression ratio _and_ tar appears to include the entire path to the source file in the .tar.zst file, so tar must operate from the source's working directory instead:
cd /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/
tar -caf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst December2020

Actual ratio is 553/10200 bytes.

To uncompress from Data-2 to Data-0 (which has more room), one must be in the directory containing the .tar.zst file:
cd /media/george/Data-2/george/Georgesbasement.A/Thumb256E/
tar -xf December2020.tar.zst -C /media/george/Data-0/george/Georgesbasement.A/Thumb256E/December2020
Works OK.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

Actual ratio is 553/10200 bytes.

That is a division by more than 18. Your 4TB disk will be able to store all your data, if they can all be compressed that much. :-)

amenex
Offline
Joined: 01/03/2015

Five have been completed by now:
Real time   Compressed   Original size
1 s         0.6 MB       10 MB
27 s        80 MB        729 MB
7.3 m       1.4 GB       13 GB
0.6 s       36 MB        8.9 MB
125 m       12 GB        81 GB
Currently four more folders totaling 391GB are being compressed, all at once ...
The network monitor reads 60% processor in use, but the system monitor reads only 15%.
2GB of 8GB RAM is in use; swap usage is about 100+ MB.

prospero
Offline
Joined: 05/20/2022

Side note: would it help to use a higher compression level with zstd, like 10 or even 20, instead of the default 3? The answer is probably going to depend on the type of files to compress, and on acceptable compression time.

amenex
Offline
Joined: 01/03/2015

prospero wondered if I could experiment with compression ratios.
It turns out that tar used with the .zst extension has a fixed compression ratio,
but the default is fine with me, judging from the December2020 folder.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

It turns out that tar used with the .zst extension has a fixed compression ratio

As I wrote:
It is of course possible to specify options for the compression program. The tar manual explains how.

Here is the link: https://www.gnu.org/software/tar/manual/html_node/gzip.html
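For example, with a reasonably recent GNU tar the compressor and its options can be passed through --use-compress-program (-I); a sketch asking zstd for level 19, reusing the paths from earlier in this thread:
$ cd /media/george/Data-A/george/Georgesbasement.com.A/Thumb256E/
$ tar -I 'zstd -19' -cf /media/george/Data-2/george/Georgesbasement.com.A/Thumb256E/December2020.tar.zst December2020
(With -I the compression program is chosen explicitly, so the extension-based -a is not needed.)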

amenex
Offline
Joined: 01/03/2015

That's the least of my worries. The overall reduction in storage is 4:1. It took about six hours.

During uncompression, the Keep Alive scripts applied to the four Seagate hard drives
inexplicably failed, allowing the Seagate drives to enter sleep mode. After I restarted
them, I restarted three uncompression scripts. The one operating on the smallest folders
finished immediately, but the two scripts uncompressing a 172 GB folder and an 84 GB
folder started consuming all the system's resources, except swap (ca. 100MB) and RAM
(1.2 GB of 7.5 GB), so I executed the command
sudo swapoff -a
which took about 45 minutes to finish. After that, the two large-file uncompression scripts
continued to run OK, using about 65% of the processor on the Panel's System Monitor but
only 10% on the display's System Monitor.

The dropping of the four Keep Alive scripts (running on a 16 GB thumb drive) puzzles
me. It could ruin some work, but hasn't yet done so.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

The overall reduction in storage is 4:1.

Only? It was above 7:1 in https://trisquel.info/forum/bootup-wont-progress-past-login-step#comment-168700

After that, the two large-file uncompression scripts continue to run OK

Why are you *un*compressing? The idea is to compress, to have all your data fitting in your home folder, in a single partition on the 4 TB disk (the system and the swap on the SSD). No more external disks (except for backups), crazy fstab entries, Keep Alive scripts, and whatnot. A simple and effective setup.

amenex
Offline
Joined: 01/03/2015

Four-to-one is great and more than enough for my work; after all, it took six years to
generate 789 GB of data, so I'll be 104 before Data-2 runs out of storage room.

I uncompressed the compressed folders to find out whether I could do that to regain access to
those files. The aim of the compression process was to back up the 789 GB of data in
Data-A to a format which is recoverable. My experience with backups in another O/S
was that those backups were an all-or-nothing proposition which I could not trust to
work when the chips were down ... so to speak.

Now it turns out that Data-0, the target of the uncompression exercise, experienced
an error late in the process of uncompressing the largest (172 GB) folder and became
read-only. That drive is now inaccessible and "does not exist", in the words of the
failed mounting process:
sudo mount /dev/sdb2 /media/george/Data-0
or
sudo mount 700a21ec-1dda-4460-ab63-0972108e1a5e /media/george/Data-0 ext4 errors=remount-ro 0
which demonstrates the risk of using a disk that S.M.A.R.T. advised me to replace.
Another partition on that same drive, Temp-0, appears on the desktop after a reboot,
but when I try to unmount it, all the other USB-connected drives unmount as well.

Those Keep-Alive scripts are only required to deal with the unacceptable Seagate
practice of making their products go through a sleep-and-restart exercise after
only 15 minutes of inactivity. It turns out that after some unknown and random
period, all of the active Keep-Alive scripts are closed at once, perhaps by their
host, Data-4, which is a SanDisk Cruzer Blade 16 GB flash drive. Seagate drives
won't remount after going into sleep mode when they're connected through USB.
That issue can be fixed by changing to other manufacturers' hard drives for the
compressed-to-backup data on the 4.0 TB drive, which does in fact hold all my
current data files.

Are there any drawbacks to the use of a 4.0 TB internal SSD ? A hard drive of the
same capacity won't fit my Lenovo T430 (7mm SATA) or T420 (10mm SATA), even in
a hard drive caddy (12.5mm SATA max) in the DVD slot, as they are all 15mm thick.

My inclination is to keep the 240 GB SSD and switch to two external 4.0 TB HDDs of
non-Seagate make or (alternatively) a 2.0 TB HDD in the DVD slot (which is on the
same SATA bus as the primary drive) for compressed backups and one USB-connected
4.0 TB HDD for the data files. That would simplify fstab to less than 1/10 the
present size.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

Are there any drawbacks to the use of a 4.0 TB internal SSD ?

Besides its significant price and ecological considerations, no. But, after compression of your data, you will have more than enough disk space, and storing those data on an SSD will not make much difference regarding performance (whereas having the system on an SSD makes much difference).

amenex
Offline
Joined: 01/03/2015

Here's what I decided: stay with the 240GB internal SSD on /dev/sda, and get an external
3.5" 4TB HDD powered by its USB connection and an internal 2TB HDD that fits the DVD slot
with a drive caddy on the internal SATA bus at /dev/sdb. Neither will be Seagate! Only
one USB connection. Immediate nMap search results will go to the internal SSD, thence to
the USB-connected HDD, and eventually be compressed to the internal 2TB HDD that shares the
SATA bus with the primary SSD. Cost: ca. US$160. That will do until I'm over 90.

Magic Banana

I am a member!

I am a translator!

Offline
Joined: 07/24/2010

Immediate nMap search results will go to the internal SSD, thence to the USB-connected HDD, and eventually compressed to the internal 2TB HDD that shares the SATA bus with the primary SSD.

Those are many useless transfers. Why wouldn't you directly write to the internal HDD? If you fear that piping to zstd before writing will cause you problems, you can compress every day/week/month the data accumulated during the previous day/week/month and remove the same data uncompressed. Besides the internal SSD (with two partitions, for / and for the swap) and the internal HDD (with one single partition, for /home), you only want an external disk, with one single partition at least as large as /home, for backups. If you do not pipe to zstd, configure the backup system to ignore the folder with the still-uncompressed data. You can use Back In Time, installed by default in Trisquel, which can also be set up to automatically start the backup when the disk is plugged in and allows you to easily copy the backed-up data to any other machine (unlike Déjà Dup, whose backups are essentially unreadable without Déjà Dup).
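A sketch of that periodic compress-then-remove idea, assuming a hypothetical ~/nmap-results folder that accumulates the uncompressed output and an archives folder in the home directory:
$ mkdir -p ~/archives
$ cd ~ && tar -caf archives/nmap-results-$(date +%F).tar.zst --remove-files nmap-results
Run by hand, or from cron, at whatever day/week/month interval suits the workload.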

amenex
Offline
Joined: 01/03/2015

I agree that it's useless to drop nMap output to the internal SSD if there's no speed
advantage, and my preferred alternative is to go directly to the 4TB external HDD,
as the internal HDD (in the DVD slot) is the biggest it can be at 2TB. That one is for
compressed backups from the larger external HDD.

Can .tar.zst backups be made incremental, so they overwrite earlier portions as man
tar says? I get the impression that man tar says that applies only to uncompressed
backups.
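For what it's worth, GNU tar cannot update or append to a compressed archive in place, but it can do incremental backups through a snapshot file: each run with --listed-incremental produces a new archive holding only what changed since the previous run, and the archives are restored in order. A sketch, with made-up names:
$ tar --listed-incremental=backup.snar -caf backup-level0.tar.zst Data-folder
$ tar --listed-incremental=backup.snar -caf backup-level1.tar.zst Data-folder
The second command archives only the files added or changed since the first one ran.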

The next task at hand (after installing the new internal 2TB HDD and replacing the
external 4TB Seagate HDD) is to apply .tar.zst to the second 1TB partition on that
4TB external HDD. However, there are many internal folders which are different
enough in their content that I want them to end up as separate folders on the 2TB HDD.
My present thought process is that I should create that directory structure on the
target HDD and then apply the compression process to the contents of the folders of
the source HDD, as sketched below. The source partition's files total 943 GB, and one
of the top-level folders has over 300GB. There are few unattached text files in that
943GB partition.
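One possible sketch of that plan is to create one compressed archive per top-level folder of the source partition, written straight into the matching place on the target; the paths below are borrowed from the next post and are only placeholders for the actual source and target:
cd /media/george/Data-B/george/Georgesbasement.com.B
for d in */ ; do
    tar -caf /media/george/IOMEGA/Data-B-Archive/Georgesbasement.com.B/"${d%/}".tar.zst "$d"
done
Archiving folder by folder this way also means no directory tree has to be created on the target beforehand: each source folder becomes a single .tar.zst file there.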

amenex
Offline
Joined: 01/03/2015

Now I'm trying to make such a list of directories.
Ordinarily one can set the list function in the file manager to list the folders
first and then the filenames (if there are any), and for the ls command in
bash, that setting is --group-directories-first.

Grep isn't working for me in eliminating those filenames from the directory listing:
ls --group-directories-first /absolute path of source directory/ | grep -vf Patterns.GBBcom.txt | less
where Patterns.GBBcom.txt is
'\.txt$'
'\.jpg$'
'\.htm$'
'\.html$'
'\.gif$'
'\.pdf$'
It lists everything, ignoring grep altogether. Why?
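The likely culprit is the pattern file itself: grep -f reads each line literally, so the single quotes become part of the patterns and no filename ever matches them; with nothing matching, grep -v lets everything through. A sketch of the fix, with the quotes removed from Patterns.GBBcom.txt:
\.txt$
\.jpg$
\.htm$
\.html$
\.gif$
\.pdf$
Alternatively, to keep only the directories without any pattern file (assuming GNU ls):
ls -p /absolute path of source directory/ | grep '/$'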

The tedious alternative is to print the list and then manually select and delete the
unwanted filenames. In order to make progress, I just did that:
cd /media/george/Data-B/george/Georgesbasement.com.B ;
ls --group-directories-first | awk '{print "mkdir "$1" | sleep 2s ;"}' '-' > /media/george/Data-4/george/Scripts/Georgesbasement.com.B.temp.txt
followed by a manual edit to remove the filenames from the directory list.
The following step is to make those directories:
cd /media/george/IOMEGA/Data-B-Archive/Georgesbasement.com.B | mkdir AABImages04 | sleep 2s ;
mkdir AABPortfolio | sleep 2s ;
...[655 other directories] ...
mkdir YankeeDrill | sleep 2s ;
mkdir YardPix | sleep 2s ;
Alas, there are many errors without the sleep pauses and fewer with the two-second pauses,
but it's impossible to figure out what was missed. That IOMEGA HDD is from an old desktop PC
and is _not_ SATA. It would be better to restart the computer and make those directories
using IOMEGA's Trisquel_10 operating system, avoiding the long USB path.
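For what it's worth, a sketch that recreates just the directory tree in one pass, without the hand-edited mkdir list or the sleep pauses (same source and target paths as above; mkdir -p silently skips directories that already exist):
cd /media/george/Data-B/george/Georgesbasement.com.B
find . -mindepth 1 -maxdepth 1 -type d -printf '%f\n' |
  while read -r d ; do mkdir -p /media/george/IOMEGA/Data-B-Archive/Georgesbasement.com.B/"$d" ; done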