RAID + LVM
Has anyone successfully installed a RAID + LVM system with Trisquel 7.0?
Thanks in advance
I finally managed to get it running...
Welcome to Trisquel GNU/Linux 7.0, Belenos (GNU/Linux 3.16.0-43-generic x86_64)
root@faraday:~# mdadm --examine --scan
ARRAY /dev/md/0 metadata=1.2 UUID=115b759a:f154ae20:47f3a1fd:07c60245 name=faraday:0
ARRAY /dev/md/1 metadata=1.2 UUID=24dbf934:4d8aad5c:9e4d0895:877dd35f name=faraday:1
ARRAY /dev/md/2 metadata=1.2 UUID=e579bb4c:9eb33b06:67b520f9:793c86be name=faraday:2
ARRAY /dev/md/3 metadata=1.2 UUID=4f69a8d8:79821ed9:5559e8a5:d38886d9 name=faraday:3
ARRAY /dev/md/4 metadata=1.2 UUID=45a99058:96dc9010:72249648:4abde5fe name=faraday:4
/dev/md0 is RAID 1
/dev/md1, /dev/md2, /dev/md3 and /dev/md4 are RAID 5.
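For anyone reproducing this: once the arrays are assembled, the scan output above is normally appended to mdadm.conf and the initramfs rebuilt, so the arrays come up at boot. A sketch (run as root, Debian/Ubuntu default paths):
# Record the existing arrays so they are assembled automatically at boot
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
# Rebuild the initramfs so it picks up the new mdadm.conf
update-initramfs -u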
I had some trouble with GRUB, but I solved it by partitioning the disk like this:
# Create a GPT partition table
parted /dev/sdd mktable gpt
# Create the BIOS boot partition (about 1 MiB, starting 1 MiB into the disk)
parted /dev/sdd mkpart non-fs 1049KB 2048KB
# Set the bios_grub flag on the partition
parted /dev/sdd set 1 bios_grub on
# This is the actual partition layout of one of the disks:
root@faraday:~# sgdisk -p /dev/sda
Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 8DD74597-2C88-4408-AA9F-09774BB79BEA
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 6253 sectors (3.1 MiB)
Number  Start (sector)    End (sector)   Size        Code  Name
   1            2048            6143     2.0 MiB     EF02  BIOS boot partition
   2            6144          780287     378.0 MiB   8300  Linux filesystem
   5      2930466816      3907028991     465.7 GiB   FD00  md4_a
   6      1953904640      2930466815     465.7 GiB   FD00  md2_a
   7       977342464      1953902591     465.7 GiB   FD00  md127_a
   8          780288       977340415     465.7 GiB   FD00  md0_a
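For reference, the same kind of BIOS boot partition can also be created with sgdisk instead of parted; this is just an equivalent sketch, using the sector numbers from the listing above:
# Partition 1 from sector 2048 to 6143, type EF02 (BIOS boot partition)
sgdisk -n 1:2048:6143 -t 1:EF02 -c 1:"BIOS boot partition" /dev/sda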
After running grub-install /dev/sda, I had to modify /etc/default/grub.
I changed this line:
GRUB_CMDLINE_LINUX_DEFAULT="nomdmonddf nomdmonisw"
to this one:
GRUB_CMDLINE_LINUX_DEFAULT=""
root@faraday:~# pvscan
PV /dev/md127 VG gibbs-vg lvm2 [465,53 GiB / 0 free]
PV /dev/md0 VG faraday-vg lvm2 [465,53 GiB / 0 free]
PV /dev/md1 lvm2 [931,07 GiB]
PV /dev/md2 lvm2 [931,07 GiB]
PV /dev/md3 lvm2 [931,07 GiB]
PV /dev/md4 lvm2 [931,07 GiB]
Total: 6 [4,55 TiB] / in use: 2 [931,06 GiB] / in no VG: 4 [3,64 TiB]
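The four PVs that are not in any VG yet can later be grouped into a volume group and carved into logical volumes; a minimal sketch (the VG and LV names here are placeholders):
# Group the remaining RAID 5 PVs into one volume group
vgcreate data-vg /dev/md1 /dev/md2 /dev/md3 /dev/md4
# Use all the free space for a single logical volume and format it
lvcreate -l 100%FREE -n data data-vg
mkfs.ext4 /dev/data-vg/data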
Although some further adjustments are still needed to get rid of the diskfilter issue in GRUB, the system is now running fine...
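About that: the diskfilter message ("diskfilter writes are not supported") shows up because GRUB cannot write its environment file when /boot lives on RAID/LVM. A commonly suggested workaround, which I have not verified on this exact setup, is to stop GRUB from saving state in /etc/default/grub and then regenerate the configuration:
# Do not remember the last booted entry, so GRUB never tries to write to RAID/LVM
GRUB_DEFAULT=0
GRUB_SAVEDEFAULT=false
# then run: update-grub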
Impressive.
What are the nomdmonddf and nomdmonisw entries for? I have them in my GRUB configuration too, but I don't use RAID.
As far as I know, those flags were used to prevent the kernel from activating mdadm while migrating from a previous RAID setup.
See https://lists.ubuntu.com/archives/ubuntu-devel/2014-February/038095.html
Ubuntu Server no longer ships those flags, so I decided to remove them and see what happened.
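To check whether the flags are really gone after rebooting, the running kernel's command line can be inspected directly:
# Show the command line the running kernel was booted with
cat /proc/cmdline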
By the way, what is mdadm? I've noticed it whenever I install or update kernel-related things (or update GRUB after I've tweaked it). On my system it reports:
Warning: mdadm.conf defines no arrays
or some such. Does anyone know, or care to explain?
apt-cache show mdadm will inform you:
Tool to administer Linux MD arrays (software RAID). The mdadm utility can be used to create, manage, and monitor MD (multi-disk) arrays for software RAID or multipath I/O.
This package automatically configures mdadm to assemble arrays during the system startup process. If not needed, this functionality can be disabled.
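If you don't use RAID at all, that warning is harmless; a common way to quiet it (a sketch, assuming the Debian/Ubuntu packaging of mdadm) is to regenerate mdadm.conf, or simply to remove the package:
# Regenerate mdadm.conf from the arrays actually present (none, in your case)
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
# Or, if software RAID will never be used, remove the package
apt-get purge mdadm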
I see. Thank you.