Installing Ubuntu 22.04 LTS with RAID 1 and LVM
If you have been reading my article on the Tuxedo InfinityBook Pro 16 - Gen 8, then you know that I received a laptop with no SSD and therefore no OS pre-installed. With two 4 TB Crucial P3 Plus PCIe 4.0 SSDs, my goal is to install Ubuntu 22.04 LTS on LVM on top of a RAID 1 array. However, as you can imagine, this is not as easy as simply using the graphical installer, which is the reason I wrote this guide.
I will assume that you already have a bootable Ubuntu live medium with the latest LTS version. Once booted, you are presented with a choice between Install Ubuntu and Try Ubuntu:
- Select Try Ubuntu.
- Start a terminal by pressing Ctrl-Alt-T.
First, identify the names of your disks using lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
...
nvme1n1 259:0 0 3.6T 0 disk
nvme0n1 259:3 0 3.6T 0 disk
...
In my case, the disks are nvme0n1 and nvme1n1. You should adjust the commands according to the names of your disks.
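If you have many block devices, it can help to list only whole disks together with their model names. This is an optional check; the available lsblk columns may vary slightly between versions:
lsblk -d -o NAME,SIZE,MODEL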
Create partitions on the physical disks
Remove any previous partition schema using the following destructive operation:
sudo sgdisk --zap-all /dev/nvme0n1
sudo sgdisk --zap-all /dev/nvme1n1
Create two partitions on each drive: one for EFI (type ef00) and one for the RAID device (type fd00).
sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/nvme0n1
sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/nvme0n1
sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/nvme1n1
sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/nvme1n1
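To double-check the result, you can print the new partition tables (optional):
sudo sgdisk -p /dev/nvme0n1
sudo sgdisk -p /dev/nvme1n1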
Create a FAT32 filesystem for the EFI partition on the first drive.
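For example, assuming mkfs.fat from dosfstools is available on the live session:
sudo mkfs.fat -F 32 /dev/nvme0n1p1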
Create the RAID device
Ensure that you have an internet connection, either wired or wireless; a wireless connection can be configured using the graphical interface.
Then install mdadm:
sudo apt update -y
sudo apt install mdadm -y
Create the RAID array using the second partition of each disk:
sudo mdadm --create /dev/md0 --bitmap=internal --level=1 --raid-disks=2 /dev/nvme0n1p2 /dev/nvme1n1p2
You can safely ignore the warning about the metadata.
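If you want to follow the initial synchronization of the mirror, you can watch the kernel's RAID status (optional):
cat /proc/mdstat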
Partition the RAID device
We now have a RAID device (/dev/md0) that we can partition using sgdisk. Create a single partition with type Linux LVM (GUID E6D6D379-F507-44C2-A23C-238F2A3DF928):
sudo sgdisk --zap-all /dev/md0
sudo sgdisk -n 1:0:0 -t 1:E6D6D379-F507-44C2-A23C-238F2A3DF928 -c 1:"Linux LVM" /dev/md0
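The new partition should show up as /dev/md0p1; you can verify with:
lsblk /dev/md0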
Create the LVM devices
Create a physical volume on the first (and only) partition of the RAID device:
sudo pvcreate /dev/md0p1
Create a volume group on the physical volume:
sudo vgcreate vg0 /dev/md0p1
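At this point you can sanity-check the physical volume and the volume group (optional):
sudo pvs
sudo vgs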
Create logical volumes (partitions) on the new volume group. For my purposes, I will create an extremely simple layout with a single volume that takes the whole RAID partition, which itself takes the whole disk, using 100%FREE. Note that percentages must be given with the lowercase -l (extents) option, not -L:
sudo lvcreate -Z y -l 100%FREE --name root vg0
If you want multiple logical volumes, for instance to separate the root from the home, you will have to use something like this:
sudo lvcreate -Z y -L 100G --name root vg0
sudo lvcreate -Z y -l 100%FREE --name home vg0
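You can then list the logical volumes to confirm the layout (optional):
sudo lvs vg0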
Install Ubuntu 22.04
Double-click on the Install Ubuntu 22.04 LTS icon on the desktop.
Follow the instructions according to your needs until the storage configuration step.
On the Installation type page, select Something else. This will present you with a list of partitions named /dev/mapper/vg0-root, etc.
Double-click on each partition starting with /dev/mapper/vg0-. Select Use as: Ext4, check the Format the partition box, and choose the appropriate mount point (/ for vg0-root, /home for vg0-home, etc.).
Select the first disk /dev/nvme0n1 for the boot loader.
Press Install Now and continue the installation.
When the installation is finished, select Continue Testing.
Warning: the mdadm software used to manage the RAID array is present only in the live session. Therefore, if you reboot now, the installed OS will not be able to detect and assemble the array!
If everything is correct, typing lsblk in a terminal should show something like this (in my case, the single logical volume is named workstation rather than root):
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
nvme1n1 259:0 0 3.6T 0 disk
├─nvme1n1p1 259:1 0 512M 0 part
└─nvme1n1p2 259:2 0 3.6T 0 part
└─md0 9:0 0 3.6T 0 raid1
└─md0p1 259:6 0 3.6T 0 part
└─vg0-workstation 253:0 0 3.6T 0 lvm /target
nvme0n1 259:3 0 3.6T 0 disk
├─nvme0n1p1 259:4 0 512M 0 part
└─nvme0n1p2 259:5 0 3.6T 0 part
└─md0 9:0 0 3.6T 0 raid1
└─md0p1 259:6 0 3.6T 0 part
└─vg0-workstation 253:0 0 3.6T 0 lvm /target
Installing mdadm on the system
To install mdadm, we need to chroot into the system. To do so, we need to mount the partitions and bind the devices.
The root partition is mounted on /target. If you have more than one logical volume, you should mount them as well. For instance, assuming a separate partition for home:
sudo mount /dev/mapper/vg0-home /target/home
Then bind the dev, proc, and sys pseudo-filesystems:
cd /target
sudo mount --bind /dev dev
sudo mount --bind /proc proc
sudo mount --bind /sys sys
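If you want to double-check before entering the chroot, findmnt can list everything mounted under /target (optional):
findmnt -R /target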
Finally, chroot into the system:
sudo chroot .
Help the system resolve the package repositories' hostnames:
echo "nameserver 1.1.1.1" >> /etc/resolv.conf
We can now install mdadm:
apt install mdadm -y
Configuring the RAID on the target system
We need to check whether the array has been properly discovered. Check the file /etc/mdadm/mdadm.conf. It should contain a line near the end similar to:
ARRAY /dev/md0 metadata=1.2 name=ubuntu:0 UUID=e1a1650d:138d43bd:d53e1e18:29aa74ea
If the line is present, simply delete name=ubuntu:0.
If the line is not present, then you can generate it using the following:
sudo mdadm -Db /dev/md0
Check that the output looks like the line above and, if so, append it to the end of /etc/mdadm/mdadm.conf:
sudo mdadm -Db /dev/md0 >> /etc/mdadm/mdadm.conf
Then, edit the file and remove the name=ubuntu:0 part.
Update the list of modules the kernel should load at boot:
echo raid1 >> /etc/modules
Update the boot ramdisk:
update-initramfs -u
Finally, exit from the chroot:
exit
And that's it! We now have a fresh new installation of Ubuntu with RAID 1 and LVM.
While the RAID array protects the main partition from a disk failure, the EFI boot partition is not protected by RAID, as it resides only on the first partition of the first disk.
Instead of relying on RAID, we will clone the partition and insert the clone into the boot chain. Be certain to copy the first partition of the first disk to the first partition of the second disk:
sudo dd if=/dev/nvme0n1p1 of=/dev/nvme1n1p1 bs=4096
You can check the result with
sudo blkid /dev/nvme[01]n1p1
which should return something like
/dev/nvme0n1p1: UUID="F41E-A163" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="a10fa1d6-0079-4370-9152-1761b2282c07"
/dev/nvme1n1p1: UUID="F41E-A163" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="ac3839c7-cf53-47c4-a376-905646e0a193"
Then, to insert the second EFI partition into the boot chain, identify the location of the EFI file with
sudo efibootmgr -v
The path is located after File in the output:
Boot0000* ubuntu HD(1,GPT,a10fa1d6-0079-4370-9152-1761b2282c07,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Add a boot entry pointing to the EFI file on the first partition of the second disk. Note that the -d flag (for disk) expects a disk, not a partition; the partition is specified with the -p flag:
sudo efibootmgr -c -d /dev/nvme1n1 -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'
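Running sudo efibootmgr again (without arguments) should now list both the original ubuntu entry and the new ubuntu2 entry:
sudo efibootmgr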
In case the first disk fails, the data is protected by the RAID array, and the system will still boot from the EFI partition of the healthy disk. The RAID array would then be in degraded mode.
In my case, after a reboot, here is the output of lsblk:
aquemy@ws:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 3.6T 0 disk
├─nvme0n1p1 259:2 0 512M 0 part
└─nvme0n1p2 259:3 0 3.6T 0 part
└─md0 9:0 0 3.6T 0 raid1
└─md0p1 259:6 0 3.6T 0 part
└─vg0-workstation 253:0 0 3.6T 0 lvm /
nvme1n1 259:1 0 3.6T 0 disk
├─nvme1n1p1 259:4 0 512M 0 part /boot/efi
└─nvme1n1p2 259:5 0 3.6T 0 part
└─md0 9:0 0 3.6T 0 raid1
└─md0p1 259:6 0 3.6T 0 part
└─vg0-workstation 253:0 0 3.6T 0 lvm /
The status of the RAID device:
aquemy@ws:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Sep 13 17:10:02 2023
Raid Level : raid1
Array Size : 3906361152 (3.64 TiB 4.00 TB)
Used Dev Size : 3906361152 (3.64 TiB 4.00 TB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Sep 16 16:45:40 2023
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : ubuntu:0
UUID : e1a1650d:138d43bd:d53e1e18:29aa74ea
Events : 31227
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n1p2
1 259 5 1 active sync /dev/nvme1n1p2
The status of the LVM physical volume:
aquemy@ws:~$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/md0p1
VG Name vg0
PV Size <3.64 TiB / not usable <4.80 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 953700
Free PE 0
Allocated PE 953700
PV UUID 8ocB3w-R3Dp-76Yr-PE2H-mfC2-D4Wf-cmAG8S
And finally, the status of the logical volume:
aquemy@ws:~$ sudo lvdisplay
--- Logical volume ---
LV Path /dev/vg0/workstation
LV Name workstation
VG Name vg0
LV UUID kq9vhC-OqZA-QaTv-JkBe-OU1i-PJsA-Sukflj
LV Write Access read/write
LV Creation host, time ubuntu, 2023-09-13 17:22:46 +0200
LV Status available
# open 1
LV Size <3.64 TiB
Current LE 953700
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0