
[[Category:Getting and installing Arch]]

[[Category:File systems]]

{{Article summary start|Summary}}

{{Article summary text|This article explains how to install, configure and maintain a RAID system.}}

{{Article summary heading|Required software}}

{{Article summary link|mdadm|http://neil.brown.name/blog/mdadm}}

{{Article summary link|parted|http://www.gnu.org/software/parted/}}

{{Article summary heading|Related articles}}

{{Article summary wiki|Software RAID and LVM}}

{{Article summary wiki|Installing with Fake RAID}}

{{Article summary wiki|Convert a single drive system to RAID}}

{{Article summary end}}


== Introduction ==

{{box BLUE||See the [http://it.wikipedia.org Wikipedia] article on this subject for more information: [[wikipedia:it:RAID]].}}

RAID devices (Redundant Array of Independent Disks) are virtual devices created from two or more real block devices. This allows multiple devices (typically disk drives or partitions thereof) to be combined into a single device to hold, for example, a single filesystem. RAID is designed to prevent data loss in the event of a hard disk failure. There are different [http://it.wikipedia.org/wiki/RAID#Livelli_RAID_standard RAID levels].

===Standard RAID levels===

; [[Wikipedia:it:RAID#RAID_0_(Striping)|RAID-0]]: Uses striping to combine disks. Not really RAID, in that it ''provides no redundancy''. It does, however, provide ''a big speed benefit''. This example uses RAID-0 for swap, on the assumption that a desktop system is being used, where the speed increase is worth the possibility of a system crash if one of the disks fails. On a server, a RAID-1 or RAID-5 array is more appropriate. The reliability of a given RAID-0 array is equal to the average reliability of its disks divided by the number of disks in the array. So reliability, measured as mean time between failures (MTBF), is inversely proportional to the number of members; a two-disk array is roughly half as reliable as a single disk.

; [[Wikipedia:it:RAID#RAID_1_(Mirroring)|RAID-1]]: The most straightforward RAID level: a straight mirror. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. The example uses RAID-1 for everything except swap. Note that RAID-1 is the only option for the boot partition, because a bootloader (which reads the boot partition) does not understand RAID, but a RAID-1 component partition can be read like a normal partition. The size of a RAID-1 array equals the size of its smallest component partition.

; [[Wikipedia:it:RAID#RAID_5_(Distributed_Parity)|RAID-5]]: Requires 3 or more physical drives, and provides the redundancy of RAID-1 combined with the speed and size benefits of RAID-0. RAID-5 uses striping, like RAID-0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID-5 can withstand the loss of one member disk.

{{Note|RAID-5 is commonly chosen for its combination of speed and data redundancy. The caveat is that if one drive fails and a second drive fails before the first has been replaced, all data will be lost. For thorough information on this subject, see the ''[http://ubuntuforums.org/showthread.php?t=1588106 RAID5 Risks]'' discussion on the Ubuntu forums. The best alternative to RAID-5, when redundancy is crucial, is RAID-10.}}

=== Nested RAID levels ===

; [[Wikipedia:it:RAID#RAID_1+0|RAID 1+0]]: Commonly referred to as RAID-10, it is a nested RAID that combines two of the standard RAID levels to gain performance and additional redundancy.
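As an illustrative sketch (device names are placeholders, not from the original article), mdadm can create such an array directly with {{ic|1=--level=10}}:

{{bc|1=# mdadm --create --verbose /dev/md/raid10 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1}}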

=== Redundancy ===

{{Warning|Installing a system with RAID is a complex process that may destroy data. Be sure to back up all data before proceeding.}}

RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen, or if you have multiple hard drive failures, RAID will not protect your data. Therefore, it is important to make backups (see [[Backup Programs|backup programs]]). Whether you use tape drives, DVDs, CDROMs or another computer, keep an up-to-date copy of your data away from your computer (and preferably off-site). Get into the habit of making regular backups. You can also divide the data on your computer into current and archived directories, then back up the current data frequently, and the archived data occasionally.

=== RAID level comparison ===

{| class="wikitable" border="1" cellpadding="5" cellspacing="0"
! RAID level!!Data redundancy!!Physical drive utilization!!Read performance!!Write performance!!Min drives!!Max drives
|-align="center"
| '''0'''||'''No'''||100%||'''Superior'''||'''Superior'''||1||16
|-align="center"
| '''1'''||Yes||50%||Very high||Very high||2||2
|-align="center"
| '''5'''||Yes||67% - 94%||'''Superior'''||High||3||16
|-align="center"
| '''6'''||Yes||50% - 88%||Very high||High||4||16
|-align="center"
| '''10'''||Yes||50%||Very high||Very high||4||16
|}

==Installation==

[[pacman|Install]] {{Pkg|mdadm}} and {{Pkg|parted}}, available in the [[Official Repositories]].

===Prepare the device===

To prevent possible issues down the line, you should consider wiping the entire disk before setting up RAID. Repeat this for each disk you will be using for RAID; the following commands completely erase anything currently on the device!

{{Warning|These steps erase everything on {{ic|/dev/disk-to-clean}}, so type carefully.}}

Erase any old RAID configuration information:

{{bc|1=# mdadm --zero-superblock /dev/disk-to-clean}}

Erase all partition-table data:

{{bc|1=# dd if=/dev/zero of=/dev/disk-to-clean bs=4096 count=1}}

Make sure the kernel clears old entries:

{{bc|1=# partprobe -s}}

Verify the entries in {{ic|/etc/fstab}} and {{ic|/etc/mdadm.conf}}.

With software RAID, disabling the hard disk's write cache helps prevent data loss during a power failure, as long as you do not use a [[Wikipedia:Uninterruptible power supply|UPS]]. Repeat the command for each drive in the array. Note, however, that this decreases performance.

{{bc|# hdparm -W 0 /dev/path_to_disk}}
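Afterwards, {{ic|hdparm -W}} with no value reports the current write-cache state, which can be used to verify the change (the path is a placeholder):

{{bc|# hdparm -W /dev/path_to_disk}}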

===Create the partition table===

The RAID setup varies between the different RAID levels. If you know which RAID level you want and have already set up your hardware accordingly, you can proceed with formatting the disks you want in your array. It is also possible to create a RAID array directly on the raw disks (without partitions), but this is not recommended because it can cause problems when swapping a failed disk.

When replacing a failed disk of a RAID-array, the new disk has to be exactly the same size as the failed disk or bigger — otherwise the array recreation process will not work. Even hard drives of the same manufacturer and model can have small size differences. By leaving a little space at the end of the disk unallocated one can compensate for the size differences between drives, which makes choosing a replacement drive model easier. Therefore, it is good practice to leave about 100 MB of unallocated space at the end of the disk.
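As a hedged sketch of that practice, parted can create a single partition that stops about 100 MiB short of the end of a hypothetical disk {{ic|/dev/sdx}} (the {{ic|--}} keeps the negative end offset from being parsed as an option):

{{bc|# parted -a optimal /dev/sdx -- mklabel msdos mkpart primary 1MiB -100MiB}}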

Format one of the drives in the array with your favorite tool. For example,

{{bc|# cfdisk /dev/path_to_disk}}

{{Tip|Using GParted to create the partitions and align them to the cylinder will create optimized disk alignment. This can be achieved using the [http://gparted.sourceforge.net/livecd.php Gnome Partition Editor Live Media].}}

====Partition code====

The two [[Wikipedia:Partition types|partition type]]s that are applicable to RAID devices are Non-FS data and Linux RAID auto. Non-FS data is recommended, as with it your array is not auto-assembled during boot. With Linux RAID auto one may run into trouble when booting from a live CD or when installing the degraded RAID array in a different system (perhaps with other degraded RAID arrays, in the worst case), as Linux will try to automatically assemble and resync the array, which could render the data on the array unreadable if it fails.

{{note|cfdisk and mkpart use a set of "filesystem types" to set the partition codes. Each type corresponds to a partition code (see [http://www.gnu.org/software/parted/manual/html_node/mkpart.html#mkpart Parted User's Manual]). It uses the {{ic|da}} type to denote Non-FS data and {{ic|fd}} for Linux RAID auto.}}
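As a non-interactive sketch, assuming the classic (pre-2.26 util-linux) sfdisk available at the time of writing, partition 1 of a placeholder disk can be set to {{ic|da}} (Non-FS data) with:

{{bc|# sfdisk --id /dev/path_to_disk 1 da}}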

===Copy the partition table===

Once you have a properly partitioned and aligned disk you can copy the setup to any other disk.

Verify your partitions meet basic requirements:

{{bc|1=# sfdisk -lRV /dev/path_to_formatted_array_disk}}

Dump the partition table from the formatted disk to a file:

{{bc|
# sfdisk -d /dev/path_to_formatted_array_disk > ~/formatted_array.dump
}}

Copy the partition table from the disk dump file to all other disks in the array:

{{bc|
# sfdisk /dev/path_to_unformatted_array_disk < ~/formatted_array.dump
}}

After repeating the command for every unformatted disk in the array, verify that the disks are identical with:

{{bc|# fdisk -l}}

or

{{bc|# sfdisk -l -u S}}
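One way to compare the tables mechanically (a sketch using bash process substitution; adjust the device paths):

{{bc|# diff <(sfdisk -d /dev/path_to_array_disk-1) <(sfdisk -d /dev/path_to_array_disk-2)}}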

===Build the array===

Now build the array (e.g. [http://fomori.org/blog/blog/2011/10/19/raid5-server-to-hold-all-your-data-%e2%80%94-the-nas-alternative/ post on RAID5 setup]).

{{Warning|Make sure to change the '''bold values''' below to match your setup.}}

{{bc|1=# mdadm --create --verbose /dev/md/your_array --level='''5''' --metadata='''1.2''' --chunk='''256''' --raid-devices='''5''' '''/dev/path_to_array_disk-1 /dev/path_to_array_disk-2 /dev/path_to_array_disk-3 /dev/path_to_array_disk-4 /dev/path_to_array_disk-5'''}}

The array is created under the virtual device ''/dev/md/your_array'', assembled and ready to use (in degraded mode). You can directly start using it while mdadm resyncs the array in the background. It can take a long time to restore parity; you can check the progress with:

{{bc|$ cat /proc/mdstat}}
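To monitor it continuously instead of polling by hand:

{{bc|$ watch -t 'cat /proc/mdstat'}}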

===Update configuration file===

Since the installer builds the initrd using {{ic|/etc/mdadm.conf}} in the target system, you should update the default configuration file. The default file can be overwritten using the redirection operator, because it only contains explanatory comments.

Redirect the contents of the metadata stored on the named devices to the configuration file:

{{bc|# mdadm --examine --scan > /etc/mdadm.conf}}

{{Note|If you are updating your RAID configuration from within the Arch Installer by swapping to another TTY, you will need to ensure that you are writing to the correct {{ic|mdadm.conf}} file:}}

{{bc|# mdadm --examine --scan > /mnt/etc/mdadm.conf}}

Once the configuration file has been updated, the array can be assembled using mdadm:

{{bc|# mdadm --assemble --scan}}

===Configure filesystem===

The array can now be formatted like any other disk; just keep in mind that:

* Due to the large volume size not all filesystems are suited (see: [[Wikipedia:Comparison of file systems#Limits|File system limits]]).

* The filesystem should support growing and shrinking while online (see: [[Wikipedia:Comparison of file systems#Features|File system features]]).

* The biggest performance gain you can achieve on a RAID array is to make sure you format the volume aligned to your RAID stripe size (see: [http://wiki.centos.org/HowTos/Disk_Optimization RAID Math]); a worked sketch follows this list.
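As a worked sketch (not in the original; the values assume the 5-disk RAID-5 with 256 KiB chunks created earlier and a 4 KiB filesystem block): stride = chunk size / block size = 256 KiB / 4 KiB = 64, and stripe-width = stride × data disks = 64 × 4 = 256 (RAID-5 dedicates one disk's worth of capacity to parity, so 4 of the 5 disks carry data):

{{bc|1=# mkfs.ext4 -b 4096 -E stride=64,stripe-width=256 /dev/md/your_array}}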

===Assemble array on boot===

If you selected the Non-FS data partition code, the array will not be automatically assembled after the next boot. To assemble the array, issue the following command:

{{bc|
# mdadm --assemble --scan /dev/md/your_array --uuid=your_array_uuid
}}

or write it to {{ic|rc.local}}.

== Mounting from a Live CD ==

If you want to mount your RAID partition from a Live CD, use:

{{bc|# mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3}}

(or whichever md device and drives apply to you).

{{Note|Live CDs like [http://www.sysresccd.org/Main_Page SystemRescueCD] assemble the RAID arrays automatically at boot time if you used partition type fd when creating the array.}}

==Removing a device, stopping the array==

You can remove a device from the array after marking it as faulty:

{{bc|# mdadm --fail /dev/md0 /dev/sdxx}}

Then remove it from the array:

{{bc|# mdadm -r /dev/md0 /dev/sdxx}}

To remove a device permanently (for example, in order to use it individually from now on), issue the two commands described above and then:

{{bc|# mdadm --zero-superblock /dev/sdxx}}

After this you can use the disk as you did before creating the array.

{{Warning|If you reuse the removed disk without zeroing the superblock, you will '''LOSE''' all your data on the next boot (since mdadm will try to use it as part of the RAID array). '''DO NOT''' issue the fail and remove commands on linear or RAID-0 arrays, or you will '''LOSE''' all the data on the array.}}

To stop using an array:

# Unmount the target array.
# Repeat the three commands described at the beginning of this section on each device.
# Stop the array with: {{ic|mdadm --stop /dev/md0}}
# Remove the corresponding line from {{ic|/etc/mdadm.conf}}.

== Adding a device to the array ==

Adding new devices with mdadm can be done on a running system with the devices mounted.

Partition the new device {{ic|/dev/sdx}} using the same layout as one of the disks already in the array, e.g. {{ic|/dev/sda}}:

{{bc|# sfdisk -d /dev/sda > table
# sfdisk /dev/sdx < table}}
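With the partition table in place, the new device can be added to the array (a sketch; {{ic|/dev/sdx1}} is the placeholder partition created above):

{{bc|# mdadm --add /dev/md0 /dev/sdx1}}

mdadm then rebuilds onto the new member; progress can be followed in {{ic|/proc/mdstat}}.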

==Troubleshooting==

If you get an error on reboot about an "invalid raid superblock magic" and you have additional hard drives besides the ones you installed to, check that your hard drive order is correct. During installation, your RAID devices may be hdd, hde and hdf, but during boot they may be hda, hdb and hdc. Adjust the kernel line in {{ic|/boot/grub/menu.lst}} accordingly.

===Start arrays read-only===

When an md array is started, the superblock will be written and resync may begin. To start read-only, set the kernel module {{ic|md_mod}} parameter {{ic|start_ro}}. When this is set, new arrays get an 'auto-ro' mode, which disables all internal I/O (superblock updates, resync, recovery) and is automatically switched to 'rw' when the first write request arrives.

{{Note|The array can be set to true 'ro' mode using {{ic|mdadm -r}} before the first write request, or resync can be started without a write using {{ic|mdadm -w}}.}}

To set the parameter at boot, add {{ic|1=md_mod.start_ro=1}} to your {{ic|/boot/grub/menu.lst}} kernel line:

{{bc|
kernel /vmlinuz-linux root=/dev/sda1 ro rootwait md_mod.start_ro=1 quiet 3
}}

Or set it at module load time from a file in {{ic|/etc/modprobe.d/}}, or directly via {{ic|/sys/}}:

{{bc|echo 1 > /sys/module/md_mod/parameters/start_ro}}
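A minimal sketch of the modprobe.d variant (the file name is an arbitrary choice):

{{bc|1=# echo "options md_mod start_ro=1" > /etc/modprobe.d/md_mod.conf}}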

===Recovering from a broken or missing drive in the RAID===

You might also get the above-mentioned error when one of the drives breaks for whatever reason. In that case you will have to force the RAID to start even with one disk short. Type this (change where needed):

{{bc|# mdadm --manage /dev/md0 --run}}

Now you should be able to mount it again with something like this (if you had it in fstab):

{{bc|# mount /dev/md0}}

The RAID should now be working again and available to use, although with one disk short. To add a replacement disk, partition it as described above in [[#Create the partition table]]. Once that is done, you can add the new disk to the array:

{{bc|# mdadm --manage --add /dev/md0 /dev/sdd1}}

If you type:

{{bc|# cat /proc/mdstat}}

you will probably see that the RAID is now active and rebuilding.

You also might want to update your configuration (see: [[#Update configuration file]]).

== Benchmarking ==

There are several tools for benchmarking a RAID. The most notable improvement is the speed increase when multiple threads are reading from the same RAID volume.

[http://sourceforge.net/projects/tiobench/ Tiobench] specifically benchmarks these performance improvements by measuring fully-threaded I/O on the disk.

[http://www.coker.com.au/bonnie++/ Bonnie++] tests database type access to one or more files, and creation, reading, and deleting of small files which can simulate the usage of programs such as Squid, INN, or Maildir format e-mail. The enclosed [http://www.coker.com.au/bonnie++/zcav/ ZCAV] program tests the performance of different zones of a hard drive without writing any data to the disk.

{{ic|hdparm}} should '''NOT''' be used to benchmark a RAID, because it provides very inconsistent results.

== Additional Resources ==

* [http://en.gentoo-wiki.com/wiki/RAID/Software RAID/Software] on the Gentoo Wiki

* [http://en.gentoo-wiki.com/wiki/Software_RAID_Install Software RAID Install] on the Gentoo Wiki

* [http://www.gentoo.org/doc/en/articles/software-raid-p1.xml Software RAID in the new Linux 2.4 kernel, Part 1] and [http://www.gentoo.org/doc/en/articles/software-raid-p2.xml Part 2] in the Gentoo Linux Docs

* [http://raid.wiki.kernel.org/index.php/Linux_Raid Linux RAID wiki entry] on The Linux Kernel Archives

* [http://linux-101.org/howto/arch-linux-software-raid-installation-guide Arch Linux software RAID installation guide] on Linux 101

* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-raid.html Chapter 15: Redundant Array of Independent Disks (RAID)] of Red Hat Enterprise Linux 6 Documentation

* [http://tldp.org/FAQ/Linux-RAID-FAQ/x37.html Linux-RAID FAQ] on the Linux Documentation Project

* [http://support.dell.com/support/topics/global.aspx/support/entvideos/raid?c=us&l=en&s=gen Dell.com Raid Tutorial] - Interactive Walkthrough of Raid

* [http://www.miracleas.com/BAARF/ BAARF] including ''[http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt Why should I not use RAID 5?]'' by Art S. Kagel

* [http://www.linux-mag.com/id/7924/ Introduction to RAID], [http://www.linux-mag.com/id/7931/ Nested-RAID: RAID-5 and RAID-6 Based Configurations], [http://www.linux-mag.com/id/7928/ Intro to Nested-RAID: RAID-01 and RAID-10], and [http://www.linux-mag.com/id/7932/ Nested-RAID: The Triple Lindy] in Linux Magazine

'''mdadm'''

* [http://anonscm.debian.org/gitweb/?p=pkg-mdadm/mdadm.git;a=blob_plain;f=debian/FAQ;hb=HEAD Debian mdadm FAQ]

* [http://www.kernel.org/pub/linux/utils/raid/mdadm/ mdadm source code]

* [http://www.linux-mag.com/id/7939/ Software RAID on Linux with mdadm] in Linux Magazine

'''Forum threads'''

* [http://forums.overclockers.com.au/showthread.php?t=865333 Raid Performance Improvements with bitmaps]

* 2011-08-28 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?id=125445 GRUB and GRUB2]

* 2011-08-03 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?id=123698 Can't install grub2 on software RAID]

* 2011-07-29 - Gentoo - [http://forums.gentoo.org/viewtopic-t-888624-start-0.html Use RAID metadata 1.2 in boot and root partition]

'''RAID with encryption'''

* [http://www.shimari.com/dm-crypt-on-raid/ Linux/Fedora: Encrypt /home and swap over RAID with dm-crypt] by Justin Wells
