Installing EVMS

1. Downloading EVMS
2. Adding Support to the Kernel
3. Installing the Engine
4. Activating EVMS Volumes
5. Root Filesystem on an EVMS Volume
6. Boot Filesystem on an EVMS Volume
7. Note to Software-RAID Users

===============================================================================
1. Downloading EVMS

   To install EVMS, you must first download the latest version (2.0.0) from
   the EVMS homepage (http://sourceforge.net/projects/evms/). Download the
   file "evms-2.0.0.tar.gz". This file contains all of the source code for
   the user-space administration tools, as well as some patches for the Linux
   kernel.

   After downloading the file, untar it in the appropriate place, using the
   following command:

       cd /usr/src
       tar xvzf evms-2.0.0.tar.gz

        NOTE: These commands assume the file is untarred in the "/usr/src"
              directory. Any other directory will work just as well.

===============================================================================
2. Adding Support to the Kernel

   A. Obtaining the Linux Kernel Source

      If you do not have a current Linux kernel source tree, you can obtain one
      from The Linux Kernel Archives (http://www.kernel.org). The current
      stable kernel is 2.4.20 and the current development kernel is 2.5.66.
      For general instructions on configuring and compiling a Linux kernel,
      please see The Kernel HOWTO (http://www.tldp.org/HOWTO/Kernel-HOWTO.html).

   B. Obtaining the Latest Device-Mapper Source

      The Device-Mapper driver has been accepted into the development kernel
      tree as of 2.5.45. As of version 2.5.66, the stock kernel is up-to-date
      with all of the extra patches from Joe Thornber at Sistina.

      The Device-Mapper driver has not yet been accepted into the stable kernel
      tree, and must be added to the 2.4.20 kernel.

      1. Download one of the two following packages, depending on the kernel
         version you are compiling.

         For 2.5.66 kernels:
           (no extra download currently required)

         For 2.4.20 kernels:
           http://people.sistina.com/~thornber/patches/2.4-stable/2.4.20/2.4.20-dm-10.tar.bz2

      2. Untar the downloaded package:

         For 2.5.66:
            (no extra base patches currently required)

         For 2.4.20:
            cd /usr/src/
            tar xvjf 2.4.20-dm-10.tar.bz2

      3. Apply each of the patches from this package to your kernel:

         For 2.5.66:
            (no extra base patches currently required)

         For 2.4.20:
            cd /usr/src/linux-2.4.20/
            cat /usr/src/2.4.20-dm-10/*.patch | patch -p1

   C. Adding EVMS Kernel Patches

      In addition to the base MD and Device-Mapper support, EVMS requires
      a few additional patches for the EVMS engine to work correctly with
      these drivers. These patches are provided in the EVMS package, in the
      "kernel/2.5.66/" and "kernel/2.4.20/" subdirectories. See the INDEX
      files in those directories for descriptions of the patches.

      NOTE: If you are currently using the Sistina LVM2 tools, the patches
      provided here will *NOT* be compatible with the LVM2 tools. EVMS has
      made some changes to the kernel ioctl packets so they will work
      correctly on certain 64-bit architectures. These changes are still
      being reviewed by Sistina. Thus, for the time being, you will need to
      use separate kernels for running EVMS and LVM2.

      Apply each of the patches to your kernel.

         For 2.5.66:
            cd /usr/src/linux-2.5.66/
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.5.66/1-dm-base.patch
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.5.66/2-syncio.patch
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.5.66/3-dm-bbr.patch
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.5.66/4-dm-sparse.patch

         For 2.4.20:
            cd /usr/src/linux-2.4.20/
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.4.20/1-dm-base.patch
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.4.20/2-syncio.patch
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.4.20/3-dm-bbr.patch
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.4.20/4-dm-sparse.patch
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.4.20/5-md.c.patch
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.4.20/6-vsprintf.c.patch
            patch -p1 < /usr/src/evms-2.0.0/kernel/2.4.20/7-vfs-lock.patch

   D. Configuring the Kernel

      After patching the kernel, the next step is configuring it with the
      required support. To configure the kernel, complete the following steps:

      1. Type the following command:

             make xconfig

              NOTE: You can also use "make config" or "make menuconfig".

      2. Select the "Code maturity level options" menu and enable the
         following option:

           <y> Prompt for development and/or incomplete code/drivers

      3. To enable MD and DM support, select the
         "Main Menu->Multi-device support (RAID and LVM)" menu, and select
         the following option:

           <y> Multiple devices driver support (RAID and LVM)

      4. If you will be using the MD plugin in EVMS, turn on the following
         menu options. Also, please see the "Note To Software-RAID Users" at
         the end of this INSTALL file.

           <y> RAID support
           <y>   Linear (append) mode
           <y>   RAID-0 (striping) mode
            <y>   RAID-1 (mirroring) mode
           <y>   RAID-4/RAID-5 mode

      5. If you will be using any other plugins in EVMS (besides MD), turn
         on the following menu options.

           <y> Device mapper support
           <y>   Bad Block Relocation Device Target
           <y>   Sparse Device Target

      Continue configuring your kernel as required for your system and hardware.
      When you have finished configuring your kernel, choose Save and Exit to
      quit the kernel configuration.

   E. Building and Installing the New Kernel

      Once you have configured the kernel, you will need to build the kernel.

      1. Type the following command:

             make dep && make clean && make bzImage modules modules_install

      2. Copy the new kernel to the appropriate location.

         NOTE: Use the arch/i386/boot/bzImage file on Intel machines.

      3. If you use LILO as your boot-loader, run lilo to install the new
         kernel image.

      4. Re-boot your machine to start the new kernel.
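   Steps 2 and 3 might look like the following on an Intel machine with a
   2.4.20 kernel tree and LILO. The image name and the lilo.conf entry are
   examples only; adjust them for your own setup:

```shell
# Copy the freshly built kernel image to /boot under a new name
# (the version suffix is only an example)
cp /usr/src/linux-2.4.20/arch/i386/boot/bzImage /boot/vmlinuz-2.4.20-evms

# Add a matching section to /etc/lilo.conf, for example:
#   image=/boot/vmlinuz-2.4.20-evms
#       label=evms
#       read-only
#       root=/dev/hda2
# then record the new kernel's location in the boot map:
lilo
```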

===============================================================================
3. Installing the Engine

   The EVMS Engine consists of all the user-space administration tools and
   libraries for EVMS. The Engine also contains a stand-alone library, dlist,
   that the Engine uses for linked-list management.

   A. To build and install the Engine, type the following commands:

          cd /usr/src/evms-2.0.0
          ./configure [--options]

   B. Select the appropriate options for your configuration. Some of the more
      important ones are listed here:

      --prefix=dir
      The directory prefix for the entire installation. The default path is /.

      --libdir=dir
      The directory to install the main engine and dlist libraries. The default
      path is ${prefix}/lib. The EVMS plugin libraries will be installed in the
      "evms" subdirectory of this path.

      --sbindir=dir
      The directory to install all EVMS user-interface binaries. The default
      path is ${prefix}/sbin.

      --disable-"plugin-name"
      By default, all EVMS plug-ins are compiled (unless a plug-in has
      dependencies that are not satisfied on the building machine). This option
      allows the user to remove one or more plug-ins from the build. Acceptable
      options for "plugin-name" are:
         aix, bbr, bbr_seg, bsd, csm, disk, dos, drivelink, ext2, gpt, ha, jfs,
         lvm, md, os2, reiser, replace, s390, snapshot, sparse, swap, xfs

      --disable-"interface-name"
      By default, all EVMS user-interfaces are compiled (unless an interface
      has dependencies that are not satisfied on the building machine). This
      option allows the user to remove one or more interfaces from the build.
      Acceptable options for "interface-name" are:
         cli, gui, text-mode, utils

      --enable-text-mode-old
      A new ncurses-based text-mode interface has been written. However, the
      old interface is still included in the package, but is not built by
      default. To build the old text-mode interface, use this option.

      --with-debug
      Include extra debugging information when building EVMS.

      --with-efence
      Specify this if the engine should be linked with the ElectricFence
      memory-debugging library. You must have libefence installed on your
      system for this option to work.
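      As an example, a build that installs under /usr, puts the binaries in
      /sbin, and skips the GUI and the OS/2 plug-in could be configured as
      follows (this particular combination of options is only an
      illustration):

```shell
cd /usr/src/evms-2.0.0
./configure --prefix=/usr --sbindir=/sbin --disable-gui --disable-os2
```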

   C. Type the following commands:

          make
          make install
          ldconfig

      Unless you specified other directories, the following list describes
      where files will be installed on your system:

      - The core Engine library will be installed in /lib.
      - All plug-in libraries will be installed in /lib/evms.
      - All user interface binaries will be installed in /sbin.
      - The EVMS man pages will be installed in /usr/man/man8.
      - The EVMS header files will be installed in /usr/include/evms.
      - The EVMS configuration file will be installed in /etc.

   D. Add the Engine library path to your LD_LIBRARY_PATH environment variable,
      or to your /etc/ld.so.conf file. Do not add the plug-in library path
      because the Engine will dynamically load these libraries directly.
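      For example, assuming the libraries were installed under /usr/local/lib
      (i.e. you configured with --prefix=/usr/local; substitute your actual
      library directory), either of the following would work:

```shell
# Option 1: per-session, via the environment (e.g. in a shell profile)
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

# Option 2: system-wide, via the dynamic linker's configuration
echo /usr/local/lib >> /etc/ld.so.conf
ldconfig
```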

   E. Examine the EVMS configuration file (evms.conf). This file contains
      settings to control how EVMS operates. For example, the logging level,
      the location of the engine log, and the list of disk devices to examine
      can all be controlled through settings in the configuration file. The
      sample file is well commented, and will advise you of appropriate
      values for each setting.

      This file is normally installed as /etc/evms.conf. However, if you
      already have a configuration file, the new file is installed as
      evms.conf.sample instead.
      You should examine the new sample to see if your existing file should
      be updated.

   You can now begin using EVMS by typing "evmsgui" to start the GUI, "evmsn"
   to start the ncurses UI, or "evms" to start the command-line interface.

===============================================================================
4. Activating EVMS Volumes

In the previous EVMS design (releases 1.2.1 and earlier), volume discovery was
performed in the kernel, and all volumes were immediately activated at boot
time. With the new EVMS design, volume discovery is performed in user-space,
and volumes are activated by communicating with the kernel. Thus, in order to
activate your volumes, you must open one of the EVMS user-interfaces and
perform a save, which will activate all inactive volumes.

For instance, start the GUI by running "evmsgui". The checkboxes in the
"Active" column will initially be empty. Press the "Save" button to activate
all of the volumes; each checkbox should then be filled in.

In addition to manually starting one of the EVMS UIs, there is a new
utility called "evms_activate". This utility simply opens the EVMS engine and
issues a commit command. You may want to add a call to "evms_activate" to your
boot scripts in order to automatically activate your volumes at boot time. If
you have volumes listed in your /etc/fstab file, you will need to call
evms_activate before the fstab file is processed.

   NOTE: EVMS requires /proc to be mounted in order to find the Device-Mapper
         driver. If you run evms_activate before processing the fstab file,
         you may need to manually mount and unmount /proc around the call to
         evms_activate.

Once the volumes are activated, you may mount them in the normal fashion, using
the dev-nodes in the /dev/evms/ directory.
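A boot-script fragment that activates EVMS volumes before /etc/fstab is
processed might look like the following. The /proc handling follows the note
above, and the volume name in the mount example is hypothetical:

```shell
# Make /proc available so the engine can find the Device-Mapper driver
mount -t proc proc /proc

# Discover and activate all EVMS volumes
evms_activate

# Release the temporary mount; fstab processing will remount it later
umount /proc

# Volumes can now be mounted through their /dev/evms/ device-nodes,
# for example (volume name is hypothetical):
#   mount /dev/evms/home /home
```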

To go along with the "evms_activate" utility, there is also an
"evms_deactivate" utility. This will deactivate all Device-Mapper devices
from the kernel. It can be called from your system's shutdown scripts to
ensure clean deactivation of all EVMS volumes before halting or rebooting.
This utility only handles Device-Mapper devices, since the MD kernel driver
automatically deactivates RAID devices at system shutdown.
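Correspondingly, a shutdown-script fragment might unmount the EVMS volumes and
then deactivate them (the mount point is illustrative):

```shell
# Unmount all EVMS volumes first (example mount point)
umount /home

# Tear down all Device-Mapper devices; MD devices are stopped by the
# MD driver itself at system shutdown
evms_deactivate
```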

===============================================================================
5. Root Filesystem on an EVMS Volume

Now that volume discovery and activation are done in user-space, there is an
issue with having your system's root filesystem on an EVMS volume. In order for
the root filesystem's volume to be activated, the EVMS tools must run. But in
order to get to the EVMS tools, the root filesystem must be mounted.

The solution to this dilemma is to use an initial ramdisk (initrd). This is a
ram-based device that acts as a temporary root filesystem at boot time, and
provides the ability to run programs and load modules that are necessary to
activate the true root filesystem.

To set up an initial ramdisk for your system, please see the INSTALL.initrd
instructions included in the EVMS package.

===============================================================================
6. Boot Filesystem on an EVMS Volume

Currently, there are two boot-loaders commonly in use on Linux: LILO and Grub.
The bootloader you are running will determine whether your /boot filesystem can
be on an EVMS volume (if /boot is not on its own volume, then this discussion
applies to the root filesystem instead, since that is where /boot will reside).

LILO Users:

   After compiling a new kernel, you run the "lilo" command to record the
   kernel's location in a place that is accessible at boot time. LILO does
   this by asking the filesystem for a list of the blocks that make up the
   kernel image. It then translates this list of blocks to a list of sectors
   on the raw disk. This list of sectors is recorded and accessible to LILO
   at boot time. However, LILO can only generate this list when the kernel
   image is on a regular partition. For more complex volumes, it must ask the
   kernel driver for that volume to translate the sector location within the
   volume to a sector location on the disk.

   Currently, LILO does not support Device-Mapper devices. Device-Mapper (on
   2.4 kernels) does contain the necessary support to provide LILO with the
   required translations, but LILO has not yet been updated to ask Device-
   Mapper for this information. As with early versions of EVMS, we hope to
   have a patch available soon to make LILO work with Device-Mapper.

   For the time being, you will not be able to mount your /boot filesystem
   through the EVMS volume. If your /boot is on a regular partition, you
   should mount that partition through the traditional device-node
   (e.g. /dev/hda1).

   Alternatively, you can mount the /boot partition through the EVMS volume
   during normal system operation. Then, if you ever need to run "lilo",
   temporarily unmount /boot and remount using the traditional device-node.
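   Sketched as commands, that remount workaround looks like this (both
   device-nodes are examples; substitute your actual /boot partition and its
   EVMS device-node):

```shell
# Temporarily switch /boot from the EVMS device-node to the traditional
# partition device-node so lilo can map the kernel image to disk sectors
umount /boot
mount /dev/hda1 /boot
lilo

# Switch back to the EVMS volume for normal operation
umount /boot
mount /dev/evms/hda1 /boot
```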

Grub Users:

   Grub works differently than LILO, in that it contains native support for
   partitions and filesystems. At boot time, it finds the /boot partition and
   looks for its configuration file in the filesystem. It uses this config
   file to locate the kernel image, which it then loads into memory.

   However, Grub does not have support for complex volumes. This means your
   /boot volume must be based on a single partition. In EVMS, this partition
   can be either a compatibility volume or an EVMS volume. Since the underlying
   partition is understood by both EVMS and Grub, this method is compatible
   with both systems. You may mount this volume through the regular EVMS
   device-node.

===============================================================================
7. Note to Software-RAID Users

EVMS uses Device-Mapper to create mappings for all disk partitions/segments.
The major/minor numbers for these DM devices are different from the
major/minor numbers of the traditional disk partitions exported from the kernel.

If you have any MD devices made of disk partitions, those devices will have the
major/minor numbers of the traditional partitions recorded in their superblocks.
When EVMS activates those MD devices, it will rewrite the superblocks using the
major/minor numbers of the Device-Mapper partitions so that the MD driver can
correctly discover its devices using the Device-Mapper partitions instead of
the traditional partitions.

This means that if you use EVMS to discover and activate your MD devices, the
regular raidtools will no longer be able to activate them using the traditional
partitions. Please keep this in mind before trying the new EVMS engine!

However, we have added an option in the MD plug-in to allow an MD object to
revert to the old device numbers. When an MD object is first activated
by EVMS, the original device numbers will be saved in an unused area of the
MD superblock, and restored to the proper location when the user selects this
option within one of the EVMS user-interfaces.

