Category: Linux

  • Debian on self-encrypting drive using cryptsetup OPAL support

    Context

    For a long time, I have been using a hard drive with cryptsetup LUKS software encryption for my desktop. However, I recently decided to purchase a new disk and this time went for an SSD instead of an HDD. Interestingly, the SSD came with built-in support for hardware encryption. This meant that instead of having the CPU waste cycles on encryption / decryption, the SSD would take care of it, thereby freeing up CPU resources. The challenge? Setting up hardware disk encryption is complicated. Until Aug 2023! That is when cryptsetup, the tool that I have been using for software encryption, introduced support for hardware OPAL disk encryption, and this is what I have used to set up disk encryption for my Debian OS. I have documented the steps that I followed (mostly for self-reference), but I hope others find it useful.

    Some caveats before you start:

    1. Support for hardware OPAL encryption landed in cryptsetup 2.7.0. Debian stable (bookworm) still has cryptsetup 2.6.x. If you want to use this, you will either have to use Debian testing (trixie) or, if you intend to stay on Debian stable, upgrade cryptsetup from the testing repo (which is what I did).
    2. I have not done a fresh Debian installation; instead, once the disk encryption and the LUKS encrypted LVM were set up, I simply copied all the files from my existing OS over. If you want to do a fresh installation, I believe you can, once you create all the partitions. The LVM is purely optional; I prefer to use it for convenience.
    3. I also have two additional mount points (/stuff/ and /gallery/) which are in no way required or even standard. Again, this is just my personal preference. Feel free to ignore them.
    4. I have an EFI partition as my motherboard uses the UEFI interface for booting.
    5. You can set up hardware disk encryption only, or hardware disk encryption combined with software encryption. As my need was to provide basic protection, I decided to go for hardware-only encryption.
    6. The SSD was identified as /dev/sda and that is what I have used below. The device location might be different in your case.
    7. Last but not least, if you are copying over an existing OS, have a proper backup.

    If you have any queries, feel free to leave a comment, and if I have an answer, I will get back to you.

    Setup

    • Attach the new disk to the laptop
    • Prepare a USB drive with Debian Live
    • Boot into Debian Live. While booting:
    • Pass the boot option libata.allow_tpm=1 so that sedutil-cli --scan works fine
    • Pass the boot option efi=runtime if you use EFI for booting your system

    Check disk support for hardware encryption

    $ sudo apt install sedutil
    $ sudo sedutil-cli --scan
    Scanning for Opal compliant disks
    /dev/sda 2 CT2000MX500SSD1 M3CR046
    /dev/sdb No
    No more disks present ending scan
    

    If you see a number in the second column, like the 2 above, your drive supports OPAL.

    Reset your OPAL drive

    You will need your PSID to reset the drive. You can usually find the PSID printed on a sticker on the drive itself. If the PSID has dashes, ignore them.

    $ sudo cryptsetup luksErase --hw-opal-factory-reset /dev/sda
    Enter OPAL PSID:
    

    Create partitions

    My system partition layout with mount points:

    /dev/sda1    fat32    /boot/efi
    /dev/sda2    ext4     /boot
    /dev/sda3    LUKS encrypted LVM
        /dev/mapper/media--vg-root       ext4     /
        /dev/mapper/media--vg-home       btrfs    /home
        /dev/mapper/media--vg-swap_1     swap
        /dev/mapper/media--vg-stuff      btrfs    /stuff (optional partition that I use for storing general stuff)
        /dev/mapper/media--vg-gallery    btrfs    /gallery (optional partition that I use for storing images/videos)

    Create partitions

    $ sudo parted /dev/sda
    (parted) mklabel gpt
    (parted) mkpart ESP fat32 1MiB 526MiB
    (parted) set 1 boot on
    (parted) mkpart primary ext4 526MiB 1550MiB
    (parted) mkpart primary 1550MiB 100%
    (parted) print
    (parted) quit
    

    Create encrypted OPAL disk partition

    As mentioned earlier, ensure that you have cryptsetup 2.7.0 or newer. Older versions of cryptsetup will not work.

    $ sudo cryptsetup luksFormat /dev/sda3 --type luks2 --hw-opal-only
    

    The --hw-opal-only flag tells cryptsetup to use hardware encryption only. If you want to use software encryption on top of hardware encryption, pass the --hw-opal flag instead.

    Check the configuration with luksDump. The output will be different if you used the --hw-opal flag.

    $ sudo cryptsetup luksDump /dev/sda3
    LUKS header information
    Version: 2
    ...
    Data segments:
    0: hw-opal
    offset: 16777216 [bytes]
    length: ... [bytes]
    cipher: (no SW encryption)
    HW OPAL encryption:
    OPAL segment number: 1
    OPAL key: 256 bits
    OPAL segment length: ... [bytes]
    Keyslots:
    0: luks2
    Key: 256 bits
    ...
    

    If you used the --hw-opal flag, the output will be something like this.

    LUKS header information
    Version: 2
    ...
    
    Data segments:
    0: hw-opal
    offset: 16777216 [bytes]
    length: ... [bytes]
    cipher: aes-xts-plain64
    HW OPAL encryption:
    OPAL segment number: 1
    OPAL key: 256 bits
    OPAL segment length: ... [bytes]
    Keyslots:
    0: luks2
    Key: 256 bits
    ...
    

    Create LVM

    Mount LUKS partition

    $ sudo cryptsetup open /dev/sda3 sda3_crypt
    

    Create a PV

    $ sudo pvcreate /dev/mapper/sda3_crypt
    

    Create a volume group of physical volume

    $ sudo vgcreate media-vg /dev/mapper/sda3_crypt
    

    Verify VG configuration

    $ sudo vgdisplay
    

    Create logical volumes

    $ sudo lvcreate -n root -L 100g media-vg
    $ sudo lvcreate -n home -L 100g media-vg
    $ sudo lvcreate -n swap_1 -L 20g media-vg
    $ sudo lvcreate -n stuff -L 200g media-vg
    $ sudo lvcreate -n gallery -l 100%FREE media-vg
    $ sudo lvdisplay
    

    Format partitions

    $ sudo mkfs.fat -F32 /dev/sda1
    $ sudo mkfs.ext4 /dev/sda2
    $ sudo mkfs.ext4 /dev/media-vg/root
    $ sudo mkfs.btrfs /dev/media-vg/home
    $ sudo mkswap /dev/media-vg/swap_1
    $ sudo mkfs.btrfs /dev/media-vg/stuff
    $ sudo mkfs.btrfs /dev/media-vg/gallery
    

    Mount partitions and restore data

    Set up folder structure and mount partitions

    $ sudo mount /dev/media-vg/root /mnt/
    $ sudo mkdir /mnt/boot/ && sudo mount /dev/sda2 /mnt/boot/
    $ sudo mkdir /mnt/boot/efi && sudo mount /dev/sda1 /mnt/boot/efi
    $ sudo mkdir /mnt/home/ && sudo mount /dev/media-vg/home /mnt/home/
    $ sudo mkdir /mnt/stuff/ && sudo mount /dev/media-vg/stuff /mnt/stuff/
    $ sudo mkdir /mnt/gallery/ && sudo mount /dev/media-vg/gallery /mnt/gallery/
    

    Now, copy over all the files from the old system to the new system. If you are using rsync for the transfer, here is a sketch of how you can do it.
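    For example, assuming the old root filesystem is on /dev/sdb3 (a hypothetical device; adjust to your setup), something like this copies it over while preserving permissions, ownership, ACLs, extended attributes and hard links:

    $ sudo mkdir /oldroot
    $ sudo mount -o ro /dev/sdb3 /oldroot
    $ sudo rsync -aAXHv /oldroot/ /mnt/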

    Note: If you are doing a fresh OS installation, you can stop following the guide here, as the next steps are no longer relevant; just proceed with the installation the usual way. As a matter of fact, I believe that once you had created the encrypted OPAL disk partition (/dev/sda3), you could have switched to the installation tool at that point and created the LVM from it. However, I have not tried a fresh installation, so I can't confirm.

    chroot into the system

    First bind mount points

    $ for i in /dev /dev/pts /proc /sys /sys/firmware/efi/efivars /run; do sudo mount -o bind $i /mnt$i; done
    

    chroot into the system

    $ sudo chroot /mnt/
    

    Update partition and LVM related information

    Update /etc/fstab. To get the UUIDs of the partitions, use the blkid command (an example follows the file below). Below is the one for my system.

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # /dev/mapper/media--vg-root / ext4 errors=remount-ro 0 1
    # /boot was on /dev/sda2 during installation
    UUID=e63fbce3-8e20-40ed-af1a-17b73768f853 /boot ext4 defaults 0 2
    # /boot/efi was on /dev/sda1 during installation
    UUID=CDDE-67F3 /boot/efi vfat umask=0077 0 1
    /dev/mapper/media--vg-home /home btrfs defaults 0 0
    /dev/mapper/media--vg-swap_1 none swap sw 0 0
    /dev/mapper/media--vg-gallery /gallery btrfs defaults,nofail 0 0
    /dev/mapper/media--vg-stuff /stuff btrfs defaults,nofail 0 0
    
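    For example, to print the UUIDs of the boot and EFI partitions:

    # blkid /dev/sda1 /dev/sda2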

    Update /etc/crypttab. To get the UUID of the LUKS partition, run cryptsetup luksUUID /dev/sda3. Below is the one for my system.

    sda3_crypt UUID=42834177-2cb5-45ef-897f-af1c85f35bf1 none luks,discard
    

    Additionally, generate an LVM metadata backup

    # vgcfgbackup media-vg
    

    Finally, update initramfs (not sure if this step is really needed)

    # update-initramfs -u -k all
    

    Reinstall grub from within chroot

    Reinstall GRUB

    # grub-install --target=x86_64-efi
    

    Generate the GRUB configuration file:

    # update-grub
    

    More information about the GRUB bootloader can be found here. Of special interest are the grub-install arguments --efi-directory (the mount point of the EFI System Partition, /boot/efi in this setup) and --bootloader-id (which defaults to debian on Debian).
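    For example, an explicit invocation matching this setup would be:

    # grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian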

    Boot into system

    Now, exit the chroot, unmount the filesystems, remove the Debian Live USB drive and reboot into the new system.
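    A minimal way to do that from the live environment (assuming the bind mounts and partitions mounted earlier are still in place):

    # exit
    $ sudo umount -R /mnt
    $ sudo reboot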


  • Lesser Known Useful SSH Features

    I had given a small presentation in my organization on five of the lesser known useful SSH features. I am uploading the presentation here.

    Here is an interesting link I came across talking about the same topic: http://blog.tjll.net/ssh-kung-fu/

  • Configuring Thunderbird As KNotes Mail Action

    The steps are really simple.

    1. Save the below script to /usr/local/bin/knotes-mail-wrapper.sh

      #!/bin/bash
      
      set -eu
      
      # See /usr/share/doc/util-linux/examples/getopt-parse.bash for getopt example
      # Bad arguments, something has gone wrong with the getopt command.
      if ! TEMP=$(getopt --name "$0" --options s:b: --longoptions "subject:,body:" -- "$@"); then
        echo "Terminating ..." >&2
        exit 1
      fi
      
      # Note the quotes around `$TEMP': they are essential!
      eval set -- "$TEMP"
      
      # Now go through all the options
      while true; do
        case "$1" in
          -s|--subject)
            subject=$2
            shift 2;;
      
          -b|--body)
            body=$2
            shift 2;;
      
          --)
            shift
            break;;
      
          *)
            echo "Error parsing arguments" >&2
            exit 1;;
        esac
      done
      
      thunderbird_command=$(command -v icedove >/dev/null 2>&1 && echo icedove || echo thunderbird)
      $thunderbird_command -compose "subject='$subject',body='$body'"
    2. Make the script executable.
      $ chmod +x /usr/local/bin/knotes-mail-wrapper.sh
      
    3. Go to Configure KNotes… -> Actions and set the mail action to “/usr/local/bin/knotes-mail-wrapper.sh --subject %t --body %f”.
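    To test the wrapper by hand (the subject and body text below are arbitrary):

      $ /usr/local/bin/knotes-mail-wrapper.sh --subject "Test note" --body "Hello from KNotes"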

    Now, whenever you right-click on a note toolbar and click on Mail…, a Thunderbird compose window should pop up with the subject set to the note title and the body set to the content of the note.

  • Setting up SSH and OpenVPN on Netgear WNDR4300

    Introduction
    Unknown to many, the official Netgear WNDR4300 firmware is based on OpenWrt. The procedure described here involves making use of the OpenWrt repos to get openvpn up and running. It also involves recompiling the official firmware, so this is certainly not for the faint-hearted and expects you to have a good knowledge of Linux.

    Stock Firmware Info
    From the file /etc/banner on the router (I will explain later how you can telnet to the router), it is clear that the stock firmware is based on OpenWrt Kamikaze (bleeding edge, r18571). Based on http://wiki.openwrt.org/about/history, the closest available stable release is Kamikaze 8.09.2 (r18801), released in January 2010. Also, the presence of the file /lib/ar71xx.sh in the stock firmware indicates that the architecture is ar71xx.

    Stock Firmware Compilation And Installation

    1. For compiling the stock firmware source, it is recommended to use Ubuntu 10.04 (server edition), as the official firmware binary was compiled on Ubuntu 10.04.1 (Server) with gcc 4.1.3. So, download and install Ubuntu 10.04 (you can use a VM, which is more convenient).
    2. After installing Ubuntu 10.04 for building the firmware, install the build dependencies.
      $ sudo apt-get install gcc-4.1 g++-4.1 libncurses-dev zlib1g-dev gawk flex
      $ cd /usr/bin
      $ sudo ln -s gcc-4.1 gcc
      $ sudo ln -s g++-4.1 g++
      $ sudo ln -s gcc cc
      
    3. Netgear stock firmwares can be downloaded from http://kb.netgear.com/app/answers/detail/a_id/2649. Download and extract WNDR4300-V1.0.1.42_gpl_src.zip. You will also need WNDR4300-V1.0.1.30_gpl_src.zip for the toolchain.
      $ unzip /path/to/WNDR4300-V1.0.1.42_gpl_src.zip
      $ bunzip2 WNDR4300-V1.0.1.42_gpl_src.tar.bz2
      $ tar -xvf WNDR4300-V1.0.1.42_gpl_src.tar
      $ ls
      README.build  wndr4300-GPL.git  wndr4300_gpl_source_list.txt  WNDR4300-V1.0.1.42_gpl_src.tar
      
    4. Add init script wndr4300-GPL.git/target/linux/wndr4300/base-files/etc/init.d/startup with below content:
      #!/bin/sh /etc/rc.common
      START=99
      start() {
        if [ -x /jffs/startup.sh ]; then
          /jffs/startup.sh
        fi
      }
      

      Also make the init script executable

      $ chmod +x wndr4300-GPL.git/target/linux/wndr4300/base-files/etc/init.d/startup
      

      Now, you can write any commands in /jffs/startup.sh and they will be executed whenever the router boots up.

    5. Follow remaining instructions in README.build to finish the build.
    6. The final image is “bin/WNDR4300-V1.0.1.42.img”. Go to the Router Upgrade Page and upgrade to this newly built firmware.

    Logging in to the router (using Telnet)
    You can use the software at https://code.google.com/p/netgear-telnetenable/ to telnet to the router. The instructions for doing this are pretty straightforward.

    OpenWrt wiki page http://wiki.openwrt.org/toh/netgear/telnet.console also mentions other ways of accessing the telnet console but I haven’t tried them as netgear-telnetenable worked like a charm.

    Setting up ipkg

    1. wget, which is used by ipkg for downloading packages, is broken in the stock firmware. So, we need to download wget and its dependent packages from http://downloads.openwrt.org/kamikaze/8.09.2/ar71xx/packages and install them manually. However, as wget is broken, I didn't know how to download the packages directly onto the router. So, I downloaded them to my laptop, started a tftp server on the laptop (a minimal example follows the commands below), logged into the router, and transferred and installed the packages using the tftp client.
      Setting up a tftp server is out of the scope of this tutorial, but it is quite easy and you will find many tutorials online on how to do it.

      $ python telnetenable.py <IP> <MAC> <Username> <Password>
      BusyBox v1.4.2 (2013-12-26 18:08:07 UTC) Built-in shell (ash)
      Enter 'help' for a list of built-in commands.
      
        _______                     ________        __
       |       |.-----.-----.-----.|  |  |  |.----.|  |_
       |   -   ||  _  |  -__|     ||  |  |  ||   _||   _|
       |_______||   __|_____|__|__||________||__|  |____|
                |__| W I R E L E S S   F R E E D O M
       KAMIKAZE (bleeding edge, unknown) ------------------
        * 10 oz Vodka       Shake well with ice and strain
        * 10 oz Triple sec  mixture into 10 shot glasses.
        * 10 oz lime juice  Salute!
       ---------------------------------------------------
      root@WNDR4300:~# tftp -g -r libopenssl_0.9.8i-3.2_mips.ipk <laptop-ip>
      root@WNDR4300:~# ipkg install libopenssl_0.9.8i-3.2_mips.ipk
      root@WNDR4300:~# tftp -g -r wget_1.11.4-1_mips.ipk <laptop-ip>
      root@WNDR4300:~# ipkg install wget_1.11.4-1_mips.ipk
      
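      For reference, one way to serve the packages from a Debian/Ubuntu laptop is the tftpd-hpa package (the serve directory below is the Debian default; adjust if yours differs):

      $ sudo apt-get install tftpd-hpa
      $ sudo cp libopenssl_0.9.8i-3.2_mips.ipk wget_1.11.4-1_mips.ipk /srv/tftp/
      $ sudo service tftpd-hpa restart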
    2. Create /etc/ipkg.conf and update ipkg list.
      root@WNDR4300:~# echo -e "dest root /jffs\nsrc openwrt http://downloads.openwrt.org/kamikaze/8.09.2/ar71xx/packages" > /etc/ipkg.conf
      root@WNDR4300:~# export PATH=/jffs/bin:/jffs/sbin:/jffs/usr/bin:/jffs/usr/sbin:$PATH
      root@WNDR4300:~# ipkg update
      

    Installing SSH

    1. Now, you can install SSH from OpenWrt Kamikaze repos.
      root@WNDR4300:~# ipkg install openssh-server
      

      As a matter of fact, you can install any of the packages in http://downloads.openwrt.org/kamikaze/8.09.2/ar71xx/packages/ and they should most probably work.

    2. Kindly note that the binaries and libraries are installed to the /jffs partition and not /, as we have configured this in /etc/ipkg.conf (dest root /jffs). We did this so that the files persist when the router reboots. To accommodate this, you will have to modify /jffs/etc/init.d/sshd accordingly. Here is the modified script.
      #!/bin/sh /etc/rc.common
      # Copyright (C) 2006 OpenWrt.org
      START=50
      
      start() {
              for type in rsa dsa; do {
                      # check for keys
                      key=/jffs/etc/ssh/ssh_host_${type}_key
                      [ ! -f $key ] && {
                              # generate missing keys
                              [ -x /jffs/usr/bin/ssh-keygen ] && {
                                      /jffs/usr/bin/ssh-keygen -N '' -t $type -f $key 2>&- >&- && exec /etc/rc.common "$initscript" start
                              } &
                              exit 0
                      }
              }; done
              mkdir -p /var/empty
              chmod 0700 /var/empty
              /jffs/usr/sbin/sshd -f /jffs/etc/ssh/sshd_config
      }
      
      stop() {
              killall sshd
      }
      
    3. To start OpenSSH server:
      root@WNDR4300:~# /jffs/etc/init.d/sshd start
      

    Installing OpenVPN

    1. Install OpenVPN using ipkg
      root@WNDR4300:~# ipkg install openvpn
      
    2. Dump your config file (e.g. amaram.vpn.conf) in the /jffs/etc/openvpn/ directory.
    3. For OpenVPN, I preferred to start it directly and avoid calling the openvpn init.d script.
      root@WNDR4300:~# LD_LIBRARY_PATH=/jffs/usr/lib /jffs/usr/sbin/openvpn --daemon --cd /jffs/etc/openvpn --config amaram.vpn.conf --log /tmp/openvpn.log
      
    4. You might need to modify iptables rules whenever openvpn starts. This can be achieved by passing the --route-up option to the openvpn binary, with its argument being the path to a script containing the firewall rules to be executed whenever a tunnel is established.
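      For example (the script path and rules below are illustrative; adapt them to your own firewall needs):

      root@WNDR4300:~# cat /jffs/etc/openvpn/route-up.sh
      #!/bin/sh
      # Allow forwarding over the VPN tunnel interface
      iptables -A FORWARD -o tun0 -j ACCEPT
      iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
      root@WNDR4300:~# chmod +x /jffs/etc/openvpn/route-up.sh
      root@WNDR4300:~# LD_LIBRARY_PATH=/jffs/usr/lib /jffs/usr/sbin/openvpn --daemon --cd /jffs/etc/openvpn --config amaram.vpn.conf --log /tmp/openvpn.log --route-up /jffs/etc/openvpn/route-up.sh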

    Configuring the launch script
    We finally have to write the /jffs/startup.sh script to automate setting up ipkg and starting the ssh and openvpn servers whenever the router reboots. Here is the content of the /jffs/startup.sh script that I am using:

    #!/bin/sh

    # Set PATH
    echo "export PATH=/jffs/bin:/jffs/sbin:/jffs/usr/bin:/jffs/usr/sbin:\$PATH" >> /etc/profile
    
    # Set LD_LIBRARY_PATH
    echo "export LD_LIBRARY_PATH=/jffs/usr/lib" >> /etc/profile
    
    # Setup ipkg
    echo -e "dest root /jffs\nsrc openwrt http://downloads.openwrt.org/kamikaze/8.09.2/ar71xx/packages" > /etc/ipkg.conf
    
    # SSH authorized_keys
    mkdir -p /tmp/.ssh
    echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6emYnBS1NLG1j1HsuMb3X6nI0+jWrpRvjhBSuB9q4lOO4NpxNdgCiDd7+qoYGLd4fE7hy/GYN1TvXXuDtDZPnuIOg8XaRxZg5wSDZV0nRsDNKGH8NikGzvxGEI9KeqBNrl+iRLS/ipl0QRmLpNScwXWOW6h9eP+S7GaL6Y56YyL+uwuUg14ow2nA2YFYQKLRXM20EiEm4C419XknYHsIG16ix2AamrH1CGJrQCo0m6f1Kf5OUjX8gSQvaToaD2J5NFbdGfaykW/RvmQH+37PlVnfE24SVrZ0ylRHvnqMTgSE1ZQ54U/zAbRpwB3vpEQCdW/kNz/gLwzbUHW0yzEw+w== rahul@rahul-laptop" > /tmp/.ssh/authorized_keys
    
    # Start SSH
    /jffs/etc/init.d/sshd start
    
    # Start openvpn
    LD_LIBRARY_PATH=/jffs/usr/lib /jffs/usr/sbin/openvpn --daemon --cd /jffs/etc/openvpn --config amaram.vpn.conf --log /tmp/openvpn.log
    
  • Linux Backup Solutions

    In an earlier post, I spoke about the need for backups. However, I hardly spoke about the backup solutions available for Linux, and only mentioned a piece of software that I no longer use. In this article, I will primarily focus on some of the popular Linux backup solutions.

    Factors to consider

    There are many questions you should ask yourself before deciding on a backup solution. Some of them are:

    • What is the scale of the backup? Is it just my system or is it multiple systems in the network?
    • Do I want to back up to hard disk, cloud (S3, Rackspace), tapes or CD/DVD?
    • What protocol should be used for backing up over the network?
    • Do I want the backed up archives to be automatically split into fixed sizes? (for example, for writing to CD/DVD)
    • When backing up Windows machines, should the permissions of files be backed up as well?
    • For network based backup, should it be a pull model or a push model, i.e. should the backup be initiated from the backup server or from the machine being backed up?
    • Do I want the backups to be stored opaquely (archives) or transparently (plain files)? Note that storing in a transparent format (i.e. as plain files) has the limitation that compression and encryption cannot be supported
    • What is the backup frequency and policy that I want to use?
    • Is it feasible to automatically purge old backup data?

    Graphical Backup Software for Desktop PCs

    There are three popular graphical backup programs: Déjà Dup (which is actually a graphical frontend to duplicity), Back In Time and luckyBackup. All of them are good, but each has features that the others lack, which made it impossible for me to decide which one is the best. Below is a comparison of all three. I leave it to you to decide which one you want to use.

    Description
    • Déjà Dup: a simple backup tool that hides the complexity of backing up the Right Way (encrypted, off-site, and regular) and uses duplicity as the backend.
    • Back In Time: a simple backup tool for Linux inspired by the “flyback project” and “TimeVault”. Backups are taken as snapshots of a specified set of directories.
    • luckyBackup: an application for data backup and synchronization powered by the rsync tool. It is simple to use, fast (transfers only the changes made and not all data), safe (checks all declared directories before proceeding with any data manipulation), reliable and fully customizable.

    Feature comparison (Déjà Dup | Back In Time | luckyBackup)
    • Scheduling method: Daemon (deja-dup-monitor) started upon desktop login, can also be scheduled using cron | Cron | Cron
    • Highest backup frequency: Daily (hourly is possible by following this tip) | 5 minutes | Cron compatible
    • Visual notification during backup: System tray | None | System tray
    • Simulated run (for backup and restore): No | No | Yes
    • Backup locations: S3, FTP, SSH, WebDAV, Windows share, custom location, local folder | SSH, local folder | SSH
    • Restore individual files: Yes | Yes | No
    • Backup data format: Opaque | All revisions stored transparently | Only the most recent backup stored transparently
    • File browser integration: Out-of-the-box integration with Nautilus (GNOME), can be integrated with Dolphin (KDE) as well | No | No
    • Support for multiple profiles: No | Yes | Yes
    • UI ease of use: Very simple | Easy | Average
    • Encrypted backups: Yes | No | No
    • Old backups purge policy: Based on time | Based on time | Based on number of snapshots

    As you can see, it is pretty difficult to select one clear winner, as features provided by one are not found in the others. I encourage you to take the above comparison as a starting point and evaluate each of the products before deciding on one.

    Command Line Backup Software

    rdiff-backup – rdiff-backup backs up one directory to another, possibly over a network. The target directory ends up a copy of the source directory, but extra reverse diffs are stored in a special subdirectory of that target directory, so you can still recover files lost some time ago. The idea is to combine the best features of a mirror and an incremental backup. rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, acls, and resource forks. Also, rdiff-backup can operate in a bandwidth efficient manner over a pipe, like rsync. Thus you can use rdiff-backup and ssh to securely back a hard drive up to a remote location, and only the differences will be transmitted. Finally, rdiff-backup is easy to use and settings have sensical defaults. Check out its complete set of features.

    duplicity – Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server. Check out its complete set of features.

    rsnapshot – rsnapshot is a filesystem snapshot utility for making backups of local and remote systems. Using rsync and hard links, it is possible to keep multiple, full backups instantly available. The disk space required is just a little more than the space of one full backup, plus incrementals. Depending on your configuration, it is quite possible to set up in just a few minutes. Files can be restored by the users who own them, without the root user getting involved. There are no tapes to change, so once it’s set up, your backups can happen automatically untouched by human hands. And because rsnapshot only keeps a fixed (but configurable) number of snapshots, the amount of disk space used will not continuously grow.

    DAR – Disk ARchive – dar is a shell command that backs up directory trees and files, taking care of hard links, Extended Attributes, sparse files, MacOS’s file forks, any inode type (including Solaris Door inodes), etc. From a filesystem, dar creates an archive, which may be split into a set of files (called slices) whose size is user defined. Dar archives can be stored on floppy, CD, DVD, USB key, hard disks, and since release 2.4.0 on tapes too. Dar can perform full, incremental, differential and decremental backups. Dar can be run on a live filesystem, and can be used through ssh for remote backups. Check out its complete set of features. Combined with SaraB, it is possible to schedule and rotate backups on random-access media. SaraB supports the Towers of Hanoi, Grandfather-Father-Son, or any custom backup rotation strategy. It is easy to use and highly configurable.

    Box Backup (supports continuous backup) – (As mentioned on Wikipedia) Box Backup is a client-server application in which a client sends data to the server for storage. The server provides management of clients via certificates, storage quotas, and data retention. Together, this creates a robust solution that scales, allowing clients with low-bandwidth connections to perform reliable backups effectively. Box Backup is ideal for backing up laptops and computers with intermittent or low-bandwidth connections, because it is capable of continuous data protection in the background, starting automatically when an internet connection is present, and recovering gracefully and efficiently from connection failures. Box Backup uses a modified version of the rsync algorithm that works with encrypted blocks. This allows it to store data on the server in a form that the server operator cannot read, while still uploading only the changed portions of data files.

    Enterprise Grade Network Backup Software

    My personal favourite when it comes to enterprise grade backup software is BackupPC. The most important reasons are:

    • Disk-based backup medium (and not tape based)
    • Simple to set up (unlike most other enterprise grade backup solutions)
    • Easy to use web interface where I can manage backups, restore from backups and view status of all backups (missing with most other solutions)
    • Compression and De-duplication
    • Password protection of archives. See http://akutz.wordpress.com/2008/12/23/creating-archives-with-backuppc-and-7zip/
    • Auto-splitting to fit on CDs/DVDs

    Other popular software I considered were Amanda (too complicated for small networks, and its primary backup medium is tapes) and Bacula (an almost ideal solution, but slightly complicated to set up; it also provides a native client for Windows, meaning backup and restore preserve file permissions as they are).

    References:

    • http://askubuntu.com/questions/2596/comparison-of-backup-tools
    • http://forums.linuxmint.com/viewtopic.php?f=90&t=87236
    • http://www.techdrivein.com/2010/12/top-5-open-source-backup-software-for.html
    • http://www.techradar.com/news/software/applications/best-linux-backup-software-8-tools-on-test-909380/4
    • http://www.techrepublic.com/blog/10things/10-outstanding-linux-backup-utilities/895
    • http://en.wikipedia.org/wiki/List_of_backup_software
  • Setting up hourly backups in Deja Dup

    Déjà Dup is an excellent graphical backup solution for the GNOME desktop environment. However, it is limited in that it does not offer a backup frequency higher than once a day through its graphical interface. So, we will leverage cron to achieve this for us. Adding a simple cron job with the command “xvfb-run deja-dup --backup --auto” will do the trick, but there is no fun in that. So let's do it with some style, so that we receive the same visual notification that we get whenever a backup happens.

    1. First we need to keep track of the X server display that we log into. This can be done by creating a temporary file whenever we login which contains the value of the DISPLAY environment variable. For this, we will have to add an autostart script. Running the below command should do that:
      $ echo -e '[Desktop Entry]\nName=Dump Display\nType=Application\nExec=echo $DISPLAY > /tmp/display-`whoami`' > ~/.config/autostart/dump-display.desktop
      
    2. Next add the below cron entry (customize the frequency as you like)
      0 * * * *       DISPLAY_FILE="/tmp/display-`whoami`" ; test -f $DISPLAY_FILE  && XCMD="env DISPLAY=`cat $DISPLAY_FILE`" || XCMD="xvfb-run" ; $XCMD deja-dup --backup --auto 1>/dev/null 2>&1
      
    3. If you do not have xvfb-run command, you can install the xvfb package which provides it:
      $ apt-get install xvfb

    However, it should be noted that if you log in and log out, the file /tmp/display-<username> is left behind with stale information, so the cron command might fail to run successfully until you log in to your desktop session again.

  • Super Grub2 Disk

    The primary purpose of Super GRUB2 Disk is to help you boot into an OS whose bootloader is broken. Documentation for Super GRUB2 Disk can be found at http://www.supergrubdisk.org/wiki/SuperGRUB2Disk.

    I had an old system with Intel YM430TX motherboard lying around which would not boot from its hard disk. So I decided to patch Super GRUB2 Disk for three things:

    1. To timeout and boot the “Detect any Operating System” option
    2. To timeout and boot the first detected OS
    3. To insert the appropriate modules while enabling PATA

    Here is the super_grub2_disk_1.99_beta_1_intel_ym430tx.patch. After applying the patch to super_grub2_disk_1.99_beta_1_source_code, I built the Super GRUB2 Disk ISO, burnt it to a CD-ROM, inserted it in the CD drive, and it then automatically booted Linux from my hard disk.

  • Convert HTML to PDF using wkhtmltopdf

    wkhtmltopdf is an excellent utility for converting a URL or an HTML file to PDF. Kindly note that the current version of wkhtmltopdf in Debian, as of today, is 0.9.9, which is outdated. I highly recommend that you download the latest featured version from http://code.google.com/p/wkhtmltopdf/downloads/list and use it instead. Here are some examples of using wkhtmltopdf.

    Simple conversion of html to pdf.

    # wkhtmltopdf http://www.google.com google.pdf
    # wkhtmltopdf input.html output.pdf
    

    The input and output files can be replaced with standard input and standard output.

    # cat input.html | wkhtmltopdf - - > output.pdf
    

    Disabling PDF outline (versions prior to 0.10.0 had outline disabled by default).

    # wkhtmltopdf --no-outline input.html output.pdf
    

    Adding a custom header and footer.

    # wkhtmltopdf --header-left "Some header text" --footer-center "Some footer text" input.html output.pdf
    

    References:
    http://code.google.com/p/wkhtmltopdf/
    http://code.google.com/p/wkhtmltopdf/wiki/Usage
    http://madalgo.au.dk/~jakobt/wkhtmltoxdoc/

  • Screencast in Linux

    After trying out various screencasting applications in Linux, I found the below set of applications to be the best for creating, modifying and editing screencasts. Kindly note that all of the software mentioned below is available in the Debian main repository.

    • recordMyDesktop: recordMyDesktop is a free and open source desktop screencasting software application written for GNU/Linux. The program is separated into two parts; a command line tool that performs the tasks of capturing and encoding, and an interface that exposes the program functionality graphically. There are two front-ends written in python with pyGtk (gtk-recordMyDesktop) and pyQt4 (qt-recordMyDesktop). RecordMyDesktop also offers the ability to record audio through ALSA, OSS or the JACK audio server. RecordMyDesktop only outputs to Ogg using Theora for video and Vorbis for audio.
    • OpenShot: OpenShot Video Editor is a free and open-source video editing software package for Linux, built with Python, GTK, and the MLT Framework. The project was started in August 2008 by Jonathan Thomas, with the objective to provide a stable, free, and friendly to use video editor. Some of its features include:

      • Support for many video, audio, and image formats (based on FFmpeg)
      • Multiple tracks
      • Clip resizing, trimming, snapping, and cutting
      • Video transitions with real-time previews
      • Compositing, image overlays, watermarks
      • Title templates, title creation, sub-titles
      • Drag and drop timeline
      • Video encoding (based on FFmpeg)
      • Audio mixing and editing
      • Digital video effects, including brightness, gamma, hue, greyscale, chroma key (bluescreen / greenscreen), and over 20 other video effects
    • Ogg Video Tools: The “Ogg Video Tools” is a toolbox for manipulating Ogg video files, which usually consist of a video stream (Theora) and an audio stream (Vorbis). It includes a number of handy command line tools for creating and manipulating these video files, such as for splitting the different streams. The following tools are currently available: oggResize, oggThumb, oggSlideshow, oggCut, oggCat, oggSplit, oggJoin, oggDump and oggLength.
  • Reducing VirtualBox Dynamic VDI Size

    Detailed instructions for reducing a dynamic VDI size can be found at http://forums.virtualbox.org/viewtopic.php?p=29272 (FAQ entry “How can I reduce the size of a dynamic VDI on disk?”).

    Reducing the size of a dynamic VDI is mainly a two stage process:

    1. Fill the filesystem free space with zeroes
      The way mentioned in the FAQ is to start the system in single-user mode, mount the root filesystem as read-only and run the “zerofree” command. But trying to remount the root filesystem as read-only (from single-user mode) in Debian was failing with the message “mount: / is busy”. An easy way to work around this is to first boot using a live CD such as SystemRescueCD, which comes bundled with zerofree. If you boot using an Ubuntu live CD, you might have to install the zerofree package after booting. Next, you just have to run the zerofree command for each of the VDI's Linux (ext2/ext3/ext4) partitions.

      # zerofree -v /dev/sda1
      # zerofree -v /dev/sda2
      # zerofree -v /dev/sda3
      ...
      ...
      

      You will have to fill any non-Linux partitions, such as NTFS, with zeros as well (refer to the FAQ entry mentioned above).
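      One generic way to do that for an NTFS partition (here assumed to be /dev/sda1) from the live CD is to mount it with ntfs-3g, fill the free space with a file of zeroes and delete the file again:

      # mkdir -p /mnt/windows
      # mount -t ntfs-3g /dev/sda1 /mnt/windows
      # dd if=/dev/zero of=/mnt/windows/zerofill bs=1M   # stops with "No space left on device", which is expected
      # rm /mnt/windows/zerofill
      # umount /mnt/windows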

    2. Compacting the VDI
      The VDI compact command should be run from the host system.

      # VBoxManage modifyhd /full/path/to/xxxx.vdi --compact
      

    For your convenience, I have written a script which takes the absolute path to the VDI as an argument, mounts any Linux (ext2/ext3/ext4) partition it finds in the VDI, fills it with zeroes, and then finally compacts the VDI. I hope you find the script useful. Download the script, extract it (gunzip compact-vdi.sh_.gz && mv compact-vdi.sh_ compact-vdi.sh), and execute it (./compact-vdi.sh vdi-absolute-path).
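    If you prefer doing it by hand, the rough idea behind the script can be sketched with qemu-nbd (this is not the script itself; it assumes the qemu-utils and zerofree packages are installed on the host and that the VM using the VDI is powered off):

    # modprobe nbd max_part=16
    # qemu-nbd -c /dev/nbd0 /full/path/to/xxxx.vdi
    # zerofree -v /dev/nbd0p1      # repeat for each Linux partition found in the VDI
    # qemu-nbd -d /dev/nbd0
    # VBoxManage modifyhd /full/path/to/xxxx.vdi --compact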