.. post:: Nov 23, 2015
   :tags: containers, lxc, lxd, docker
   :author: Tze Liang

.. include:: subs.inc

.. _sec-iaas-containers:

***************
IaaS Containers
***************

With the increasing awareness of containers in the IT world, largely due to the high profile and sky-rocketing popularity of Docker [#docker-valuation]_, we are witnessing an accelerated shift in how people think about application development and deployment, as well as ongoing application management.

Containers are not new to Unix/Linux users. Their concepts are more or less based on the primitives of chroot, which has existed in most UNIX-like operating systems since the early 80s. Many enhanced variants and new solutions have been developed since then, including:

#. FreeBSD jail: started with FreeBSD 4.x in 2000
#. Linux-VServer: started in 2001, no longer in active maintenance
#. Solaris Containers (Zones): started with Solaris 10 in 2004

Many more were created and further improved in the Linux world after cgroups and namespaces made it into the mainline Linux kernel in 2007:

#. |r_openvz|_ : actively maintained since 2005
#. Linux Containers - |r_lxc|_/|r_lxd|_ : actively maintained since 2008
#. |r_docker|_ : actively maintained since 2013 (originally based on LXC; Docker now uses its own libcontainer to provide the sandboxed execution environment)
#. |r_rkt|_ : actively maintained since 2014

The above four projects are probably the most widely known, with |r_docker|_ leading the pack in popularity as an application-centric container, and |r_lxd|_ and |r_openvz|_ as the leading "full system" containers.

.. note::

   |r_lxd|_ sits on top of |r_lxc|_ to provide extra management features, controllable over the network and an API layer for third-party integration (there is an existing OpenStack Nova plugin, nova-compute-lxd). The project is led by Canonical Ltd (Ubuntu).
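As a quick illustration of the namespace primitives mentioned above, every Linux process exposes its namespace memberships under :file:`/proc`; a minimal check (a sketch, assuming a reasonably modern kernel):

```shell
# Each entry is a kernel namespace this shell belongs to (mnt, pid, net, ...).
# Container runtimes isolate processes by creating fresh copies of these.
ls -l /proc/self/ns/
```

Running the same command inside a container would show different namespace identifiers from those of the host, which is exactly the isolation the projects below build upon.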
|r_docker|_ and |r_rkt|_ are thin containers designed to be, as much as possible, singular service instances that are ephemeral and stateless, whereas |r_lxd|_ and |r_openvz|_ are full system containers intended to look and feel like a normal Linux environment. Think of |r_docker|_ and |r_rkt|_ as closer to chroot (the most common use case for chroot being to contain a single program/process), while |r_lxd|_ and |r_openvz|_ sit somewhere between chroot and a lightweight hypervisor, akin to a lightweight paravirtual KVM. (You can even run |r_docker|_ inside |r_lxd|_, i.e., run micro-service containers on top of a system container.)

|r_lxd|_ and |r_openvz|_ are not better than |r_docker|_ and |r_rkt|_, or vice versa. Each has its pros and cons, and they generally serve different use cases and purposes, although they may overlap in functionality. They can even be used simultaneously by running app containers on top of system containers.

This article centres around |r_lxd|_, as it is more flexible (insofar as the customisation that a full system execution environment allows) and makes it easier to demonstrate how containers look, feel and behave. That is, we can demonstrate |r_lxd|_ as a system container, and also demonstrate Docker/rkt examples, all inside an |r_lxd|_ system container; it is not possible to demonstrate both use cases with app containers alone.

.. _sec-iaas-containers-lxd:

Linux Containers - LXD
======================

|r_lxd|_ is a Linux-based container technology under the umbrella of the Linux Containers project. It leverages the Linux cgroups and namespaces work that started back in 2007, and wraps it in a set of tools to run lightweight virtualised Linux OS instances, with support for some advanced networking and storage features.

.. _sec-iaas-lxd-install:

Installing LXD
--------------

A brief installation overview of |r_lxd|_.
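Before installing, it can be worth confirming that the running kernel exposes the cgroup controllers LXC/LXD rely on; a minimal check (output varies by kernel):

```shell
# Controllers such as cpu, memory and devices should be listed here;
# LXC uses them to confine the resource usage of containers.
cat /proc/cgroups
```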
.. _sec-iaas-lxd-install-ubuntu:

Install LXD on Ubuntu Linux
^^^^^^^^^^^^^^^^^^^^^^^^^^^

In Ubuntu 16.04, LXD is installed by default. For Ubuntu 15.10, install it by running:

.. code-block:: bash

   apt-get install lxd

.. _sec-iaas-lxd-setup:

Setup LXD on Ubuntu Linux
^^^^^^^^^^^^^^^^^^^^^^^^^

Create ZFS volume
#################

Create a ZFS pool (here named zvol01) on a spare disk:

.. code-block:: bash

   # zpool create <pool_name> <device>
   zpool create zvol01 /dev/sdc

Create a ZFS dataset for the LXD repository from the zvol01 pool:

.. code-block:: bash

   zfs create zvol01/lxd
   zfs set compression=on zvol01/lxd
   zfs set sync=disabled zvol01/lxd
   zfs set atime=off zvol01/lxd
   zfs set checksum=off zvol01/lxd

Initialise LXD Configuration
############################

.. code-block:: bash

   lxd init

   Name of the storage backend to use (dir or zfs): zfs
   Create a new ZFS pool (yes/no)? no
   Name of the existing ZFS pool or dataset: zvol01/lxd
   Would you like LXD to be available over the network (yes/no)? yes
   Address to bind LXD to (not including port): 127.0.0.1
   Port to bind LXD to (8443 recommended): 8443
   Trust password for new clients:
   Again:
   Do you want to configure the LXD bridge (yes/no)? yes

Setup Remote Image Source
#########################

Optionally, add the remote image repository images.linuxcontainers.org as our image source:

.. code-block:: bash

   # lxc remote add <name> <url>
   lxc remote add images images.linuxcontainers.org

.. note::

   images.linuxcontainers.org is an unofficial remote image repository for LXD, referred to as "images". However, it is often treated as the de-facto remote repository for test and development.

To list all available remote image repositories, type :command:`lxc remote list`.

List all images available from images.linuxcontainers.org:

.. code-block:: bash

   # lxc image list [image_repository_name:] [filter]
   lxc image list images:

Or list only Ubuntu images,
.. code-block:: bash

   lxc image list images: ubuntu

Copy Remote Image to Local Store
################################

Assuming that we want to copy the 64-bit Ubuntu Xenial (16.04) image and save it locally, we would do:

.. code-block:: bash

   # lxc image copy <remote_image> local: --alias=<alias_name>
   lxc image copy images:/ubuntu/xenial/amd64 local: --alias=lxd-xenial

or copy the remote image retaining its associated aliases:

.. code-block:: bash

   lxc image copy images:/ubuntu/xenial/amd64 local: --copy-aliases

List the images stored locally:

.. code-block:: bash

   lxc image list

Launch LXD Container from Local Image
#####################################

.. code-block:: bash

   # lxc launch <image> <container_name>
   lxc launch ubuntu/xenial/amd64 first-container

This will create an LXD container called first-container and start running it. Or create a container without starting it:

.. code-block:: bash

   lxc init ubuntu/xenial/amd64 first-container

To display the list of containers, type:

.. code-block:: bash

   lxc list

To run bash (and therefore run inside first-container):

.. code-block:: bash

   lxc exec first-container bash

Create an image from a container, useful for establishing a baseline image after installing packages:

.. code-block:: bash

   # lxc publish <container> --alias <alias_name>
   lxc publish first-container --alias my-image

or create an image from a container snapshot:

.. code-block:: bash

   lxc publish first-container/some-snapshot --alias my-image-2

To add a directory on the host and mount it inside a container:

.. code-block:: bash

   # lxc config device add <container> <device_name> disk source=<host_path> path=<container_path>
   lxc config device add first-container testdisk disk source=/container/repo/first-container path=/home

The above command will mount the directory :file:`/container/repo/first-container` on the underlying host to first-container's :file:`/home`.

.. note::

   Device mounting is not allowed inside a container by default (FUSE is fine, as it is a userspace interface).
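The device-add invocation above packs several positional and key=value arguments, which is easy to get wrong when scripting. A tiny wrapper that assembles the command for review is one way to keep the intent clear; a minimal sketch (the `mount_hostdir` name and argument order are our own, not part of the LXD tooling):

```shell
# Build an 'lxc config device add' disk-mount command for review.
# Usage: mount_hostdir <container> <device_name> <host_path> <container_path>
mount_hostdir() {
    echo "lxc config device add $1 $2 disk source=$3 path=$4"
}

# Print the command before deciding to run it:
mount_hostdir first-container testdisk /container/repo/first-container /home
```

Printing first keeps the sketch side-effect free; piping the output to `sh` on a host with LXD installed would execute it.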
These defaults can be overridden by changing the profile settings. For example, if a container is unprivileged, we can turn off the AppArmor confinement with :command:`lxc config set my-container raw.lxc lxc.aa_profile=unconfined`.

Networking
----------

To give containers the ability to join the same network segments that the parent host has access to, i.e., a bridged network, we can use profiles.

Create a Bridge Interface
^^^^^^^^^^^^^^^^^^^^^^^^^

Install the bridge-utils package by running:

.. code-block:: bash

   apt install bridge-utils

Create a new bridge interface by editing :file:`/etc/network/interfaces`:

.. code-block:: cfg

   # Config for primary network interface (e.g., enp0s25 or eth0) to be
   # used as the bridge uplink. Set it to manual.
   auto enp0s25
   iface enp0s25 inet manual

   # Setup bridge interface here, e.g., vbr0
   auto vbr0
   iface vbr0 inet dhcp
       bridge_ports enp0s25

Restart networking services to apply the configuration changes.

Configure Container to Use Bridge Interface
###########################################

Once the new bridge interface is up and running, a container can start using it. This can be done either via the lxc command or by creating a new profile. To change the default profile to use the new bridge interface, run:

.. code-block:: bash

   lxc profile device set default eth0 parent vbr0

Alternatively, create a separate profile with a reference to the bridge interface. This has the flexibility of allowing containers to be assigned different network profile settings. The following example clones the default profile to a new profile named 'hostbridge' and changes the parent interface to the new bridge interface:

.. code-block:: bash

   lxc profile copy default hostbridge
   lxc profile device set hostbridge eth0 parent vbr0

.. [#docker-valuation] Docker's billion-dollar valuation

.. target-notes::