IaaS Containers¶
With the increasing awareness of containers in the IT world, driven largely by the high profile and sky-rocketing popularity of Docker [1], we are witnessing an accelerated shift in how people think about application development and deployment, as well as ongoing application management.
Containers are not new to Unix/Linux users. The concept is more or less based on the primitives of chroot, which has existed in most UNIX-like operating systems since the early 80s. Many enhanced variants and new solutions have been developed since then, including:
- FreeBSD jail : Introduced in FreeBSD 4.x in 2000
- Linux-VServer : Started in 2001; no longer in active maintenance.
- Solaris Containers (Zones) : Introduced in Solaris 10 in 2004.
Many more were created and further improved in the Linux world after cgroups and namespaces made it into the mainline Linux kernel around 2007:
- OpenVZ [6] : Actively maintained since 2005
- Linux Containers - LXC [2]/LXD [3] : Actively maintained since 2008
- Docker [4] : Actively maintained since 2013 (Originally based on LXC. Docker now uses its own libcontainer to facilitate sandbox execution environment)
- rkt [5] : Actively maintained since 2014
The above four projects are probably the most widely known, with Docker [4] leading the pack in popularity among application-centric containers, and LXD [3] and OpenVZ [6] as the leading “full system” containers.
Note
LXD [3] sits on top of LXC [2] to provide extra management features, controllable over the network and an API layer for third-party integration (there is an existing OpenStack Nova plugin, nova-compute-lxd). The project is led by Canonical Ltd (Ubuntu).
Docker and rkt are thin containers designed to be, as far as possible, singular service instances: ephemeral and stateless. LXD [3] and OpenVZ [6], on the other hand, are full system containers intended to look and feel like a normal Linux environment. Think of Docker [4] and rkt [5] as closer to chroot (the most common use case for chroot being to contain a single program/process), while LXD [3] and OpenVZ [6] sit somewhere between chroot and a lightweight hypervisor, akin to a lightweight paravirtualised KVM. (You can even run Docker [4] inside LXD [3], i.e., run a micro-service container on a system container.)
LXD and OpenVZ [6] are not better than Docker [4] and rkt [5], or vice versa. Each has pros and cons, and they generally serve different use cases and purposes, although they may also overlap in functionality. They can even be used simultaneously by running app containers on top of system containers.
This article centres around LXD, as it is more flexible (in terms of what a full system execution environment allows you to customise) and makes it easier to demonstrate how containers look, feel and behave: we can demonstrate LXD [3] as a system container, and also run Docker/rkt examples inside an LXD [3] system container, whereas the reverse is not possible with app containers alone.
Linux Containers - LXD¶
LXD is a Linux container technology under the umbrella of the Linux Containers project. It leverages the Linux cgroups and namespaces work that started back in 2007, and wraps it in a set of tools for running lightweight virtualised Linux OS instances, with support for some advanced networking and storage features.
Installing LXD¶
Brief installation overview of LXD [3]
Install LXD on Ubuntu Linux¶
In Ubuntu 16.04, LXD is installed by default.
For Ubuntu 15.10, install by running,
apt-get install lxd
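After installing, it is worth confirming that both the client and the daemon are present. A minimal sketch; the systemd service name lxd is an assumption based on the Ubuntu packaging:

```shell
# Verify the LXD client and daemon report their versions
lxc --version
lxd --version

# Check that the LXD daemon is running (assumed Ubuntu systemd service name)
systemctl status lxd --no-pager
```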
Setup LXD on Ubuntu Linux¶
Create ZFS volume¶
# zpool create <ZFS pool name> /dev/<disk-label>
zpool create zvol01 /dev/sdc
Create a ZFS dataset for the LXD repository from the zvol01 pool,
zfs create zvol01/lxd
zfs set compression=on zvol01/lxd
zfs set sync=disabled zvol01/lxd
zfs set atime=off zvol01/lxd
zfs set checksum=off zvol01/lxd
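The property changes can then be verified before handing the dataset to LXD. Note that checksum=off trades ZFS's data-integrity protection for a small performance gain, so enable it only if that trade-off is acceptable. A quick verification sketch:

```shell
# Show the properties we just set on the LXD dataset
zfs get compression,sync,atime,checksum zvol01/lxd

# Confirm the pool and dataset exist
zpool list zvol01
zfs list -r zvol01
```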
Initialise LXD Configuration¶
lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? no
Name of the existing ZFS pool or dataset: zvol01/lxd
Would you like LXD to be available over the network (yes/no)? yes
Address to bind LXD to (not including port): 127.0.0.1
Port to bind LXD to (8443 recommended): 8443
Trust password for new clients: <password>
Again: <password>
Do you want to configure the LXD bridge (yes/no)? yes
# <Wizard starts. Use default values>
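The wizard's answers map onto ordinary LXD server configuration keys, so the same settings can also be applied (or changed later) non-interactively. A sketch using the core.https_address and core.trust_password keys; the password value is illustrative:

```shell
# Make the LXD API reachable over the network on localhost:8443
lxc config set core.https_address 127.0.0.1:8443

# Set the trust password presented by new remote clients
lxc config set core.trust_password "some-password"
```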
Setup Remote Image Source¶
Optionally, add a remote LXD image repository as our image source.
# lxc remote add <local name> <remote URL/FQDN>
Note
images.linuxcontainers.org is an unofficial remote image repository for LXD; it is referred to as “images”. However, it is often treated as the de facto remote repository for test and development. To list all available remote image repositories, type lxc remote list
List all images available from images.linuxcontainers.org.
# lxc image list [image_repository_name:] [filter]
lxc image list images:
Or list only Ubuntu images,
lxc image list images: ubuntu
Copy Remote Image to Local Store¶
Assuming that we want to copy the 64-bit Ubuntu Xenial (16.04) image and save it locally, we would do,
# lxc image copy <remote>:<image path> local: --alias=<preferred-name>
lxc image copy images:ubuntu/xenial/amd64 local: --alias=lxd-xenial
or copy the remote image retaining its associated aliases,
lxc image copy images:ubuntu/xenial/amd64 local: --copy-aliases
To list the images stored locally, type lxc image list
Launch LXD Container from Local Image¶
# lxc launch <local image name> <container name>
lxc launch lxd-xenial first-container
This will create an LXD container called first-container and start it.
Or create a container without starting it,
lxc init lxd-xenial first-container
To display the list of containers, type
lxc list
To run a bash shell inside first-container,
lxc exec first-container bash
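Beyond exec, day-to-day lifecycle management follows the same command pattern. A short sketch; the snapshot name snap0 is illustrative:

```shell
# Run a one-off command without an interactive shell
lxc exec first-container -- uname -a

# Take a point-in-time snapshot of the container
lxc snapshot first-container snap0

# Stop and start the container
lxc stop first-container
lxc start first-container

# Remove the container entirely (only when finished with it)
lxc delete first-container
```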
Create an image from a container; this is useful for establishing a baseline image after installing packages.
# lxc publish <container> --alias <image name>
lxc publish first-container --alias my-image
or create an image from a container snapshot,
lxc publish first-container/some-snapshot --alias my-image-2
To add a directory on the host and mount it inside a container
# lxc config device add <container> <device-name> <type> source=<local path> path=<path in container>
lxc config device add first-container testdisk disk source=/container/repo/first-container path=/home
The above command mounts the directory /container/repo/first-container on the underlying host to first-container’s /home
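Devices attached this way can be inspected and detached again later. A sketch, assuming the testdisk device added above:

```shell
# List the devices currently attached to the container
lxc config device list first-container

# Show the container's full configuration, including devices
lxc config show first-container

# Detach the directory when it is no longer needed
lxc config device remove first-container testdisk
```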
Note
Device mounting is not allowed inside a container by default (FUSE is fine, as it is a userspace interface). The default behaviour can be overridden by changing the profile settings; for example, if a container is unprivileged we can turn off AppArmor confinement: lxc config set my-container raw.lxc lxc.aa_profile=unconfined
Networking¶
Containers can be allowed to join the same network segments the parent host has access to, i.e., a bridged network. This can be achieved via profiles.
Create a Bridge Interface¶
Install the bridge-utils package by running,
apt install bridge-utils
Create new bridge interface by editing /etc/network/interfaces
# Config for primary network interface (e.g., enp0s25 or eth0) to be used as bridge uplink
# Set it to manual
auto enp0s25
iface enp0s25 inet manual
# Setup bridge interface here, e.g., vbr0
auto vbr0
iface vbr0 inet dhcp
bridge_ports enp0s25
Restart networking services to apply the configuration changes.
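On Ubuntu releases using ifupdown (as in the /etc/network/interfaces example above), the restart can be sketched as follows; the interface names enp0s25 and vbr0 follow the example configuration:

```shell
# Cycle the uplink and bring the new bridge up with the edited configuration
sudo ifdown enp0s25
sudo ifup enp0s25
sudo ifup vbr0

# Or restart the whole networking service instead
sudo systemctl restart networking

# Confirm the bridge is up and enslaves the uplink (from bridge-utils)
brctl show vbr0
```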
Configure Container to Use Bridge Interface¶
Once the new bridge interface is up and running, containers can start using it. This can be done either via the lxc command or by creating a new profile.
To change the default profile to use the new bridge interface, run
lxc profile device set default eth0 parent vbr0
Alternatively, create a separate profile with a reference to the bridge interface. This has the flexibility of allowing containers to be assigned different network profile settings. The following example clones the default profile to a new profile named ‘hostbridge’ and changes the parent interface to the new bridge interface.
lxc profile copy default hostbridge
lxc profile device set hostbridge eth0 parent vbr0
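A container can then be launched directly with the new profile via the --profile flag; the image alias lxd-xenial and container name web01 follow the earlier examples and are illustrative:

```shell
# Launch a new container using the bridged network profile
lxc launch lxd-xenial web01 --profile hostbridge

# Or apply the profile to an existing container
lxc profile apply first-container hostbridge

# Verify the container picked up an address from the host's network
lxc list web01
```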
[1] Docker’s billion valuation
[2] https://linuxcontainers.org/lxc/introduction/
[3] https://linuxcontainers.org/lxd/introduction/
[4] https://www.docker.com/
[5] https://coreos.com/rkt/docs/latest/
[6] https://openvz.org/Main_Page