Understanding and Using LXC and LXD
LXC/LXD is a lightweight Linux container system that behaves much like a virtual machine (VM). Each container has its own filesystem, process space, and network stack, isolating it from the host and from other containers. Rather than emulating hardware, containers share the host's kernel, so they run much more efficiently than full VMs.
Interesting uses for lxc:
- compartmentalization (for security, or imposing resource limits)
- self-contained build or application environments that are independent of the host OS
- run an older OS version than the host (e.g. run an Ubuntu 12.04 container on a 16.04 host)
- run a different distro than the host (e.g. run a Fedora container on an Ubuntu host)
What is LXC (lex-see)?
From the official LXC page:
“LXC containers are often considered as something in the middle between a chroot and a full fledged virtual machine. The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel.”
“LXC is a userspace interface for the Linux kernel containment features. Through a powerful API and simple tools, it lets Linux users easily create and manage system or application containers.
“Current LXC uses the following kernel features to contain processes:
- Kernel namespaces (ipc, uts, mount, pid, network and user)
- Apparmor and SELinux profiles
- Seccomp policies
- Chroots (using pivot_root)
- Kernel capabilities
- CGroups (control groups)”
What is LXD (lex-dee)?
From the official LXD page:
“LXD is a container ‘hypervisor’ and a new user experience for LXC.” Basically it’s a layer above lxc that manages containers using liblxc and a Go binding. “The daemon exports a REST API both locally and, if enabled, over the network.”
“It's basically an alternative to LXC's tools and distribution template system with the added features that come from being controllable over the network.”
How does LXC/LXD differ from Virtualbox, Docker?
VirtualBox is a full VM: it emulates hardware and runs a guest kernel on top of it, whereas lxc does not emulate hardware and uses the same kernel as the host.
Docker containers typically run a single process and do not have an init system, whereas an lxc container does run init and can therefore host a full userspace. Docker containers also have less networking flexibility. It is possible to run Docker inside an lxc container.
By default, when a container is created it is assigned a non-routable IP address dynamically via DHCP, and that address is attached to the bridge lxdbr0, which performs NAT. All kinds of configurations are possible, such as:
- routable IP address so packets flow directly to the container
- bridge two interfaces, e.g. for a front-end container that needs a routable IP address to talk to the internet while bridging to containers running various back-end application components on non-routable IP addresses
- fan networking, for very dense virtualization
Note that lxd 2.3+ has much better networking support: most network configuration is done in lxd itself and stored alongside the rest of the container config, instead of having to be configured separately.
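For example, on lxd 2.3+ a managed NAT bridge can be created, attached, and inspected entirely through the lxc client (the bridge name, address, and container name below are illustrative):

```shell
# Create a NAT-ed bridge managed by lxd (names and addresses are examples).
lxc network create testbr0 ipv4.address=10.99.0.1/24 ipv4.nat=true

# Attach it to an existing container as its eth0 interface.
lxc network attach testbr0 mycontainer eth0

# Inspect the configuration lxd stores for the bridge.
lxc network show testbr0
```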
A Word about ZFS
ZFS is the preferred filesystem to use with lxd, due to:
- instantaneous snapshots
- per-filesystem quotas and reservations
- partition-less, so any number of filesystems can be created dynamically
- ease of transferring filesystems between hosts (zfs send/recv), which is useful for migrating containers between hosts
Installation (Ubuntu 16.04 or newer)
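On Ubuntu 16.04 and newer, lxd is available from the standard archive; a typical install looks like this (the ZFS package is only needed if you want the ZFS storage backend):

```shell
sudo apt update
sudo apt install lxd zfsutils-linux   # zfsutils-linux only if using ZFS

# Walk through storage and networking setup interactively.
sudo lxd init
```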
LXC/LXD Crib Sheet
Launch a new container
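For example (the container name web1 is arbitrary):

```shell
# Launch an Ubuntu 16.04 container named web1 from the ubuntu: image server.
lxc launch ubuntu:16.04 web1
```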
Get a list of OS images
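The default remotes carry official Ubuntu images as well as community images for other distros:

```shell
lxc image list ubuntu:    # official Ubuntu images
lxc image list images:    # community images (Fedora, Debian, Alpine, ...)
```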
Start a shell inside a container
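Assuming a container named web1:

```shell
# Run an interactive shell inside the container.
lxc exec web1 -- /bin/bash
```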
See running containers
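```shell
# Show all containers with their state and IP addresses.
lxc list
```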
Start/stop a container
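Again assuming a container named web1:

```shell
lxc start web1
lxc stop web1
lxc stop web1 --force   # if a clean shutdown hangs
```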
Move files into/out of a container
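Note the path syntax: the container name is prefixed directly onto the in-container path (file names here are illustrative):

```shell
# Push a local file into the container.
lxc file push ./app.conf web1/etc/app.conf

# Pull a file back out of the container.
lxc file pull web1/var/log/syslog ./web1-syslog
```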
Limit how much disk space a container can use (ZFS/btrfs only)
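One way, assuming the container has its own root disk device (if the root device is inherited from a profile, you may first need to add a container-local one), is to set a size on that device:

```shell
# Give web1 a 20GB quota on its root filesystem (ZFS/btrfs backends only).
lxc config device set web1 root size 20GB
```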
Add a remote lxd to control
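On the remote host, enable the network listener and set a trust password; then register it from the local host (the remote name build1 and the address are placeholders):

```shell
# On the remote host: listen on the network and set a trust password.
lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password some-password

# On the local host: add the remote under the name build1.
lxc remote add build1 192.0.2.10
```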
Create and launch a container on a remote host
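The remote name is prefixed onto the container name, so with a remote called build1:

```shell
# Launch a container named web2 on the remote lxd host build1.
lxc launch ubuntu:16.04 build1:web2
```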
Connect to a remote container
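The same remote:name prefix works for exec:

```shell
# Shell into a container running on the remote build1.
lxc exec build1:web2 -- /bin/bash
```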
Move a container from one instance of lxd to another
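A sketch, assuming a local container web1 and a configured remote build1; the container must be stopped before moving:

```shell
lxc stop web1
lxc move web1 build1:web1

# Alternatively, copy instead of move (the original stays put):
lxc copy web1 build1:web1-copy
```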
Automatic Container Setup upon First Boot
Cloud-init is the de facto standard for specifying container configuration at first boot; it may have originated with AWS. With it you can do things on first boot like:
- copy ssh keys into the container
- update all packages to the latest
- pre-install packages
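A minimal sketch of the three items above: a cloud-config document can be passed through the user.user-data config key at launch time (the SSH key and package list are placeholders):

```shell
# Launch with a cloud-config that runs on first boot (values are examples).
lxc launch ubuntu:16.04 web1 -c user.user-data="$(cat <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example    # placeholder key
package_update: true
package_upgrade: true
packages:
  - nginx
EOF
)"
```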