The target audience is anybody interested in learning about and using LXC. But more specifically,
- those coming from another distro who have used or know about LXC and want to use it in OMLx (like the author of this thread)
- testers who are willing to test the LXC package
OK. Here we go.
Installation:
LXC is provided as the package lxc in OMLx. So you simply do,
sudo dnf install lxc
Configuration:
LXD (and probably Incus, which I have not used) has a nice setup command, lxd init, which takes you through the initial configuration. We don’t have such a tool in ‘raw’ LXC. Hence this guide.
The configuration files are under:
- /usr/share/lxc (package-provided)
- /etc/sysconfig and /etc/lxc, for system-wide configuration
- ~/.config/lxc, for per-user configuration
The downloaded ‘image’ files are under:
- /var/cache/lxc/ for the root user
- ~/.cache/lxc/ for a regular user
The containers will be under:
- /var/lib/lxc/ for the root user
- ~/.local/share/lxc for a regular user
The contents of /etc/lxc/default.conf are as below.
$ cat /etc/lxc/default.conf
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 10:66:6a:xx:xx:xx
You don’t have to touch it unless you have some specific requirements. The defaults are good enough.
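If you do have specific requirements, the place to override these defaults is the individual container’s config rather than this file (for root containers, ‘/var/lib/lxc/&lt;containername&gt;/config’). For example, to pin a fixed MAC address instead of the randomized ‘xx:xx:xx’ one, you could add a line like the following there (the address is just an illustrative value):
lxc.net.0.hwaddr = 10:66:6a:aa:bb:cc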
The contents of /etc/sysconfig/lxc are:
[sv@openmandriva-x8664 ~]$ cat /etc/sysconfig/lxc
# LXC_AUTO - whether or not to start containers at boot
LXC_AUTO="true"
# BOOTGROUPS - What groups should start on bootup?
# Comma separated list of groups.
# Leading comma, trailing comma or embedded double
# comma indicates when the NULL group should be run.
# Example (default): boot the onboot group first then the NULL group
BOOTGROUPS="onboot,"
# SHUTDOWNDELAY - Wait time for a container to shut down.
# Container shutdown can result in lengthy system
# shutdown times. Even 5 seconds per container can be
# too long.
SHUTDOWNDELAY=5
# OPTIONS can be used for anything else.
# If you want to boot everything then
# options can be "-a" or "-a -A".
OPTIONS=
# STOPOPTS are stop options. The can be used for anything else to stop.
# If you want to kill containers fast, use -k
STOPOPTS="-a -A -s"
USE_LXC_BRIDGE="false"
[ ! -f /etc/sysconfig/lxc-net ] || . /etc/sysconfig/lxc-net
LXC needs a bridge set up to work properly. I don’t know why the default is set to false, but leave this file untouched. Instead, create and/or edit the lxc-net file under /etc/sysconfig/.
echo 'USE_LXC_BRIDGE="true"' | sudo tee /etc/sysconfig/lxc-net
There are other settings we can add to this file, like disabling IPv6, changing the IP address range, defining a domain, etc. But the above line will suffice to get our containers working.
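For reference, here is a sketch of what such settings might look like; the variable names come from the upstream lxc-net script, and the values are illustrative, not OMLx defaults:
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
LXC_DOMAIN="lxc.example"
# leaving the LXC_IPV6_* variables unset keeps IPv6 off the bridge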
We now have to enable the ‘lxc-net’ and ‘lxc’ services.
sudo systemctl enable --now lxc-net.service
sudo systemctl enable --now lxc.service
This will set up the ‘lxcbr0’ bridge that will be used by the containers. You can verify that the bridge is set up with ip a:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:1f:09:29 brd ff:ff:ff:ff:ff:ff
altname enx0800271f0929
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s3
valid_lft 84080sec preferred_lft 84080sec
inet6 fd00::d19e:c70c:13dc:f30a/64 scope global dynamic noprefixroute
valid_lft 86123sec preferred_lft 14123sec
inet6 fe80::4bc9:2064:ba79:f9eb/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 10:66:6a:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
valid_lft forever preferred_lft forever
inet6 fc42:5009:ba4b:5ab0::1/64 scope global
valid_lft forever preferred_lft forever
Now, with the above settings, we are good to set up and run privileged containers. These containers are created and run by the root user.
We will, however, continue configuring so that a regular user can create and manage containers. This is the tricky part, and where most confusion/misconfiguration happens.
First, we have to create some directories.
mkdir -p ~/.config/lxc
mkdir -p ~/.local/share/lxc
Your home directory, ‘~/.local’, and ‘~/.local/share/lxc’ must have execute (traverse) permission, so that the container’s mapped uids can reach the container path. This is important! In OMLx, by default the user’s home and ‘~/.local’ directories have permission 700.
chmod +x /home/$USER
chmod +x $HOME/.local
chmod +x $HOME/.local/share/lxc
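To verify that every directory along the path is now traversable, you can use namei from util-linux; each directory in the listing should show an ‘x’ in its permission bits:
namei -l $HOME/.local/share/lxc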
We will now place a ‘default.conf’ file inside our local configuration directory ‘~/.config/lxc’.
touch ~/.config/lxc/default.conf
nano ~/.config/lxc/default.conf
Paste the following content and save.
lxc.include = /etc/lxc/default.conf
lxcpath = ~/.local/share/lxc
lxc.start.auto = 1
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
- The first line says to read the contents of ‘/etc/lxc/default.conf’, which in effect configures the networking.
- Then we set the path for storing user containers to ‘~/.local/share/lxc’.
- The third line starts (or tries to start) the container automatically. I haven’t seen it work.
- The fourth and fifth lines are important. They do what is called id mapping, which is essential for unprivileged containers.
Basically, each line says: map the user with ‘uid 0’ (which is root) in the container to the user with ‘uid 100000’ on the host, and continue this mapping for a range of 65536 uids, i.e., uids 0-65535 in the container become uids 100000-165535 on the host. The same holds for gids.
This way, if the container is ever compromised, i.e., if an unauthorized person gets root access there, that will not translate to root privileges on the host; root in the container is only a normal user with ‘uid 100000’ on the host. That is how I understand it. Basically, you can say it is for security.
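A concrete way to see the mapping in action once you have an unprivileged container running (hypothetical container name ‘mycontainer’): create a file as root inside the container, then look at the same file from the host through the container’s rootfs.
lxc-attach -n mycontainer -- touch /root/testfile
sudo ls -ln $HOME/.local/share/lxc/mycontainer/rootfs/root/testfile
With the mapping above, the numeric owner shown on the host should be 100000 (container uid 0 plus the 100000 offset), not 0.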
Now, create two files, /etc/subuid and /etc/subgid.
sudo touch /etc/subuid /etc/subgid
Then execute the following commands,
sudo usermod --add-subuids 100000-165535 $USER
sudo usermod --add-subgids 100000-165535 $USER
After this, the contents of ‘/etc/subuid’ and ‘/etc/subgid’ will be as below.
$ cat /etc/subuid
sv:100000:65536
$ cat /etc/subgid
sv:100000:65536
Each line reads user:start:count; you will see your user name in the first column.
One last configuration step is to allow the regular user to create network devices for the containers. This is done by creating a ‘/etc/lxc/lxc-usernet’ file.
echo "$USER veth lxcbr0 10" | sudo tee /etc/lxc/lxc-usernet
This reads as: let user $USER (which is you) create up to 10 virtual ethernet (veth) devices on the bridge ‘lxcbr0’ for the containers.
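As a side note, the first field of an lxc-usernet rule can also be a group prefixed with ‘@’ (see man lxc-usernet), in case you want to grant the quota to a whole group; a hypothetical example:
@lxcusers veth lxcbr0 10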
This is all the configuration required to get LXC working. Now we will create some containers to test.
LXC does not come with a single lxc command with the operations as subcommands; each operation is its own command. I.e., instead of something like dnf install &lt;pkg&gt; or dnf remove &lt;pkg&gt;, it is like having dnf-install &lt;pkg&gt; or dnf-remove &lt;pkg&gt;. Whether this is a good thing or not, I don’t know. In my opinion, it is not intuitive and rather counterproductive. But upstream may have their own reasoning.
Anyway, the following are the operations supported by LXC.
lxc-attach lxc-config lxc-device lxc-monitor lxc-unfreeze
lxc-autostart lxc-console lxc-execute lxc-snapshot lxc-unshare
lxc-cgroup lxc-copy lxc-freeze lxc-start lxc-update-config
lxc-checkconfig lxc-create lxc-info lxc-stop lxc-usernsexec
lxc-checkpoint lxc-destroy lxc-ls lxc-top lxc-wait
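You can reproduce this list on your own system with, e.g.,
ls /usr/bin/ | grep '^lxc-'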
Let’s learn about the essential ones.
lxc-info --version - to check the version of LXC
lxc-ls (or lxc-ls -f) - to list containers
lxc-checkconfig - to check the LXC configuration.
If you see a lot of lines that contain ‘missing’ and/or ‘zgrep: command not found’, you have to install the ‘gzip-utils’ package (dnf install gzip-utils) and then run the same command. As long as we don’t see any obvious errors and see all green ‘enabled’ (even though some may say ‘not loaded’), we are good to go.
lxc-create -n &lt;container name&gt; -t download - to create a container, where
-n | --name is the name of the container
-t is the template, which is most often ‘download’.
Templates are under ‘/usr/share/lxc/templates’. There seems to have been an ‘openmandriva’ template, but it is not there anymore. You can explore other templates if you like.
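You can see what is shipped with:
ls /usr/share/lxc/templates
The exact set varies by version; expect entries such as ‘lxc-download’ and ‘lxc-busybox’.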
lxc-start -n &lt;containername&gt; - to start a container
lxc-stop -n &lt;containername&gt; - to stop a container
lxc-destroy -n &lt;containername&gt; - to delete (destroy) or remove a container
lxc-attach -n &lt;containername&gt; - to connect to the container and run a shell session as root; type exit to leave the session.
lxc-console -n &lt;containername&gt; - to log in to a console session on the container; we are asked for a login/password, and this allows us to log in as a user other than root.
To detach, press &lt;ctrl-a&gt; and then q. You will see this instruction anyway when you execute lxc-console.
I am not familiar with the other commands. The ones mentioned should suffice for normal operation. You can explore more if you want.
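Each of these commands has a man page (e.g., man lxc-create), and the configuration keys used in the config files are documented in man lxc.container.conf.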
I will now go through the commands one by one so you can see the typical output. I’ll be running the commands as the root user and creating a privileged container. However, the same holds for a regular user creating unprivileged containers.
[root@openmandriva-x8664 ~]# lxc-info --version
6.0.4
[root@openmandriva-x8664 ~]# lxc-ls -f
[root@openmandriva-x8664 ~]# lxc-create -n alpine-priv -t download
Downloading the image index
---
DIST RELEASE ARCH VARIANT BUILD
---
almalinux 8 amd64 default 20250701_23:08
almalinux 8 arm64 default 20250701_23:08
almalinux 9 amd64 default 20250701_23:08
almalinux 9 arm64 default 20250701_23:08
alpine 3.19 amd64 default 20250701_13:00
alpine 3.19 arm64 default 20250701_13:02
alpine 3.19 armhf default 20250701_13:02
alpine 3.20 amd64 default 20250701_13:00
alpine 3.20 arm64 default 20250701_13:01
alpine 3.20 armhf default 20250701_13:05
alpine 3.20 riscv64 default 20250701_13:03
alpine 3.21 amd64 default 20250701_13:00
alpine 3.21 arm64 default 20250701_13:00
alpine 3.21 armhf default 20250701_13:02
alpine 3.21 riscv64 default 20250701_13:12
alpine 3.22 amd64 default 20250701_13:00
alpine 3.22 arm64 default 20250701_13:00
alpine 3.22 armhf default 20250701_13:05
alpine 3.22 riscv64 default 20250701_13:00
alpine edge amd64 default 20250701_13:00
alpine edge arm64 default 20250701_13:00
alpine edge armhf default 20250701_13:05
alpine edge riscv64 default 20250701_13:11
alt Sisyphus amd64 default 20250702_01:17
alt Sisyphus arm64 default 20250702_01:17
alt p11 amd64 default 20250702_01:17
alt p11 arm64 default 20250702_01:17
amazonlinux 2 amd64 default 20250702_05:09
amazonlinux 2 arm64 default 20250702_05:09
amazonlinux 2023 amd64 default 20250702_05:09
archlinux current amd64 default 20250702_04:18
archlinux current arm64 default 20250702_04:43
archlinux current riscv64 default 20250702_04:18
busybox 1.36.1 amd64 default 20250702_06:00
busybox 1.36.1 arm64 default 20250702_06:00
centos 9-Stream amd64 default 20250701_07:08
centos 9-Stream arm64 default 20250701_07:08
debian bookworm amd64 default 20250701_05:24
debian bookworm arm64 default 20250701_05:24
debian bookworm armhf default 20250701_05:37
debian bullseye amd64 default 20250701_05:24
debian bullseye arm64 default 20250701_05:24
debian bullseye armhf default 20250701_05:53
debian buster amd64 default 20250701_05:24
debian buster arm64 default 20250701_05:24
debian buster armhf default 20250701_05:48
debian trixie amd64 default 20250701_05:24
debian trixie arm64 default 20250701_05:24
debian trixie riscv64 default 20250701_05:24
devuan beowulf amd64 default 20250701_11:50
devuan beowulf arm64 default 20250701_11:50
devuan chimaera amd64 default 20250701_11:50
devuan chimaera arm64 default 20250701_11:50
devuan daedalus amd64 default 20250701_11:50
devuan daedalus arm64 default 20250701_11:50
fedora 40 amd64 default 20250630_20:33
fedora 40 arm64 default 20250630_20:45
fedora 41 amd64 default 20250630_20:33
fedora 41 arm64 default 20250630_20:33
fedora 42 amd64 default 20250630_20:33
fedora 42 arm64 default 20250630_20:45
funtoo next amd64 default 20250630_16:45
kali current amd64 default 20250630_17:14
kali current arm64 default 20250630_17:48
mint ulyana amd64 default 20250701_08:51
mint ulyssa amd64 default 20250701_08:51
mint uma amd64 default 20250701_08:51
mint una amd64 default 20250701_08:51
mint vanessa amd64 default 20250701_08:51
mint vera amd64 default 20250701_08:51
mint victoria amd64 default 20250701_08:51
mint virginia amd64 default 20250701_08:51
mint wilma amd64 default 20250701_08:51
nixos 24.11 amd64 default 20250701_01:00
nixos 24.11 arm64 default 20250701_01:00
nixos 25.05 amd64 default 20250702_01:00
nixos 25.05 arm64 default 20250702_01:01
nixos unstable amd64 default 20250702_01:00
nixos unstable arm64 default 20250702_01:00
openeuler 20.03 amd64 default 20250630_20:23
openeuler 20.03 arm64 default 20250628_15:48
openeuler 22.03 amd64 default 20250630_20:23
openeuler 22.03 arm64 default 20250630_20:23
openeuler 24.03 amd64 default 20250630_22:51
openeuler 24.03 arm64 default 20250630_20:23
openeuler 25.03 amd64 default 20250630_20:23
openeuler 25.03 arm64 default 20250630_22:51
opensuse 15.5 amd64 default 20250702_04:20
opensuse 15.5 arm64 default 20250701_06:11
opensuse 15.6 amd64 default 20250702_04:20
opensuse 15.6 arm64 default 20250701_04:20
opensuse tumbleweed amd64 default 20250701_04:20
opensuse tumbleweed arm64 default 20250701_06:11
openwrt 22.03 amd64 default 20250701_11:57
openwrt 22.03 arm64 default 20250701_11:57
openwrt 23.05 amd64 default 20250701_11:57
openwrt 23.05 arm64 default 20250701_11:57
openwrt 24.10 amd64 default 20250701_11:57
openwrt 24.10 arm64 default 20250701_11:57
openwrt snapshot amd64 default 20250701_11:57
openwrt snapshot arm64 default 20250701_11:57
oracle 7 amd64 default 20250701_07:46
oracle 7 arm64 default 20250701_08:11
oracle 8 amd64 default 20250701_07:46
oracle 8 arm64 default 20250701_08:09
oracle 9 amd64 default 20250701_07:46
oracle 9 arm64 default 20250701_08:11
plamo 8.x amd64 default 20250702_01:33
rockylinux 8 amd64 default 20250702_02:06
rockylinux 8 arm64 default 20250702_02:06
rockylinux 9 amd64 default 20250702_02:52
rockylinux 9 arm64 default 20250702_02:06
slackware 15.0 amd64 default 20250701_23:08
slackware current amd64 default 20250701_23:08
springdalelinux 7 amd64 default 20250701_06:38
springdalelinux 8 amd64 default 20250701_06:38
springdalelinux 9 amd64 default 20250701_06:38
ubuntu focal amd64 default 20250628_07:42
ubuntu focal arm64 default 20250628_08:09
ubuntu focal armhf default 20250628_09:19
ubuntu focal riscv64 default 20250628_08:17
ubuntu jammy amd64 default 20250701_07:42
ubuntu jammy arm64 default 20250701_07:42
ubuntu jammy armhf default 20250701_08:55
ubuntu jammy riscv64 default 20250701_08:54
ubuntu noble amd64 default 20250701_07:42
ubuntu noble arm64 default 20250701_07:42
ubuntu noble armhf default 20250701_08:30
ubuntu noble riscv64 default 20250701_07:43
ubuntu oracular amd64 default 20250701_07:42
ubuntu oracular arm64 default 20250701_07:42
ubuntu oracular armhf default 20250701_08:08
ubuntu oracular riscv64 default 20250701_08:01
ubuntu plucky amd64 default 20250701_07:42
ubuntu plucky arm64 default 20250701_07:42
ubuntu plucky armhf default 20250701_08:07
ubuntu plucky riscv64 default 20250701_08:38
voidlinux current amd64 default 20250630_17:10
voidlinux current arm64 default 20250630_17:45
---
Distribution:
alpine
Release:
3.22
Architecture:
amd64
Using image from local cache
Unpacking the rootfs
---
You just created an Alpinelinux 3.22 x86_64 (20250628_13:00) container.
[root@openmandriva-x8664 ~]# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
alpine-priv STOPPED 0 - - - false
[root@openmandriva-x8664 ~]# lxc-start -n alpine-priv
[root@openmandriva-x8664 ~]# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
alpine-priv RUNNING 0 - 10.0.3.228 fc42:5009:ba4b:5ab0:1266:6aff:feac:f542 false
[root@openmandriva-x8664 ~]# lxc-info -n alpine-priv
Name: alpine-priv
State: RUNNING
PID: 45966
IP: 10.0.3.228
IP: fc42:5009:ba4b:5ab0:1266:6aff:feac:f542
Link: vethFYckPI
TX bytes: 1.80 KiB
RX bytes: 4.73 KiB
Total bytes: 6.53 KiB
[root@openmandriva-x8664 ~]# lxc-attach -n alpine-priv
[root@alpine-priv ~]# /bin/cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.22.0
PRETTY_NAME="Alpine Linux v3.22"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"
[root@alpine-priv ~]# exit
[root@openmandriva-x8664 ~]#
[root@openmandriva-x8664 ~]# lxc-console -n alpine-priv
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
Welcome to Alpine Linux 3.22
Kernel 6.14.2-desktop-3omv2590 on x86_64 (/dev/tty1)
alpine-priv login: sv
Password:
Welcome to Alpine!
The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <https://wiki.alpinelinux.org/>.
You can setup the system with the command: setup-alpine
You may change this message by editing /etc/motd.
$
[root@openmandriva-x8664 ~]# lxc-stop -n alpine-priv
[root@openmandriva-x8664 ~]# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6 UNPRIVILEGED
alpine-priv STOPPED 0 - - - false
[root@openmandriva-x8664 ~]# lxc-destroy -n alpine-priv
[root@openmandriva-x8664 ~]# lxc-ls -f
[root@openmandriva-x8664 ~]#
Note: As of this writing, there is a bug in LXC that prevents unprivileged containers, i.e., you will not be able to create and run containers as a regular user. A fix exists, but we are waiting for the next release (6.0.5). Once the version that includes the fix is released, we will be able to use unprivileged containers as well.