[cooker] Ideas and brainstorming for cooker

Hi,

I think it is a good time to start a discussion about where cooker
development should be headed.

Here are my ideas:

*Development:*

1. split-usr
Currently our distro uses split-usr. The idea is to move /bin /sbin /lib into
/usr
More information can be found here
https://freedesktop.org/wiki/Software/systemd/separate-usr-is-broken/
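As an illustration, the merged layout boils down to a few compatibility symlinks. A sketch in a scratch directory (a real migration must first move the contents of the old directories into /usr, which is the hard part):

```shell
# Sketch of the merged-/usr layout in a scratch directory; a real
# migration must first move the old directories' contents into /usr.
demo=$(mktemp -d)
mkdir -p "$demo/usr/bin" "$demo/usr/sbin" "$demo/usr/lib"
cd "$demo"
# The old top-level directories become symlinks into /usr:
ln -s usr/bin bin
ln -s usr/sbin sbin
ln -s usr/lib lib
ls -ld bin sbin lib
```

With this in place, hardcoded paths like /bin/sh keep resolving through the symlink.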

2. Disable debuginfo generation
Use -g0 as a default compiler flag and disable generation of debuginfo rpms.
Each build generates tons of data in debuginfo packages that nobody uses.
If anything bad happens, segfaulting software can always be rebuilt with
debuginfo enabled.
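For reference, a hedged sketch of what the switch would look like per package. `_enable_debug_packages` and `debug_package` are standard RPM macro names, though our own macro files may wire this up differently:

```spec
# In a spec file (or a distro-wide macros file) -- disables the
# debuginfo/debugsource subpackages and drops debug info from the flags:
%define _enable_debug_packages 0
%define debug_package %{nil}
%global optflags %{optflags} -g0
```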

3. Use BFD
By default LD.gold is used for linking shared objects. It looks like LD.gold
has not been maintained at all in a couple of years. Move to LD.bfd by
default, as it is actively maintained.

4. Use LD.lld
Start researching the use of LD.lld (lld is the new linker from the LLVM
suite), maybe not globally, but for some important packages or those which
the current LD does not handle well, e.g. LibreOffice.
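Both toolchains already let us pick the linker per package via `-fuse-ld`, so this could be trialled without a global switch. Illustrative commands, assuming clang plus the bfd and lld linkers are installed:

```shell
# Selecting the linker per compile via -fuse-ld (supported by gcc and clang):
echo 'int main(void){return 0;}' > hello.c
clang -fuse-ld=bfd hello.c -o hello-bfd   # link with ld.bfd
clang -fuse-ld=lld hello.c -o hello-lld   # link with ld.lld
```

In rpm terms this is just a per-package change to the link flags.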

5. Toybox
Start researching the use of Toybox (http://landley.net/toybox/about.html)
as a coreutils replacement.

6. SecureBoot EFI
Start researching how to adapt our ISO and boot loader to be SecureBoot
friendly. https://wiki.archlinux.org/index.php/Secure_Boot

7. Get rid of GCC
Start researching how GCC can be stripped out of builds with LLVM/clang,
using compiler-rt by default.

8. PGO
Draw up a list of packages that may benefit from PGO (e.g. firefox, webkit,
the kernel http://coolypf.com/kpgo.htm)
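For context, the usual clang PGO cycle looks roughly like this. A generic sketch: `app.c` and the workload are placeholders, and the real work is finding a representative workload for each package:

```shell
# 1. Build instrumented, 2. run a representative workload,
# 3. merge the raw profiles, 4. rebuild using the profile.
clang -O2 -fprofile-generate=./profdata app.c -o app-instrumented
./app-instrumented                      # representative workload goes here
llvm-profdata merge -output=app.profdata ./profdata/*.profraw
clang -O2 -fprofile-use=app.profdata app.c -o app-optimized
```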

9. Reduce kernel size
Start researching ways to reduce kernel size, e.g. by disabling some modules
which are not needed.

10. AutoFDO/BOLT
Start researching post-link optimizations:
http://perl11.org/blog/bolt.html
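A rough sketch of the BOLT flow, for reference. The binary and workload names are placeholders, and the optimization flags vary between llvm-bolt versions:

```shell
# Collect branch samples from a representative run, convert them to
# BOLT's profile format, then rewrite the binary's code layout:
perf record -e cycles:u -j any,u -o perf.data -- ./app --some-workload
perf2bolt -p perf.data -o app.fdata ./app
llvm-bolt ./app -o app.bolt -data=app.fdata -reorder-blocks=ext-tsp
```

Note the `perf record -j` branch sampling needs hardware LBR support, so this has to run on real hardware, not in a VM.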

11. Bye to 32-bit

12. IWD
Use IWD as a modern alternative to wpa_supplicant for WiFi connections.
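For reference, switching NetworkManager to the iwd backend is a one-line drop-in (the file name below is arbitrary):

```ini
# /etc/NetworkManager/conf.d/wifi_backend.conf
[device]
wifi.backend=iwd
```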

*ABF:*

1. Fix "Create Build Lists of dependent projects"

2. EVRD check
Extend the repoclosure report to show differences in EVRD between x86_64 and
the other arches and releases - i.e. list only those rpm packages that have
an older EVRD.

3. Integrate QA (voting for a release of package) tool into ABF

*Other*:

1. Use github issues as bug tracker

2. Enforce control for PR to other branches
Integrate a travis-like tool into our github repo to gate merging changes
between branches.
Enforce a PR approval process - i.e. the Release Manager or QA accepts PRs
for other branches.

> Hi,
>
> I think it is a good time to start a discussion about where cooker development should be headed.
>
> Here are my ideas:
>
> Development:
>
> 1. split-usr
> Currently our distro uses split-usr. The idea is to move /bin /sbin /lib into /usr
> More information can be found here: separate-usr-is-broken

This I definitely want to see. It's not a difficult change to
implement, and doing so will bring us in line with all the other major
distros (even Debian, which is now doing this by default in Debian
10!).

> 2. Disable debuginfo generation
> Use -g0 as a default compiler flag and disable generation of debuginfo rpms.
> Each build generates tons of data in debuginfo packages that nobody uses. If anything bad happens, segfaulting software can always be rebuilt with debuginfo enabled.

Nope. I use them when things are breaking. We don't have a more
user-friendly way to leverage them (like a retrace server), but they
are useful. And if you can't produce debuginfo the first go around,
you probably can't produce it when you need it.

If we didn't have them, it would have been a lot harder for me to do
quite a bit of the work I did during the omv4000 development cycle.

> 3. Use BFD
> By default LD.gold is used for linking shared objects. It looks like LD.gold has not been maintained at all in a couple of years. Move to LD.bfd by default, as it is actively maintained.

We should do this now, unless someone wants to step up and become an
upstream developer of gold. :slight_smile:

> 4. Use LD.lld
> Start researching the use of LD.lld (lld is the new linker from the LLVM suite), maybe not globally, but for some important packages or those which the current LD does not handle well, e.g. LibreOffice.

I'm not a fan, but since we *are* already using Clang, it's probably
worth looking into.

> 5. Toybox
> Start researching the use of Toybox (What is toybox?) as a coreutils replacement.

No. There are ways to build coreutils in a busybox/toybox style for
those configurations that need it. Fedora does this today, and we
could do it as well, now that we have the required RPM features.

> 6. SecureBoot EFI
> Start researching how to adapt our ISO and boot loader to be SecureBoot friendly. Secure Boot - ArchWiki

This is not hard. I already prototyped this with an OMV livecd-tools
build. Getting the pieces in place merely requires people being okay
with the idea of us doing this work.

> 7. Get rid of GCC
> Start researching how GCC can be stripped out of builds with LLVM/clang, using compiler-rt by default.

Not a fan of getting rid of GCC or switching to compiler-rt. Not using
it by default in any package is probably a good goal. Using
compiler-rt will break ABI compatibility with everyone else, so we
should not go there.

> 8. PGO
> Draw up a list of packages that may benefit from PGO (e.g. firefox, webkit, the kernel Kernel PGO)

Sure, why not? It's a fair bit of work to make PGO profiles, though...

> 9. Reduce kernel size
> Start researching ways to reduce kernel size, e.g. by disabling some modules which are not needed.

Or, you know, split the kernel package up so that extra modules are in
a subpackage? That's what Fedora does so that kernel sizes can be
reduced. Also, initramfs images should already only include modules
being used since that's dracut's default behavior.

> 10. AutoFDO/BOLT
> Start researching post-link optimizations: Link-time and post-link optimizations // perl11 blog

This is interesting...

> 11. Bye to 32-bit

As evidenced by the Ubuntu kerfuffle, that's probably a bad idea. I'd
like to see us reduce to Red Hat/Fedora style multilib repos for x86_64
and aarch64 and not provide separate i686 and armv7hnl trees.

> 12. IWD
> Use IWD as a modern alternative to wpa_supplicant for WiFi connections.

This should be an easy swap. NetworkManager will use it if it is available.

> ABF:
>
> 1. Fix "Create Build Lists of dependent projects"
>
> 2. EVRD check
> Extend the repoclosure report to show differences in EVRD between x86_64 and the other arches and releases - i.e. list only those rpm packages that have an older EVRD.
>
> 3. Integrate QA (voting for a release of package) tool into ABF

These ABF items seem good to me.

> Other:
>
> 1. Use github issues as bug tracker
>
> 2. Enforce control for PR to other branches
> Integrate a travis-like tool into our github repo to gate merging changes between branches.
> Enforce a PR approval process - i.e. the Release Manager or QA accepts PRs for other branches.

I'm not a fan of our usage of GitHub. I'd love to see us self-host our
source code using Pagure, which would give us the flexibility to
integrate useful testing, gating, etc. for packages.

PR approvals are probably not a good idea on the whole right now, our
community isn't large enough to be able to support that model.

> Hi,
>
> I think it is a good time to start a discussion about where cooker
> development should be headed.

Indeed -- we already did a bit of that in the last IRC meeting and decided to make a 4.1 release rather quickly (must-haves: fix issues found with 4.0, Qt 5.13, Plasma 5.16.x, kapps 19.08), so if we have anything HUGE, we may want to hold off on it until after 4.1 -- but right now at least I don't see anything in the core system that would require a major overhaul.

> 1. split-usr

I don't have a strong opinion on this either way -- the rationale given by proponents of merged-usr essentially comes down to "some stuff is broken, we don't want to fix it, so let's hide the issues".

The primary rationale given by opponents of it is also pretty weak and pretty much comes down to "let's not break the old concept so going back to it will be a lot harder if the stuff currently breaking it ever gets fixed".

With initrds/initramfs being what they are these days, the minimal rescue system that used to live in /bin, /sbin and friends has essentially moved to the initrd - so not a lot is really lost by getting rid of that.

> 2. Disable debuginfo generation

While I agree that debuginfo packages aren't as useful as they could be (would be nice if the kde crash dialog could automatically download them and generate a more useful backtrace, for example), I don't think they're entirely useless, and outside of build time generating them doesn't have any drawbacks. While our infrastructure can deal with it, I don't see much of a reason to disable it.

> 3. Use BFD
> 4. Use LD.lld

Agreed about both of those -- gold was a nice try (and bfd still hasn't caught up with some things like ICF), but lately it hasn't been maintained and is more trouble than it's worth.
I'm leaning towards trying to use lld for everything (faster and can keep ICF) with exceptions for where it breaks, but haven't gotten around to checking how well lld works these days. I'll definitely run some checks when 9.0 is in the tree (lld is making progress fast).

> 5. Toybox

Toybox works really well and would certainly be a good replacement inside initramfs at least -- not sure if we want to do it in the whole system though (keep in mind that people [and Makefiles] will run scripts written on other distributions, and that hardcode flags to coreutils tools). Even switching to libarchive tar wasn't as 100% painless as it ought to be.
Also, toybox can replace more than just coreutils - awk, grep, sed, find, kmod, procps, psmisc, which, util-linux and a few more are all in there.
I'm not sure how complete those tools are and whether or not we could end up replacing some of those as well. The fewer tools are replaced, the less likely it is to have any benefit...

> 6. SecureBoot EFI

That would certainly be useful if it can be done in a way that doesn't complicate things too much for a normal user...

> 7. Get rid of GCC
> Start researching how GCC can be stripped out of builds with
> LLVM/clang

One big issue there is libstdc++ -- while clang has libc++ (which is actually better in many ways), if we care about binary compatibility with other distributions, we can't use it because otherwise we'd run into situations where e.g. a binary-only game built on some other distro links to Qt and libstdc++ (that would crash instantly on trying to runtime link to Qt built with libc++).
clang also uses the gcc command line tool to locate headers for intrinsics (/usr/lib64/gcc/x86_64-openmandriva-linux-gnu/9.1.0/include etc.) - that could probably be changed.
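The lookup can be inspected and overridden from the command line. Illustrative, assuming clang is installed; `--gcc-toolchain` is clang's knob for picking the GCC installation it searches:

```shell
# Where clang keeps its own intrinsics headers:
clang -print-resource-dir
# Show which GCC include directories get picked up for C++:
clang -v -fsyntax-only -x c++ /dev/null 2>&1 | grep -i 'include\|gcc'
# Point clang at a specific GCC installation prefix:
clang --gcc-toolchain=/usr -fsyntax-only -x c++ /dev/null
```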

> and use compiler-rt by default

We tried that before -- while it worked great for most stuff, we ran into binary compatibility problems in some weird situations (e.g. firefox built with compiler-rt crashing when loading the Flash plugin because of clashes with statically linked libgcc bits in there).
Fortunately almost nothing cares about Flash anymore, but that sort of BS is something we still need to look out for.
Probably it's safe to build most applications with compiler-rt -- but there's some potential for it getting messy as soon as something starts dlopening stuff built with something else. For normal libraries, fortunately it's easy to test (e.g. see if a Qt application built with libgcc will keep running if Qt is linked with compiler-rt).
Just swapping out Qt should give us a good impression, given that will also show us if plasma can still load plugins built with libgcc.

> 8. PGO
> Draw up a list of packages that may benefit from PGO (e.g. firefox, webkit,
> the kernel Kernel PGO)

Certainly worth looking into - probably mesa and qtwebengine would be good candidates too, but getting decent PGO data on them is hard given that their use can vary greatly between applications.

> 9. Reduce kernel size
> Start researching ways to reduce kernel size, e.g. by disabling some
> modules which are not needed.

Absolutely, I've been meaning to split some modules people typically don't need into separate packages for some time, I just never got around to doing that (better to split them into a separate package than to disable them altogether -- e.g. a typical desktop user will never need InfiniBand, but as soon as you start installing on servers, things may look different).

> 10. AutoFDO/BOLT
> Start researching post-link optimizations:
> Link-time and post-link optimizations // perl11 blog

Good idea.

> 11. Bye to 32-bit

This was already decided in the last meeting -- essentially the outcome was that we should keep building stuff for a while longer, but start phasing it out (no more stable releases, but keep cooker/rolling so all libraries get built and wine, steam and other legacy cruft are guaranteed to find everything they need).

> 12. IWD
> Use IWD as a modern alternative to wpa_supplicant for WiFi connections.

Definitely time to start looking at it, but it doesn't seem to be ready for a load of WiFi drivers yet. Unless things improve quickly, we may need to have IWD for devices where it's known to work and keep wpa_supplicant for the others.

> 1. Fix "Create Build Lists of dependent projects"

+1

> 2. EVRD check

+1

> 3. Integrate QA (voting for a release of package) tool into ABF

+1

> *Other*:

> 1. Use github issues as bug tracker

AFAIK QA people prefer bugzilla (and there's the issue of github being owned by M$ -- so far they've done well, but can we trust them forever?) - but now is the right time to look at possible options again...

> 2. Enforce control for PR to other branches
> Integrate a travis-like tool into our github repo to gate merging changes
> between branches.
> Enforce a PR approval process - i.e. the Release Manager or QA accepts PRs
> for other branches.

+1, I wonder if we can somehow merge this into the QA tool...
PR (or something like it) to non-cooker branch --> triggers build to see if it compiles, notifies QA and RM, starts voting process

Here's a few more things I've been thinking of -- not necessarily for 4.1:

Cooker:
- Early move to LLVM/Clang 9 -- there's finally patches adding RISC-V hardfloat support, and LLD has made a lot of progress
- Reduce the duplication of binutils tools (binutils, elfutils, llvm). Chances are the LLVM tools are good enough to handle debuginfo generation these days (main reason for elfutils to be in the default build environment right now)
- Check if using Polly can speed up some libraries -- it should be possible to at least somewhat parallelize libjpeg and friends (but the tools for doing that automatically may not be up to the task yet)
- Plasma customization tool in OM Welcome that allows people to quickly get to something they feel at home in, e.g. "Act like [*] OpenMandriva [ ] Windows [ ] MacOS" (where OpenMandriva is what we think is best, Windows does things like double-clicks, MacOS does stuff like global menu bar on top of screen)
- Allow running Android apps (I have Anbox sort of working, but we can do better than the Brokenbuntu crowd...)
- Add OnlyOffice as an alternative to LO (this is a bit painful because they use a load of nonstandard tools, including some prehistoric stuff)
- Text mode/server installer
- Finish the work on the Java stack at least to the point where maven/xmvn work properly again
- Fix the conflict between wine32 and wine64
- An experimental build that does what could be done if we didn't care about binary compatibility (glibc -> musl, libstdc++ -> libc++, libgcc -> compiler-rt) that doesn't replace the existing OS, but would be rather interesting to benchmark -- if only to see how much binary compatibility is costing us.
- Possibly replace firewalld with ufw (because the latter has a proper plasma interface). (Build the [fire]wall! Make Linux great again!)

And probably the most controversial one:
- Change the filesystem layout for multilib. Some distributions have started going for stuff like /usr/x86_64-linux-gnu/lib instead of /usr/lib64, /usr/i686-linux-gnu/lib instead of /usr/lib etc., and that's a really good idea because it allows installing compat libraries for more than just one 64bit and one 32bit architecture. This is useful especially now that qemu binfmt-misc stuff is working well and transparently launches stuff built for other architectures (think wine on ARM being able to run x86 Windows binaries because it has access to qemu and i686/x86_64 libraries). With the current lib64/lib FS layout, e.g. a user on aarch64 may want to install an x86_64 libjpeg to please whatever binary-only application and an i686 libjpeg to please wine -- but they can't go to /usr/lib64 (where aarch64 stuff lives) nor /usr/lib (where armv7hnl stuff lives).
We don't have to break compatibility with anything to do this -- /usr/lib64 and /usr/lib could be symlinks to the native 64 and 32 bit arches, so stuff hardcoding /usr/lib64 or /usr/lib would still find what it's looking for and install to the right place.
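The compatibility story can be sketched with plain symlinks. A scratch-directory demo; the triplet names follow the Debian-style multiarch convention mentioned above:

```shell
# Scratch-directory sketch of a triplet-based multilib layout with
# compatibility symlinks for the native 64-bit and 32-bit arches.
root=$(mktemp -d)
mkdir -p "$root/usr/x86_64-linux-gnu/lib" \
         "$root/usr/i686-linux-gnu/lib" \
         "$root/usr/aarch64-linux-gnu/lib"
# Legacy paths keep working as symlinks to the native arches:
ln -s x86_64-linux-gnu/lib "$root/usr/lib64"
ln -s i686-linux-gnu/lib "$root/usr/lib"
readlink "$root/usr/lib64"
```

On an aarch64 system the same tree would simply point /usr/lib64 at aarch64-linux-gnu/lib instead, while foreign-arch libraries live side by side.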

ABF:
- Better handling of errors that usually go away by trying again, such as "[MIRROR] kauth-5.59.0-1-omv4000.znver1.rpm: Interrupted by header callback: Server reports Content-Length: 148 but expected size is: 65492
[FAILED] kauth-5.59.0-1-omv4000.znver1.rpm: No more mirrors to try - All mirrors were already tried without success
Failed to set locale, defaulting to C
Error: Error downloading packages:
  Cannot download kauth-5.59.0-1-omv4000.znver1.rpm: All mirrors were tried"
- Better handling of interrupted chain builds (e.g. when building KDE Frameworks and a build somewhere in the middle fails on one arch, allow resuming the chain build when the failed package is fixed)
- Easier way to publish a load of interdependent packages at the same time (e.g. build updated KDE Frameworks and rebuild Plasma against it, then publish all at the same time so people updating in between don't end up with a system that is part 5.58, part 5.59 and unlikely to work properly)

> 1. split-usr
> Currently our distro uses split-usr. The idea is to move /bin /sbin /lib
> into /usr
> More information can be found here:
> separate-usr-is-broken
>

This I definitely want to see. It's not a difficult change to
implement, and doing so will bring us in line with all the other major
distros (even Debian, which is now doing this by default in Debian
10!).

+1

> 2. Disable debuginfo generation
> Use -g0 as a default compiler flag and disable generation of debuginfo rpms.
> Each build generates tons of data in debuginfo packages that nobody uses.
> If anything bad happens, segfaulting software can always be rebuilt with
> debuginfo enabled.
>

Nope. I use them when things are breaking. We don't have a more
user-friendly way to leverage them (like a retrace server), but they
are useful. And if you can't produce debuginfo the first go around,
you probably can't produce it when you need it.

If we didn't have them, it would have been a lot harder for me to do
quite a bit of the work I did during the omv4000 development cycle.

OK, let's count how many times debuginfo was _REALLY_ needed during 4.0
development. I can count those on the fingers of one hand. Is it really
worth generating tons of data (probably not all debug stuff is stripped
into the debuginfo packages) just to potentially use debuginfo a couple of
times a year?

> 3. Use BFD
> By default LD.gold is used for linking shared objects. It looks like
> LD.gold has not been maintained at all in a couple of years. Move to
> LD.bfd by default, as it is actively maintained.
>

We should do this now, unless someone wants to step up and become an
upstream developer of gold. :slight_smile:

+1

> 5. Toybox
> Start researching the use of Toybox (
> What is toybox?) as a coreutils replacement.
>

No. There are ways to build coreutils in a busybox/toybox style for
those configurations that need it. Fedora does this today, and we
could do it as well, now that we have the required RPM features.

Well, our coreutils is compiled in that style. The question is whether we
may benefit somehow from using toybox as a default toolbox for various
tools (coreutils, others)?
Currently toybox offers replacement for:
acpi
bunzip2
cpio
coreutils
diffutils
dos2unix
e2fsprogs
eject
file
findutils
grep
hostname
kmod
nc
net_tools
passwd
procps
psmisc
rfkill
sed
sharutils
time
util_linux
which

> 6. SecureBoot EFI
> Start researching how to adapt our ISO and boot loader to be
> SecureBoot friendly. Secure Boot - ArchWiki
>

This is not hard. I already prototyped this with an OMV livecd-tools
build. Getting the pieces in place merely requires people being okay
with the idea of us doing this work.

Can you share your wisdom? I was trying to generate keys and such based on
a few docs :frowning:
We already have pesign, but shim is missing.

> 7. Get rid of GCC
> Start researching how GCC can be stripped out of builds with
> LLVM/clang, using compiler-rt by default.
>

Not a fan of getting rid of GCC or switching to compiler-rt. Not using
it by default in any package is probably a good goal. Using
compiler-rt will break ABI compatibility with everyone else, so we
should not go there.

> 8. PGO
> Draw up a list of packages that may benefit from PGO (e.g. firefox, webkit,
> the kernel Kernel PGO)
>

Sure, why not? It's a fair bit of work to make PGO profiles, though...

> 9. Reduce kernel size
> Start researching ways to reduce kernel size, e.g. by disabling some
> modules which are not needed.
>

Or, you know, split the kernel package up so that extra modules are in
a subpackage? That's what Fedora does so that kernel sizes can be
reduced. Also, initramfs images should already only include modules
being used since that's dracut's default behavior.

How will a user know that he/she needs to install a kernel-foo subpackage
to make some non-standard gear work?
Speaking of dracut, we do not use hostonly mode only. What about users who
use portable disk drives and connect them to various machines to boot Linux
on them? Anyway, why not drop dracut, as these days grub2 can boot directly
from the partition?
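For what it's worth, dracut's behavior here is a one-line config switch, so portable installs could simply ship a generic initramfs (the drop-in file name is arbitrary):

```ini
# /etc/dracut.conf.d/10-portable.conf
# "no" builds a generic initramfs that boots on other machines;
# "yes" builds a smaller host-specific one.
hostonly="no"
```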

> 2. Enforce control for PR to other branches
> Integrate a travis-like tool into our github repo to gate merging
> changes between branches.
> Enforce a PR approval process - i.e. the Release Manager or QA accepts PRs
> for other branches.
>

> I'm not a fan of our usage of GitHub. I'd love to see us self-host our
> source code using Pagure, which would give us the flexibility to
> integrate useful testing, gating, etc. for packages.

This means we would need HA hosting for that. We have had many ABF
failures, whereas github is rock solid.

> PR approvals are probably not a good idea on the whole right now, our
> community isn't large enough to be able to support that model.

Some were yelling that changes were made without supervision...

> 2. Disable debuginfo generation

> While I agree that debuginfo packages aren't as useful as they could be
> (would be nice if the kde crash dialog could automatically download them
> and generate a more useful backtrace, for example), I don't think they're
> entirely useless, and outside of build time generating them doesn't have
> any drawbacks. While our infrastructure can deal with it, I don't see much
> of a reason to disable it.

Are we sure that rpms are _REALLY_ stripped of debug information?

> 3. Use BFD
> 4. Use LD.lld

> Agreed about both of those -- gold was a nice try (and bfd still hasn't
> caught up with some things like ICF), but lately it hasn't been maintained
> and is more trouble than it's worth.
> I'm leaning towards trying to use lld for everything (faster and can keep
> ICF) with exceptions for where it breaks, but haven't gotten around to
> checking how well lld works these days. I'll definitely run some checks
> when 9.0 is in the tree (lld is making progress fast).

+1

> 5. Toybox

> Toybox works really well and would certainly be a good replacement inside
> initramfs at least -- not sure if we want to do it in the whole system
> though (keep in mind that people [and Makefiles] will run scripts written
> on other distributions, and that hardcode flags to coreutils tools). Even
> switching to libarchive tar wasn't as 100% painless as it ought to be.
> Also, toybox can replace more than just coreutils - awk, grep, sed, find,
> kmod, procps, psmisc, which, util-linux and a few more are all in there.
> I'm not sure how complete those tools are and whether or not we could end
> up replacing some of those as well. The fewer tools are replaced, the less
> likely it is to have any benefit...

If we never research it and give it a try, we will never know how complete
these tools are. IIRC Android uses toybox, right?

> 7. Get rid of GCC
> Try to start a reserch how GCC can be stripped out of builds with
> LLVM/clang

> One big issue there is libstdc++ -- while clang has libc++ (which is
> actually better in many ways), if we care about binary compatibility with
> other distributions, we can't use it because otherwise we'd run into
> situations where e.g. a binary-only game built on some other distro links
> to Qt and libstdc++ (that would crash instantly on trying to runtime link
> to Qt built with libc++).
> clang also uses the gcc command line tool to locate headers for intrinsics
> (/usr/lib64/gcc/x86_64-openmandriva-linux-gnu/9.1.0/include etc.) - that
> could probably be changed.

Which distributions may benefit from changing to libc++?
Well, I guess stuff that relies on glibc and gcc may fail, like
proprietary software - steam, nvidia blob drivers etc., right?

> and use compiler-rt by default

> We tried that before -- while it worked great for most stuff, we ran into
> binary compatibility problems in some weird situations (e.g. firefox built
> with compiler-rt crashing when loading the Flash plugin because of clashes
> with statically linked libgcc bits in there).
> Fortunately almost nothing cares about Flash anymore, but that sort of BS
> is something we still need to look out for.
> Probably it's safe to build most applications with compiler-rt -- but
> there's some potential for it getting messy as soon as something starts
> dlopening stuff built with something else. For normal libraries,
> fortunately it's easy to test (e.g. see if a Qt application built with
> libgcc will keep running if Qt is linked with compiler-rt).
> Just swapping out Qt should give us a good impression, given that will
> also show us if plasma can still load plugins built with libgcc.

+1

> 8. PGO
> Draw up a list of packages that may benefit from PGO (e.g. firefox, webkit,
> the kernel Kernel PGO)

Well, llvm/clang itself can be built with PGO.

> 10. AutoFDO/BOLT
> Try to start research on post-link optimizations
> Link-time and post-link optimizations // perl11 blog

> Good idea.

If I understand right, we need to run the distro on real hardware (for
example x86_64), run various scripts that bench python, libc, whatever,
gather the BOLT data, and then use the bolt.fdata to compile stuff on ABF?

> *Other*:
>
> 1. Use github issues as bug tracker

> AFAIK QA people prefer bugzilla (and there's the issue of github being
> owned by M$ -- so far they've done well, but can we trust them forever?) -
> but now is the right time to look at possible options again...

> 2. Enforce control for PR to other branches
> Integrate "travis-like" tool into our github repo to allow merging
changes
> between branches.
> Enforce PR approval process - i.e. Release Manager or QA accepts PR for
> other branches.

> +1, I wonder if we can somehow merge this into the QA tool...
> PR (or something like it) to non-cooker branch --> triggers build to see
> if it compiles, notifies QA and RM, starts voting process

Yes something like that.

> Here's a few more things I've been thinking of -- not necessarily for 4.1:
>
> Cooker:
> - Early move to LLVM/Clang 9 -- there's finally patches adding RISC-V
> hardfloat support, and LLD has made a lot of progress

+1

> - Reduce the duplication of binutils tools (binutils, elfutils, llvm).
> Chances are the LLVM tools are good enough to handle debuginfo generation
> these days (main reason for elfutils to be in the default build environment
> right now)

+1

> - Check if using Polly can speed up some libraries -- it should be possible
> to at least somewhat parallelize libjpeg and friends (but the tools for
> doing that automatically may not be up to the task yet)

Speaking of llvm-polly, I did some tests and compared the results of an -O3
build against a polly build.

The polly build was slightly faster than O3.
If I understand right, polly is good for optimizations where polyhedra are
used. This means llvm-polly should be used on all 2D/3D graphics tools like
krita, gimp, blender... maybe mesa?
I doubt it has any positive impact on polynomials.
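Enabling Polly is per-compile, so individual packages like the ones above could be trialled easily (illustrative; assumes a clang built with Polly support):

```shell
# Polly runs as extra LLVM loop-optimization passes behind a single flag:
clang -O3 -mllvm -polly loops.c -o loops
```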

> - Plasma customization tool in OM Welcome that allows people to quickly get
> to something they feel at home in, e.g. "Act like [*] OpenMandriva [ ]
> Windows [ ] MacOS" (where OpenMandriva is what we think is best, Windows
> does things like double-clicks, MacOS does stuff like global menu bar on
> top of screen)

+1
Maybe as a Calamares module?

> - Allow running Android apps (I have Anbox sort of working, but we can do
> better than the Brokenbuntu crowd...)

+1
That would be a really nice feature

> - Add OnlyOffice as an alternative to LO (this is a bit painful because
> they use a load of nonstandard tools, including some prehistoric stuff)
> - Text mode/server installer

> - An experimental build that does what could be done if we didn't care
> about binary compatibility (glibc -> musl, libstdc++ -> libc++, libgcc ->
> compiler-rt) that doesn't replace the existing OS, but would be rather
> interesting to benchmark -- if only to see how much binary compatibility is
> costing us.

Sounds like a lot of work to be done.

> - Possibly replace firewalld with ufw (because the latter has a proper
> plasma interface). (Build the [fire]wall! Make Linux great again!)

-1
It's boontoo centric, and last updated in 2016. That does not sound like it
is actively maintained.
This looks promising: GitHub - nx-desktop/nx-firewall: Firewall KCM

> And probably the most controversial one:
> - Change the filesystem layout for multilib. Some distributions have
> started going for stuff like /usr/x86_64-linux-gnu/lib instead of
> /usr/lib64, /usr/i686-linux-gnu/lib instead of /usr/lib etc., and that's a
> really good idea because it allows installing compat libraries for more
> than just one 64bit and one 32bit architecture. This is useful especially
> now that qemu binfmt-misc stuff is working well and transparently launches
> stuff built for other architectures (think wine on ARM being able to run
> x86 Windows binaries because it has access to qemu and i686/x86_64
> libraries). With the current lib64/lib FS layout, e.g. a user on aarch64
> may want to install an x86_64 libjpeg to please whatever binary-only
> application and an i686 libjpeg to please wine -- but they can't go to
> /usr/lib64 (where aarch64 stuff lives) nor /usr/lib (where armv7hnl stuff
> lives).
> We don't have to break compatibility with anything to do this --
> /usr/lib64 and /usr/lib could be symlinks to the native 64 and 32 bit
> arches, so stuff hardcoding /usr/lib64 or /usr/lib would still find what
> it's looking for and install to the right place.

Sounds interesting.

> ABF:
> - Better handling of errors that usually go away by trying again, such as
> "[MIRROR] kauth-5.59.0-1-omv4000.znver1.rpm: Interrupted by header
> callback: Server reports Content-Length: 148 but expected size is: 65492
> [FAILED] kauth-5.59.0-1-omv4000.znver1.rpm: No more mirrors to try - All
> mirrors were already tried without success
> Failed to set locale, defaulting to C
> Error: Error downloading packages:
>   Cannot download kauth-5.59.0-1-omv4000.znver1.rpm: All mirrors were
> tried"
> - Better handling of interrupted chain builds (e.g. when building KDE
> Frameworks and a build somewhere in the middle fails on one arch, allow
> resuming the chain build when the failed package is fixed)
> - Easier way to publish a load of interdependent packages at the same time
> (e.g. build updated KDE Frameworks and rebuild Plasma against it, then
> publish all at the same time so people updating in between don't end up
> with a system that is part 5.58, part 5.59 and unlikely to work properly)

I'd suggest to add support for odered mass builds. So you can easily build

whole KDE frameworks in a give order :slight_smile:

If anyone is interested, I've prepared a kanban project list with ideas.

You are welcome to comment there, track progress, add new tasks.

> 2. Disable debuginfo generation

While I agree that debuginfo packages aren't as useful as they could be (it would be nice if the KDE crash dialog could automatically download them and generate a more useful backtrace, for example), I don't think they're entirely useless, and outside of build time, generating them doesn't have any drawbacks. As long as our infrastructure can deal with it, I don't see much of a reason to disable it.

Are we sure that RPMs are _REALLY_ stripped of debug information?

Yes. Mainly because RPM will fail the build if it can't properly strip
them and the debuginfo/debugsource packages are empty.
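For reference, if we do decide to go that way, debuginfo generation can be turned off with standard rpmbuild macros; a minimal sketch (the exact macro set in our rpm configuration may differ slightly):

```
# In a macros file, or as %define debug_package %{nil} in a spec:
%debug_package %{nil}

# And drop debug info from the default compiler flags:
%optflags -O2 -g0
```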

> 3. Use BFD
> 4. Use LD.lld

Agreed about both of those -- gold was a nice try (and bfd still hasn't caught up with some things like ICF), but lately it hasn't been maintained and is more trouble than it's worth.
I'm leaning towards trying to use lld for everything (faster and can keep ICF) with exceptions for where it breaks, but haven't gotten around to checking how well lld works these days. I'll definitely run some checks when 9.0 is in the tree (lld is making progress fast).

+1

Preliminary tests seem to indicate the LTO stuff is still busted
across compilers with lld (mixing gcc and lld is a no-go). But maybe
that'll change in the near future...
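For per-package experiments, the linker can be switched without any global change; -fuse-ld is a standard gcc/clang driver flag (a sketch, assuming the alternative linkers are installed):

```
# Try lld for a package:
clang -fuse-ld=lld -o foo foo.o

# Fall back to bfd where lld (or gold) misbehaves:
gcc -fuse-ld=bfd -o foo foo.o
```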

> 5. Toybox

Toybox works really well and would certainly be a good replacement inside the initramfs at least -- not sure if we want to do it in the whole system though (keep in mind that people [and Makefiles] will run scripts written on other distributions that hardcode flags to coreutils tools). Even switching to libarchive tar wasn't as painless as it ought to be.
Also, toybox can replace more than just coreutils - awk, grep, sed, find, kmod, procps, psmisc, which, util-linux and a few more are all in there.
I'm not sure how complete those tools are and whether or not we could end up replacing some of those as well. The fewer tools are replaced, the less likely it is to have any benefit...

If we never research it and give it a try, we'll never know how complete these tools are. IIRC Android uses toybox, right?

Unfortunately, yes. But Android can get away with this because they
don't ship a lot of userspace anyway. In general, I'm very much
*not* in favor of even going down this rabbit hole. I've been there
with embedded stuff and it's an ugly world.

> 7. Get rid of GCC
> Try to start research on how GCC can be stripped out of builds with
> LLVM/clang

One big issue there is libstdc++ -- while clang has libc++ (which is actually better in many ways), if we care about binary compatibility with other distributions, we can't use it because otherwise we'd run into situations where e.g. a binary-only game built on some other distro links to Qt and libstdc++ (that would crash instantly on trying to runtime link to Qt built with libc++).
clang also uses the gcc command line tool to locate headers for intrinsics (/usr/lib64/gcc/x86_64-openmandriva-linux-gnu/9.1.0/include etc.) - that could probably be changed.

Who would actually be affected by changing away from libstdc++?
Well, I guess stuff that relies on glibc and gcc may fail, like proprietary software (Steam, NVIDIA blob drivers etc.), right?

> and use compiler-rt by default

We tried that before -- while it worked great for most stuff, we ran into binary compatibility problems in some weird situations (e.g. firefox built with compiler-rt crashing when loading the Flash plugin because of clashes with statically linked libgcc bits in there).
Fortunately almost nothing cares about Flash anymore, but that sort of BS is something we still need to look out for.
Probably it's safe to build most applications with compiler-rt -- but there's some potential for it getting messy as soon as something starts dlopening stuff built with something else. For normal libraries, fortunately it's easy to test (e.g. see if a Qt application built with libgcc will keep running if Qt is linked with compiler-rt).
Just swapping out Qt should give us a good impression, given that it will also show us whether Plasma can still load plugins built with libgcc.
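The flag side of that experiment is straightforward; --rtlib is a standard clang driver option (a sketch, library names are illustrative):

```
# Link the library against compiler-rt builtins instead of libgcc:
clang --rtlib=compiler-rt -shared -o libfoo.so foo.o
# The consuming application stays on the default libgcc; the test is
# then simply whether it still runs and can dlopen the library.
```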

+1

> 8. PGO
> Type a list of packages that may benefit from PGO (e.g. firefox, webkit,
> kernel: Kernel PGO)

Well, llvm/clang itself can be built with PGO.
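As a sketch of what per-package PGO involves with clang (standard flags; names like app are illustrative):

```
# 1. Build instrumented:
clang -O2 -fprofile-generate -o app app.c
# 2. Run a representative workload, then merge the raw profiles:
./app --benchmark
llvm-profdata merge -o app.profdata *.profraw
# 3. Rebuild using the profile:
clang -O2 -fprofile-use=app.profdata -o app app.c
```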

> 10. AutoFDO/BOLT
> Try to start research on post-link optimizations
> Link-time and post-link optimizations // perl11 blog

Good idea.

If I understand right, we'd need to run the distro on real hardware (for example x86_64), run various scripts that bench python, libc, whatever, gather the BOLT data, and then use the resulting bolt.fdata to compile stuff on ABF?

> *Other*:
>
> 1. Use github issues as bug tracker

AFAIK QA people prefer bugzilla (and there's the issue of github being owned by M$ -- so far they've done well, but can we trust them forever?) - but now is the right time to look at possible options again...

> 2. Enforce control for PR to other branches
> Integrate "travis-like" tool into our github repo to allow merging changes
> between branches.
> Enforce PR approval process - i.e. Release Manager or QA accepts PR for
> other branches.

+1, I wonder if we can somehow merge this into the QA tool...
PR (or something like it) to non-cooker branch --> triggers build to see if it compiles, notifies QA and RM, starts voting process

Yes something like that.

I'm uncomfortable with the fact that we rely so heavily on GitHub. I'd
like to see if we can explore migrating our sources to something like
Pagure hosted on our own infrastructure. Unlike most of our services,
a Git server isn't in itself terribly expensive, especially since we
store binary files (like tarballs) on ABF file store anyway.

We could put it on a small VPS or cloud server somewhere.

Here's a few more things I've been thinking of -- not necessarily for 4.1:

Cooker:
- Early move to LLVM/Clang 9 -- there's finally patches adding RISC-V hardfloat support, and LLD has made a lot of progress

+1

- Reduce the duplication of binutils tools (binutils, elfutils, llvm). Chances are the LLVM tools are good enough to handle debuginfo generation these days (main reason for elfutils to be in the default build environment right now)

+1

I tried this, remember? There are two problems: clang doesn't generate
proper DWARF 5 debug symbols yet (they were broken and missing a bunch
of data), and the LLVM-based elfutils replacement doesn't support all
the features of sourceware elfutils that RPM actually uses for
debuginfo generation.

- Check if using Polly can speed up some libraries -- it should be possible to at least somewhat parallelize libjpeg and friends (but the tools for doing that automatically may not be up to the task yet)

Speaking of LLVM Polly, I did some tests comparing an -O3 build of openssl against a Polly build:
build with LLVM/polly support · OpenMandrivaAssociation/openssl@b77dbe1 · GitHub
The Polly build was slightly faster than plain -O3.
If I understand right, Polly does polyhedral-model loop optimizations -- the "polyhedra" are loop iteration spaces, not geometry or polynomials in the program itself -- so loop-heavy 2D/3D graphics tools like Krita, GIMP, Blender... maybe Mesa could benefit?
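For anyone who wants to repeat such tests: when clang is built with Polly support, it is enabled through -mllvm options (a sketch; the source file name is illustrative):

```
# Polyhedral loop optimizations on top of the normal pipeline:
clang -O3 -mllvm -polly -c jdcolor.c
# With Polly's stripmine vectorizer as well:
clang -O3 -mllvm -polly -mllvm -polly-vectorizer=stripmine -c jdcolor.c
```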

- Plasma customization tool in OM Welcome that allows people to quickly get to something they feel at home in, e.g. "Act like [*] OpenMandriva [ ] Windows [ ] MacOS" (where OpenMandriva is what we think is best, Windows does things like double-clicks, MacOS does stuff like global menu bar on top of screen)

+1
Maybe as a Calamares module?

- Allow running Android apps (I have Anbox sort of working, but we can do better than the Brokenbuntu crowd...)

+1
That would be a really nice feature

- Add OnlyOffice as an alternative to LO (this is a bit painful because they use a load of nonstandard tools, including some prehistoric stuff)
- Text mode/server installer

- An experimental build that does what could be done if we didn't care about binary compatibility (glibc -> musl, libstdc++ -> libc++, libgcc -> compiler-rt) that doesn't replace the existing OS, but would be rather interesting to benchmark -- if only to see how much binary compatibility is costing us.

Sounds like a lot of work to be done.

- Possibly replace firewalld with ufw (because the latter has a proper plasma interface). (Build the [fire]wall! Make Linux great again!)

-1
It's boontoo-centric, and was last updated in 2016. That does not sound like it is actively maintained.
This looks promising: GitHub - nx-desktop/nx-firewall: Firewall KCM

The ManaTools project is already working on a FirewallD frontend:

It'd be great if people would come and help with that; we could ship
that as a frontend. Like dnfdragora, it offers Qt5, ncurses, and (if
desired) GTK+3 frontends.

And probably the most controversial one:
- Change the filesystem layout for multilib. Some distributions have started going for stuff like /usr/x86_64-linux-gnu/lib instead of /usr/lib64, /usr/i686-linux-gnu/lib instead of /usr/lib etc., and that's a really good idea because it allows installing compat libraries for more than just one 64-bit and one 32-bit architecture. This is useful especially now that qemu binfmt-misc stuff is working well and transparently launches stuff built for other architectures (think wine on ARM being able to run x86 Windows binaries because it has access to qemu and i686/x86_64 libraries). With the current lib64/lib FS layout, e.g. a user on aarch64 may want to install an x86_64 libjpeg to please whatever binary-only application and an i686 libjpeg to please wine -- but they can't go to /usr/lib64 (where aarch64 stuff lives) nor /usr/lib (where armv7hnl stuff lives).
We don't have to break compatibility with anything to do this -- /usr/lib64 and /usr/lib could be symlinks to the native 64- and 32-bit arches, so stuff hardcoding /usr/lib64 or /usr/lib would still find what it's looking for and install to the right place.
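A sketch of what that could look like on an x86_64 install (the triplet directory names are illustrative; the point is the compat symlinks):

```
/usr/x86_64-linux-gnu/lib/   # native 64-bit libraries
/usr/i686-linux-gnu/lib/     # native 32-bit libraries
/usr/aarch64-linux-gnu/lib/  # foreign-arch compat libraries, side by side
/usr/lib64 -> x86_64-linux-gnu/lib   # hardcoded /usr/lib64 paths keep working
/usr/lib -> i686-linux-gnu/lib       # hardcoded /usr/lib paths keep working
```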

Sounds interesting.

I'm really not a fan of this idea. There are several issues:
* We literally can't have compatibility for /usr/lib/i686-linux-gnu ->
/usr/lib, because that's fundamentally not possible. So we lose actual
binary compatibility with standard FHS-centric ABI.
* It doesn't actually help anything, because you're still restricted
to native CPU unless you load qemu-user. And even with that, you get
all kinds of weird states.
* There are strange issues with mixing expectations with native
binaries and non-native libraries. Output parsing, helpers, etc. are
not necessarily the same and things will break in odd ways.

With mock being able to dynamically set up foreign architecture
chroots, I think the need for this feature is pretty much nil. The
main issue right now is that we don't ship any mock configs for
people to use locally. I do this in Mageia so that people can use mock
locally, and that's something we should do for OpenMandriva too.

Mock upstream has mock-core-configs, and in Mageia, I added a
supplementary package that builds on top of that called
mock-mageia-configs to include targets for tainted (restricted) and
nonfree: [packages] Index of /cauldron/mock-mageia-configs

We probably want to do something similar for OpenMandriva.

ABF:
- Better handling of errors that usually go away by trying again, such as "[MIRROR] kauth-5.59.0-1-omv4000.znver1.rpm: Interrupted by header callback: Server reports Content-Length: 148 but expected size is: 65492
[FAILED] kauth-5.59.0-1-omv4000.znver1.rpm: No more mirrors to try - All mirrors were already tried without success
Failed to set locale, defaulting to C
Error: Error downloading packages:
  Cannot download kauth-5.59.0-1-omv4000.znver1.rpm: All mirrors were tried"
- Better handling of interrupted chain builds (e.g. when building KDE Frameworks and a build somewhere in the middle fails on one arch, allow resuming the chain build when the failed package is fixed)
- Easier way to publish a load of interdependent packages at the same time (e.g. build updated KDE Frameworks and rebuild Plasma against it, then publish all at the same time so people updating in between don't end up with a system that is part 5.58, part 5.59 and unlikely to work properly)

I'd suggest adding support for ordered mass builds, so you can easily build the whole KDE Frameworks stack in a given order :slight_smile:

You could use the DNF Python API to be able to determine build orders
by pushing it through the dependency resolver...
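The ordering step itself is simple once the resolver provides the edges; a minimal pure-Python sketch (the deps map stands in for what the DNF API would return, and the package names are just examples):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical BuildRequires edges, as the dependency resolver might
# report them: package -> set of packages that must be built first.
deps = {
    "kcoreaddons": set(),
    "kauth": {"kcoreaddons"},
    "kconfig": {"kcoreaddons"},
    "kconfigwidgets": {"kauth", "kconfig"},
}

# static_order() yields a valid build order, dependencies first,
# and raises CycleError if the graph has a dependency loop.
build_order = list(TopologicalSorter(deps).static_order())
print(build_order)  # dependencies come before the packages that need them
```

With real data, the deps map would be filled from DNF query results before being fed to the sorter, and each "level" of the order could even be built in parallel.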