New kernel 4.17 does not "use" dkms to build modules upon installation

Usually, when a new kernel is installed, dkms builds the needed modules so that they are properly loaded on the next boot.

This does not seem to happen with kernel 4.17. On two machines with nvidia and one with bumblebee, dkms did not build the modules during the 4.17 installation…

During reboot, an attempt to build and install the modules is made, and it seems successful, but the boot process does not continue. A message like

“Stop while booting process don’t finish”

is the last thing shown.

A second boot gives the same result.

Then I rebooted into recovery mode and installed nvidia’s dkms package and the new drivers to solve the problem.

Adelson, I confirm this. The problem is that the latest virtualbox dkms package is broken: the service that starts the module builds exits after the virtualbox dkms build fails, so the nVidia modules never get built. It will be fixed soon.

The installation of the new kernel 4.17.11 was followed by an attempt to build the nvidia modules. Nice. But it failed. Here is the make.log:

makenvidia.log.txt (209,2 KB)

Looking at the make.log file, we see that the build stops with a message about “stack”. I would just like to point out that a colleague of mine told me he has been facing “stack” problems in the kernel on Ubuntu Linux.

This is strange, but I have had this problem before. I then did as follows:

# dkms remove nvidia-long-lived/390.77-1 --all
# urpmi -U --replacepkgs --replacefiles dkms-nvidia-long-lived

Don’t know why it works this way, just that it does!
I had tried dkms install, but it did not work (the same “stack” problem as in the makenvidia.log file above).
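For anyone else hitting this, the two steps above generalize to any dkms module. Here is a dry-run sketch (it only prints the commands to run as root; the module name and version are examples — substitute whatever `dkms status` reports on your machine):

```shell
#!/bin/sh
# Dry-run sketch of the workaround above: prints the two commands
# for a given dkms module (name/version below are examples).
rebuild_dkms_module() {
    name="$1"     # e.g. nvidia-long-lived
    version="$2"  # e.g. 390.77-1
    # Drop the half-built module from every kernel dkms tracks...
    echo "dkms remove ${name}/${version} --all"
    # ...then force-reinstall the packaged sources, which re-triggers the build.
    echo "urpmi -U --replacepkgs --replacefiles dkms-${name}"
}

rebuild_dkms_module nvidia-long-lived 390.77-1
```

Remove the `echo`s to run it for real (as root).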

bbswitch is working as well…

Just applied the latest updates to another computer, a desktop with nvidia-current-396.45. Just the same…
During the installation of kernel-4.17.11, an attempt to build the nvidia-current drivers was made and failed. Since nvidia-current drives the main graphics card in this desktop, I would not have been able to reboot in normal mode. So, before rebooting, I downloaded dkms-nvidia-current-396.45-1 to be able to install it in recovery mode.

I rebooted and did the same two steps as above: dkms remove, then urpmi --replacepkgs --replacefiles dkms-nvidia-current.

make.log had the same messages about a “required executable stack”.

There is no error in your logs.

My guess is that there is an issue with building a dkms module when the new kernel-release-devel package is not installed yet.
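If that is the cause, it can be checked before triggering a build. A small sketch — the directory layout is an assumption about where the kernel build tree lands (on Mageia it comes from the kernel devel packages), so adjust the paths to your system:

```shell
#!/bin/sh
# Sketch: verify a kernel build tree exists before asking dkms to
# build for that kernel (the devel package normally provides it).
have_kernel_build_tree() {
    kver="$1"   # e.g. 4.17.11-desktop-1.mga6 (example version string)
    [ -d "/lib/modules/${kver}/build" ] || [ -d "/usr/src/linux-${kver}" ]
}

if ! have_kernel_build_tree "$(uname -r)"; then
    echo "no build tree for $(uname -r); install the kernel devel package first"
fi
```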

I don’t have any idea! I would just like to point out that the error also happens when the kernel is already installed and I try manually:

$ dkms install -m module_name -v module_version

And, with the kernel installed, it works when I do:

$ urpmi (--replacepkgs --replacefiles) dkms-module
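When the manual `dkms install` fails like this, the details end up in dkms’s own build log. A tiny illustrative helper (the path layout is standard dkms; the module name and version below are just examples):

```shell
#!/bin/sh
# Illustrative helper: where dkms writes the build log for a module,
# which is the file to read after a failed "dkms install".
dkms_makelog_path() {
    printf '/var/lib/dkms/%s/%s/build/make.log\n' "$1" "$2"
}

dkms_makelog_path nvidia-current 396.45
# -> /var/lib/dkms/nvidia-current/396.45/build/make.log
```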

@adelson.oliveira this seems like a good catch to me, as it should help the devs figure out what is wrong with the dkms script/kernel install script.

Looking forward to seeing this one fixed; as I recall, it has been around for a long time.

:monkey_face:

I’m onto this. Part of the problem was that the virtualbox dkms was only building three modules when there should be four. The other issue is that a quirk in the dkms script requires the kernel.symvers file to be in a specific place; this breaks the automated install of the nvidia dkms. The virtualbox issue is now fixed and an update is in testing; I am currently working on the nvidia issue.
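Until the fix lands, here is a sketch of what a symvers workaround could look like. It assumes the packaged kernel ships a compressed symvers file in /boot while the build expects Module.symvers inside the kernel build tree — both paths are assumptions, so check where your kernel package actually installs the file:

```shell
#!/bin/sh
# Sketch (assumptions: compressed symvers lives in /boot; the build
# tree wants Module.symvers). Copies it into place only if missing.
install_symvers() {
    kver="$1"
    src="/boot/symvers-${kver}.xz"
    dst="/lib/modules/${kver}/build/Module.symvers"
    if [ -f "$src" ] && [ ! -f "$dst" ]; then
        unxz -c "$src" > "$dst"
    fi
}

install_symvers 4.17.11-desktop-1.mga6   # example kernel version string
```

Run as root, with your own kernel version (`uname -r`) substituted.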