Old problem is back! bbswitch is broken!

It would be good if we could somehow deal with these dual-adapter laptops at the installation stage. I’ll have a word with crazy. We would of course have to tune up the Bumblebee packaging to support it, as well as look into PRIME.


From what I remember, MGA6 added support for NVIDIA Prime.

A new experimental tool named mageia-prime can be used to configure the NVIDIA Prime supported by recent Linux kernels and Xorg servers. It allows to fully switch to using the NVIDIA GPU without the overhead of Bumblebee, and is particularly suited for use with CUDA.

from the MGA6 release notes.

Source code: GitHub - ghibo/mageia-prime: Configuration Tool for NVidia Prime for Mageia GNU/Linux

Maybe it would be a good idea to do something similar?


I have been looking for alternatives for this hybrid-graphics laptop whenever Bumblebee stops working. I tend to prefer Bumblebee because it turns on the nvidia GPU only when, and only for, executing specific applications, saving energy the rest of the time. This site says that mageia-prime lets you switch the X support from the IGP to the DGP at the expense of battery life. In other words, after switching to the DGP, all graphical applications are handled by the more power-hungry nvidia card.
Bumblebee is a Linux counterpart to nvidia’s Optimus technology: it saves energy by activating nvidia’s graphics card only when needed.
I couldn’t find anything else on Linux that does this besides Bumblebee.
The site above also says the performance of nvidia’s card under Bumblebee may sometimes be equal to, or even lower than, the IGP’s (Intel in my case). A test with glxspheres64 shows that this can happen.

Using the nvidia card with Bumblebee:

$ optirun glxspheres64
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
Visual ID of window: 0xe2
Context is Direct
OpenGL Renderer: GeForce GT 520M/PCIe/SSE2
63.409458 frames/sec - 55.432548 Mpixels/sec
59.860649 frames/sec - 52.330180 Mpixels/sec
59.860763 frames/sec - 52.330279 Mpixels/sec

Using the Intel card / Mesa:

$ glxspheres64
Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
Visual ID of window: 0xe2
Context is Direct
OpenGL Renderer: Mesa DRI Intel(R) Sandybridge Mobile
60.648635 frames/sec - 53.019037 Mpixels/sec
59.836652 frames/sec - 52.309201 Mpixels/sec
59.859503 frames/sec - 52.329178 Mpixels/sec

which shows a negligible gain in frames per second with optirun.
But this is not the case for all applications. The FlightGear simulator is an example where nvidia provides a much smoother experience than Intel/Mesa, which often leads to freezes.

Also, some applications need the GPU simply for computational performance, and having to install mageia-prime and reboot just to run a single application (while saving energy the rest of the time) is not something I would consider practical.


I forgot to say that the purpose of my last post was to propose not removing Bumblebee from the repositories… Not everyone would consider permanently switching the graphics support to the DGP with mageia-prime/nvidia-prime a good option.


Actually, I can only give a list of steps I guess one should try to get Bumblebee working with newer kernels. There are a lot of questions I can’t answer yet. For example, I’ve just installed the new kernel 4.17.5 and had to rebuild the Bumblebee stuff.

Why did the new kernel not try to install nvidia-long-lived and bbswitch via dkms?

Why

dkms -m nvidia-long-lived -v 390.77-1

not work (make.log attached),

but,

urpmi -U --replacefiles --replacepkgs dkms-nvidia-long-lived

work just fine? (make_error.log.txt, 209.2 KB, attached)
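For reference, a manual dkms run normally needs an explicit action (`build`, `install`, `status`…) plus the target kernel, so a bare `dkms -m … -v …` is rejected. A sketch of the steps I would try (module name and version taken from this post; adjust to your kernel — this is an assumption, not a tested recipe for our packages):

```shell
# Show which modules/versions dkms knows about and their build state
dkms status

# Build and install the nvidia module against the running kernel
# (version 390.77-1 as mentioned above; requires root)
dkms build   -m nvidia-long-lived -v 390.77-1 -k "$(uname -r)"
dkms install -m nvidia-long-lived -v 390.77-1 -k "$(uname -r)"

# Check whether bbswitch was rebuilt as well
dkms status | grep bbswitch
```

That would also explain why reinstalling the dkms-nvidia-long-lived package worked: its scriptlets run the build/install actions for you.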


Ok. If ever you feel that you can, please do.


I no longer know what the expected behavior is, but that is what needs to be defined first. Back when I had nVidia graphics, here is how it worked (as an example of what might be expected behavior).

First, let me say that when I had nVidia graphics the nv driver never worked and nouveau was not worth a flip. I know people keep saying nouveau is way better nowadays. They said the same thing back then too. Even today, for some people, nouveau just does not work well. That is unfortunate.

Anyway, you would install Mandriva and, after first boot, run XFdrake, pick your nVidia graphics series from the list, and XFdrake would install all the necessary packages and automatically configure the correct nvidia driver; then you would reboot. Afterwards, when you installed a new kernel, the new nvidia packages were also installed automatically. Nothing for the user to do but reboot, which you already needed to do for the new kernel anyway.

That was done by dkms, of course. So one might suppose there is something wrong, or just plain #$%^&* up, with our XFdrake and dkms scripts. And I’m certain both could be fixed. But it seems both have been broken for a long time (or am I mistaken?).

I don’t fully understand why OpenMandriva can’t or won’t do this the same way. I keep trying to make the point that OpenMandriva should make building out-of-tree kernel modules an ironclad part of introducing a new kernel. I believe this is common among many popular Linux distros. But I’m left with the feeling that I’ll still be making the same point two years from now.
Nothing about this ever changes, it seems.

Now I need to go ponder why I’m even talking about this when I don’t have any nVidia or other graphics requiring 3rd-party software. :roll_eyes:

:monkey_face:

This happened again on my desktop with an nvidia graphics card. I installed the new kernel 4.17 and dkms did not do its expected work… Problems. Upon rebooting, if dkms doesn’t do what it is expected to do, I’ll have to fix this manually again. Opening a new thread…


Please do not use bbswitch; this software is old and outdated.

Use xrandr instead:

https://xorg-team.pages.debian.net/xorg/howto/use-xrandr.html

More info here:
https://wiki.archlinux.org/index.php/PRIME
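To make the suggestion concrete, the PRIME output-source setup described in those links looks roughly like this (a sketch: it assumes the NVIDIA GPU is the primary renderer and the Intel GPU drives the panel through the modesetting driver; the provider names below are typical, not guaranteed, so check the `--listproviders` output on your own machine first):

```shell
# List the render/output providers known to the X server
xrandr --listproviders

# Attach the Intel (modesetting) provider as the output sink
# for the NVIDIA source; provider names vary per machine
xrandr --setprovideroutputsource modesetting NVIDIA-0

# Re-detect outputs and apply default modes
xrandr --auto
```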

Following the link you sent, one can find:

“Note: GPU offloading is not supported by the closed-source drivers. To get PRIME to work you have to use the discrete card as the primary GPU. For GPU switching, see Bumblebee.”

Bumblebee includes bbswitch.

This is the feature I mentioned above (see the citation below).

Thus, there seems to be a feature provided by Bumblebee (bbswitch) that is not covered by PRIME or xrandr.
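Note that the quoted caveat applies to the closed-source driver; with the open-source stack (nouveau/Mesa), per-application offload does exist via the `DRI_PRIME` environment variable. A quick way to see it (assuming glxinfo from mesa-demos is installed):

```shell
# Run this one application on the discrete GPU (open-source drivers only)
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"

# Everything else keeps rendering on the integrated GPU
glxinfo | grep "OpenGL renderer"
```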

bumblebee is a piece of code from 2013 and has not been updated since then.

Here is some proof that native offloading works, and how to set it up without the ancient bumblebee software:
https://us.download.nvidia.com/XFree86/Linux-x86/375.39/README/randr14.html
https://forum.manjaro.org/t/howto-set-up-prime-with-nvidia-proprietary-driver/40225

TPG,

In the first reference you gave, one can find:

Caveats

The NVIDIA driver currently only supports the Source Output capability. It does not support render offload and cannot be used as an output sink.

In the second reference there is this:

bumblebee (“render offload”): uses the dGPU only when requested, allows power saving, is the Manjaro default; some overhead, so lower raw performance.
PRIME (“output offload”): uses the dGPU directly, better raw performance; dGPU and iGPU both powered on constantly, needs manual configuration.

Thus, with the nvidia proprietary driver, the only option for powering up the dGPU only when requested is to use bumblebee.