
hybrid-graphics-linux team mailing list archive

Re: Optimus Solution found!!!! Using my Nvidia card in my Alienware M11X R2!!!


Hi,

> 	Secondly, there's a -modulepath switch, so I created
> a duplicate of /usr/lib/xorg/modules (let's
> say /usr/lib/xorg/modules.nvidia), and symlink all
> the files from /usr/lib/xorg/modules, except extensions/libglx.so,
> which I symlink it to the real nvidia library. And when you want to
> run a GL app using the nvidia libGL, set
> LD_LIBRARY_PATH=/usr/lib/nvidia (this is where the nvidia libGL is on
> my setup).
> 
> 	I'm on Debian and using all the nvidia binary driver from
> Debian (non-free). It uses the "alternatives" (/etc/alternatives/)
> approach for things like libglx.so and libGL, so I don't have to deal
> with them myself.
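
	The duplicate-module-tree trick quoted above can be sketched
as a small shell helper. This is only a sketch: the function name
clone_modules is mine, and the example paths follow the Debian layout
described in the mail (adjust them for your distro).

```shell
#!/bin/sh
# clone_modules MODULES CLONE NVGLX
# Build a copy of the Xorg module tree in CLONE, symlinking everything
# from MODULES except extensions/libglx.so, which instead points at the
# nvidia libglx (NVGLX). Pass the result to Xorg via -modulepath.
clone_modules() {
    modules=$1; clone=$2; nvglx=$3
    mkdir -p "$clone/extensions"
    # Symlink every top-level module, skipping the extensions dir itself
    for f in "$modules"/*; do
        [ "$(basename "$f")" = extensions ] && continue
        ln -sf "$f" "$clone/"
    done
    # Symlink every extension except libglx.so
    for f in "$modules"/extensions/*; do
        [ "$(basename "$f")" = libglx.so ] && continue
        ln -sf "$f" "$clone/extensions/"
    done
    # libglx.so points at the real nvidia library instead
    ln -sf "$nvglx" "$clone/extensions/libglx.so"
}

# e.g. (paths from the setup described above):
# clone_modules /usr/lib/xorg/modules \
#               /usr/lib/xorg/modules.nvidia \
#               /usr/lib/nvidia/libglx.so
```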


	Just tried your VirtualGL approach with two separate Xorg
servers on my M11x-R2, and it worked pretty well.

	Also been experimenting with different "-c" compression
settings for vglrun; the performance differences are interesting.

	Quick note on my intel + nvidia GL setup: it doesn't quite
work with vglrun. We need the actual app to use the nvidia libGL while
the displaying part uses the intel (mesa) libGL, but since vglrun runs
everything as a single process, it cannot load two different libGLs.

	On the other hand, with the vglclient -gl and vglrun -c jpeg
combo, in theory the displaying client is able to use GL on the intel
side. However I'm getting a lot of framerate jitter compared with
vglclient without -gl, and I'm not sure why. The only suspicious thing
is that vglclient -gl is using close to 100% CPU.
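
	For reference, the combo above looks roughly like this. These
are illustrative invocations only: the display numbers and the
LD_LIBRARY_PATH value are assumptions from my setup, not something
universal.

```shell
# On the displaying side: receive frames, decoding/drawing via GL
# (picks up the intel/mesa libGL from the default library path).
vglclient -gl &

# On the app side: render on the nvidia X server (:1 here, an
# assumption), send JPEG-compressed frames to the client, and point
# the app at the nvidia libGL (path from my Debian setup).
DISPLAY=:1 LD_LIBRARY_PATH=/usr/lib/nvidia vglrun -c jpeg glxgears
```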


Pigeon.

