On 12/5/07, Fajar A. Nugraha <fajar@xxxxxxxxxxxxx> wrote:
> rishi pathak wrote:
> > Hi,
> > The problem is that we have got a high-speed interconnect, and the
> > driver for it was compiled for 2.6.9-5 EL. We have got only binaries
> > of the driver, not the source. For this reason we have to stick to
> > the said kernel.
> You DO know a BINARY kernel module built for a normal Linux kernel
> doesn't work on a Xen PV kernel, right?
I was not aware of that fact :(. Thanks for the information.
> The module itself must be recompiled. If you HAVE to use that
> particular version of the kernel (and not the one from RHEL 4.5, for
> example) then most likely it won't work in a PV guest.
If I compile the module from within the guest, it will work, right?
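(As an aside, one quick way to see why a prebuilt module is tied to a single kernel is its "vermagic" string, which must match the running kernel's `uname -r`. A minimal sketch of that check; the version strings below are placeholders, and in practice you would get them from `modinfo -F vermagic driver.ko` and `uname -r`:)

```shell
# Sketch: a prebuilt .ko only loads cleanly when its vermagic matches
# the running kernel. check_vermagic compares the two strings.
check_vermagic() {
    mod_ver="$1"    # e.g. from: modinfo -F vermagic driver.ko | awk '{print $1}'
    kern_ver="$2"   # e.g. from: uname -r
    if [ "$mod_ver" = "$kern_ver" ]; then
        echo "match"
    else
        echo "mismatch"
    fi
}

check_vermagic "2.6.9-5.EL" "2.6.9-5.EL"        # prints "match"
check_vermagic "2.6.9-5.EL" "2.6.9-42.ELxenU"   # prints "mismatch"
```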
> > Let me explain the scenario/problem for which we need
> > virtualization. We have a compute cluster with nodes having 8 GB of
> > memory. The nodes are all dual-socket, single-core Intel Xeon. The
> > driver for the high-speed interconnect can only work with 4 GB of
> > available RAM (i.e., it cannot see more than 4 GB). If I restrict
> > RAM to 4 GB then it would be a waste. So the solution that came up
> > was to divide each node into two virtual machines and assign the
> > high-speed interconnect to one of them. Both machines will have 4 GB.
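(The split described above would look roughly like this in a Xen domU config file; every name and path here is an illustrative placeholder, not a tested configuration:)

```
# Illustrative Xen domU config for one half of an 8 GB node
name    = "node01-vm1"
kernel  = "/boot/vmlinuz-2.6.9-5.EL-xenU"   # placeholder guest kernel path
memory  = 4096                  # 4 GB for this guest
vcpus   = 1                     # one core each on a dual-socket single-core node
disk    = [ 'phy:/dev/vg0/vm1,xvda,w' ]
vif     = [ '' ]
```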
> Start here to run RHEL4 as a PV guest:
> http://people.redhat.com/riel/RHEL4-Xen-HOWTO
> I have a strong hunch your setup won't work, though.
> You might be able to use an HVM domU instead, and (if the device is
> PCI) use PCI passthrough. This way you should be able to use a normal
> RHEL4 kernel.
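(For reference, a minimal sketch of Xen PCI passthrough setup: hide the device from dom0 with pciback, then list it in the guest config. The device address 0000:03:00.0 is a placeholder; find the real one with `lspci`.)

```
# In dom0, hide the interconnect card from dom0's drivers:
#   modprobe pciback hide=(0000:03:00.0)

# In the domU config file, assign the hidden device to the guest:
pci = [ '0000:03:00.0' ]
```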
The nodes do not have support for Intel VT. What would the performance
degradation be if I used HVM instead of PV?
I tried inserting the module in a 2.6.9-42-based domU... oops: kernel panic.
> Regards,
> Fajar
Regards,
Rishi Pathak