
RE: [Xen-users] Exporting a PCI Device



>-----Original Message-----
>From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
>[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jan Kalcic
>Sent: Monday, 18 February 2008 12:02
>To: deshantm@xxxxxxxxx
>Cc: xen-users@xxxxxxxxxxxxxxxxxxx
>Subject: Re: [Xen-users] Exporting a PCI Device
>
>Todd Deshane wrote:
>>
>> Hi,
>>
>> On Feb 17, 2008 6:34 PM, Jan Kalcic <jandot@xxxxxxxxxxxxxx 
>> <mailto:jandot@xxxxxxxxxxxxxx>> wrote:
>>
>>     Hi All,
>>
>>     just a quick question I could not figure out. Is there a way to
>>     export a
>>     PCI device to multiple VMs (para) keeping it available 
>to dom0? Xen
>>     version is 3.0.4.
>>
>>
>> As far as I know you can't. That is what virtual devices are 
>used for 
>> right?
>>
>> In what scenario would you want to grant direct access to a 
>PCI device 
>> in VMs and also in dom0?
>>
>Hi Todd,
>
>the PCI device which I would need to "share" is the fibre 
>channel card connected to two different storage, on of this is 
>the VMs repository which has to be visible to dom0 and the 
>other one is the data storage for VMs which, obviously, has to 
>be visibile to VMs. So the solution would be using two 
>different fibre channel cards, right?

What I would do is make all storage available to dom0 and use the
regular methods to export it to the domUs.
In other words: treat dom0 as a very fancy piece of hardware that sits
between your kernel and the fibre-channel-attached storage. For generic
workloads the Virtual Block Device should be fast enough; otherwise you
should probably consider a separate server dedicated to that single
task.
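For example, once dom0 sees the LUNs, exporting one to a guest is just a
disk line in the domU config file (the device path and guest device name
here are made up, substitute your own):

```
# /etc/xen/domu1.cfg -- relevant line only; paths are examples
disk = [ 'phy:/dev/mapper/vm-data-lun,xvda,w' ]
```

That way only dom0 ever talks to the fibre channel card, and the guest
just sees an ordinary virtual block device.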

I don't know if you are going to lose any fibre-channel advantages, but
I figure you also reduce administrative complexity to the dom0 only.

- Joris


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
