[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-devel] [PATCH 0/4] pvSCSI driver



Hi, all,

The following e-mails contain the latest version of the pvSCSI driver, which
provides the functionality for guest domains to issue SCSI commands. We
consider this functionality very useful in various cases; for example,
backing up to a tape drive from a guest domain is a typical one.


Since the last post of the pvSCSI driver to the Xen community on Feb. 18th,
we have received many requests for improvements. We recognize that the
primary improvements, which affect the basic design of the pvSCSI driver,
are as follows.

1. Create a "virtual (SCSI) host" on Dom0, and attach it to the
   appropriate guest domain.
2. Hot-plug a LUN to the virtual host created in 1. above. The LUN
   immediately appears and becomes usable on the guest domain. (By
   repeating this procedure, you can attach multiple LUNs to the
   guest domain.)
3. Support arbitrary SCSI ID mapping between the physical host(s) on
   Dom0 and the virtual host. (A physical ID "host:channel:target:lun"
   on Dom0 can be mapped to an arbitrary virtual ID
   "host:channel:target:lun".)
4. Items 1. to 3. above create the need to "munge" request and reply
   packets on Dom0. (e.g. the LUN list in the reply packet for a
   REPORT_LUNS command should be altered appropriately so that it
   reports the LUNs actually attached to the virtual host.)
5. Support Netchannel2, the new communication mechanism between Dom0
   and guest domains, in order to improve performance.


First, I have to comment on item 5. above. As for Netchannel2, we
recognize that its source code is not yet available, so the current
pvSCSI driver does not support it. Once it becomes available, we would
like to replace the current communication mechanism with it.


As for items 1. to 3. above, I will briefly describe the design of the
driver below.

a. The direction to attach the virtual host to the guest domain, which
   is issued by a user-land tool (e.g. Xend), triggers the following
   sequence.
     - A pair of backend and frontend drivers starts to execute.
     - An instance of Xenbus, including the ring, event channel and
       grant table, is allocated between the pair of drivers.
     - In the backend driver, only an empty "translation table" (see
       below for details) is prepared.
     - In the frontend driver, a "Scsi_Host" structure is allocated
       (in the case of a Linux guest).
     - At this point, the guest domain can see the virtual host, which
       contains no LUNs.

b. The direction to attach a LUN to the virtual host triggers the
   following sequence.
     - A "translation table entry" mapping the virtual ID to the
       physical ID is added to the "translation table" in the backend
       driver. (The virtual ID and the physical ID are given by the
       user-land tool.)
     - The attachment is notified to the frontend driver, which then
       executes "scsi_add_device()". This causes an INQUIRY command to
       be issued to the backend driver (in the case of a Linux guest).
     - The virtual ID of the INQUIRY command is translated into the
       physical ID via the translation table in the backend driver, and
       the command is routed to the native SCSI driver.
     - After a reply to the INQUIRY command is received, the LUN
       becomes usable as a "virtualized LUN" on the guest VM.
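The translation step in this sequence can be sketched as a simple lookup: given the virtual "host:channel:target:lun" carried in a guest request, find the matching entry and return the physical ID. Again, all names here are illustrative assumptions, not the actual patch code.

```c
#include <stdbool.h>

/* Hypothetical sketch of the backend's ID translation on the request
 * path; a linear search is enough for a handful of LUNs per host. */
struct scsi_id {
    unsigned host, channel, target, lun;
};

struct translation_entry {
    struct scsi_id virt;
    struct scsi_id phys;
};

static bool id_equal(const struct scsi_id *a, const struct scsi_id *b)
{
    return a->host == b->host && a->channel == b->channel &&
           a->target == b->target && a->lun == b->lun;
}

/* Returns false if the guest addressed a LUN that was never
 * hot-plugged into its virtual host. */
static bool pvscsi_translate(const struct translation_entry *table,
                             unsigned nr_entries,
                             const struct scsi_id *virt,
                             struct scsi_id *phys_out)
{
    unsigned i;

    for (i = 0; i < nr_entries; i++) {
        if (id_equal(&table[i].virt, virt)) {
            *phys_out = table[i].phys;
            return true;
        }
    }
    return false;
}
```

A failed lookup would be reported back to the guest as an error rather than forwarded, since the virtual host by definition only contains mapped LUNs.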


As for item 4. above, we have implemented sample code for a "framework"
that makes it possible to flexibly add or modify the various "munging"
functions, one per SCSI command.
We consider that more discussion is needed on how the design and
implementation should look, and we would like to receive comments from
the community.
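A per-opcode hook table is one way such a framework could be organized; the sketch below uses hypothetical names (munge_fn, find_munge_hook, and a placeholder REPORT LUNS handler), and the actual patch may structure this quite differently.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a per-SCSI-command "munging" framework: each
 * opcode whose reply must be rewritten (e.g. REPORT LUNS, so the guest
 * only sees LUNs attached to its virtual host) registers a handler. */

#define SCSI_REPORT_LUNS 0xa0

typedef void (*munge_fn)(uint8_t *reply, size_t len);

struct munge_hook {
    uint8_t opcode;
    munge_fn fn;
};

/* Placeholder handler: a real one would filter the LUN list in the
 * REPORT LUNS reply; here we merely zero the 4-byte list length. */
static void munge_report_luns(uint8_t *reply, size_t len)
{
    if (len >= 4)
        reply[0] = reply[1] = reply[2] = reply[3] = 0;
}

static const struct munge_hook hooks[] = {
    { SCSI_REPORT_LUNS, munge_report_luns },
};

/* Dispatch: commands without a registered hook pass through unchanged. */
static munge_fn find_munge_hook(uint8_t opcode)
{
    size_t i;

    for (i = 0; i < sizeof(hooks) / sizeof(hooks[0]); i++)
        if (hooks[i].opcode == opcode)
            return hooks[i].fn;
    return NULL;
}
```

The point of the table is extensibility: adding a munging rule for another command means adding one entry and one handler, without touching the dispatch path.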


As for the user-land tool, we also consider that more discussion is
needed about extending Xend to support "attach host" and "attach lun",
especially regarding the user interface. We would like to continue
discussing it, so for now I have attached a simple script with the
patch. (Usage is very simple; please see the script.)


Best regards,

-----
Jun Kamada



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

