
Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm

On Fri, Oct 30, 2020 at 1:34 PM Masami Hiramatsu <masami.hiramatsu@xxxxxxxxxx> wrote:
Hi Oleksandr,
 
Hi Masami, all

[sorry for the possible format issue]
 
>> >
>> >       Could you tell me how can I test it?
>> >
>> >
>> > I assume it is due to the lack of the virtio-disk backend (which I haven't shared yet as I focused on the IOREQ/DM support on Arm in the
>> > first place).
>> > Could you wait a little bit, I am going to share it soon.
>>
>> Do you have a quick-and-dirty hack you can share in the meantime? Even
>> just on github as a special branch? It would be very useful to be able
>> to have a test-driver for the new feature.
>
> Well, I will provide a branch on github with our PoC virtio-disk backend by the end of this week. It will be possible to test this series with it.

Great! OK I'll be waiting for the PoC backend.

Thank you!

You can find the virtio-disk backend PoC (shared as-is) at [1].

Brief description...

The virtio-disk backend PoC is a completely standalone entity (an IOREQ server) which emulates a virtio-mmio disk device.
It is based on code from DEMU [2] (for the IOREQ server parts), some code from kvmtool [3] to implement the virtio protocol
and the disk operations on the underlying H/W, and Xenbus code to read its configuration from Xenstore
(it is configured via the domain config file). The last patch in this series (marked as RFC) adds the required bits to the libxl code.
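
For reference, the guest discovers such a device through a standard virtio-mmio device tree node. A rough illustration (the base address, size and interrupt below are made up; the toolstack generates the real values):

virtio@2000000 {
        compatible = "virtio,mmio";
        reg = <0x0 0x2000000 0x0 0x200>;
        interrupts = <0 16 4>; /* illustrative SPI */
};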

Some notes...

The backend can be used with the current V2 IOREQ series [4] without any modifications; all you need is to enable
CONFIG_IOREQ_SERVER on Arm [5], since it is disabled by default in this series.
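
A quick sketch of enabling it, assuming the usual Xen Kconfig workflow (whether the option sits behind EXPERT may depend on the series):

$ make -C xen menuconfig        # select CONFIG_IOREQ_SERVER
# or non-interactively:
$ echo "CONFIG_IOREQ_SERVER=y" >> xen/.config
$ make -C xen olddefconfig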

Please note that in our system we run the backend in DomD (a driver domain). I haven't tested it in Dom0,
since in our system Dom0 is thin (without any H/W) and is only used to launch VMs, so there is no underlying block H/W there.
But I expect it can run in Dom0 as well (at least there is nothing specific to a particular domain in the backend itself, nothing hardcoded).
If you are going to run the backend in a domain other than Dom0, you need to write your own FLASK policy for the backend (running in that domain)
to be able to issue DM-related requests, etc. For test purposes only, you can use this patch [6], which tweaks the Xen dummy policy (not for upstream).
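
For illustration only, a very rough FLASK sketch, assuming the stock macros from the example policy under tools/flask/policy (the type names here are made up and this is untested):

# hypothetical: give the driver domain its own type and let it act as a device model
declare_domain(domd_t)
device_model(domd_t, domu_t)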
  
As I mentioned elsewhere, you don't need to modify the guest Linux (DomU), just enable the VirtIO-related configs.
If I remember correctly, the following would be enough (a quick sanity check is sketched after the list):
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
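
Once the guest is booted with these options, a quick generic sanity check (commands are mine, not from the PoC docs):

$ dmesg | grep -i virtio   # the virtio-mmio transport and virtio-blk should probe
$ ls /dev/vd*              # the emulated disk(s), e.g. /dev/vda
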
If I remember correctly, if your host Linux (Dom0 or DomD) is version >= 4.17 you don't need to modify it either.
Otherwise, you need to cherry-pick "xen/privcmd: add IOCTL_PRIVCMD_MMAP_RESOURCE" from upstream to be able
to use the acquire interface for the resource mapping.
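
If you do need to backport it, something along these lines should locate the commit in your Linux tree (the exact commit id is deliberately not spelled out here):

$ git log --oneline --grep="IOCTL_PRIVCMD_MMAP_RESOURCE" -- drivers/xen/privcmd.c
$ git cherry-pick <commit-id>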

We usually build the backend in the context of the Yocto build process and run it as a systemd service,
but you can also build and run it manually (it should be launched before the DomU is created).
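
For the systemd case, a hypothetical unit file could look like the sketch below (the binary path and unit name are illustrative, not taken from the PoC):

[Unit]
Description=virtio-disk backend (IOREQ server)
After=xenstored.service

[Service]
ExecStart=/usr/sbin/virtio-disk
Restart=on-failure

[Install]
WantedBy=multi-user.target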

There are no command-line options at all; everything is configured via the domain configuration file (a fuller example follows below):
# This option is mandatory, it indicates that the guest is going to use VirtIO
virtio=1
# Example disk configuration (two disks are assigned to the guest, the second one read-only):
vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
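
Putting it together, a minimal illustrative guest config could look like this (the name, kernel and disk paths are just examples):

name = "domu-virtio"
kernel = "/boot/Image"
memory = 512
vcpus = 2
virtio = 1
vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3' ]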

Hope that helps. Feel free to ask questions if any.

[1] https://github.com/xen-troops/virtio-disk/commits/ioreq_v3

--
Regards,

Oleksandr Tyshchenko
