Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
- To: Wei Chen <Wei.Chen@xxxxxxx>
- From: Oleksandr <olekstysh@xxxxxxxxx>
- Date: Mon, 2 Nov 2020 20:05:24 +0200
- Cc: Masami Hiramatsu <masami.hiramatsu@xxxxxxxxxx>, Alex Bennée <alex.bennee@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Julien Grall <Julien.Grall@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Tim Deegan <tim@xxxxxxx>, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
- Delivery-date: Mon, 02 Nov 2020 18:05:38 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On 02.11.20 09:23, Wei Chen wrote:
> Hi Oleksandr,
Hi Wei.
> Thanks for sharing the virtio-disk backend. I have tested it on the Arm
> FVP_base platform.
> We used Domain-0 to run the virtio-disk backend. The backend disk is a loop
> device:
"virtio_disks": [
{
"backend_domname": "Domain-0",
"devid": 0,
"disks": [
{
"filename": "/dev/loop0"
}
]
}
],
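For anyone else reproducing this: the loop device needs to be attached before
the backend starts. A minimal sketch, assuming a 64 MiB backing file (disk.img
is an illustrative path):

    dd if=/dev/zero of=disk.img bs=1M count=64
    losetup /dev/loop0 disk.img

That size matches the 64 MiB capacity the guest reports below.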
> It works fine and I've pasted some logs:
> -------------------------------------------
> Domain-0 logs:
> main: read backend domid 0
> (XEN) gnttab_mark_dirty not implemented yet
> (XEN) domain_direct_pl011_init for domain#2
> main: read frontend domid 2
> Info: connected to dom2
> demu_seq_next: >XENSTORE_ATTACHED
> demu_seq_next: domid = 2
> demu_seq_next: filename[0] = /dev/loop0
> demu_seq_next: readonly[0] = 0
> demu_seq_next: base[0] = 0x2000000
> demu_seq_next: irq[0] = 33
> demu_seq_next: >XENCTRL_OPEN
> demu_seq_next: >XENEVTCHN_OPEN
> demu_seq_next: >XENFOREIGNMEMORY_OPEN
> demu_seq_next: >XENDEVICEMODEL_OPEN
> demu_initialize: 2 vCPU(s)
> demu_seq_next: >SERVER_REGISTERED
> demu_seq_next: ioservid = 0
> demu_seq_next: >RESOURCE_MAPPED
> demu_seq_next: shared_iopage = 0xffffae6de000
> demu_seq_next: buffered_iopage = 0xffffae6dd000
> demu_seq_next: >SERVER_ENABLED
> demu_seq_next: >PORT_ARRAY_ALLOCATED
> demu_seq_next: >EVTCHN_PORTS_BOUND
> demu_seq_next: VCPU0: 3 -> 7
> demu_seq_next: VCPU1: 5 -> 8
> demu_seq_next: >EVTCHN_BUF_PORT_BOUND
> demu_seq_next: 0 -> 9
> demu_register_memory_space: 2000000 - 20001ff
> Info: (virtio/mmio.c) virtio_mmio_init:290: virtio-mmio.devices=0x200@0x2000000:33
> demu_seq_next: >DEVICE_INITIALIZED
> demu_seq_next: >INITIALIZED
> IO request not ready
> IO request not ready
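For readers following the log: the SERVER_REGISTERED / RESOURCE_MAPPED /
SERVER_ENABLED steps are the usual ioreq-server registration done through
libxendevicemodel, and the virtio-mmio.devices=0x200@0x2000000:33 line is the
size@base:irq triplet the guest needs (same shape as Linux's
virtio_mmio.device= command-line parameter). A stripped-down sketch of that
sequence, with all error handling dropped (demu's real code differs in detail):

    #include <sys/mman.h>
    #include <xendevicemodel.h>
    #include <xenforeignmemory.h>
    #include <xen/memory.h>
    #include <xen/hvm/dm_op.h>

    /* Sketch only: register an ioreq server for the guest and claim the
     * virtio-mmio window that demu logs above. */
    static void setup_ioreq_server(domid_t domid)
    {
        xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
        xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
        ioservid_t ioservid;
        void *iopages = NULL;

        /* SERVER_REGISTERED (ioservid = 0 in the log) */
        xendevicemodel_create_ioreq_server(dmod, domid,
                                           HVM_IOREQSRV_BUFIOREQ_ATOMIC,
                                           &ioservid);

        /* RESOURCE_MAPPED: buffered + shared iopages as one resource */
        xenforeignmemory_map_resource(fmem, domid,
                                      XENMEM_resource_ioreq_server,
                                      ioservid, 0, 2, &iopages,
                                      PROT_READ | PROT_WRITE, 0);

        /* SERVER_ENABLED */
        xendevicemodel_set_ioreq_server_state(dmod, domid, ioservid, 1);

        /* demu_register_memory_space: 2000000 - 20001ff */
        xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, ioservid,
                                                    1 /* MMIO */,
                                                    0x2000000, 0x20001ff);
    }

The per-vCPU event-channel bindings (the "VCPU0: 3 -> 7" lines) then come,
roughly, from xenevtchn_bind_interdomain() against the ports advertised in the
shared iopage.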
> ----------------
> Dom-U logs:
> [ 0.491037] xen:xen_evtchn: Event-channel device installed
> [ 0.493600] Initialising Xen pvcalls frontend driver
> [ 0.516807] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
> [ 0.525565] cacheinfo: Unable to detect cache hierarchy for CPU 0
> [ 0.562275] brd: module loaded
> [ 0.595300] loop: module loaded
> [ 0.683800] virtio_blk virtio0: [vda] 131072 512-byte logical blocks (67.1 MB/64.0 MiB)
> [ 0.684000] vda: detected capacity change from 0 to 67108864
> / # dd if=/dev/vda of=/dev/null bs=1M count=64
> 64+0 records in
> 64+0 records out
> 67108864 bytes (64.0MB) copied, 3.196242 seconds, 20.0MB/s
> / # dd if=/dev/zero of=/dev/vda bs=1M count=64
> 64+0 records in
> 64+0 records out
> 67108864 bytes (64.0MB) copied, 3.704594 seconds, 17.3MB/s
> ---------------------
> The read/write seems OK in the dom-U. The FVP platform is an emulator (it is
> functionally accurate rather than cycle-accurate), so the performance figures
> are not representative.
> We will test it on real hardware such as the N1SDP.
This is really good news. Thank you for testing!
--
Regards,
Oleksandr Tyshchenko