[qemu-xen staging-4.13] virtio-net: prevent offloads reset on migration
commit 4887acf574a573137660aa98d9d422ece0a41a5a
Author:     Mikhail Sennikovsky <mikhail.sennikovskii@xxxxxxxxxxxxxxx>
AuthorDate: Fri Oct 11 15:58:04 2019 +0200
Commit:     Michael Roth <mdroth@xxxxxxxxxxxxxxxxxx>
CommitDate: Mon Nov 4 08:24:34 2019 -0600

    virtio-net: prevent offloads reset on migration

    Currently offloads disabled by guest via the
    VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET command are not preserved on VM
    migration. Instead all offloads reported by guest features (via
    VIRTIO_PCI_GUEST_FEATURES) get enabled.
    What happens is: first the VirtIONet::curr_guest_offloads gets restored
    and offloads are getting set correctly:

     #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
     #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
     #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
     #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:168
     #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
     #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
     #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
     #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
     #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
     #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
     #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
     #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

    However later on the features are getting restored, and offloads get
    reset to everything supported by features:

     #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
     #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
     #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
     #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
     #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
     #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
     #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
     #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
     #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
     #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
     #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
     #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

    Fix this by preserving the state in saved_guest_offloads field and
    pushing out offload initialization to the new post load hook.
    Cc: qemu-stable@xxxxxxxxxx
    Signed-off-by: Mikhail Sennikovsky <mikhail.sennikovskii@xxxxxxxxxxxxxxx>
    Signed-off-by: Jason Wang <jasowang@xxxxxxxxxx>
    (cherry picked from commit 7788c3f2e21e35902d45809b236791383bbb613e)
    Signed-off-by: Michael Roth <mdroth@xxxxxxxxxxxxxxxxxx>
---
 hw/net/virtio-net.c            | 27 ++++++++++++++++++++++++---
 include/hw/virtio/virtio-net.h |  2 ++
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index b9e1cd71cf..6adb0fe252 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2330,9 +2330,13 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
         n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
     }
 
-    if (peer_has_vnet_hdr(n)) {
-        virtio_net_apply_guest_offloads(n);
-    }
+    /*
+     * curr_guest_offloads will be later overwritten by the
+     * virtio_set_features_nocheck call done from the virtio_load.
+     * Here we make sure it is preserved and restored accordingly
+     * in the virtio_net_post_load_virtio callback.
+     */
+    n->saved_guest_offloads = n->curr_guest_offloads;
 
     virtio_net_set_queues(n);
 
@@ -2367,6 +2371,22 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
     return 0;
 }
 
+static int virtio_net_post_load_virtio(VirtIODevice *vdev)
+{
+    VirtIONet *n = VIRTIO_NET(vdev);
+    /*
+     * The actual needed state is now in saved_guest_offloads,
+     * see virtio_net_post_load_device for detail.
+     * Restore it back and apply the desired offloads.
+     */
+    n->curr_guest_offloads = n->saved_guest_offloads;
+    if (peer_has_vnet_hdr(n)) {
+        virtio_net_apply_guest_offloads(n);
+    }
+
+    return 0;
+}
+
 /* tx_waiting field of a VirtIONetQueue */
 static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
     .name = "virtio-net-queue-tx_waiting",
@@ -2909,6 +2929,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
     vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
     vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
     vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
+    vdc->post_load = virtio_net_post_load_virtio;
     vdc->vmsd = &vmstate_virtio_net_device;
 }
 
diff --git a/include/hw/virtio/virtio-net.h b/include/hw/virtio/virtio-net.h
index b96f0c643f..07a9319f4b 100644
--- a/include/hw/virtio/virtio-net.h
+++ b/include/hw/virtio/virtio-net.h
@@ -182,6 +182,8 @@ struct VirtIONet {
     char *netclient_name;
     char *netclient_type;
     uint64_t curr_guest_offloads;
+    /* used on saved state restore phase to preserve the curr_guest_offloads */
+    uint64_t saved_guest_offloads;
     AnnounceTimer announce_timer;
     bool needs_vnet_hdr_swap;
     bool mtu_bypass_backend;
-- 
generated by git-patchbot for /home/xen/git/qemu-xen.git#staging-4.13
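
Editor's note: the sketch below is not part of the patch or of QEMU. It is a
minimal, self-contained C illustration of the ordering described in the commit
message: the device-section post_load runs first, the features replay then
resets curr_guest_offloads to everything the features allow, and the new
per-device post_load hook restores the value saved earlier. All type and
function names other than curr_guest_offloads and saved_guest_offloads are
invented for this example.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t guest_features;        /* features acked by the guest */
        uint64_t curr_guest_offloads;   /* offloads the guest requested */
        uint64_t saved_guest_offloads;  /* copy taken during device post_load */
    } FakeVirtIONet;

    /* Step 1: device-section load restores curr_guest_offloads from the
     * migration stream, then stashes a copy (mirrors the patched
     * virtio_net_post_load_device). */
    static void fake_post_load_device(FakeVirtIONet *n, uint64_t offloads_from_stream)
    {
        n->curr_guest_offloads = offloads_from_stream;
        n->saved_guest_offloads = n->curr_guest_offloads;
    }

    /* Step 2: the features replay resets offloads to everything the features
     * support, which is what clobbered the guest's choice before the fix. */
    static void fake_set_features(FakeVirtIONet *n, uint64_t features)
    {
        n->guest_features = features;
        n->curr_guest_offloads = features;
    }

    /* Step 3: the new post_load hook runs after the features replay and puts
     * the saved value back (mirrors virtio_net_post_load_virtio). */
    static void fake_post_load_virtio(FakeVirtIONet *n)
    {
        n->curr_guest_offloads = n->saved_guest_offloads;
    }

    int main(void)
    {
        FakeVirtIONet n = {0};

        fake_post_load_device(&n, 0x1);   /* guest had disabled most offloads */
        fake_set_features(&n, 0x7);       /* replay re-enables them */
        fake_post_load_virtio(&n);        /* fix: restore the guest's choice */

        printf("curr_guest_offloads = %#llx\n",
               (unsigned long long)n.curr_guest_offloads);
        return 0;
    }

Without step 3 the example would end with curr_guest_offloads = 0x7; with the
hook it ends with 0x1, matching the behaviour the fix restores after migration.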