
Re: [Xen-devel] virtio leaks cpu mappings, was: qemu crash with virtio on Xen domUs (backtrace included)



On 11/25/2014 09:53 PM, Stefano Stabellini wrote:
> On Tue, 25 Nov 2014, Jason Wang wrote:
>> On 11/25/2014 02:44 AM, Stefano Stabellini wrote:
>>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>>>> On Mon, 24 Nov 2014, Stefano Stabellini wrote:
>>>>> CC'ing Paolo.
>>>>>
>>>>>
>>>>> Wen,
>>>>> thanks for the logs.
>>>>>
>>>>> I investigated a little bit and it seems to me that the bug occurs when
>>>>> QEMU tries to unmap only a portion of a memory region previously mapped.
>>>>> That doesn't work with xen-mapcache.
>>>>>
>>>>> See these logs for example:
>>>>>
>>>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb68 len=0xa
>>>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
>>>> Sorry, the logs don't quite match; they were supposed to be:
>>>>
>>>> DEBUG address_space_map phys_addr=78ed8b44 vaddr=7fab50afbb64 len=0xa
>>>> DEBUG address_space_unmap vaddr=7fab50afbb68 len=0x6
>>> It looks like the problem is caused by iov_discard_front, called by
>>> virtio_net_handle_ctrl. By changing iov_base after the sg has already
>>> been mapped (cpu_physical_memory_map), it causes a leak in the mapping
>>> because the corresponding cpu_physical_memory_unmap will only unmap a
>>> portion of the original sg.  On Xen the problem is worse because
>>> xen-mapcache aborts.
>>>
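For reference, here is roughly what iov_discard_front() does, a simplified
sketch from memory of util/iov.c rather than the exact code. The important
part is that it adjusts the iovec entries in place, so because the device
code passes elem.out_sg directly, the element's own sg loses the base and
length that cpu_physical_memory_map() returned and that the later unmap
expects:

/* Simplified sketch of iov_discard_front() (the real code is in
 * util/iov.c).  It skips 'bytes' bytes from the front of the vector by
 * dropping whole entries and shifting the first remaining one. */
size_t iov_discard_front(struct iovec **iov, unsigned int *iov_cnt,
                         size_t bytes)
{
    size_t total = 0;
    struct iovec *cur;

    for (cur = *iov; *iov_cnt > 0; cur++) {
        if (cur->iov_len > bytes) {
            cur->iov_base += bytes;   /* original mapped base is lost */
            cur->iov_len -= bytes;    /* original mapped length too */
            total += bytes;
            break;
        }
        bytes -= cur->iov_len;
        total += cur->iov_len;
        *iov_cnt -= 1;                /* whole entries are dropped */
    }

    *iov = cur;
    return total;
}

That is why the unmap in the log above sees a shifted vaddr and a shorter
length than what was originally mapped.
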
>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>> index 2ac6ce5..b2b5c2d 100644
>>> --- a/hw/net/virtio-net.c
>>> +++ b/hw/net/virtio-net.c
>>> @@ -775,7 +775,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>>>      struct iovec *iov;
>>>      unsigned int iov_cnt;
>>>  
>>> -    while (virtqueue_pop(vq, &elem)) {
>>> +    while (virtqueue_pop_nomap(vq, &elem)) {
>>>          if (iov_size(elem.in_sg, elem.in_num) < sizeof(status) ||
>>>              iov_size(elem.out_sg, elem.out_num) < sizeof(ctrl)) {
>>>              error_report("virtio-net ctrl missing headers");
>>> @@ -784,8 +784,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>>>  
>>>          iov = elem.out_sg;
>>>          iov_cnt = elem.out_num;
>>> -        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>>>          iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
>>> +
>>> +        virtqueue_map_sg(elem.in_sg, elem.in_addr, elem.in_num, 1);
>>> +        virtqueue_map_sg(elem.out_sg, elem.out_addr, elem.out_num, 0);
>>> +
>>> +        s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
>> Does this really work?
> It seems to work here, in that it doesn't crash QEMU and I am able to boot
> a guest with networking. I didn't try any MAC-related commands.
>

That is because the guest (not a recent kernel?) never issues commands
through the control vq.

We'd better hide implementation details such as virtqueue_map_sg() in the
virtio core instead of letting devices call it directly.
>> The code in fact skips the location that contains
>> virtio_net_ctrl_hdr. And virtio_net_handle_mac() still calls
>> iov_discard_front().
>>
>> How about copying iov to a temp variable and using it in this function?
> That would only work if I moved the cpu_physical_memory_unmap call
> outside of virtqueue_fill, so that we can pass a different iov to them.
> We need to unmap the same iov that was previously mapped by
> virtqueue_pop.
>
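To make that constraint concrete: virtqueue_fill() hands the element's sg
lists straight back to cpu_physical_memory_unmap(), so whatever base and
length sit in elem.in_sg/elem.out_sg at completion time are what gets
unmapped. Roughly (a sketch from memory of hw/virtio/virtio.c in this QEMU
version, with the used-ring handling omitted):

void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
                    unsigned int len, unsigned int idx)
{
    unsigned int offset = 0;
    int i;

    for (i = 0; i < elem->in_num; i++) {
        size_t size = MIN(len - offset, elem->in_sg[i].iov_len);

        /* unmaps exactly the base/length stored in the element */
        cpu_physical_memory_unmap(elem->in_sg[i].iov_base,
                                  elem->in_sg[i].iov_len,
                                  1, size);
        offset += size;
    }

    for (i = 0; i < elem->out_num; i++) {
        cpu_physical_memory_unmap(elem->out_sg[i].iov_base,
                                  elem->out_sg[i].iov_len,
                                  0, elem->out_sg[i].iov_len);
    }

    /* ... used-ring update omitted ... */
}

So as long as the device code leaves elem.in_sg/elem.out_sg untouched, the
unmap matches the original mapping.
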

I mean something like the following, or just passing the offset into iov to
virtio_net_handle_*().

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 9b88775..fdb4edd 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -798,7 +798,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
     virtio_net_ctrl_ack status = VIRTIO_NET_ERR;
     VirtQueueElement elem;
     size_t s;
-    struct iovec *iov;
+    struct iovec *iov, *iov2;
     unsigned int iov_cnt;
 
     while (virtqueue_pop(vq, &elem)) {
@@ -808,8 +808,12 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
             exit(1);
         }
 
-        iov = elem.out_sg;
         iov_cnt = elem.out_num;
+        s = sizeof(struct iovec) * elem.out_num;
+        iov = g_malloc(s);
+        memcpy(iov, elem.out_sg, s);
+        iov2 = iov;
+
         s = iov_to_buf(iov, iov_cnt, 0, &ctrl, sizeof(ctrl));
         iov_discard_front(&iov, &iov_cnt, sizeof(ctrl));
         if (s != sizeof(ctrl)) {
@@ -833,6 +837,7 @@ static void virtio_net_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
 
         virtqueue_push(vq, &elem, sizeof(status));
         virtio_notify(vdev, vq);
+        g_free(iov2);
     }
 }
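
Note that the g_malloc'd copy is what makes this safe: iov_discard_front()
advances iov past the ctrl header, so iov2 is kept only to free the
allocation, while elem.out_sg itself is never modified and
virtqueue_push()/virtqueue_fill() therefore still unmaps exactly what
virtqueue_pop() mapped.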



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel