
Re: [MirageOS-devel] wireshark capture of failed download from mirage-www on ARM



On 24 July 2014 17:30, Thomas Leonard <talex5@xxxxxxxxx> wrote:
> On 23 July 2014 13:10, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
>> On 23 Jul 2014, at 02:46, Dave Scott <Dave.Scott@xxxxxxxxxx> wrote:
>>
>>> On 23 Jul 2014, at 08:10, Thomas Leonard <talex5@xxxxxxxxx> wrote:
>>>
>>>> On 22 July 2014 13:17, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
>>>>> On 22 Jul 2014, at 02:01, Thomas Leonard <talex5@xxxxxxxxx> wrote:
>>>>>
>>>>>> On 22 July 2014 09:14, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
>>>>>>> On 22 Jul 2014, at 01:10, Thomas Leonard <talex5@xxxxxxxxx> wrote:
>>>>>>>>
>>>>>>>> I noticed I'm getting a lot of these in the dom0 log (dmesg):
>>>>>>>>
>>>>>>>> Jul 22 08:09:05 cubietruck kernel: [11892.128241]
>>>>>>>> xen_add_phys_to_mach_entry: cannot add pfn=0x00079abe ->
>>>>>>>> mfn=0x000bed87: pfn=0x000799a6 -> mfn=0x000bed87 already exists
>>>>>>>> Jul 22 08:09:05 cubietruck kernel: [11892.128727]
>>>>>>>> xen_add_mach_to_phys_entry: cannot add pfn=0x00079bcb ->
>>>>>>>> mfn=0x000bed87: pfn=0x000799a6 -> mfn=0x000bed87 already exists
>>>>>>>> Jul 22 08:09:05 cubietruck kernel: [11892.133264]
>>>>>>>> xen_add_phys_to_mach_entry: cannot add pfn=0x00079b70 ->
>>>>>>>> mfn=0x000bed87: pfn=0x00079bd7 -> mfn=0x000bed87 already exists
>>>>>>>> Jul 22 08:09:09 cubietruck kernel: [11896.041106]
>>>>>>>> xen_add_phys_to_mach_entry: cannot add pfn=0x0007999a ->
>>>>>>>> mfn=0x000bed87: pfn=0x00079b2b -> mfn=0x000bed87 already exists
>>>>>>>
>>>>>>> That looks like this bug in Xen/ARM/netback:
>>>>>>> http://lists.xenproject.org/archives/html/xen-users/2014-07/msg00067.html
>>>>>>
>>>>>> Aha! This seems to fix it (I can browse mirage-www without huge delays):
>>>>>>
>>>>>> diff --git a/lib/netif.ml b/lib/netif.ml
>>>>>> index 4e02e5e..3cc7760 100644
>>>>>> --- a/lib/netif.ml
>>>>>> +++ b/lib/netif.ml
>>>>>> @@ -455,7 +455,7 @@ let writev nf pages =
>>>>>>        lwt rest_th = xmit other_pages in
>>>>>>        (* All fragments are now written, we can now notify the backend *)
>>>>>>        Lwt_ring.Front.push nf.t.tx_client (notify nf.t);
>>>>>> -         return ()
>>>>>> +         join rest_th
>>>>>>   )
>>>>>>
>>>>>> let wait_for_plug nf =
>>>>>>
>>>>>> Not very efficient, but at least it works! Looks like the Xen people
>>>>>> will fix it at some point.
>>>>>
>>>>> That's going to serialize requests on the TX ring, isn't it?  Might
>>>>> be worth making conditional via a feature flag if it's going to take
>>>>> x86 transmit performance down.
>
> I've done some tests, and this absolutely kills performance. Sending a
> load of simple 1K UDP packets, the CubieTruck sends at 28 KB/s with
> this patch and 1,531 KB/s with the wait outside the mutex.
>
> However, this still isn't quite right, because in the fast case I send
> all the packets and then wait for them all to complete. But really I'd
> like the sender to slow down when the receiver is getting behind,
> rather than allocating ever more memory to the queue. Doing that will
> require an API change, I think.
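
To illustrate the kind of back-pressure I mean: something along the
lines of the sketch below would bound the number of in-flight
transmits, so the sender blocks when the backend falls behind instead
of queueing without limit. It's only a sketch: the semaphore,
bounded_writev and max_in_flight are made-up names, and a real fix
would hook into the ring responses rather than wrap writev.

(* Sketch only: a small counting semaphore to cap in-flight
   transmits.  Not from any real patch. *)
open Lwt

type sem = { mutable free : int; cond : unit Lwt_condition.t }

let make n = { free = n; cond = Lwt_condition.create () }

let rec acquire s =
  if s.free > 0 then (s.free <- s.free - 1; return ())
  else Lwt_condition.wait s.cond >>= fun () -> acquire s

let release s =
  s.free <- s.free + 1;
  Lwt_condition.signal s.cond ()

let max_in_flight = 64           (* arbitrary bound for the sketch *)
let tx_sem = make max_in_flight

(* Block here once max_in_flight transmits are outstanding, rather
   than letting the queue keep growing. *)
let bounded_writev nf pages =
  acquire tx_sem >>= fun () ->
  Lwt.finalize
    (fun () -> Netif.writev nf pages)   (* the writev patched above *)
    (fun () -> release tx_sem; return ())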

Here's a slightly better work-around:

https://github.com/talex5/mirage-tcpip/commit/9f7e0a628ebd1cece7d0fb979273a96be78233ad

It changes the TCP packet splitting code so that the second part of
the page is copied to a new IO page. This avoids sending the same
physical page to Linux twice, which is what was triggering the
errors above.

With this, the previous hack can be reverted.

In my testing, this increased the streaming TCP download speed from 56
KB/s to 3.9 MB/s on the CubieTruck.
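
For reference, the rough shape of the splitting change is something
like this. It's an illustrative sketch only, not the code from the
commit; split_for_xmit and its arguments are made-up names, and it
assumes the copied tail fits in a single 4K page:

(* Sketch: when a segment has to be split, copy the tail into a
   freshly allocated Io_page instead of taking a sub-view of the
   original buffer, so the two fragments never share an underlying
   physical page. *)
let split_for_xmit buf len =
  let head = Cstruct.sub buf 0 len in
  let tail_len = Cstruct.len buf - len in
  (* Copy the remainder into its own page-aligned buffer. *)
  let fresh = Io_page.to_cstruct (Io_page.get 1) in
  Cstruct.blit buf len fresh 0 tail_len;
  head, Cstruct.sub fresh 0 tail_len

The extra blit per split segment is a small cost next to the delays
the duplicate page was causing.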


-- 
Dr Thomas Leonard        http://0install.net/
GPG: 9242 9807 C985 3C07 44A6  8B9A AE07 8280 59A5 3CC1
GPG: DA98 25AE CAD0 8975 7CDA  BD8E 0713 3F96 CA74 D8BA

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel


 

