
Re: [ARM][xencons] PV Console hangs due to illegal ring buffer accesses


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>
  • From: George Mocanu <george.mocanu@xxxxxxx>
  • Date: Fri, 21 Jul 2023 14:28:42 +0000
  • Cc: "Andrei Cherechesu (OSS)" <andrei.cherechesu@xxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Sat, 22 Jul 2023 05:45:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [ARM][xencons] PV Console hangs due to illegal ring buffer accesses

Hello, Stefano, 
Hello, Julien,

Thanks for your suggestions. I gave each of them a try, but none of them
seems to get me anywhere at the moment.

On 21/07/2023 02:25, Stefano Stabellini wrote:
> 
> On Thu, 20 Jul 2023, Julien Grall wrote:
> > (+ Juergen)
> >
> > On 19/07/2023 17:13, Andrei Cherechesu (OSS) wrote:
> > > Hello,
> >
> > Hi Andrei,
> >
> > > As we're running Xen 4.17 (with platform-related support added) on NXP
> > > S32G
> > > SoCs (ARMv8), with a custom Linux distribution built through Yocto, and
> > > we've set some Xen-based demos up, we encountered some issues which we 
> > > think
> > > might not be related to our hardware. For additional context, the Linux
> > > kernel version we're running is 5.15.96-rt (with platform-related support
> > > added as well).
> > >
> > > The setup to reproduce the problem is fairly simple: after booting a Dom0
> > > (can provide configuration details if needed), we're booting a normal PV
> > > DomU with PV Networking. Additionally, the VMs have k3s (Lightweight
> > > Kubernetes - version v1.25.8+k3s1) installed in
> > > their rootfs'es.
> > >
> > > The problem is that the DomU console hangs (no new output is shown, no 
> > > input
> > > can be sent) some time (non-deterministic, sometimes 5 seconds, other 
> > > times
> > > like 15-20 seconds) after we run the `k3s server` command. We have this
> > > command running as part of a sysvinit service, and the same behavior can 
> > > be
> > > observed in that case as well. The k3s version we use is the one mentioned
> > > in the paragraph above, but this can be reproduced with other versions as
> > > well (e.g., v1.21.11, v1.22.6). If the `k3s server` command is run in the
> > > Dom0 VM, everything works fine. Using DomU as an agent node also works
> > > fine; the console problem occurs only when DomU runs as a server.
> > >
> > > Immediately after the serial console hangs, we can still log in on DomU
> > > using SSH, and we can observe the following messages in its dmesg:
> > > [   57.905806] xencons: Illegal ring page indices
> >
> > Looking at the Linux code, this message is printed in a couple of places in
> > the xenconsole driver.
> >
> > I would assume that this is printed when reading from the buffer (otherwise
> > you would not see any message). Can you confirm it?
> >
> > Also, can you provide the indices that Linux considers buggy?

Adding to what Andrei said previously, we log into the DomU console
to observe its state and send some input keys to confirm whether it is
in the buggy state. Given this flow, it looks like the message
comes from the write_console() call. In one instance I started the k3s
server process in the console (having disabled the sysvinit service
beforehand), then killed it after some time - a message from read_console()
was displayed in that instance. As for the indices, I've dumped them in
a separate message, and they are always different:

[   45.303520] xencons: Illegal ring page indices -- write_console()
[   45.303529] xencons: prod 4289880869, cons 2015782840, intf->out size 2048

[   59.203570] xencons: Illegal ring page indices -- write_console()
[   59.203576] xencons: prod 1735287148, cons 1869033263, intf->out size 2048

[   40.838740] xencons: Illegal ring page indices -- write_console()
[   40.838753] xencons: prod 1647211507, cons 2923534489, intf->out size 2048
[...]
[  126.184299] xencons: Illegal ring page indices -- read_console()
[  126.184317] xencons: prod 127, cons 1815732224, intf->in size 1024

> >
> > Lastly, it seems like the barriers used are incorrect. They should be the
> > virt_*() versions rather than plain mb()/wmb(). I don't think it matters
> > for arm64 though (I am assuming you are not running 32-bit).
> >

I replaced them with their virt_*() equivalents, but I couldn't see any
change in the behavior.

> > > [   59.399620] xenbus: error -5 while reading message
> >
> > So this message is coming from the xenbus driver (used to read the xenstore
> > ring). This is -EIO, and AFAICT returned when the indices are also 
> > incorrect.
> >
> > For this driver, I think there is also a TOCTOU because a compiler is free 
> > to
> > reload intf->rsp_cons after the check. Moving virt_mb() is probably not
> > sufficient. You would also want to use ACCESS_ONCE().
> >
> > What I find odd is that you have two distinct rings (xenconsole and xenbus)
> > with similar issues. Above, you said you are using Linux RT. I wonder if
> > this plays into the issue because, if I am not mistaken, the two functions
> > would now be fully preemptible.
> >
> > This could expose some races. For instance, there are some missing
> > ACCESS_ONCE() (as mentioned above).
> >
> > In particular, Xenstored (I haven't checked xenconsoled) is using += to 
> > update
> > intf->rsp_cons. There is no guarantee that the update will be atomic.
> >
> > Overall, I am not 100% sure what I wrote is related. But it's probably a
> > good starting list of things that could be exacerbated by Linux RT.

I added memory barriers wherever I saw the corresponding ring indices used in
both the xenconsole and xenbus drivers, but nothing changed.

> >
> > > [   59.399649] xenbus: error -5 while writing message
> >
> > This is in xenbus as well. But this time in the write part. The analysis I
> > wrote above for the read part can be applied here.
> 
> This is really strange. What is also strange is that somehow the indices
> recover after 10-15 seconds? How is that even possible? Let's say there
> is memory corruption of some sort, maybe due to missing barriers as
> Julien suggested - how can it go back to normal after a while?
> 
> I am really confused. I would try with regular Linux instead of Linux RT
> and also would try to replace all the barriers in
> drivers/tty/hvc/hvc_xen.c with their virt_* version to see if we can
> narrow down the problem a bit.
> 
> 
> Keep in mind that during PV network operations grants are used, which
> involve mapping pages at the backend and changing the MMU/IOMMU
> pagetables to introduce the new mapping. After the DMA operation,
> typically the page is unmapped and removed from the pagetable.
> 
> Is it possible that the pagetable change is causing the problem, and
> when the mapping is removed everything goes back to normal?
> 
> I don't know how that could happen, but the mapping and unmapping of the
> page is something ongoing which could break things then go back to
> normal. One thing you could try is to force all DMA operations to go via
> swiotlb-xen in Linux:
> 
> diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> index 3d826c0b5fee..f78d86f1bb9c 100644
> --- a/arch/arm/xen/mm.c
> +++ b/arch/arm/xen/mm.c
> @@ -112,8 +112,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
>          * require a bounce buffer because the device doesn't support coherent
>          * memory and we are not able to flush the cache.
>          */
> -       return (!hypercall_cflush && (xen_pfn != bfn) &&
> -               !dev_is_dma_coherent(dev));
> +       return true;
>  }
> 
>  static int __init xen_mm_init(void)
> 
> 
> Then you can remove any iommu pagetable flushes in Xen:
> 
> 
> diff --git a/xen/arch/arm/include/asm/grant_table.h b/xen/arch/arm/include/asm/grant_table.h
> index d3c518a926..b72f8391bd 100644
> --- a/xen/arch/arm/include/asm/grant_table.h
> +++ b/xen/arch/arm/include/asm/grant_table.h
> @@ -74,7 +74,7 @@ int replace_grant_host_mapping(uint64_t gpaddr, mfn_t frame,
>      page_get_xenheap_gfn(gnttab_status_page(t, i))
> 
>  #define gnttab_need_iommu_mapping(d)                    \
> -    (is_domain_direct_mapped(d) && is_iommu_enabled(d))
> +    (0)
> 
>  #endif /* __ASM_GRANT_TABLE_H__ */
>  /*
> 
> 
> I don't know how this could be related but it might help narrow down the
> problem.

I applied your suggestion regarding DMA operations, but we observe the same
behavior (the serial console still hangs after some time), along with some
new issues in other drivers.

We will continue to look into this issue, but if you have some new ideas,
please let us know.

Thank you,
George Mocanu




 

