
[PATCH 3/3] xen/arm: vpl011: Do not try to handle TX FIFO status when backend in Xen


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Wed, 5 Apr 2023 13:17:50 +0200
  • Cc: Michal Orzel <michal.orzel@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Wed, 05 Apr 2023 11:18:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

From vpl011_rx_char_xen(), we call vpl011_data_avail(), which handles both
RX and TX state. Because we pass 0 as out_fifo_level and
SBSA_UART_FIFO_SIZE as out_size, we end up calling
vpl011_update_tx_fifo_status(), which handles the TXI bit depending on
the FIFO trigger level. This does not make sense when the backend is in
Xen, as we maintain a single TX state where data can always be written,
and as such there is no TX FIFO handling. Furthermore, this function
assumes that the backend is in a domain by making use of struct
xencons_interface unconditionally. Fix this by calling the function only
when the backend is in a domain. Also add an assertion for sanity.

Signed-off-by: Michal Orzel <michal.orzel@xxxxxxx>
---
 xen/arch/arm/vpl011.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vpl011.c b/xen/arch/arm/vpl011.c
index ff06deeb645c..7856b4b5f5a3 100644
--- a/xen/arch/arm/vpl011.c
+++ b/xen/arch/arm/vpl011.c
@@ -261,6 +261,9 @@ static void vpl011_update_tx_fifo_status(struct vpl011 *vpl011,
     struct xencons_interface *intf = vpl011->backend.dom.ring_buf;
     unsigned int fifo_threshold = sizeof(intf->out) - SBSA_UART_FIFO_LEVEL;
 
+    /* No TX FIFO handling when backend is in Xen */
+    ASSERT(vpl011->backend_in_domain);
+
     BUILD_BUG_ON(sizeof(intf->out) < SBSA_UART_FIFO_SIZE);
 
     /*
@@ -547,7 +550,13 @@ static void vpl011_data_avail(struct domain *d,
          */
         vpl011->uartfr &= ~BUSY;
 
-        vpl011_update_tx_fifo_status(vpl011, out_fifo_level);
+        /*
+         * When backend is in Xen, we are always ready for new data to be
+         * written (i.e. no TX FIFO handling), therefore we do not want
+         * to change the TX FIFO status in such case.
+         */
+        if ( vpl011->backend_in_domain )
+            vpl011_update_tx_fifo_status(vpl011, out_fifo_level);
     }
 
     vpl011_update_interrupt_status(d);
-- 
2.25.1