Re: [PATCH v2] hvc/xen: prevent concurrent accesses to the shared ring


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 9 Mar 2023 11:59:16 +0100
  • Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>, Jiri Slaby <jirislaby@xxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, Jan Beulich <jbeulich@xxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Chris Wright <chrisw@xxxxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxx>, Olof Johansson <olof@xxxxxxxxx>, linuxppc-dev@xxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 09 Mar 2023 10:59:41 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hello,

It's been 3 months with no reply.

On Mon, Dec 12, 2022 at 01:36:48PM +0100, Roger Pau Monné wrote:
> Hello,
> 
> Gentle ping regarding the locking question below.
> 
> Thanks, Roger.
> 
> On Fri, Dec 02, 2022 at 12:40:05PM +0100, Roger Pau Monné wrote:
> > On Wed, Nov 30, 2022 at 05:08:06PM -0800, Stefano Stabellini wrote:
> > > On Wed, 30 Nov 2022, Roger Pau Monne wrote:
> > > > The hvc machinery registers both a console and a tty device based on
> > > > the hv ops provided by the specific implementation.  Those two
> > > > interfaces, however, have different locks, and there's no single lock
> > > > shared between the tty and the console implementations, hence the
> > > > driver needs to protect itself against concurrent accesses.
> > > > Otherwise concurrent calls using the split interfaces are likely to
> > > > corrupt the ring indexes, leaving the console unusable.
> > > > 
> > > > Introduce a lock to xencons_info to serialize accesses to the shared
> > > > ring.  This is only required when using the shared-memory console;
> > > > concurrent accesses to the hypercall-based console implementation are
> > > > not an issue.
> > > > 
> > > > Note the conditional logic in domU_read_console() is slightly modified
> > > > so the notify_daemon() call can be done outside of the locked region:
> > > > it's a hypercall and there's no need for it to be done with the lock
> > > > held.
> > > 
> > > For domU_read_console: I don't mean to block this patch but we need to
> > > be sure about the semantics of hv_ops.get_chars. Either it is expected
> > > to be called with locking already in place, in which case we definitely
> > > shouldn't add another lock to domU_read_console; or it is not, in which
> > > case we should add the lock.
> > > 
> > > My impression is that it is expected to be already locked, but I think
> > > we need Greg or Jiri to confirm one way or the other.
> > 
> > Let me move both to the 'To:' field then.
> > 
> > My main concern is the usage of the hv_ops.get_chars hook in
> > hvc_poll_get_char(), as it's not obvious to me that callers of the
> > tty->poll_get_char hook as returned by tty_find_polling_driver() will
> > always call it with the tty lock held (in fact the only user right now
> > doesn't seem to hold the tty lock).
> > 
> > > Aside from that the rest looks fine.
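
For reference, this is roughly the shape of the change under discussion: a
minimal sketch assuming a spinlock (here called ring_lock) is added to
struct xencons_info and taken around the ring index accesses in the
get_chars path, with the notify_daemon() event channel kick left outside
the locked region.  vtermno_to_xencons() and notify_daemon() are the
driver's existing helpers; the exact restructuring in the actual patch may
differ:

/*
 * Sketch only: a per-console lock serializing accesses to the shared ring.
 * ring_lock is the field the patch would add; everything else follows the
 * current drivers/tty/hvc/hvc_xen.c shape.
 */
#include <linux/spinlock.h>
#include <xen/interface/io/console.h>

struct xencons_info {
	struct xencons_interface *intf;	/* shared ring page */
	spinlock_t ring_lock;		/* serializes ring index accesses */
	/* ... other existing fields ... */
};

static int domU_read_console(uint32_t vtermno, char *buf, int len)
{
	struct xencons_info *xencons = vtermno_to_xencons(vtermno);
	struct xencons_interface *intf;
	XENCONS_RING_IDX cons, prod;
	unsigned long flags;
	int recv = 0;

	if (xencons == NULL)
		return -EINVAL;
	intf = xencons->intf;

	spin_lock_irqsave(&xencons->ring_lock, flags);

	cons = intf->in_cons;
	prod = intf->in_prod;
	mb();			/* get pointers before reading the ring */

	while (cons != prod && recv < len)
		buf[recv++] = intf->in[MASK_XENCONS_IDX(cons++, intf)];

	mb();			/* read the ring before consuming */
	intf->in_cons = cons;

	spin_unlock_irqrestore(&xencons->ring_lock, flags);

	/* The event channel kick is a hypercall; no need to hold the lock. */
	notify_daemon(xencons);

	return recv;
}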

I guess I could reluctantly remove the lock from the get_chars hook,
although I'm not at all convinced the lock isn't needed there.
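
To make the polling concern above concrete, a CONFIG_CONSOLE_POLL user
reaches the hook roughly like this (a hypothetical caller for illustration
only, poll_one_char() is made up; kgdboc is the in-tree user):

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/tty.h>
#include <linux/tty_driver.h>

/* Hypothetical polling caller; not code from any real driver. */
static int poll_one_char(const char *name)
{
	char namebuf[64];
	struct tty_driver *drv;
	int line;

	strscpy(namebuf, name, sizeof(namebuf));
	drv = tty_find_polling_driver(namebuf, &line);
	if (!drv || !drv->ops->poll_get_char)
		return -ENODEV;

	/*
	 * For hvc this ends up in hvc_poll_get_char(), which forwards to
	 * hp->ops->get_chars() -- i.e. domU_read_console() for Xen --
	 * and, as far as I can tell, neither this caller nor that path
	 * takes the tty lock first.
	 */
	return drv->ops->poll_get_char(drv, line);
}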

Roger.



 

