
Re: [PATCH v2 13/16] xen-blkback: Implement diskseq checks


  • To: Demi Marie Obenour <demi@xxxxxxxxxxxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 21 Jun 2023 12:07:05 +0200
  • Cc: Jens Axboe <axboe@xxxxxxxxx>, Alasdair Kergon <agk@xxxxxxxxxx>, Mike Snitzer <snitzer@xxxxxxxxxx>, dm-devel@xxxxxxxxxx, Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>, linux-block@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 21 Jun 2023 10:07:29 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Jun 20, 2023 at 09:14:25PM -0400, Demi Marie Obenour wrote:
> On Mon, Jun 12, 2023 at 10:09:39AM +0200, Roger Pau Monné wrote:
> > On Fri, Jun 09, 2023 at 12:55:39PM -0400, Demi Marie Obenour wrote:
> > > On Fri, Jun 09, 2023 at 05:13:45PM +0200, Roger Pau Monné wrote:
> > > > On Thu, Jun 08, 2023 at 11:33:26AM -0400, Demi Marie Obenour wrote:
> > > > > On Thu, Jun 08, 2023 at 10:29:18AM +0200, Roger Pau Monné wrote:
> > > > > > On Wed, Jun 07, 2023 at 12:14:46PM -0400, Demi Marie Obenour wrote:
> > > > > > > On Wed, Jun 07, 2023 at 10:20:08AM +0200, Roger Pau Monné wrote:
> > > > > > Then the block script will open the device by diskseq and pass the
> > > > > > major:minor numbers to blkback.
> > > > > 
> > > > > Alternatively, the toolstack could write both the diskseq and
> > > > > major:minor numbers and be confident that it is referring to the
> > > > > correct device, no matter how long ago it got that information.
> > > > > This could be quite useful for e.g. one VM exporting a device to
> > > > > another VM by calling losetup(8) and expecting a human to make a
> > > > > decision based on various properties about the device.  In this
> > > > > case there is no upper bound on the race window.
> > > > 
> > > > Instead of playing with xenstore nodes, it might be better to simply
> > > > have blkback export on sysfs the diskseq of the opened device, and let
> > > > the block script check whether that's correct or not.  That implies
> > > > less code on the kernel side, and doesn't pollute xenstore.
> > > 
> > > This would require that blkback delay exposing the device to the
> > > frontend until the block script has checked that the diskseq is correct.
> > 
> > This depends on your toolstack implementation.  libxl won't start the
> > domain until block scripts have finished execution, and hence the
> > block script waiting for the sysfs node to appear and checking it against
> > the expected value would be enough.
> 
> True, but we cannot assume that everyone is using libxl.

Right, for the udev case this won't be good, since the domain could
already be running, and hence could potentially attach to the backend
before the hotplug script realizes the opened device is wrong.
Likewise for hot-added disks.

> > > Much simpler for the block script to provide the diskseq in xenstore.
> > > If you want to avoid an extra xenstore node, I can make the diskseq part
> > > of the physical-device node.
> > 
> > I'm thinking that we might want to introduce a "physical-device-uuid"
> > node and use that to provide the diskseq to the backend.  Toolstacks
> > (or block scripts) would need to be sure the "physical-device-uuid"
> > node is populated before setting "physical-device", as writes to
> > that node would still trigger the blkback watch.  I think using two
> > distinct watches would just make the logic in blkback too
> > complicated.
> > 
> > My preference would be for the kernel to have a function for opening a
> > device identified by a diskseq (as fetched from
> > "physical-device-uuid"), so that we don't have to open using
> > major:minor and then check the diskseq.
> 
> In theory I agree, but in practice it would be a significantly more
> complex patch and given that it does not impact the uAPI I would prefer
> the less-invasive option.

From a blkback point of view I don't see that option as more invasive,
it's actually the other way around IMO.  On blkback we would use
blkdev_get_by_diskseq() (or equivalent) instead of
blkdev_get_by_dev(), so it would result in an overall simpler
change (because the check against diskseq won't be needed anymore).
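
Something along these lines is what I have in mind.  This is only a
sketch to illustrate the difference, not a proposed patch:
blkdev_get_by_diskseq() doesn't exist upstream, and the signatures
below assume the current blkdev_get_by_dev()/blkdev_put() interfaces.

#include <linux/blkdev.h>
#include <linux/err.h>

/* What the series does now: open by major:minor, then verify diskseq. */
static struct block_device *open_and_check(dev_t dev, u64 diskseq,
					   fmode_t mode, void *holder)
{
	struct block_device *bdev = blkdev_get_by_dev(dev, mode, holder);

	if (IS_ERR(bdev))
		return bdev;

	if (diskseq && bdev->bd_disk->diskseq != diskseq) {
		blkdev_put(bdev, mode);
		return ERR_PTR(-ENODEV);
	}

	return bdev;
}

/*
 * The alternative: resolve the device by diskseq directly, so the
 * explicit check above goes away (hypothetical helper).
 */
static struct block_device *open_by_diskseq(u64 diskseq, fmode_t mode,
					    void *holder)
{
	return blkdev_get_by_diskseq(diskseq, mode, holder);
}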

> Is there anything more that needs to be done
> here, other than replacing the "diskseq" name?

I think we also spoke about using sscanf to parse the option.
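
Something like the below is the kind of parsing I had in mind (sketch
only; the "physical-device-uid" node name and the hex encoding of the
diskseq are assumptions, not a settled interface):

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/types.h>

/* Parse a hex diskseq value, rejecting trailing garbage and 0. */
static int parse_diskseq(const char *str, u64 *diskseq)
{
	char dummy;

	if (sscanf(str, "%llx%c", diskseq, &dummy) != 1)
		return -EINVAL;

	return *diskseq ? 0 : -EINVAL;
}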

The patch to Xen blkif.h needs to be accepted before the Linux side
can progress.


> I prefer
> "physical-device-luid" because the ID is only valid in one particular
> VM.

"physical-device-uid" then maybe?

Thanks, Roger.
