memory access atomicity during HVM insn emulation on x86


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 2 Mar 2023 09:35:32 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • Delivery-date: Thu, 02 Mar 2023 08:35:48 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hello,

in (I think) Intel SDM version 076, a new guarantee of atomicity for certain
aligned 16-byte accesses appeared. While initially I thought this would be
another special case we need to invent a solution for (and it still is, in
certain cases, as discussed further down), I came to realize that we don't
even guarantee atomicity of smaller accesses, including ones as simple as
plain 16-, 32-, or 64-bit moves. All read/write operations are handled by
the very generic __hvm_copy(), which invokes memcpy() / memset(), which in
turn do byte-wise copies unless the compiler decides to inline the
operations (which, as far as I can tell, it won't normally do for the uses
in __hvm_copy()).

The question here is whether to make __hvm_copy() handle the guaranteed-
aligned cases specially, or whether to avoid using that function for those
cases altogether (i.e. deal with them in linear_{read,write}()). Both
options have their downsides (complicating a core function vs duplicating
a certain amount of code).
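To illustrate the first option, here is a minimal sketch (hypothetical code, not an actual Xen patch; the function name copy_maybe_atomic() is made up) of what "handle the guaranteed-aligned cases specially" could look like: naturally aligned 2-, 4-, and 8-byte copies are dispatched to single same-width moves, which x86 performs atomically, with byte-wise memcpy() remaining the fallback for everything else:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch, not Xen code: perform naturally aligned 2/4/8-byte
 * copies as single same-width moves (atomic on x86), falling back to the
 * (potentially byte-wise, hence non-atomic) memcpy() otherwise. */
static void copy_maybe_atomic(void *dst, const void *src, size_t size)
{
    /* Both pointers must be suitably aligned for the single-move path. */
    uintptr_t mask = (uintptr_t)dst | (uintptr_t)src;

    switch ( size )
    {
    case 2:
        if ( !(mask & 1) )
        {
            *(volatile uint16_t *)dst = *(const volatile uint16_t *)src;
            return;
        }
        break;

    case 4:
        if ( !(mask & 3) )
        {
            *(volatile uint32_t *)dst = *(const volatile uint32_t *)src;
            return;
        }
        break;

    case 8:
        if ( !(mask & 7) )
        {
            *(volatile uint64_t *)dst = *(const volatile uint64_t *)src;
            return;
        }
        break;
    }

    memcpy(dst, src, size);    /* non-atomic fallback */
}
```

The same dispatch logic would apply whichever layer ends up hosting it; the trade-off discussed above is only about where it lives, not what it does.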

As to 16-byte atomic accesses: The SDM doesn't restrict this to WB memory.
As a result, in order to implement this correctly, we cannot just utilize
the rmw() or blk() hooks, as these expect to operate on guest RAM (which
they can map and then access directly). Instead, the path invoking the
device model will also need to cope. Yet the ioreq interface is limited
to 64 bits of data at a time (except for the data_is_ptr case, which imo
has to be considered inherently non-atomic). So it looks to me that, as a
prerequisite to fully addressing the issue in the hypervisor, we need an
extension to the ioreq interface.
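For the guest-RAM side (the case the rmw()/blk() hooks can handle by mapping the page), the access itself is straightforward: the SDM guarantee covers an aligned 16-byte load or store done as a single instruction. A hypothetical sketch (not Xen code; assumes SSE2, which is baseline on x86-64, and 16-byte-aligned pointers):

```c
#include <emmintrin.h>    /* SSE2 intrinsics; baseline on x86-64 */

/* Hypothetical sketch, not Xen code: a 16-byte copy done as one aligned
 * SSE2 load plus one aligned SSE2 store.  Under the new SDM guarantee,
 * each of the two memory accesses is individually atomic; both pointers
 * must be 16-byte aligned, or these intrinsics fault (#GP). */
static void copy16_atomic(void *dst, const void *src)
{
    __m128i v = _mm_load_si128((const __m128i *)src);

    _mm_store_si128((__m128i *)dst, v);
}
```

It is the device-model path, where the data has to travel through the 64-bit-wide ioreq structure rather than a mapped page, that has no equivalent of this today.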

Thoughts anyone?

Thanks, Jan
