Re: [Xen-devel] [PATCH] x86/HVM: correct hvmemul_map_linear_addr() for multi-page case
- To: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>
- From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
- Date: Wed, 30 Aug 2023 19:09:07 +0100
- Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Paul Durrant <paul.durrant@xxxxxxxxxx>
- Delivery-date: Wed, 30 Aug 2023 18:09:33 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On 30/08/2023 3:30 pm, Roger Pau Monné wrote:
> On Wed, Sep 12, 2018 at 03:09:35AM -0600, Jan Beulich wrote:
>> The function does two translations in one go for a single guest access.
>> Any failure of the first translation step (guest linear -> guest
>> physical), resulting in #PF, ought to take precedence over any failure
>> of the second step (guest physical -> host physical).
Erm... No?
There are up to 25 translation steps, assuming a memory operand
contained entirely within a cache line.
They intermix between gla->gpa and gpa->spa in a strict order.
There is no point at which the error is ambiguous, nor is there ever a
point where a pagewalk continues beyond a faulting condition.
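To make the ordering argument concrete, here is a minimal model (not Xen
code; the names `step`, `walk` and the fault enum are invented for
illustration): each guest-physical access produced by the gla->gpa walk is
fed through gpa->spa immediately, and the first failing step terminates the
walk, so a later stage-1 failure can never "take precedence" over an
earlier stage-2 one.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model only -- not Xen code. */
enum fault { FAULT_NONE, FAULT_PF, FAULT_EPT };

/* One guest-physical access during the combined walk: did its gla->gpa
 * step succeed, and did the nested gpa->spa step succeed? */
struct step { bool gla_ok; bool gpa_ok; };

/* Process steps in hardware order.  The first failure aborts the walk,
 * so exactly one fault is ever a candidate for delivery. */
static enum fault walk(const struct step *steps, size_t n, size_t *at)
{
    for ( size_t i = 0; i < n; i++ )
    {
        if ( !steps[i].gla_ok ) { *at = i; return FAULT_PF; }
        if ( !steps[i].gpa_ok ) { *at = i; return FAULT_EPT; }
    }
    return FAULT_NONE;
}
```

A stage-2 failure at step 1 is delivered even though a stage-1 failure
exists at step 2, because the walk never reaches step 2.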
Hardware certainly isn't wasting transistors holding state just to see
whether it could progress further in order to hand back a different error...
When the pipeline needs to split an access, it has to generate multiple
adjacent memory accesses, because the unit of memory access is a cache line.
There is a total order of accesses in the memory queue, so any fault
affecting the first part of the access will be delivered before any fault
affecting the part that spills into the next cache line.
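A sketch of that split (hypothetical helper, not pipeline or Xen code):
the access is cut at cache-line boundaries and the pieces are issued in
address order, which is what imposes the total order on fault delivery.

```c
#include <assert.h>
#include <stddef.h>

#define CL_SIZE 64UL /* assumed cache-line size */

/* Split [addr, addr + len) at cache-line boundaries, recording the start
 * of each resulting access in address order.  Returns the piece count. */
static size_t split_access(unsigned long addr, size_t len,
                           unsigned long pieces[], size_t max)
{
    size_t n = 0;

    while ( len && n < max )
    {
        /* Bytes remaining in the current cache line. */
        size_t chunk = CL_SIZE - (addr & (CL_SIZE - 1));

        if ( chunk > len )
            chunk = len;
        pieces[n++] = addr;
        addr += chunk;
        len -= chunk;
    }

    return n;
}
```

An 8-byte operand at 0x3c splits into accesses at 0x3c and 0x40; any fault
on the 0x3c piece is delivered before any fault on the 0x40 piece.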
I'm not necessarily saying that Xen's behaviour in
hvmemul_map_linear_addr() is correct in all cases, but it looks a hell
of a lot more correct in its current form than what this patch presents.
Or do you have a concrete example where you think
hvmemul_map_linear_addr() behaves incorrectly?
~Andrew