
Re: [PATCH 0/6] x86/debug: fix guest dr6 value for single stepping and HW breakpoints


  • To: Jinoh Kang <jinoh.kang.kr@xxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 18 Aug 2023 21:22:03 +0100
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Henry Wang <Henry.Wang@xxxxxxx>
  • Delivery-date: Fri, 18 Aug 2023 20:22:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 18/08/2023 4:44 pm, Jinoh Kang wrote:
> Xen has a bug where hardware breakpoint exceptions (DR_TRAP<n>) are
> erroneously recognized as single-stepping exceptions (DR_STEP).

I expected this to come back and bite us.

https://lore.kernel.org/xen-devel/1528120755-17455-1-git-send-email-andrew.cooper3@xxxxxxxxxx/

Xen's %dr6 handling is very broken, and then Spectre/Meltdown happened
and somehow my bugfixes are now 5 years old and still incomplete.  I've
got a form of this series rebased onto staging, which I'll need to dust off.


That said, I was not aware of this specific case going wrong.  (But I
can't say I'm surprised.)

Thank you for the test case.  If I'm reading it right, the problem is
that when %dr0 genuinely triggers, we VMExit (to break #DB infinite
loops), and on re-injecting the #DB back to the guest, we blindly set
singlestep?
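
Roughly, and with made-up names rather than the real Xen structures, the
failure mode I'm describing would be something like:

    #define DR_STEP   (1UL << 14)   /* %dr6.BS, single step */
    #define DR_TRAP0  (1UL << 0)    /* %dr6.B0, breakpoint 0 hit */

    /* Hypothetical re-injection path, for illustration only. */
    static void reinject_db(unsigned long *guest_dr6, unsigned long exit_dr6)
    {
        /* Buggy: unconditionally report a single step to the guest ... */
        *guest_dr6 |= DR_STEP;

        /* ... when the %dr0 hit ought to set B0 from the exit info, e.g.
         * *guest_dr6 |= exit_dr6 & DR_TRAP0; */
    }

i.e. the information about which debug event actually fired gets
discarded on the way back into the guest.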

This is wrong for #DB faults, and you're using an instruction breakpoint
so that matches.

However, it is far more complicated for #DB traps, where hardware leaves
it up to the VMM to merge status bits, and it's different between PV,
VT-x and SVM.
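
As a sketch of what "merge" means here (not the real code in either
series, and assuming the usual value of X86_DR6_DEFAULT):

    #define X86_DR6_DEFAULT 0xffff0ff0UL   /* %dr6 with no status bits set */

    /* OR the positive-polarity status bits (B0-B3, BD, BS, BT) from the
     * new event on top of what the guest has already accumulated, then
     * re-apply the default bits.  A complete merge also has to AND the
     * active-low RTM bit (bit 16), which this sketch ignores. */
    static unsigned long merge_dr6(unsigned long old_dr6, unsigned long new_dr6)
    {
        return X86_DR6_DEFAULT | ((old_dr6 | new_dr6) & ~X86_DR6_DEFAULT);
    }

The complication is that PV, VT-x and SVM each hand you the new bits in a
different place and with different polarity, so there's no single common
path to do this in.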

Worse than that, there is a bug on Intel hardware where if a singlestep
is genuinely pending, then data breakpoint information isn't passed to
the VMM on VMExit.  I have yet to persuade Intel to fix this, despite
quite a lot of trying.

Looking at patch 3, I think I can see how it fixes your bug, but I don't
think the logic is correct in all cases.  In particular, look at my
series for the cases where X86_DR6_DEFAULT is used to flip polarity.
Notably, PV and SVM have different dr6 polarity to VT-x's pending_dbg field.
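
For concreteness on the polarity point: architectural %dr6 (what PV and
SVM deal in) idles at X86_DR6_DEFAULT with its reserved bits reading as
1, whereas VT-x's PENDING DEBUG EXCEPTIONS field idles at 0.  As an
illustration only (it glosses over bit 12 and the other pending_dbg-only
bits):

    #define X86_DR6_DEFAULT 0xffff0ff0UL

    /* XOR with the default flips exactly the bits whose idle state is 1:
     * a %dr6 with nothing pending becomes 0, set B0-B3/BS bits stay set,
     * and the active-low RTM bit comes out positive. */
    static unsigned long dr6_to_positive_polarity(unsigned long dr6)
    {
        return dr6 ^ X86_DR6_DEFAULT;
    }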

Also, on Intel you're supposed to leave pending bits in pending_dbg and
not inject #DB directly, in order for the pipeline to get the exception
priority in the right order.  This I didn't get around to fixing at the
time.
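
Concretely, and assuming Xen's usual __vmread()/__vmwrite() helpers and
the GUEST_PENDING_DBG_EXCEPTIONS field name, that means accumulating into
the VMCS field rather than going through the normal injection path:

    /* Hypothetical helper: queue trap-type debug events via the VMCS
     * pending-debug-exceptions field, so the pipeline raises the #DB at
     * the architecturally correct priority, instead of injecting it as
     * an event on the next VMEntry. */
    static void vmx_queue_pending_dbg(unsigned long pending)
    {
        unsigned long val;

        __vmread(GUEST_PENDING_DBG_EXCEPTIONS, &val);
        __vmwrite(GUEST_PENDING_DBG_EXCEPTIONS, val | pending);
    }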


I suspect what we'll need to do is combine parts of your series and
parts of mine.  I never got around to fixing the introspection bugs in
mine (that's a far larger task to do nicely), and yours is more targeted.

~Andrew
