
Re: [Xen-devel] [PATCH] x86/ctxt-switch: Document and improve GDT handling


  • To: Jan Beulich <JBeulich@xxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 5 Jul 2019 14:36:34 +0100
  • Cc: Juergen Gross <JGross@xxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 05 Jul 2019 13:36:45 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 05/07/2019 11:00, Jan Beulich wrote:
> On 04.07.2019 19:57, Andrew Cooper wrote:
>> write_full_gdt_ptes() has a latent bug.  Using virt_to_mfn() and iterating
>> with (mfn + i) is wrong, because of PDX compression.  The context switch path
>> only functions correctly because NR_RESERVED_GDT_PAGES is 1.
> Whether this is a (latent) bug depends on how the allocation gets
> done. As long as it's a single alloc_xenheap_pages(), this is
> perfectly fine. There are no individual allocations which can span
> a PDX compression hole (or else MFN or struct page pointer
> arithmetic wouldn't work either, independent of the involvement of
> a virtual address).

Hmm - It's still very deceptive code.

>
>> Also, it should now be very obvious to people that Xen's current GDT handling
>> for non-PV vcpus is a recipe for subtle bugs, if we ever manage to execute a
>> stray mov/pop %sreg instruction.  We really ought to have Xen's regular GDT
>> in an area where slots 0-13 are either mapped to the zero page, or not
>> present, so we don't risk loading a non-faulting garbage selector.
> Well, there's certainly room for improvement, but loading a stray
> selector seems pretty unlikely an event to happen, and the
> respective code having got slipped in without anyone noticing.
> Other than in context switching code I don't think there are many
> places at all where we write to the selector registers.

There are however many places where we write some bytes into a stub and
then execute them.

This isn't a security issue.  There aren't any legitimate codepaths for
which this is a problem, but there are plenty of cascade failures where
this is liable to make a bad situation worse in weird, hard-to-debug ways.

Not to mention that for security hardening purposes, we should be using
a RO mapping to combat sgdt or fixed-ABI knowledge from an attacker.

And on that note... nothing really updates the full GDT via the
perdomain mappings, so I think that can already move to being RO.  This
does depend on the fact that no one has used segmented virtual memory
since long before Xen was a thing.  We can trap and emulate the setting
of A bits, and I bet that path will never get hit even with old PV guests.

>> @@ -1718,15 +1737,12 @@ static void __context_switch(void)
>>   
>>       psr_ctxt_switch_to(nd);
>>   
>> -    gdt = !is_pv_32bit_domain(nd) ? per_cpu(gdt_table, cpu) :
>> -                                    per_cpu(compat_gdt_table, cpu);
>> -
>>       if ( need_full_gdt(nd) )
>> -        write_full_gdt_ptes(gdt, n);
>> +        update_xen_slot_in_full_gdt(n, cpu);
>>   
>>       if ( need_full_gdt(pd) &&
>>            ((p->vcpu_id != n->vcpu_id) || !need_full_gdt(nd)) )
>> -        load_default_gdt(gdt, cpu);
>> +        load_default_gdt(cpu);
>  From looking at this transformation I cannot see how, as said in
> the description and as expressed by removing the gdt parameter
> from load_default_gdt(), the gdt having got passed in here would
> always have been per_cpu(gdt_table, cpu). It pretty clearly was
> the compat one for nd being 32-bit PV. What am I missing?

To be perfectly honest, I wrote "how it {does,should} logically work",
then adjusted the code.

> Or is the description perhaps instead meaning to say that it doesn't
> _need_ to be the compat one that we load here, as in case it is
> the subsequent load_full_gdt() will replace it again anyway?

lgdt is an expensive operation.  I hadn't even spotted that we are doing
it twice on that path.  There is surely some room for improvement here
as well.

I wonder if caching the last gdt base address per cpu would be a better
option, and only doing a "lazy" lgdt.  It would certainly simplify the
"when should I lgdt?" logic.

>
>> @@ -2059,6 +2061,14 @@ void __init trap_init(void)
>>           }
>>       }
>>   
>> +    /* Cache {,compat_}gdt_table_l1e now that physically relocation is done. */
> "physical relocation" or "physically relocating"?

Oops.  I'll go with the former.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
