
Re: [Xen-devel] [PATCH] RFC x86/hvm: Don't truncate the hvm hypercall index before range checking it



On 27/10/16 08:12, Jan Beulich wrote:
>>>> On 26.10.16 at 20:19, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 24/10/16 12:13, Jan Beulich wrote:
>>>>>> On 24.10.16 at 12:25, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> Yes, we very much are at liberty to change things.  Viridian would not
>>>> function without using that page (as the hypercalls would be confused
>>>> with Xen hypercalls), and the spec is very clear that the hypercall page
>>>> will be used.
>>>>
>>>> As for the Xen hypercall page, the ABI is clearly stated as:
>>>>
>>>>     call hypercall_page + hypercall-number * 32
>>>>
>>>> in include/public/arch-x86/xen-x86_{32,64}.h, meaning that we are
>>>> perfectly at liberty to alter the layout and inner-workings of our
>>>> hypercall page as well.
>>> This, iirc, is not something that has been this way from the beginning;
>>> I think the page was introduced as a courtesy for 64-bit PV guests,
>>> where the hypercall sequence involves multiple instructions (I can't
>>> tell whether for HVM guests it has always been there, to abstract
>>> out the vendor differences in what instruction to use).
>>>
>>> In fact even current upstream Linux still has a remnant of it being
>>> different, by way of the (now unused) TRAP_INSTR definition. If the
>>> presence of a hypercall page (as an obvious prerequisite of its use)
>>> were a requirement, we shouldn't boot guests not having one (and we
>>> probably should go as far as refusing calls originating from outside it,
>>> which would break many if not all SUSE 32-bit PV kernels, which do a
>>> few early calls without going through hypercall_page).
>> PV guests aren't a problem.  Even now, they don't truncate %rax.
>>
>> HVM guests have always had hypercall pages.  Having gone through the
>> history again, it appears that the 64-bit HVM ABI was introduced broken
>> by c/s 5eeca68f, despite the fact that the mov $imm32, %eax in the
>> hypercall page provides the expected truncation.
> Okay, you've convinced me. I'd like to slightly refine my earlier minor
> adjustment request though:
>
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -4265,11 +4265,11 @@ int hvm_do_hypercall(struct cpu_user_regs *regs)
>>      struct domain *currd = curr->domain;
>>      struct segment_register sreg;
>>      int mode = hvm_guest_x86_mode(curr);
>> -    uint32_t eax = regs->eax;
>> +    unsigned long eax;
>>  
>>      switch ( mode )
>>      {
>> -    case 8:        
>> +    case 8:
>>      case 4:
>>      case 2:
>>          hvm_get_segment_register(curr, x86_seg_ss, &sreg);
>> @@ -4283,6 +4283,8 @@ int hvm_do_hypercall(struct cpu_user_regs *regs)
>>          break;
>>      }
>>  
>> +    eax = (mode == 8) ? regs->eax : regs->_eax;
> I think that, to avoid another conditional here, regs->_eax could remain
> the initializer of eax, and the use of regs->rax could be put into the
> "case 8:" which you touch anyway. I'm not insisting on this though, so
> either way, with just the originally requested adjustment of this one:
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

Something like this?

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 11e2b82..69b740d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4279,11 +4279,12 @@ int hvm_do_hypercall(struct cpu_user_regs *regs)
     struct domain *currd = curr->domain;
     struct segment_register sreg;
     int mode = hvm_guest_x86_mode(curr);
-    uint32_t eax = regs->eax;
+    unsigned long eax = regs->_eax;
 
     switch ( mode )
     {
-    case 8:       
+    case 8:
+        eax = regs->rax;
     case 4:
     case 2:
         hvm_get_segment_register(curr, x86_seg_ss, &sreg);
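
As an aside, here is a minimal self-contained sketch of the behavioural
difference the change is after; fake_regs and long_mode below are
illustrative stand-ins for cpu_user_regs and hvm_guest_x86_mode(), not
actual Xen code:

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the guest register state. */
struct fake_regs {
    uint64_t rax;
};

int main(void)
{
    /* A 64-bit guest bypassing the hypercall page can leave the upper
     * 32 bits of %rax set. */
    struct fake_regs regs = { .rax = 0x100000002ULL };

    /* Old behaviour: unconditional truncation, so the bogus index is
     * misread as hypercall 2 and passes the range check. */
    uint32_t old_eax = regs.rax;

    /* New behaviour: use the full register in 64-bit mode, and only the
     * low 32 bits otherwise (the regs->_eax default plus the case 8
     * override above). */
    int long_mode = 1; /* stand-in for mode == 8 */
    unsigned long new_eax = long_mode ? regs.rax : (uint32_t)regs.rax;

    printf("old index: %#x, new index: %#lx\n", old_eax, new_eax);
    return 0;
}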

~Andrew
