
Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption that HVM paths MUST use io-backend device.



On Tue, Feb 11, 2014 at 12:17 AM, Zhang, Yang Z <yang.z.zhang@xxxxxxxxx> wrote:
> Konrad Rzeszutek Wilk wrote on 2014-02-07:
>> On Fri, Feb 07, 2014 at 02:28:07AM +0000, Zhang, Yang Z wrote:
>>> Konrad Rzeszutek Wilk wrote on 2014-02-05:
>>>> On Wed, Feb 05, 2014 at 02:35:51PM +0000, George Dunlap wrote:
>>>>> On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
>>>>>> On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
>>>>>>>>>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk
>>>> <konrad.wilk@xxxxxxxxxx> wrote:
>>>>>>>> On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
>>>>>>>>> Wasn't it that Mukesh's patch simply was yours with the two
>>>>>>>>> get_ioreq()s folded by using a local variable?
>>>>>>>> Yes, as so:
>>>>>>> Thanks. Except that ...
>>>>>>>
>>>>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>>>> @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
>>>>>>>>      struct vcpu *v = current;
>>>>>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>>>>>> -
>>>>>>>> +    ioreq_t *p = get_ioreq(v);
>>>>>>> ... you don't want to drop the blank line, and naming the new
>>>>>>> variable "ioreq" would seem preferable.
>>>>>>>
>>>>>>>>      /*
>>>>>>>>       * a pending IO emulation may still not be finished. In this case,
>>>>>>>>       * no virtual vmswitch is allowed. Otherwise, the following IO
>>>>>>>>       * emulation will be handled in a wrong VCPU context.
>>>>>>>>       */
>>>>>>>> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
>>>>>>>> +    if ( p && p->state != STATE_IOREQ_NONE )
>>>>>>> And, as said before, I'd think "!p ||" instead of "p &&" would be
>>>>>>> the right thing here. Yang, Jun?
>>>>>> I have two patches - the simpler one, which is pretty
>>>>>> straightforward, and the one you suggested. Either one fixes PVH
>>>>>> guests. I also did bootup tests with HVM guests to make sure they
>>>>>> worked.
>>>>>>
>>>>>> Attached and inline.
>>>>>
>>>
>>> Sorry for the late response. I just got back from the Chinese New Year holiday.
>>>
>>>>> But they do different things -- one does "ioreq && ioreq->state..."
>>>>
>>>> Correct.
>>>>> and the other does "!ioreq || ioreq->state...".  The first one is
>>>>> incorrect, AFAICT.
>>>>
>>>> Both of them fix the hypervisor blowing up with any PVH guest.
>>>
>>> Both fixes look right to me.
>>> The only concern is what we actually want to do here:
>>> "ioreq && ioreq->state..." only lets a VCPU that supports the IO
>>> request emulation mechanism continue with the nested check, which
>>> currently means an HVM VCPU.
>>> And "!ioreq || ioreq->state..." additionally covers the VCPU that
>>> doesn't support the IO request emulation mechanism, which currently
>>> means a PVH VCPU.
>>>
>>> The purpose of my original patch was only to allow an HVM VCPU with no
>>> pending IO request to continue with the nested check, not to use the
>>> check to distinguish HVM from PVH. So I prefer to only let an HVM VCPU
>>> get here; as Jan mentioned before, a non-HVM domain should never call
>>> nested-related functions at all unless it also supports nesting.
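[For reference, a minimal self-contained sketch of the difference being discussed. This is illustrative only, not the actual Xen code: the helper names blocks_vmswitch_v1/v2 are made up, and the type and constant names merely mirror the quoted patch. It shows how the two guards behave when get_ioreq() returns NULL, as it does for a PVH vCPU.]

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the Xen types used in the quoted patch. */
enum { STATE_IOREQ_NONE = 0, STATE_IOREQ_READY = 1 };
typedef struct { int state; } ioreq_t;

/* Variant #1 ("p && ..."): a PVH vCPU, whose get_ioreq() is NULL,
 * falls through and the virtual vmswitch is allowed to proceed. */
static int blocks_vmswitch_v1(const ioreq_t *p)
{
    return p != NULL && p->state != STATE_IOREQ_NONE;
}

/* Variant #2 ("!p || ..."): a PVH vCPU bails out early, so only an
 * HVM vCPU with no pending IO request proceeds to the nested check. */
static int blocks_vmswitch_v2(const ioreq_t *p)
{
    return p == NULL || p->state != STATE_IOREQ_NONE;
}

int main(void)
{
    ioreq_t idle = { STATE_IOREQ_NONE };

    printf("PVH vCPU (NULL ioreq): v1 blocks=%d, v2 blocks=%d\n",
           blocks_vmswitch_v1(NULL), blocks_vmswitch_v2(NULL));
    printf("HVM vCPU (idle ioreq): v1 blocks=%d, v2 blocks=%d\n",
           blocks_vmswitch_v1(&idle), blocks_vmswitch_v2(&idle));
    return 0;
}

[With variant #2, a PVH vCPU never reaches the nested-vmswitch path, which matches the intent stated above.]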
>>
>> So it sounds like patch #2 is the one you prefer.
>>
>> Can I stick an Acked-by on it?
>>
>
> Sure.

Konrad / Jan: Ping?

If this fix looks reasonable, it would be nice to get it in before RC4.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

