
Re: [Xen-devel] [PATCH v3 for 4.5] ioreq-server: handle the lack of a default emulator properly



>>> On 30.09.14 at 11:52, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 30/09/14 10:48, Jan Beulich wrote:
>>>>> On 30.09.14 at 11:29, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 30/09/14 10:18, Paul Durrant wrote:
>>>> I started porting QEMU over to use the new ioreq server API and hit a
>>>> problem with PCI bus enumeration. Because, with my patches, QEMU only
>>>> registers to handle config space accesses for the PCI devices it
>>>> implements, all other attempts by the guest to access 0xcfc go nowhere,
>>>> and this was causing the vcpu to wedge because nothing was completing
>>>> the I/O.
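For background, the legacy PCI "configuration mechanism #1" being enumerated here works by writing an address to port 0xcf8 and then accessing data through port 0xcfc. A minimal sketch of the address decode follows; this is illustrative plain C, not code from Xen or QEMU, and the struct and function names are made up:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical decode of a PCI CF8 address-port value. */
struct pci_addr {
    unsigned int bus, dev, fn, reg;
    int enabled;
};

static struct pci_addr decode_cf8(uint32_t cf8)
{
    struct pci_addr a;

    a.enabled = (cf8 >> 31) & 1;    /* bit 31: config-access enable */
    a.bus     = (cf8 >> 16) & 0xff; /* bits 23-16: bus number */
    a.dev     = (cf8 >> 11) & 0x1f; /* bits 15-11: device number */
    a.fn      = (cf8 >> 8)  & 0x7;  /* bits 10-8: function number */
    a.reg     = cf8 & 0xfc;         /* bits 7-2: dword-aligned register */
    return a;
}
```

This is why an ioreq server that claims only its own devices' bus/dev/fn leaves every other 0xcfc access with no handler: the hypervisor can decode the last cf8 write to pick a server, and when none matches, something must still complete the I/O.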
>>>>
>>>> This patch introduces an I/O completion handler into the hypervisor for
>>>> the case where no ioreq server matches a particular request. Read
>>>> requests are completed with 0xff in every byte of the data buffer;
>>>> writes and all other I/O request types are ignored.
>>>>
>>>> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
>>>> Cc: Keir Fraser <keir@xxxxxxx>
>>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>> One bug, couple of nits.
>>>
>>> It is probably worth having a sentence in the commit message concerning
>>> the removal of list_is_singular().
>>>
>>>> ---
>>>> v3: - Fix for backwards string instruction emulation
>>>>
>>>> v2: - First non-RFC submission
>>>>     - Removed warning on unemulated MMIO accesses
>>>>
>>>>  xen/arch/x86/hvm/hvm.c |   35 ++++++++++++++++++++++++++++++++---
>>>>  1 file changed, 32 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>> index 5c7e0a4..e6611ed 100644
>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>> @@ -2386,8 +2386,7 @@ static struct hvm_ioreq_server 
>>> *hvm_select_ioreq_server(struct domain *d,
>>>>      if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
>>>>          return NULL;
>>>>  
>>>> -    if ( list_is_singular(&d->arch.hvm_domain.ioreq_server.list) ||
>>>> -         (p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO) )
>>>> +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>>>>          return d->arch.hvm_domain.default_ioreq_server;
>>>>  
>>>>      cf8 = d->arch.hvm_domain.pci_cf8;
>>>> @@ -2618,12 +2617,42 @@ bool_t hvm_send_assist_req_to_ioreq_server(struct 
>>> hvm_ioreq_server *s,
>>>>      return 0;
>>>>  }
>>>>  
>>>> +static bool_t hvm_complete_assist_req(ioreq_t *p)
>>>> +{
>>>> +    switch (p->type)
>>> Style: ( p->type )
>>>
>>>> +    {
>>>> +    case IOREQ_TYPE_COPY:
>>>> +    case IOREQ_TYPE_PIO:
>>>> +        if ( p->dir == IOREQ_READ )
>>>> +        {
>>>> +            if ( !p->data_is_ptr )
>>>> +                p->data = ~0ul;
>>>> +            else
>>>> +            {
>>>> +                int i, step = p->df ? -p->size : p->size;
>>> 'i' must be unsigned or larger, given p->count being uint32_t.
>> No (or else similar changes would be needed elsewhere) - the field
>> being uint32_t doesn't imply the full value range to be used. This is
>> an ioreq_t, which we fill ourselves. Remember the code I pointed
>> you to yesterday? The correctness of the above follows from
>> similar implications afaict.
> 
> It is a matter of defensive coding.  Just because we do not expect
> p->size * p->count to be greater than a page doesn't mean that some bug
> won't cause it to happen.
> 
> At this point, the difference between a signed and unsigned i is a
> bounded or unbounded loop.
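A minimal sketch of that point, using made-up values rather than a real ioreq: the field is 32 bits wide, so if a bug ever corrupts it to a value above INT_MAX, a signed `int` counter would run into signed-overflow undefined behaviour, while an unsigned counter keeps the loop bounded at `count` iterations for any 32-bit value.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: with 'uint32_t i' the loop below is well-defined
 * and bounded for the entire 32-bit range of 'count'; with 'int i' a
 * count above INT_MAX would push the increment through signed overflow,
 * which is undefined behaviour in C. */
static uint32_t bounded_iterations(uint32_t count)
{
    uint32_t i, n = 0;   /* unsigned counter: defensively bounded */

    for ( i = 0; i < count; i++ )
        n++;
    return n;
}
```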

Again - if you feel strongly about this, submit a patch to fix it
everywhere. When I fixed the backward string ops here, I did
consider what you refer to above, but in the end didn't think it
was worth forcing the compiled code to grow (due to the added REX
prefixes) for no real reason.

Jan
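For reference, the backward-string handling discussed above can be modelled in user space as follows. This is a sketch of the idea in the quoted hunk, not the hypervisor code path: each element of the "guest buffer" is filled with 0xff, stepping to lower addresses when the direction flag (df) is set. The struct and names are stand-ins, not the real ioreq_t.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the relevant ioreq_t fields. */
struct fake_req {
    uint32_t size;   /* bytes per element */
    uint32_t count;  /* number of elements */
    int df;          /* direction flag: 1 = descending addresses */
};

/* 'first' points at the element addressed by the request; when df is
 * set that is the highest-addressed element, so each iteration steps
 * downward by 'size' bytes. */
static void complete_read_all_ones(const struct fake_req *p, uint8_t *first)
{
    int64_t step = p->df ? -(int64_t)p->size : (int64_t)p->size;
    uint32_t i;

    for ( i = 0; i < p->count; i++ )
        memset(first + (int64_t)i * step, 0xff, p->size);
}
```

With df clear the three 2-byte elements are written at offsets 0, 2, 4; with df set and `first` at the last element they are written at offsets 4, 2, 0, covering the same buffer.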


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

