
Re: [Xen-devel] [PATCH v2 for 4.5] ioreq-server: handle the lack of a default emulator properly



>>> On 29.09.14 at 14:51, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 29/09/14 13:20, Jan Beulich wrote:
>>>>> On 29.09.14 at 12:59, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 29/09/14 11:21, Paul Durrant wrote:
>>>> I started porting QEMU over to use the new ioreq server API and hit a
>>>> problem with PCI bus enumeration. Because, with my patches, QEMU only
>>>> registers to handle config space accesses for the PCI device it implements,
>>>> all other attempts by the guest to access 0xcfc go nowhere. This was
>>>> causing the vcpu to wedge up because nothing was completing the I/O.
>>>>
>>>> This patch introduces an I/O completion handler into the hypervisor for the
>>>> case where no ioreq server matches a particular request. Read requests are
>>>> completed with 0xf's in the data buffer; writes and all other I/O req types
>>>> are ignored.
>>>>
>>>> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
>>>> Cc: Keir Fraser <keir@xxxxxxx>
>>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>>> ---
>>>> v2: - First non-RFC submission
>>>>     - Removed warning on unemulated MMIO accesses
>>>>
>>>>  xen/arch/x86/hvm/hvm.c |   35 ++++++++++++++++++++++++++++++++---
>>>>  1 file changed, 32 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>> index 5c7e0a4..822ac37 100644
>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>> @@ -2386,8 +2386,7 @@ static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>>>>      if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
>>>>          return NULL;
>>>>  
>>>> -    if ( list_is_singular(&d->arch.hvm_domain.ioreq_server.list) ||
>>>> -         (p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO) )
>>>> +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>>>>          return d->arch.hvm_domain.default_ioreq_server;
>>>>  
>>>>      cf8 = d->arch.hvm_domain.pci_cf8;
>>>> @@ -2618,12 +2617,42 @@ bool_t hvm_send_assist_req_to_ioreq_server(struct hvm_ioreq_server *s,
>>>>      return 0;
>>>>  }
>>>>  
>>>> +static bool_t hvm_complete_assist_req(ioreq_t *p)
>>>> +{
>>>> +    switch (p->type)
>>>> +    {
>>>> +    case IOREQ_TYPE_COPY:
>>>> +    case IOREQ_TYPE_PIO:
>>>> +        if ( p->dir == IOREQ_READ )
>>>> +        {
>>>> +            if ( !p->data_is_ptr )
>>>> +                p->data = ~0ul;
>>>> +            else
>>>> +            {
>>>> +                int i, sign = p->df ? -1 : 1;
>>>> +                uint32_t data = ~0;
>>>> +
>>>> +                for ( i = 0; i < p->count; i++ )
>>>> +                    hvm_copy_to_guest_phys(p->data + sign * i * p->size, &data,
>>>> +                                           p->size);
>>> This is surely bogus for an `ins` which crosses a page boundary?
>> Crossing page boundaries gets dealt with up the call stack in
>> hvmemul_linear_to_phys(), namely the path exiting with
>> X86EMUL_UNHANDLEABLE when done == 0.
> 
> Paul also pointed this out in person, which indicates that
> hvm_copy_to_guest_phys() is indeed correct in this case.
> 
> Therefore it is fine, but only because the caller guarantees that
> "p->data + sign * i * p->size" does not cross a page boundary.
> 
> 
> However, what I can't spot is any logic which copes with addr not being
> aligned with bytes_per_rep.  This appears to be valid on x86, and would
> constitute an individual repetition accessing two pages.
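To make the concern concrete: when the buffer address is not aligned to
bytes_per_rep, a single repetition can indeed span two pages. A stand-alone
illustration (the address and element size below are made up for the example):

#include <stdio.h>

int main(void)
{
    /* e.g. "rep insl": 4 bytes per repetition into an unaligned buffer */
    unsigned long addr = 0x10ffeUL;       /* hypothetical buffer address */
    unsigned int bytes_per_rep = 4;

    unsigned long first_page = addr >> 12;                       /* 0x10 */
    unsigned long last_page  = (addr + bytes_per_rep - 1) >> 12; /* 0x11 */

    /* The very first repetition covers 0x10ffe..0x11001, i.e. two pages. */
    printf("rep 0 touches pages 0x%lx and 0x%lx -> %s\n",
           first_page, last_page,
           first_page == last_page ? "single page" : "page boundary crossed");
    return 0;
}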

Just go to the place in the code I pointed you to above - that case
is being taken care of afaict.
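
For anyone not wanting to dig through emulate.c: the shape of the check being
referred to is roughly the following. This is a simplified sketch, not the
actual hvmemul_linear_to_phys() (sk_translate() and the exact loop are made up
for illustration); only the clamping of *reps and the "unhandleable" exit when
done == 0 correspond to the code being discussed.

/*
 * Simplified illustrative sketch -- NOT the real hvmemul_linear_to_phys().
 * The rep count is clamped so that no repetition's buffer crosses a page
 * boundary; only if not even the first repetition fits (done == 0) does
 * the function bail with the "unhandleable" code, pushing the caller onto
 * a slower per-repetition path.
 */
enum { SK_OKAY, SK_UNHANDLEABLE };        /* stand-ins for X86EMUL_* codes */
#define SK_PAGE_SHIFT 12                  /* 4k pages, as on x86 */

unsigned long sk_translate(unsigned long linear);   /* hypothetical helper */

static int sk_linear_to_phys(unsigned long addr, unsigned long *paddr,
                             unsigned int bytes_per_rep, unsigned long *reps)
{
    unsigned long done;

    for ( done = 0; done < *reps; done++ )
    {
        unsigned long start = addr + done * bytes_per_rep;  /* df == 0 case */
        unsigned long end = start + bytes_per_rep - 1;

        /* Stop the batch once a repetition would spill onto another page. */
        if ( (start >> SK_PAGE_SHIFT) != (end >> SK_PAGE_SHIFT) )
            break;
    }

    if ( done == 0 )
        return SK_UNHANDLEABLE;           /* not even one rep fits: bail */

    *reps = done;                         /* clamp to what can be handled */
    *paddr = sk_translate(addr);          /* hypothetical translation step */
    return SK_OKAY;
}

The net effect is the guarantee Andrew mentions above: by the time
hvm_complete_assist_req() loops over the repetitions, no individual
repetition's buffer straddles a page boundary, so calling
hvm_copy_to_guest_phys() once per rep is fine.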

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
