
Re: [Xen-devel] [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT



On 12/04/15 03:59, Paul Durrant wrote:
>> -----Original Message-----
>> From: Don Slutz [mailto:don.slutz@xxxxxxxxx]
>> Sent: 04 December 2015 00:23
>> To: Paul Durrant; xen-devel@xxxxxxxxxxxxx
>> Cc: Jun Nakajima; Wei Liu; Kevin Tian; Keir (Xen.org); Ian Campbell; George
>> Dunlap; Andrew Cooper; Stefano Stabellini; Eddie Dong; Don Slutz; Tim
>> (Xen.org); Aravind Gopalakrishnan; Jan Beulich; Suravee Suthikulpanit; Boris
>> Ostrovsky; Ian Jackson
>> Subject: Re: [Xen-devel] [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
>>
>> On 12/01/15 06:28, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
>>>> bounces@xxxxxxxxxxxxx] On Behalf Of Don Slutz
>>>> Sent: 28 November 2015 21:45
>>>> To: xen-devel@xxxxxxxxxxxxx
>>>> Cc: Jun Nakajima; Wei Liu; Kevin Tian; Keir (Xen.org); Ian Campbell;
>> George
>>>> Dunlap; Andrew Cooper; Stefano Stabellini; Eddie Dong; Don Slutz; Don
>> Slutz;
>>>> Tim (Xen.org); Aravind Gopalakrishnan; Jan Beulich; Suravee
>> Suthikulpanit;
>>>> Boris Ostrovsky; Ian Jackson
>>>> Subject: [Xen-devel] [PATCH v13 7/8] Add IOREQ_TYPE_VMWARE_PORT
>>>>
>>>> From: Don Slutz <dslutz@xxxxxxxxxxx>
>>>>
>> ...
>>>>
>>>>          /* Verify the emulation request has been correctly re-issued */
>>>> -        if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
>>>> +        if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : is_vmware ?
>>>> IOREQ_TYPE_VMWARE_PORT : IOREQ_TYPE_PIO)) ||
>>>
>>> is_vmware already incorporated !is_mmio so there's a redundant
>>> check in that expression. The extra test also makes it look
>>> pretty ugly... probably better re-factored into an if
>>> statement.
>>>
>>
>> OK, will add a variable that is set via an if statement.  Thinking about:
>>
>>       case STATE_IORESP_READY:
>> +    {
>> +        uint8_t calc_type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO;
>> +
>> +        if ( is_vmware )
>> +            calc_type = IOREQ_TYPE_VMWARE_PORT;
>> +
>>          vio->io_req.state = STATE_IOREQ_NONE;
>>          p = vio->io_req;
>>
>>          /* Verify the emulation request has been correctly re-issued */
>> -        if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
>> +        if ( (p.type != calc_type) ||
>>
>>
>>
>>
>>>>               (p.addr != addr) ||
>>>>               (p.size != size) ||
>>>>               (p.count != reps) ||
>> ...
>>>> +
>>>> +            p.type = IOREQ_TYPE_VMWARE_PORT;
>>>> +            vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
>>>
>>> This could be done in a single statement.
>>>
>>
>> Ok.
>>
>>  p.type = vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
>>
>> or
>>
>>  vio->io_req.type = p.type = IOREQ_TYPE_VMWARE_PORT;
>>
>> Which is clearer to you?
> 
> I think that the former is probably better.
> 

Will use this.

>>
>>>> +            s = hvm_select_ioreq_server(curr->domain, &p);
>> ...
>>>>
>>>>      if ( rc )
>>>> -        hvm_unmap_ioreq_page(s, 0);
>>>> +    {
>>>> +        hvm_unmap_ioreq_page(s, IOREQ_PAGE_TYPE_IOREQ);
>>>> +        return rc;
>>>> +    }
>>>> +
>>>> +    rc = hvm_map_ioreq_page(s, IOREQ_PAGE_TYPE_VMPORT,
>>>> vmport_ioreq_pfn);
>>>
>>> Is every ioreq server going to have one of these? It doesn't look
>>> like it, so should you not have validity check on the pfn?
>>>
>>
>>
>> Currently the default is that all ioreq servers get the mapping:
>>
> 
> That's probably a bit wasteful. It should probably be
> selectable in the create HVM op.

Since the most common case is QEMU, which can use it since upstream
version 2.2.0, the waste is very small.  If a non-QEMU ioreq server does
not want it, it adds zero overhead.  The only case I know of (which is
PVH related) is if QEMU is not running and you add a non-QEMU ioreq
server that does not use it; then you get 1 page plus page table entries.

While the following exists:

#define HVM_IOREQSRV_BUFIOREQ_OFF    0
#define HVM_IOREQSRV_BUFIOREQ_LEGACY 1
/*
 * Use this when read_pointer gets updated atomically and
 * the pointer pair gets read atomically:
 */
#define HVM_IOREQSRV_BUFIOREQ_ATOMIC 2
    uint8_t handle_bufioreq; /* IN - should server handle buffered ioreqs */

I see no tests that use these.  Also, adding vmport enable/disable to
handle_bufioreq is much more of a hack than I expect to pass a code
review.

It does not look simple to add a new argument, but that could just be
my lack of understanding of the area.

> I don't know whether you'd
> even need it there in the default server since I guess the QEMU
> end of things post-dates the use of the HVM op (rather than the
> old param).
> 

Not sure how to parse "post-dates".  Here are some tables about this
that I know of:


Xen     Supports create_ioreq_server     handle_bufioreq
4.5     yes                              0 or !0
4.6     yes                              0 or !0
4.7     yes                              0 or !0

Upstream QEMU    vmport support    is_default    atomic
2.2.x            yes               yes           n/a
2.3.x            yes               no            no
2.4.x            yes               no            no
2.5.x            yes               no            yes

Xen QEMU    vmport support    is_default    atomic
4.5.x       no                yes           n/a
4.6.x       yes               no            yes
4.7.x       yes               no            yes

With a "Xen only" build, you will not get a QEMU that is a default
ioreq server.  However, it looks possible to mix Xen and QEMU versions
and get this case.

So unless I hear otherwise, I will not be making a change here.

>> +        /* VMware port */
>> +        if ( i == HVMOP_IO_RANGE_VMWARE_PORT &&
>> +            s->domain->arch.hvm_domain.is_vmware_port_enabled )
>> +            rc = rangeset_add_range(s->range[i], 1, 1);
>>
>>
>>
>> but you are right that a check on is_vmware_port_enabled should be
>> added.  Will do.
> 
> Cool. Thanks,
> 
>   Paul
> 
>>
>>    -Don Slutz
>>
>>>   Paul
>>>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel