
Re: [Xen-devel] [PATCH] xen/pciback: Update data filter intersection logic.



>>> On 20.06.16 at 15:36, <boris.ostrovsky@xxxxxxxxxx> wrote:

> 
> On 06/20/2016 09:30 AM, Jan Beulich wrote:
>>>>> On 20.06.16 at 14:58, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> On 06/20/2016 04:56 AM, Jan Beulich wrote:
>>>>>>> On 20.06.16 at 00:03, <andrey2805@xxxxxxxxx> wrote:
>>>>> Follow up on 
>>>>> http://www.gossamer-threads.com/lists/xen/devel/436000#436000 
>>>>> Using http://eli.thegreenplace.net/2008/08/15/intersection-of-1d-segments 
>>>>> as
>>>>> reference.
>>>>>
>>>>>       New value
>>>>>   |---------------|
>>>>>
>>>>> f1                              f5
>>>>> |---|                         |-----|
>>>>>         f2              f4
>>>>>       |-----|    f3   |-----|
>>>>>       |-----|
>>>>>
>>>>> Given a new value of the type above, the current logic will not
>>>>> allow applying the parts of the new value that overlap with the f3
>>>>> filter, only those overlapping f2 and f4.
>>>> It remains unclear what f1...f5 are meant to represent here.
>>>>
>>>>> This change allows all three types of overlap to be included.
>>>>> More specifically, when passing through an Industrial Ethernet
>>>>> interface (Hilscher GmbH CIFX 50E-DP(M/S)) to an HVM DomU running
>>>>> on the Xen 4.6 hypervisor, it allows the LATENCY_TIMER field to be
>>>>> restored, given that a quirk allowing read/write access to that
>>>>> field is already in place. The device driver writes the entire
>>>>> config space in 4-byte chunks, so LATENCY_TIMER and CACHE_LINE_SIZE
>>>>> arrive together in one call to xen_pcibk_config_write().
>>>> Tweaks to make individual devices work are dubious. Any
>>>> explanation should outline why current behavior is _generally_
>>>> wrong. In particular with the original issue being with the
>>>> Latency Timer field, and with that field now being allowed to
>>>> be written by its entry in header_common[], ...
>>>>
>>>>> --- a/drivers/xen/xen-pciback/conf_space.c
>>>>> +++ b/drivers/xen/xen-pciback/conf_space.c
>>>>> @@ -230,8 +230,7 @@ int xen_pcibk_config_write(struct pci_dev *dev, int offset, int size, u32 value)
>>>>>                   field_start = OFFSET(cfg_entry);
>>>>>                   field_end = OFFSET(cfg_entry) + field->size;
>>>>>    
>>>>> -         if ((req_start >= field_start && req_start < field_end)
>>>>> -             || (req_end > field_start && req_end <= field_end)) {
>>>>> +          if (req_end >= field_start || field_end >= req_start) {
>>>>>                           tmp_val = 0;
>>>> ... any change to the logic here which results in writes to the field
>>>> getting issued would seem wrong without even looking at the
>>>> particular nature of the field.
>>>>
>>>> As to the actual change - the two _end variables hold values
>>>> pointing _past_ the region of interest, so the use of >= seems
>>>> wrong here (ought to be >). And in the end we're talking of a
>>>> classical overlap check here, which indeed can be simplified (to
>>>> just two comparisons), but such simplification mustn't result in a
>>>> change of behavior. (Such a simplification would end up being
>>>>
>>>>            if (req_end > field_start && field_end > req_start) {
>>>>
>>>> afaict - note the || instead of && used in your change.)
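
For illustration only, a minimal stand-alone sketch of the half-open
overlap check described above; the variable names mirror the driver
code, but this is not the pciback implementation itself:

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Half-open intervals: each *_end points one byte past the region,
     * exactly as req_end and field_end do in xen_pcibk_config_write().
     */
    static bool overlaps(int req_start, int req_end,
                         int field_start, int field_end)
    {
            return req_end > field_start && field_end > req_start;
    }

    int main(void)
    {
            /* 4-byte write at offset 4 vs. a 2-byte field at offset 8. */
            printf("%d\n", overlaps(4, 8, 8, 10)); /* 0: adjacent, no overlap */
            /* 4-byte write at offset 4 vs. a 2-byte field at offset 6. */
            printf("%d\n", overlaps(4, 8, 6, 8));  /* 1: genuine overlap */
            /*
             * The patch's "req_end >= field_start || field_end >= req_start"
             * reports an overlap in the first case as well, i.e. it matches
             * fields the write does not actually touch.
             */
            return 0;
    }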
>>> Will setting header_common[]'s PCI_CACHE_LINE_SIZE size field to 2 (and
>>> adding a proper comment) solve this problem?
>> How would that work? We mean to not allow writes, or else we
>> could simply add a .u.b.write handler for PCI_LATENCY_TIMER.
> 
> I thought you suggested (in another thread) to make PCI_LATENCY_TIMER 
> writable, just like PCI_CACHE_LINE_SIZE?

Well, I did put this up for discussion (as much as I questioned
whether Cache Line Size should perhaps not be writable). And if
we wanted to make it writable, then we should do so the ordinary
way, not via some hacks. (And looking at it again I don't even
understand why Cache Line Size has an entry in header_common[]
- identical behavior would result with the entry dropped afaict.)

Jan
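
As an aside, the situation the submitter describes is easy to see from
the standard PCI header layout: Cache Line Size lives at offset 0x0c
and Latency Timer at 0x0d, so a driver restoring the header in 4-byte
chunks issues a single write at 0x0c that covers both one-byte fields.
A small sketch (using a hypothetical field table, not the real
header_common[] contents) of how such a request intersects per-byte
fields:

    #include <stdio.h>

    /* Hypothetical one-byte fields at their standard PCI header offsets. */
    struct field { const char *name; int offset; int size; };

    static const struct field fields[] = {
            { "CACHE_LINE_SIZE", 0x0c, 1 },
            { "LATENCY_TIMER",   0x0d, 1 },
            { "HEADER_TYPE",     0x0e, 1 },
            { "BIST",            0x0f, 1 },
    };

    int main(void)
    {
            /* One 4-byte chunk of the driver's config-space restore. */
            int req_start = 0x0c, req_end = req_start + 4;
            unsigned int i;

            for (i = 0; i < sizeof(fields) / sizeof(fields[0]); i++) {
                    int field_start = fields[i].offset;
                    int field_end = field_start + fields[i].size;

                    /* Same half-open overlap test as in the sketch above. */
                    if (req_end > field_start && field_end > req_start)
                            printf("write touches %s\n", fields[i].name);
            }
            return 0;
    }

This prints all four field names, i.e. a single call to
xen_pcibk_config_write() is processed field by field across every
entry the chunk covers.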

