[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [v4][PATCH 11/19] tools: introduce some new parameters to set rdm policy



On 07/06/2015 03:21 PM, Chen, Tiejun wrote:
>>>> This way of doing things is different from the way we do it with most
>>>> other options relating to PCI devices (e.g., pci_permissive,
>>>> pci_msitranslate, pci_seize, &c).  All of those options use a "default"
>>>> semantic: the domain-wide setting takes effect only if it's not set
>>>> locally.  If the syntax looks the same but the semantics are different,
>>>> many people will be confused.  If we're going to have the domain-wide
>>>> policy override the per-device policy, then the naming should make that
>>>> clear; for instance, "override=(strict|relaxed|none)", or
>>>> "strict_override=(1|0)".
>>>
>>> Jan,
>>>
>>> What about this?
>>>
>>> This is involving our policy so please take a look at this as well.
>>
>> I don't think the way things get expressed in the domain config
>> directly relates to what the policy is. How to best express things
>> in the config I'd really like to leave to the tools maintainers.
> 
> Do you remember that the current definitions came out of our previous
> discussion?  From force/try to strict/relaxed ...  You've always been
> heavily involved, so we'd better listen to what you have to say at this
> point.
> 
>>
>>> George,
>>>
>>> Actually, we don't mean that the domain-wide policy always overrides
>>> the per-device policy, or that the per-device policy always overrides
>>> the domain-wide policy. Here we just give "strict" the highest
>>> priority when the two conflict. As I said previously, I may not be
>>> able to answer this entirely correctly, but one reason I can recall
>>> is that different devices can share one RMRR entry, so it's possible
>>> that those two or more per-device policies are not the same. That's
>>> why we need this particular rule, which is not the same as the usual
>>> one. So I still prefer to keep our original implementation.
>>>
>>> If I'm missing something or I'm wrong, Jan, please correct me.
>>
>> I don't think I fully understand what you're trying to describe above;
>> instead, I think the global vs. per-device settings should very much
>> behave just like the others (i.e. fall back to global if there is no per-
> 
> If there's no explicit per-device setting in the .cfg, the per-device
> policy always has its own default setting, right?
> 
>> device setting). Furthermore, didn't we settle on not allowing
> 
> Let's make this clear.
> 
> Our current implementation is as I described in the patch description:
> 
> The default per-device RDM policy is 'strict', while the default global
> RDM policy is 'relaxed'. When both policies are specified for a given
> region, 'strict' is always preferred.
> 
> Any concerns with this? Or should we let the per-device policy override
> the per-domain policy like the others?
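To make the proposed semantics concrete, here is a minimal sketch of the
"strict is always preferred" merge rule described above. The helper name is
hypothetical and this is not actual libxl code, just an illustration of the
rule as stated:

```python
# Hypothetical illustration of the "strict wins" rule described in the
# thread -- NOT actual libxl code. Policies are "strict" or "relaxed".
def resolve_rdm_policy(global_policy, device_policy):
    """Effective policy for an RMRR region covered by both settings.

    If either the domain-wide or the per-device policy asks for
    "strict", the region is handled strictly; only when both sides
    are "relaxed" does the region get the relaxed treatment.
    """
    if "strict" in (global_policy, device_policy):
        return "strict"
    return "relaxed"
```

Under this rule, `resolve_rdm_policy("relaxed", "strict")` and
`resolve_rdm_policy("strict", "relaxed")` both yield "strict", which is why
a per-device "relaxed" cannot override a domain-wide "strict" here, unlike
with the other pci options.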

It sounds like part of the problem here is that we're talking about
different layers.

Jan cares mostly about what happens in the hypervisor.  At the
hypervisor level, there are only per-device configurations, and he is
keen that RMRRs be "strict" by default, unless there is an explicit flag
to relax that.  (I agree with this, FWIW.)

What we've been arguing about is the xl layer -- what settings should
xl/libxl give to the hypervisor, based on what's in the domain config?

It sounds like Jan doesn't care a great deal about it, and in any case
would defer to the tools maintainers, but that if asked for his advice
he would say that the configuration in xl.cfg should act like all the
other pci device configurations: that you have a domain-wide default
that can be overridden in the per-device setting.

I.e.:
---
rdm='reserve=strict'
pci=[ '02:0.0', '01:1.1,rdm_reserve=relaxed' ]
---
Would pass "strict" for the first device, and "relaxed" for the second.
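For contrast with the "strict wins" rule earlier in the thread, the
conventional "default with per-device override" semantics George describes
can be sketched as follows. Again, the helper name is hypothetical and this
is not actual libxl code:

```python
# Hypothetical illustration of the conventional semantics used by the
# other pci options -- NOT actual libxl code.
def effective_policy(domain_default, device_policy=None):
    """Per-device setting wins if present; otherwise use the domain-wide
    default, mirroring pci_permissive/pci_msitranslate behaviour."""
    return device_policy if device_policy is not None else domain_default
```

Mirroring the xl.cfg example above: the first device has no per-device
setting, so it inherits "strict" from `rdm='reserve=strict'`; the second
device's explicit `rdm_reserve=relaxed` overrides that default.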

Do I understand you both properly, Jan / Tiejun?

 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

