
Re: [Xen-devel] [PATCH v3 05/13] x86/altp2m: basic data structures and support routines.



>>> On 06.07.15 at 18:40, <edmund.h.white@xxxxxxxxx> wrote:
> On 07/03/2015 09:22 AM, Andrew Cooper wrote:
>> On 01/07/15 19:09, Ed White wrote:
>>> Add the basic data structures needed to support alternate p2m's and
>>> the functions to initialise them and tear them down.
>>>
>>> Although Intel hardware can handle 512 EPTP's per hardware thread
>>> concurrently, only 10 per domain are supported in this patch for
>>> performance reasons.
>>>
>>> The iterator in hap_enable() does need to handle 512, so that is now
>>> uint16_t.
>>>
>>> This change also splits the p2m lock into one lock type for altp2m's
>>> and another type for all other p2m's. The purpose of this is to place
>>> the altp2m list lock between the types, so the list lock can be
>>> acquired whilst holding the host p2m lock.
>>>
>>> Signed-off-by: Ed White <edmund.h.white@xxxxxxxxx>
>> 
>> Only some style issues.  Otherwise, Reviewed-by: Andrew Cooper
>> <andrew.cooper3@xxxxxxxxxx>
>> 
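For reference, the limits described in the patch description map onto the MAX_EPTP and MAX_ALTP2M constants used in the hunks below. The definitions here are only a sketch reconstructed from that description (512 EPTPs per hardware thread, 10 altp2m's per domain), not the patch's actual header change:

    #define MAX_EPTP    512  /* EPTP entries the hardware can track per thread */
    #define MAX_ALTP2M  10   /* per-domain limit, kept small for performance */
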
>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>> index 6faf3f4..f21d34d 100644
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -6502,6 +6504,25 @@ enum hvm_intblk nhvm_interrupt_blocked(struct vcpu *v)
>>>      return hvm_funcs.nhvm_intr_blocked(v);
>>>  }
>>>  
>>> +void ap2m_vcpu_update_eptp(struct vcpu *v)
>>> +{
>>> +    if (hvm_funcs.ap2m_vcpu_update_eptp)
>> 
>> spaces inside brackets
>> 
>>> +        hvm_funcs.ap2m_vcpu_update_eptp(v);
>>> +}
>>> +
>>> +void ap2m_vcpu_update_vmfunc_ve(struct vcpu *v)
>>> +{
>>> +    if (hvm_funcs.ap2m_vcpu_update_vmfunc_ve)
>> 
>> spaces inside brackets
>> 
>>> +        hvm_funcs.ap2m_vcpu_update_vmfunc_ve(v);
>>> +}
>>> +
>>> +bool_t ap2m_vcpu_emulate_ve(struct vcpu *v)
>>> +{
>>> +    if (hvm_funcs.ap2m_vcpu_emulate_ve)
>> 
>> spaces inside brackets
>> 
>>> +        return hvm_funcs.ap2m_vcpu_emulate_ve(v);
>>> +    return 0;
>>> +}
>>> +
>>>  /*
>>>   * Local variables:
>>>   * mode: C
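For clarity, this is how the three wrappers read once the "spaces inside brackets" comments above are addressed; just a sketch of the requested style fix, not the final committed code:

    void ap2m_vcpu_update_eptp(struct vcpu *v)
    {
        /* Call the vendor hook only if the backend provides one. */
        if ( hvm_funcs.ap2m_vcpu_update_eptp )
            hvm_funcs.ap2m_vcpu_update_eptp(v);
    }

    void ap2m_vcpu_update_vmfunc_ve(struct vcpu *v)
    {
        if ( hvm_funcs.ap2m_vcpu_update_vmfunc_ve )
            hvm_funcs.ap2m_vcpu_update_vmfunc_ve(v);
    }

    bool_t ap2m_vcpu_emulate_ve(struct vcpu *v)
    {
        /* Report whether the #VE was emulated; 0 if there is no hook. */
        if ( hvm_funcs.ap2m_vcpu_emulate_ve )
            return hvm_funcs.ap2m_vcpu_emulate_ve(v);
        return 0;
    }
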
>>> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
>>> index d0d3f1e..c00282c 100644
>>> --- a/xen/arch/x86/mm/hap/hap.c
>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>> @@ -459,7 +459,7 @@ void hap_domain_init(struct domain *d)
>>>  int hap_enable(struct domain *d, u32 mode)
>>>  {
>>>      unsigned int old_pages;
>>> -    uint8_t i;
>>> +    uint16_t i;
>>>      int rv = 0;
>>>  
>>>      domain_pause(d);
>>> @@ -498,6 +498,24 @@ int hap_enable(struct domain *d, u32 mode)
>>>             goto out;
>>>      }
>>>  
>>> +    /* Init alternate p2m data */
>>> +    if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
>>> +    {
>>> +        rv = -ENOMEM;
>>> +        goto out;
>>> +    }
>>> +
>>> +    for (i = 0; i < MAX_EPTP; i++)
>>> +        d->arch.altp2m_eptp[i] = INVALID_MFN;
>>> +
>>> +    for (i = 0; i < MAX_ALTP2M; i++) {
>> 
>> braces
>> 
>>> +        rv = p2m_alloc_table(d->arch.altp2m_p2m[i]);
>>> +        if ( rv != 0 )
>>> +           goto out;
>>> +    }
>>> +
>>> +    d->arch.altp2m_active = 0;
>>> +
>>>      /* Now let other users see the new mode */
>>>      d->arch.paging.mode = mode | PG_HAP_enable;
>>>  
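A sketch of the same altp2m initialisation in hap_enable() with the style comments applied (spaces inside the for() parentheses, braces on their own lines); illustrative only:

    /* Init alternate p2m data. */
    if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
    {
        rv = -ENOMEM;
        goto out;
    }

    /* No alternate view is active yet: mark every EPTP slot invalid. */
    for ( i = 0; i < MAX_EPTP; i++ )
        d->arch.altp2m_eptp[i] = INVALID_MFN;

    /* Allocate the page tables backing each alternate p2m. */
    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        rv = p2m_alloc_table(d->arch.altp2m_p2m[i]);
        if ( rv != 0 )
            goto out;
    }

    d->arch.altp2m_active = 0;
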
>>> @@ -510,6 +528,17 @@ void hap_final_teardown(struct domain *d)
>>>  {
>>>      uint8_t i;
>>>  
>>> +    d->arch.altp2m_active = 0;
>>> +
>>> +    if ( d->arch.altp2m_eptp ) {
>> 
>> braces
>> 
>>> +        free_xenheap_page(d->arch.altp2m_eptp);
>>> +        d->arch.altp2m_eptp = NULL;
>>> +    }
>>> +
>>> +    for (i = 0; i < MAX_ALTP2M; i++) {
>> 
>> braces
>> 
>>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
>>> index 1fd1194..58d4951 100644
>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -35,6 +35,7 @@
>>>  #include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
>>>  #include <asm/mem_sharing.h>
>>>  #include <asm/hvm/nestedhvm.h>
>>> +#include <asm/hvm/altp2m.h>
>>>  #include <asm/hvm/svm/amd-iommu-proto.h>
>>>  #include <xsm/xsm.h>
>>>  
>>> @@ -183,6 +184,43 @@ static void p2m_teardown_nestedp2m(struct domain *d)
>>>      }
>>>  }
>>>  
>>> +static void p2m_teardown_altp2m(struct domain *d)
>>> +{
>>> +    uint8_t i;
>> 
>> A plain unsigned int would suffice here.  It also looks curious that you
>> use uint16_t for the same iteration elsewhere.
>> 
>>> +    struct p2m_domain *p2m;
>>> +
>>> +    for (i = 0; i < MAX_ALTP2M; i++)
>> 
>> spaces inside brackets
>> 
>>> +    {
>>> +        if ( !d->arch.altp2m_p2m[i] )
>>> +            continue;
>>> +        p2m = d->arch.altp2m_p2m[i];
>>> +        p2m_free_one(p2m);
>>> +        d->arch.altp2m_p2m[i] = NULL;
>>> +    }
>>> +}
>>> +
>>> +static int p2m_init_altp2m(struct domain *d)
>>> +{
>>> +    uint8_t i;
>>> +    struct p2m_domain *p2m;
>>> +
>>> +    mm_lock_init(&d->arch.altp2m_lock);
>>> +    for (i = 0; i < MAX_ALTP2M; i++)
>> 
>> spaces inside brackets
>> 
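Putting the comments above together, the teardown helper would read roughly as follows (plain unsigned int iterator, Xen-style for() spacing); again a sketch of the requested changes rather than the committed version:

    static void p2m_teardown_altp2m(struct domain *d)
    {
        unsigned int i;
        struct p2m_domain *p2m;

        for ( i = 0; i < MAX_ALTP2M; i++ )
        {
            /* Skip slots whose alternate p2m was never allocated. */
            if ( !d->arch.altp2m_p2m[i] )
                continue;
            p2m = d->arch.altp2m_p2m[i];
            p2m_free_one(p2m);
            d->arch.altp2m_p2m[i] = NULL;
        }
    }
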
> 
> In every case, this is because I wrote the code to conform with the style
> of the surrounding code. I'll fix them all, but I think the maintainers
> need to be clear about which is more important -- following the coding
> style or following the style of the surrounding code.

Okay, I agree that there are bad examples in the areas you change.
Nevertheless, looking at the files you modify as a whole, it is clear
that they're intended to conform fully to Xen coding style. In a
number of cases you even mix styles yourself (e.g. putting blanks
inside the outer parentheses of if() but not doing so for for()).
This simply makes no sense. Furthermore, it is quite intriguing that
all the violations I spotted while checking are in nested code, i.e.
clearly sloppiness of submitter, reviewer(s), and committer. Bottom
line: this is a clear case of wrongly using malformed code as an
excuse to add further malformed code.
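
Concretely, the consistent form of the two constructs looks like this (hypothetical bodies, shown only to illustrate the convention):

    if ( d->arch.altp2m_eptp )
    {
        /* ... blanks inside the outer parentheses of if() ... */
    }

    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        /* ... and the same inside for() ... */
    }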

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel