
Re: [PATCH v5 06/11] x86/hvm: processor trace interface in HVM



----- On 6 Jul 2020, at 10:42, Roger Pau Monné roger.pau@xxxxxxxxxx wrote:

> On Sun, Jul 05, 2020 at 08:54:59PM +0200, Michał Leszczyński wrote:
>> From: Michal Leszczynski <michal.leszczynski@xxxxxxx>
>> 
>> Implement necessary changes in common code/HVM to support
>> processor trace features. Define vmtrace_pt_* API and
>> implement trace buffer allocation/deallocation in common
>> code.
>> 
>> Signed-off-by: Michal Leszczynski <michal.leszczynski@xxxxxxx>
>> ---
>>  xen/arch/x86/domain.c         | 19 +++++++++++++++++++
>>  xen/common/domain.c           | 19 +++++++++++++++++++
>>  xen/include/asm-x86/hvm/hvm.h | 20 ++++++++++++++++++++
>>  xen/include/xen/sched.h       |  4 ++++
>>  4 files changed, 62 insertions(+)
>> 
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index fee6c3931a..79c9794408 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -2199,6 +2199,25 @@ int domain_relinquish_resources(struct domain *d)
>>                  altp2m_vcpu_disable_ve(v);
>>          }
>>  
>> +        for_each_vcpu ( d, v )
>> +        {
>> +            unsigned int i;
>> +
>> +            if ( !v->vmtrace.pt_buf )
>> +                continue;
>> +
>> +            for ( i = 0; i < (v->domain->vmtrace_pt_size >> PAGE_SHIFT); i++ )
>> +            {
>> +                struct page_info *pg = mfn_to_page(
>> +                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
>> +                if ( (pg->count_info & PGC_count_mask) != 1 )
>> +                    return -EBUSY;
>> +            }
>> +
>> +            free_domheap_pages(v->vmtrace.pt_buf,
>> +                get_order_from_bytes(v->domain->vmtrace_pt_size));
> 
> This is racy as a control domain could take a reference between the
> check and the freeing.
> 
>> +        }
>> +
>>          if ( is_pv_domain(d) )
>>          {
>>              for_each_vcpu ( d, v )
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 25d3359c5b..f480c4e033 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -137,6 +137,21 @@ static void vcpu_destroy(struct vcpu *v)
>>      free_vcpu_struct(v);
>>  }
>>  
>> +static int vmtrace_alloc_buffers(struct vcpu *v)
>> +{
>> +    struct page_info *pg;
>> +    uint64_t size = v->domain->vmtrace_pt_size;
>> +
>> +    pg = alloc_domheap_pages(v->domain, get_order_from_bytes(size),
>> +                             MEMF_no_refcount);
>> +
>> +    if ( !pg )
>> +        return -ENOMEM;
>> +
>> +    v->vmtrace.pt_buf = pg;
>> +    return 0;
>> +}
> 
> I think we already agreed that you would use the same model as ioreq
> servers, where a reference is taken on allocation and then the pages
> are not explicitly freed on domain destruction and put_page_and_type
> is used. Is there some reason why that model doesn't work in this
> case?
> 
> If not, please see hvm_alloc_ioreq_mfn and hvm_free_ioreq_mfn.
> 
> Roger.


Ok, I've got it, will do. Thanks for pointing out the examples.


One thing that is still confusing to me is the meaning of the
MEMF_no_refcount flag.

In hvm_{alloc,free}_ioreq_mfn the memory is allocated explicitly
but freed just by dropping the reference, so I guess the page is
automatically freed once its refcount drops to 0? If so, why is
the flag named "no refcount"?


Best regards,
Michał Leszczyński
CERT Polska



 

