
Re: [PATCH v5 06/11] x86/hvm: processor trace interface in HVM



----- On 6 Jul 2020, at 10:31, Jan Beulich jbeulich@xxxxxxxx wrote:

> On 05.07.2020 21:11, Michał Leszczyński wrote:
>> ----- On 5 Jul 2020, at 20:54, Michał Leszczyński michal.leszczynski@xxxxxxx
>> wrote:
>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -2199,6 +2199,25 @@ int domain_relinquish_resources(struct domain *d)
>>>                 altp2m_vcpu_disable_ve(v);
>>>         }
>>>
>>> +        for_each_vcpu ( d, v )
>>> +        {
>>> +            unsigned int i;
>>> +
>>> +            if ( !v->vmtrace.pt_buf )
>>> +                continue;
>>> +
>>> +            for ( i = 0; i < (v->domain->vmtrace_pt_size >> PAGE_SHIFT); i++ )
>>> +            {
>>> +                struct page_info *pg = mfn_to_page(
>>> +                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
>>> +                if ( (pg->count_info & PGC_count_mask) != 1 )
>>> +                    return -EBUSY;
>>> +            }
>>> +
>>> +            free_domheap_pages(v->vmtrace.pt_buf,
>>> +                get_order_from_bytes(v->domain->vmtrace_pt_size));
>> 
>> 
>> While this works, I don't feel that a loop returning -EBUSY here is a good
>> solution. I would like to kindly ask for suggestions on how to handle this
>> better.
> 
> I'm sorry to ask, but with the previously given suggestions to mirror
> existing code, why do you still need to play with this function? You
> really shouldn't have a need to, just like e.g. the ioreq server page
> handling code didn't.
> 
> Jan


Ok, sorry. I think I've finally got it after Roger's latest suggestions :P
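
To make sure I've got it right: the buffer would be allocated the same way as
the ioreq server pages, i.e. with MEMF_no_refcount and an explicit writable
type reference taken on every page, instead of a plain allocation that then
has to be special-cased at teardown. Below is only a rough, untested sketch of
what I have in mind (the helper name is just for illustration; pt_buf and
vmtrace_pt_size are the fields from this series):

static int vmtrace_alloc_buffer(struct vcpu *v)
{
    struct domain *d = v->domain;
    struct page_info *pg;
    unsigned int i;

    if ( !d->vmtrace_pt_size )
        return 0;

    /*
     * Like hvm_alloc_ioreq_mfn(): the pages are assigned to the domain,
     * but only carry their allocation reference at this point.
     */
    pg = alloc_domheap_pages(d, get_order_from_bytes(d->vmtrace_pt_size),
                             MEMF_no_refcount);
    if ( !pg )
        return -ENOMEM;

    for ( i = 0; i < (d->vmtrace_pt_size >> PAGE_SHIFT); i++ )
        if ( !get_page_and_type(&pg[i], d, PGT_writable_page) )
        {
            /*
             * The domain cannot know about these pages yet, so a failure
             * here indicates something is seriously wrong (same reasoning
             * as in hvm_alloc_ioreq_mfn()).
             */
            domain_crash(d);
            return -ENODATA;
        }

    v->vmtrace.pt_buf = pg;

    return 0;
}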

This will be fixed in the next version.
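
In particular, the freeing would then live in the regular vCPU teardown path
rather than in domain_relinquish_resources(): a page that is still
foreign-mapped keeps its extra reference, so it is simply returned to the
allocator later, once that last reference is dropped, and the -EBUSY loop
above becomes unnecessary. Again only a rough sketch (helper name for
illustration):

static void vmtrace_free_buffer(struct vcpu *v)
{
    const struct domain *d = v->domain;
    struct page_info *pg = v->vmtrace.pt_buf;
    unsigned int i;

    if ( !pg )
        return;

    v->vmtrace.pt_buf = NULL;

    for ( i = 0; i < (d->vmtrace_pt_size >> PAGE_SHIFT); i++ )
    {
        /* Drop the allocation ref and the writable type ref we took. */
        put_page_alloc_ref(&pg[i]);
        put_page_and_type(&pg[i]);
    }
}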


Best regards,
Michał Leszczyński
CERT Polska



 

