
Re: [Xen-devel] [PATCH v1 01/13] Export hypervisor symbols



>>> On 11.09.13 at 16:57, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 09/11/2013 10:12 AM, Jan Beulich wrote:
>>
>>>>> --- a/xen/include/public/platform.h
>>>>> +++ b/xen/include/public/platform.h
>>>>> @@ -527,6 +527,26 @@ struct xenpf_core_parking {
>>>>>    typedef struct xenpf_core_parking xenpf_core_parking_t;
>>>>>    DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
>>>>>    
>>>>> +#define XENPF_get_symbols  61
>>>>> +
>>>>> +#define XENSYMS_SZ         4096
>>>> This doesn't appear to belong in the public interface.
>>> The Linux driver needs to know the size of the buffer that is passed
>>> from the hypervisor. I suppose I can just use PAGE_SIZE.
>> Buffer? Passed from the hypervisor?
> 
> As it is written now, we pass XENSYMS_SZ worth of (formatted) symbol
> information to dom0.

Right, that's what I understood, and that's what I want to avoid.
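
(For context: each line in that buffer is kallsyms-style text of the form
"<address> <type> <name>". The entries below are made up purely for
illustration; they are not taken from the patch or from a real Xen build.)

    ffff82d080100000 T start_xen
    ffff82d080123450 t softirq_handler
    ffff82d080200000 D command_line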

>>>>> +   /*
>>>>> +    * Symbols data, formatted similar to /proc/kallsyms:
>>>>> +    *   <address> <type> <name>
>>>>> +    */
>>>>> +    XEN_GUEST_HANDLE(char) buf;
>>>> This is too simplistic: Please use a proper structure here, to allow
>>>> switching the internal symbol table representation (which I have on
>>>> my todo list) without having to mimic old behavior.
>>> I don't think I know what you are referring to here.
>> Rather than having a handle to a simple byte array, you ought
>> to have a handle to a structure containing address, type, and
>> (pointer to/handle of) name.
>>
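
(For reference, here is a minimal sketch of the kind of per-symbol record
being suggested above. The names and layout are hypothetical, not from the
posted patch, and the guest-handle macros are assumed to come from the
surrounding xen/include/public/platform.h context.)

    /*
     * Hypothetical per-symbol record: passing one of these per symbol
     * keeps the wire format independent of Xen's internal symbol table
     * representation.
     */
    struct xenpf_symdata {
        uint64_t address;             /* symbol address */
        char     type;                /* nm(1)-style type character */
        XEN_GUEST_HANDLE(char) name;  /* NUL-terminated symbol name */
    };
    typedef struct xenpf_symdata xenpf_symdata_t;
    DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);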
> 
> Are you suggesting passing symbols one per hypercall? That's over 4000
> hypercalls for a single file read. How about requesting the next N symbols?

That'd be fine too, but could be achieved almost equally well with
multi-calls.
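
(A hedged sketch of what requesting "the next N symbols" per call might
look like, building on the hypothetical xenpf_symdata record above. Again,
the names and layout are illustrative only, not the interface that was
eventually committed.)

    /*
     * Illustrative batched request: dom0 passes the index of the first
     * symbol it wants and a buffer with room for 'nr' records; Xen fills
     * the buffer and reports how many records it actually wrote, so dom0
     * can simply loop until fewer than 'nr' records come back.
     */
    struct xenpf_get_symbols {
        uint64_t symnum;  /* IN: index of the first symbol to fetch */
        uint32_t nr;      /* IN: capacity of buf, in records;
                             OUT: number of records written */
        XEN_GUEST_HANDLE(xenpf_symdata_t) buf;  /* caller-supplied buffer */
    };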

Jan

