Re: [Xen-devel] [PATCH RFC V7 4/5] xen, libxc: Request page fault injection via libxc
On 08/26/2014 05:44 PM, Jan Beulich wrote:
>>>> On 26.08.14 at 16:24, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> On 08/26/2014 05:13 PM, Jan Beulich wrote:
>>>>>> On 13.08.14 at 17:28, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>>> --- a/xen/include/asm-x86/hvm/domain.h
>>>> +++ b/xen/include/asm-x86/hvm/domain.h
>>>> @@ -141,6 +141,14 @@ struct hvm_domain {
>>>> */
>>>> uint64_t sync_tsc;
>>>>
>>>> + /* Memory introspection page fault injection data. */
>>>> + struct {
>>>> + uint64_t address_space;
>>>> + uint64_t virtual_address;
>>>> + uint32_t errcode;
>>>> + bool_t valid;
>>>> + } fault_info;
>>>
>>> Sorry for noticing this only now, but how can this be a per-domain
>>> thing rather than a per-vCPU one?
>>
>> The requirement for our introspection application has simply been to
>> bring back in a swapped-out page, regardless of what VCPU ends up
>> actually doing it.
>
> But please remember that what you add to the public code base
> shouldn't be tied to specific needs of your application, it should
> be coded in a generally useful way.
Of course, perhaps I should have written "the scenario we're working
with" rather than "the requirement for our application". I'm just trying
to understand all the usual cases for this.
> Furthermore, how would this work if you have 2 vCPU-s hit such
> a condition, and you need to bring in 2 pages in parallel?
Since this all happens in the context of processing mem_events, two
VCPUs can't end up needing this in parallel: mem_events are processed
sequentially. A VCPU has to put a mem_event in the ring buffer and
pause before this hypercall can be issued from userspace.
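
For clarity, here's a rough sketch of that serialized flow from the
userspace side. The ring helpers (get_request(), put_response(),
page_is_swapped_out(), guest_cr3()) and the exact
xc_domain_set_pagefault_info() signature are illustrative assumptions,
loosely modelled on tools/tests/xen-access/xen-access.c, not the
literal API added by this series:

#include <string.h>
#include <xenctrl.h>
#include <xen/mem_event.h>

/* Provided by the application's ring handling code (cf. xen-access.c). */
extern int get_request(mem_event_request_t *req);
extern void put_response(mem_event_response_t *rsp);
extern int page_is_swapped_out(const mem_event_request_t *req);
extern uint64_t guest_cr3(const mem_event_request_t *req);

static void process_events(xc_interface *xch, domid_t domid)
{
    mem_event_request_t req;
    mem_event_response_t rsp;

    /*
     * Requests are consumed one at a time, and the vCPU that raised
     * the current one stays paused until we post a response - so a
     * second injection request cannot race with the one below.
     */
    while ( get_request(&req) )
    {
        if ( page_is_swapped_out(&req) )
        {
            /*
             * Ask the hypervisor to inject #PF at the next suitable
             * point so the guest OS pages the address back in.  The
             * arguments mirror the fault_info fields quoted above:
             * address_space (guest CR3), virtual_address, errcode.
             */
            xc_domain_set_pagefault_info(xch, domid,
                                         guest_cr3(&req), /* address_space */
                                         req.gla,         /* virtual_address */
                                         0x2);            /* errcode: write */
        }

        memset(&rsp, 0, sizeof(rsp));
        rsp.vcpu_id = req.vcpu_id;
        rsp.flags = req.flags;   /* echoes MEM_EVENT_FLAG_VCPU_PAUSED */
        put_response(&rsp);      /* lets the vCPU be unpaused */
    }
}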
Thanks,
Razvan Cojocaru
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel