Re: [Xen-devel] [PATCH RFC 2/4] x86/mem_access: mem_access and mem_event changes to support PV domains
>> @@ -65,12 +91,27 @@ int mem_access_memop(unsigned long cmd,
>> case XENMEM_access_op_set_access:
>> {
>> unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
>> + unsigned long pfn = mao.pfn;
>>
>> rc = -EINVAL;
>> - if ( (mao.pfn != ~0ull) &&
>> + if ( !domain_valid_for_mem_access(d) )
>> + break;
>> +
>> + if ( unlikely(is_pv_domain(d) && pfn != ~0ull) )
>> + pfn = get_gpfn_from_mfn(mao.pfn);
>> +
>> + /*
>> + * max_pfn for PV domains is obtained from the shared_info
>> + * structures
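Aside, to make the hunk above easier to follow: for a PV guest the caller's
mao.pfn is really an MFN, so it has to be folded back to a gpfn through the
machine-to-phys table. From memory (so treat this as illustrative, not
authoritative), get_gpfn_from_mfn() on x86 boils down to:

    /* Illustrative: the x86 M2P lookup backing get_gpfn_from_mfn();
     * the real macro lives in the x86 mm headers. */
    #define get_gpfn_from_mfn(mfn) (machine_to_phys_mapping[(mfn)])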
>Another one of my potentially inaccurate attempts to help:
>
>I don't know that you need to care about pfn here. This is a PV domain, so a
>page may be "hung" from it without requiring a specific translation to a slot in
>the physmap. There are precedents for this, e.g. the shared info page, the
>grant table pages.
>
>In other words, you could make the mem event ring page a xen page, share it
>with the guest so it becomes mappable (share_xen_page_with_guest), and
>then xc_map in dom0 libxc would happily be able to map it, with no need to
>ever worry about finding a pfn in the guest domain for this page.
>
>Of course you'll need to keep track of this page and properly dispose of it
>regardless.
OK, I think this is what I am doing as seen below.
Thanks,
Aravindh
>> + case XENMEM_access_op_create_ring_page:
>> + {
>> + void *access_ring_va;
>> +
>> + /*
>> + * mem_access listeners for HVM domains need not call
>> + * xc_mem_access_set_ring_pfn() as the special ring page would have been
>> + * set up during domain creation.
>> + */
>> + rc = -ENOSYS;
>> + if ( is_hvm_domain(d) )
>> + break;
>> +
>> + /*
>> + * The ring page was created by a mem_access listener but was not
>> + * freed. Do not allow another xenheap page to be allocated.
>> + */
>> + if ( mfn_valid(d->arch.pv_domain.access_ring_mfn) )
>> + {
>> + rc = -EPERM;
>> + break;
>> + }
>> +
>> + access_ring_va = alloc_xenheap_page();
>> + if ( access_ring_va == NULL )
>> + {
>> + rc = -ENOMEM;
>> + break;
>> + }
>> +
>> + clear_page(access_ring_va);
>> + share_xen_page_with_guest(virt_to_page(access_ring_va), d,
>> + XENSHARE_writable);
>> +
>> + d->arch.pv_domain.access_ring_mfn =
>> + _mfn(virt_to_mfn(access_ring_va));
>> +
>> + rc = 0;
>> + break;
>> + }
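For context, the dom0 side Andres mentions could look roughly like the
sketch below. xc_map_foreign_range() is the standard libxc call for mapping
a guest frame by MFN; how the listener learns the ring MFN in the first
place (a hypothetical xc_mem_access_get_ring_mfn(), say) is an assumption
on my part and not part of this patch.

    #include <sys/mman.h>
    #include <xenctrl.h>

    /* Sketch: map the PV access ring page from a dom0 listener.
     * Because the page was shared via share_xen_page_with_guest(),
     * dom0 can map it by MFN like any other guest frame. */
    static void *map_pv_access_ring(xc_interface *xch, uint32_t domid,
                                    unsigned long ring_mfn)
    {
        return xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                    PROT_READ | PROT_WRITE, ring_mfn);
    }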
>> @@ -123,6 +236,21 @@ int mem_access_send_req(struct domain *d, mem_event_request_t *req)
>> return 0;
>> }
>>
>> +/* Free the xenheap page used for the PV access ring */
>> +void mem_access_free_pv_ring(struct domain *d)
>> +{
>> + struct page_info *pg = mfn_to_page(d->arch.pv_domain.access_ring_mfn);
>> +
>> + if ( !mfn_valid(d->arch.pv_domain.access_ring_mfn) )
>> + return;
>> +
>> + BUG_ON(page_get_owner(pg) != d);
>> + if ( test_and_clear_bit(_PGC_allocated, &pg->count_info) )
>> + put_page(pg);
>> + free_xenheap_page(mfn_to_virt(mfn_x(d->arch.pv_domain.access_ring_mfn)));
>> + d->arch.pv_domain.access_ring_mfn = _mfn(INVALID_MFN);
>> +}
>> +
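And on Andres' point about disposing of the page: a minimal sketch of one
plausible call site for the helper above, assuming it gets hooked into the
normal teardown path (the hunks posted here don't show where cleanup is
wired in):

    /* Sketch: free the PV access ring during domain teardown.
     * Using domain_relinquish_resources() as the hook is an assumption. */
    int domain_relinquish_resources(struct domain *d)
    {
        if ( is_pv_domain(d) )
            mem_access_free_pv_ring(d);

        /* ... rest of the existing relinquish path ... */
        return 0;
    }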