
Re: [Xen-devel] Multiple-page mem_event ring buffer?



>  
> Hello,
> 
> currently my code (and all the examples I could find in the Xen source 
> code) uses a single page mem_event ring buffer, using code along the 
> lines of:
> 
> /* Map the ring page */
> unsigned long ring_pfn;
> xc_get_hvm_param(xci_, domain_, HVM_PARAM_ACCESS_RING_PFN, &ring_pfn);
> 
> unsigned long mmap_pfn = ring_pfn;
> ringPage_ = xc_map_foreign_batch(xci_, domain_, PROT_READ | PROT_WRITE,
>                                  &mmap_pfn, 1);
> 
> if (mmap_pfn & XEN_DOMCTL_PFINFO_XTAB) {
> 
>     /* Map failed, populate ring page */
>     if (xc_domain_populate_physmap_exact(xci_, domain_,
>                                          1, 0, 0, &ring_pfn))
>         return SOME_ERROR;
> 
>     mmap_pfn = ring_pfn;
>     ringPage_ = xc_map_foreign_batch(xci_, domain_,
>                                      PROT_READ | PROT_WRITE,
>                                      &mmap_pfn, 1);
> 
>     if (mmap_pfn & XEN_DOMCTL_PFINFO_XTAB)
>         return SOME_OTHER_ERROR;
> }
> 
> Could I safely use more than one page for the ring buffer (passing '2' 
> as the last parameter of xc_map_foreign_batch(), and so on), or am I 
> limited to 1 page by design?

You would have to change Xen itself to also view the N pages as a contiguous 
region in the virtual address space. I think vmap has recently been added for 
exactly that purpose, but I know nothing about its limitations (cc'ing Jan).
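
To make that concrete: a rough, untested sketch of the hypervisor side, 
assuming the vmap()/vunmap() interface from xen/include/xen/vmap.h (the exact 
prototype may differ in your tree, and collecting the ring MFNs is left out):

    #include <xen/vmap.h>

    /*
     * Sketch: view 'nr' (possibly discontiguous) ring frames as one
     * contiguous virtual address range inside Xen.
     */
    static void *map_ring_va(const unsigned long *mfns, unsigned int nr)
    {
        void *va = vmap(mfns, nr);

        if ( va == NULL )
            return NULL; /* out of vmap space, or a bad MFN */

        return va;
    }

    /* Tear it down again with vunmap(va) when the ring goes away. */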

You would also want to retain the old single-page interface and add a new 
domctl for N-page ring setup, plus some refactoring of all the places where 
that setup sequence of calls is repeated. And a pony ;)
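
The toolstack half is the easier part, since xc_map_foreign_batch() already 
takes an array of pfns and maps them into one contiguous virtual range in the 
caller. A hypothetical sketch, assuming the new interface exposed the first of 
RING_PAGES consecutive pfns via HVM_PARAM_ACCESS_RING_PFN (nothing guarantees 
that today):

    #define RING_PAGES 2 /* hypothetical; the current interface is 1 page */

    xen_pfn_t pfns[RING_PAGES];
    unsigned long base_pfn;
    unsigned int i;

    xc_get_hvm_param(xci_, domain_, HVM_PARAM_ACCESS_RING_PFN, &base_pfn);
    for (i = 0; i < RING_PAGES; i++)
        pfns[i] = base_pfn + i;

    /* One call maps all the pages into one contiguous dom0 VA range. */
    ringPages_ = xc_map_foreign_batch(xci_, domain_, PROT_READ | PROT_WRITE,
                                      pfns, RING_PAGES);

    for (i = 0; i < RING_PAGES; i++)
        if (pfns[i] & XEN_DOMCTL_PFINFO_XTAB)
            /* populate the missing pfn and retry, as in the 1-page case */;

The missing piece is Xen agreeing to treat those N frames as a single ring, 
which is what the new domctl and the vmap work above would be for.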

Thanks!
Andres
> 
> 
> Thanks,
> Razvan Cojocaru


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

