Re: [Xen-devel] [PATCH] xen/domctl: lower loglevel of XEN_DOMCTL_memory_mapping
>>> On 10.09.15 at 07:28, <tiejun.chen@xxxxxxxxx> wrote:
>>> If the 64 limit was arbitrary then I would suggest increasing it to at
>>> least 1024 so that at least 4M of BAR can be mapped in one go and it
>>> reduces the overhead by a factor of 16.
>>
>> 1024 may be a little much, but 256 is certainly a possibility, plus
>> Konrad's suggestion to allow this limit to be controlled via command
>> line option.
>
> Are you guys talking this way?

Sort of (the patch has the intended effect, but for its size very many
rough edges).

Jan

> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 3946e4c..a9671bb 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -88,6 +88,10 @@ boolean_param("noapic", skip_ioapic_setup);
>  s8 __read_mostly xen_cpuidle = -1;
>  boolean_param("cpuidle", xen_cpuidle);
>  
> +/* once_mapping_mfns: memory mapping mfn bumbers once. */
> +unsigned int xen_once_mapping_mfns;
> +integer_param("once_mapping_mfns", xen_once_mapping_mfns);
> +
>  #ifndef NDEBUG
>  unsigned long __initdata highmem_start;
>  size_param("highmem-start", highmem_start);
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 3bf39f1..82c85e3 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -33,6 +33,8 @@
>  #include <public/domctl.h>
>  #include <xsm/xsm.h>
>  
> +extern unsigned int xen_once_mapping_mfns;
> +
>  static DEFINE_SPINLOCK(domctl_lock);
>  DEFINE_SPINLOCK(vcpu_alloc_lock);
>  
> @@ -1035,7 +1037,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  
>          ret = -E2BIG;
>          /* Must break hypercall up as this could take a while. */
> -        if ( nr_mfns > 64 )
> +        if ( nr_mfns > xen_once_mapping_mfns )
>              break;
>  
>          ret = -EPERM;
>
> Thanks
> Tiejun
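
A minimal sketch of one way the tunable limit could look, staying with the
integer_param() mechanism the quoted patch already uses; the option name
"memory-mapping-mfns", the variable opt_mapping_mfns, and its placement in
xen/common/domctl.c are illustrative assumptions, not anything settled in
this thread:

/*
 * Sketch only, all names hypothetical: a command-line-tunable limit on how
 * many MFNs a single XEN_DOMCTL_memory_mapping invocation may map.
 * Defaulting to 64 preserves today's hard-coded behaviour when the option
 * is omitted; the quoted patch leaves its variable zero-initialised, so
 * without the option every request with nr_mfns > 0 would fail with -E2BIG.
 * integer_param() comes from xen/init.h; declaring the variable in
 * xen/common/domctl.c next to its only user avoids the bare extern.
 */
static unsigned int __read_mostly opt_mapping_mfns = 64;
integer_param("memory-mapping-mfns", opt_mapping_mfns);

/* ... then in do_domctl(), case XEN_DOMCTL_memory_mapping: ... */
        ret = -E2BIG;
        /* Must break hypercall up as this could take a while. */
        if ( nr_mfns > opt_mapping_mfns )
            break;

With a default in place, booting with e.g. "memory-mapping-mfns=256" on the
Xen command line would raise the per-hypercall limit to 1 MiB worth of 4 KiB
pages, while leaving the limit at the current 64 when the option is absent.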