
Re: [Xen-devel] R: ARM: access to iomem and HW IRQ



On 02/28/2014 12:39 AM, Dario Faggioli wrote:
> As I said, Arianna is doing something very similar... perhaps she can merge
> her and your work and try to upstream it properly, in the next few days...
> Arianna?
> 

I'd certainly be happy to, once the patch is ready and if Eric Trudeau agrees.

Sorry for the delay,
Arianna


> Regards,
> Dario
> 
> 
> Sent from Samsung Mobile
> 
> 
> -------- Original message --------
> From: Eric Trudeau
> Date: 28/02/2014 00:03 (GMT+01:00)
> To: Viktor Kleinik
> Cc: Stefano Stabellini, Dario Faggioli, xen-devel@xxxxxxxxxxxxx, Arianna Avanzini, Julien Grall
> Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
> 
> 
> On Feb 27, 2014, at 1:10 PM, "Viktor Kleinik" <viktor.kleinik@xxxxxxxxxxxxxxx> wrote:
> 
>> Thank you all for your responses.
>>
>> I will try those changes on our platform.
>> Are you planning to push the implementation of the
>> xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
>> xc_physdev_map_pirq/PHYSDEVOP_map_pirq hypercalls into an
>> official Xen release?
>>
>> Regards,
>> Victor
>>
> I don't expect to push the changes up. If you want to submit, please go 
> ahead. 
>>
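For reference, this is roughly how a Dom0 tool would drive those two libxc
wrappers once they are implemented on ARM. It is only a sketch: the domain ID,
frame numbers and IRQ number below are made-up placeholders, not values from
this thread.

/* Hypothetical Dom0 helper: map one page of device MMIO and one hardware
 * IRQ into an existing guest.  All numbers are placeholders. */
#include <stdio.h>
#include <xenctrl.h>

int main(void)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    uint32_t domid = 1;          /* placeholder guest domain ID */
    unsigned long gfn = 0x40000; /* guest frame to map at (placeholder) */
    unsigned long mfn = 0x47e00; /* machine frame of the device (placeholder) */
    int irq = 42, pirq = 42;     /* placeholder hardware IRQ */
    int rc;

    if ( !xch )
        return 1;

    /* XEN_DOMCTL_memory_mapping: map nr_mfns machine frames at gfn. */
    rc = xc_domain_memory_mapping(xch, domid, gfn, mfn, 1 /* nr_mfns */,
                                  1 /* add_mapping */);
    if ( rc )
        fprintf(stderr, "memory_mapping failed: %d\n", rc);

    /* PHYSDEVOP_map_pirq: route the hardware IRQ to the guest as a pirq. */
    rc = xc_physdev_map_pirq(xch, domid, irq, &pirq);
    if ( rc )
        fprintf(stderr, "map_pirq failed: %d\n", rc);

    xc_interface_close(xch);
    return rc ? 1 : 0;
}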
>> On Thu, Feb 27, 2014 at 2:11 PM, Eric Trudeau <etrudeau@xxxxxxxxxxxx> wrote:
>>
>>     > -----Original Message-----
>>     > From: Stefano Stabellini [mailto:stefano.stabellini@xxxxxxxxxxxxx]
>>     > Sent: Thursday, February 27, 2014 8:16 AM
>>     > To: Dario Faggioli
>>     > Cc: Viktor Kleinik; xen-devel@xxxxxxxxxxxxx; Arianna Avanzini; Stefano Stabellini;
>>     > Julien Grall; Eric Trudeau
>>     > Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
>>     >
>>     > On Thu, 27 Feb 2014, Dario Faggioli wrote:
>>     > > On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
>>     > > > Hi all,
>>     > > >
>>     > > Hi,
>>     > >
>>     > > > Does anyone know anything about future plans to implement
>>     > > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
>>     > > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
>>     > > >
>>     > > I think Arianna is working on an implementation of the former
>>     > > (XEN_DOMCTL_memory_mapping), and she should be sending patches to 
>> this
>>     > > list soon, isn't it so, Arianna?
>>     >
>>     > Eric Trudeau did some work in the area too:
>>     >
>>     > http://marc.info/?l=xen-devel&m=137338996422503
>>     > http://marc.info/?l=xen-devel&m=137365750318936
>>
>>     I checked our repo, and the IRQ-routing-to-DomU changes in the second
>>     patch URL Stefano provided above are up to date with what we have been
>>     using on our platforms.  We made no further changes after that patch,
>>     i.e. we kept the 100 msec maximum wait for a domain to finish an ISR
>>     when destroying it.
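(Just to illustrate the bounded wait Eric mentions: the idea is that domain
destruction gives any in-flight guest ISR a fixed grace period before the IRQ
is torn down. The sketch below is not Eric's patch, only a rough picture of
that idea, assuming Xen's generic irq_desc status flags.)

/* Illustrative only -- not the actual change discussed above. */
#include <xen/delay.h>
#include <xen/irq.h>
#include <xen/lib.h>

#define ISR_GRACE_MS 100   /* assumed 100 msec upper bound from the thread */

static void wait_for_guest_isr(struct irq_desc *desc)
{
    unsigned int ms;

    for ( ms = 0; ms < ISR_GRACE_MS; ms++ )
    {
        if ( !(desc->status & IRQ_INPROGRESS) )  /* handler has finished */
            return;
        mdelay(1);                               /* wait 1 ms and re-check */
    }
    printk(XENLOG_WARNING "IRQ %d still in progress after %u ms\n",
           desc->irq, ms);
}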
>>
>>     We also added support for a DomU to map in I/O memory with the iomem
>>     configuration parameter.  Unfortunately, I don't have time to produce an
>>     official patch against recent Xen upstream code, but below is a patch
>>     based on an October commit, d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29. :(
>>     I hope this is helpful, because it is the best I can do at this time.
>>
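(For completeness: with this in place, a guest would request the region in its
xl config via the iomem option, e.g. something like

    iomem = [ "47e00,1" ]   # 1 page of MMIO at machine frame 0x47e00 -- example values only

and libxl walks that list in domcreate_launch_dm(), which is where the first
hunk below replaces xc_domain_iomem_permission() with
xc_domain_memory_mapping().)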
>>     -----------------
>>
>>     tools/libxl/libxl_create.c |  5 +++--
>>      xen/arch/arm/domctl.c      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>>      2 files changed, 76 insertions(+), 3 deletions(-)
>>
>>     diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>>     index 1b320d3..53ed52e 100644
>>     --- a/tools/libxl/libxl_create.c
>>     +++ b/tools/libxl/libxl_create.c
>>     @@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
>>              LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
>>                  domid, io->start, io->start + io->number - 1);
>>
>>     -        ret = xc_domain_iomem_permission(CTX->xch, domid,
>>     -                                          io->start, io->number, 1);
>>     +        ret = xc_domain_memory_mapping(CTX->xch, domid,
>>     +                                       io->start, io->start,
>>     +                                       io->number, 1);
>>              if (ret < 0) {
>>                  LOGE(ERROR,
>>                       "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
>>     diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
>>     index 851ee40..222aac9 100644
>>     --- a/xen/arch/arm/domctl.c
>>     +++ b/xen/arch/arm/domctl.c
>>     @@ -10,11 +10,83 @@
>>      #include <xen/errno.h>
>>      #include <xen/sched.h>
>>      #include <public/domctl.h>
>>     +#include <xen/iocap.h>
>>     +#include <xsm/xsm.h>
>>     +#include <xen/paging.h>
>>     +#include <xen/guest_access.h>
>>
>>      long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>>                          XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>      {
>>     -    return -ENOSYS;
>>     +    long ret = 0;
>>     +    bool_t copyback = 0;
>>     +
>>     +    switch ( domctl->cmd )
>>     +    {
>>     +    case XEN_DOMCTL_memory_mapping:
>>     +    {
>>     +        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
>>     +        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
>>     +        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
>>     +        int add = domctl->u.memory_mapping.add_mapping;
>>     +
>>     +        /* removing i/o memory is not implemented yet */
>>     +        if (!add) {
>>     +            ret = -ENOSYS;
>>     +            break;
>>     +        }
>>     +        ret = -EINVAL;
>>     +        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
>>     +             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
>>     +             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
>>     +             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
>>     +            break;
>>     +
>>     +        ret = -EPERM;
>>     +        if ( current->domain->domain_id != 0 )
>>     +            break;
>>     +
>>     +        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
>>     +        if ( ret )
>>     +            break;
>>     +
>>     +        if ( add )
>>     +        {
>>     +            printk(XENLOG_G_INFO
>>     +                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
>>     +                   d->domain_id, gfn, mfn, nr_mfns);
>>     +
>>     +            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
>>     +            if ( !ret && paging_mode_translate(d) )
>>     +            {
>>     +                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
>>     +                                       (gfn + nr_mfns) << PAGE_SHIFT,
>>     +                                       mfn << PAGE_SHIFT);
>>     +                if ( ret )
>>     +                {
>>     +                    printk(XENLOG_G_WARNING
>>     +                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
>>     +                           d->domain_id, gfn, mfn, nr_mfns);
>>     +                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
>>     +                         is_hardware_domain(current->domain) )
>>     +                        printk(XENLOG_ERR
>>     +                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
>>     +                               d->domain_id, mfn, mfn + nr_mfns - 1);
>>     +                }
>>     +            }
>>     +        }
>>     +    }
>>     +    break;
>>     +
>>     +    default:
>>     +        ret = -ENOSYS;
>>     +        break;
>>     +    }
>>     +
>>     +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
>>     +        ret = -EFAULT;
>>     +
>>     +    return ret;
>>      }
>>
>>      void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
>>


-- 
/*
 * Arianna Avanzini
 * avanzini.arianna@xxxxxxxxx
 * 73628@xxxxxxxxxxxxxxxxxxx
 */

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

