
[Xen-devel] Ping: [PATCH v2] x86/HVM: p2m_ram_ro is incompatible with device pass-through


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Mon, 15 Jul 2019 08:38:12 +0000
  • Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Paul Durrant <Paul.Durrant@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Mon, 15 Jul 2019 08:55:01 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHVMZODenL8esX3CUGEt7b/0uHdFabLbhUA
  • Thread-topic: Ping: [PATCH v2] x86/HVM: p2m_ram_ro is incompatible with device pass-through

On 03.07.2019 13:36, Jan Beulich wrote:
> The write-discard property of the type can't be represented in IOMMU
> page table entries. Make sure the respective checks / tracking can't
> race, by utilizing the domain lock. The other sides of the sharing/
> paging/log-dirty exclusion checks should perhaps subsequently be put
> under that lock as well.
> 
> Take the opportunity and also convert neighboring bool_t to bool in
> struct hvm_domain.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Alongside Paul's R-b, could I get an ack or otherwise from you?

Thanks, Jan

> ---
> v2: Don't set p2m_ram_ro_used when failing the request.
> 
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -255,16 +255,33 @@ static int set_mem_type(struct domain *d
>    
>        mem_type = array_index_nospec(data->mem_type, ARRAY_SIZE(memtype));
>    
> -    if ( mem_type == HVMMEM_ioreq_server )
> +    switch ( mem_type )
>        {
>            unsigned int flags;
>    
> +    case HVMMEM_ioreq_server:
>            if ( !hap_enabled(d) )
>                return -EOPNOTSUPP;
>    
>          /* Do not change to HVMMEM_ioreq_server if no ioreq server mapped. */
>            if ( !p2m_get_ioreq_server(d, &flags) )
>                return -EINVAL;
> +
> +        break;
> +
> +    case HVMMEM_ram_ro:
> +        /* p2m_ram_ro can't be represented in IOMMU mappings. */
> +        domain_lock(d);
> +        if ( has_iommu_pt(d) )
> +            rc = -EXDEV;
> +        else
> +            d->arch.hvm.p2m_ram_ro_used = true;
> +        domain_unlock(d);
> +
> +        if ( rc )
> +            return rc;
> +
> +        break;
>        }
>    
>        while ( iter < data->nr )
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -1448,17 +1448,36 @@ static int assign_device(struct domain *
>        if ( !iommu_enabled || !hd->platform_ops )
>            return 0;
>    
> -    /* Prevent device assign if mem paging or mem sharing have been
> -     * enabled for this domain */
> -    if ( unlikely(d->arch.hvm.mem_sharing_enabled ||
> -                  vm_event_check_ring(d->vm_event_paging) ||
> +    domain_lock(d);
> +
> +    /*
> +     * Prevent device assignment if any of
> +     * - mem paging
> +     * - mem sharing
> +     * - the p2m_ram_ro type
> +     * - global log-dirty mode
> +     * are in use by this domain.
> +     */
> +    if ( unlikely(vm_event_check_ring(d->vm_event_paging) ||
> +#ifdef CONFIG_HVM
> +                  (is_hvm_domain(d) &&
> +                   (d->arch.hvm.mem_sharing_enabled ||
> +                    d->arch.hvm.p2m_ram_ro_used)) ||
> +#endif
>                      p2m_get_hostp2m(d)->global_logdirty) )
> +    {
> +        domain_unlock(d);
>            return -EXDEV;
> +    }
>    
>        if ( !pcidevs_trylock() )
> +    {
> +        domain_unlock(d);
>            return -ERESTART;
> +    }
>    
>        rc = iommu_construct(d);
> +    domain_unlock(d);
>        if ( rc )
>        {
>            pcidevs_unlock();
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -156,10 +156,11 @@ struct hvm_domain {
>    
>        struct viridian_domain *viridian;
>    
> -    bool_t                 hap_enabled;
> -    bool_t                 mem_sharing_enabled;
> -    bool_t                 qemu_mapcache_invalidate;
> -    bool_t                 is_s3_suspended;
> +    bool                   hap_enabled;
> +    bool                   mem_sharing_enabled;
> +    bool                   p2m_ram_ro_used;
> +    bool                   qemu_mapcache_invalidate;
> +    bool                   is_s3_suspended;
>    
>        /*
>         * TSC value that VCPUs use to calculate their tsc_offset value.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
