
Re: [Xen-devel] [PATCH v18 06/11] x86/hvm/ioreq: add a new mappable resource type...



>>> On 22.03.18 at 12:55, <paul.durrant@xxxxxxxxxx> wrote:
> ... XENMEM_resource_ioreq_server
> 
> This patch adds support for a new resource type that can be mapped using
> the XENMEM_acquire_resource memory op.
> 
> If an emulator makes use of this resource type then, instead of mapping
> gfns, the IOREQ server will allocate pages from the emulating domain's
> heap. These pages will never be present in the P2M of the guest at any
> point (and are not even shared with the guest) and so are not vulnerable to
> any direct attack by the guest.

"allocate pages from the emulating domain's heap" is a sub-optimal
(at least slightly misleading) description, due to your use of
MEMF_no_refcount together with the fact that domain's don't
really have their own heaps.
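
What the call actually does is closer to the following (illustrative
comment only, restating what the patch's own comment further down
already says):

    /*
     * The page comes from the global domheap; s->emulator is merely
     * recorded as its owner, and MEMF_no_refcount means the allocation
     * is not charged against that domain's maximum allocation.
     */
    iorp->page = alloc_domheap_page(s->emulator, MEMF_no_refcount);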

> +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +    if ( iorp->page )
> +    {
> +        /*
> +         * If a guest frame has already been mapped (which may happen
> +         * on demand if hvm_get_ioreq_server_info() is called), then
> +         * allocating a page is not permitted.
> +         */
> +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> +            return -EPERM;
> +
> +        return 0;
> +    }
> +
> +    /*
> +     * Allocated IOREQ server pages are assigned to the emulating
> +     * domain, not the target domain. This is safe because the emulating
> +     * domain cannot be destroyed until the ioreq server is destroyed.
> +     * Also we must use MEMF_no_refcount otherwise page allocation
> +     * could fail if the emulating domain has already reached its
> +     * maximum allocation.
> +     */
> +    iorp->page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
> +
> +    if ( !iorp->page )
> +        return -ENOMEM;
> +
> +    if ( !get_page_type(iorp->page, PGT_writable_page) )
> +        goto fail;
> +
> +    iorp->va = __map_domain_page_global(iorp->page);
> +    if ( !iorp->va )
> +        goto fail;
> +
> +    clear_page(iorp->va);
> +    return 0;
> +
> + fail:
> +    put_page_and_type(iorp->page);

This is wrong in case it's the get_page_type() which failed:
put_page_and_type() would then drop a type reference that was never
acquired.
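
For illustration only, one way to restructure the error path so that
put_page_and_type() is reached only once the type reference has
actually been acquired might look like this (a sketch, not taken from
any later revision of the patch; whether put_page() alone is the right
way to undo the bare allocation is an assumption here):

    iorp->page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
    if ( !iorp->page )
        return -ENOMEM;

    if ( !get_page_type(iorp->page, PGT_writable_page) )
        goto fail_untyped;

    iorp->va = __map_domain_page_global(iorp->page);
    if ( !iorp->va )
        goto fail_typed;

    clear_page(iorp->va);
    return 0;

 fail_typed:
    /* The PGT_writable_page type reference was acquired, so drop it too. */
    put_page_and_type(iorp->page);
    iorp->page = NULL;
    return -ENOMEM;

 fail_untyped:
    /*
     * get_page_type() failed, so there is no type reference to drop;
     * only the allocation itself needs undoing.  Assumption: put_page()
     * is sufficient for the page obtained via alloc_domheap_page() above.
     */
    put_page(iorp->page);
    iorp->page = NULL;
    return -ENOMEM;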

> +int arch_acquire_resource(struct domain *d, unsigned int type,
> +                          unsigned int id, unsigned long frame,
> +                          unsigned int nr_frames, xen_pfn_t mfn_list[],
> +                          unsigned int *flags)
> +{
> +    int rc;
> +
> +    switch ( type )
> +    {
> +    case XENMEM_resource_ioreq_server:
> +    {
> +        ioservid_t ioservid = id;
> +        unsigned int i;
> +
> +        rc = -EINVAL;
> +        if ( id != (unsigned int)ioservid )
> +            break;
> +
> +        rc = 0;
> +        for ( i = 0; i < nr_frames; i++ )
> +        {
> +            mfn_t mfn;
> +
> +            rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
> +            if ( rc )
> +                break;
> +
> +            mfn_list[i] = mfn_x(mfn);
> +        }
> +
> +        /*
> +         * The frames will be assigned to the tools domain that created
> +         * the ioreq server.
> +         */

s/will be/have been/ and perhaps drop "tools"?

> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -374,6 +374,14 @@ static inline void put_page_and_type(struct page_info *page)
>  
>  void clear_and_clean_page(struct page_info *page);
>  
> +static inline int arch_acquire_resource(
> +    struct domain *d, unsigned int type, unsigned int id,
> +    unsigned long frame,unsigned int nr_frames, xen_pfn_t mfn_list[],

Missing blank.
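I.e. the parameter list would then read:

    unsigned long frame, unsigned int nr_frames, xen_pfn_t mfn_list[],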

Jan


