Re: [Xen-devel] [PATCH v2 4/4] x86/hvm: Implement hvmemul_write() using real mappings



On 12/09/17 15:32, Paul Durrant wrote:
>
>> +    {
>> +        ASSERT_UNREACHABLE();
>> +        goto unhandleable;
>> +    }
>> +
>> +    do {
>> +        enum hvm_translation_result res;
>> +        struct page_info *page;
>> +        pagefault_info_t pfinfo;
>> +        p2m_type_t p2mt;
>> +
>> +        /* Error checking.  Confirm that the current slot is clean. */
>> +        ASSERT(mfn_x(*mfn) == 0);
>> +
>> +        res = hvm_translate_get_page(curr, frame << PAGE_SHIFT, true, pfec,
>> +                                     &pfinfo, &page, NULL, &p2mt);
>> +
>> +        switch ( res )
>> +        {
>> +        case HVMTRANS_okay:
>> +            break;
>> +
>> +        case HVMTRANS_bad_linear_to_gfn:
>> +            x86_emul_pagefault(pfinfo.ec, pfinfo.linear,
>> +                               &hvmemul_ctxt->ctxt);
>> +            err = ERR_PTR(~(long)X86EMUL_EXCEPTION);
>> +            goto out;
>> +
>> +        case HVMTRANS_bad_gfn_to_mfn:
>> +            err = NULL;
>> +            goto out;
>> +
>> +        case HVMTRANS_gfn_paged_out:
>> +        case HVMTRANS_gfn_shared:
>> +            err = ERR_PTR(~(long)X86EMUL_RETRY);
>> +            goto out;
>> +
>> +        default:
>> +            goto unhandleable;
>> +        }
>> +
>> +        *mfn++ = _mfn(page_to_mfn(page));
>> +        frame++;
>> +
>> +        if ( p2m_is_discard_write(p2mt) )
>> +        {
>> +            err = ERR_PTR(~(long)X86EMUL_OKAY);
>> +            goto out;
>> +        }
>> +
>> +    } while ( frame < final );
> while ( ++frame < final ), and lose the increment above?
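
(For reference, the ERR_PTR(~(long)X86EMUL_*) returns in the hunk above
reuse the Linux-style trick of carrying a small negative value in a
pointer-typed return.  A minimal standalone sketch of that idiom follows;
the X86EMUL_EXCEPTION value here is a placeholder for illustration, not
Xen's real definition:

    /* Minimal sketch of the ERR_PTR()/PTR_ERR() idiom: the top
     * MAX_ERRNO addresses never hold valid pointers, so a small
     * negative value can travel in a pointer-typed return. */
    #include <stdio.h>

    #define MAX_ERRNO 4095

    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
    static inline int IS_ERR(const void *ptr)
    {
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
    }

    #define X86EMUL_EXCEPTION 2 /* placeholder, not Xen's header */

    int main(void)
    {
        /* X86EMUL_* codes are small and non-negative, so ~code is a
         * small negative value and round-trips through ERR_PTR(). */
        void *err = ERR_PTR(~(long)X86EMUL_EXCEPTION);

        if ( IS_ERR(err) )
            printf("rc = %ld\n", ~PTR_ERR(err)); /* prints rc = 2 */

        return 0;
    }
)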

I deliberately wrote it this way, to avoid adding to the cognitive load
of working out what the loop is doing.
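
For comparison, the two spellings visit exactly the same frames; here is
a standalone sketch, with hypothetical frame/final stand-ins for the
variables in the patch:

    /* Standalone comparison of the two loop shapes under discussion;
     * both iterations print frames 0 through final - 1. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long frame = 0, final = 4;

        /* As written in the patch: use the frame, then advance it. */
        do {
            printf("frame %lu\n", frame);
            frame++;
        } while ( frame < final );

        /* As suggested: fold the increment into the loop condition. */
        frame = 0;
        do {
            printf("frame %lu\n", frame);
        } while ( ++frame < final );

        return 0;
    }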

-1 to the suggestion.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel