
Re: [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features




Hello all.


I would like to clarify a few questions that came up in the comments on this patch series. I have put them together below.


On 06.08.20 14:29, Jan Beulich wrote:
> On 06.08.2020 13:08, Julien Grall wrote:
> > On 05/08/2020 20:30, Oleksandr wrote:
> > > I was thinking how to split handle_hvm_io_completion() gracefully,
> > > but I failed to find a good solution for that, so I decided to add
> > > two stubs (msix_write_completion and handle_realmode_completion) on
> > > Arm. I could add a comment describing why they are here if
> > > appropriate. But if you think they shouldn't be called from the
> > > common code in any way, I will try to split it.
> > I am not entirely sure what msix_write_completion is meant to do on x86.
> > Is it dealing with virtual MSIx? Maybe Jan, Roger or Paul could help?
> Due to the split brain model of handling PCI pass-through (between
> Xen and qemu), a guest writing to an MSI-X entry needs this write
> handed to qemu, and upon completion of the write there Xen also
> needs to take some extra action.


1. Regarding the common handle_hvm_io_completion() implementation:

Could msix_write_completion() be called later on, so that we would be able to split handle_hvm_io_completion() gracefully, or could we call it from handle_mmio()? The reason I am asking is to avoid calling it from the common code, which would require introducing a stub on Arm that is never going to be implemented
(if msix_write_completion() is purely x86 material).

For the non-RFC patch series I moved handle_realmode_completion to the x86 code, and my local implementation now looks like this:

bool handle_hvm_io_completion(struct vcpu *v)
{
    struct domain *d = v->domain;
    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
    struct hvm_ioreq_server *s;
    struct hvm_ioreq_vcpu *sv;
    enum hvm_io_completion io_completion;

    if ( has_vpci(d) && vpci_process_pending(v) )
    {
        raise_softirq(SCHEDULE_SOFTIRQ);
        return false;
    }

    sv = get_pending_vcpu(v, &s);
    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
        return false;

    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
        STATE_IORESP_READY : STATE_IOREQ_NONE;

    msix_write_completion(v);
    vcpu_end_shutdown_deferral(v);

    io_completion = vio->io_completion;
    vio->io_completion = HVMIO_no_completion;

    switch ( io_completion )
    {
    case HVMIO_no_completion:
        break;

    case HVMIO_mmio_completion:
        return handle_mmio();

    case HVMIO_pio_completion:
        return handle_pio(vio->io_req.addr, vio->io_req.size,
                          vio->io_req.dir);

    default:
        /* Completions which remain arch-specific (e.g. realmode on x86). */
        return arch_handle_hvm_io_completion(io_completion);
    }

    return true;
}
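
For completeness, the x86-only part I have in mind would look roughly like the sketch below (just a sketch keeping the current naming, assuming HVMIO_realmode_completion stays x86-only and is handled via the existing VMX realmode emulation path; the final shape is of course open for discussion):

bool arch_handle_hvm_io_completion(enum hvm_io_completion io_completion)
{
    switch ( io_completion )
    {
    case HVMIO_realmode_completion:
    {
        struct hvm_emulate_ctxt ctxt;

        /* Re-run the realmode emulation that requested this completion. */
        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
        vmx_realmode_emulate_one(&ctxt);
        hvm_emulate_writeback(&ctxt);

        break;
    }

    default:
        ASSERT_UNREACHABLE();
        break;
    }

    return true;
}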

2. Regarding renaming the common handle_mmio() to ioreq_handle_complete_mmio():

There was a request to consider renaming this function, which is called from the common code in the context of the IOREQ series. The point is that its name is pretty generic and can be confusing on Arm (we already have a try_handle_mmio()). I noticed that, besides the common code, this function is called from a few places on x86 (and I am not even sure whether all of them are IOREQ related).
The question is: would the x86 folks be happy with such a renaming?

Alternatively, I could provide the following in include/asm-arm/hvm/ioreq.h without renaming it in the common code, while still using a non-confusing variant on Arm (although I am not sure whether this is a good idea):

#define handle_mmio ioreq_handle_complete_mmio
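
Yet another variant (only a sketch, names and locations are not final) would be to leave handle_mmio() untouched on x86 and have the common code call the new name, with x86 providing a trivial wrapper:

/* include/xen/ioreq.h (common, hypothetical location) */
bool ioreq_handle_complete_mmio(void);

/* x86 side: the existing handle_mmio() keeps its name, the common code calls the wrapper */
bool ioreq_handle_complete_mmio(void)
{
    return handle_mmio();
}

This way neither the x86 code nor the Arm code would need the #define above.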


3. Regarding common IOREQ/DM stuff location:

Currently it is located at:
common/hvm/...
include/xen/hvm/...

For the non-RFC patch series I am going to avoid using the "hvm" name (it is an internal detail of arch-specific code and shouldn't be exposed to the common code). The question is whether I should use another directory name (probably "ioreq"?) or just place the files in the common root directory. A rough sketch of what I have in mind is shown below.
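
For example, with the "ioreq" name the two options would look roughly like this (paths are only an illustration, not the final split):

    a dedicated directory:       common/ioreq/...      include/xen/ioreq/...
    the common root directory:   common/ioreq.c        include/xen/ioreq.h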


Could you please share your opinion?

--
Regards,

Oleksandr Tyshchenko




 

