
Re: [Xen-devel] [Patch RFC 2/4] usb: add flag to USBPacket to request complete callback after isoc transfer

On 07/17/2015 11:23 AM, Gerd Hoffmann wrote:
--- a/hw/usb/host-libusb.c
+++ b/hw/usb/host-libusb.c
@@ -451,6 +451,7 @@ static void usb_host_req_complete_iso(struct libusb_transfer *transfer)
       if (xfer->ring->ep->pid == USB_TOKEN_IN) {
           QTAILQ_INSERT_TAIL(&xfer->ring->copy, xfer, next);
+        usb_wakeup(xfer->ring->ep, 0);
       } else {
           QTAILQ_INSERT_TAIL(&xfer->ring->unused, xfer, next);

Hmm, I can see the benefit of this call to avoid polling.

OTOH I don't see how to find the packets already processed via this
mechanism. For my case I'd need:

- the call being made in the else clause

Hmm.  This is for IN transfers, notifying the host adapter "I have data
for you, please hand me one (or more) USBPacket which I can fill".

Why do you need it for OUT transfers too?  usb-host has copied and
queued up the data already, there is nothing to pass back ...

Aah, right. The packet->actual_length is filled during the copy. Is
this correct? What does libusb return for the individual frames? I
assume it will be the amount actually sent out to the device, not the
size of the frame given to it.

- some way to have a packet reference in the endpoint (assuming
    to use the bus .endpoint_wakeup callback which is called by
    usb_wakeup(), too).

Yes, endpoint callback would be more useful for this.
PortOps needs this for remote wakeup implementation.

    The problem here is that host-libusb.c would call usb_wakeup()
    not for each packet, but for each libusb I/O, which is combining
    multiple packets given to usb_handle_packet().

You can just call usb_handle_packet() multiple times: either
calculate how often based on time and bandwidth, or keep calling
until you get no more data back.

Understood. That's the polling case I mentioned above.


Xen-devel mailing list
