
Re: [Xen-devel] [PATCH v5 5/6] xen/x86: add PHYSDEVOP_msi_control



On 06.08.2019 11:46, Marek Marczykowski-Górecki  wrote:
On Tue, Aug 06, 2019 at 07:56:39AM +0000, Jan Beulich wrote:
On 05.08.2019 15:44, Marek Marczykowski-Górecki  wrote:
On Fri, Jul 19, 2019 at 09:43:26AM +0000, Jan Beulich wrote:
On 19.07.2019 11:02, Roger Pau Monné  wrote:
On Fri, Jul 19, 2019 at 08:04:45AM +0000, Jan Beulich wrote:
On 18.07.2019 18:52, Roger Pau Monné  wrote:
On Thu, Jul 18, 2019 at 03:17:27PM +0000, Jan Beulich wrote:
On 18.07.2019 15:46, Roger Pau Monné  wrote:
In fact I don't think INTx should be enabled when MSI(-X) is disabled;
QEMU already traps writes to the command register, and it will manage
INTx enabling/disabling by itself. I think the only check required is
that MSI(-X) cannot be enabled if INTx is also enabled. In the same
way, both MSI capabilities cannot be enabled simultaneously. The
function should not explicitly disable any of the other capabilities,
and just return -EBUSY if the caller attempts for example to enable
MSI while INTx or MSI-X is enabled.
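
To make it concrete, something along these lines is what I have in mind
(only a sketch; pdev_conf_read16() and the *_cap_pos() helpers below are
placeholders for whatever config space accessors the real code uses, the
constants being the usual pci_regs.h ones):

    /* Reject enabling one interrupt delivery method while another is
     * still active; never modify the device state from here. */
    static int msi_enable_check(const struct pci_dev *pdev, bool msix)
    {
        uint16_t cmd = pdev_conf_read16(pdev, PCI_COMMAND);
        uint16_t msi_ctrl = pdev_conf_read16(pdev,
                                             msi_cap_pos(pdev) + PCI_MSI_FLAGS);
        uint16_t msix_ctrl = pdev_conf_read16(pdev,
                                              msix_cap_pos(pdev) + PCI_MSIX_FLAGS);

        /* INTx still enabled? */
        if ( !(cmd & PCI_COMMAND_INTX_DISABLE) )
            return -EBUSY;

        /* The other MSI capability already enabled? */
        if ( msix ? (msi_ctrl & PCI_MSI_FLAGS_ENABLE)
                  : (msix_ctrl & PCI_MSIX_FLAGS_ENABLE) )
            return -EBUSY;

        return 0;
    }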

You do realize that pci_intx() only ever gets called for Xen
internally used interrupts, i.e. mainly the serial console one?

You will have to bear with me because I'm not sure I understand why
it matters. Do you mean to point out that dom0 is the one in full
control of INTx, and thus Xen shouldn't care whether INTx and
MSI(-X) are enabled at the same time?

I still think that at least a warning should be printed if a caller
tries to enable MSI(-X) while INTx is also enabled, but unless there's
a reason to have both MSI(-X) and INTx enabled at the same time (maybe
a quirk for some hardware issue?) it shouldn't be allowed on this new
interface.

I don't mind improvements to the current situation (i.e. such a
warning may indeed make sense); I merely stated how things currently
are. INTx treatment was completely left aside when MSI support was
introduced into Xen.

In order to give Marek a more concise reply, would you agree to return
-EBUSY (or some error code) and print a warning message if the caller
attempts to enable MSI(-X) while INTx is also enabled?

As to returning an error - I think so, yes. I'm less sure about logging
a message.

I'm trying to get it working and it isn't clear to me what I should
check for "INTx is also enabled". I assumed the PCI_COMMAND_INTX_DISABLE
bit, but it looks like the guest has no control over this bit, even in
permissive mode. This means enabling MSI(-X) always fails, because the
guest has no way to set PCI_COMMAND_INTX_DISABLE beforehand.

Well, the guest has no control, but in order to enable MSI{,-X} I'd
have expected qemu or the Dom0 kernel to set this bit up front.

qemu would do that when running in dom0. But in a PV stubdomain it talks
to pciback, which filters it out.

Filtering out the guest attempt is fine, but it should then set the
bit while preparing MSI/MSI-X for the guest. (I'm not sure it would
need to do so explicitly, or whether it couldn't rely on code
elsewhere in the kernel doing so.)

If
that's not the case, then of course neither checking nor logging a
message is appropriate at this point in time. It may be worthwhile
calling out this anomaly then in the description.

Ok, so I'll go back to setting PCI_COMMAND_INTX_DISABLE instead of just
verifying it.

Just to clarify: should I also clear PCI_COMMAND_INTX_DISABLE when
disabling MSI? Now I think yes, because nothing else would do that
otherwise, but I would like to double check.
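
Something like this is what I have in mind (again only a sketch;
pdev_conf_read16()/pdev_conf_write16() stand in for the actual config
space accessors):

    /* Mask INTx when enabling MSI(-X), unmask it again when disabling
     * (the latter being the part I'd like to confirm). */
    static void msi_control_update_intx(const struct pci_dev *pdev, bool enable)
    {
        uint16_t cmd = pdev_conf_read16(pdev, PCI_COMMAND);

        if ( enable )
            cmd |= PCI_COMMAND_INTX_DISABLE;
        else
            cmd &= ~PCI_COMMAND_INTX_DISABLE;

        pdev_conf_write16(pdev, PCI_COMMAND, cmd);
    }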

Well, I think I've made my position clear: So far we use pci_intx()
only on devices used by Xen itself. I think it should remain
that way. Devices in possession of domains (including Dom0) should
be under Dom0's control in this regard.

Jan


 

