[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-users] Binding hidden PCI devices to a transferred DomU

Hi All.

I am not sure whether this should be addressed to this mailing list or to the developer mailing list, but I hope some developers are watching what goes on here. I know cross-posting is frowned upon.

I will use "S" for the source machine and "D" for the destination machine, so we can talk about, say, S:DomU:2 (that is, machine "S", DomU number 2) just to make my questions easier to follow.

Say I have three DomUs running on source machine "S" and several DomUs running on machine "D". Assume all Xen machines on the intranet are correctly configured for failover handling, and that the hardware on both the "S" and "D" machines contains exactly the same pluggable h/w components, which have been hidden correctly from Dom0 at boot time.
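For concreteness, on both hosts the devices could be hidden from Dom0 with the xen-pciback driver; a minimal sketch (the BDF numbers are the hypothetical ones I use below):

```
# Dom0 kernel command line, pciback built into the kernel:
xen-pciback.hide=(04:00.0)(05:00.1)(06:00.0)

# Or, with pciback loaded as a module, bind one device by hand:
modprobe xen-pciback
echo 0000:04:00.0 > /sys/bus/pci/devices/0000:04:00.0/driver/unbind
echo 0000:04:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:04:00.0 > /sys/bus/pci/drivers/pciback/bind
```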

Let us assume, for the sake of this discussion, that the PCI resources held on the "S" machine are labelled 04:00.0, 05:00.1 and 06:00.0 (as reported by the "lspci" command).

Let's look at two cases:

Case 1 -
The PCI resource IDs on both machines match, i.e. PCI resource 04:00.0 on "S" = PCI 04:00.0 on "D". Hence each DomU-specific resource binding maps correctly at re-execution time on the "D" machine.
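In this case the same guest configuration line should work unchanged on both hosts; with the xl/xm toolstack that is just (using my hypothetical BDFs from above):

```
# DomU config fragment - identical on "S" and "D" in Case 1
pci = [ '04:00.0', '05:00.1', '06:00.0' ]
```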

Case 2 -
On the "D" machine the exact same resources are present, but they carry different resource identifiers, namely 03:00.0, 07:00.1 and 09:00.0, which correspond one-to-one with the pluggable resources on the "S" machine above. This could happen, for example, because the resources sit on a different motherboard design with different implementations of the south and north bridges, or because the south-bridge IC itself differs from the one in the "S" machine.
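In this case the "D"-side configuration would at least need the host BDFs rewritten. As I read the docs, xl's PCI syntax also allows pinning the virtual slot the guest sees (BDF@vslot), which might keep the guest-visible topology stable across the two hosts; a sketch, not something I have verified:

```
# DomU config fragment on "D" in Case 2: the host BDFs differ,
# but each device is pinned to the same guest-visible slot (@vslot)
pci = [ '03:00.0@04', '07:00.1@05', '09:00.0@06' ]
```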

Now, at this point, an automated DomU transfer is initiated because S:Dom0 detects that machine "S" is failing.

I have read (and hopefully understood correctly) that the S:DomUs are automatically transferred, i.e. the S:DomUs are "added" to the D:DomUs, whereupon they are simply rescheduled for continued execution.
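For reference, the transfer I am talking about is what I understand the toolstack's migrate command performs (hostname "machine-D" is made up):

```
# live-migrate DomU number 2 from "S" to "D"
xl migrate domU2 machine-D

# or, with the older xm toolstack:
xm migrate --live domU2 machine-D
```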

Of course, this works fine for both para-virtualised and fully virtualised implementations when no h/w is hidden from the Dom0s.

========= BUT what about the case where direct resources are assigned to an S:DomU without S:Dom0 knowing? =========

A> What Xen mechanism is used to 1:1-map those DomU resources of the "S" machine to those available on the "D" machine?

B> What mechanism is used to ensure the S:DomU and D:DomU resource states are identical before execution is allowed to continue, given that both S:Dom0 and D:Dom0 are totally unaware of the DomUs' resource allocations/usage?

C> Assuming the current Xen implementation does not keep DomU resources in lock-step, what mechanism (XenStore, I suspect) is used to:

(1) Bring the transferred DomU's specific PCI resources to a known state (preferably the last known state on the "S" machine before transfer to the "D" machine)?

and ...

(2) If the DomU expects its resource to continue from the last known state, what happens to the DomU when it cannot see the assigned resource in that required state? Is there any interplay with the Dom0?

and ...

(3) Is it up to the DomU to re-initialise the resource to a known state, possibly losing any data that could/should have been received? Or is some sort of DomIO state table planned, so that transferred DomUs with such resources allocated can simply continue execution because a copy of the DomIO state table is available to the D:DomU associated with those resources?

(4) Apart from DomU-aware devices, what is on the Xen roadmap for handling situations such as those set out above?

D> Would storing the IOMMU tables on the "S" machine and restoring them on the "D" machine assist in a solution to the above dilemma?

Throwing in the differences between fully virtualised and para-virtualised implementations, what is the proposed manner of handling such conditions/situations?

Any and all comments appreciated.
Thanks. Grahame

Xen-users mailing list


