
[Xen-devel] [Hackathon] Linux session notes

[Thanks to Malcolm Crossley for taking notes.]

= Overview =

David Vrabel outlined different kernel modes, their status and
suggested focus:

  PV dom0:  production ready - fixes and new hardware support (e.g. EFI)
  PV domU:  production ready - fixes
  PVHVM:    production ready - fixes and new features
  PVH:      experimental     } the future.
  PVH dom0: not yet          }

David would like PV to be deprecated -- no new PV-only features;
new features must work for PVH or PVHVM as well.

Jan Beulich said that SUSE planned to jump from the classic Xen kernel
straight to PVH.

Konrad Rzeszutek Wilk noted that PV MMU ops are planned to be removed
some time (suggested 5 years) after PVH support is complete.

= Broken Things =

Konrad noted that PAT is broken under PV, which makes graphics slow.

The 512 GiB PV guest limit. The fix (a 4-level p2m) is present in Xen
(though perhaps not in the tools for save/restore?), but PVHVM or PVH
is recommended instead of extending PV.
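For context on where the 512 GiB figure comes from, a minimal worked
calculation (assuming 4 KiB pages and 8-byte p2m entries, the usual
64-bit PV layout; names below are illustrative, not kernel code):

```python
# Why a 3-level p2m caps a 64-bit PV guest at 512 GiB:
# each p2m page holds 4096 / 8 = 512 eight-byte entries.
PAGE_SIZE = 4096                             # bytes per page
ENTRY_SIZE = 8                               # bytes per p2m entry (64-bit)
entries_per_page = PAGE_SIZE // ENTRY_SIZE   # 512

levels = 3
max_pages = entries_per_page ** levels       # 512^3 guest pages
max_bytes = max_pages * PAGE_SIZE

print(max_bytes // 2**30, "GiB")             # -> 512 GiB
```

A 4th level multiplies the reachable guest memory by another factor
of 512, which is why the 4-level p2m lifts the limit.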

In-guest kexec does not work when PV drivers are used -- grants and
event channels are not torn down, which prevents them from being used
by the exec'd kernel.

= Plans =


- Fixing the m2p override.  Mapping the same MFN two or more times (by
  two grant refs, or the same one) means get_user_pages() cannot find
  the right struct page, since the m2p override is many-to-one.
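As a toy illustration of the lookup problem (a sketch only, not kernel
code): the override is effectively a single table keyed by MFN, so a
second mapping of the same MFN clobbers the first:

```python
# Toy model of the m2p override: one mapping from MFN to the local
# page currently backing it.  Keyed by MFN alone, so two simultaneous
# mappings of the same MFN cannot both be recorded.
m2p_override = {}

def map_grant(mfn, page):
    m2p_override[mfn] = page      # second mapping clobbers the first

def lookup(mfn):                  # what get_user_pages() relies on
    return m2p_override.get(mfn)

map_grant(0x1234, "page A")       # first grant mapping of MFN 0x1234
map_grant(0x1234, "page B")       # same MFN mapped a second time

print(lookup(0x1234))             # -> "page B"; "page A" is unreachable
```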

- Fixing page-to-gref lookup (needed for grant copy in netback's
  to-guest path), which is currently netback-only (this breaks a domU
  providing an iSCSI target used via blkfront/blkback by another domU
  on the same host).  Probably by adding a generic struct page
  extension mechanism.
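One way to picture the proposed generic mechanism (purely illustrative;
the names and shape here are assumptions, not the actual proposal): a
per-page extension table that any backend, not just netback, can
consult to turn a page back into the grant ref it came from:

```python
# Toy sketch of a generic page -> grant-ref extension table.  Today
# netback keeps this association privately; a generic table would let
# other backends (e.g. blkback for an iSCSI-target domU) resolve it.
page_ext = {}                     # id(page) -> gref (hypothetical)

def set_gref(page, gref):
    page_ext[id(page)] = gref

def page_to_gref(page):
    return page_ext.get(id(page))

buf = bytearray(4096)             # stands in for a granted page
set_gref(buf, 42)
print(page_to_gref(buf))          # -> 42
```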

- Testing (mostly dom0) in XenServer's test system.


- Performance regression testing.
- Microcode loading at run time.
- pciback: a more useful Secondary Bus Reset (SBR) fallback when
  Function Level Reset (FLR) isn't present.



Daniel Kiper:

- EFI for dom0 (see series on list)

Wei Liu:

- VNUMA for PV and PVH (picking up Elena's work).
