
Re: [Xen-devel] [PATCH v2] xen: use domid check in is_hardware_domain



On 07/10/2013 06:57 AM, George Dunlap wrote:
On 09/07/13 21:28, Daniel De Graaf wrote:
[...]
index 874742c..51d0ea6 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1459,7 +1459,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
          }
          set_cpuid_faulting(!is_hvm_vcpu(next) &&
-                           (next->domain->domain_id != 0));
+                           !is_control_domain(next->domain));

Won't this mean that in your separate hardware/control domain model the
hardware domain *will* fault on cpuid? Is this what we want?

I would expect that if CPUID faulting is turned on for the hardware domain,
no features would be masked (since it can't be migrated, there's no reason to
avoid using an existing feature). Jan commented on a prior version of this patch
(on April 12) that it makes sense for a control domain to see the full features
of the CPU in order to create masks for guests.

In theory, we could enable CPUID faulting for all domains after dom0 and force
the domain builder to copy out the hardware CPUID to its guests - likely
unmodified for hardware and control domains, and masked as usual for other
domUs. However, this may require too much knowledge of the behavior of CPUID
sub-leaves to be encoded in the domain builder - this seems like knowledge best
left to the hardware domain (for things like feature bits for power management
MSRs) and the control domain (to see an upper bound on features it exposes to
the guest).

So, I think the best solution is to make this condition !hardware && !control.

      }
      if (is_hvm_vcpu(next) && (prev != next) )
[...]
@@ -1962,7 +1962,7 @@ static void dump_softtsc(unsigned char key)
                 "warp=%lu (count=%lu)\n", tsc_max_warp, tsc_check_count);
      for_each_domain ( d )
      {
-        if ( d->domain_id == 0 && d->arch.tsc_mode == TSC_MODE_DEFAULT )
+        if ( is_hardware_domain(d) && d->arch.tsc_mode == TSC_MODE_DEFAULT )
              continue;
          printk("dom%u%s: mode=%d",d->domain_id,
                  is_hvm_domain(d) ? "(hvm)" : "", d->arch.tsc_mode);

Why have direct access to the tsc for the hardware domain and not the control 
domain?

This is just excluding output from a printk. It may make sense to exclude more
than the hardware domain, but that's really a matter of taste. We could also
just remove the exclusion, but that should be a separate patch.

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 57dbd0c..8e8d3d1 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -745,7 +745,7 @@ static void pv_cpuid(struct cpu_user_regs *regs)
      c = regs->ecx;
      d = regs->edx;
-    if ( current->domain->domain_id != 0 )
+    if ( !is_control_domain(current->domain) )
      {
          unsigned int cpuid_leaf = a, sub_leaf = c;

Same as above re cpuid.

Will need to be handled the same if the above is changed.

[...]
index 6c264a5..692372a 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -238,7 +238,7 @@ struct domain *domain_create(
      if ( domcr_flags & DOMCRF_hvm )
          d->is_hvm = 1;
-    if ( domid == 0 )
+    if ( is_hardware_domain(d) )
      {
          d->is_pinned = opt_dom0_vcpus_pin;
          d->disable_migrate = 1;

At some point this will have to be thought about a bit more -- which of the 
disaggregated domains do we actually want pinned here?  But I think this is 
fine for now.

I would say the hardware domain would be the most likely one to pin, since it
would be in charge of things like CPU frequency, microcode, and so forth.
Pinning other domains for performance reasons is really better done by a
scheduler interface, either at domain creation or at runtime.

Everything else looks reasonable to me, but obviously needs acks from various 
maintainers.

  -George


--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

