
Re: [Xen-devel] [PATCH 0/2 for-4.12] Introduce runstate area registration with phys address



On 18/03/2019 11:31, Andrii Anisov wrote:
Hello Julien, Guys,

Hi,

Sorry for the delayed answer. I caught a pretty nasty flu after the last long weekend, which made me completely unavailable last week :(

Sorry to hear that.

On 07.03.19 17:17, Julien Grall wrote:
Why? Arm32 is as equally supported as Arm64.
Yep, I believe that.
But I do not expect one would build arm32-based systems with many vcpus.
I have the impression that arm32 does not target server applications. What's left? Embedded with 4, no, OK, up to 8 VMs, 8 vcpus each. How much would the runstate mappings cost?

As I already said multiple times before, please try to explain everything in your first e-mail...

The limitation you mention applies only to GICv2. If you use GICv3, the number of CPUs can go much higher. Whether this is going to be used in the future is another question. However, I would rather not discard 32-bit from any discussion, as you don't know how it is going to be used in the future.

The Arm32 port is interesting because not all the memory is mapped in Xen. There is a chance that Arm64 will go the same way in the future.

I have been thinking a bit more about arm32. I don't think we ever map 2GB of on-demand paging in one go, so this could probably be reduced to 1GB. The other 1GB could be used to increase the vmap. This would give us up to 1792MB of vmap.
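(For reference, a rough sketch of the arithmetic, assuming the arm32 virtual layout described in xen/include/asm-arm/config.h at the time: the vmap occupies 256M-1G, i.e. 768MB, and the domheap occupies 2G-4G, i.e. 2GB of on-demand mapping. Halving the domheap frees 1GB, and 768MB + 1024MB = 1792MB.)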

Pending the performance results, the global mapping could be a solution for arm32 as well.


What scenario? You just need to read the implementation of the current hypercall to see there is nothing preventing the call from being made twice.
Ah, OK, you mean that kind of race. Yes, it looks like I've overlooked that scenario.

When you are designing a new hypercall, you have to think about how a guest can misuse it (yes, I said misuse, not use!). Think about a guest with two vCPUs: vCPU A is constantly modifying the runstate for vCPU B. What could happen if the hypervisor is in the middle of context-switching vCPU B?
The effects I can imagine might differ:
 - The new runstate area might be updated on Arm64, maybe partially and concurrently (IIRC, we have all the RAM permanently mapped in Xen)

Today the RAM is always permanently mapped, but I can't promise this is going to be the case in the future.

 - A paging fault might happen on Arm32

What do you mean?

 - Something similar or different might happen on x86 PV or HVM

Yet, all of them are outside the design and quite unexpected.
We *must* protect the hypervisor against any guest behaviour, particularly the unexpected kind. If the Android VM hits itself, then I pretty much don't care, assuming the VM was misbehaving. However, I don't think anyone would be happy if the Android VM were able to take down the whole platform. At least, I would not want to be a passenger of that car...
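To make the scenario concrete, here is a minimal guest-side sketch of the misuse, using the existing VCPUOP_register_runstate_memory_area hypercall (the buffer and the vCPU id are illustrative):

    /* vCPU A re-registers vCPU B's runstate area in a tight loop while
     * vCPU B may be in the middle of a context switch on another pCPU. */
    static struct vcpu_runstate_info runstate_buf;
    struct vcpu_register_runstate_memory_area area;

    for ( ;; )
    {
        set_xen_guest_handle(area.addr.h, &runstate_buf);
        HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
                           1 /* vCPU B */, &area);
    }

Nothing in the current implementation serialises the registration against the context switch path, so the hypervisor can observe the area changing under its feet.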

As I pointed

Also, vcpu_info needs protection from this. Do you agree?

The vcpu_info area cannot be registered twice thanks to the following check in map_vcpu_info:
     if ( !mfn_eq(v->vcpu_info_mfn, INVALID_MFN) )
         return -EINVAL;
Right you are.
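For what it's worth, a minimal sketch of an analogous guard for a phys-address runstate registration, assuming a hypothetical v->runstate_guest_mfn field initialised to INVALID_MFN:

    /* Reject re-registration, mirroring the map_vcpu_info() check. */
    if ( !mfn_eq(v->runstate_guest_mfn, INVALID_MFN) )
        return -EINVAL;

Note this only closes the double-registration hole; it does not by itself serialise the first registration against a concurrent context switch.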

Well, the numbers you showed in the other thread didn't show any improvement at all... So please explain why we should call map_domain_page_global() here and use more vmap on arm32
I don't expect vmap to be a practical problem for arm32-based systems.

Well vmap is quite small on Arm. So why should we use more of it if...

With the current implementation, the numbers are equal to those I have for mapping the runstate on access.

it does not make a real improvement to the context switch? But I recall you said the interrupt latency was worse with keeping the runstate mapped (7000ns vs 7900ns).
Yes, for Roger's patch.

You also saw a performance drop when using the glmark2 benchmark.
Yes, I did see it with Roger's patch. But with mine, the numbers are slightly better (~1%) with the runstate kept mapped.
Also, introducing more race-prevention code will have its own impact.
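For context, a hedged sketch of the two strategies being compared, using Xen's real mapping primitives but with illustrative surrounding code (and assuming, for brevity, that the runstate area does not cross a page boundary):

    /* Map on access: map/unmap around every runstate update in the
     * context switch path; no long-lived vmap usage, but the mapping
     * cost is paid on every switch. */
    struct vcpu_runstate_info *r = map_domain_page(mfn);
    memcpy(r, &v->runstate, sizeof(*r));
    unmap_domain_page(r);

    /* Keep mapped: map once at registration time; updates during the
     * context switch are plain stores, but the mapping consumes the
     * (small on arm32) vmap for the lifetime of the registration.
     * runstate_va is a hypothetical per-vCPU field. */
    v->runstate_va = map_domain_page_global(mfn);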

Please provide the numbers once you have fixed the race.

Cheers,

--
Julien Grall
