Re: Design session "grant v3"
On 26.09.22 09:23, Jan Beulich wrote:
> On 26.09.2022 09:04, Juergen Gross wrote:
>> On 26.09.22 08:57, Jan Beulich wrote:
>>> On 23.09.2022 11:31, Juergen Gross wrote:
>>>> On 22.09.22 20:43, Jan Beulich wrote:
>>>>> On 22.09.2022 15:42, Marek Marczykowski-Górecki wrote:
>>>>>> Yann: can backend refuse revoking?
>>>>>> Jürgen: it shouldn't be this way, but revoke could be controlled by a
>>>>>> feature flag; revoke could pass a scratch page per revoke call (more
>>>>>> flexible control)
>>>>> A single scratch page comes with the risk of data corruption, as all
>>>>> I/O would be directed there. A sink page (for memory writes) would
>>>>> likely be okay, but device writes (memory reads) can't be done from a
>>>>> surrogate page.
>>>> I don't see that problem. In case the grant is revoked due to a
>>>> malicious/buggy backend, you can't trust the I/O data anyway.
>>> I agree for the malicious case, but I'm less certain when it comes to
>>> buggy backends. Some bugs (like not unmapping a grant) aren't putting
>>> the data at risk.
>> In case the data page can't be used for anything else, what would be the
>> point of revoking the grant? The page would leak in both cases (revoking
>> or not).
> Sure, but don't you agree it would be better for the guest to have a way
> to cleanly shut down in case it notices a misbehaving backend, rather
> than having its data corrupted in the process? Of course a guest won't
> be able to tell malicious from buggy, but what to do in such a case
> ought to be a guest policy, not behavior forced upon it from the
> outside. It could (based on its policy) either revoke or not. Then again
> I guess "pass scratch page per revoke call" is meant to cover that
> already, i.e. leaving it to the guest how to actually deal with a failed
> revoke.

Correct.
Yes. And an unaware backend wouldn't be very likely to map 512 grants in
one go, making use of the large page without intending to do so.

Juergen