Re: [PATCH v5 04/13] xen: extend domctl interface for cache coloring
Hi Julien

On Mon, Jan 8, 2024 at 12:36 PM Julien Grall <julien@xxxxxxx> wrote:
>
> Hi Carlo,
>
> On 08/01/2024 11:19, Carlo Nonato wrote:
> > Hi Julien,
> >
> > On Mon, Jan 8, 2024 at 12:01 PM Julien Grall <julien@xxxxxxx> wrote:
> >>
> >> Hi Carlo,
> >>
> >> On 08/01/2024 10:27, Carlo Nonato wrote:
> >>> On Fri, Jan 5, 2024 at 6:26 PM Julien Grall <julien@xxxxxxx> wrote:
> >>>> On 02/01/2024 09:51, Carlo Nonato wrote:
> >>>>> This commit updates the domctl interface to allow the user to set cache
> >>>>> coloring configurations from the toolstack.
> >>>>> It also implements the functionality for arm64.
> >>>>>
> >>>>> Based on original work from: Luca Miccio <lucmiccio@xxxxxxxxx>
> >>>>>
> >>>>> Signed-off-by: Carlo Nonato <carlo.nonato@xxxxxxxxxxxxxxx>
> >>>>> Signed-off-by: Marco Solieri <marco.solieri@xxxxxxxxxxxxxxx>
> >>>>> ---
> >>>>> v5:
> >>>>> - added a new hypercall to set colors
> >>>>> - uint for the guest handle
> >>>>> v4:
> >>>>> - updated XEN_DOMCTL_INTERFACE_VERSION
> >>>>> ---
> >>>>>  xen/arch/arm/llc-coloring.c    | 17 +++++++++++++++++
> >>>>>  xen/common/domctl.c            | 11 +++++++++++
> >>>>>  xen/include/public/domctl.h    | 10 +++++++++-
> >>>>>  xen/include/xen/llc-coloring.h |  3 +++
> >>>>>  4 files changed, 40 insertions(+), 1 deletion(-)
> >>>>>
> >>>>> diff --git a/xen/arch/arm/llc-coloring.c b/xen/arch/arm/llc-coloring.c
> >>>>> index 5ce58aba70..a08614ec36 100644
> >>>>> --- a/xen/arch/arm/llc-coloring.c
> >>>>> +++ b/xen/arch/arm/llc-coloring.c
> >>>>> @@ -9,6 +9,7 @@
> >>>>>   * Carlo Nonato <carlo.nonato@xxxxxxxxxxxxxxx>
> >>>>>   */
> >>>>>  #include <xen/errno.h>
> >>>>> +#include <xen/guest_access.h>
> >>>>>  #include <xen/keyhandler.h>
> >>>>>  #include <xen/llc-coloring.h>
> >>>>>  #include <xen/param.h>
> >>>>> @@ -278,6 +279,22 @@ int dom0_set_llc_colors(struct domain *d)
> >>>>>      return domain_check_colors(d);
> >>>>>  }
> >>>>>
> >>>>> +int domain_set_llc_colors_domctl(struct domain *d,
> >>>>> +                                 const struct xen_domctl_set_llc_colors *config)
> >>>>> +{
> >>>>> +    if ( d->num_llc_colors )
> >>>>> +        return -EEXIST;
> >>>>> +
> >>>>> +    if ( domain_alloc_colors(d, config->num_llc_colors) )
> >>>>
> >>>> domain_alloc_colors() doesn't sanity check config->num_llc_colors before
> >>>> allocating the array. You want to check the size first so we would not
> >>>> try to allocate an arbitrary amount of memory.
> >>>>
> >>>>> +        return -ENOMEM;
> >>>>> +
> >>>>> +    if ( copy_from_guest(d->llc_colors, config->llc_colors,
> >>>>> +                         config->num_llc_colors) )
> >>>>> +        return -EFAULT;
> >>>>> +
> >>>>> +    return domain_check_colors(d);
> >>>>> +}
> >>>>> +
> >>>>>  /*
> >>>>>   * Local variables:
> >>>>>   * mode: C
> >>>>> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> >>>>> index f5a71ee5f7..b6867d0602 100644
> >>>>> --- a/xen/common/domctl.c
> >>>>> +++ b/xen/common/domctl.c
> >>>>> @@ -8,6 +8,7 @@
> >>>>>
> >>>>>  #include <xen/types.h>
> >>>>>  #include <xen/lib.h>
> >>>>> +#include <xen/llc-coloring.h>
> >>>>>  #include <xen/err.h>
> >>>>>  #include <xen/mm.h>
> >>>>>  #include <xen/sched.h>
> >>>>> @@ -858,6 +859,16 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> >>>>>              __HYPERVISOR_domctl, "h", u_domctl);
> >>>>>          break;
> >>>>>
> >>>>> +    case XEN_DOMCTL_set_llc_colors:
> >>>>> +        if ( !llc_coloring_enabled )
> >>>>> +            break;
> >>>>> +
> >>>>> +        ret = domain_set_llc_colors_domctl(d, &op->u.set_llc_colors);
> >>>>> +        if ( ret == -EEXIST )
> >>>>> +            printk(XENLOG_ERR
> >>>>> +                   "Can't set LLC colors on an already created domain\n");
> >>>>
> >>>> To me, the message doesn't match the check in
> >>>> domain_set_llc_colors_domctl(). But I think you want to check that no
> >>>> memory was yet allocated to the domain. Otherwise, your coloring will be
> >>>> wrong.
> >>>>
> >>>> Also, it is a bit unclear why you print a message for -EEXIST but not
> >>>> the others. In this instance, I would consider printing nothing at all.
> >>>
> >>> The problem here is that we don't support recoloring. When a domain is
> >>> created it receives a coloring configuration and it can't change. If this
> >>> hypercall is called twice, I have to stop the second time somehow.
> >>
> >> Looking at your check, what you prevent is a toolstack updating the array
> >> twice. But that would be ok (/!\ I am not saying we should allow it) as
> >> long as no memory has been allocated to the domain.
> >>
> >> But I also consider that we should not re-color once we have started to
> >> allocate memory for the domain (either RAM or P2M). This seems to be
> >> missed out in your check.
> >
> > So you want to be able to change colors if no memory has yet been allocated?
>
> No. I am saying that we should not be able to allow changing the
> colors after the memory has been allocated. To give an example, your
> current code would allow:
>
> 1) Setup the P2M pools or allocate RAM
> 2) Set the color
>
> This would render the coloring configuration moot.
>
> Whether we want to allow changing the coloring beforehand is a
> different question and, as I wrote earlier on, I don't mind if you want
> to forbid that.

At the moment I'm relying on the toolstack, in the sense that I know it will
set colors right after domain creation and before memory allocation. Calling
alloc_domheap_pages() without a coloring configuration makes Xen crash, so
it's mandatory to have the configuration done before any allocation. I know
that we shouldn't rely on the toolstack this much, but I didn't find a better
way.

Given this assumption, looking for an already existing color configuration of
a domain is sufficient to avoid what you are saying.

Is it possible to enforce such a constraint with domctl? I mean, to be sure
that this domctl will be called at a precise time.

Thanks.

> > I don't know how to check that.
>
> You can check the size of the P2M pool (d->arch.paging.p2m_total_pages)
> is still 0. I think for RAM, you can check d->tot_pages == 0.
>
> Cheers,
>
> --
> Julien Grall
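To make the first review point above concrete, a minimal sketch of the size
check before the allocation could look as follows. MAX_LLC_COLORS is a
hypothetical compile-time bound used only for illustration (the real limit
would have to come from the probed cache geometry); the rest of the function
mirrors the hunk quoted above.

    /* Sketch only: MAX_LLC_COLORS is a placeholder for the real upper bound. */
    int domain_set_llc_colors_domctl(struct domain *d,
                                     const struct xen_domctl_set_llc_colors *config)
    {
        /* Colors can be set at most once per domain: no re-coloring. */
        if ( d->num_llc_colors )
            return -EEXIST;

        /* Reject sizes that would let the caller drive an arbitrary allocation. */
        if ( !config->num_llc_colors || config->num_llc_colors > MAX_LLC_COLORS )
            return -EINVAL;

        if ( domain_alloc_colors(d, config->num_llc_colors) )
            return -ENOMEM;

        if ( copy_from_guest(d->llc_colors, config->llc_colors,
                             config->num_llc_colors) )
            return -EFAULT;

        return domain_check_colors(d);
    }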
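Similarly, the "no memory allocated yet" guard suggested at the end of the
thread could be expressed roughly as below. The two fields are the ones named
by Julien; the helper name and the -EBUSY error code are illustrative choices,
not part of the series.

    /* Sketch only: refuse to configure colors once the domain owns any memory. */
    static bool domain_has_memory(const struct domain *d)
    {
        /* RAM already allocated to the domain? */
        if ( d->tot_pages )
            return true;

        /* P2M pool already populated? */
        if ( d->arch.paging.p2m_total_pages )
            return true;

        return false;
    }

    /* In domain_set_llc_colors_domctl(), before allocating/copying the colors: */
    if ( domain_has_memory(d) )
        return -EBUSY;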