Re: [Xen-devel] [PATCH V13 4/7] xen/arm: Data abort exception (R/W) mem_events.
On Fri, 2015-03-06 at 22:24 +0100, Tamas K Lengyel wrote:
> This patch enables to store, set, check and deliver LPAE R/W mem_events.
> As the LPAE PTE's lack enough available software programmable bits,
> we store the permissions in a Radix tree. The tree is only looked at if
> mem_access_enabled is turned on.

But it is maintained/updated regardless, is that deliberate?

> +static int p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long pfn,
> +                                    p2m_access_t a)
> +{
> +    int rc;
> +
> +    if ( p2m_access_rwx == a )
> +    {
> +        if ( p2m->mem_access_enabled )

In particular this is gated, but the rest of the function appears not to
be, which seems inconsistent...

> +            radix_tree_delete(&p2m->mem_access_settings, pfn);
> +
> +        return 0;
> +    }
> +
> +    rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
> +                           radix_tree_int_to_ptr(a));
> +    if ( rc == -EEXIST )
> +    {
> +        /* If a setting existed already, change it to the new one */
> +        radix_tree_replace_slot(
> +            radix_tree_lookup_slot(
> +                &p2m->mem_access_settings, pfn),
> +            radix_tree_int_to_ptr(a));
> +        rc = 0;
> +    }
> +
> +    return rc;
> +}
> +
>  enum p2m_operation {
>      INSERT,
>      ALLOCATE,
>      REMOVE,
>      RELINQUISH,
>      CACHEFLUSH,
> +    MEMACCESS,
>  };
>
>  /* Put any references on the single 4K page referenced by pte.  TODO:
> @@ -560,13 +592,22 @@ static int apply_one_level(struct domain *d,
>          if ( p2m_valid(orig_pte) )
>              return P2M_ONE_DESCEND;
>
> -        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
> +        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
> +             /* We only create superpages when mem_access is not in use. */
> +             (level == 3 || (level < 3 && !p2m->mem_access_enabled)) )

I don't think we can get away with adding this check to
is_mapping_aligned (it's used elsewhere), but perhaps you could wrap
this condition in a helper to use in both places.
mapping_allowed_at_level(p2m, level) or some such.
> -        /* We do not handle replacing an existing table with a superpage */
> -        (level == 3 || !p2m_table(orig_pte)) )
> +        /* We do not handle replacing an existing table with a superpage
> +         * or when mem_access is in use. */
> +        (level == 3 || (!p2m_table(orig_pte) && !p2m->mem_access_enabled)) )

Actually, this is very subtly different, isn't it? Can it be unified? If
not then ignore the helper idea.

> @@ -760,6 +807,47 @@ static int apply_one_level(struct domain *d,
>              *addr += PAGE_SIZE;
>              return P2M_ONE_PROGRESS_NOP;
>          }
> +
> +    case MEMACCESS:
> +        if ( level < 3 )
> +        {
> +            if ( !p2m_valid(orig_pte) )
> +            {
> +                *addr += level_size;
> +                return P2M_ONE_PROGRESS_NOP;
> +            }
> +
> +            /* Shatter large pages as we descend */
> +            if ( p2m_mapping(orig_pte) )
> +            {
> +                rc = p2m_shatter_page(d, entry, level, flush_cache);
> +                if ( rc < 0 )
> +                    return rc;
> +            } /* else: an existing table mapping -> descend */
> +
> +            return P2M_ONE_DESCEND;
> +        }
> +        else
> +        {
> +            pte = orig_pte;
> +
> +            if ( !p2m_table(pte) )
> +                pte.bits = 0;

What is this about? Just clobbering an invalid PTE?

> @@ -783,6 +871,8 @@ static int apply_p2m_changes(struct domain *d,
>      unsigned int cur_root_table = ~0;
>      unsigned int cur_offset[4] = { ~0, ~0, ~0, ~0 };
>      unsigned int count = 0;
> +    const unsigned long sgfn = paddr_to_pfn(start_gpaddr),
> +                        egfn = paddr_to_pfn(end_gpaddr);
>      bool_t flush = false;
>      bool_t flush_pt;
>
> @@ -912,6 +1006,12 @@ static int apply_p2m_changes(struct domain *d,
>      rc = 0;
>
> out:
> +    if ( flush )
> +    {
> +        flush_tlb_domain(d);
> +        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
> +    }

Is moving the flush out of the loop an independent bug fix? If so please
do in a separate commit with a rationale in the commit log. If it is
somehow related to the changes here then please mention it in this
commit log, since it's a bit subtle.
> +
>      if ( rc < 0 && ( op == INSERT || op == ALLOCATE ) &&
>           addr != start_gpaddr )
>      {
> @@ -1281,6 +1381,254 @@ void __init setup_virt_paging(void)
>      smp_call_function(setup_virt_paging_one, (void *)val, 1);
>  }
>
> +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec npfec)

This is different to the current x86 prototype, is that due to your
other cleanup series?

> +{
> +    int rc;
> +    bool_t violation;
> +    xenmem_access_t xma;
> +    mem_event_request_t *req;
> +    struct vcpu *v = current;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
> +
> +    /* Mem_access is not in use. */
> +    if ( !p2m->mem_access_enabled )
> +        return true;
> +
> +    rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
> +    if ( rc )
> +        return true;
> +
> +    /* Now check for mem_access violation. */
> +    switch ( xma )
> +    {
> +    case XENMEM_access_rwx:
> +        violation = false;
> +        break;
> +    case XENMEM_access_rw:
> +        violation = npfec.insn_fetch;
> +        break;
> +    case XENMEM_access_wx:
> +        violation = npfec.read_access;
> +        break;
> +    case XENMEM_access_rx:
> +    case XENMEM_access_rx2rw:
> +        violation = npfec.write_access;
> +        break;
> +    case XENMEM_access_x:
> +        violation = npfec.read_access || npfec.write_access;
> +        break;
> +    case XENMEM_access_w:
> +        violation = npfec.read_access || npfec.insn_fetch;
> +        break;
> +    case XENMEM_access_r:
> +        violation = npfec.write_access || npfec.insn_fetch;
> +        break;
> +    default:
> +    case XENMEM_access_n:
> +    case XENMEM_access_n2rwx:
> +        violation = true;
> +        break;
> +    }
> +
> +    if ( !violation )
> +        return true;

The preceding section looks pretty similar to the guts of x86's
p2m_mem_event_emulate_check, can they be combined?

> +
> +    /* First, handle rx2rw and n2rwx conversion automatically.
> +     */
> +    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
> +    {
> +        rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
> +                                0, ~0, XENMEM_access_rw);
> +        return false;
> +    }
> +    else if ( xma == XENMEM_access_n2rwx )
> +    {
> +        rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
> +                                0, ~0, XENMEM_access_rwx);
> +    }

This looks like a bit of p2m_mem_access_check, can it be made common?

> +
> +    /* Otherwise, check if there is a memory event listener, and send the
> +     * message along */
> +    if ( !mem_event_check_ring(&v->domain->mem_event->access) )
> +    {
> +        /* No listener */
> +        if ( p2m->access_required )
> +        {
> +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> +                     "no mem_event listener VCPU %d, dom %d\n",
> +                     v->vcpu_id, v->domain->domain_id);
> +            domain_crash(v->domain);
> +        }
> +        else
> +        {
> +            /* n2rwx was already handled */
> +            if ( xma != XENMEM_access_n2rwx )
> +            {
> +                /* A listener is not required, so clear the access
> +                 * restrictions. */
> +                rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
> +                                        0, ~0, XENMEM_access_rwx);
> +            }
> +        }
> +
> +        /* No need to reinject */
> +        return false;
> +    }

And this

> +
> +    req = xzalloc(mem_event_request_t);
> +    if ( req )
> +    {
> +        req->reason = MEM_EVENT_REASON_VIOLATION;
> +        if ( xma != XENMEM_access_n2rwx )
> +            req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
> +        req->gfn = gpa >> PAGE_SHIFT;
> +        req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
> +        req->gla = gla;
> +        req->gla_valid = npfec.gla_valid;
> +        req->access_r = npfec.read_access;
> +        req->access_w = npfec.write_access;
> +        req->access_x = npfec.insn_fetch;
> +        if ( npfec_kind_in_gpt == npfec.kind )
> +            req->fault_in_gpt = 1;
> +        if ( npfec_kind_with_gla == npfec.kind )
> +            req->fault_with_gla = 1;
> +        req->vcpu_id = v->vcpu_id;
> +
> +        mem_access_send_req(v->domain, req);
> +        xfree(req);
> +    }
> +
> +    /* Pause the current VCPU */
> +    if ( xma != XENMEM_access_n2rwx )
> +        mem_event_vcpu_pause(v);
> +
> +    return false;
> +}
> +
> +/* Set access type for a region of pfns.
> + * If start_pfn == -1ul, sets the default access type */
> +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t access)

and this function is nearly identical to the x86 one too.

> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_access_t a;
> +    long rc = 0;
> +
> +    static const p2m_access_t memaccess[] = {
> +#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    switch ( access )
> +    {
> +    case 0 ... ARRAY_SIZE(memaccess) - 1:
> +        a = memaccess[access];
> +        break;
> +    case XENMEM_access_default:
> +        a = p2m->default_access;
> +        break;
> +    default:
> +        return -EINVAL;
> +    }
> +
> +    /*
> +     * Flip mem_access_enabled to true when a permission is set, as to prevent
> +     * allocating or inserting super-pages.
> +     */
> +    p2m->mem_access_enabled = true;
> +
> +    /* If request to set default access. */
> +    if ( pfn == ~0ul )
> +    {
> +        p2m->default_access = a;
> +        return 0;
> +    }
> +
> +    rc = apply_p2m_changes(d, MEMACCESS,
> +                           pfn_to_paddr(pfn+start), pfn_to_paddr(pfn+nr),
> +                           0, MATTR_MEM, mask, 0, a);
> +    if ( rc < 0 )
> +        return rc;
> +    else if ( rc > 0 )
> +        return start + rc;
> +
> +    return 0;
> +}
> +
> +int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
> +                       xenmem_access_t *access)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    void *i;
> +    unsigned int index;
> +
> +    static const xenmem_access_t memaccess[] = {

Would be nice not to have this static const thing twice in .rodata.
> +#define ACCESS(ac) [p2m_access_##ac] = XENMEM_access_##ac
> +        ACCESS(n),
> +        ACCESS(r),
> +        ACCESS(w),
> +        ACCESS(rw),
> +        ACCESS(x),
> +        ACCESS(rx),
> +        ACCESS(wx),
> +        ACCESS(rwx),
> +        ACCESS(rx2rw),
> +        ACCESS(n2rwx),
> +#undef ACCESS
> +    };
> +
> +    /* If no setting was ever set, just return rwx. */
> +    if ( !p2m->mem_access_enabled )
> +    {
> +        *access = XENMEM_access_rwx;
> +        return 0;
> +    }
> +
> +    /* If request to get default access */
> +    if ( gpfn == ~0ull )

We should have a suitable constant for this, I think; INVALID_MFN looks
like the one.

> +    {
> +        *access = memaccess[p2m->default_access];
> +        return 0;
> +    }
> +
> +    spin_lock(&p2m->lock);
> +    i = radix_tree_lookup(&p2m->mem_access_settings, gpfn);
> +    spin_unlock(&p2m->lock);
> +
> +    if ( !i )
> +    {
> +        /*
> +         * No setting was found in the Radix tree. Check if the
> +         * entry exists in the page-tables.
> +         */
> +        paddr_t maddr = p2m_lookup(d, gpfn << PAGE_SHIFT, NULL);
> +        if ( INVALID_PADDR == maddr )
> +            return -ESRCH;
> +
> +        /* If entry exists then its rwx. */

it's, please

> +        *access = XENMEM_access_rwx;
> +    }
> +    else
> +    {
> +        /* Setting was found in the Radix tree. */
> +        index = radix_tree_ptr_to_int(i);
> +        if ( index >= ARRAY_SIZE(memaccess) )
> +            return -ERANGE;
> +
> +        *access = memaccess[index];
> +    }
> +
> +    return 0;
> +}
> +
> /*
>  * Local variables:
>  * mode: C

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel