Re: [Xen-devel] [PATCH RFC V4 1/5] xen: Emulate with no writes
On 08/04/2014 05:09 PM, Jan Beulich wrote:
>>>> On 04.08.14 at 13:30, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> +static int hvmemul_rep_ins_discard(
>> +    uint16_t src_port,
>> +    enum x86_segment dst_seg,
>> +    unsigned long dst_offset,
>> +    unsigned int bytes_per_rep,
>> +    unsigned long *reps,
>> +    struct x86_emulate_ctxt *ctxt)
>> +{
>> +    return X86EMUL_OKAY;
>> +}
>> +
>> +static int hvmemul_rep_movs_discard(
>> +    enum x86_segment src_seg,
>> +    unsigned long src_offset,
>> +    enum x86_segment dst_seg,
>> +    unsigned long dst_offset,
>> +    unsigned int bytes_per_rep,
>> +    unsigned long *reps,
>> +    struct x86_emulate_ctxt *ctxt)
>> +{
>> +    return X86EMUL_OKAY;
>> +}
>
> ... these don't seem to be: I don't think you can just drop the other
> half of the operation (i.e. the port or MMIO read).
I've been looking at hvmemul_do_io() (in arch/x86/hvm/emulate.c, line
52), which is what the above functions are reduced to. At line 88 I
came across the following code:
    /*
     * Weird-sized accesses have undefined behaviour: we discard writes
     * and read all-ones.
     */
    if ( unlikely((size > sizeof(long)) || (size & (size - 1))) )
    {
        gdprintk(XENLOG_WARNING, "bad mmio size %d\n", size);
        ASSERT(p_data != NULL); /* cannot happen with a REP prefix */
        if ( dir == IOREQ_READ )
            memset(p_data, ~0, size);
        if ( ram_page )
            put_page(ram_page);
        return X86EMUL_UNHANDLEABLE;
    }
which does drop the rest of the operation (though it does so by
returning X86EMUL_UNHANDLEABLE). hvmemul_rep_ins() itself looks like this:
static int hvmemul_rep_ins(
    uint16_t src_port,
    enum x86_segment dst_seg,
    unsigned long dst_offset,
    unsigned int bytes_per_rep,
    unsigned long *reps,
    struct x86_emulate_ctxt *ctxt)
{
    struct hvm_emulate_ctxt *hvmemul_ctxt =
        container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
    unsigned long addr;
    uint32_t pfec = PFEC_page_present | PFEC_write_access;
    paddr_t gpa;
    p2m_type_t p2mt;
    int rc;

    rc = hvmemul_virtual_to_linear(
        dst_seg, dst_offset, bytes_per_rep, reps, hvm_access_write,
        hvmemul_ctxt, &addr);
    if ( rc != X86EMUL_OKAY )
        return rc;

    if ( hvmemul_ctxt->seg_reg[x86_seg_ss].attr.fields.dpl == 3 )
        pfec |= PFEC_user_mode;

    rc = hvmemul_linear_to_phys(
        addr, &gpa, bytes_per_rep, reps, pfec, hvmemul_ctxt);
    if ( rc != X86EMUL_OKAY )
        return rc;

    (void) get_gfn_query_unlocked(current->domain, gpa >> PAGE_SHIFT, &p2mt);

    if ( p2mt == p2m_mmio_direct || p2mt == p2m_mmio_dm )
        return X86EMUL_UNHANDLEABLE;

    return hvmemul_do_pio(src_port, reps, bytes_per_rep, gpa, IOREQ_READ,
                          !!(ctxt->regs->eflags & X86_EFLAGS_DF), NULL);
}
So if I understand this code correctly, hvmemul_rep_ins() performs a few
checks, and then calls hvmemul_do_pio(), which ends up calling
hvmemul_do_io(), which seems to discard the write rather unceremoniously
for weird-sized accesses. This would seem to roughly correspond to just
returning X86EMUL_UNHANDLEABLE from hvmemul_rep_ins() for that special
case (with no MMIO code executed).
Did I misunderstand something?
Thanks,
Razvan Cojocaru
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel