
RE: [Xen-users] hiccups in adopting xen in my desktop...


  • To: "S.P.T.Krishnan" <sptkrishnan@xxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
  • Date: Tue, 14 Nov 2006 15:20:33 +0100
  • Delivery-date: Tue, 14 Nov 2006 06:29:59 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AccH9LslpGyS+xYKSqulaqkMqX6scAAAHWPg
  • Thread-topic: [Xen-users] hiccups in adopting xen in my desktop...

 

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> S.P.T.Krishnan
> Sent: 14 November 2006 13:56
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] hiccups in adopting xen in my desktop...
> 
> Hi,
> 
> This is my first post to this mailing list, so I would 
> appreciate your patience.
> 
> I have been having these xen questions for quite some time... 
> tried googling around but didn't get answers.
> 
> My scenario is this. 
> 
> 1. I am not new to linux.
> 2. I have been using VMware, Qemu, Argos, Bochs for some time, 
> therefore I am not new to virtualisation either.
> 3. Currently, I use VMware on a Linux host to run Windows XP; 
> 
> What do I lack and why I want Xen (in the hope that Xen will 
> fill in the gaps) ?
> 
> Currently, VMware virtualises the hardware - say HDD, eth, 
> floppy, USB, monitor, keyboard, mouse - and presents it to 
> the guest OS. 
> 
> I think the following translations occur for any command to 
> run in a guest OS.  (Please correct me if I am wrong).
> 
> guest.app <-> guest.os <-> vmware <-> host.os <-> hw
> 
> a. By using xen, we can eliminate 1 or 2 layers of 
> translations and naturally get better response. 
> b. the memory limitations of the host os on its processes 
> will no longer be applicable to the guest os.  currently, 
> vmware workstation runs as a host process.
> c. all the other benefits applicable in the vmware case also 
> apply here. 
> 
> What I want or wish ?
> (these are the real questions)
> 
> A. Does Xen allow me to partition the physical hw for 
> different virtual OS ?
> Let me explain.  say I have 8 usb ports... and 3 guest OS... 
> can I ask xen to assign 4 ports to guest.os-1 and 2 each to 2 
> other OS ? so that each OS doesn't trip on each other... 
> currently, when you install vmware in windows, during 
> installation, the installer will highly recommend that the 
> user disable auto-play feature of cdrom drive in windows.  
> The reason for this, I think, is that vmware is neither 
> able to virtualise the hw (as it does for hdd, eth) nor 
> restrict the hw (cdrom) to one OS.  Part of the problem could be that 
> vmware itself is running as a process in the host os.  This 
> scenario actually embeds a serious security risk. 

You cannot split the USB ports unless they are on separate PCI devices
- some PCs have a separate USB controller for each port, while others
have a USB hub integrated into the USB controller, so it appears that
you have one USB controller even though there are four USB connectors
on the machine... 

Currently, there is also no support for doing this in HVM (fully
virtualized) domains, so it works only in para-virtual domains. 
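To make the PCI assignment concrete, here is a minimal sketch of a
para-virtual guest config (xm config files are plain Python). The PCI
address 0000:00:1d.0 and all paths here are hypothetical - look up your
own with lspci. The device must also be hidden from Dom0 first, e.g.
via the pciback driver (pciback.hide=(0000:00:1d.0) on the Dom0 kernel
command line).

```python
# Hypothetical xm guest config: hand one entire USB controller to this
# domain.  Addresses and paths are examples only - adjust for your box.
kernel = "/boot/vmlinuz-2.6-xenU"
name = "guest1"
memory = 256
disk = ["file:/var/xen/guest1.img,hda,w"]
pci = ["0000:00:1d.0"]  # the whole PCI device, not a single USB port
```

Note that the assignment is per PCI device, which is exactly why ports
sharing one controller cannot be split between domains.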
> 
> B. In general, I would like to know if the same be applied to 
> any hardware device in the system ? like serial ports, 
> parallel ports, and most importantly wifi interfaces.  wifi 
> is very interesting in that where eth allows for multiple 
> associations and thereby multiple IP address per physical 
> interface, wifi currently allows only one association 
> therefore 1 IP address only.  In this case, I would then say, 
> for example assign wifi to WindowsXP and assign eth to Linux 
> VM respectively. 

As long as the WIFI is on a separate PCI device, it should work just
fine - but the above limitation about HVM domains still applies. 
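A quick way to check that precondition is to look at lspci output: the
WIFI adapter must show up as its own PCI device/function. A small
sketch, using hypothetical lspci lines:

```python
# Filter lspci-style output for wireless adapters; each match's bus:dev.fn
# address is a candidate for assignment to one domain.  The sample output
# is made up - on a real box, feed in the output of `lspci` itself.
sample_lspci = """\
00:19.0 Ethernet controller: Intel Corporation 82566MM Gigabit
00:1d.0 USB Controller: Intel Corporation 82801H UHCI #1
03:00.0 Network controller: Intel Corporation PRO/Wireless 3945ABG
"""

wifi_addrs = [line.split()[0]
              for line in sample_lspci.splitlines()
              if "Network controller" in line]
print(wifi_addrs)  # -> ['03:00.0']
```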
> 
> C. My final question for this round is more of an 
> understanding one, on how things actually work in xen.
> Let me first state my understanding of VMware and Qemu.
> (I think) VMware virtualises the real hardware into multiple 
> instances of virtual hardware.  For example, if you have a 
> network card from companyX, then it will create multiple 
> virtual companyX eth, one for each VM.  It is still up to the 
> guest OS to have the respective drivers... 
> Qemu, on the other hand, will always create virtual interfaces 
> that are very old, so that all major OS are likely to carry 
> the driver software for that hardware irrespective of the card 
> you have.

HVM domains use a modified QEMU to provide the device model, which means
that your HVM domain will not (in the present form, as per the above)
see _ANY_ real hardware - it all goes through qemu-dm. 

In the para-virtual world, where Xen originated, the driver model is one
of front-end/back-end driver pairs: the front-end driver presents the OS
interface for a device of that class (say, a hard-disk driver), and
simply packetizes each request and forwards it to the back-end driver,
which sits in Dom0 (normally - there are other variants). Dom0 receives
the request from the front-end driver, acts on it, and sends back a
packet with the result of the request (such as the contents of a
"disk sector" - where the "disk" may be any "file-like thing" assigned
to the VM as "the disk", including a networked file-system or a
physical disk). 
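The round-trip described above can be sketched in a few lines of Python
(assumptions mine - real Xen uses shared-memory rings and event
channels, for which plain function calls stand in here):

```python
# Minimal sketch of the split-driver model: the front-end turns an
# OS-level request into a small message; the back-end in Dom0 does the
# real work against the backing store and replies with the result.

def backend_handle(request):
    """Back-end (Dom0): act on the request against the real backing store."""
    disk = {0: b"boot", 1: b"data"}  # "the disk" may be a file, LV, ...
    if request["op"] == "read":
        return {"status": "ok", "data": disk.get(request["sector"], b"")}
    return {"status": "unsupported"}

def frontend_read(sector):
    """Front-end (guest): packetize a block-read and forward it to Dom0."""
    request = {"op": "read", "sector": sector}
    return backend_handle(request)  # stands in for the ring + event channel

print(frontend_read(1))  # -> {'status': 'ok', 'data': b'data'}
```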

There are para-virtual drivers in limited availability that allow a
Windows domain to talk directly to Dom0 to virtualize an interface - so,
for example, you could have a para-virtual front-end hard-disk driver
that talks to the Dom0 back-end to read the physical device. I think
XenSource or VirtualIron can supply these drivers if you purchase their
commercial product. I'm not aware of any open-source or downloadable
drivers. 

> 
> Now, for a transaction, the guest.app does a hand-over to 
> guest.os, which hands over to vmware, which hands over to 
> host.os, and then it runs on the hardware.
> 
> My understanding of xen is that xen does not provide a 
> virtual driver connecting the virtual hardware to the 
> real hardware.  Instead it merely transports the guest OS 
> instructions to the real hardware. 

Ehm, yes and no. It depends on which model we're using. You can assign
hardware to a domain, in which case that's correct - _BUT_ the whole
device has to be assigned, so, for example, the entire hard-disk
controller, not just one of a pair of IDE disks... 

If you don't assign the entire hardware device, there is the
para-virtual model. This is used for para-virtual linux kernels, and as
I mentioned above, there are some solutions available to solve this for
Windows too. If there's no PV driver (i.e. neither a PV kernel nor a PV
driver), you'll end up with qemu-dm for Windows, and that has the same
long path: guest-os -> Xen -> Dom0 -> qemu-dm.

There is work underway to shorten this chain by moving qemu-dm into a
"stub domain" - essentially a minimal linux kernel that runs only
qemu-dm and executes as "part of" the guest OS. 
> 
> If my view is correct, then in Xen, if new hardware is 
> installed and say only one OS supports it and that hw is 
> assigned (if that's possible) to another os, the os will complain...

Dom0 is the only domain that sees all hardware, unless a device is
specifically assigned to another domain and hidden from Dom0. 

Just like any other hardware not understood by some OS, the OS will do
whatever it does when it has hardware that it doesn't understand -
whether that's called complaining or silently ignoring it depends on
which OS it is. 

--
Mats
> 
> -----
> 
> I hope my email is not long enough to annoy you.  Overall, I 
> am thinking of having a desktop setup where I run Xen and 
> then I run multiple OS as VMs.  However, before I make the 
> jump I want to make sure it is a long-term solution. 
> 
> Thank you for your time
> 
> regards,
> Krishnan 
> 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

