[Xen-users] Re: [rhelv5-list] shared storage manual remount ...
On Mon, Feb 08, 2010 at 04:46:59PM +0100, Zoran Popović wrote:
> Tell me what you would like to know about my environment - I was trying
> to give all the relevant information, at least concerning this issue.
> And, by the way, echo 1 > /proc/sys/vm/drop_caches does what I need -
> if I do this I get the results I need (and, for example, if I don't do it
> after a snapshot restore on the storage, my HVM Windows guest usually
> starts chkdsk during boot).

So you're saying you need drop_caches *with* phy: for the other host to see
the disk contents?

-- Pasi

> ZP.
>
> 2010/2/8 Zavodsky, Daniel (GE Capital) <[1]daniel.zavodsky@xxxxxx>
>
>     Hello,
>     I have tried this and it works here... caching is not used for phy:
>     devices, only buffering, but that is flushed frequently, so it is not
>     a problem. Maybe you should post some more info about your setup?
>
>     Regards,
>     Daniel
>
>     ----------------------------------------------------------------------
>
>     From: [2]rhelv5-list-bounces@xxxxxxxxxx
>     [mailto:[3]rhelv5-list-bounces@xxxxxxxxxx] On Behalf Of Zoran Popović
>     Sent: Thursday, February 04, 2010 1:40 AM
>     To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list;
>     [4]xen-users@xxxxxxxxxxxxxxxxxxx
>     Subject: [rhelv5-list] shared storage manual remount ...
>
>     I am wondering if there is a way to solve the following problem. I
>     suppose the usual approach is a distributed file system with locking,
>     as is possible with GFS and Red Hat Cluster Suite or similar, but I am
>     interested in doing some of this manually, and ONLY with raw devices
>     (no file system), or simply in learning the general principles.
>
>     The case: I have a VLUN (on a FC SAN) presented to two servers, but
>     mounted on only one host - to be more precise, used by a Xen HVM guest
>     as a raw physical phy: drive. Then I shut this guest down and bring it
>     up manually on the second host - it can see the changed images and
>     make changes to the presented disks.
>     Then I shut it down there and bring it up again on the first host -
>     BUT THEN this guest (or host) does not see the changes made by the
>     second system; it still sees the disks the way it left them.
>
>     Or even better: if I bring the HVM guest up on a host, then shut it
>     down, restore its disks on the storage (I am using an HP EVA8400,
>     restoring the original disk from a snapshot - it does have redundant
>     controllers, but their caches must surely be in sync), and then bring
>     it up again, it still sees the disks as they were before the restore.
>     But if I _RESTART_ the host, it sees the restored disks correctly.
>
>     Now I am wondering why this is happening, and whether it is possible
>     to resync with the storage without a restart (I would not like that in
>     production! - and on our Windows systems it is possible). I have tried
>     sync (but that is just flushing the buffer cache), and I have not yet
>     tried echo 3 > /proc/sys/vm/drop_caches after that (I have just come
>     across some articles about it), and I am not sure whether that would
>     really invalidate the cache and help me. What is the right way of
>     doing this? Please, help...
>
>     ZP.
>
>     _______________________________________________
>     rhelv5-list mailing list
>     [5]rhelv5-list@xxxxxxxxxx
>     [6]https://www.redhat.com/mailman/listinfo/rhelv5-list
>
> References
>
>    Visible links
>    1. mailto:daniel.zavodsky@xxxxxx
>    2. mailto:rhelv5-list-bounces@xxxxxxxxxx
>    3. mailto:rhelv5-list-bounces@xxxxxxxxxx
>    4. mailto:xen-users@xxxxxxxxxxxxxxxxxxx
>    5. mailto:rhelv5-list@xxxxxxxxxx
>    6. https://www.redhat.com/mailman/listinfo/rhelv5-list

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
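[Editor's note] The flush-and-invalidate sequence discussed in the thread (sync plus writing to /proc/sys/vm/drop_caches) can be collected into a small helper run in dom0 before starting the guest on the other host. This is a minimal sketch, not the list's recommended procedure: the device path /dev/mapper/vlun is a hypothetical placeholder, and blockdev --flushbufs is an additional step commonly used for this purpose that the thread itself does not mention. The real commands require root; by default the script only prints what it would run.

```shell
#!/bin/sh
# Sketch: resync dom0's view of a shared phy: block device before starting
# the guest on this host. ASSUMPTIONS: /dev/mapper/vlun is a hypothetical
# device name - substitute your actual VLUN. Pass "apply" as the second
# argument to actually run the commands (requires root); the default mode
# only prints them.

DEV="${1:-/dev/mapper/vlun}"
MODE="${2:-print}"

if [ "$MODE" = "apply" ]; then
    sync                                # flush dirty pages out to the storage
    blockdev --flushbufs "$DEV"         # flush this device's buffer cache
    echo 1 > /proc/sys/vm/drop_caches   # drop clean page-cache entries
else
    echo "would run: sync; blockdev --flushbufs $DEV; echo 1 > /proc/sys/vm/drop_caches"
fi
```

Usage: run it in print mode first to review the commands, then rerun with "apply" as root on the host that is about to start the guest. Note that, as the thread stresses, this does not provide any locking - the guest must be fully shut down on the other host first.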