Thank you to all who replied, in the end 16 people. I am very grateful for all your suggestions!

The replies varied regarding the strategy to get the job done:
- tar in 2GB chunks, scp the chunks, untar
- a perl script defining getperm and setperm, run over the whole fs
- check for matching userids in /etc/passwd (made no difference)
- use ufsdump (no chance to mount ro, or unmount, because of time)
- use scp -rp (fails to preserve owner/group over the network)
- use rsync with public key authentication (this worked! Thanks, Jan)
- share the fs with the options anon=0, root=<client system>
- mount the share with the option vers=3 if I am on Solaris 10 (I am)

I had already been sharing the fs with the root=<client system> option, however a handful of userids were still seen on the client side as nobody. I added the anon=0 option, but this made no difference. I also added the problem userids to /etc/passwd on the client system, with no change in behaviour. The behaviour is very odd to me, as it seems that what I was doing with NFS should have worked. I will have to spend more time on this issue, as it will come up again in the future under other circumstances.

Jan D. pointed out that rsync should work; I was obviously doing it in a far too complicated manner - setting up an rsync server, then rsyncing through that. His suggestion was to use key authentication, and

# rsync -e ssh -aHvp S1:/path/to/copydir S2:/copy/me/here/

This got me out of trouble for now. Thanks again to everyone who replied.

regards
Markus

On Wednesday 26 September 2007, Markus Mayer wrote:
> Hello all,
>
> I need to transfer a large amount of data (about 730GB) from one machine
> to another and am not able to take anything offline. As an rsync
> client/server setup doesn't preserve the file ownership and groups at all,
> I am trying to do it using an NFS share. This method is also running into
> problems.
>
> I have a share on one machine, set up with the following command:
> S1# share -F nfs -o ro=<servername>,root=<servername> /share/location
> On the second machine, I can mount the share using:
> S2# mount -F nfs -o ro <servername>:/share/location /mnt/remoteserver
>
> The problem I run into is that on the first machine, files have their
> owner and group, however on the NFS mount on the second machine, the
> owner is often changed to nobody. An example of such a file is:
> S1# ls -ln /share/location/dir1/afile
> -rw-rw-r-- 1 206 201 1348 Sep 11 2006 afile
> On the second machine, however, I get:
> S2# ls -ln /mnt/remoteserver/dir1/afile
> -rw-rw-r-- 1 60001 201 1348 Sep 11 2006 afile
>
> There does seem to be some consistency in this behaviour, albeit a
> strange one. All files on S1 with owner root have their ownership seen
> as nobody on S2. For other files, if the file owner on S1 has id 206,
> the owner is seen as nobody on S2, however if the owner is 205 or 207,
> the owner is preserved. On S1 there is a user entry for id 206, so I put
> that id on S2 as well, with no change in behaviour regarding ownership
> preservation.
>
> I'm at an absolute loss as to what's happening here and would be grateful
> for any help anyone can give me.
>
> regards
> Markus
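
For reference, a minimal sketch of the key-based rsync approach described above. The hostnames and paths are the placeholders from the thread; it assumes root can log in over ssh between the two hosts and that the copy is started on S2 (rsync cannot copy between two remote hosts in a single invocation), so the destination is a local path:

S2# ssh-keygen -t rsa
    (accept the defaults; an empty passphrase lets the copy run unattended)
S2# cat ~/.ssh/id_rsa.pub | ssh root@S1 'cat >> ~/.ssh/authorized_keys'
S2# rsync -e ssh -aHvp S1:/path/to/copydir /copy/me/here/
    (-a preserves owner, group, permissions and times when run as root;
     -H preserves hard links; -v is verbose)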
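
And a sketch of the NFS variant combining the suggested options, untested here since rsync solved the immediate problem. The server and client names are placeholders; vers=3 forces an NFSv3 mount on Solaris 10, which may avoid the NFSv4 owner-name mapping that can report unmapped users as nobody:

S1# share -F nfs -o ro=<client system>,root=<client system>,anon=0 /share/location
S2# mount -F nfs -o ro,vers=3 <servername>:/share/location /mnt/remoteserver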