
NetApp File Copy

April 11th, 2009

It always comes up: how can I copy single files, or large areas of data, directly from the NetApp console? Generally the answer comes back "you can't, use RoboCopy or rsync or another file migration tool". However, there are definitely ways of copying files around directly from the filer itself, and often this is the most efficient way of doing it! These commands just aren't the most intuitive or well documented.

There may be other methods, and if you have something you have used in the past, or know of, please feel free to share! Not all methods are suitable for all tasks, but each has its own individual uses.


ndmpcopy

This is often overlooked as a file / folder copy command, and is often just used to migrate entire volumes around. In fact it can be used to copy individual folders or files around, and even better, it can be used to copy data to other filers! Make sure NDMP is enabled first (ndmpd on). The syntax is quite simple…

ndmpcopy /vol/vol_source_name/folder/file /vol/vol_dest_name/file

Just to break this down, we are choosing to copy a file from "/vol/vol_source_name/folder" and we want to copy it into "/vol/vol_dest_name". This isn't too restrictive: we don't have to keep the same path, and we can even copy things about within the same volume (such as copying things into QTrees if you need). You can copy anything from an entire volume, to a single QTree, down to a single folder way down in the directory tree. The only real restrictions are that you cannot use wildcards, and you cannot select multiple files to copy.

If you want to copy files from one filer to another, we simply extend this syntax…

ndmpcopy -sa <user>:<pass> -da <user>:<pass> source_filer:/vol/vol_source_name/folder/file destination_filer:/vol/vol_dest_name/file

Replace <user> and <pass> with the source filer (-sa) and destination filer (-da) logins. Here we copy a single file from one location on one filer to another location on another!

We can also define the incremental level of the transfer. By default the system will do a level 0 (full) transfer, but you can request a single level 1 or level 2 incremental transfer. If the data has changed too much, or too much time has passed since the last copy, this may fail or may take longer than a clean level 0.
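As a sketch (the -l flag is how the man page linked below describes selecting the level, so verify it against your ONTAP version), a level 1 incremental following an earlier full copy might look like:

ndmpcopy -l 1 /vol/vol_source_name/folder /vol/vol_dest_name/folder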

This can be very useful, and as the filer is doing this at block level, all ACLs are completely preserved. Take care to ensure the security style is the same on the destination, however, to prevent ACLs from being converted.
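To check (and if needed change) the security style before the copy, the standard qtree commands are along these lines (a sketch; "ntfs" here is just an example style):

qtree security /vol/vol_dest_name
qtree security /vol/vol_dest_name ntfs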

The ONTAP manual page for ndmpcopy can be found at http://now.netapp.com/NOW/knowledge/docs/ontap/rel7261/html/ontap/cmdref/man1/ndmpcopy.1.htm


mv

This is a "priv set advanced" command, and so apparently reserved for "Network Appliance personnel". "mv" is very straightforward: give it a source and a destination, and a single file will get moved. Remember this is a move, so it is not technically a file copy at all.

mv <source_file> <destination_file>
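Since this lives in advanced privilege mode, the full sequence would look something like this (a sketch; I believe mv only works within a single volume, so treat that as an assumption to verify first):

priv set advanced
mv /vol/vol_name/folder1/myfile /vol/vol_name/folder2/myfile
priv set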

flex clone

This is a real cheat, but a great one! You clone an entire volume based on a snapshot, then split the clone off from its parent. This is a great way of getting an entire volume copied with minimal disruption. The clone is created almost immediately, and can then be brought online and used live. The clone split operation happens in the background, so you can move things and be live at the new location in very little time at all.

vol clone create new_vol -s volume -b source_vol source_snap

Where “new_vol” is the new volume you want to create, “-s volume” is the space reservation, “-b source_vol” is the parent volume that the clone will be based on and “source_snap” is the snapshot you want to base the clone on.

vol clone split start new_vol

This will then start the split operation on "new_vol".

vol copy

Rather than a flex clone, if you haven't got that licensed, you can do a full vol copy. This is effectively the same as a vol clone, but the entire operation must complete before the volume is online and available. You need to create the destination volume first, and then restrict it so that it is ready for the copy. Then you start the copy process.
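The preparation might look like this (a sketch; the aggregate name and size are placeholders):

vol create dest_vol aggr0 100g
vol restrict dest_vol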

vol copy start -s snap_name source_vol dest_vol

"-s snap_name" defines the snapshot you want to base the copy on, while "source_vol" and "dest_vol" define the source and destination for the copy. A capital "-S" can instead be used to copy across all the snapshots contained in the volume, which can be very useful if you need to copy all the backups within a volume as well as the live data.

lun clone

If you need to copy an entire LUN, and again you haven’t got flex clone licensed, you can do a direct lun clone, and lun clone split. This is only really useful if you need a duplicate of the LUN in the same volume. It will create a clone based on a snapshot that already exists.

lun clone create clone_path -b parent_path parent_snap

"clone_path" is the new LUN you want to create, "parent_path" is the source LUN you want to clone from, and "parent_snap" is an existing snapshot of the parent LUN. Then you need to split the clone so that it becomes independent.

lun clone split start clone_path

SnapMirror / SnapVault

You can also use SnapMirror or SnapVault to copy data around. SnapMirror can be useful if you need to copy a large amount of data that will change. You can set up a replication schedule, then during a small window of downtime, do a final update and bring the new destination online.
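A minimal volume SnapMirror migration might run along these lines (a sketch; it assumes SnapMirror is licensed on both systems, the peer is allowed access, and the destination volume has already been created and restricted):

snapmirror initialize -S source_filer:source_vol dest_filer:dest_vol
snapmirror update dest_filer:dest_vol
snapmirror break dest_filer:dest_vol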

dump and restore

This isn't really a good way of copying files around, but it is certainly a method. If you attach a tape device directly to the filer, you could do a dump, then a restore to a new location or filer. This can be the only method if you have a large amount of data to move to a new site, and no bandwidth or no way of having the two systems side by side temporarily.
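For illustration, a full dump to a filer-attached tape and a restore from it might look like this (a sketch; the tape device name rst0a is an assumption and will vary per system):

dump 0f rst0a /vol/vol_source_name
restore rf rst0a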


  1.

    Nice post, my situation is as follows:
    I have one volume with several qtrees/LUNs for several SQL servers. I need to create separate volumes for each server and then move the LUNs from the current vol/qtree to the new volumes, minus the qtrees. This is so my volume restores will not affect every host in the volume.
    My question is: what is the easiest and least invasive (to the hosts) way to move the LUNs? I do have SnapMirror.


  2.

    Hi buddy, glad you found the post useful.

    Can I ask why you need to remove the QTrees? Even if you don't need the QTrees, they should cause no harm by simply being there.

    Using the QTrees, you could do a QTree SnapMirror of each one to its own volume. This would be the least disruptive option, as you could do a baseline, set up a short schedule, then when you are ready to move things across, do a final update and switch the servers over. If you shut down the servers, you should be able to re-map the LUNs to the same initiators with the same IDs (make careful notes first) and the servers will be oblivious to any change. For SnapDrive connected hosts, re-create the CIFS shares, and this should be fine.
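    A QTree SnapMirror of a single qtree into its own volume might look like this (a sketch; the names are placeholders and the destination qtree must not already exist):

    snapmirror initialize -S filer:/vol/big_vol/qtree1 filer:/vol/new_vol1/qtree1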

    If you do want to get rid of the QTrees, then your options are limited. You could do an ndmpcopy of the LUNs themselves to their new locations. I did this recently for a customer and the job worked fine; however, the customer was able to take the LUNs offline prior to the copy.

    Alternatively you could do a host-based copy: create an entirely new LUN and then copy the data from the host. Possibly the most awkward way of doing things, but if there are any issues with your existing LUNs, this would be the cleanest.

    You will need some sort of downtime; how long is impossible to tell without knowing the size and change rate of your data. I'd personally be inclined to go for an ndmpcopy. It should be incremental so long as you don't leave too much time between the baseline and the final update. Perhaps do an ndmpcopy in the afternoon, then shut the server down out of hours and do a second ndmpcopy (which should be an update, not a full copy). I have done this trick a couple of times.

    The total impact would then be minimised as you could move each LUN separately. This could take some time depending on how much data you have again.

    Let me know how you get along, or if you have any follow up questions. Always happy to help.


  3. Martin

    @Chris Kranz

    I need to copy one large folder with lots of files in it (about 250GB) from one volume to another volume.
    What would be the best way to go about it? I basically want the files to look as they did before the move if possible, i.e. keep the datestamps etc. the same.
    I also have another folder of a similar size to move that contains Mac OS data, and I'm worried that if I move that folder using Windows, it'll cause issues with the Mac data.

    Any suggestions?



  4.

    Hi Martin,

    For your uses, I'd probably recommend using ndmpcopy. This will copy the data at block level and then copy across all the file metadata (including the ACLs), so the data will be viewed identically from the client side. This would work for both the Windows and Mac data (any data, in fact). You would need some level of downtime while you change the shares over to the new destination. With CIFS shares, simply change the share location.

    You could use robocopy with /MIR (there are other ways to achieve this too) and this should preserve all the dates and permissions, but I would prefer doing the copy directly at the filer level.

    If the data is already in a specific volume or qtree, you could use snapmirror and this would be the most efficient, but as you said they are in folders, I reckon you’ll be best with an ndmpcopy.

  5. Chris

    Good article, but you missed one option. I’m using dd (dd if=/vol/… of=/vol/..) in privileged mode.


  6. Erling

    Hi. Thanks for a nice review of the various file copying options.
    One thing that I would like to know is: how do you copy/move CIFS shares (say, to another aggregate or filer)?
    Is there anything better than ndmpcopying the share folder to a new path, and then stopping and recreating the share on the new path?

  7.

    If you edit the CIFS share config file you can make this a lot easier. Again, my preferred option here would be to SnapMirror to the new location, then when you’re ready to move, disable the CIFS share, break the SnapMirror and repoint the CIFS share. Optionally you could repoint the CIFS share first at which point it’ll be available, but read-only until you’ve broken the mirror. This would lead to very minimal outage.

    The CIFS share config file is “/etc/cifsconfig_share.cfg”.
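    If you'd rather not hand-edit the config file, repointing a share with the standard share commands would look roughly like this (a sketch; check for open sessions before deleting):

    cifs shares -delete share_name
    cifs shares -add share_name /vol/new_vol/folder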

  8. Erling

    Thanks for the feedback, I appreciate that. I also agree on the use of SnapMirror, but without that license (it's not free!) we are left with more "primitive" copying, I'm afraid.

  9.

    Great post,

    If I had to replicate a fair amount of data (around 7-8TB) from one NetApp array to another in a different location (for a one time migration) over a small link (might only be 100BaseT) what would be the fastest way?

    I was thinking of trying to connect the new NetApp to the same network as the current one locally via GigE to ‘seed’ it with SnapMirror. Then separate them and re-synch the SnapMirror to catch up on deltas and then plan our cut over.

    If standing up the new NetApp in the current data center is not an option, could I do a tape backup of the current NetApp and then tape restore to the new one and THEN SnapMirror the deltas? (seed with tape)? I think this would be an option.

    Anyone know how SnapMirror (block based) would perform for a data copy vs. robocopy (file based) if the network and hardware were all the same? Just curious if using the same hardware and network if SnapMirror would be any faster than robocopy?


  10.

    Sorry for the delay in getting back to your comment!

    There are a couple of ways to do this, and you could actually completely cheat! You could add the new disks for the second filer onto the primary system, then do a local SnapMirror. This would copy at 2-4Gb, depending on your loop speed. Then remove the disks, change the ownership, plug them into the second system, and ship it out.

    If that sounds a bit too tricky, then yes, a local gigabit connection and using SnapMirror should work well. We often do that when we have limited bandwidth or a large amount of data to be transferred.

    You can use SnapMirror to tape, and that would give you the option to restore back to a second filer and then resync the changes.

    SnapMirror would definitely be quicker than robocopy. The filer already has the ACLs and the block changes indexed, so there would be no major file scanning, whereas robocopy would have to scan all the data and calculate the ACLs.

  11. David Young

    I'm doing some recovery on a customer's F820 with Data ONTAP 7.0.3…

    It has 2 aggregates, aggr0 and aggr1. aggr1 seems to be corrupted and forces a core dump and reboot when attached to the filer head during normal operation. It seems I have a 10 minute window… I've tried WAFL_check, etc. but nothing seems to make it come around.

    Is there a way to move data or (hopefully) a volume from aggr1 -> aggr0 from maintenance mode? Any help would be appreciated.

  12.

    Hi David,

    I know that in maintenance mode you have limited access to the aggregates, but I’m pretty sure you don’t have access to the actual volumes.

    What you probably want to do is boot into maintenance mode, then run WAFL_check or wafliron on the corrupted aggregate. WAFL_check gives you feedback and options on what to commit, whereas wafliron just goes ahead and fixes things, whatever the outcome!

    Generally we’d recommend logging a call with NetApp support as they can investigate the output of the WAFL_check much more and let you know any potential impact to fixing the inconsistencies and any potential data loss due to the corruption.

    To run WAFL_check, press CTRL+C on boot to get special boot options, then type “WAFL_check” instead of any numbered option. I’ve only done this a few times over my years (testament to RAID-DP), so I’d strongly recommend you consult NetApp Support.

    If you have access to NOW, have a read over : https://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb20575

  13. Cube

    Great article!
    I have the following task.
    I have many CIFS shares on one big volume (in one qtree) and have to split this volume into several smaller ones. The only possibility I see is to use ndmpcopy to transfer the data to the new volumes, then stop CIFS, delete the old shares and create new ones using cifsconfig_share.cfg.
    I wanted to use robocopy, but it has some limitations (long paths, ACLs etc).
    Do you see a better possibility?

  14.

    This is definitely a tricky situation. Yes, you could use ndmpcopy, but if you have a large directory tree, I reckon this will take a very long time!

    I might be tempted to cheat here, if the new volumes you are creating are on the same aggregate (or even if not, we can stage them). You’ll want to disable CIFS access before starting, otherwise you’ll have a nightmare trying to synchronise any changes after you start.

    Clone the volume in question a number of times (If you don’t have FlexClone, use SnapMirror or vol copy, but you won’t benefit from 0% space usage). However many volumes you want the data separated into at the end, clone it this many times. Do it with no space guarantee on the volume. Now, create a CIFS share for yourself to each of the volumes. Delete all other data that exists in the volume. Deleting data will be quicker than copying as you don’t need to read, interrogate and re-write all the ACL’s and file trees, just delete :) . Once this has finished, break the clones (will use less storage now), fix the space guarantee and repoint the CIFS shares.
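    Sketching those steps as commands (placeholder names throughout; this version assumes FlexClone is licensed):

    vol clone create clone1 -s none -b big_vol snap_name
    cifs shares -add clone1_tmp /vol/clone1
    # delete the unwanted data through the temporary share, then:
    vol clone split start clone1
    vol options clone1 guarantee volume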

    This is a fairly novel way, but it might be your best and quickest method. Alternatively you'll be looking at specific migration tools which are usually chargeable, like Quest Migrator (I think that's the name).

  15. Cube

    Cool solution!
    Unfortunately, I can't do it that way, because one of the benefits of the whole exercise is the ability to enable A-SIS (dedupe) on the new volumes.
    I will be able to test how long ndmpcopy needs to transfer the data, and what the performance impact will be on the DR filer.

  16. Colin


    I have many volumes that we need to migrate to a new vol/qtree.

    We are moving silly volumes in a silly aggr to a 64-bit aggr in 7G. A bit of data consolidation.

    I am struggling to discover the best method of moving volumes to qtrees.

    Most of the client access is NFS.
    I have SnapMirror licenses. Thought about NDMP and FlexClone.

    We are talking about 10TB volumes that are near to full, so this needs to be considered also. Just need to know the best method.

    This will all be on the same filer.


  17.

    I really should do a follow-up post on 64-bit aggregates. The challenge is that today there are very few options. The actual volume structure changes, so most methods that operate at the volume level (ndmp, SnapMirror, vol copy, any sort of clone) will fail to migrate from 32-bit to 64-bit aggregates.

    I believe (in theory) that SnapVault or Qtree SnapMirror would work as these work at a logical level above the volume, but I can’t guarantee that. I can guarantee that a host based file copy would work.

    This is exactly the same situation we had moving from trad vols to FlexVols 5-6 years ago, and NetApp promised us it would never happen again! Well, if we wait around for ONTAP 8.1 or 8.2 then it won't be a problem, but today with 8.0.x it is a big challenge for us, I'm afraid.

  18. Mike

    Do you know if it’s possible to use ndmpcopy to copy data (single file/folder) from a snapshot to the active file system?

    I know that, under normal circumstances, the best way is to use a client pc to move the data. But this is not an option on the environment I’m working with.

    Kind regards,

  19.

    It's definitely possible to do an ndmpcopy from within a snapshot: basically, put in the full path to the file within the snapshot when you run the ndmpcopy. I'm sure I've done this myself in the past when "SnapRestore" hasn't been licensed. The full path to the snapshot would be similar to…
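    Snapshots are exposed under the hidden .snapshot directory at the root of the volume, so (as a sketch, with placeholder volume, snapshot and file names) the command would look something like:

    ndmpcopy /vol/vol_name/.snapshot/nightly.0/folder/myfile /vol/vol_name/folder/myfile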


    This is actually really useful if you're doing upgrade work. You can use ndmpcopy to copy unaltered copies of the original config files out of a snapshot back into the root filesystem, or to a second location for comparison.

    However if you have “SnapRestore” licensed, I would recommend that you use this as it leverages the inbuilt functionality that is designed for this purpose. I guess at the end of the day there’s little actual difference as I believe they actually perform similar operations. Within SnapRestore as well as full volume restores, you have single file snap restores. Unfortunately these are also command line only, but they work well and I’ve used them to restore files into /etc on many occasions during maintenance or upgrade work.
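    A single file SnapRestore would look roughly like this (a sketch; the snapshot and file names are placeholders):

    snap restore -t file -s nightly.0 /vol/vol0/etc/rc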

  20. Mike

    Fantastic, I thought that it must be possible to use ndmpcopy but was reluctant to test on live customer data/hardware.

    Thanks for the advice regarding SnapRestore, I’ll look into this as well.

    Appreciate the help.


  21. virtualtsm


    This is a great post and very helpful. I have another question: the folder I want to copy has a space in the name, e.g. "ABC CDEF", so the command is not recognised. Please can you help me with this?

  22. virtualtsm

    Just to add: I am using the ndmpcopy command, and I tried wrapping the name in quotes ("") but no luck.

  23.

    Have you tried putting the full path in quotes? So…

    ndmpcopy “/vol/vol_name/folder1/folder 2/” “/vol/vol2_name/folder1/”

  24. virtualtsm

    I did try – but it gives me an error: "Failed to start dump on source".

  25.

    Can you provide a little more background please? Are you copying between 2 different filers, or within the same box? Would you be able to post up the exact commands you are executing and the errors that are displayed afterwards please?

  26. virtualtsm

    Hey Chris, thanks man. Unfortunately the folder names do not contain spaces but underscores, so it was my mistake; it should work as built.

    Thanks for your valuable time and input.

  27. JD

    What is the best way to restart a SnapMirror after more than 10 months of changes? We need to move approx. 10TB of data to our DR site and reinitialise a SnapMirror between our DR and Production. I am guessing that, based on the amount of time elapsed, we will need to start from scratch. If that is the case, what is the most efficient way to copy the data to the DR site with the least amount of disruption and performance hit to the production filer? Another caveat is that we are at 90% capacity (yeah, I know) on our production system, so we don't have space for a large snapshot.

    I'm wondering if there might be a way to perform an NDMP restore from our Networker backup to the DR filer, leaving production alone, and then replicate the smaller difference. Will SnapMirror attempt to start all over, or will it only copy the discrepancies?

  28.

    The SnapMirror won’t still be in place, so you’ll have to re-initialise from scratch anyway. The only real issue is going to be the length of time taken to transfer the data, and depending on your bandwidth 10TB could take some time.

    I believe you can do a SnapMirror to tape in order to transfer the data in a more bulk format, but I’m not sure how that works with Networker (if at all). There is a tool from NetApp called “lrep” (http://now.netapp.com/NOW/download/tools/lrep/) however this is for SnapVault. You may be able to use this to do a baseline, and then do a SnapMirror convert, but I’ve never tried this myself so I’m not sure if it’ll actually work.

    I don’t think a straight forward ndmp dump will work either as it isn’t obeying the full filesystem layout, only SM2T (SnapMirror to Tape) does this properly for the NetApp to understand it when it’s restored again. SnapMirror needs the data to be logically identical in order to perform an update, otherwise it’ll assume it’s a full resync.

    Another option could be to simply bring some disks, or the DR system to primary while you do the baseline transfer. We’ve done this several times with loan disk shelves to perform data transfers and it can work quite well! SnapMirror to the loan disks at primary, detach and ship to secondary, then SnapMirror the data onto the DR system and then resync the SnapMirror from Primary – DR.

  29. S A

    I have to move a large number of files in 100 folders from a NetApp filer to external hard-drives. The total size is 30TB. Please note the final use is for the hard-drives to be connected to a PC and the files copied to another system. What is the fastest way to do this?

    Or am I going about this the wrong way? The end goal is just to copy data from NetApp filers to hard-drives to transport to another customer. Please help.

  30.

    As you are copying the data off the NetApp, I think your options are fairly limited, as whatever you do will have to be driven from a host or workstation.

    I’d probably look at some form of syncing tool, like robocopy or fastcopy or similar as it should make the process easier for you. There’s no simple way other than physically connecting to the NetApp and copying the files across the network onto your USB hard-drive. Whether you use a tool for this or not is probably down to personal preference. I’d say a robocopy type tool would probably make this quicker as they are usually multi-threaded with some level of reporting at the end.

  31. SA

    Thank you Chris for the extremely quick response. After doing some research, I am willing to pick up a hard-drive array appliance with iSCSI and Gig Ethernet ports. Is there a way to copy without a server getting involved? I am looking for the fastest way to copy the data. I can get the HD appliance connected directly to the NetApp via Brocade ports.

    Please help.

  32.

    Unfortunately there’s no way to copy directly off the NetApp controller to an external storage device. I’m afraid a server will be required to get the data copied across.

  33. Kurt

    Hi Chris,

    We want to migrate a LUN from one filer to another, the other filer being a NearStore. Can we do this with SnapVault?

    ndmpcopy and vol copy are options too, but I guess they will take a long time considering the LUN size is 2TB!



  34. dc

    Very useful info, thanks.

    I have a situation:
    I need to move 15TB of data to a new file server and LUN.
    The problem is, the 15TB consists of 50 directories, each with its own sub-directories and user permissions.
    How do I move all of this across without losing the specific permissions on each sub-directory?

  35.

    dc: robocopy (or something similar) is usually a good choice. It'll give you the flexibility to select an individual directory and run subsequent update passes on it. The only challenge you'll find with most tools is the sheer quantity of data you have to copy. Any tool should be able to manage this; it'll just take a long time. You'll want something multi-threaded, with the ability to give you detailed confirmation or failure reports. You can do this with robocopy, but you'll need to have a play with the different switches it accepts.
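    For example, a mirroring copy that preserves security information and produces a log might look like this (a sketch; the UNC paths are placeholders, and the /MT multi-threading switch needs a reasonably recent robocopy build):

    robocopy \\old_filer\share \\new_filer\share /MIR /COPYALL /MT:16 /R:2 /W:5 /LOG:C:\robocopy.log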

  36.

    Kurt: Yes, SnapVault is a possibility, but you'll end up with a read-only destination and have to convert it back to a normal volume. SnapMirror would probably be a better option as it is much easier to convert. ndmpcopy and vol copy will both achieve similar end results. The copy process will take as long as it takes, regardless of which tool you use on the filer. If you have the latest version of Data ONTAP, you could look at using SnapMirror and then DataMotion to move the volume.

  37. dc

    OK, thanks Chris. I will test with robocopy.

  38. sujith

    Nice post and very useful.

    I have a doubt: why do we use a snapshot of the whole volume to create a clone of a file (LUN) in that volume? Any particular reason for this?


  39.

    You can perform individual LUN clones within a volume, but a snapshot-based volume clone is more efficient as it doesn't lock the individual LUN. Performing a LUN clone will lock the snapshots backing the cloned LUN, so this can cause problems. A volume clone won't cause this issue at all, and it is more flexible as it can be fully split off from the source.

  40. Frank

    Nice post, really useful stuff. I have a scenario here I’d like to get some feedback on.

    I have a filerA running several TBs worth of storage for clients in a datacenter across town. I have filerC in my new shiny datacenter ready and waiting for all this data so we can move our clients. Bandwidth is billed per GB, and it will cost me several thousand dollars to move the data if I have to do a baseline copy with snapmirror vs just syncing the deltas.

    Option A: I’d like to find a way to copy my data from filerA to a USB/eSATA external disk and drive it across town, and then set up the relationship between the old datacenter and the new one to sync the changes. Can NDMP work here? Once I suck all the NDMP data into filerC will I be able to snapmirror the deltas or will it try to copy everything?

    Option B: I have an older filer that can’t run dedupe but could be used to set up a local snapmirror, copy all the data, drive it across town, and snapmirror it into filerC. Suboptimal solution, and unsure it would even work.


  41. jx

    Very helpful post. I have a query regarding migrating live data using FlexClone.
    Assuming I can use FlexClone to migrate some of the CIFS share data from one CIFS volume to another: since FlexClone creation is based on a snapshot, do I need to stop applications before creating the clone volume in order to keep the data intact?

  42.

    If you are using it as a migration tool (not sure why), then yes you would want to stop the apps first.

    As I say, why would you want to use this as a migration tool however? The FlexClone will exist in the same aggregate and look and feel the same as the source. Any changes you make to the source can be made to the Clone.

  43. jx

    Thanks Chris. The reason is that we're currently running SnapMirror on a very large volume, but some of the data doesn't really need to replicate, so I want to break it into smaller volumes with dedicated data, then only maintain SnapMirror on the volume that actually needs replication.
    Is there a better tool than FlexClone that can achieve this with minimum downtime?

  44.

    Okay, I see what you are planning. So create several FlexClones, break them off and then delete all the extra data in each one? Depending on the quantity of data, it may still be easier (and cleaner) to do a simple host-side data migration into new volumes. However if you have complex permission mappings, lots of small files or many nested directories, FlexClone may be more efficient at running through this.

    You can actually use ndmpcopy to also achieve this, as you can point it at specific folders and sub-directories within a volume and copy out to a new location. http://www.wafl.co.uk/ndmpcopy/

  45.

    That’s unfortunate that you get billed per GB!

    Option A will only work if you can use a backup application that supports SnapMirror-to-Tape (SM2T), and I think this would be things like NetBackup, CommVault, etc. and then you need to make sure that they’ll be happy writing to a USB/eSATA drive.

    If you were using SnapVault, then you could use the NetApp tool LREP (http://now.netapp.com/NOW/download/tools/lrep/) which is designed to help with exactly this issue. Unfortunately SnapMirror is based on having specific snapshots present on either side rather than doing any logical comparison of data, so without those SnapMirror snapshots you’ll be unable to re-establish a relationship.

    Option B sounds the best way of doing this. As you can’t run dedupe on the older filer, then you’ll need to un-dedupe all the data first (not just turn dedupe off, but actually undedupe it). This may be a pain! Alternatively you may be able to speak to your friendly local NetApp reseller, or NetApp directly and see if you can get an eval or loan system for 30 days or so to perform the data transfer. This may be your best bet in terms of time, simplicity, and it shouldn’t be too expensive if you need to loan a system for a short period either.

    Good luck, and please let us know how you get along!

  46. sandeep

    Hello, I have filer A (a Celerra) with 200TB of unstructured data which I want to migrate to a new filer B (a NetApp). Could anyone suggest the best and fastest possible way to do this data migration without compromising the data permissions?

  47. Ant

    Excellent article! Some really useful tips there.
    I wonder if you can tell me how to do something?
    I’ve got a file (that was restored from an NDMP backup) but the ACL was not restored and so it isn’t accessible via CIFS from a windows client.
    Is there a way to copy the file to another location, or volume, so that it will inherit the ACL from the parent folder?
    I’m using NTFS permissions on the qtrees by the way – I wondered if moving it to a qtree with unix style permissions would mean that I could get access to it.

  48.

    It seems like maybe a problem with the NDMP backups as well, as the ACLs should be preserved! "priv set advanced" gives you the mv command, which you could try, or use ndmpcopy to copy it out to another area with inherited permissions. I'm not sure about changing the qtree style, but could you export it over NFS to a *nix host and change the permissions?

  49.

    That's a lot of data to migrate. A couple of commercial options to maybe try: AutoVirt and F5 ARX. Both will allow you to create a global namespace and more-or-less seamlessly migrate the share data off the Celerra. Alternatives are robocopy or Secure Copy, and you could improve the performance by having multiple copy engines targeting different areas of the file data.

  50. Ant

    @Chris Kranz
    Indeed it is a problem with the NDMP backups – a service pack for my backup software switched off the “backup ACLs” option.

    Thanks for the advice – I’ll give it a whirl.


This site is not affiliated or sponsored in anyway by NetApp or any other company mentioned within.