When you're moving large files around, you'll notice that a file doesn't copy as fast as you'd calculated based on your LAN speed. There are a number of reasons for this - TCP/IP overhead, network latency, and the list goes on. But for large file transfers, the most likely thing you're experiencing is the overhead of the file system cache (aka buffered IO). The OS caches IO in RAM so that future reads of the same file can be served directly from memory rather than from disk, which speeds up read times. This is a great feature for reducing IO on file servers where, based on the principle of locality, the same file will get accessed repeatedly over a short period of time.
Using buffered IO on large files (which will typically only be accessed once) poses a few problems. Firstly, the extra cost of caching adds up over time, but more importantly, caching a large file forces other files that are already in the file system cache to get flushed out.
So now, while you're happily taxing your storage system with sequential reads for your file transfer, all of a sudden the OS needs to load some system files. Normally, these files would already be in the file system cache and could be served quickly from RAM, but because the large file transfer has flushed the cache, they must be read from physical disk. Not only do these extra requests add to the overall number of IOs, they're random IOs. The disk heads need to perform seek operations (which are extremely slow - in the millisecond range) to locate the system files, then seek back to the large file you're transferring. Both your OS IO requests and your large file transfer suffer from the IO contention.
The better approach is to use unbuffered IO operations to transfer your files - that is, don't make a copy of the large file in the file system cache. With unbuffered IO, you avoid the overhead of copying the IO blocks to RAM, and the file system cache won't get flushed by your one-time large file transfer. Leaving the cache intact reduces the number of IO requests hitting the disk, and lets the disk focus on satisfying the IOs for the file transfer, SEQUENTIALLY ;P
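To make the idea concrete, here's a minimal sketch of a cache-friendly copy. It's a hypothetical helper, not what any of the tools below do internally: it copies in large sequential chunks and then, on POSIX systems, uses posix_fadvise(POSIX_FADV_DONTNEED) to tell the kernel it can evict those pages - a rough analogue of Windows' unbuffered IO. On platforms without posix_fadvise it degrades to a plain chunked copy.

```python
import os

def copy_without_polluting_cache(src, dst, chunk_size=8 * 1024 * 1024):
    """Copy src to dst in large sequential chunks, then advise the OS
    to drop the cached pages (hypothetical helper for illustration)."""
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        while True:
            chunk = fsrc.read(chunk_size)
            if not chunk:
                break
            fdst.write(chunk)
        # Flush dirty pages to disk before telling the kernel to drop them.
        fdst.flush()
        os.fsync(fdst.fileno())
        if hasattr(os, "posix_fadvise"):  # POSIX only; absent on Windows
            # Hint that we won't re-read these pages, so the kernel can
            # evict them instead of flushing other files from the cache.
            os.posix_fadvise(fsrc.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
            os.posix_fadvise(fdst.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
```

The fadvise calls are advisory, so the copy behaves identically either way - the only difference is which pages the kernel chooses to evict under memory pressure.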
Microsoft suggests using ESEUTIL.EXE to perform unbuffered IO transfers, but the binary isn't available on most machines and depends on the C++ runtime. http://blogs.technet.com/b/askperf/archive/2007/05/08/slow-large-file-copy-issues.aspx
In Windows 7 (and presumably Windows Server 2008 R2), XCOPY.EXE can do unbuffered IO when invoked with the /J switch (e.g. `xcopy /J bigfile.iso \\server\share\`). http://commandwindows.com/xcopy.htm
If the command line isn't your thing, TeraCopy can accomplish the same thing. There's even a portable version, which I keep on my USB key at all times.