I suddenly started getting errors when copying files to my external hard drive. There is plenty of free space: 1.64 TB free of 3.63 TB. I was able to complete the file copy by doing one of two things: deleting some other large files from the drive, or using a different USB HDD enclosure.
In addition, the Windows 8 error checking tool fails with an error unless a different USB enclosure is used (deleting large files does not help in this case). The CHKDSK command-line tool always works and reports no errors on the disk.
How do I confirm the USB HDD enclosure was the problem? (I would like to confirm the problem was not with my hard drive and it is safe to continue using.) And how can I determine the capacity supported by a USB HDD enclosure?
Detailed info:
The error when copying a large file:
Windows 8 error checking tool error:
Hardware:
File system:
System:
Update: more details from the "The shadow copies of volume D: were aborted..." logged event:

System
  Provider Name: volsnap
  EventID: 14 (Qualifiers: 49158)
  Level: 2
  Task: 0
  Keywords: 0x80000000000000
  TimeCreated SystemTime: 2015-01-24T21:23:54.296013300Z
  EventRecordID: 1063256374
  Channel: System
  Computer: X1-Carbon
  Security: -
EventData
  \Device\HarddiskVolumeShadowCopy6
  D:
  D:
  0000000003003000000000000E0006C00A0000000D0000C002000000000000000000000000000000
Update 2:
Error mounting the 4TB drive in Ubuntu with the dock that works from Windows:
Error mounting /dev/sdc1 at /media/daniel/DeskStar: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/daniel/DeskStar"' exited with non-zero exit status 13:
ntfs_attr_pread_i: ntfs_pread failed: Input/output error
Failed to read NTFS $Bitmap: Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.
The drive isn't listed in fdisk -l, so I can't try dd...
I tried hooking back up to Windows: no problem; Windows disk properties error checking tool reports no errors.
Also: I tried using dd on the (problem?) enclosure with a different 2TB hard drive:
skip=0
skip=SOMEWHERE_NEAR_MIDDLE_OF_DRIVE
daniel@computer:~$ sudo dd bs=512 if=/dev/sdb1 of=test skip=3907026942 count=1
dd: ‘/dev/sdb1’: cannot skip: Invalid argument
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000210598 s, 0.0 kB/s
If the problem is the USB enclosure, and it is size-related, then the enclosure is failing to correctly process sector write (and probably read, too) requests beyond a certain address. The file size itself does not matter; the cause is that the larger file has "pieces" falling beyond the addressable boundary.
Due to disk fragmentation it is difficult to confirm or deny this hypothesis, but you can try with any tool that displays the disk's fragmentation map. It should show a largely empty disk whose beginning is filling up, with nothing allocated past a certain point, and in particular nothing near the end.
On a FAT32 disk you could try filling it with small files, each 8 KB in size, until the "reachable" area filled up and the disk became unwritable. But this disk is NTFS, and in any case the method isn't very precise or conclusive.
If at all possible, I would mount the disk on a Linux live distribution. At that point you could try and read the disk one sector at a time:
fdisk -l
will tell you how many 512-byte blocks there are on the external disk. Then
dd bs=512 if=/dev/sdc of=test skip=NNNNN count=1
will request a read of sector NNNNN (one-based :-) ).
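For instance, a quick single-sector probe might look like this (just a sketch: /dev/sdc and the sector number are placeholders, and the disk's actual device name is whatever dmesg reports; writing to /dev/null simply discards the data):

# try to read sector 5,000,000; dd reports an I/O error and exits non-zero
# if the enclosure cannot service the request
sudo dd bs=512 if=/dev/sdc of=/dev/null skip=5000000 count=1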
If it is a matter of a limit to NNNNN, you will observe that:
N=1 it works
N=MAX_NUM it fails
N=MAX_NUM/2 it fails
...
so you can start with a classic bisection algorithm and determine where the critical sector "C" lies (any sector before C is readable, any sector after it is not). If such a sector exists, you've got either some incredibly weird hardware damage, or the proof you were looking for of the enclosure's guilt.
Update - finding the boundary by bisecting: an example
So let's say the disk is 4TB, so 8,000,000,000 sectors. We know that sector 1 is readable and sector 8,000,000,000 isn't. Let READABLE be 1, let UNREADABLE be 8,000,000,000. Then the algorithm is:
let TESTING be (READABLE + UNREADABLE)/2
if sector TESTING is readable then READABLE becomes equal to TESTING
else, UNREADABLE becomes equal to TESTING.
Lather, rinse, repeat with the new values of (UN)READABLE.
When READABLE and UNREADABLE end up as two consecutive values, that's your boundary.
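In bash this loop is only a few lines. A minimal sketch, assuming the external disk shows up as /dev/sdc and using a placeholder sector count (take the real one from fdisk -l); run it as root:

#!/bin/bash
DEV=/dev/sdc              # placeholder: whatever device node dmesg assigns
TOTAL=7814037168          # placeholder: total 512-byte sectors from fdisk -l

readable=1                # known-good sector
unreadable=$((TOTAL - 1)) # known-bad sector

# returns 0 if the sector could be read through the enclosure, non-zero otherwise;
# iflag=direct bypasses the page cache so the request really hits the device
read_sector() {
    dd bs=512 if="$DEV" of=/dev/null skip="$1" count=1 iflag=direct 2>/dev/null
}

while (( unreadable - readable > 1 )); do
    testing=$(( (readable + unreadable) / 2 ))
    if read_sector "$testing"; then
        readable=$testing
        echo "sector $testing: readable"
    else
        unreadable=$testing
        echo "sector $testing: UNREADABLE"
    fi
done

echo "boundary: sectors up to $readable read fine, $unreadable and beyond fail"

Each iteration halves the interval, exactly as in the worked example below.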
Let's imagine the boundary lies at sector 3,141,592,653 because of some strange bug in the enclosure.
first pass: testing = (1 + 8000000000)/2 = 4000000000.
4,000,000,000 is unreadable, so replace 8,000,000,000 with 4,000,000,000
second pass: testing = (1 + 4,000,000,000)/2 = 2,000,000,000
sector 2,000,000,000 is readable, so replace 1 with 2,000,000,000
third pass: testing = (2,000,000,000 + 4,000,000,000)/2 = 3,000,000,000
sector 3,000,000,000 is readable, so replace 2,000,000,000 with 3,000,000,000
fourth pass: testing = (3,000,000,000 + 4,000,000,000)/2 = 3,500,000,000, which is UNREADABLE
fifth pass: testing = (3,000,000,000 + 3,500,000,000)/2 = 3,250,000,000, which is UNREADABLE
...
So READABLE and UNREADABLE stalk the unknown boundary more and more closely, from both directions. When they are close enough you can even go and try all the sectors in between.
To locate the boundary, only about log2(max - min) reads are needed: the 4 TB disk has roughly 8,000,000,000 ≈ 2^33 sectors, so on the order of 33 reads. Given a 30-second reset delay on the enclosure whenever a read error occurs, that should be 20 minutes at the most; probably much less.
Once you have the boundary B, to confirm it is a boundary you can do a sequential read of large chunks before B (this will not take too long), maybe one megabyte every gigabyte or so; and then a random sampling of sectors beyond B. For example the first 4*63 sectors beyond the boundary, then one sector every 3905 (or every RAND(4000, 4100)) to try to avoid always hitting the same magnetic platter.
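A rough sketch of that spot-check, under the same assumptions as the script above (/dev/sdc, with B and the total sector count as placeholders):

#!/bin/bash
DEV=/dev/sdc
B=3141592653              # placeholder: suspected boundary sector
TOTAL=7814037168          # placeholder: total 512-byte sectors

probe() {                 # usage: probe FIRST_SECTOR COUNT
    if dd bs=512 if="$DEV" of=/dev/null skip="$1" count="$2" iflag=direct 2>/dev/null; then
        echo "sectors $1..$(( $1 + $2 - 1 )): readable"
    else
        echo "sectors $1..$(( $1 + $2 - 1 )): UNREADABLE"
    fi
}

# one megabyte (2048 sectors) every gigabyte or so (2097152 sectors) below B
for (( s = 0; s + 2048 <= B; s += 2097152 )); do
    probe "$s" 2048
done

# the first 4*63 sectors just past the boundary
probe "$B" $(( 4 * 63 ))

# then single sectors at strides of RAND(4000, 4100), to avoid always hitting
# the same spot on the platters; 100 samples is plenty for a first pass
s=$B
for (( i = 0; i < 100; i++ )); do
    s=$(( s + 4000 + RANDOM % 101 ))
    (( s < TOTAL )) || break   # stop at the end of the disk
    probe "$s" 1
done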
But actually, if you do find a boundary-like behaviour, and confirm that with another enclosure there is no such boundary -- well, I'd declare the case (en)closed.
OK, I think I figured it out: check the dmesg log to verify the addressable capacity supported by the USB device. The same drive in two different enclosures results in two different reported capacities:
7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
3519069872 512-byte logical blocks:(1.80 TB/1.63 TiB)
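If you just want that capacity line without reading the whole log, something like this works (assuming the disk came up as /dev/sdb, as it does in the logs below):

# the capacity line the USB bridge reported to the kernel
dmesg | grep -i 'logical blocks'

# or ask the block layer directly: size in 512-byte sectors
cat /sys/block/sdb/size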
Full Details:
1. dmesg when connecting the "modern" dock with the 4TB drive:
[93507.922275] usb 1-1.2: new high-speed USB device number 17 using ehci-pci
[93508.087948] usb 1-1.2: New USB device found, idVendor=067b, idProduct=2773
[93508.087959] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[93508.087964] usb 1-1.2: Product: ATAPI-6 Bridge Controller
[93508.087969] usb 1-1.2: Manufacturer: Prolific Technology Inc.
[93508.087973] usb 1-1.2: SerialNumber: 0123456789000000110
[93508.088621] usb-storage 1-1.2:1.0: USB Mass Storage device detected
[93508.089092] scsi24 : usb-storage 1-1.2:1.0
[93509.087318] scsi 24:0:0:0: Direct-Access Prolific ATAPI-6 Bridge C MPAO PQ: 0 ANSI: 0
[93509.087836] sd 24:0:0:0: Attached scsi generic sg2 type 0
[93509.088684] sd 24:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[93509.089837] sd 24:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
[93509.090945] sd 24:0:0:0: [sdb] Write Protect is off
[93509.090958] sd 24:0:0:0: [sdb] Mode Sense: 03 00 00 00
[93509.092819] sd 24:0:0:0: [sdb] No Caching mode page found
[93509.092832] sd 24:0:0:0: [sdb] Assuming drive cache: write through
[93509.094321] sd 24:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[93509.100539] sd 24:0:0:0: [sdb] No Caching mode page found
[93509.100545] sd 24:0:0:0: [sdb] Assuming drive cache: write through
[93509.170090] sdb: sdb1
[93509.171931] sd 24:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[93509.176059] sd 24:0:0:0: [sdb] No Caching mode page found
[93509.176078] sd 24:0:0:0: [sdb] Assuming drive cache: write through
[93509.176086] sd 24:0:0:0: [sdb] Attached SCSI disk
2. dmesg when connecting the older enclosure with the 4TB drive:
[89939.561869] usb 1-1.2: new high-speed USB device number 14 using ehci-pci
[89939.656581] usb 1-1.2: New USB device found, idVendor=152d, idProduct=2338
[89939.656592] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=5
[89939.656598] usb 1-1.2: Product: USB to ATA/ATAPI Bridge
[89939.656602] usb 1-1.2: Manufacturer: JMicron
[89939.656606] usb 1-1.2: SerialNumber: 0613316A1498
[89939.658334] usb-storage 1-1.2:1.0: USB Mass Storage device detected
[89939.658805] scsi20 : usb-storage 1-1.2:1.0
[89940.659147] scsi 20:0:0:0: Direct-Access HGST HMS 5C4040ALE640 A580 PQ: 0 ANSI: 2 CCS
[89940.659959] sd 20:0:0:0: Attached scsi generic sg2 type 0
[89940.661373] sd 20:0:0:0: [sdb] 3519069872 512-byte logical blocks: (1.80 TB/1.63 TiB)
[89940.662410] sd 20:0:0:0: [sdb] Write Protect is off
[89940.662424] sd 20:0:0:0: [sdb] Mode Sense: 00 38 00 00
[89940.663438] sd 20:0:0:0: [sdb] Asking for cache data failed
[89940.663446] sd 20:0:0:0: [sdb] Assuming drive cache: write through
[89940.667752] sd 20:0:0:0: [sdb] Asking for cache data failed
[89940.667761] sd 20:0:0:0: [sdb] Assuming drive cache: write through
[89940.684862] sdb: unknown partition table
[89940.687887] sd 20:0:0:0: [sdb] Asking for cache data failed
[89940.687893] sd 20:0:0:0: [sdb] Assuming drive cache: write through
[89940.687897] sd 20:0:0:0: [sdb] Attached SCSI disk
There are a few ways to test your hard drive. Download a program called "HDTune". It is a paid program but has a trial version which lasts for 30 days with full functionality. You can use it to check for bad sectors and to check your hard drive's health. In case you do have bad sectors, you can try to fix them with HDD Regenerator, from the Hiren's BootCD download, which I personally use.
Make sure you do not have any hard drives connected aside from the one you want to test. It is bootable; you can burn it to a CD or a USB drive.
Next, I suggest you check your cables. Especially with external drives, these errors are often caused by a lack of power. Try connecting the drive to another computer, or change the cables if you have spares, and see if it still fails.
It is strange that it generates the error for files above 4 GB. Since the file system is NTFS, the FAT32 4 GB file-size limit is excluded.
I suspect it's a buffer de-sync error.
Try this: open regedit and go to HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\System. Create a new DWORD value called CopyFileBufferedSynchronousIo and change its value from the default 0 to 1.
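If you prefer doing it from an elevated command prompt, the same value can be created with reg add (just the command-line form of the step above):

reg add "HKLM\SOFTWARE\Policies\Microsoft\System" /v CopyFileBufferedSynchronousIo /t REG_DWORD /d 1 /f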
Other things to consider: Do you have very long paths and filenames (over 255 characters)? Do you use additional languages on the OS, or non-default regional/keyboard or time/date format settings? (As weird as it sounds, these can break enough things in Windows.) Alternatively, the controller of the external drive may be unable to address more than 2.0 TB. Edit: Can you post a screenshot of the exact currently used space?