"Input/output error" when accessing a directory
I want to list and remove the content of a directory on a removable hard drive. But I have experienced "Input/output error":
```
$ rm pic -R
rm: cannot remove `pic/60.jpg': Input/output error
rm: cannot remove `pic/006.jpg': Input/output error
rm: cannot remove `pic/008.jpg': Input/output error
rm: cannot remove `pic/011.jpg': Input/output error
$ ls -la pic
ls: cannot access pic/60.jpg: Input/output error
-????????? ? ? ? ? ? 006.jpg
-????????? ? ? ? ? ? 006.jpg
-????????? ? ? ? ? ? 011.jpg
```
I was wondering what the problem is. How can I recover or remove the directory `pic` and all of its content?
My OS is Ubuntu 12.04, and the removable hard drive has an NTFS filesystem. Other directories on the removable hard drive, neither containing nor inside `pic`, are working fine.
Last part of the output of `dmesg` after I tried to list the content of the directory:
```
[19000.712070] usb 1-1: new high-speed USB device number 2 using ehci_hcd
[19000.853167] usb-storage 1-1:1.0: Quirks match for vid 05e3 pid 0702: 520
[19000.853195] scsi5 : usb-storage 1-1:1.0
[19001.856687] scsi 5:0:0:0: Direct-Access     ST316002 1A                0811 PQ: 0 ANSI: 0
[19001.858821] sd 5:0:0:0: Attached scsi generic sg2 type 0
[19001.861733] sd 5:0:0:0: [sdb] 312581808 512-byte logical blocks: (160 GB/149 GiB)
[19001.862969] sd 5:0:0:0: [sdb] Test WP failed, assume Write Enabled
[19001.865223] sd 5:0:0:0: [sdb] Cache data unavailable
[19001.865232] sd 5:0:0:0: [sdb] Assuming drive cache: write through
[19001.867597] sd 5:0:0:0: [sdb] Test WP failed, assume Write Enabled
[19001.869214] sd 5:0:0:0: [sdb] Cache data unavailable
[19001.869218] sd 5:0:0:0: [sdb] Assuming drive cache: write through
[19001.891946]  sdb: sdb1
[19001.894713] sd 5:0:0:0: [sdb] Test WP failed, assume Write Enabled
[19001.895950] sd 5:0:0:0: [sdb] Cache data unavailable
[19001.895953] sd 5:0:0:0: [sdb] Assuming drive cache: write through
[19001.895958] sd 5:0:0:0: [sdb] Attached SCSI disk
[19113.024123] usb 2-1: new high-speed USB device number 3 using ehci_hcd
[19113.218157] scsi6 : usb-storage 2-1:1.0
[19114.232249] scsi 6:0:0:0: Direct-Access     USB 2.0  Storage Device   0100 PQ: 0 ANSI: 0 CCS
[19114.233992] sd 6:0:0:0: Attached scsi generic sg3 type 0
[19114.242547] sd 6:0:0:0: [sdc] 312581808 512-byte logical blocks: (160 GB/149 GiB)
[19114.243144] sd 6:0:0:0: [sdc] Write Protect is off
[19114.243154] sd 6:0:0:0: [sdc] Mode Sense: 08 00 00 00
[19114.243770] sd 6:0:0:0: [sdc] No Caching mode page present
[19114.243778] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[19114.252797] sd 6:0:0:0: [sdc] No Caching mode page present
[19114.252807] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[19114.280407]  sdc: sdc1 < sdc5 >
[19114.289774] sd 6:0:0:0: [sdc] No Caching mode page present
[19114.289779] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[19114.289783] sd 6:0:0:0: [sdc] Attached SCSI disk
```
Input/output errors during filesystem access attempts generally mean hardware issues. Run `dmesg` and check the last few lines of output. If the disk or the connection to it is failing, it will be noted there.
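As a sketch, the kernel log can be filtered for disk errors rather than read in full (the device name `sdb` is taken from the dmesg output in the question; the sample log line below is illustrative, not from the asker's machine):

```shell
# On the live system you would run something like:
#   dmesg | grep -iE 'sdb|i/o error' | tail
# Demonstrated here on sample text, since the real log needs the failing disk:
printf '%s\n' \
  '[19114.3] sd 6:0:0:0: [sdc] Attached SCSI disk' \
  '[19200.1] blk_update_request: I/O error, dev sdb, sector 12345' \
  | grep -ic 'i/o error'
# prints: 1  (one line mentions an I/O error)
```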
EDIT: Are you mounting it via `ntfs-3g`? As I recall, the legacy in-kernel `ntfs` driver had no stable write support and was largely abandoned when it turned out `ntfs-3g` was significantly more stable and secure.
I connect the removable hard drive to my Ubuntu 12.04, and it is automatically mounted. So I guess `ntfs-3g`?
Don't "*guess*". Check -- you can see how everything is mounted by typing the `mount` command and looking at the output.
(1) I have added the last part of the output of `dmesg` after I tried to list the content of the directory; I don't know how it helps. (2) I can't tell whether it is mounted via ntfs-3g or ntfs by looking at the output of `mount`: `/dev/sdb1 on /media/removable_drive type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096)`
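For what it's worth, the `fuseblk` type in that `mount` line is the tell: the legacy in-kernel driver would show `type ntfs`, while `ntfs-3g` runs through FUSE and shows `type fuseblk`. A minimal sketch, parsing the exact line quoted above:

```shell
# On the live system: mount | grep /media/removable_drive
# The 5th whitespace-separated field is the filesystem type; "fuseblk" means
# a FUSE block-device mount, which for an NTFS partition is ntfs-3g.
line='/dev/sdb1 on /media/removable_drive type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096)'
fstype=$(printf '%s\n' "$line" | awk '{print $5}')
echo "$fstype"
# prints: fuseblk
```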
As Sadhur states, this is probably caused by disk hardware issues, and the `dmesg` output is the right place to check this.
You can issue a surface scan of your disk from Linux with `badblocks`. Check its manual page for more thorough tests and basic fixes (block relocation). This is all filesystem-agnostic, so it is safe even with an NTFS filesystem, as it operates at the disk-surface level.
I personally run this on a monthly basis from cron. Of course, you need to check that you actually receive the cron mails in your mailbox (which is often not the case by default); otherwise they end up in the local root mail spool.

```
30 4 * * 3 root [ -x /sbin/badblocks ] && [ $(date +\%d) -le 7 ] && /sbin/badblocks /dev/sda
```
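A one-off, read-only scan is also possible; here is a hedged sketch (device name `sdb` comes from the dmesg output above). As noted in the comments, `badblocks` additionally accepts optional `last-block` and `first-block` arguments, so a long scan can be split; with this drive's 312581808 512-byte sectors and badblocks' default 1024-byte block size, the indices work out as follows:

```shell
# Read-only surface scan (safe on NTFS); -s shows progress, -v is verbose:
#   sudo badblocks -sv /dev/sdb
# The drive has 312581808 512-byte sectors (from dmesg); badblocks defaults
# to 1024-byte blocks, so the index of the last block is:
sectors=312581808
last_block=$(( sectors / 2 - 1 ))
echo "$last_block"
# prints: 156290903
# To scan only the second half (usage: badblocks [opts] device [last [first]]):
#   sudo badblocks -sv /dev/sdb "$last_block" $(( sectors / 4 ))
```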
Thanks! To run the command you suggested, is it `/sbin/badblocks /media/removable_drive` in my case?
No. According to the dmesg output you have to use sdb: `/sbin/badblocks /dev/sdb` (or sdc). I can't really figure out what happened / what you did from `dmesg`.
Remember that `badblocks` accepts begin and end blocks to work with, in case you want to "suspend/resume" a scan :)
Your filesystem is damaged. For NTFS volumes you should run `chkdsk` under a Windows system, but it's nearly impossible to recover; sometimes you might need to format the disk.
Thanks! My other directories are fine. Can I avoid formatting the whole drive and just reclaim the space from the directory in question?
@Tim, you would have to copy everything else out, format, and copy it back... I don't know if one can remove a single node; I'm not familiar with the NTFS structure.
A solution that works for me is to downgrade `ntfs-3g` from the 2014 release to the 2012 release. This should solve your NTFS partition access problem. In the long run this is not a solution, because eventually you will need to run the latest release.
More info here
Thank you so much. That solved my problem. I installed the latest stable release (2016.2.22) from source and now it is working flawlessly. Installation instructions I used: http://www.tuxera.com/community/open-source-ntfs-3g/
Okay, that is good to know. So basically there is a window between 2012 and early 2016 during which the drive simply didn't work.
Nobody mentioned what to do if Linux tools are not working and only a Mac, but not Windows, is available.
This can be fixed on OS X with Paragon NTFS. In my case `gparted` said to go find a Windows PC, which was nowhere to be found. But a Mac was around, for which this great piece of software is available. I installed the trial version, performed a verify, then a repair, and voilà!
I just wanted to add my solution to this thread for the benefit of others. I did some work on my system when my power supply failed, and I must have reconnected the SATA cables in the wrong order, because when I switched them over, everything worked again. I have no idea why the boot disk needed to be on a specific SATA port, but this might be the answer for someone else.
I just wanted to share my experience. On FreeBSD 10.3, I mounted my external hard drive with:

```
$ sudo ntfs-3g /dev/da0s1 /media
```
Inside the hard drive, I created a directory with `mkdir` and then moved some files into it, of course with the `mv` command. Finally, I ran:

```
$ sudo sync
```
Then I mounted the hard drive on a Linux machine with kernel 4.4.0-78-generic. Now when I list the contents of the hard drive, the directory created on FreeBSD, named `Jeff`, is shown like below:
```
$ ls -lhrtci
ls: cannot access 'Jeff': Input/output error
total 20K
? d????????? ? ? ? ? ? Jeff
```
Also, when trying to remove the `Jeff` directory, I receive the following error message:

```
$ sudo rm -f -R Jeff
rm: cannot remove 'Jeff': Input/output error
```
I couldn't get rid of the `Jeff` directory on the Linux machine, so I re-mounted the hard drive on the FreeBSD machine. But the `rm` command on FreeBSD generates the same `Input/output error`. It looks like there has been a bug on FreeBSD.
I moved all my data from the external hard drive to a Linux machine; of course, the corrupt directory `Jeff` couldn't be moved due to the I/O error. Then I reformatted the external hard drive, with zeroing of the volume and bad-sector checking, like this:

```
$ sudo mkfs.ntfs /dev/sdb1
```
And then I moved all the data back to the external volume. This way I lost the corrupt directory named `Jeff`; however, my external hard drive is clean of any I/O errors.
I realized that when I tried to access the disk that produces this error, the last files copied had not been completely written: the directory record already on disk doesn't match the items that were last copied, so the access attempt fails. The healthiest way to rescue the disk is to remove, under Windows, the last item or items that were copied.