read only root filesystem

  • Somehow the root filesystem on my Debian machine went read-only; I have no idea how this could have happened.
    For example, when I am in the /root folder, type the command nano, and then press Tab to list the files in that folder, I get this message:

    [email protected]:~# nano -bash: cannot create temp file for here-document: Read-only file system

    The same happens with the cd command: when I type cd /home and press Tab to complete the path, I get this:

    [email protected]:~# cd /home -bash: cannot create temp file for here-document: Read-only file system

    I also have problems with software such as apt; I can't even run apt-get update. I get a lot of errors like this:

    Err http:// wheezy-updates/main Sources
    406  Not Acceptable
    W: Not using locking for read only lock file /var/lib/apt/lists/lock
    W: Failed to fetch http://  rename failed, Read-only file system (/var/lib/apt/lists/ -> /var/lib/apt/lists/
    W: Failed to fetch http://  404  Not Found
    W: Failed to fetch http://  404  Not Found
    W: Failed to fetch http://  406  Not Acceptable
    E: Some index files failed to download. They have been ignored, or old ones used instead.
    W: Not using locking for read only lock file /var/lib/dpkg/lock

    The whole system has problems like this. Is it possible to fix them? How can I check what happened, and what should I look for in the logs?

    I know it could be because of this line in the /etc/fstab file:

    /dev/mapper/debian-root /               ext4    errors=remount-ro 0       1

    but what is the actual problem? I can't find anything, or maybe I don't know where to look.


    I searched the messages logs and found only this:

    kernel: [    5.709326] EXT4-fs (dm-0): re-mounted. Opts: (null)
    kernel: [    5.977131] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro
    kernel: [    7.174856] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)

    I guess that's correct, because I have the same entries on my other Debian machines.

    I found something in dmesg (I trimmed the output a bit, because there was a lot of standard ext4 noise):

    [email protected]:/# dmesg |grep ext4
    EXT4-fs error (device dm-0) in ext4_reserve_inode_write:4507: Journal has aborted
    EXT4-fs error (device dm-0) in ext4_reserve_inode_write:4507: Journal has aborted
    EXT4-fs error (device dm-0) in ext4_dirty_inode:4634: Journal has aborted
    EXT4-fs error (device dm-0): ext4_discard_preallocations:3894: comm rsyslogd: Error loading buddy information for 1
    EXT4-fs warning (device dm-0): ext4_end_bio:250: I/O error -5 writing to inode 133130 (offset 132726784 size 8192 starting block 159380)
    EXT4-fs error (device dm-0): ext4_journal_start_sb:327: Detected aborted journal

    5 errors and 1 warning. Any ideas? Is it safe to use mount -o remount,rw / ?

    Look for the strings "ext4" and "/dev/mapper/debian-root" in `/var/log/messages`. If your filesystem is corrupt, you should see it in the early kernel messages during boot. Also try `mount -o remount,rw /dev/mapper/debian-root` and tell us if it throws an error.

    Also, do you have any space remaining? What does the command `df` show?

    Can you boot into 'recovery mode' from GRUB? Alternatively, edit the GRUB kernel options, add the word single to the end, and boot. You should end up with a root shell from which you can run various tools to check and repair your disk.
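    From that root shell, a check-and-repair session might look like the sketch below. The device name /dev/mapper/debian-root comes from the questioner's fstab; adapt it to your own system, and note that the filesystem must not be mounted read-write while fsck runs.

```shell
# Confirm which device actually holds the root filesystem
lsblk -f

# Check and repair the ext4 filesystem:
#   -f forces a full check even if the filesystem looks clean
#   -y answers "yes" to all repair prompts automatically
fsck.ext4 -f -y /dev/mapper/debian-root

# Reboot once the check completes cleanly
reboot
```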

    Resetting the VM solved my problem (in my case, Ubuntu was running in VirtualBox).

  • HBruijn (correct answer, 6 years ago)

    The default behaviour for most Linux file systems is to safeguard your data. When the kernel detects an error in the storage subsystem it will make the filesystem read-only to prevent (further) data corruption.

    You can tune this somewhat with the mount option errors={continue|remount-ro|panic}, which is documented in the system manual (man mount).
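    As an illustration, the three choices go in the fourth (options) field of the fstab entry; only one is used at a time. The first line is the questioner's own entry, the commented-out alternatives are hypothetical:

```
/dev/mapper/debian-root  /  ext4  errors=remount-ro  0  1
# errors=continue  -> keep going despite errors (risks further corruption)
# errors=panic     -> halt the kernel immediately on error
```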

    When your root file-system encounters such an error, most of the time the error won't be recorded in your log-files, as they will now be read-only too. Fortunately, since it is a kernel action, the original error message is recorded in memory first, in the kernel ring buffer. Unless it has already been flushed from memory, you can display the contents of the ring buffer with the dmesg command.
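    For example, a quick filter over the ring buffer; the pattern is only an illustration, and device names (dm-0 here, taken from the question) will differ on your system:

```shell
# Pull filesystem errors and I/O errors out of the kernel ring buffer.
# "|| true" makes an empty result (no matches) a non-error.
dmesg | grep -iE 'ext4-fs (error|warning)|i/o error' || true
```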

    Most real hard disks support SMART and you can use smartctl to try and diagnose the disk health.
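    A sketch of such a diagnosis, assuming the disk is /dev/sda and the smartmontools package is installed (both are assumptions, not details from the thread):

```shell
# Quick overall health verdict (requires root)
smartctl -H /dev/sda

# Full attribute dump; watch Reallocated_Sector_Ct, Current_Pending_Sector
# and Offline_Uncorrectable -- non-zero raw values suggest a failing disk
smartctl -a /dev/sda

# Run a short self-test, then read the result a few minutes later
smartctl -t short /dev/sda
smartctl -l selftest /dev/sda
```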

    Depending on the error messages, you may decide it is still safe to use the file system and return it to read-write condition with mount -o remount,rw /

    In general though, disk errors are a precursor to complete disk failure. Now is the time to create a back-up of your data or to confirm the status of your existing back-ups.
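    As a minimal sketch of an ad-hoc backup while the filesystem is still readable, rsync can copy the root filesystem elsewhere; the user, host, and destination path below are placeholders:

```shell
# -a preserves permissions/timestamps, -H hard links, -A ACLs, -X extended
# attributes; --one-file-system stops rsync descending into /proc, /sys
# and other separately mounted filesystems.
rsync -aHAX --one-file-system / backupuser@backuphost:/srv/backups/debian-root/
```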

    Yes, I have backups of the data. Could you please look at my question again? I found something in dmesg and made a small edit to my question.

    Typically I would expect those ext4 errors to be surrounded by errors relating to I/O or the device, as most likely the problem is not the filesystem as such but the underlying disk. See for instance

    One more question: could it be because of mounted partitions (SAN/NAS storage)? Of course, I have them defined in my fstab file.

    In my experience only the filesystem that suffered the I/O errors gets mounted read-only; neither the other partitions nor remote shares should get remounted read-only.

    We did mount -o remount,rw / and then ran chmod on the file, which worked for us. When done with your changes, run mount -o remount,ro / to take the file system back to read-only.

    In my case, I found that an invalid `fstab` can prevent the kernel from mounting the drive as `rw`. I built my system from scratch, and `dmesg` helped me figure out that I had typed the wrong UUID into the fstab.

    I saw this after an OS upgrade (14.04.4 -> 16.04.1) where something in the upgrade changed the UUID of the partition my root was stored on. I wasn't able to run `mount -o remount,rw /` because it couldn't find the correct UUID to mount. However, I was able to run `mount -o remount,rw /dev/sdf1 /` after determining from `lsblk` that my root was actually mounted on /dev/sdf1. Then I could update my /etc/fstab with the correct UUID found by running `blkid`, and everything was back to normal.
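    The recovery steps described above can be sketched as follows; the device /dev/sdf1 is specific to that machine, so find yours first:

```shell
# Find the device actually mounted at / (two independent ways to check)
findmnt /
lsblk -f

# Remount read-write by device node, bypassing the stale UUID in fstab
mount -o remount,rw /dev/sdf1 /

# Look up the partition's real UUID, then correct the entry in /etc/fstab
blkid /dev/sdf1
```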

    If `mount -o remount,rw /` refuses to work with error `mount point not mounted or bad option`, a `sudo fsck /dev/rootpartition` followed by `sudo reboot` may do the trick.

License under CC-BY-SA with attribution

Content dated before 6/26/2020 9:53 AM