Why are hard links not allowed for directories?
I am using Ubuntu 12.04. When I try to create a hard link to any directory, it fails. I can create hard links to files within the same filesystem, and I already know why hard links cannot cross filesystem boundaries.
I tried these commands:
$ ln /Some/Direcoty /home/nischay/Hard-Directory
hard link not allowed for directory
$ sudo ln /Some/Direcoty /home/nischay/Hard-Directory
[sudo] password for nischay:
hard link not allowed for directory
I just want to know the reason behind this. Is it the same for all GNU/Linux distributions and Unix flavours (BSD, Solaris, HP-UX, IBM AIX), or only in Ubuntu or Linux?
Try `ln -F ` and it _might_ work. Certainly, it used to work for the superuser in older versions of Unix. Does anyone remember whether that was UCB or System V? Yes, bad things could happen, but usually not. As I recall, `rmdir` knew not to carry on deleting past a hard link. However, users could get confused and delete things in error.
@StevePitchers How could `rmdir` handle hard links in a special way? A hard link is just a normal directory entry, only an additional one. It is not even easy to find out whether extra links exist at all without extra bookkeeping.
Each inode stores the number of hard links that point to it: the contents are only released once there are no remaining links. So `rmdir` can tell whether the directory has links from other places. Recursive removal, `rm -r`, must be coded with care, to be sure it acts correctly even when errors like "permission denied" occur. BTW, UCB = BSD, doh!
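A small illustrative sketch of that link counting (the directory paths come from `mktemp` and are arbitrary; the classic convention shown here holds on ext4 and tmpfs, but not on every filesystem, e.g. btrfs reports 1 for directories):

```shell
# Each directory's link count reflects its own "." entry plus the ".."
# entry of every subdirectory (on conventional filesystems).
tmp=$(mktemp -d)
before=$(stat -c '%h' "$tmp")   # typically 2: its name plus its own "."
mkdir "$tmp/sub"
after=$(stat -c '%h' "$tmp")    # typically 3: the child's ".." adds one
echo "$before -> $after"
```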
Directory hardlinks break the filesystem in multiple ways
They allow you to create loops
A hard link to a directory can link to a parent of itself, which creates a filesystem loop. For example, these commands could create a loop with the back link `l`:
mkdir -p /tmp/a/b
cd /tmp/a/b
ln -d /tmp/a l
A filesystem with a directory loop has infinite depth:
Avoiding an infinite loop when traversing such a directory structure is somewhat difficult (though, for example, POSIX requires `find` to avoid this).
A file system with this kind of hard link is no longer a tree, because a tree must not, by definition, contain a loop.
They break the unambiguity of parent directories
With a filesystem loop, multiple parent directories exist:
cd /tmp/a/b
cd /tmp/a/b/l/b
In the first case, `/tmp/a` is the parent directory of `/tmp/a/b`.

In the second case, `/tmp/a/b/l` is the parent directory of `/tmp/a/b/l/b`, which is the same directory as `/tmp/a/b`. So it has two parent directories.
They multiply files
Files are identified by paths, after resolving symlinks. So `/tmp/a/b/foo.txt` and `/tmp/a/b/l/b/foo.txt` are different files.
There are infinitely many further paths to the same file. They are the same in terms of their inode number, of course. But if you do not explicitly expect loops, there is no reason to check for that.
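Directory hard links cannot actually be created, but the same effect can be sketched with a plain file: two different paths naming one inode (the file names below are made up):

```shell
tmp=$(mktemp -d)
echo data > "$tmp/original"
ln "$tmp/original" "$tmp/other-name"    # a second name for the same inode
# Both paths report the same inode number:
stat -c '%i %n' "$tmp/original" "$tmp/other-name"
```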
A directory hardlink can also point to a child directory, or to a directory that is neither a child nor a parent of any depth. In this case, a file that is a child of the link would be replicated as two files, identified by two paths.
$ ln /Some/Direcoty /home/nischay/Hard-Directory
$ echo foo > /home/nischay/Hard-Directory/foobar.txt
$ diff -s /Some/Direcoty/foobar.txt /home/nischay/Hard-Directory/foobar.txt
$ echo bar >> /Some/Direcoty/foobar.txt
$ diff -s /Some/Direcoty/foobar.txt /home/nischay/Hard-Directory/foobar.txt
$ cat /Some/Direcoty/foobar.txt
foo
bar
How can soft links to directories work then?
A path that may contain softlinks, and even soft-linked directory loops, is often used just to identify and open a file. Such a path can be used as a normal, linear path.
But there are other situations, when paths are used to compare files. In this case, the symbolic links in the path can be resolved first, converting it to a minimal, commonly agreed upon representation: a canonical path.
This is possible, because the soft links can all be expanded to paths without the link. After doing that with all soft links in a path, the remaining path is part of a tree, where a path is always unambiguous.
`readlink` can resolve a path to its canonical path:
$ readlink -f /some/symlinked/path
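As a concrete sketch, reusing the soft-linked loop layout from earlier (the paths are arbitrary `mktemp` output): canonicalization folds the looping path back into the plain tree:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b"
ln -s "$tmp/a" "$tmp/a/b/l"     # a soft-linked loop, which is allowed
# The looping path and the plain path canonicalize to the same name:
readlink -f "$tmp/a/b/l/b"
readlink -f "$tmp/a/b"
```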
Soft links are different from what the filesystem uses
A soft link cannot cause all the trouble because it is different from the links inside the filesystem. It can be distinguished from hard links, and resolved to a path without symlinks if needed.
In some sense, adding symlinks does not alter the basic file system structure - it keeps it, but adds more structure like an application layer.
NAME
       readlink - print resolved symbolic links or canonical file names

SYNOPSIS
       readlink [OPTION]... FILE...

DESCRIPTION
       Print value of a symbolic link or canonical file name

       -f, --canonicalize
              canonicalize by following every symlink in every component of
              the given name recursively; all but the last component must
              exist
       [ ... ]
@Tanay Right, it could help the explanation to compare it to similar cases with soft links. I'll try.
Exactly how does this pertain to only directories? The way I understand it, these problems are also a problem for hardlinked files too. Moreover, I see hardlinking as an easy way to change a given directory's permission to allow others inside, without having to allow them inside the parent chain too. Sounds _very_ useful if you don't have the ability to add/modify groups...
Sounds very interesting! But I know nothing about what the issue is - can you give me a hint?
"You generally should not use hard links anyway" is over-broad. You need to understand the difference between hard links and symlinks, and use each as appropriate. Each comes with its own set of advantages and disadvantages.

Symbolic links can:
- Point to directories
- Point to non-existent objects
- Point to files and directories outside the same filesystem
Hard links can:
- Keep the file that they reference from being deleted
Hard links are especially useful in performing "copy on write" applications. They allow you to keep a backup copy of a directory structure, while only using space for the files that change between two versions.
`cp -al` is especially useful in this regard. It makes a complete copy of a directory structure, where all the files are represented by hard links to the original files. You can then proceed to update files in the structure, and only the files that you update will take up additional space. This is especially useful when maintaining multi-generational backups.
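A minimal sketch of that pattern (the paths are made up; `cp -al` and `stat -c` are GNU coreutils). The old generation only survives if changed files are replaced rather than edited in place, since an in-place edit writes through the shared inode:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/v1"
echo one > "$tmp/v1/file"
cp -al "$tmp/v1" "$tmp/v2"        # hard-linked copy: no file data duplicated
stat -c '%h' "$tmp/v1/file"       # link count 2: both trees share the inode
# Replace the file (do not edit in place!) so v1 keeps the old content:
echo two > "$tmp/v2/file.new"
mv "$tmp/v2/file.new" "$tmp/v2/file"
cat "$tmp/v1/file"                # still the old generation: "one"
```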
Regarding the last paragraph: if you edit a "copied" hardlinked file, the original file is also changed; see http://unix.stackexchange.com/questions/70531/cp-al-snapshot-whose-hard-links-get-directed-to-a-new-file-when-edited
This description of hard links is rather misleading. It's basically true that hard links "keep the file that they reference from being deleted", but that's just a side effect of hard links. It's certainly NOT true that you can create hard links in one directory, change the "original" file, and then expect the hard links to somehow point to the old content. In fact, the guiding truth of hard links is the fact that it's not a link at all, at least not any more so than the original "file", which is just a name pointing to a file. **A hard link is simply another name pointing to the same file.**
The backup idea is good and I actually use that a lot, but I think users should be warned that changing a file will also change the backup.
Heck, a symlink need not point to anything at all. `ln -s "Don't use this directory" README` is legitimate. In fact, if you think about it, a directory can be used as a relational database and not contain any actual files at all.
A bit off-topic, but if you're looking for a backup solution that leverages links take a look at https://github.com/laurent22/rsync-time-backup -- it creates point-in-time snapshots that will hardlink unchanged files
FYI, you can achieve much the same thing as a directory hard link by using a bind mount:

mount --bind /var/www /home/user/workspace/www
This is very dangerous, because most tools and programs will not be aware of the bind mount. I once did something like the above example and then proceeded to `rm -rf /home/user`. Luckily, there was nothing relevant in `/var/www`.
If you only need to mount for read, you can set permissions on the mount point and avoid the `rm -rf` problem. https://superuser.com/questions/320415/linux-mount-device-with-specific-user-rights
The reason hard-linking directories is not allowed is a little technical. Essentially, they break the filesystem structure. You should generally not use hard links anyway. Symbolic links allow most of the same functionality without causing problems (e.g. `ln -s target link`).
Hard links have good use cases. Saying you should generally not use them is a little too broad.
+2 for providing the link that actually answers the OP's question (and mine), -1 for emitting an opinion ("You should generally not use hard links anyway" - if it had links to support it, it would be ok). Which was good, because I can't give +2 anyway. ;D
Try to summarize the contents of the link in the answer, and keep the link as a reference. This is a Stack Exchange good practice, to avoid link rot. Thanks.
well done @CharlieParker; best unintended (?) ironic comment of all times :)