How to find (and delete) duplicate files
I have a largish music collection and there are some duplicates in there. Is there any way to find duplicate files? At a minimum by doing a hash and seeing if two files have the same hash.
Bonus points for also finding files with the same name apart from the extension - I think I have some songs with both mp3 and ogg format versions.
I'm happy using the command line if that is the easiest way.
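For the bonus part (same basename, different extension), a plain-coreutils sketch works: strip the extension from each filename and look for repeats. The demo directory below is a hypothetical stand-in for the real music folder.

```shell
# Sketch: list basenames that appear with more than one extension
# (a throwaway demo tree stands in for the music collection).
dir=$(mktemp -d)
touch "$dir/song.mp3" "$dir/song.ogg" "$dir/other.mp3"

find "$dir" -type f -printf '%f\n' \
  | sed 's/\.[^.]*$//' \
  | sort | uniq -d      # prints: song
```

This only compares names, not contents, so it is a complement to the hash-based tools below, not a replacement.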
You can use `fdupes` for this. It is a command-line program which can be installed from the repositories with `sudo apt install fdupes`. You can call it like `fdupes -r /dir/ect/ory` and it will print out a list of dupes. fdupes also has a simple homepage and a Wikipedia article, which lists some more programs.
It also has a "-d" option that lets you choose which copy you want to keep, and deletes the other ones (or you can keep all of them if you want).
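A minimal sketch of those invocations, using a throwaway demo tree (the run is guarded because the fdupes package may not be installed):

```shell
# Hypothetical demo tree with one duplicate pair.
dir=$(mktemp -d)
printf 'same bytes\n' > "$dir/one.mp3"
cp "$dir/one.mp3" "$dir/two.mp3"

# Typical fdupes invocations:
if command -v fdupes >/dev/null 2>&1; then
  fdupes -r "$dir"      # recursively list sets of duplicate files
  # fdupes -rd "$dir"   # same, but prompt for which copy of each set to keep
fi
```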
Is it possible for fdupes to list duplicate folders instead of duplicate files?
Can you explain in more detail how to delete all duplicates (leaving only a single copy each file) in a recursive directory tree? I want to do this automatically, that is, without having to specify each time which file to keep. It should just select one of the duplicates.
I added a little more explanation about the command here: http://stackoverflow.com/a/31630565/54964. Similarly, I would like some static file that remembers my earlier choices, so that next time I do not have to say again which duplicates not to remove.
FSlint has a GUI and some other features. The explanation of the duplicate checking algorithm from their FAQ:
1. exclude files with unique lengths
2. handle files that are hardlinked to each other
3. exclude files with unique md5(first_4k(file))
4. exclude files with unique md5(whole file)
5. exclude files with unique sha1(whole file) (in case of md5 collisions)
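The size-first idea behind that algorithm can be sketched in plain shell — a rough illustration of steps 1 and 4 only, not FSlint's actual code, using a hypothetical demo tree:

```shell
# Demo: two files with identical content, one with a unique size.
dir=$(mktemp -d)
printf 'aaaa' > "$dir/x"
printf 'aaaa' > "$dir/y"
printf 'bb'   > "$dir/z"

# Step 1: keep only sizes that occur more than once.
for s in $(find "$dir" -type f -printf '%s\n' | sort -n | uniq -d); do
  # Step 4: hash only those candidates; repeated hashes mark duplicates.
  find "$dir" -type f -size "${s}c" -exec md5sum {} + | sort | uniq -w32 -D
done
```

Filtering by size first is the cheap win: most non-duplicates are eliminated without reading file contents at all.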
Thanks. Note that the command name is "fslint-gui", and the command line tools are not in $PATH by default - they are in /usr/share/fslint/fslint. I was confused when I didn't get help on which package it was in by just running fslint (via /usr/lib/command-not-found).
Here is a list of programs, scripts, and bash solutions that can find duplicates and run under Linux:
- dupedit: Compares many files at once without checksumming. Avoids comparing files against themselves when multiple paths point to the same file.
- dupmerge: runs on various platforms (Win32/64 with Cygwin, *nix, Linux etc.)
- dupseek: Perl with algorithm optimized to reduce reads.
- fdf: Perl/C-based; runs across most platforms (Win32, *nix and probably others). Uses MD5, SHA1 and other checksum algorithms.
- freedups: shell script, that searches through the directories you specify. When it finds two identical files, it hard links them together. Now the two or more files still exist in their respective directories, but only one copy of the data is stored on disk; both directory entries point to the same data blocks.
- fslint: has command line interface and GUI.
- liten: Pure Python deduplication command line tool, and library, using md5 checksums and a novel byte comparison algorithm. (Linux, Mac OS X, *nix, Windows)
- liten2: A rewrite of the original Liten, still a command line tool but with a faster interactive mode using SHA-1 checksums (Linux, Mac OS X, *nix)
- rdfind: One of the few which rank duplicates based on the order of input parameters (directories to scan) in order not to delete in "original/well known" sources (if multiple directories are given). Uses MD5 or SHA1.
- rmlint: Fast finder with command line interface and many options to find other lint too (uses MD5). Since 18.04 LTS there is an rmlint-gui package with a GUI (launched with `rmlint --gui` or from the desktop launcher named Shredder Duplicate Finder).
- ua: Unix/Linux command line tool, designed to work with find (and the like).
- findrepe: free Java-based command-line tool designed for an efficient search of duplicate files; it can search within zips and jars. (GNU/Linux, Mac OS X, *nix, Windows)
- fdupe: a small script written in Perl, doing its job fast and efficiently.
- ssdeep: identify almost identical files using Context Triggered Piecewise Hashing
Are any of these programs able to find duplicate folders (not just duplicate files?)
If your deduplication task is music related, first run the picard application to correctly identify and tag your music (so that you find duplicate .mp3/.ogg files even if their names are incorrect). Note that picard is also available as an Ubuntu package.
That done, based on the `musicip_puid` tag you can easily find all your duplicate songs.
Another script that does this job is rmdupe. From the author's page:
rmdupe uses standard linux commands to search within specified folders for duplicate files, regardless of filename or extension. Before duplicate candidates are removed they are compared byte-for-byte. rmdupe can also check duplicates against one or more reference folders, can trash files instead of removing them, allows for a custom removal command, and can limit its search to files of specified size. rmdupe includes a simulation mode which reports what will be done for a given command without actually removing any files.
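The byte-for-byte confirmation step rmdupe describes can be done with `cmp` from standard Linux tooling — a sketch with hypothetical stand-in files, not rmdupe's own code:

```shell
# Demo files: a/b are identical, c differs.
dir=$(mktemp -d)
printf 'same\n' > "$dir/a"
printf 'same\n' > "$dir/b"
printf 'not\n'  > "$dir/c"

cmp -s "$dir/a" "$dir/b" && echo "a == b"   # identical: safe to remove one copy
cmp -s "$dir/a" "$dir/c" || echo "a != c"   # differ: not duplicates
```

Unlike a hash comparison, `cmp` cannot produce a false positive, which is why it makes sense as the final check before anything is deleted.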
For music-related duplicate identification and deletion, Picard and Jaikoz by http://musicbrainz.org/ are the best solution. Jaikoz, I believe, automatically tags your music based on the data of the song file. You don't even need the name of the song for it to identify it and assign all metadata. The free version can tag only a limited number of songs in one run, but you can run it as many times as you want.