CONVMV(1)                                                            CONVMV(1)
convmv - converts filenames from one encoding to another
convmv [options] FILE(S) ... DIRECTORY(S)
Example:
convmv -f latin1 -t utf-8 -r --exec "echo #1 should be renamed to #2" path/to/files
ntfs-sfm(-undo), ntfs-sfu(-undo) for the mapping of illegal NTFS characters for Linux or Macintosh CIFS clients (see MS KB 117258; see also the mapchars mount option of mount.cifs on Linux).
ntfs-pretty(-undo) for the mapping of illegal NTFS characters to pretty legal Japanese versions of them.
See the map_get_newname() function for how to easily add your own mappings if needed. Let me know if you think convmv is missing a useful mapping here.
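For example, assuming the mapping is selected with the "--map" option (as in recent versions of convmv), undoing the SFM mapping on files copied from a Macintosh CIFS client might look like this (convmv only shows what it would do unless "--notest" is added):

convmv --map=ntfs-sfm-undo -r path/to/files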
By the way: The superscript dot of the letter i was added in the Middle Ages to distinguish the letter (in manuscripts) from adjacent vertical strokes in such letters as u, m, and n. J is a variant form of i which emerged at this time and subsequently became a separate letter.
convmv is meant to help convert a single filename, a directory tree and the contained files, or a whole filesystem into a different encoding. It only converts the filenames, not the content of the files. A special feature of convmv is that it also takes care of symlinks: if a symlink target is converted, the symlink's target pointer is converted as well.
All this comes in very handy when one wants to switch over from old 8-bit locales to UTF-8 locales. It is also possible to convert directories to UTF-8 which are already partly UTF-8 encoded. convmv is able to detect whether certain files are already UTF-8 encoded and will skip them by default. To turn this smartness off, use the "--nosmart" switch.
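Example: converting a tree from Latin-1 to UTF-8 while the smart detection skips names that are already valid UTF-8 (the path is a placeholder; add "--notest" to actually rename):

convmv -f iso-8859-1 -t utf-8 -r /path/to/tree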
Almost all POSIX filesystems do not care how filenames are encoded; here are some exceptions:
HFS+ on OS X / Darwin
Linux and (most?) other Unix-like operating systems use the so-called normalization form C (NFC) for their UTF-8 encoding by default but do not enforce this. HFS+ on the Macintosh OS enforces normalization form D (NFD), where a few characters are encoded in a different way. On OS X it's not possible to create NFC UTF-8 filenames because this is prevented at the filesystem layer. On HFS+ filenames are internally stored in UTF-16, and when they are converted back to UTF-8 (because the Unix-based OS can't deal with UTF-16 directly), NFD is created for whatever reason. See http://developer.apple.com/qa/qa2001/qa1173.html for details. I think it was a very bad idea and breaks many things under OS X which expect a normal POSIX-conforming system. Anyway, convmv is able to convert files from NFC to NFD or vice versa, which makes interoperability with such systems a lot easier.
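Example: assuming filenames were copied from an HFS+ volume and are therefore NFD-encoded UTF-8, they could be normalized to NFC for use on Linux like this (dry run; add "--notest" to actually rename):

convmv -f utf-8 -t utf-8 --nfc -r /path/copied/from/hfs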
APFS on macOS
Apple, with the introduction of APFS in macOS 10.13, gave up imposing NFD on user space. But once you have enforced NFD there is no easy way back without breaking existing applications, so they had to make APFS normalization-insensitive: a file can be created in NFC or NFD in the filesystem and it can be accessed using either form. Under the hood APFS stores hashes of the normalized form of the filename to provide normalization insensitivity. Sounds like a great idea? Let's see: if you readdir a directory, you will get back the files in the normalization form that was used when those files were created. If you stat a file in NFC or in NFD form, you will get back whatever normalization form you used in the stat call. So user space applications can't expect that a file that can be stat'ed and accessed successfully is also part of directory listings, because the returned normalization form is faked to match what the user asked for. In theory, user space will also have to normalize strings all the time. This is the same problem as with case insensitivity of filenames before, which still breaks many user space applications; it's just that the latter was much more obvious to spot and to handle than this. So long, and thanks for all the fish.
JFS
If people mount JFS partitions with iocharset=utf8, there is a similar problem, because JFS is designed to store filenames internally in UTF-16, too; that is because Linux' JFS is really JFS2, which was a rewrite of JFS for OS/2. JFS partitions should always be mounted with iocharset=iso8859-1, which is also the default with recent 2.6.6 kernels. If this is not done, JFS does not behave like a POSIX filesystem and it might happen that certain files cannot be created at all, for example filenames in ISO-8859-1 encoding. Only when interoperability with OS/2 is needed should iocharset be set according to your locale's charmap.
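A minimal example of such a mount (device and mount point are placeholders):

mount -t jfs -o iocharset=iso8859-1 /dev/sdXN /mnt/jfs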
NFS4
Unlike other POSIX filesystems, RFC 3530 (NFS 4) mandates UTF-8 but also says: "The nfs4_cs_prep profile does not specify a normalization form. A later revision of this specification may specify a particular normalization form." In other words, if you want to use NFS4 you might find the conversion and normalization features of convmv quite useful.
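Example: converting a legacy Latin-1 tree to NFC-normalized UTF-8 in one step before exporting it over NFSv4 (the path is a placeholder; add "--notest" to actually rename):

convmv -f iso-8859-1 -t utf-8 --nfc -r /srv/export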
FAT/VFAT and NTFS
NTFS and VFAT (for long filenames) use UTF-16 internally to store filenames. You should not need to convert filenames if you mount one of those filesystems. Use appropriate mount options instead!
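For example, a VFAT volume can be mounted so that the kernel presents its UTF-16 filenames as UTF-8 (device and mount point are placeholders; option names may differ between drivers and kernel versions):

mount -t vfat -o utf8 /dev/sdXN /mnt/usb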
Sometimes it might happen that you "double-encoded" certain filenames, for example if the file names were already UTF-8 encoded and you accidentally did another conversion from some charset to UTF-8. You can simply undo that by converting the other way round. The from-charset has to be UTF-8 and the to-charset has to be the from-charset you previously used by accident. If you use the "--fixdouble" option, convmv makes sure that only files which will still be UTF-8 encoded after the conversion are processed, and it leaves non-UTF-8 files untouched. You should check that you get the correct results by first doing the conversion without "--notest"; the "--qfrom" option might also be helpful, because double-encoded UTF-8 file names can screw up your terminal when they are printed - they often contain control sequences which do funny things to your terminal window. If you are not sure which charset was accidentally converted from, using "--qfrom" is a good way to figure out the required encoding without destroying the file names for good.
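Example: assuming the names were already UTF-8 and were accidentally converted from Latin-1 to UTF-8 once more, the double encoding could be undone like this (dry run; add "--notest" to actually rename):

convmv -f utf-8 -t iso-8859-1 --fixdouble --qfrom -r /path/to/files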
When the "character set" variable has not been set correctly in the smb.conf of Samba 2.x, files created by Win* clients are stored in the client's codepage, e.g. cp850 for Western European languages. As a result, files containing non-ASCII characters look screwed up if you "ls" them on the Unix server. If you later change the "character set" variable to iso8859-1, newly created files are okay, but the old files are still stored in the Windows encoding. In this case convmv can be used to convert the old Samba-shared files from cp850 to iso8859-1.
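A possible invocation for this scenario (the share path is a placeholder; add "--notest" to actually rename):

convmv -f cp850 -t iso-8859-1 -r /srv/samba/share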
By the way: Samba 3.x finally maps to UTF-8 filenames by default, so when you migrate from Samba 2 to Samba 3 you might also have to convert your file names.
When Netatalk is switched to UTF-8, which is supported since version 2, it is NOT sufficient to rename the file names; more needs to be done. See http://netatalk.sourceforge.net/2.0/htmldocs/upgrade.html#volumes-and-filenames and the uniconv utility of Netatalk for details.
no bugs or fleas known
You can support convmv by doing a donation, see <https://www.j3e.de/donate.html>
Bjoern JACKE
Send mail to bjoern [at] j3e.de for bug reports and suggestions.
perl v5.32.0                                                        2021-01-01