Don't worry about not getting back to me quickly. I'm certainly in no hurry!
The file "names.csv" is Windows 1252. I can only tell that because that is the interpretation that yields the most correct file - there isn't anything in the file data that explicitly marks it as a Windows 1252 file. The "Ó" character is available in Windows 1252, so it is encoded correctly. The "Č" is not available in Windows 1252, and it is encoded as a question mark. The question mark in "?apek" is appearing because it's a question mark in the file, not because your software is reading a character it cannot understand or represent. Rather, it is actually encoded in the file as a literal question mark - what you would expect to be in the file if the person's name were actually "?apek." The software (I'm assuming Excel) that is saving the file is using Windows 1252, and its method for recording an unencodable character is to change it to a question mark. A more useful solution would be for it to replace the character with the best available substitute in Windows 1252, since "C" is a better representation of "Č" than "?" is. Most software does not do this, however, and you get the question marks - typically with a warning that the file could not be correctly encoded, but I wouldn't be surprised if Excel cannot warn you.
The files "encoding test.txt" and "encoding test.csv" are identical files, encoded as UTF-8 (without anything in the file data to mark them as such), and your software is interpreting them as Windows 1252.
The file "names.xlsx" is of course an Excel file. The names are encoded within that file as UTF-8, and your software appears to be interpreting it correctly, with the exception of characters that would not be encodable if the data were converted to Windows 1252 encoding.
The file "names.xlsx.txt" is UTF-16LE encoded, and does contain a "byte order mark" at the beginning which marks it as UTF-16LE. UTF-16 uses two bytes for every character, and UTF-16LE means "little endian" - the bytes are stored with the least-significant byte last.
It looks like your software expects most files to be Windows 1252, but recognizes the UTF-16LE byte order mark and changes its interpretation to correctly import such files. Since you're seeing "Capek" instead of "Čapek" when importing the UTF-16LE file, one of two things is probably happening. The software's internal data encoding may be Windows 1252, and it is changing "Č" to "C" on import because there is no way of representing "Č" in Windows 1252. If that is the case, your software is degrading more gracefully to "C" instead of "?," unlike what Excel did when saving your Windows 1252 CSV file. --OR-- the character really is "Č" but is being rendered in a font that uses a glyph identical to "C" for that character. If a font designer didn't feel like drawing a bunch of seldom-used glyphs, they may have duplicated the plain roman glyphs for those characters rather than leaving them undefined. That is probably less likely than the first scenario, since your Excel file import yields a question mark.
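For what it's worth, that more graceful substitution is usually done by decomposing the character and dropping the accent. I don't know that your software does exactly this, but here's the common trick, sketched in Python:

    import unicodedata

    # Decompose "Č" into "C" plus a combining caron, then drop anything non-ASCII.
    decomposed = unicodedata.normalize("NFD", "Čapek")
    decomposed.encode("ascii", "ignore").decode("ascii")  # 'Capek'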
In either case, your software is behaving a little differently when importing an Excel file versus a UTF-16LE text file. The Excel file is UTF-8, and it is reading that correctly. The text file is UTF-16LE, and it is reading that correctly too. However, for the Excel file it is changing the "Č" to a "?," while for the UTF-16LE file it is changing it to "C." Perhaps the developers used a third-party library for the part that interprets Excel data, that library converts to Windows 1252 and does the more common question-mark substitution, while their own import path for UTF-16LE text is a bit smarter about it.
There probably isn't a solution to make your software show "Č" unless it's a font problem and you can change the font, or it's converting to Windows 1252 and there is a preference somewhere that can change that. If "C" is acceptable, you can go the UTF-16LE route. The downside with UTF-16 encodings is that your file size will be roughly double what it would be with UTF-8 for mostly-Latin text like yours, and if you need to open those files with other software, it may not handle them well. (A UTF-8 file would look fine in unaware applications except for the non-ASCII characters, and even those would probably still be saved back correctly.)
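The size difference is easy to check for yourself; for a mostly-Latin string:

    text = "Hello, Čapek"          # 12 characters
    len(text.encode("utf-8"))      # 13 bytes - ASCII stays 1 byte each, "Č" takes 2
    len(text.encode("utf-16-le"))  # 24 bytes - every character takes 2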
I've attached a new version of the test file. It is UTF-8 as before, but includes a UTF-8 byte order mark, which may cause your software to recognize it correctly. Byte order marks are not generally recommended for UTF-8 files, but this could be a way to get your software to detect the encoding if it doesn't have import options that would force it.
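If you ever end up regenerating such files yourself, Python's "utf-8-sig" codec writes that byte order mark for you (the file name here is just made up for the example):

    # Writes the three BOM bytes EF BB BF, then the UTF-8 text.
    with open("names-utf8-bom.csv", "w", encoding="utf-8-sig", newline="") as f:
        f.write("Karel Čapek\n")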