Your comments
Hello Notarobot,
Unfortunately I don't know the internals of Kuboo, so I have no idea what the problem is here (since it seems to be on the client's side).
I still have interest in the project, and I have spent too much time on the next version not to publish it someday. The issue is time and energy, but I'll get there.
To be honest, right now I'm struggling to find time to finish the big refactoring I started a few years ago.
New features will come after that.
I don't like relying on GAFAM for authentication (because it means telling them each time you log in), but I guess it would ease the authentication setup process a lot.
Hello Majora2007, this forum is not the right place to keep promoting your tool again and again.
Please host the discussions around it in your own space, not here.
20220122 19:06:09 [main] INFO com.ubooquity.Ubooquity - Java version: 14.0.2
You're running Java 14.
Example of Ubooquity OPDS urls:
http://10.0.0.1:2202/opds-comics
http://10.0.0.1:2202/opds-books
The default is http, unless you have added a certificate to your Ubooquity installation.
I was working on Ubooquity as recently as last night, so no, not dead yet. ;)
Hello,
In the warning popup, the sentence "A full rescan of your comics will be done" is wrong and should have been "A full rescan of your books will be done".
But it does clean the books database, not the comics one.
Note that if you have not unchecked the "Scan collection at launch" option in the general settings, Ubooquity will launch a new scan as soon as the database is cleaned (as the server is automatically restarted afterwards).
No it does not. You are safe. :)
RAR is a proprietary format for which very few decompression libraries exist.
The library used by Ubooquity (the only one available in Java) has issues with some RAR files, and can't open RAR 5 files at all.
For comics there is no reason whatsoever to prefer RAR to Zip (cbz) anyway.
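To illustrate why cbz needs no special library: a .cbz file is just a standard ZIP archive with a different extension. This is not Ubooquity code, only a minimal Python sketch of packing a folder of page images into a cbz (the function name and paths are made up for the example):

```python
import os
import zipfile

def images_to_cbz(image_dir, cbz_path):
    """Pack a directory of page images into a .cbz archive.

    A .cbz is an ordinary ZIP file, so the standard zipfile module
    is enough. Pages are added in sorted name order so comic readers
    display them in the right sequence.
    """
    with zipfile.ZipFile(cbz_path, "w", zipfile.ZIP_DEFLATED) as cbz:
        for name in sorted(os.listdir(image_dir)):
            cbz.write(os.path.join(image_dir, name), arcname=name)
```

Any tool that produces a plain ZIP would work the same way, which is exactly why there is no reason to prefer the proprietary RAR container.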
The scan process principle is very simple. Ubooquity first lists files to get their paths and last modification dates; this is very fast, as the files themselves are not read, only the disk's "table of contents". It then compares each modification date with the one stored in its database during the previous scan, and processes only the files that have been modified since then (by "processes", I mean reading the file to extract the cover and metadata, which is the time-consuming operation).
So the first scan of a big collection will take a long time, but subsequent scans will be very fast if only a few files have been added.
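The incremental scan described above can be sketched as follows. This is not Ubooquity's actual Java implementation, just a minimal Python illustration of the principle, where `previous_mtimes` stands in for the modification dates kept in the database:

```python
import os

def incremental_scan(root, previous_mtimes):
    """Return (files to process, updated mtime index).

    previous_mtimes maps file path -> modification time seen at the
    last scan. Listing paths and mtimes only touches directory
    metadata, which is why this pass is fast; only new or modified
    files would then be opened to extract covers and metadata.
    """
    current = {}
    to_process = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            current[path] = mtime
            if previous_mtimes.get(path) != mtime:
                to_process.append(path)  # new or changed since last scan
    return to_process, current
```

On the first run `previous_mtimes` is empty, so everything is processed; on later runs only files whose modification date changed are, which matches the fast-subsequent-scan behavior described above.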