
That's strange. Are other Ubooquity files and folders present in your home directory as well? (cache, logs, preferences.xml...)
If so, this means you have started Ubooquity directly from your home directory (Ubooquity creates files and folders it needs in the "working directory", meaning the directory you are in when you launch it).
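In other words, the working directory is where you launched the program from, not where the jar file lives. A tiny illustrative sketch (a hypothetical helper, not Ubooquity's actual code):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class WorkingDir {
    // Resolves a data file against the JVM's launch directory, which is what
    // a program writing to its "working directory" effectively does.
    static Path resolveInWorkingDir(String name) {
        return Paths.get(System.getProperty("user.dir")).resolve(name);
    }

    public static void main(String[] args) {
        // If you launch the jar from your home directory, this points at
        // ~/preferences.xml, wherever the jar itself is stored.
        System.out.println(resolveInWorkingDir("preferences.xml"));
    }
}
```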
If not, there is something I missed.
Hi,

the server needs to be restarted for the new theme to appear in the dropdown menu.
Do you still see only the default theme after restarting?
I never tried, but since Ubooquity interactions with the underlying operating system are quite standard, I don't see why you couldn't.
The problem with resizing images upon extraction is that the resulting cached comic will be specific to one screen size, a size which can easily change, for instance when the browser window is resized.
And since I have to do the resizing on the fly, I might as well do the transcoding at the same time, since resizing an image requires transcoding it anyway.
Or I could decide that a width of 1200 or 1500px for a comic image is good enough for everybody (the Apple way of doing things ;)) and do the resizing/transcoding as soon as I extract the image.
Not my favorite solution but it might be the most efficient one in the end.
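Since resizing forces a re-encode anyway, the two steps naturally collapse into one. A minimal sketch with the standard ImageIO API, assuming a hypothetical fixed target width (this is not Ubooquity's actual code):

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ResizeTranscode {
    // Scales a page to a fixed width (preserving aspect ratio) and re-encodes
    // it as JPEG in a single pass, since scaling requires re-encoding anyway.
    static byte[] resizeToJpeg(BufferedImage src, int targetWidth) throws IOException {
        int targetHeight = Math.max(1, src.getHeight() * targetWidth / src.getWidth());
        BufferedImage dst = new BufferedImage(targetWidth, targetHeight, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(src, 0, 0, targetWidth, targetHeight, null);
        g.dispose();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(dst, "jpg", out);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // A blank 2400x3600 "page" stands in for an extracted comic image.
        BufferedImage page = new BufferedImage(2400, 3600, BufferedImage.TYPE_INT_RGB);
        byte[] jpeg = resizeToJpeg(page, 1200);
        System.out.println(jpeg.length + " bytes");
    }
}
```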

The problem is, if I allow usage of native commands to do the extraction, the extraction/resizing phase will have to be done in two passes. More complexity, more concurrency problems, more opportunities for failure.
Nothing impossible but longer to implement.

As for serving the original file, it has already been requested for WebP files (somewhere in this forum) and declined as I don't think of WebP as a viable format (at least until it gets support outside of Google products).
Although the performance increase is a much better reason for doing it, it would still be difficult, as it would require Ubooquity to know the type of each image in the comic archive when it generates the HTML for the online reader and the OPDS XML (the type is explicitly defined in the OPDS link).
I don't know how to do that efficiently yet.
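One cheap approximation would be to guess the MIME type from the entry's file name alone, without decoding anything. A hypothetical sketch (not how Ubooquity actually works):

```java
import java.util.Locale;

public class MimeGuess {
    // Guesses the MIME type to advertise in an OPDS link (or an HTML page)
    // from the archive entry's file name, without opening the image itself.
    static String mimeFromName(String entryName) {
        String n = entryName.toLowerCase(Locale.ROOT);
        if (n.endsWith(".jpg") || n.endsWith(".jpeg")) return "image/jpeg";
        if (n.endsWith(".png")) return "image/png";
        if (n.endsWith(".gif")) return "image/gif";
        if (n.endsWith(".webp")) return "image/webp";
        // Unknown extension: fall back to a generic type (or transcode).
        return "application/octet-stream";
    }

    public static void main(String[] args) {
        System.out.println(mimeFromName("page-001.WEBP"));
    }
}
```

The catch is that extensions can lie, so serving the original bytes based on the name alone would occasionally advertise the wrong type.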

Still not easy but I have a few ideas...
We agree on what a good streaming experience is, I just have to take the time to improve it. ;)

What I have in mind is not to extract the whole archive before serving the first page, but rather to serve each page as soon as it is available. This approach raises some problems: how to know when an image file is actually ready (not in the middle of extraction), how to manage concurrent requests on the same comic archive, how to manage eviction of extracted images (they can use a lot of disk space), etc.
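The "is the file ready?" problem has a classic answer: extract to a temporary file, then atomically rename it into place, so readers either see a complete file or no file at all. A sketch under that assumption (hypothetical helper, not Ubooquity code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicPublish {
    // Writes page data to a temporary ".part" file in the same directory,
    // then atomically renames it to its final name. A concurrent reader can
    // never observe a half-extracted image under the final name.
    static Path publish(Path dir, String name, byte[] data) throws IOException {
        Path tmp = Files.createTempFile(dir, name, ".part");
        Files.write(tmp, data);
        return Files.move(tmp, dir.resolve(name), StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("pages");
        Path p = publish(dir, "page-001.jpg", new byte[] {1, 2, 3});
        System.out.println(p);
    }
}
```

Concurrent requests and eviction are a separate story (some form of per-archive locking and an LRU-style cleanup of the cache directory would be needed).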

As for the speed of the extraction itself, from what I have read so far, zip extraction should be almost as fast as a native tool (I haven't tested it yet). Rar extraction is done using an old Java library that is not maintained anymore, so I wouldn't be surprised if its performance was not that good (I use CBR for testing only, as I think it is an almost completely useless file format for comics).

Last question: did the CBZ and CBR files you tested contain WebP images?
I'm a bit late to the party, but I'll try to provide some info.

First, regarding comic streaming performance, the easiest way to know whether the problem comes from Ubooquity is to read a comic directly in a browser instead of through Chunky, as the mechanism that serves the images is the same (except for preloading: Chunky preloads more pages). So if online reading is slow too, Ubooquity is the problem.

That being said, I'm pretty sure the performance problem comes from Ubooquity. The way image extraction is currently done is pretty naive. The archive is reopened for every image.
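To illustrate what "naive" means here, and an obvious improvement: below, the static method reopens the archive (and reparses its central directory) on every request, while the instance keeps the ZipFile open across requests. A simplified sketch with java.util.zip, not Ubooquity's actual code (closing and cache eviction are left out):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class CachedZip {
    // Naive approach: reopen the archive for every single page request.
    static byte[] readPageNaive(String archive, String entryName) throws IOException {
        try (ZipFile zip = new ZipFile(archive)) {
            ZipEntry entry = zip.getEntry(entryName);
            try (InputStream in = zip.getInputStream(entry)) {
                return in.readAllBytes();
            }
        }
    }

    // Better: keep the ZipFile open across requests, so the archive's
    // central directory is parsed only once per comic.
    private final ZipFile zip;

    CachedZip(String archive) throws IOException {
        this.zip = new ZipFile(archive);
    }

    byte[] readPage(String entryName) throws IOException {
        try (InputStream in = zip.getInputStream(zip.getEntry(entryName))) {
            return in.readAllBytes();
        }
    }
}
```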
Also I don't know how the Java libraries I use for extraction perform against native tools, but I suppose they might be slower (I'll have to do some tests).

I have thought about two main solutions for now:
  • Full comic extraction when a page is opened. The whole archive would be extracted in one pass. There are a lot of side effects to take into account though (mostly cache management and concurrent access) and the performance gain is not guaranteed. I need to test that too.
  • Use native tools. I could have Ubooquity call native "unzip" and "unrar" commands (at least on Linux). That way I would be sure to make the best of the available hardware. Could be used in conjunction with the first solution.
Let me know if you have other ideas.
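For the first solution, a rough sketch of what one-pass extraction could look like with the standard java.util.zip API (illustrative only; error handling, locking, and cache management are left out):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class FullExtract {
    // Walks the archive once and extracts every page into a cache directory
    // in a single pass, instead of reopening the archive per page.
    static int extractAll(Path archive, Path cacheDir) throws IOException {
        int count = 0;
        try (ZipFile zip = new ZipFile(archive.toFile())) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                if (entry.isDirectory()) continue;
                Path target = cacheDir.resolve(entry.getName());
                Files.createDirectories(target.getParent());
                try (InputStream in = zip.getInputStream(entry)) {
                    Files.copy(in, target);
                }
                count++;
            }
        }
        return count; // number of files extracted
    }
}
```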

As for the file permissions, Ubooquity only needs read access to the comic files. The only folder where it needs write permission is the one it is run from, so it can write the database, the thumbnails, etc. I don't see how write access on the comic files could have an impact on anything Ubooquity does. Could you send me your log file (in the "logs" directory)? If there are errors inside, it could give me a lead.
My address: tom "at" vaemendis.net
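The permission rule above can be summed up in two standard checks (a hypothetical helper, not part of Ubooquity):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class PermissionCheck {
    // The two requirements described above: read access to the comic file,
    // write access to the working directory (for the database, thumbnails...).
    static boolean canServe(Path comicFile, Path workingDir) {
        return Files.isReadable(comicFile) && Files.isWritable(workingDir);
    }
}
```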

I'm sorry I could not answer earlier, but I'm glad you found your answer.
Don't hesitate to ask here, there is no naive question. :)