Your comments
I'm a bit late to the party, but I'll try to provide some info.
First, regarding the comic streaming performance: the easiest way to know whether the problem comes from Ubooquity is to read a comic directly in a browser instead of using Chunky, as the mechanism that serves the images is the same (except for preloading: Chunky preloads more pages). So if online reading is also slow, Ubooquity is the problem.
That being said, I'm pretty sure the performance problem comes from Ubooquity. The way image extraction is currently done is pretty naive. The archive is reopened for every image.
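For illustration only, here is a minimal sketch of what that "reopen the archive for every page" pattern looks like with java.util.zip. The class and method names are hypothetical, not Ubooquity's actual code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Hypothetical sketch of the "naive" approach: the archive is opened,
// searched and closed again for every single page request.
public class NaivePageExtractor {

    public void writePage(String archivePath, String pageName, OutputStream out) throws IOException {
        // Opening the ZipFile re-reads the central directory every time.
        try (ZipFile zip = new ZipFile(archivePath)) {
            ZipEntry entry = zip.getEntry(pageName);
            if (entry == null) {
                throw new IOException("Page not found: " + pageName);
            }
            try (InputStream in = zip.getInputStream(entry)) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read); // stream the image to the response
                }
            }
        } // the archive is closed here; the next page starts over from scratch
    }
}
```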
Also I don't know how the Java libraries I use for extraction perform against native tools, but I suppose they might be slower (I'll have to do some tests).
I have thought about two main solutions for now:
- Full comic extraction when a page is opened. The whole archive would be extracted in one pass. There are a lot of side effects to take into account though (mostly cache management and concurrent access) and the performance gain is not guaranteed. I need to test that too.
- Use native tools. I could have Ubooquity call the native "unzip" and "unrar" commands (at least on Linux). That way I would be sure to make the best of the available hardware. This could be used in conjunction with the first solution (see the sketch below).
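As a rough idea of what the second option could look like, here is a sketch that shells out to unzip via ProcessBuilder. It assumes unzip is on the PATH, and the class name, cache layout and flags are illustrative rather than anything Ubooquity actually does:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: call the native "unzip" binary to extract a whole CBZ
// into a cache directory in one pass (Linux, unzip assumed to be on the PATH).
public class NativeUnzip {

    public static Path extractAll(Path cbzFile, Path cacheRoot) throws IOException, InterruptedException {
        Path target = Files.createTempDirectory(cacheRoot, "comic-");
        Process p = new ProcessBuilder(
                "unzip", "-qq", "-o",          // quiet, overwrite without asking
                cbzFile.toString(),
                "-d", target.toString())       // destination directory
                .redirectErrorStream(true)
                .start();
        if (p.waitFor() != 0) {
            throw new IOException("unzip exited with an error for " + cbzFile);
        }
        return target;
    }
}
```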
As for the file permissions, Ubooquity only needs read access to the comic files. The only folder where it needs write permission is the one it is run from, to be able to write the database, the thumbnails, etc. I don't know how write access on the comic files could have an impact on anything Ubooquity does. Could you send me your log file (in the "logs" directory)? If there are errors inside, it could give me a lead.
My address: tom "at" vaemendis.net
I'm sorry I could not answer earlier, but I'm glad you found your answer.
Don't hesitate to ask here, there is no naive question. :)
Well the cookie should expire after 30 days, but I realized there might be a bug there as my browser indicates that the cookie will expire at the end of my session. I'll investigate.
In any case, there is a server-side check which forces you to re-enter your credentials at least every 30 days.
These are arbitrary values that can easily be made configurable if needed. Just let me know if you have specific needs.
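If the bug is what the browser behaviour suggests (the cookie being created as a session cookie instead of a persistent one), the usual fix on the Java side is simply to set a max age on it. A generic servlet-style sketch, not Ubooquity's actual authentication code, and the cookie name is made up:

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class AuthCookies {

    private static final int THIRTY_DAYS_SECONDS = 30 * 24 * 60 * 60;

    // Without setMaxAge the cookie is a "session cookie" and disappears when
    // the browser closes, which matches the behaviour described above.
    public static void addAuthCookie(HttpServletResponse response, String token) {
        Cookie cookie = new Cookie("auth", token); // cookie name is illustrative
        cookie.setMaxAge(THIRTY_DAYS_SECONDS);     // persist for 30 days
        cookie.setHttpOnly(true);
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}
```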
These errors happen when the browser abruptly closes the connection. It can happen for a lot of reasons, but is not necessarily indicative of a problem (for instance, you get it when you cancel a download).
As for the icon, this is something I'll address when I revamp the online reader. It won't happen that quickly as I have some other tasks to complete first, but it's definitely on the list.
Bug found (stupid one, inherited from the first C# version of Ubooquity). It'll be fixed in the next version.
What I have in mind is not to extract the whole archive before serving the first page, but rather to serve each page as soon as it is available. This solution raises a few problems: how to know when an image file is actually ready (not in the middle of extraction), how to manage concurrent requests on the same comic archive, how to evict extracted images (they can use a lot of disk space), etc.
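One common way to handle the "is the image ready?" part is to extract each page to a temporary file and atomically rename it once it is complete; a page is then servable exactly when the final file exists. A minimal sketch under that assumption (the names are hypothetical, this is not Ubooquity's code, and it assumes the temporary and final files live on the same filesystem so the move can be atomic):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch: extract each image to a temporary ".part" file, then
// atomically rename it. A page is only visible under its final name once it is
// complete, so the HTTP handler can simply check for the file before serving it.
public class PageCache {

    public static void extractPage(InputStream pageData, Path finalPath) throws IOException {
        Path tmp = finalPath.resolveSibling(finalPath.getFileName() + ".part");
        Files.copy(pageData, tmp, StandardCopyOption.REPLACE_EXISTING);
        // Atomic move within the same filesystem: readers never see a half-written image.
        Files.move(tmp, finalPath, StandardCopyOption.ATOMIC_MOVE);
    }

    public static boolean isReady(Path finalPath) {
        return Files.exists(finalPath);
    }
}
```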
As for the speed of the extraction itself, from what I have read so far, zip extraction should be almost as fast as with a native tool (I haven't done the tests yet). Rar extraction is done with an old Java library that is not maintained anymore, so I wouldn't be surprised if its performance was not that good (I only use CBR for testing, as I think it is an almost completely useless file format for comics).
Last question: the CBZ and CBR files you tested, did they contain WEBP files?