
Since the rewrite is going to take some time, here is a workaround:

  1. Create a new theme (or use an existing one, as long as you can edit the pagereader.css file).
  2. Edit the pagereader.css file of the theme you are using and add the following section at the end of the file:
     #displayedpage {
        width: 100%;
     }
  3. Restart Ubooquity after checking that you are using the theme you just modified.
After this, your pages should always use 100% of the width of your screen, no more, no less.

I have not tested it on mobile devices though (lack of time), so let me know if you encounter problems applying this workaround.
(or if it works for you, that's interesting to know too :))
It's a good workaround.

Don't bother too much though: I have implemented the solution suggested above (a simple image file in the directory) as well as user-provided info through local HTML files in the folder.
They will be part of the next release (no date yet though).
I guess I could add an option to customize the displayed title.
The configuration would look like this, for instance:
{issue #} - {comic title}
and display:
8 - Harper's Story 
Not exactly what you requested (the title would still be displayed as a single string of characters), but more flexible in my opinion.
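
For illustration, here is a minimal sketch (in Java, since Ubooquity is a Java application) of how such a template could be expanded. The class name, the placeholder keys and the metadata map are assumptions made up for the example, not the actual Ubooquity code:

    import java.util.Map;

    public class TitleTemplate {

        // Replaces each {placeholder} token with the matching metadata value.
        static String format(String template, Map<String, String> metadata) {
            String result = template;
            for (Map.Entry<String, String> entry : metadata.entrySet()) {
                result = result.replace("{" + entry.getKey() + "}", entry.getValue());
            }
            return result;
        }

        public static void main(String[] args) {
            Map<String, String> metadata = Map.of(
                    "issue #", "8",
                    "comic title", "Harper's Story");
            // Prints: 8 - Harper's Story
            System.out.println(format("{issue #} - {comic title}", metadata));
        }
    }
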
Just so you know, I have not forgotten this issue. I just chose to rewrite the online reader from scratch to manage user-defined sizing and double-page auto-split. Work in progress...
I don't know much about CPU performance, but I guess the ARM CPU of the Pi 2 is not as fast as the Atom you were using before.

I will have to do a few tests on my own Raspberry Pi (an "A" version, the slowest that exists, so I should be able to reproduce the problem easily) to determine which step is the most CPU-intensive (extraction or page resizing). I also have plans to allow direct file download in the online reader (without conversion): combined with a native extraction tool, the performance boost could be quite noticeable.


Since you have the problem in the browser on the Pi itself, I suppose network congestion is not the cause. Two lengthy steps remain: extraction from the archive and resizing/reencoding.
Extraction can be slowed down by the file system (are your comics hosted on the Pi's memory card or on a network drive?) and by the CPU.
Resizing/reencoding relies only on the CPU.

The CPU load of 25-50% is probably because only one of the cores is used to unzip or resize the image: the Pi 2 has a quad-core CPU, so an overall 25% load means 100% utilization of a single core.
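
To make that concrete, here is a rough, hypothetical sketch of a sequential extract-then-resize step in plain Java (not the actual Ubooquity code): both the decoding of the page pulled from the CBZ archive (a plain ZIP) and the rescaling happen on the single calling thread, which is why the other cores sit idle and the overall load tops out around 25% on a quad-core machine.

    import javax.imageio.ImageIO;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.InputStream;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class PagePipeline {

        public static void main(String[] args) throws Exception {
            // args[0]: path to a .cbz archive, args[1]: page file name inside it
            try (ZipFile archive = new ZipFile(args[0])) {
                ZipEntry page = archive.getEntry(args[1]);

                // Step 1: extraction and decoding, CPU bound, runs on this single thread
                BufferedImage original;
                try (InputStream in = archive.getInputStream(page)) {
                    original = ImageIO.read(in);
                }

                // Step 2: resizing and re-encoding, also CPU bound, same thread
                int targetWidth = 1080; // arbitrary target width for the example
                int targetHeight = original.getHeight() * targetWidth / original.getWidth();
                BufferedImage resized = new BufferedImage(targetWidth, targetHeight, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = resized.createGraphics();
                g.drawImage(original, 0, 0, targetWidth, targetHeight, null);
                g.dispose();

                ImageIO.write(resized, "jpg", new File("page-resized.jpg"));
            }
        }
    }

Serving the original file directly (or handing extraction over to a native tool) would skip most of this work, which is why it should help noticeably on the Pi.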

I'm guessing performance was fine even for big pages on your previous device, right? What kind of machine was it (more specifically, what kind of CPU)?