I am guessing you mean the Comixology scraping script.
This one: https://github.com/CuddleBear92/Ubooquity-Themes/blob/master/Scripts/comixology-to-ubooquity.py
You can always make an issue if you want to.
You should only need Python 3 and a few things you can grab with pip.
I'm far from an expert on it myself, and the script was made by a friend.
Running it the way I described in the example run works out well for me over here.
If you are missing any Python package, the script should error out on the specific requirement to tell you what's missing; doing a simple pip3 install of that package and re-running it should be fine.
There are a few parameters for the script, such as scraping a single series, skipping publishers or series, setting the delay, and setting a destination.
All of which is at least loosely noted in the readme.md file in the script folder.
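For illustration only, a run over here looks roughly like the two lines below. The package and flag names are placeholders I'm making up just to show the shape of it, so check the readme.md for the real parameter names:

    pip3 install <whatever-requirement-it-errors-out-on>
    python3 comixology-to-ubooquity.py --destination /path/to/output --delay 2 --skip-publisher "Some Publisher"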
EDIT: Please, if you follow up on this, make a GitHub issue and note, at a minimum, which script you are using.
The scraping script, which is the main part of the repo, does not work on any comic folder you have; rather, it just scrapes the Comixology site for metadata and generates files for the theme.
Yeah, I am having issues with this too, with and without the Comixology theme on the Docker setup.
Changing the page layout seems to not do anything at all, and nothing is noted in the logs either.
Any way you can update this for more automatic bulk actions from something like
https://comicbookreadingorders.com/
or Arcs on ComicVine?
EDIT: Random thoughts:
In ComicRack you can easily make filters for Story Arcs or alternative storylines that were scraped from ComicVine, in the correct order too with the alternative number.
These can easily be dragged and dropped while giving out the current sort order of ComicRack.
Dragging and dropping into this program would give it the correct order, as it just drags the files themselves.
This would give you the full path of the file, the filename, and everything.
Would that be enough to get it working? Maybe it would need some manual input of sorts, but it would at least speed up the most demanding part: adding the filenames in the right order.
This would make it pretty easy to make these JSON files.
But I guess technically one could do this with a Python script inside of ComicRack too.
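Something along these lines would do it outside of ComicRack as well. This is just a rough sketch with a made-up script name (make_list.py) and a guessed JSON layout, since I don't know the exact format the program expects; the keys would need adjusting to whatever it actually reads:

    # Rough sketch only: the JSON layout here is a guess, not the format the
    # program actually uses; adjust the keys to match the real spec.
    import json
    import os
    import sys

    def paths_to_json(list_name, out_file, paths):
        # keep the paths in the order they were given (i.e. the drag-and-drop order)
        entries = [{"order": i + 1,
                    "filename": os.path.basename(p),
                    "path": p}
                   for i, p in enumerate(paths)]
        with open(out_file, "w", encoding="utf-8") as f:
            json.dump({"name": list_name, "books": entries}, f, indent=2)

    if __name__ == "__main__":
        # e.g. python3 make_list.py "Infinity Gauntlet" list.json file1.cbz file2.cbz
        paths_to_json(sys.argv[1], sys.argv[2], sys.argv[3:])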
Dropped a complete release of the Comixology mirror for this theme in Ubooquity.
Got logos and images for everything, HTML and CSS files for publishers, as well as JSON files for series.
This first release is just a mirror and uses the names used on the website itself.
Some year entries might be wrong, as the first individual release of a series might have the wrong entry on the site itself (the site stating the release was 2020 when it was actually 1995, for example).
Sadly this is reflected in the folder names and the internal JSONs.
The script did scrape all the years and picked the lowest one, but that didn't help in every case.
The script itself is also hosted in the releases as well as in the repo itself.
https://github.com/CuddleBear92/Ubooquity-Themes/releases
Plans for the next few days:
Upload the missing files to the main repo (the ones that are already in the release).
Clear out old v1 files and keep alternative images.
Clean out the white backgrounds with black text so they contain the !important flag and are respected by the set theme.
After that I guess I should try to fill out the missing publishers that aren't on Comixology at all.
I don't believe I will replace the images that have the Comixology Unlimited banner on them; if people have replacements (or make their own), please make a pull request containing them.
Notes on running the script: it had some bumps, but those have been bugfixed.
The last run had a small blip once that didn't repeat itself on the second try.
If the script fails when you run it yourself, delete the publisher or series folder in question and try again with the correct skip settings.
There is more here: https://github.com/CuddleBear92/Ubooquity-Themes/
There is a script there that scrapes the whole publisher list and all of its series from Comixology to make these specific files.
I am working on uploading these to GitHub today and more until I'm in sync, while clearing out all the old v1 theme stuff I had on it.
People wanting to do it themselves can easily grab the script and run it with Python 3. Scraping it all will take at least a whole day though.
I will upload the base scrape first, then edit any weird ones that, for example, have default white backgrounds with black text (so the dark mode themes can still work correctly for those).
Good good!
I will put up the local GitHub mirror then and link to it.
@KanadaKid (Scott)
The Dropbox link is erroring out with a 404; you didn't move the folder or something, did you?
Did you unzip it first before placing it in the theme folder?
Make sure the Comixology folder is unzipped before you place it there.
That would be this folder: https://github.com/CuddleBear92/Ubooquity-Themes/tree/master/Theme/comixology/comixology
That is just a backup and a mirror of the other download link.
I have a backup of it here: https://github.com/CuddleBear92/Ubooquity-Themes/tree/master/Theme
Unless Scott has updated it since then, you should be all good!
You will also find more publisher files there if you want them.
I took a break from this though; I need to get back into filling it out.
Best to browse the repo itself to find what you need at the moment; the release is just a pure dump of the comixology.com scrape without any cleanup.
All publishers and series are sorted correctly in the repo, so it should be easy to find what you need if I have it.
Much of it, especially the publishers, is still a work in progress.
I also still need to get the script working for the comixology.eu site, as it has some publishers and series the .com site doesn't have.
Series years on the repo as a whole have to be taken with a huge grain of salt because of the limited data the site had.
The script pulls all the years and uses the lowest value, both in the metadata and in the series directory name.
This might be way off because of the site data, and in the worst case it will have to be corrected manually. I do plan to add cvinfo files to each of the series folders too, which could be reused by another script for those wanting that sort of thing.
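To be clear on what that lowest-value logic means, this is just a sketch of the idea, not the actual code from the script; assume the scrape gives a list of release years per series:

    # Sketch of the lowest-year idea only, not the script's actual code.
    def pick_series_year(years):
        # 'years' is whatever list of release years got scraped for the series
        return min(years) if years else None

    # pick_series_year([2020, 1995, 1996]) -> 1995
    # but if the site only lists 2020, the wrong year still ends up in the
    # folder name and the JSON, and has to be corrected by hand.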