Under review

Too many open files causes a crash

Elouan 8 years ago updated 6 years ago 10
This bug is related to an older one that has been closed. I'm opening a new one because I have a better understanding of what's going on.

Quite often, after I've used Ubooquity for a while, I reach a state where the server stops, and when I reload a page, it crashes.

When I look at my log, I get something like this:

ERROR com.ubooquity.f.d - Could not list files in folder: 
java.nio.file.FileSystemException: /volume1/comics/aaaa/bbbbbb: Too many open files
      at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[na:1.8.0_101]
      at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[na:1.8.0_101]
      at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[na:1.8.0_101]
      etc...

Now, I've looked at the number of file handles that Ubooquity uses versus the number that my system allows. Here is how it looks:

ulimit -n
1024

OK, now I load a couple of pages inside Ubooquity to see how many handles I'm using. First reading:


fs.file-nr = 4562

Now, when I load a new page (my settings display 20 items per page), I get this:

fs.file-nr = 4582

And this goes on: every time I load a page, the number of handles increases by 20 (consistent with the number of thumbnails to read). The problem is that the number never seems to go down afterwards.

I continued loading pages until the server crashed, and I reached the following figure:

fs.file-nr = 5394

So the difference is 5394 - 4562 = 832... which is quite a lot considering I only have 1024 to play with, and I probably missed a few before the first reading.

After I restarted Ubooquity, it went down to 4420:

fs.file-nr = 4420

I fear the files are read and then never closed. So if the count increases by 20 on every page load, and 2 or 3 people are using Ubooquity, it could exceed the limit within a couple of hours and cause a crash; this would explain my situation, including how long it takes to crash the server.

It feels like there is a file-handle leak: Ubooquity never seems to release the handles.
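
For illustration only (I obviously don't have Ubooquity's source, so the names below are hypothetical), this is the kind of Java pattern that would produce exactly this behaviour, and how it is usually fixed with try-with-resources:

import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.imageio.ImageIO;

// Hypothetical cover/thumbnail loading code, NOT Ubooquity's actual source.
class CoverReader {

    // Leaky version: the stream opened for the cover image is never closed,
    // so each thumbnail served leaves one file descriptor behind.
    BufferedImage readCoverLeaky(Path cover) throws IOException {
        InputStream in = Files.newInputStream(cover);
        return ImageIO.read(in); // descriptor stays open after the read
    }

    // Fixed version: try-with-resources releases the descriptor even if the
    // read throws, so the count stays flat when browsing library pages.
    BufferedImage readCover(Path cover) throws IOException {
        try (InputStream in = Files.newInputStream(cover)) {
            return ImageIO.read(in);
        }
    }
}

With the fixed version, the handle count should go back down (or stay flat) after each page instead of growing by 20 every time.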





Under review

Thanks for the precise characterization of the problem, it helps a lot to know exactly where to look.

I'll do my best to fix this in the next release.


Guess I'll have to investigate on Linux though, can't reproduce the problem on Windows...

Thanks a lot for looking at this problem. I'm pretty sure you're not closing the files when we navigate from page to page.

I've found another way (on Linux) to investigate the problem precisely:

  • find the PID of Ubooquity
  • look in the folder /proc/PID/fd

This folder contains all the file descriptors used by the PID; it's the best way to investigate the problem.


I've just encountered the bug again, so I counted the files in that folder (command line: ls -l fd | wc -l), and it tells me 1023! Remember: I'm only allowed 1024 on Linux, so I'm pretty sure that's what's causing the bug.

  • After restart: ls -l fd | wc -l = 48
  • After login: ls -l fd | wc -l = 75
  • Now, when I load new pages, here is what I get:


admin@DiskStation:/proc/9202$ sudo ls -l fd | wc -l
99
admin@DiskStation:/proc/9202$ sudo ls -l fd | wc -l
119
admin@DiskStation:/proc/9202$ sudo ls -l fd | wc -l
139
admin@DiskStation:/proc/9202$ sudo ls -l fd | wc -l
159

So it's pretty clear that 20 new file descriptors are opened for each page (corresponding to the 20 covers per page that I've defined in the admin settings)... until I bump into the 1024 limit.


Agreed, the problem is pretty clear.

Now I have to find the exact point of the leak, which should not be too complex, as soon as I get the time to do it.


In the meantime, if you have root access, you should be able to raise the maximum number of open file handles (far from ideal, I know).

Hello,


I wanted to get some news about this issue: do you think you will fix it in the next release? I'm asking because whenever I open the server to a new user, they start navigating a lot to see what's there and this keeps happening... it's quite annoying.

I've tried increasing the maximum, but that doesn't help: it just delays the crash a little, and the crash happens anyway.

If the cause is unknown, I propose a workaround that would allow me to keep the server running: catch that exception and end the Ubooquity process with an error code (I believe exit(9) is the correct exit code in that case?), so the OS can detect the crash and run a procedure to restart Ubooquity from scratch (equivalent to killing Ubooquity and then starting it again). A rough sketch of what I mean is below.
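
Just to illustrate the idea (hypothetical names, not Ubooquity's actual code): detect the "Too many open files" condition and exit with a non-zero status so that an external script can restart the process:

import java.nio.file.FileSystemException;

// Hypothetical watchdog helper, NOT Ubooquity's actual code: if a request
// fails because the process has run out of file descriptors, exit with a
// non-zero status so an external script (or the NAS's task scheduler) can
// restart Ubooquity.
final class FdExhaustionGuard {

    static void exitIfOutOfDescriptors(Throwable error) {
        for (Throwable t = error; t != null; t = t.getCause()) {
            if (t instanceof FileSystemException
                    && String.valueOf(t.getMessage()).contains("Too many open files")) {
                System.err.println("Out of file descriptors, exiting so the watchdog can restart us");
                System.exit(1); // any non-zero code the restart script looks for
            }
        }
    }
}

The server's request handler would call exitIfOutOfDescriptors(e) from its top-level catch block, and a small shell loop or the service manager would relaunch Ubooquity when it exits.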

I haven't looked into it yet, but I know exactly how to reproduce it, thanks to your tests.

It's on the list, perhaps not for 2.0 but definitely for 2.1 (see here).

Hi,


I've downloaded the latest release and installed it on my system. Overall, it's working as before: I like the new look and feel of the admin interface, and scanning my (rather large) collection went without a single issue, so I have to give a thumbs up for the new release.

On the downside, I didn't see much improvement on this particular subject (nor on other features I was hoping to see in release 2.0, like a way for users to modify their password). Why not open-source Ubooquity? You've built a cool piece of software, and it could gain momentum with third-party apps and readers coming in, but one year between two releases is way too slow: you should let us contribute.


Anyway, back to the topic: I tested today, and the file descriptor leak is still there. Everything goes smoothly, but once I reach 1024 file handles, it breaks down.

I still can't reproduce the problem on Windows (working with Ubooquity 2.0.2), so I tried on my Raspberry Pi (on Raspbian).

I followed your protocol, and the number of open file descriptors stays quite stable, around 70, even when browsing a lot of pages with hundreds of covers.


So unfortunately I don't understand why you still have the problem.


Dear Tom,


The file descriptor leak problem is still there on a Linux Synology NAS (DSM 6.1) with Ubooquity 1.10.1.


Consequently the server crashes: the process is not killed, but the server becomes unusable.


Therefore I wonder whether the leak really is platform specific.


Maybe a temporary approach within the Ubooquity code could be to force the Java [Unix/Windows] FileSystemProvider to close all the files that were opened while parsing a library page, each time a new library page is loaded? Something like the sketch below is what I have in mind.
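
A minimal illustration (made-up method name, not Ubooquity's actual code), using try-with-resources so the directory stream is closed as soon as the folder content for a library page has been collected:

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical listing code, NOT Ubooquity's source: try-with-resources
// guarantees the DirectoryStream (and its file descriptor) is closed once
// the folder content has been read, before the next folder is scanned.
class FolderLister {

    List<Path> listFolder(Path folder) throws IOException {
        List<Path> entries = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(folder)) {
            for (Path entry : stream) {
                entries.add(entry);
            }
        } // descriptor released here
        return entries;
    }
}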


Could you please work around the leak and release a fix for version 1.10.1, so that we have a fully stabilized version while we wait for 2.1?


...20170501 21:03:42 [pool-1-thread-6] ERROR com.ubooquity.f.d - Could not list files in folder:
java.nio.file.FileSystemException:

/volume1/distr/Books/Comics/Library/FR ± Soleil/Arleston [S] # Opale
(02-001-2013) [Codex] ¤ (FAN,HER,TEE): Too many open files

at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[na:1.8.0_102]
   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[na:1.8.0_102]
   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[na:1.8.0_102]
   at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427) ~[na:1.8.0_102]
... etc


Thank you very much in advance

Regards

Hi Tom,


How many pages do you have in your library? Maybe there are only 2 different pages to load? My settings are 24 comics per page, and I have more than 50 pages.


Also, can you try something? On your Raspberry Pi, can you reduce the number of files allowed to something below 1024, and then browse your library across many pages?

Hi Tom,


I got the same problem again today... It had been a long time without an issue, probably because I upgraded to a more recent Synology model, but the issue is not solved.


This is the message I got in the log:

Error while reading log file: java.io.FileNotFoundException - /volume1/homes/Ubooquity/Ubooquity/logs/ubooquity.log (Too many open files)


To reproduce the problem, maybe you can install DSM on your PC using https://xpenology.com/


Also, I reiterate my wish to participate in the development of Ubooquity: it's a very nice piece of software, but updates are too scarce and we could make it even better.