-
|
A few weeks back I submitted an issue that should probably have been opened here as a general question from the beginning. I have the teldrive setting "Split File Size" set to 1GB, "Upload Concurrency" set to 8, and multiple bots added to teldrive. I also have encryption enabled, but the issue persists even with encryption disabled.

If I run `df -h` on my host system before starting an upload, I see 33G available. If I then start an upload larger than 1GB (I tested with a roughly 20GB file) and run `df -h` while the upload is in progress, I can watch my available space gradually drop from 33G to 25G. After the upload finishes, my available space returns to 33G. Since this is a decrease of 8GB, I assumed the teldrive container is temporarily using 1GB per concurrent upload, which matches my concurrency of 8 and split size of 1GB. At one point I ran `du` on the Docker overlay directory during an upload and confirmed that some container's layer was growing, so I'm guessing it's the teldrive container and not the postgres container.

I thought teldrive must have some sort of tmp directory, and I submitted my issue to ask that question, but after seeing that reply I'm not sure what's going on. I'm thinking my next step is to copy a busybox executable into the teldrive container, as iwconfig suggested in his reply, and run similar tests from inside it to verify whether the container really is using more drive space and what/where those files are being created. Would anyone recommend any other troubleshooting steps? I don't know that it's needed, but I'll paste my docker compose here in case anyone is curious, along with my config.toml with some values redacted. |
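For anyone chasing the same symptom, the writable-layer growth can be narrowed down per container from the host without attaching to anything. A sketch, assuming the default Docker setup with `/var/lib/docker` on the affected filesystem:

```shell
# Free space on the filesystem backing Docker
# (compare before, during, and after the upload)
df -h /var/lib/docker

# Per-container disk usage: the SIZE column is data written
# to each container's writable layer
docker ps --size

# More detail, including volumes and the build cache
docker system df -v
```

Running `docker ps --size` during the upload shows directly which container's writable layer is growing, instead of inferring it from `du` on the overlay directory.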
-
|
Well, I've already basically answered this in #485. I said it must be something other than teldrive, because teldrive does not do disk caching, hence why I wondered whether you had mistaken the rclone container for the teldrive one. I also gave you some proof of this (the spoilers), albeit somewhat crude, along with ways to research and verify it yourself. I suggest you go ahead and copy the busybox executable over to the container and conduct further tests inside it. Although I question the point of doing so, because we know for a fact that teldrive does not cache anything on disk; you will gain nothing but experience and perhaps broader insight, but please know that teldrive itself is not the culprit here. To further show that this is the case, see my own demonstration in the video below:

output3.mp4

As you can see, I do not get the same result as you do, so it must be something else. It doesn't even make much sense for teldrive to use a disk cache. Why make a copy of the file only to split it up for storage as cache? The file does not need to be split up beforehand, and besides, the client chunks the file, not the backend. Seeking, reading, and writing to a file at specific byte ranges is a basic function of modern file systems. Seeking is the operation that moves the file pointer to a particular offset, after which a read or write can be performed at that exact byte range. With a 3 GB file and a 1 GB chunk size as an example, the teldrive client (e.g. web or rclone) seeks to the following byte ranges:

- bytes 0 to 1073741823
- bytes 1073741824 to 2147483647
- bytes 2147483648 to 3221225471
Each chunk (or part) is sent individually, in parallel, to the teldrive backend, which tracks each part with metadata including part number, part ID, channel ID, size, and encryption status. The actual upload to Telegram is performed by the Telegram API client, which streams each chunk through a fixed-size buffer.
Only the current buffer is resident in RAM, so memory usage stays roughly constant regardless of chunk size. And here's what DeepWiki AI had to say about it; read the answer to the first prompt. The second prompt was poorly phrased, but the answer to it may still be valuable.

What other docker containers do you host? Do you run your teldrive server on your main computer? Maybe your web browser is caching the file or something? I highly doubt it, but I guess it's possible. Oh, and also: when debugging, you should really try to reproduce things in clean environments if you can. |
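The seek-then-read pattern described above can be sketched with standard tools. A scaled-down sketch, with 1 MB chunks standing in for 1 GB and throwaway files in /tmp: each `dd` invocation seeks to an offset with `skip` and reads exactly one chunk, so at no point is a full copy of the file staged on disk by the reader.

```shell
# Create a 3 MB test file standing in for a 3 GB upload
dd if=/dev/zero of=/tmp/demo.bin bs=1M count=3 2>/dev/null

# Seek to each 1 MB byte range and read only that chunk,
# the way a chunking client reads parts in place
dd if=/tmp/demo.bin of=/tmp/part0 bs=1M skip=0 count=1 2>/dev/null
dd if=/tmp/demo.bin of=/tmp/part1 bs=1M skip=1 count=1 2>/dev/null
dd if=/tmp/demo.bin of=/tmp/part2 bs=1M skip=2 count=1 2>/dev/null

# Each part is exactly 1048576 bytes
ls -l /tmp/part0 /tmp/part1 /tmp/part2
```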
-
|
Thank you for going into detail here. I did take the time to use a busybox executable to attach to my teldrive container, similar to the video you posted. It was a good learning experience, but as you could guess, I didn't find any "smoking gun". I wasn't really expecting to at this point, but I still wanted to do the exercise for the learning. I'm curious what you mean by "rclone container". |
-
|
Just wanted to update this thread, as I found the issue, and it may be helpful for anyone else. Basically, we knew it wasn't teldrive, and we suspected rclone, but it seemed to happen even when using the web UI.

It turned out to be my reverse proxy, nginx, which was caching uploads to disk before sending them on to teldrive, which is why it happened with both the web UI and rclone. I added these lines to my nginx config to disable this for teldrive:

proxy_request_buffering off;
proxy_buffering off;
client_max_body_size 0;

That fixed the issue and made uploads faster too, since I no longer have to upload to nginx first. If you're using a reverse proxy, make sure it's configured correctly.

Thanks for all the help and support trying to figure out this issue; it was for sure a great learning activity. |
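For context, those three directives belong in the server/location block that proxies to teldrive. A hedged sketch; the server name, port, and upstream address below are placeholders, not values from this thread:

```nginx
# Hypothetical reverse-proxy config; adjust names and ports to your setup
server {
    listen 443 ssl;
    server_name drive.example.com;

    # Accept request bodies (uploads) of any size
    client_max_body_size 0;

    location / {
        proxy_pass http://teldrive:8080;

        # Stream the request body straight through to the upstream
        # instead of buffering the whole upload to disk first
        proxy_request_buffering off;

        # Likewise, stream responses rather than buffering them
        proxy_buffering off;
    }
}
```

With `proxy_request_buffering on` (the nginx default), nginx reads the entire request body, spilling anything over the in-memory buffer size to a temp file on disk, before it opens the upstream connection, which matches the disk-usage pattern observed in this thread.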