Static Files Truncated at 65,339 or 65,340 via TLS connection #120
Comments
Thank you for reporting! Is this only when SSL is enabled, or always?
Hi, I finally got around to checking it. I can confirm this only happens when SSL is enabled. I see this is the place where it is used: Line 391 in 7f5219c
And just below seems to be the path without SSL: Line 405 in 7f5219c
It looks like the non-SSL version is using this function: Line 357 in 7f5219c
Not sure how that last function is actually sending the file, just by doing setf on …
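To illustrate what I would expect the SSL path to need (the names below are made up for the example, not taken from woo's source), the whole file only goes out if the copy loops until end of file; a single bounded write can only cover part of a large file:

```lisp
;; Illustrative sketch only -- SEND-FILE-IN-CHUNKS is not a woo function.
;; A single write can be capped (e.g. by a 16-bit length), so the whole
;; file is only sent if we keep looping until READ-SEQUENCE reports EOF.
(defun send-file-in-chunks (path output-stream &key (chunk-size 16384))
  "Copy the file at PATH to OUTPUT-STREAM, CHUNK-SIZE octets at a time."
  (with-open-file (in path :element-type '(unsigned-byte 8))
    (let ((buffer (make-array chunk-size :element-type '(unsigned-byte 8))))
      (loop for bytes-read = (read-sequence buffer in)
            while (plusp bytes-read)
            do (write-sequence buffer output-stream :end bytes-read))
      (finish-output output-stream))))
```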
Thank you for your efforts looking into it, but I still couldn't reproduce it. Because a system call …
Thanks, I'm just using a fresh Ubuntu LTS without nginx or any reverse proxy, just straight woo. I guess if it comes up for anyone else, they may help us figure it out.
In Chrome I am getting the following error for some static files from the server. The headers seem to be fine, but the actual files are being truncated: the transferred files are cut off partway and are only 65,339 or 65,340 characters long.

I just tried switching back to Hunchentoot with the caveman2 app and it works. Actually, I just checked and, interestingly enough, the actual number of characters in the file is 162,762 while the `content-length` header says 162,770. It could be some escape characters that the browser just renders; unclear. Nevertheless, this works with Hunchentoot and not with woo, i.e. when removing `:server :woo`.

It looks like somewhere in woo the size of the file sent is limited to 65,339 or 65,340. I just want to point out that 2^16 = 65536, so I imagine somewhere in the code there must be an integer tracking how much to actually transfer, and because it maxes out, the transfer is truncated. Changing it to a larger integer of, say, 32 bits is probably not such a good idea, because any file longer than that (about 4 GB, I think) would then be truncated as well. I wonder where the `content-length` header is being set; it probably comes from reading the file size from the filesystem, and it could make sense to look at that code as an example of how to deal with the integers. Setting it to a 64-bit integer would probably solve the issue for a long time, though.

I'll be happy to help if you can point out to me which files and functions deal with the file transfer.
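As a rough sketch of what I have in mind (the helper below is hypothetical, not something I found in woo), the content-length would normally come straight from the file's size on disk, which is an ordinary full-width integer, while the truncation point just under 2^16 = 65,536 points at a 16-bit-sized quantity somewhere in the write path:

```lisp
;; Hypothetical helper, not from woo's source: the usual way to obtain
;; the octet count that a Content-Length header is derived from.
(defun file-octet-length (path)
  "Return the size of the file at PATH in octets."
  (with-open-file (in path :element-type '(unsigned-byte 8))
    (file-length in)))

;; FILE-LENGTH returns an ordinary (arbitrary-precision) integer, so the
;; header can report 162,770 even if a later 16-bit-bounded write stops
;; the body just short of 2^16 = 65,536 octets.
```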