Re: [Swift] File Upload Problems after Upgrade
On 3/8/13 4:31 AM, Heiko Krämer wrote:
Hi Guys,
I've upgraded my Swift setup (2 storage nodes and 2 proxy nodes) from
1.4.6 to 1.7.6. The upgrade completed without errors, and I followed
these guides:
https://lists.launchpad.net/openstack/msg16188.html
https://wiki.openstack.org/wiki/ReleaseNotes/Folsom#OpenStack_Object_Storage_.28Swift.29
But since then I can't upload any bigger files; even files smaller
than 10MB fail :(
Every time, I get:
<head><title>413 Request Entity Too Large</title></h
I don't think that error is coming from your Swift proxy. Swift's 413
response has content-type text/plain and body "Your request is too
large." I think that your nginx is misconfigured. Try turning off nginx
and testing without SSL termination.
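One quick way to check: upload a file of the failing size straight to the
Swift proxy, bypassing nginx entirely. A rough sketch follows; the host,
port, token, and container names are placeholders, not values from your
setup:

    # Create a ~50MB test file and PUT it directly against the Swift proxy.
    # (Host, port, $TOKEN, and container names below are assumed values.)
    dd if=/dev/zero of=/tmp/testfile bs=1M count=50
    curl -i \
         -H "X-Auth-Token: $TOKEN" \
         -T /tmp/testfile \
         http://your-swift-proxy:8080/v1/AUTH_test/testcontainer/testfile
    # A 201 Created here, while the same upload gets a 413 through nginx,
    # points the finger at nginx rather than Swift.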
[...]
and I'm using an nginx proxy in front of the Swift proxy for SSL handling, with
client_max_body_size 10000M;
With Swift, you almost certainly don't want to use nginx for SSL
termination; you want to use something like pound instead.
Nginx has the nasty habit of buffering uploads to a temporary directory
before passing them on to the backend server, and I haven't found a way
to turn that buffering off. Pound, by contrast, streams uploads.
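For reference, a minimal pound configuration for SSL termination in front
of a Swift proxy might look like the sketch below; the addresses, port,
and certificate path are assumptions for illustration, not values from
this thread:

    # /etc/pound/pound.cfg -- minimal sketch; addresses and paths are assumed.
    ListenHTTPS
        Address 0.0.0.0
        Port    443
        Cert    "/etc/pound/swift.pem"   # combined private key + certificate
        Service
            BackEnd
                Address 127.0.0.1        # Swift proxy-server on the same host
                Port    8080
            End
        End
    End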
So, imagine a Swift cluster with pound for SSL termination, and a client
performing a PUT request to an object. As the client streams data to
pound, pound sends it to the proxy, the proxy sends it along to the
object servers, and they write the chunks to disk. If the upload rate is
roughly constant, then the load imposed on the cluster is also roughly
constant.
Now, imagine a Swift cluster with nginx for SSL termination. As the
client streams data to nginx, nginx spools it to a temporary file. No
data is being sent to the proxy. Once the client finishes sending the
data, nginx opens a connection to the proxy and crams data into it as
fast as possible. This results in a big load spike right at the end.
Further, with nginx's upload spooling, the client sees a big delay
between sending the last byte of the request and when the server
finishes sending the response, as that's when all the work happens in
the cluster. Given sufficiently large files, this may result in clients
timing out.
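You can see this from the client side by timing the same PUT both ways.
A rough sketch (endpoints, token, and paths are placeholders):

    # Time the same upload through the SSL terminator and directly against
    # the Swift proxy. With nginx spooling, the proxied upload takes
    # noticeably longer, and nearly all of the extra time falls after the
    # last request byte has been sent.
    time curl -s -o /dev/null -H "X-Auth-Token: $TOKEN" \
         -T /tmp/testfile https://your-nginx-endpoint/v1/AUTH_test/c/o
    time curl -s -o /dev/null -H "X-Auth-Token: $TOKEN" \
         -T /tmp/testfile http://your-swift-proxy:8080/v1/AUTH_test/c/o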