engine.io-client > Limit the request size - engine.io

I have been trying to find out whether there is a config option or another straightforward way to limit the size of the requests/payloads sent to the server by the engine.io-client library.
This is because our nginx is throwing a 413 Request Entity Too Large error; its limit is set to 4 MB at the moment and we cannot increase it.
Thanks.
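As far as I can tell there is no client-side size option in engine.io-client, so one workaround is to split large payloads before sending. A minimal sketch, assuming a connected engine.io-client socket and a server that reassembles the pieces (the chunk size and framing here are hypothetical):

// Stay safely under nginx's 4 MB client_max_body_size.
var MAX_CHUNK = 3 * 1024 * 1024;

function sendChunked(socket, payload) {
  // Send the payload as several smaller messages; the server
  // is assumed to concatenate them back together.
  for (var offset = 0; offset < payload.length; offset += MAX_CHUNK) {
    socket.send(payload.slice(offset, offset + MAX_CHUNK));
  }
}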

Related

Ajax sending large data to server

I have read that GET/POST requests have either character or size limitations (2000 characters, or an 8 KB limit in some browsers).
What is the correct way to send large data (5000 chars or more) using AJAX?
Thank you!
As far as I know, the maximum GET/POST size depends on the web server and the client. See: maximum length of HTTP GET request?
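In practice, the safe route for large data is a POST, where the payload travels in the request body rather than the URL. A short sketch with jQuery (the endpoint name is made up):

// Sketch: POST keeps the data in the request body, so browser
// URL-length limits do not apply. '/save' is a hypothetical endpoint.
$.ajax({
  url: '/save',
  type: 'POST',
  data: { text: longText },
  success: function (response, status) {
    console.log('Status: ' + status);
  }
});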

Checking response ranges for HTTP 206 responses for videos

I'm serving some HTML5 videos over Amazon CloudFront. They're being requested with HTTP range requests and the responses are often HTTP 206 Partial Content, as expected.
I'd like to log, with JavaScript, the ranges requested and the ranges in the response (i.e., bytes 0- requested, bytes 0-1000 responded). Is this possible? Keep in mind this isn't an XMLHttpRequest; it's just a <video> tag. I already have a client-side logging facility, but I don't know how to get the data I need.
CloudFront already logs the number of bytes in the response, unfortunately including headers too, but I need to also know how many bytes were requested. There's one Safari user who made tens of thousands of range requests for a single 500 KB video and transferred more than 1 GB as a result, and I can't figure out why.
Some other options and their drawbacks:
Use the Safari developer tools. They're useless to me because they don't show the HTTP status code for requests generated by <video> tags. (╯°□°)╯︵ ┻━┻
Use Wireshark. The problem seems completely unpredictable and unreproducible, so unless I have the user capture packets during the entire testing cycle (which would be a massive capture), I can't isolate the data to the moment the problem occurs. It's a last resort.
Use EC2 to act as a proxy so I can have finer control over logging (i.e., I can log the raw Range request header and the response headers). It can work for a testing environment if I'm ever able to reproduce this, but not for production because I need the benefits a CDN like CloudFront provides.
This is probably what you're looking for:
http://www.w3schools.com/jquery/jquery_ajax_get_post.asp
Taken directly from the above link:
$("button").click(function(){
$.get("demo_test.asp", function(data, status){
alert("Data: " + data + "\nStatus: " + status);
});
});
Just bind the above to your HTML button and you can do whatever you want with the response (either the actual video file or the error, if one is returned).
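If you want to inspect ranges directly, a separate probe request is one option. A sketch using fetch with an explicit Range header, assuming videoUrl points at the CloudFront asset (note this issues its own request and cannot observe the <video> tag's requests, and reading Content-Range cross-origin requires CloudFront to expose it via Access-Control-Expose-Headers):

// Sketch: request a specific byte range and log what the server returned.
fetch(videoUrl, { headers: { 'Range': 'bytes=0-1000' } })
  .then(function (res) {
    console.log('Status: ' + res.status); // 206 for partial content
    console.log('Content-Range: ' + res.headers.get('Content-Range'));
  });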

Unexpected behavior when increasing network speed and connecting to a node.js server

I have a simple node.js server like:
var app = require('express')();
var compression = require('compression');
app.use(compression());
app.get('/', function (request, response) {
  response.send('<!DOCTYPE html>.......');
});
app.listen(2345);
The HTML I send is 2.4 kB (1.2 kB when compressed).
When testing on different network speeds (using dev tools) I get this unexpected behavior:
50kbps: Latency 600ms, download 1ms
250kbps: Latency 300ms, download 0.6ms
750kbps: Latency 100ms, download 100ms
2Mbps: Latency 10ms, download 200ms
32Mbps: Latency 5ms, download 210ms
I don't think the download time is supposed to increase as network speed increases beyond 250 kbps. What is going on?
Now look at what happens if I remove compression:
var app = require('express')();
app.get('/', function (request, response) {
  response.send('<!DOCTYPE html>.......');
});
app.listen(2345);
Now the file is just 2.4 kB; look at the latency/download times:
50kbps: Latency 550ms, download 230ms
250kbps: Latency 350ms, download 50ms
750kbps: Latency 120ms, download 15ms
2Mbps: Latency 35ms, download 6ms
32Mbps: Latency 4ms, download 0.5ms
The response with the non-gzipped content (and a Content-Length header) seems to be OK, but the response with the gzipped content (with a Transfer-Encoding: chunked header) doesn't seem to be OK. What is this all about? I strongly encourage you to run a similar test yourself with whatever tools you like and see the results before saying that my benchmark is wrong and that this cannot be possible. And if you get different results, please share them.
Express.js compression options
I would also not hesitate to experiment with the different compression quality settings and strategies, and especially the threshold setting of the Express compression module, as described here: https://github.com/expressjs/compression. In particular:
Compression Threshold
Since you are sending only a few kilobytes of textual data as the body, try setting the threshold lower than the default of 1 kB.
The byte threshold for the response body size before compression is considered for the response, defaults to 1kb. This is a number of bytes, any string accepted by the bytes module, or false.
(cited from the Express compression module GitHub page)
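For instance, a minimal sketch of tuning that option (the value here is illustrative, not a recommendation):

var app = require('express')();
var compression = require('compression');

// threshold: 0 considers every response for compression;
// a higher value (e.g. '4kb') leaves small bodies uncompressed.
app.use(compression({ threshold: 0 }));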
HTTP Compression is not always faster
Make sure to play around with other HTTP features such as HTTP pipelining or the accepted encodings (also on the client side), as switching those on or off may vastly alter the measured download times.
IBM conducted a series of excellent HTTP tests which I recommend you read about here: http://www.ibm.com/developerworks/library/wa-httpcomp/

Can browser split up payload data into multiple websocket frames?

In the JavaScript below I send up to 200 bytes through a WebSocket (after connecting and handshaking):
var buf = new Uint8Array(200);
/* fill buf with data */
ws.send(buf.buffer);
On the other side there is a simple IOCP C++ server, which receives these 200 bytes preceded by a few bytes of WebSocket frame header.
Can I assume that the browser always sends these 200 bytes (plus the WebSocket header) in one piece?
Or should I always check on the server side whether this is the final frame (by checking the first bit of the WebSocket header)?
Thanks in advance for your tips.
You should always check the FIN bit; per RFC 6455 it is perfectly possible that the browser splits up the payload (depending on some upper limit for a WebSocket frame).
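For reference, a sketch of that check (shown in Node.js for brevity; the same bit masks apply in C++). Per RFC 6455, the first byte of a frame carries the FIN flag and the opcode:

// Parse the first two bytes of a WebSocket frame header.
function parseFrameHeader(buf) {
  var fin = (buf[0] & 0x80) !== 0;     // FIN: 1 = final fragment of a message
  var opcode = buf[0] & 0x0f;          // 0x0 = continuation frame
  var masked = (buf[1] & 0x80) !== 0;  // client-to-server frames are always masked
  var payloadLen = buf[1] & 0x7f;      // 126/127 signal an extended length field
  return { fin: fin, opcode: opcode, masked: masked, payloadLen: payloadLen };
}
// Keep buffering fragments until a frame with fin === true arrives.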

XMLHttpRequest fails in POST/GET but not in regular form submission

When using XMLHttpRequest, the POST/GET request fails when the data exceeds 7 kB (HTTP error 400).
When posting the same data using a regular form submission, it works well.
Is there a limit to the data size when using XMLHttpRequest, or is an extra setting needed?
Dev env: NetBeans 6.9.1.
Server: Tomcat 6.
Browser: IE8.
When doing a GET, the data is limited by the length of the URL that the browser accepts. Some versions of IE had a limit around 2 kB, so you should make sure the data stays well below that. A GET is simply not suited for sending a lot of data.
When doing a POST, the limit is much higher. The web server has a default limit for the size of the request, which is typically something like 4 MB.
The same limits apply whether the request comes from XMLHttpRequest or from posting a form; it's the method (GET/POST) that makes the difference.
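So the usual fix is to move the data into the body of a POST. A minimal sketch with XMLHttpRequest (the endpoint and field name are made up; onreadystatechange is used because IE8 lacks onload):

// Sketch: send large data in the body of a POST, where URL limits don't apply.
var xhr = new XMLHttpRequest();
xhr.open('POST', '/submit'); // hypothetical endpoint
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    console.log('Status: ' + xhr.status);
  }
};
xhr.send('data=' + encodeURIComponent(largeData));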
