We define round trip time as the time taken from the user initiating a resource request to when that resource is completely available for the user to interact with. We limit our measurements only to HTML page type resources.
The round trip time is therefore the time from the user clicking a link to the page referenced by that link becoming usable. For most cases, this is equivalent to measuring the time from the previous page's onbeforeunload event firing to the current page's onload event firing. In some cases these events are not the right boundaries, so we let the developer override them.
This is how we measure:

We attach a function to the window.onbeforeunload event.
Inside this function, we take a time reading (in milliseconds) and store it into a session cookie along with the URL of the current page.
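The unload-time bookkeeping above might be sketched as follows. The cookie name rt_start and the t=…&u=… value format are illustrative assumptions, not boomerang's actual internals:

```javascript
// Build the cookie value from a timestamp and the current URL.
// Kept pure so it can be exercised outside a browser.
function makeStartCookieValue(startTime, url) {
  return "t=" + startTime + "&u=" + encodeURIComponent(url);
}

// In a browser, wire it up on unload:
if (typeof window !== "undefined") {
  window.onbeforeunload = function () {
    // No "expires" attribute, so this is a session cookie.
    document.cookie = "rt_start=" +
      makeStartCookieValue(new Date().getTime(), location.href) +
      "; path=/";
  };
}
```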
We then attach a function to the window.onload event.
Inside this function, we take a time reading (in milliseconds). If the browser has implemented the WebTiming API, we pull out navigationStart (or fetchStart if navigationStart is unset). To get around a bug in Firefox 7 and 8, we use unloadEventStart instead.
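A minimal sketch of this start-time selection, assuming timing is the browser's performance.timing object and isFF78 is a flag for the buggy Firefox versions (both parameter names are illustrative):

```javascript
// Pick the navigation start timestamp from the WebTiming API, with the
// Firefox 7/8 workaround described above. Returns null if the API is
// unavailable, signalling the caller to fall back to the cookie.
function pickStartTime(timing, isFF78) {
  if (!timing) {
    return null;                              // no WebTiming API
  }
  if (isFF78) {
    return timing.unloadEventStart || null;   // Firefox 7/8 bug workaround
  }
  // Prefer navigationStart; fall back to fetchStart if it is unset (0).
  return timing.navigationStart || timing.fetchStart || null;
}
```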
If the WebTiming API is not supported, we look for the cookie in which we stored the start time and, if found, use that. If we find neither, we abort [1].
If we find a cookie, we compare the URL stored in the cookie with the document.referrer of the current document. If the two differ, the user may have visited a third-party page between the two pages on our site, making the measurement invalid, so we abort [2].
If we're still going, we pull the start time out of the cookie and then remove the cookie. The difference between the two times is the round trip time for the page.
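The cookie fallback, referrer check, and time difference can be sketched as one pure function. The t=…&u=… cookie layout and the function name are assumptions for illustration, not boomerang's actual code:

```javascript
// Validate the stored URL against the referrer and compute the round
// trip time in milliseconds. Returns null when we would abort.
function roundTripFromCookie(cookieValue, referrer, now) {
  var t = /(?:^|&)t=(\d+)/.exec(cookieValue);
  var u = /(?:^|&)u=([^&]*)/.exec(cookieValue);
  if (!t || !u) {
    return null;                              // abort [1]: no usable cookie
  }
  if (decodeURIComponent(u[1]) !== referrer) {
    return null;                              // abort [2]: user left our site
  }
  return now - parseInt(t[1], 10);            // round trip time in ms
}
```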
Bandwidth and latency are measured by downloading fixed size images from a server and measuring the time taken to download them. We run the test in the following order:
First, we download a 32 byte gif 10 times, serially. This is used to measure latency.
We discard the first measurement because that pays the price for the TCP handshake (3 packets) and TCP slow-start (4 more packets). All other image requests take two TCP packets (one for the request and one for the response). This gives us a good idea of how much time it takes to make an HTTP request from the browser to our server.
Once done, we calculate the arithmetic mean, standard deviation and standard error at 95% confidence for the 9 download times that we have. This is the latency number that we beacon back to our server.
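The latency summary over the remaining nine samples might look like the sketch below. I read "standard error at 95% confidence" as 1.96 × sd / √n; boomerang's actual implementation may differ:

```javascript
// Summarise latency samples: drop the first (it pays for the TCP
// handshake and slow-start), then report the arithmetic mean, sample
// standard deviation, and standard error at 95% confidence.
function latencyStats(times) {
  var samples = times.slice(1);       // discard the first measurement
  var n = samples.length;
  var mean = samples.reduce(function (a, b) { return a + b; }, 0) / n;
  var variance = samples.reduce(function (acc, x) {
    return acc + (x - mean) * (x - mean);
  }, 0) / (n - 1);                    // sample variance
  var sd = Math.sqrt(variance);
  return { mean: mean, sd: sd, sem: 1.96 * sd / Math.sqrt(n) };
}
```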
Next, we download images of increasing size until one of them times out.
We choose image sizes so that we can narrow down on a bandwidth range as soon as possible. See the code comments in boomerang.js for full details.
Image timeouts are set between 1.2 and 1.5 seconds. If an image times out, we stop downloading larger images and retry the largest image 4 more times [3]. We then calculate the bandwidth for the largest 3 images that we downloaded. This should result in 7 readings, unless the test timed out before that [4]. We calculate the median, standard deviation and standard error from these values, and this is the bandwidth that we beacon back to our server.
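The median-of-readings step can be sketched as follows. The shape of each reading (bytes and download time in ms) is an assumption for illustration:

```javascript
// Compute the median bandwidth, in bytes per second, over a set of
// successful image downloads of the form {bytes: ..., ms: ...}.
function medianBandwidth(readings) {
  var bw = readings.map(function (r) {
    return r.bytes * 1000 / r.ms;     // bytes/s for this download
  }).sort(function (a, b) { return a - b; });
  var mid = Math.floor(bw.length / 2);
  // Odd count: middle value; even count: mean of the two middle values.
  return bw.length % 2 ? bw[mid] : (bw[mid - 1] + bw[mid]) / 2;
}
```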
The latest code and docs are available at github.com/lognormal/boomerang
BW.nruns parameter. See Howto #6 for details on configuring boomerang.