maxmarengo
Member of DD Central
Posts: 96
Likes: 28
|
Post by maxmarengo on Nov 14, 2014 8:34:02 GMT
Out of interest I just monitored loading the summary page on Firefox - 27 requests and 1MB. About half the level reported by GSV3MIaC. Reloading only sends 20 requests, but the same amount of data is loaded.
I am using a pretty standard Windows desktop and the latest version of Firefox with a few ad blockers. I agree there seems to be too much data flying around. I would rather the system called for extra data if needed rather than loading it all just in case.
Most of the data sent across is JavaScript (650KB). That seems quite big when all it is doing is a bit of formatting. It is loaded with every page, so probably accounts for 75% of the load - a bit of optimisation here could make a significant difference.
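For anyone who wants to total this up outside the browser's network monitor, here is a rough sketch in Python. The URL is a placeholder, and the real summary page would also need a logged-in session cookie, so treat it as an illustration rather than a ready-made tool:

```python
# Rough sketch: total the bytes pulled in by a page's scripts and stylesheets.
# PAGE is a placeholder - the real FC summary page would need a logged-in session.
import re
from urllib.parse import urljoin

import requests

PAGE = "https://example.com/"  # placeholder, substitute the page to measure

session = requests.Session()
page = session.get(PAGE)
total = len(page.content)

# Crude resource extraction - good enough for a ballpark figure, not a full parser.
for url in re.findall(r'(?:src|href)="([^"]+\.(?:js|css)[^"]*)"', page.text):
    resp = session.get(urljoin(PAGE, url))
    print(f"{len(resp.content):>9,} bytes  {resp.url}")
    total += len(resp.content)

print(f"{total:>9,} bytes total")
```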
|
|
adrianc
Member of DD Central
Posts: 10,000
Likes: 5,139
|
Post by adrianc on Nov 14, 2014 12:25:37 GMT
Oh and that's after I've already 'HOSTed' out some of the analytics rubbish. Is there a sane web developer out there who can comment?? (I just dabble around the edges). Can you add any extra to this list? If so, please share:
127.0.0.1 www.google-analytics.com # FC
127.0.0.1 ssl.google-analytics.com # FC
127.0.0.1 js-agent.newrelic.com # FC
127.0.0.1 beacon-1.newrelic.com # FC
127.0.0.1 asset0.zendesk.com # Stop AWS amazon chunter when using FC
127.0.0.1 cdn.optimizely.com # Funding circle java
The zendesk one was because, for some daft reason, my PC was making constant outgoing connections there, which the firewall machine was whinging about all day long. Got fed up of FC's site spamming the logs, so blocked it at the PC.

Far simpler and more reliable, use Firefox's NoScript plugin, and you can see exactly what domains are being invoked for every page, and turn their ability to script on and off at will.
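A quick way to confirm the hosts-file overrides are actually being picked up is to resolve each blocked name and check it comes back as loopback. A small sketch, assuming the list above is already installed in the hosts file:

```python
# Check that the hosts-file overrides are winning: each blocked name should
# resolve to a loopback address. Standard library only.
import socket

BLOCKED = [
    "www.google-analytics.com",
    "ssl.google-analytics.com",
    "js-agent.newrelic.com",
    "beacon-1.newrelic.com",
    "asset0.zendesk.com",
    "cdn.optimizely.com",
]

for name in BLOCKED:
    try:
        addr = socket.gethostbyname(name)
    except socket.gaierror as exc:
        addr = f"lookup failed ({exc})"
    status = "blocked" if str(addr).startswith("127.") else "NOT blocked"
    print(f"{name:30} -> {addr}  [{status}]")
```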
|
|
|
Post by GSV3MIaC on Nov 14, 2014 16:47:35 GMT
Out of interest I just monitored loading the summary page on Firefox - 27 requests and 1MB. About half the level reported by GSV3MIaC. Reloading only sends 20 requests, but the same amount of data is loaded. I am using a pretty standard Windows desktop and the latest version of Firefox with a few ad blockers. I agree there seems to be too much data flying around. I would rather the system called for extra data if needed rather than loading it all just in case. Most of the data sent across is JavaScript (650KB). That seems quite big when all it is doing is a bit of formatting. It is loaded with every page, so probably accounts for 75% of the load - a bit of optimisation here could make a significant difference.

Interesting .. how many times did the 148.06kB download from use.typekit.net appear in your trace? (I get 3, iirc; the resource is some stupid long string/code starting with d?3.) I also have two entries for angular-<gobbledegook> from cloudfront.net .. at 192kB each. Wonder why I'm getting so much more shown - I'm also on the latest Ffox, Win7 64-bit. Maybe I should see what IE 11 has to say?
|
|
mikeb
Posts: 1,072
Likes: 472
|
Post by mikeb on Nov 14, 2014 19:48:04 GMT
Far simpler and more reliable, use Firefox's NoScript plugin, and you can see exactly what domains are being invoked for every page, and turn their ability to script on and off at will.

Possibly, but I just crowbar new nuisance sites into the front of "MVPS HOSTS" (free download from winhelp2002.mvps.org/hosts.htm) -- works for any browser and ANY process on the system.
|
|
mikeb
Posts: 1,072
Likes: 472
|
Post by mikeb on Nov 14, 2014 19:52:33 GMT
Although just in case I end up with something listening on the relevant ports and serving something that looks like a web page, I often use a different address on the loopback network - e.g. 127.0.0.2 or 127.1.2.3 or whatever.

There's another one I keep seeing on my android devices - something about honeybadger.io (I forget whether it's www. or js.) - the main reason I notice it is that Android's browser doesn't recognise the security certificate as being issued by a valid certifying authority, so I need to tell it to ignore an "invalid" certificate every time I visit the site on that device after rebooting it, which doesn't seem like a great idea to train users to do.....

I do have something running on there, an Apache (locally visible only) server that quickly returns a "What?" response and lets the page load minus those bits, rather than sitting around "Waiting for 127.0.0.2" etc. Just checked: actually, using 127.0.0.1 or 127.0.0.2 makes no difference, it's still ME and it's still my Apache server. So your change seems to be superstition!
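For anyone without Apache handy, a minimal stand-in for that "answer the blocked request immediately" trick might look like the sketch below. Assumptions: Python is available, binding to port 80 usually needs admin rights, and HTTPS requests to a redirected name will still fail the certificate check rather than load quietly:

```python
# Minimal loopback "blackhole" web server: answer any request on 127.0.0.1 with an
# empty 204 straight away, so pages don't hang on "Waiting for 127.0.0.x...".
# Run it on whatever address your hosts file points the blocked names at.
from http.server import BaseHTTPRequestHandler, HTTPServer

class BlackholeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(204)   # "No Content" - nothing for the page to render
        self.end_headers()

    do_POST = do_HEAD = do_GET    # treat every method the same way

    def log_message(self, fmt, *args):
        pass                      # keep the console quiet

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 80), BlackholeHandler).serve_forever()
```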
|
|
sl75
Posts: 2,092
Likes: 1,245
|
Post by sl75 on Nov 15, 2014 8:42:32 GMT
I do have something running on there, an Apache (locally visible only) server that quickly returns a "What?" response and lets the page load minus those bits, rather than sitting around "Waiting for 127.0.0.2" etc. Just checked: actually, using 127.0.0.1 or 127.0.0.2 makes no difference, it's still ME and it's still my Apache server. So your change seems to be superstition!

I've not found "sitting around waiting" a problem anyway - connections to a port on which nothing is listening should get an immediate "destination port unreachable", which seems more lightweight than fully opening the TCP connection, sending a request over it and receiving an HTTP error code. It will of course depend on whether the server service is explicitly listening on address 127.0.0.1 or listening on "any address". I forget what historic computer that may have made a difference on. Another useful side-effect is that it can help to differentiate "normal" loopback connections from "fake web server" loopback connections in utilities such as "netstat"...

I'm sure on some previous computer I'd done something odd with the routing table too, in order to make sure whatever address it was using would be routed in a manner that would result in a fast and lightweight ICMP failure on the local machine rather than the slower and more resource-hungry mechanism of establishing a TCP connection, making a request, and returning an error status.
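The difference between the two behaviours is easy to see with a small timing sketch. The port numbers below are arbitrary examples; on the loopback, a TCP connection to a port with no listener comes back as an immediate "connection refused", while a port with a local server behind it completes the handshake and costs a full request/response:

```python
# Time a loopback connection attempt: immediate refusal (nothing listening)
# versus a completed connection (a local "fake" server answering).
import socket
import time

def time_connect(host, port):
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=2):
            outcome = "connected (a server answered)"
    except ConnectionRefusedError:
        outcome = "refused immediately (nothing listening)"
    except OSError as exc:
        outcome = f"failed: {exc}"
    return outcome, (time.perf_counter() - start) * 1000

for port in (81, 80):   # 81: probably closed; 80: open if a local server is running
    outcome, ms = time_connect("127.0.0.1", port)
    print(f"port {port}: {outcome} after {ms:.2f} ms")
```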
|
|
maxmarengo
Member of DD Central
Posts: 96
Likes: 28
|
Post by maxmarengo on Nov 15, 2014 9:58:50 GMT
Out of interest I just monitored loading the summary page on Firefox - 27 requests and 1MB. About half the level reported by GSV3MIaC. Reloading only sends 20 requests, but the same amount of data is loaded. I am using a pretty standard Windows desktop and the latest version of Firefox with a few ad blockers. I agree there seems to be too much data flying around. I would rather the system called for extra data if needed rather than loading it all just in case. Most of the data sent across is JavaScript (650KB). That seems quite big when all it is doing is a bit of formatting. It is loaded with every page, so probably accounts for 75% of the load - a bit of optimisation here could make a significant difference.

Interesting .. how many times did the 148.06kB download from use.typekit.net appear in your trace? (I get 3, iirc; the resource is some stupid long string/code starting with d?3.) I also have two entries for angular-<gobbledegook> from cloudfront.net .. at 192kB each. Wonder why I'm getting so much more shown - I'm also on the latest Ffox, Win7 64-bit. Maybe I should see what IE 11 has to say?

This morning it is 42 requests and 1.7MB. Only one load of the 148kB file, and only a few small files from cloudfront. Will try again later with the ad blocker turned off.
|
|
|
Post by GSV3MIaC on Nov 15, 2014 14:09:26 GMT
I tried with IE11, and =it= counted 62 requests and (still) 2.4MB. I wonder why!? It must be said that I do have a lot of bids/parts, but surely the number shouldn't influence typekit loading. I maybe need to go ramble through my FF options and see if I am stopping caching somehow.

Hmm, OK, I looked at those 3 requests from use.typekit.net, and they are from 3 different parts of the summary page (as per the referer shown in the request headers) .. the first one was from www.fundingcircle.com/my-account/my-lending/ then from www.fundingcircle.com/lenders/summary/funds_summary and finally one from www.fundingcircle.com/lenders/summary/comments .. that was 148kB each time .. call it 450kB for the set. If it is being cached/reused locally I sure can't tell. The last two both asked for the 192kB of angular-6521<blah>.js from cloudfront.net as well .. another 380kB. Then there's 659kB of application.js, and 408kB of application-legacy.js, and everything else is probably down in the noise!! I expect there must have been some HTML somewhere .. surely. 8>.

PS .. no sign of the .js or .css files being saved in the caches, so I guess it is loaded from the web every time (I looked at the IE cache, since the FF cache is a PITA to decode). Looking at the data returned from the 'get' request, I wonder if it is significant that the expiry date has already passed?

-----------------------------

Update .. I finally found the performance analysis tool (new to me in Ffox) which measures performance with primed/empty cache (it's the little clock under the network tab) .. guess what .. no Funding Circle page caches anything .. not .CSS, not massive generic .JS files, not a damn thing (OK, maybe 40 bytes of something, probably a facebook link or whatever). I mean, I can see why secure financial HTTPS data might want to not be cached, or only for the session, but not caching .CSS files (even for two minutes) is just plain ridiculous!!!

Loading my watchlist - 41 requests, 1.4MB (page 1 only); loading a random loan (8835 iirc) .. 34 requests, 1.6MB. In each case 3/4 of the traffic is .CSS and .JS which is (presumably) NOT changing on a second-by-second basis. Issue raised with FC (some of us still have data traffic limits, and are trying to stream music and movies as well). Hopefully it'll reach their tech team, since their front line support probably won't have an answer.
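The same check can be scripted rather than read out of the browser: fetch one of the static assets and print the headers that govern caching. A sketch with a placeholder URL - point it at whichever application.js / .css file shows up in the network monitor:

```python
# Print the caching-related response headers for a static asset.
# ASSET is a placeholder URL, not the real FC asset path.
import requests

ASSET = "https://example.com/assets/application.js"  # placeholder

resp = requests.get(ASSET)
for header in ("Cache-Control", "Expires", "ETag", "Last-Modified", "Pragma"):
    print(f"{header}: {resp.headers.get(header, '(not set)')}")

# An already-expired Expires date, "no-cache"/"no-store" in Cache-Control, or no
# validators at all would explain the browser re-downloading the file every visit.
```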
|
|
|
Post by GSV3MIaC on Nov 18, 2014 18:27:10 GMT
FC confirmed to me that they are basically blocking all caching so nobody gets an out-of-date / incompatible .CSS or .js file. I have suggested they might want to consider something more 20th century, like versioning, thus reducing web traffic by 95%. Meantime, yes, it's a 2MB+ download to read your summary data. Each time.
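For illustration, the versioning idea amounts to naming each asset after a hash of its contents, so it can be cached for as long as you like: a changed file gets a new name, and nobody can ever be served a stale copy. A sketch only - the file names are illustrative, not FC's actual build process:

```python
# Content-hash "fingerprinting" of a static asset: application.js becomes
# something like application-5f3a9c01d2.js, which can then be served with a
# very long cache lifetime (e.g. "Cache-Control: public, max-age=31536000").
import hashlib
import pathlib
import shutil

def fingerprint(path: pathlib.Path) -> pathlib.Path:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()[:10]
    versioned = path.with_name(f"{path.stem}-{digest}{path.suffix}")
    shutil.copyfile(path, versioned)
    return versioned

print(fingerprint(pathlib.Path("application.js")))  # illustrative file name
```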
|
|
sl75
Posts: 2,092
Likes: 1,245
|
Post by sl75 on Nov 18, 2014 18:46:26 GMT
FC confirmed to me that they are basically blocking all caching so nobody gets an out-of-date / incompatible .CSS or .js file. I have suggested they might want to consider something more 20th century, like versioning, thus reducing web traffic by 95%. Meantime, yes, it's a 2MB+ download to read your summary data. Each time.

Can I be bothered to go to the trouble of setting up a caching proxy explicitly configured to ignore requests from web servers not to cache certain content...? If I'm likely to be paying £3 per 50MB of roaming data again and/or on a regular basis, I would certainly consider it!
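For plain-HTTP traffic the idea can be sketched in a few lines: a forward proxy that caches every GET response to disk and simply never honours the server's no-cache headers. This is only a sketch - it drops all response headers except the length, and it handles only plain-HTTP GETs (no CONNECT tunnelling), so HTTPS traffic, which is what FC actually serves, would need an intercepting proxy such as Squid instead:

```python
# Toy caching forward proxy for plain HTTP: every GET is cached on disk by URL,
# regardless of the upstream server's no-cache/no-store headers. Configure the
# browser to use 127.0.0.1:8080 as its HTTP proxy. Not suitable for real use.
import hashlib
import pathlib
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

CACHE = pathlib.Path("proxy-cache")
CACHE.mkdir(exist_ok=True)

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # In proxy mode self.path is the full absolute URL requested by the browser.
        key = CACHE / hashlib.sha1(self.path.encode()).hexdigest()
        if not key.exists():                       # ignore no-cache: cache everything
            with urllib.request.urlopen(self.path) as upstream:
                key.write_bytes(upstream.read())
        body = key.read_bytes()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("127.0.0.1", 8080), CachingProxy).serve_forever()
```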
|
|
sl75
Posts: 2,092
Likes: 1,245
|
Post by sl75 on Nov 26, 2014 9:37:27 GMT
Can you add any extra to this list? If so, please share:
127.0.0.1 www.google-analytics.com # FC
127.0.0.1 ssl.google-analytics.com # FC
127.0.0.1 js-agent.newrelic.com # FC
127.0.0.1 beacon-1.newrelic.com # FC
127.0.0.1 asset0.zendesk.com # Stop AWS amazon chunter when using FC
127.0.0.1 cdn.optimizely.com # Funding circle java

Just found one of the FC pages with Firefox stuck waiting for analytics.twitter.com - so in it went!

Edit: and having found the connection to "cdn.optimizely.com" sometimes sluggish despite being on a loopback address (127.0.0.X), I'm now experimenting with 255.0.0.1. As an "obviously invalid" address which the TCP/IP stack doesn't even know how to use (being reserved for "future use" of a potentially completely different transmission mechanism), it should cause the connection to fail more quickly, without having to wake any other process or generate any network traffic.
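Whether 255.0.0.1 really fails faster than a loopback address is easy to measure: time a connection attempt to each candidate and see what error the stack returns and how quickly. A small sketch (results will vary by OS; the addresses are the ones discussed above):

```python
# Compare how quickly connection attempts fail (or succeed) for each candidate
# "blocked" address. Standard library only.
import socket
import time

CANDIDATES = ["127.0.0.1", "127.0.0.2", "255.0.0.1"]

for addr in CANDIDATES:
    start = time.perf_counter()
    try:
        with socket.create_connection((addr, 80), timeout=3):
            result = "connected (something is serving on this address)"
    except OSError as exc:
        result = f"failed: {exc}"
    elapsed = (time.perf_counter() - start) * 1000
    print(f"{addr:12} {result}  ({elapsed:.1f} ms)")
```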
|
|