Ever since it was announced, AWS's Global Accelerator has intrigued me. The idea is that, for surprisingly little money, you get a pair of IPs that are anycast from all of Amazon's edge locations. You can then attach endpoints to them: ELBs, EIPs, or EC2 instances. The endpoints can be in multiple regions, and connections are carried over Amazon's internal network from the edge to the endpoint.
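Roughly, provisioning one with boto3 looks something like the sketch below. The accelerator name is an arbitrary placeholder, and note that the Global Accelerator control-plane API is served out of us-west-2 regardless of where your endpoints live:

```python
import uuid

import boto3

# The Global Accelerator API lives in us-west-2, even though
# the accelerator itself is global.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

resp = ga.create_accelerator(
    Name="mediocre-cdn",  # placeholder name
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)

accelerator = resp["Accelerator"]
print("ARN:", accelerator["AcceleratorArn"])

# The pair of static anycast IPs comes back in IpSets.
for ip_set in accelerator["IpSets"]:
    print(ip_set["IpFamily"], ip_set["IpAddresses"])
```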
Building a mediocre CDN
One idea I've had for a while is to get hitch+varnish running on instances in a handful of locations. For example, run t3.micro-class instances in Virginia, Oregon, London, and Singapore, and for maybe $30/month you've got much of the world within 100ms of a cache you control. By design, users get routed to the closest one.
Since they're your own instances, you can configure them however you like: have varnish keep persistent connections open to your backend webserver, juggle headers in ways a normal CDN might not let you, or use varnish strictly as a cache and let nginx + ngx_pagespeed handle connections to the backend if you can't do that on the webserver itself.
What's also interesting to me is that, since Global Accelerator can do health checks and stop routing traffic to unhealthy backends, it becomes possible to run these cache nodes on spot instances. If your node in a region gets terminated when you're outbid, traffic just spills over to the next-closest instance you have. There seems to be a pricing sweet spot on many instance types where you pay a good bit less than the usual on-demand price, but rarely if ever get outbid.
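Wiring that up is mostly one listener plus an endpoint group per region. A sketch of what that might look like with boto3, with the accelerator ARN and instance IDs as placeholders, and a TCP health check against port 443 so a dead or reclaimed node drops out of rotation:

```python
import uuid

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

ACCELERATOR_ARN = "arn:aws:globalaccelerator::123456789012:accelerator/example"  # placeholder

# One TCP listener for HTTPS traffic.
listener = ga.create_listener(
    AcceleratorArn=ACCELERATOR_ARN,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
    ClientAffinity="NONE",
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

# Hypothetical cache nodes: one spot instance per region.
CACHE_NODES = {
    "us-east-1": "i-0aaaaaaaaaaaaaaaa",
    "us-west-2": "i-0bbbbbbbbbbbbbbbb",
    "eu-west-2": "i-0cccccccccccccccc",
    "ap-southeast-1": "i-0dddddddddddddddd",
}

for region, instance_id in CACHE_NODES.items():
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": instance_id, "Weight": 128}],
        # A failed health check (say, a reclaimed spot instance) takes this
        # region out of rotation, and traffic spills to the next-closest group.
        HealthCheckProtocol="TCP",
        HealthCheckPort=443,
        HealthCheckIntervalSeconds=30,
        ThresholdCount=3,
        TrafficDialPercentage=100,
        IdempotencyToken=str(uuid.uuid4()),
    )
```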
Accelerating international connections
For reasons I'm not convinced are adequately explained by the bandwidth-delay product, some international connections seem utterly incapable of reaching anything remotely resembling maximum speed.
For giggles, I set up an EC2 instance in Mumbai, and put it behind a Global Accelerator. I set up squid on the instance as a transparent proxy, but with caching disabled, in front of an open source mirror in India.
Being on the east coast of the US, I hit Amazon's edge nodes in Boston when connecting to Global Accelerator. From there, the connection took Amazon's network to Mumbai. Global Accelerator now supports TCP termination at the edge, and I was curious to see how that played out.
Without Global Accelerator
Hitting the Linux mirror in India directly from the United States over my home Internet connection, I averaged 2.77 MB/sec downloading a 649MB ISO, for a total of 3m55s.
With Global Accelerator
Going through Global Accelerator, using it as something vaguely resembling a poor man's WAN accelerator, I averaged 7.01 MB/sec., downloading the same 649MB ISO in 93s.
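The numbers are nothing more sophisticated than bytes transferred over wall-clock time; if you want to reproduce that kind of measurement, a Python sketch along these lines (the mirror URL is a placeholder) is all it takes:

```python
import time
import urllib.request

# Placeholder URL; substitute whatever large file you want to time.
URL = "https://mirror.example.org/isos/distro-x86_64.iso"

CHUNK = 1 << 20  # read 1 MiB at a time

start = time.monotonic()
total = 0
with urllib.request.urlopen(URL) as resp:
    while True:
        chunk = resp.read(CHUNK)
        if not chunk:
            break
        total += len(chunk)
elapsed = time.monotonic() - start

print(f"{total / 1e6:.0f} MB in {elapsed:.0f}s "
      f"({total / 1e6 / elapsed:.2f} MB/sec)")
```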
My goal here was to keep Amazon resources close to both sides of the connection. On my end, the actual TCP connection was terminated in Boston, some 20ms away on Comcast. In India, the upstream mirror saw the connection coming from an AWS instance a little over 2ms away.
7MB/sec. is still rather unimpressive, but it also represents more than a doubling of throughput without really doing anything fancy. All this accomplished was moving TCP termination closer to the edge on both sides, and having most of the traffic carried on Amazon's network versus the public Internet. From that perspective, this is a fairly impressive boost.
Future Fun
I'd like to play around more with the CDN idea, coupled with a good way of handling purges, whether it's a small script that hits all the nodes, or some AWS message bus.
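The small-script version would be almost trivial: something like the sketch below, where the node addresses and hostname are placeholders and the varnish VCL would need to be set up to accept PURGE from wherever this runs. Purges have to go to each node directly, since a request through the accelerator would only ever reach the closest one.

```python
import requests

# Placeholder addresses of the cache nodes and the site they serve.
CACHE_NODES = ["203.0.113.10", "203.0.113.20", "203.0.113.30", "203.0.113.40"]
SITE_HOST = "www.example.com"

def purge(path: str) -> None:
    """Send a PURGE for `path` to every cache node directly."""
    for node in CACHE_NODES:
        resp = requests.request(
            "PURGE",
            f"http://{node}{path}",
            # Talk to each node by IP, but present the site's hostname
            # so varnish matches the right cached object.
            headers={"Host": SITE_HOST},
            timeout=5,
        )
        print(node, resp.status_code)

purge("/index.html")
```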
I had initially tried to get hitch in front of varnish to use TLSv1.3, but ran into unexpected problems. Combined with Let's Encrypt, this seems like it could become a pretty cheap and easy way to have everything supporting HTTP/2 and TLSv1.3. (That said, so would "Just put your site behind Cloudflare for free," in fairness.)
With some on-site instrumentation, it could also be interesting to see the effect of bringing up additional nodes in different locations. AWS is going to bring up a region in South Africa soon, for example; how much traffic would it pick up, versus how much would still be routed through Europe?
It also occurs to me that it might be fun to run an OpenVPN server on the same nodes. A single client configuration would always route to the closest instance, and you could take advantage of Global Accelerator's support for multiple listeners to give people a way to pin connections to a particular region if desired.
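The region pinning could fall out of the listener configuration itself: keep one UDP listener on 1194 with endpoint groups everywhere for the "closest instance" behavior, and add a second listener on another port whose only endpoint group is the region you want to pin to. A boto3 sketch of that second, pinned listener, with the ARN, port, and instance ID as placeholders:

```python
import uuid

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

ACCELERATOR_ARN = "arn:aws:globalaccelerator::123456789012:accelerator/example"  # placeholder

# A dedicated UDP listener: clients who point OpenVPN at this port
# always land in ap-south-1, no matter where they connect from.
pinned = ga.create_listener(
    AcceleratorArn=ACCELERATOR_ARN,
    Protocol="UDP",
    PortRanges=[{"FromPort": 1195, "ToPort": 1195}],
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

ga.create_endpoint_group(
    ListenerArn=pinned["ListenerArn"],
    EndpointGroupRegion="ap-south-1",
    EndpointConfigurations=[{"EndpointId": "i-0eeeeeeeeeeeeeeee", "Weight": 255}],
    # OpenVPN here is UDP-only, so health-check something TCP on the
    # instance instead; SSH is one option.
    HealthCheckProtocol="TCP",
    HealthCheckPort=22,
    IdempotencyToken=str(uuid.uuid4()),
)
```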