The Speed of Cloud
By John Parkinson

Warning! Math and science present in what follows.
Physics is tough. You don't get a vote on the "laws" and you can't generally cheat without getting hurt. You can change the laws if you're smart enough, but that's really, really tough these days when most of the easy stuff has already been figured out. So, we generally have to work with the laws of physics as they are, not as we would like them to be. Let's start with the speed of light.
Light travels at about 300,000 kilometers per second in a vacuum and at roughly two thirds that speed in the best quality optical fiber, which is optically much denser than a vacuum and hence slows the photons (or waves, whichever your view of physics allows) quite a lot.
For example, if I had a single fiber 5,000 km long (5 million meters) running from Boston to San Diego, light would take 25 milliseconds (ms) to get there (and the same to get back to tell me it had arrived safely and intact). However, I don't actually have a single continuous fiber that long. In the real world, I have to regenerate the signal several times and convert things from electrons to photons and back, which introduces a series of small delays that add up over distance. So if I "ping" San Diego from Boston, the one-way delay I see works out to about twice the theoretical figure - around 50 ms. That's what gives us the generally accepted latency heuristic for optical fiber links - 10 microseconds (µs) per kilometer.
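If you want to check that arithmetic, here's a minimal sketch in Python (using round figures for the speed of light and the 10 µs/km heuristic - illustrative numbers, not measurements):

```python
# Back-of-the-envelope latency arithmetic (round figures, not measurements).
C_VACUUM_KM_PER_S = 300_000                       # speed of light in vacuum, km/s
C_FIBER_KM_PER_S = C_VACUUM_KM_PER_S * 2 / 3      # roughly two thirds of c in good fiber
HEURISTIC_US_PER_KM = 10                          # rule of thumb including regeneration delays

distance_km = 5_000                               # Boston to San Diego, roughly

theoretical_ms = distance_km / C_FIBER_KM_PER_S * 1_000
rule_of_thumb_ms = distance_km * HEURISTIC_US_PER_KM / 1_000

print(f"Theoretical one-way delay in fiber: {theoretical_ms:.0f} ms")    # ~25 ms
print(f"Rule-of-thumb one-way delay:        {rule_of_thumb_ms:.0f} ms")  # ~50 ms
```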
That latency is inconvenient, but actually very good in terms of how close we are to the theoretical maximum performance - within 50% of the limit for the best available fiber, which itself runs within about 30% of the speed in vacuum. There aren't many comparable technologies that are anywhere near that close to their potential "laws of physics" limits. The downside is that things can't get that much better - we will never double the available performance in fiber without some new laws - which would be a bad way to bet.
And the nature of wide area packet networks makes things worse. There are actually many different paths between Boston and San Diego, and each one is a different length with different latency characteristics. You can't easily prescribe which path your packets will take, so even if they start out in the order you want, you can't guarantee they'll arrive in the same order, and if order matters to you, which it often does, you have to wait until the last one arrives to be sure that they can be reassembled correctly. That's more delay.
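Here's a toy illustration of why that matters: when packets are spread across several paths, the message is only complete when the slowest packet lands, so effective latency tends toward the worst path, not the average. The path latencies below are invented for the example:

```python
import random

# Made-up one-way latencies (ms) for four hypothetical routes between Boston and San Diego.
PATH_LATENCIES_MS = [50, 55, 62, 71]

def delivery_time_ms(num_packets: int) -> int:
    """Time until the LAST packet arrives, if each packet may take any of the paths."""
    arrivals = [random.choice(PATH_LATENCIES_MS) for _ in range(num_packets)]
    return max(arrivals)  # reassembly can't complete before the slowest packet lands

print(delivery_time_ms(100))  # almost always 71 - the worst path dominates
```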
We can, however, work smarter. Our single fiber can handle more than one wavelength of light (called a lambda - λ - in optical engineering jargon) and each lambda can carry some of our data, so we can get more data in the same unit of irreducible latency - up to a point, because we have to organize the data prior to sending it and reorganize (and verify) it when it arrives (all of which also adds to latency). With 64 lambdas (common today), even if the first bit takes 50 ms, the 64th would arrive at the same time and the 128th right after it and so on.
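To put rough numbers on that (64 wavelengths at 10 Gbps each are plausible but purely illustrative figures):

```python
# Illustrative WDM arithmetic: more lambdas multiply throughput, not the latency floor.
lambdas = 64                 # wavelengths on one fiber (a common figure)
gbps_per_lambda = 10         # assumed capacity per wavelength
one_way_latency_ms = 50      # irreducible delay from the heuristic above

print(f"Aggregate capacity: {lambdas * gbps_per_lambda} Gbps")   # 640 Gbps
print(f"First bit still takes {one_way_latency_ms} ms to arrive")
```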
Ok, enough theory. Now let's suppose I want to send a lot of data to the cloud, either to store it there or to compute a result there. The fattest (and therefore fastest) pipe I can buy today is probably an OC-192, which lets me ride 10 gigabits a second on each wavelength. If I want to send, say, 10 terabytes (which with protocol overhead is roughly 100 terabits), it's going to take me about 10 thousand seconds - somewhere around 2.8 hours. That's not too bad. Not exactly instantaneous, and I may have to do some fancy synchronization between source and target so that people know which version is correct at any given moment, but I can probably plan for it and there are use cases that fit this quite nicely.
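The arithmetic behind those numbers, as a quick sketch (the 25% overhead figure is an assumption):

```python
# Rough transfer-time arithmetic for the 10 TB example (illustrative figures).
payload_terabytes = 10
overhead_factor = 1.25                 # assume ~25% protocol overhead
link_gbps = 10                         # one OC-192 wavelength

bits_to_send = payload_terabytes * 8e12 * overhead_factor   # ~1e14 bits, i.e. ~100 terabits
seconds = bits_to_send / (link_gbps * 1e9)

print(f"{seconds:,.0f} seconds, about {seconds / 3600:.1f} hours")   # ~10,000 s, ~2.8 h
```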
However, to get that kind of speed is expensive - really expensive. And if I only need to do this occasionally - say once a day - that big fat expensive pipe is empty almost 90% of the time. There are ways around this - I can buy "burstable" bandwidth that only goes really fast when I need it to and reduces the capacity to, say, 1 Gbps (which is useful for many things and more affordable) the rest of the time. But the engineering for this isn't trivial, and pretty soon I'll be better off buying my own dedicated fiber (or at least lambdas on dedicated fiber) and doing my own optical engineering, which also isn't trivial. Better hope my cloud provider can handle their end of the connection.
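For context, here's the utilization math behind that tradeoff, assuming the daily 10-terabyte transfer from the example above:

```python
# Utilization of a dedicated 10 Gbps pipe used for one ~2.8-hour transfer per day,
# versus trickling the same ~100 terabits out continuously (illustrative figures).
transfer_hours_per_day = 2.8
busy_fraction = transfer_hours_per_day / 24
print(f"Busy {busy_fraction:.0%} of the day, idle {1 - busy_fraction:.0%}")  # ~12% / ~88%

steady_gbps = 1e14 / (24 * 3600) / 1e9
print(f"Rate needed to spread it over 24 hours: {steady_gbps:.2f} Gbps")     # ~1.16 Gbps
```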
If you're just a casual cloud user, you probably won't need to worry about all this. But if you're contemplating moving significant data volumes to and from the cloud, network issues such as bandwidth and latency (and error rate over prolonged transmission times) matter a lot - and might just cancel out the potential savings from all that cheap storage and on-demand computing capacity.
About the Author

John Parkinson is the head of the Global Program Management Office at AXIS Capital. He has been a technology executive, strategist, consultant and author for 25 years.