March 2018

The Rise of Data Centers at the Network Edge: From Cable Content Cache to Cloud Enabler

Traditional data centers grew up around the Internet and the need for enterprises to connect their voice and data traffic.

But about five years ago, a new data center concept arose: the Edge Data Center® (EDC), a smaller, high-powered, compute-dense facility pioneered by EdgeConneX on behalf of Comcast and other cablecos and telcos.

In recent years, these EdgeConneX Edge Data Centers have evolved to the point where they on-ramp an estimated 70% to 80% of all Internet traffic in the United States.  And their ability to move content, cloud and applications closer to end users is enabling the growth of the cloud and expanding peering in new markets across the U.S. and Europe.

These Edge Data Centers are located in previously ignored tier 2 markets that had relied on Internet connectivity from a few peering locations that were, in many cases, hundreds of miles away.

At the ITW show in Chicago, I spoke to Phill Lawson-Shanks, chief architect and vice president of innovation at EdgeConneX.  In the narration that follows, Phill not only explains the growing significance of the company's Edge Data Centers, he also brilliantly shows how they organically evolved to meet the insatiable demand for cost-effective, low-latency content distribution and cloud.  So what he's given us here is a highly interesting history of the data center and related technology.

1. The Beginning: A Bottleneck Backhauling to Cell Towers

Phill Lawson-Shanks: EdgeConneX got started about seven years ago, in the days when cable companies were desperately trying to trench their way to cell towers to support wireless backhaul.  It was a mad scramble.

It got to the point where as soon as one company finished getting the permits, digging up the roads for the cable, putting a box at the bottom of the tower, and running the antenna, a week later another cableco would do the exact same thing at the same facility!

Sometimes they would cut through the cable, but then we would just put a box at the bottom and one at the top.  But they were splicing into their own infrastructure, so there was no ability to do anything else at that point.

So the founders of EdgeConneX said, "Why don't we put a shed at the bottom of the cell tower and in that shed allow people to insert a bit more technology rather than just a splice in the spur?"  That way, each individual cable company could do more things and there would be less duplication of effort.

So, this is how EdgeConneX got into the business, enabling all these cable companies to get started.  EdgeConneX built and managed the infrastructure at the base of the cell towers, creating a meeting point for everyone to get connected.  And it was a very successful business for us.

2. Bringing Ethernet to Corporate Buildings

A little while after we started helping cable companies reach cell towers, the cablecos came back to us and said, “You know, on the way to these cell towers in North America, we are running past all these shiny buildings.  And we’d love to sell Ethernet services to the tenants.”

And on the cablecos’ behalf, we knocked on the doors of building managers who proved very helpful when it came to securing the necessary permitting.

This market turned out to be another source of funding.  And even though we were still privately held, we had a portfolio of about 40,000 buildings we were running fiber to.  Not all these buildings were lit, but we had the rights to light them, and we also enjoyed rights at approximately 30,000 cell towers.

The big telecom operators were also involved.  So in many cases, we would work with the municipality, get the permits to dig up the roads, penetrate the building, and secure riser rights, top-of-building rights, and front-of-building rights on behalf of the major wireless carriers.

Basically, we became a property facilitator for the network companies.  And one of the important side benefits of the business is we collected all the available maps of the fiber in North America from all the providers.  Very few companies out there have that intelligence.  And because we remain private, we hold that information dear to us and don’t sell it.

3. Moving Content Storage from Set-Tops to the Cloud

About five years ago, technological change required a drastic change in the way cable content was stored.  Broadband had exploded to the point where most connectivity to the Internet was coming through cable broadband as opposed to the traditional ILECs.

In particular, Comcast, the largest cableco, wanted to change how it was delivering technology to its customers, primarily because set-top boxes were failing.  Cisco had bought the last of the remaining set-top box manufacturers, Scientific Atlanta, but set-top boxes had not evolved to serve industry needs.

Due to the influence of the Motion Picture Association of America (MPAA), the law in the U.S. is that when something is broadcast over the air or cable, you as a consumer can store the content and use it yourself, but you can't share it.

And if you should lose your content, you have to wait for it to be rebroadcast.  For instance, if you saved on your set-top box several episodes of “Game of Thrones” and the set-top died, the cable company can send you another set-top box, but they can’t repopulate your content.  So, this became a big customer complaint and churn issue.

So, Comcast said, “Why don’t we switch to a cloud-based DVR infrastructure?  That way, we can maintain the integrity of each subscriber’s content.  Instead of having a hard drive in your home, let’s have a ‘hard drive’ in the cloud.”

And an added benefit of that concept was that when travelling, a subscriber could access a video on his mobile phone as long as Comcast maintained the integrity of the data and transcoded the signal at the edge.

4. The Rise of Over-The-Top Content Providers

Another disruptive force at the time was the Over-the-Top providers.  Once Netflix started streaming over-the-top, it was destroying the market for traditional broadcast TV and strongly challenging cable TV subscriptions.  It also made content providers think, "OK, I need to have a catalog of content and be able to sell content a la carte like Netflix."

Then, just as networks were collapsing under the weight of Netflix's massive volume of bandwidth-intensive content, along came Hulu, Amazon Prime and other players offering alternative TV channels.  So, all these forces meant that the Internet superhighway and data-exchange methodologies needed to be rethought.  There was a desperate need for new ways to distribute and manage content so that end users had a positive experience.

5. A Better Way to Distribute Xfinity Content

Xfinity became Comcast’s ground-breaking way of changing content distribution for its cable TV service.  The cloud content storage was originally hosted in Denver, where their technical team resides.  Xfinity’s first trial was launched in Chicago and people loved it. 

Xfinity was a fantastic service: it delivered a much better interface and it brought IP straight to the home.  The only trouble was that with the video content "origin" servers being in Colorado and the consumers being over 1,000 miles away in Chicago, the network latency made the whole system unusable once they reached a certain level of subscribers.
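
To put numbers on that, here is a back-of-the-envelope latency sketch; the propagation figure and route factor are generic assumptions for illustration, not measured values.

```python
# Back-of-the-envelope: why serving interactive video from 1,000+ miles away
# breaks down at scale. All figures are generic illustrative assumptions.

FIBER_MILES_PER_MS = 125  # light in fiber covers roughly 125 miles per millisecond

def round_trip_ms(distance_miles: float, route_factor: float = 1.5) -> float:
    """Estimate round-trip propagation delay over fiber.

    route_factor reflects that real fiber paths run longer than the
    straight-line distance between two cities.
    """
    one_way_ms = (distance_miles * route_factor) / FIBER_MILES_PER_MS
    return 2 * one_way_ms

print(round_trip_ms(1000))  # Denver-to-Chicago scale: about 24 ms per round trip
print(round_trip_ms(50))    # an in-market edge site: about 1 ms per round trip
```

And that 24 ms is only propagation: a chatty control protocol that needs several round trips per remote-control click multiplies it into lag a viewer can feel, which is why moving the origin into each market fixes the problem.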

But Comcast realized Xfinity could still be a winner as long as an instance of the Xfinity platform could be put in each market they were serving.

So, Comcast and EdgeConneX teamed up.  EdgeConneX knew the Comcast market.  And we were able to look at their headends (cable COs), evaluating whether we could retrofit those to be the content repositories for the Xfinity service.

When the EdgeConneX team started looking at the buildings, we concluded that retrofitting headends wasn't going to work because the current headends were little more than telco COs.  The problem was that you needed to put in incredibly power-hungry equipment that generated lots of heat.  It was just not going to work.

The use of containers was also explored, but the economies of scale in container systems weren't quite there yet, and that made the container option too costly.

So, Comcast and EdgeConneX eventually created a plan.  EdgeConneX was to build a network-neutral data center for Comcast within 10 kilometers of their major aggregation points so that they could have a passive network and avoid the use of repeaters and other expensive gear.  Comcast became a tenant, and additional tenants were to come.  Soon enough, Internet ecosystem tenants came in, including content delivery networks (CDNs) and major content providers who needed to reach all of those Comcast eyeballs (customers).
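
To see why the 10-kilometer figure matters, consider a rough optical link budget.  The sketch below uses generic single-mode fiber and 10G optic numbers purely for illustration; they are assumptions, not the actual Comcast or EdgeConneX design values.

```python
# A rough optical link budget, showing why a sub-10 km run can stay passive
# (no repeaters or amplifiers). All values are generic single-mode fiber and
# 10G optic assumptions for illustration, not the actual network design.

TX_POWER_DBM = 0.0           # typical 10G long-reach optic launch power
RX_SENSITIVITY_DBM = -14.0   # typical receiver sensitivity for the same optic
FIBER_LOSS_DB_PER_KM = 0.35  # single-mode fiber attenuation near 1310 nm
CONNECTOR_LOSS_DB = 1.0      # total budget for splices and patch panels

def link_margin_db(distance_km: float) -> float:
    """Received power above sensitivity; positive margin means the link works."""
    path_loss = distance_km * FIBER_LOSS_DB_PER_KM + CONNECTOR_LOSS_DB
    return (TX_POWER_DBM - path_loss) - RX_SENSITIVITY_DBM

print(link_margin_db(10))  # ~9.5 dB of headroom: passive dark fiber is fine
print(link_margin_db(80))  # ~-15 dB: now you need amplifiers or pricier optics
```

With roughly 9 dB of headroom at 10 km, plain dark fiber and standard optics work comfortably; stretch the run to long-haul distances and the margin goes negative, which is where repeaters and amplifiers, and their cost, come in.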

6. Comcast Buys into the EdgeConneX Data Center Idea

The Comcast proposal was the opportunity of a lifetime because it got the EdgeConneX Edge Data Center concept in motion and at the pivot point of the content distribution system.  Four short years later, our portfolio of EDCs on-ramps 70% to 80% of Internet and content traffic in the US.

Our first Edge Data Center was created in Houston, retrofitting a small, unfinished “Mom and Pop” data center.  The Houston EDC worked so well that Akamai came into our ecosystem and shortly after became an investor in our company.

Then Comcast asked us to do Salt Lake City.  With our fiber and network intelligence, finding the perfect location was not an issue.  And despite freezing winter weather, we built out and launched that EDC in three months.

7. The Rollout of EdgeConneX Edge Data Centers

With a successful model now proven, Comcast was excited to move forward with us in additional locations where we were able to put forth a bid that was a clear winner.

Over the next two years, EdgeConneX built 23 EDCs across North America, predominantly for Comcast, but also for Cox Communications and Charter Communications, to name a few.

As a result, we grew very quickly.  We became the next-generation headends, the lowest-latency point to the majority of business and domestic users in any market where we built.  We initially elected not to build in big peering cities such as Chicago, Santa Clara, and Ashburn, where most of the big cluster data centers live.  There was no point in building there because those markets were over-served.

Our EDCs were built on a smaller footprint, but we designed them from the outset for 20 kW or 30 kW per cabinet instead of the traditional 4 to 7 kW.  And that was key because the equipment was capable of incredibly dense computing and we knew the stuff coming would require even greater power loads.

As an example, one of our customers put in a Juniper router that was a 30 kW device: most people can't accommodate that, but we can.  We can also accommodate 20 kW in every cabinet across the entire footprint of our floor.

And the trend is towards newer, faster equipment and multiple streams over 100 Gig links.  All these devices we put in are blade architectures holding terabytes of data, either solid state or spinning disks, all of which require greater power and cooling.  But we design our EDCs from the ground up to accommodate huge capacity and compute density.

It didn't take long for the content guys to come in, the over-the-top guys, the social network players.  Today, these players make up the majority of our customer base.  In fact, in any one of our buildings, we are driving 70% to 80% of all Internet traffic for that region through our space.

Players like Akamai pass traffic through our infrastructure as well.  Akamai used to put their storage devices in telco CO or headend locations.  Now, they put them with us because we are a lower-latency path to the whole market through our building.

8. Reaching the Internet in Boston via the Town of Billerica

Another advantage of our EDC deployments is that we made the most of location.  A case in point is Boston, where we built an EDC to support a couple of cable companies.

Traditionally, you would look at the center of Boston where the telcos would connect, which is the Prudential building off Boylston Street.  The data centers would be around that building, which is a high-price real estate district: that’s where you would connect to the Internet.

But nowadays, broadband has taken over, and it turns out the fastest point to 80% of the customers in the Boston area is a town called Billerica, about an hour northwest of Boston.  Billerica is best because that’s how the Internet is structured.

In Internet time, Billerica is closer than the Prudential Building for people who access the Internet through Comcast, Charter, or whomever.  Their traffic trombones: it goes out of Boston to the Billerica headend, then comes back to the Prudential Building and from there usually to points south.

Of course, from our EDC in Billerica, we don’t necessarily follow the Prudential Building route.  We can go straight to Chicago or Ashburn or wherever is required next.  In short, ubiquitous broadband has caused the points of presence and on-ramps to shift, and that effect will be even more pronounced once 5G wireless is ratified.

9. Internet Birth to Equinix: The Net-Neutral Data Center

Now at this point, I think it’s important to distinguish the EdgeConneX EDC from the traditional data centers that emerged at the birth of the Internet.

Back in the early 1990s, data centers were simply a way to get on the Internet as quickly as possible.  And that desire led to the big clusters of data centers you see today in places like Ashburn, Virginia, Dallas, and Santa Clara.

Ashburn has the most storied past because it grew up after the networking companies UUNET and MFSnet decided to connect their networks together in Tysons Corner, Virginia, back in 1992.  Shortly thereafter, MCI came in and established MAE-East, and then PaxNet in Palo Alto became MAE-West.  Those became the first two points of the Internet, the true Internet, because up till then, if you bought Internet services from, say, MCI, you could only reach MCI customers.  There was no traversing other people's networks until then.

Soon enough, UUNET became the largest Internet on-ramp and they decided to move down to Ashburn, Virginia, close to Dulles Airport.  It was a farmland area with lots of available land and good power, and it sat outside the theoretical 30-mile blast radius from Washington, D.C.

It was at that time that a brilliant guy, the founder of Equinix, came along and asked: if the Meet-Me rooms are where all the connections occur, why not build a Meet-Me room in a network-neutral data center, one not controlled by a telco with a vested interest?

Equinix then built the first net-neutral data center in Ashburn, behind the UUNET building.  It became wildly successful, and that prompted the build-out of a much larger data center across the street, which became Equinix DC-1.

Today, Ashburn's DC-2 is the most connected place on the planet, and its success caused the explosion of data centers in Ashburn.  The local municipality was smart to support the development by building a water treatment plant to pump grey, or non-drinkable, water to the data centers for use in the Ashburn cooling towers.  So, this is why Ashburn is ground zero for data centers and the Internet.

10. From Safe COLOs to Virtual Infrastructure

In the beginning, network-neutral data centers were colos used by enterprises purely for the economics.  People said, "Hey, I'm running my exchange server and my database on that big box in the corner, so I need to get that to a safe location."  So colos exploded for that purpose: CIOs were looking for safe places where someone else could do the care and feeding of the servers.

And then virtualization happened with VMware, hyperscale and all that stuff.  This was a big paradigm shift, for it allowed people to say, "Well, instead of having this one box to run my mail and messaging and another box to do my database and so on, I can virtualize that with VMware (or another hypervisor), get a bigger box, and run it all on that."

So, costs could be consolidated at the colo provider.  Instead of buying all these cabinets, I can squish it all into one cabinet that uses more power but is a bigger box.  There is a lot of that going on.

Now, there are exceptions to this.  If you are running an old Oracle system, or there are compliance issues where machines have to stay where they are, then you can't go to the cloud with that.  There are also some old DB2 databases running on IBM hardware, and that's critical backend infrastructure.

The point is: anything that could be virtualized has been virtualized.  So, the larger colo providers specialize in delivering that.  They got the business because they were the closest point to the Internet on-ramps.  So that was their market.

11. The Arrival of Cloud: Its Impact on the Data Center

But now, with the advent of cloud (which is simply another virtualized instance, but deployed on demand somewhere), it keeps getting harder for the IT guys to justify, "Hey, in two or three years, I need to buy another server box to keep up with the technology of faster chips, faster memory, whatever it is."

In turn, it gets harder for them to say, "I need $20,000 to buy another server to run all of our software."  And the server is not the whole bill: they also need to buy the software licenses, and add to that a certain amount for power, cooling, and space in the data center.

The alternative, of course, is to no longer do that at all, but to move it into a cloud where you are buying it on demand.  With the cloud, you don't have to pay for it 24 hours a day.  If you are doing end-of-month payroll processing, you can just spin it up and spin it down when it's complete; there is a semi-persistent aspect to technology now.  In short, it became harder for them to justify maintaining that cage with those cabinets, with those servers, operating system, and software licenses in that colo.
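
As a rough illustration of that justification problem, here is the arithmetic an IT team might run; every figure below is a hypothetical assumption, not real pricing.

```python
# Illustrative arithmetic for the own-versus-on-demand decision described
# above. Every number here is a hypothetical assumption, not real pricing.

def owned_monthly_cost(server_price=20_000, amortize_months=36,
                       colo_power_space=400, licenses=300):
    """An owned box costs the same every month, busy or idle: amortized
    hardware plus colo space/power plus software licenses."""
    return server_price / amortize_months + colo_power_space + licenses

def on_demand_monthly_cost(hourly_rate=2.50, hours_used=40):
    """A cloud instance for end-of-month payroll is billed only for the
    hours it is actually spun up."""
    return hourly_rate * hours_used

print(f"owned, always on: ${owned_monthly_cost():,.0f} per month")     # ~$1,256
print(f"cloud, on demand: ${on_demand_monthly_cost():,.0f} per month")  # $100
```

Under these assumed numbers, the on-demand option wins by an order of magnitude for bursty workloads; for a box that genuinely runs hot 24 hours a day, the comparison tightens considerably.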

So more and more people are moving into the cloud and saying, "This is great because it's cheaper infrastructure."

12. The EDC’s Advantages in the Enterprise Cloud

Okay, let’s review what I’ve discussed thus far.  I gave a brief history of how traditional net-neutral data centers came into the market.  And before that, I explained how the Edge Data Center originally came into existence to provide content caching for the cable companies.

Well, it turns out that EDCs are also ideal for the deployment of enterprise clouds.  And it's here where the missions of traditional data centers and EDCs are now merging and competing for the same customers.

One of the key problems with cloud is latency.  If my company's mail and messaging is hosted in Hillsboro, outside of Portland, and I'm a salesperson on the road in the Northeast, I might discover in the morning that my contacts and calendar have completely disappeared.  And that's a big issue because I don't know where I'm going for my next meeting.

Now to solve this latency issue, the cloud companies want to put portions of their instance in cloud infrastructure as close as possible to where the people are using it.

Bottom line, there’s going to be a migration away from the traditional hubs to the clouds.  That also means the clouds need to be closer to the customers.  It is all about where the network access is as opposed to the geography of where those headquarter buildings are.  And this requirement aligns perfectly with the virtues of our Edge Data Centers.

Now, the cloud players used to have a 24-month window when deciding where to put their next cloud megasite.  But the cloud business has become so dynamic that the planning window is now down to only six months; life moves very quickly for them today.  Their infrastructure (the on-ramps, the ExpressRoutes, the Direct Connects in the buildings) needs to be in place as fast as possible.

Traditionally, peering across half a dozen sites around the country was networked in a hub-and-spoke model.

But now, because of what we have done, a distributed mesh of Internet peering is available.  This is far more convenient because traffic doesn’t have to go back to downtown Chicago, or Ashburn, or Santa Clara.  It can traverse wherever it needs to go and in a highly efficient fashion.

Then the cloud guys came to us and said: “We need to put larger nodes everywhere.” So now, instead of just building 2 to 8 megawatt sites, we now build 50 MW sites for them very quickly, faster than they can build themselves in Chicago, Dublin, Amsterdam, and in other places we can’t yet disclose.

So, to us, the edge is just a case of understanding what an individual is trying to get to — the network or solution providers they are using to get there, be it Microsoft Office 365 or whatever.  It all depends on how they are connecting to the Internet, and we want to facilitate the fastest route.  In fact, a lot of traffic is still to cell phone towers, which will move to small cells as soon as 5G is ratified.

Increasingly then, the edge is a moving target for us.  We can build EDCs at any size and scale, from small containers to large multi-MW facilities.  The point is to facilitate moving that 70% to 80% of the Internet traffic so it gets to where people can use it.

Along the way, our customers are focused on network on-ramps for content caches, transaction engines, and intelligent analytical instances for processing packets to and from IoT platforms and data lakes.

13. The Arrival of Internet Exchanges

One of the most fascinating trends today is the arrival of a new breed of Internet exchange companies who are making it far easier to bring cloud applications anywhere.

In traditional data centers, one of the biggest expenses is the cross-connect, the fiber that runs from your caged infrastructure to one of their patch panels or switches.  It's a systemic problem: there's a monthly recurring fee you have to pay whether you use it or not.  And it's pretty expensive: $300 a month, every month, for a piece of fiber a guy on a ladder ran for $100, and now you're going to pay for it forever.

But at EdgeConneX we do things differently.  We have one structured fiber path to our Internet Exchange partners' infrastructure, and through software you can say, "I need 100% of my traffic to go to Microsoft," or change that on the fly to 10% of my traffic to Microsoft, 20% to Oracle, and the rest to Google.  It just dynamically changes as you need it, all based on usage, and you only pay for what you use.
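
Conceptually, the software-defined model looks something like the sketch below.  This is a hypothetical illustration of the idea; the names and interface are invented for clarity and are not Megaport's, Console's, or PacketFabric's actual APIs.

```python
# A minimal sketch of the software-defined interconnect idea described above.
# Hypothetical illustration: the class and field names are invented and do
# not represent any exchange provider's actual API.

from dataclasses import dataclass
from typing import List

@dataclass
class Allocation:
    destination: str  # cloud or SaaS endpoint reachable over the exchange fabric
    share: float      # fraction of the single physical fiber path

def apply_plan(allocations: List[Allocation]) -> None:
    """Validate and 'apply' a traffic plan; billing would follow actual usage."""
    total = sum(a.share for a in allocations)
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"shares must sum to 100%, got {total:.0%}")
    for a in allocations:
        print(f"routing {a.share:.0%} of traffic to {a.destination}")

# Morning: send everything to Microsoft.
apply_plan([Allocation("Microsoft", 1.0)])

# Afternoon: re-slice the same physical port on the fly.
apply_plan([Allocation("Microsoft", 0.10), Allocation("Oracle", 0.20),
            Allocation("Google", 0.70)])
```

The physical layer never changes: one fiber path is provisioned once, and everything after that is a software policy change billed on usage.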

Plus, the traffic all goes through a private network, keeping it off the public Internet and making it much harder to hack or attack.

This development has greatly increased the versatility of going to the cloud.  Companies driving this trend include Megaport, Console, and PacketFabric.  We are proud to have all three of them as partners within our infrastructure.

14. Expansion in Europe

The latest development for EdgeConneX is building out in Europe.

Now at EdgeConneX we never build speculatively: we only build with an active tenant.  So, we manage our capital very specifically, and the whole point is to ensure there's a big pool of tenants to build an ecosystem around.

Remember, we don’t sell to the enterprise: we are a wholesaler who sells to the people who sell to the enterprise.  So, if we build a cloud or an on-ramp or something, we want to make it easy for the people who sell to the enterprise to take space with us — to be close to that on-ramp and be able to service the market.

15. The Future of Data Centers

Looking ahead, we know that traditional data centers will always be there.  But due to the explosion in broadband and the need to be closer to customers, traffic patterns will shift.  The tremendous growth the large colo data centers have seen over the past four or five years will not continue at the same exponential pace.

We know that EdgeConneX was fortunate to be at the right place at the right time to become the focal point for cable company investment in distribution.  Of course, we took some gambles at the time that paid off wonderfully.

In fact, during the last six months, we deployed more power than any other data center company on the planet.  Ironically, we are the fastest growing data center company that no one has ever heard of.  And that’s partly because we built out our initial U.S. presence in just under two years in stealth mode.

And we stayed quiet because once we go into a market, we own that market by becoming the headend, the closest point to all the people.  This is not rocket science.  We simply understand how the Internet works, and we sited our buildings appropriately.  Cloud deployments are the biggest thing enabling Edge Data Centers today, and it's truly exciting.

Meanwhile, the Internet of Things (IoT) has not taken off quite yet.  When that happens, it will entail a very different type of traffic flow.  I don’t think anyone has quite yet predicted how that is going to go, but it will certainly entail distribution change.

For example, you're not going to have a temperature sensor transmit the same temperature every minute.  It will be designed to send data only when something new happens.  And when something does happen, the traffic will leave our building and go alert somebody to the fact.
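
In code, that send-on-change (report-by-exception) behavior is simple.  The sketch below is a minimal illustration; the transmit() stub and the threshold value are assumptions, not a real IoT platform interface.

```python
# A minimal sketch of the send-on-change (report-by-exception) pattern
# described above. The transmit() stub and the 0.5-degree threshold are
# illustrative assumptions, not a real IoT platform API.

def transmit(reading: float) -> None:
    print(f"alert: temperature is now {reading:.1f}")  # stand-in for a real uplink

def make_reporter(threshold: float = 0.5):
    """Return a reporter that transmits only when the reading moves more than
    `threshold` away from the last value that was actually sent."""
    last_sent = None

    def report(reading: float) -> bool:
        nonlocal last_sent
        if last_sent is None or abs(reading - last_sent) >= threshold:
            transmit(reading)
            last_sent = reading
            return True
        return False  # suppressed: nothing new happened

    return report

report = make_reporter()
for temp in [21.0, 21.1, 21.2, 23.5, 23.4]:
    report(temp)  # only 21.0 and 23.5 generate traffic
```

The effect on traffic shape is exactly what's described above: long silences punctuated by bursts when something actually changes.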

But in one sense, IoT is no different from the many other traffic distribution challenges we just discussed.  The secret to the wholesale business, then, is watching where technology and distribution are going and then smartly steering yourself into the fast-moving stream of change and opportunity.

16. The Founders of the Edge

One thing we probably don't talk about enough is that after our first year operating in stealth mode, we felt confident that we could radically change the complete landscape of the Internet and data center marketplace.  So, we tasked our marketing and public relations partner iMPR with making the "Edge" the sexiest thing on the planet.  Four years later, I think you would agree, they more than achieved that.

Copyright 2018 Top Operator Journal

 

About the Experts

Phill Lawson-Shanks

Phill Lawson-Shanks, Chief Innovation Officer at EdgeConneX, has been at the forefront of designing and deploying industry-leading solutions in both the UK and the US for over 25 years.

Prior to joining EdgeConneX, Phill was CTO of Virtacore, responsible for data center build-out, network core architecture, and the systems to accommodate the migration and consolidation of thousands of clients and their infrastructure.

Phill's team also designed and deployed the world's largest VMware-based public cloud.  At Alcatel-Lucent, Phill served as Chief Strategy Officer, where he led their effective transition to the cloud, virtualizing many of their programs and creating meaningful time and cost savings.

Phill joined SAVVIS as VP and GM of Hosting to re-establish the Managed Hosting business as well as introduce the first ‘virtual server’ product to the market.  As Founder, VP and GM of Digital Media at MCI (now Verizon Digital Media Services), Phill created the division and received several related patents in the process.

Phill holds 8 active technology patents secured worldwide, including 4 in the U.S. and 4 in Europe.
