Oct 09, 2020

How Roblox completely transformed its tech stack

Picture yourself in the role of CIO at Roblox in 2017.

At that point, the gaming platform and publishing system that launched in 2005 was growing fast, but its underlying technology was aging, consisting of a single data center in Chicago and a bunch of third-party partners, including AWS, all running bare metal (nonvirtualized) servers. At a time when users have precious little patience for outages, your uptime was just two nines, or roughly 99% (five nines, or 99.999%, is considered the gold standard).
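For context, each “nine” of availability maps directly to a downtime budget. Here is a quick sketch of the arithmetic (generic figures, not Roblox’s actual numbers):

```python
# Translate availability "nines" into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines, availability in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines ({availability:.3%}): "
          f"~{downtime_minutes:,.0f} minutes of downtime per year")
```

At two nines, a service can be dark for more than three and a half days a year; at five nines, barely five minutes.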

Remarkably, Roblox was popular in spite of this, but the company’s leadership knew it couldn’t keep scaling on performance like that. The company needed to call in the technology cavalry, which is essentially what it did when it hired Dan Williams in 2017.

Williams has a history of solving these kinds of intractable infrastructure issues, with a background that includes a gig at Facebook between 2007 and 2011, where he worked on the technology to help the young social network scale to millions of users. Later, he worked at Dropbox, where he helped build a new internal network, leading the company’s move away from AWS, a major undertaking involving moving more than 500 petabytes of data.

When Roblox approached him in mid-2017, he jumped at the chance to take on another major infrastructure challenge. While the company is still in the midst of the transition to a new, modern tech stack today, we sat down with Williams to learn how he put it on the road to a cloud-native, microservices-focused system with its own network of worldwide edge data centers.

Scoping the problem

Jan 14, 2020

Equinix is acquiring bare metal cloud provider Packet

Equinix announced today that it is acquiring bare metal cloud provider Packet, the New York City startup that had raised over $36 million on a $100 million valuation, according to PitchBook data.

Equinix operates data centers and co-location facilities around the world. Companies that want more control over their hardware can use its facilities, including space, power and cooling systems, instead of running their own data centers.

Equinix is getting a unique cloud infrastructure vendor in Packet, one that can provide more customized hardware configurations than you can get from mainstream infrastructure vendors like AWS and Azure. Packet COO George Karidis described what separated his company from the pack in a September 2018 TechCrunch article:

“We offer the most diverse hardware options,” he said. That means customers could get servers equipped with Intel, ARM or AMD processors, or with specific Nvidia GPUs, in whatever configurations they want. By contrast, public cloud providers tend to offer a more off-the-shelf approach. It’s cheap and abundant, but you have to take what they offer, and that doesn’t always work for every customer.

In a blog post announcing the deal, company co-founder and CEO Zachary Smith had a message for his customers, who may be worried about the change in ownership. “When the transaction closes later this quarter, Packet will continue operating as before: same team, same platform, same vision,” he wrote.

He also offered the standard value story for a deal like this, saying the company could scale much faster under Equinix than it could on its own, with access to its new parent’s massive resources, including more than 200 data centers in 55 markets and 1,800 networks.

Sara Baack, chief product officer at Equinix, said bringing the two companies together will provide a diverse set of bare metal options for customers moving forward. “Our combined strengths will further empower companies to be everywhere they need to be, to interconnect everyone and integrate everything that matters to their business,” she said in a statement.

While the companies did not share the purchase price, they did hint that they would have more details on the transaction after it closes, which is expected in the first quarter of this year.

Nov 20, 2019

Google Cloud launches Bare Metal Solution

Google Cloud today announced the launch of a new bare metal service, dubbed the Bare Metal Solution. We aren’t talking about bare metal servers offered directly by Google Cloud here, though. Instead, we’re talking about a solution that enterprises can use to run their specialized workloads on certified hardware that’s co-located in Google Cloud data centers and connected directly to Google Cloud’s suite of other services. The main workload that makes sense for this kind of setup is databases, Google notes, and specifically Oracle Database.

Bare Metal Solution is, as the name implies, a fully integrated and fully managed solution for setting up this kind of infrastructure. It includes completely managed hardware (the servers plus data center facilities like power and cooling), support contracts with Google Cloud, billing handled through Google’s systems, and an SLA. The software that’s deployed on those machines is managed by the customer, not Google.

The overall idea, though, is clearly to let enterprises whose specialized workloads can’t easily be migrated to the cloud still benefit from the cloud-based services that need access to the data in those systems. Machine learning is an obvious example, but Google also notes that this gives these companies a bridge for slowly modernizing their tech infrastructure in general (where ‘modernize’ tends to mean ‘move to the cloud’).

“These specialized workloads often require certified hardware and complicated licensing and support agreements,” Google writes. “This solution provides a path to modernize your application infrastructure landscape, while maintaining your existing investments and architecture. With Bare Metal Solution, you can bring your specialized workloads to Google Cloud, allowing you access and integration with GCP services with minimal latency.”

Since this service is co-located with Google Cloud, there are no separate ingress and egress charges for data that moves between Bare Metal Solution and Google Cloud in the same region.

The servers for this solution, which are certified to run a wide range of applications (including Oracle Database), range from dual-socket 16-core systems with 384 GB of RAM to quad-socket servers with 112 cores and 3,072 GB of RAM. Pricing is on a monthly basis, with a preferred term length of 36 months.

Obviously, this isn’t the kind of solution that you self-provision, so the only way to get started — and get pricing information — is to talk to Google’s sales team. But this is clearly the kind of service that we should expect from Google Cloud, which is heavily focused on providing as many enterprise-ready services as possible.

Apr 29, 2019

With Kata Containers and Zuul, OpenStack graduates its first infrastructure projects

Over the course of the last year and a half, the OpenStack Foundation made the switch from purely focusing on the core OpenStack project to opening itself up to other infrastructure-related projects as well. The first two of these projects, Kata Containers and the Zuul project gating system, have now exited their pilot phase and have become the first top-level Open Infrastructure Projects at the OpenStack Foundation.

The Foundation made the announcement at its Open Infrastructure Summit (previously known as the OpenStack Summit) in Denver today after the organization’s board voted to graduate them ahead of this week’s conference. “It’s an awesome milestone for the projects themselves,” OpenStack Foundation executive director Jonathan Bryce told me. “It’s a validation of the fact that in the last 18 months, they have created sustainable and productive communities.”

It’s also a milestone for the OpenStack Foundation itself, though, which is still in the process of reinventing itself in many ways. It can now point at two successful projects that are under its stewardship, which will surely help it as it goes out and tries to attract others who are looking to bring their open-source projects under the aegis of a foundation.

In addition to graduating these first two projects, Airship — a collection of open-source tools for provisioning private clouds that is currently a pilot project — hit version 1.0 today. “Airship originated within AT&T,” Bryce said. “They built it from their need to bring a bunch of open-source tools together to deliver on their use case. And that’s why, from the beginning, it’s been really well-aligned with what we would love to see more of in the open-source world and why we’ve been super excited to be able to support their efforts there.”

With Airship, developers use YAML documents to describe what the final environment should look like, and the result is a production-ready Kubernetes cluster deployed by the OpenStack-Helm tool — though without any other dependencies on OpenStack.
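To make the declarative pattern concrete, here is a toy sketch of such a document and a consumer of it. The YAML schema below is invented for illustration and is not Airship’s actual document format:

```python
# Toy illustration of Airship's declarative pattern: an operator writes a
# YAML document describing the desired end state, and tooling reconciles
# the real environment against it. This schema is invented, NOT Airship's.
import yaml  # pip install pyyaml

DESIRED_STATE = """
cluster:
  name: edge-site-01
  kubernetes_version: "1.14"
  nodes:
    - role: control-plane
      count: 3
    - role: worker
      count: 12
"""

spec = yaml.safe_load(DESIRED_STATE)
for group in spec["cluster"]["nodes"]:
    print(f'{group["count"]}x {group["role"]} nodes requested')
```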

AT&T’s assistant vice president, Network Cloud Software Engineering, Ryan van Wyk, told me that a lot of enterprises want to use certain open-source components, but that the interplay between them is often difficult and that while it’s relatively easy to manage the life cycle of a single tool, it’s hard to do so when you bring in multiple open-source tools, all with their own life cycles. “What we found over the last five years working in this space is that you can go and get all the different open-source solutions that you need,” he said. “But then the operator has to invest a lot of engineering time and build extensions and wrappers and perhaps some orchestration to manage the life cycle of the various pieces of software required to deliver the infrastructure.”

It’s worth noting that nothing about Airship is specific to telcos, though it’s no secret that OpenStack is quite popular in that world, and unsurprisingly the Foundation is using this week’s event to highlight the OpenStack project’s role in the upcoming 5G rollouts of various carriers.

In addition, the event will showcase OpenStack’s bare-metal capabilities, an area the project has also focused on in recent releases. Indeed, the Foundation today announced that its bare-metal tools now manage more than a million cores of compute. To codify these efforts, the Foundation also today launched the OpenStack Ironic Bare Metal program, which brings together some of the project’s biggest users, like Verizon Media (home of TechCrunch, though we don’t run on the Verizon cloud), 99Cloud, China Mobile, China Telecom, China Unicom, Mirantis, OVH, Red Hat, SUSE, Vexxhost and ZTE.
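For a sense of what managing bare metal through Ironic looks like from the client side, here is a minimal sketch using the openstacksdk library; the cloud name is a placeholder for an entry in your clouds.yaml:

```python
# Minimal sketch: list the bare metal nodes Ironic manages, via the
# openstacksdk client (pip install openstacksdk).
import openstack

# "my-cloud" is a placeholder for a cloud defined in clouds.yaml.
conn = openstack.connect(cloud="my-cloud")

for node in conn.baremetal.nodes():
    print(node.name, node.power_state, node.provision_state)
```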

Feb 14, 2019

AWS announces new bare metal instances for companies who want more cloud control

When you think about Infrastructure as a Service, you typically pay for a virtual machine that resides in a multi-tenant environment, meaning it uses a set of shared resources. For many companies that approach is fine, but when a customer wants more control, they may prefer a single-tenant system where they control the entire set of hardware resources. This approach is also known as “bare metal” in the industry, and today AWS announced five new bare metal instances.

You end up paying more for this kind of service because you are getting more control over the processor, storage and other resources on your own dedicated underlying server. This is part of the range of products that all cloud vendors offer: you can have a vanilla virtual machine with very little control over the hardware, or you can go with bare metal and get much finer-grained control over the underlying hardware, something companies require if they are going to move certain workloads to the cloud.

As AWS describes it in the blog post announcing these new instances, these are for highly specific use cases. “Bare metal instances allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads not supported in virtual environments, and licensing-restricted Tier 1 business critical applications,” the company explained.

The five new products, called m5.metal, m5d.metal, r5.metal, r5d.metal and z1d.metal (catchy names there, Amazon), offer a variety of resources:

[Chart courtesy of Amazon]

These new offerings are available starting today as on-demand, reserved or spot instances, depending on your requirements.
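Getting one of these on demand looks just like launching any other EC2 instance type. Here is a minimal sketch with boto3, assuming valid AWS credentials; the AMI ID is a placeholder:

```python
# Minimal sketch: launch one m5.metal instance on demand with boto3
# (pip install boto3). Substitute an AMI valid in your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="m5.metal",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```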

Apr 21, 2018

Full-Metal Packet is hosting the future of cloud infrastructure

Cloud computing has been a revolution for the data center. Rather than investing in expensive hardware and managing a data center directly, companies are relying on public cloud providers like AWS, Google Cloud, and Microsoft Azure to provide general-purpose and high-availability compute, storage, and networking resources in a highly flexible way.

Yet as workflows have moved to the cloud, companies are increasingly realizing that those abstracted resources can be enormously expensive compared to the hardware they used to own. Few companies want to go back to managing hardware directly themselves, but they also yearn to have the price-to-performance level they used to enjoy. Plus, they want to take advantage of a whole new ecosystem of customized and specialized hardware to process unique workflows — think Tensor Processing Units for machine learning applications.

That’s where Packet comes in. The New York City-based startup’s platform offers highly customizable infrastructure for running bare metal in the cloud. Rather than sharing an instance with other users, Packet’s customers “own” the hardware they select, so they can use all the resources of that hardware.

Even more interesting is that Packet will also deploy custom hardware to its data centers, which currently number eighteen around the world. So, for instance, if you want to deploy a quantum computing box redundantly in half of those centers, Packet will handle the logistics of installing those boxes, setting them up, and managing that infrastructure for you.
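Provisioning on Packet is API-driven like any other cloud. Here is a minimal sketch using the packet-python client; the token, project ID, plan slug and facility code are all placeholders:

```python
# Minimal sketch: provision a Packet bare metal server with the
# packet-python client (pip install packet-python).
import packet

manager = packet.Manager(auth_token="YOUR_API_TOKEN")  # placeholder token

device = manager.create_device(
    project_id="YOUR_PROJECT_ID",    # placeholder project
    hostname="bare-metal-01",
    plan="baremetal_0",              # placeholder plan slug
    facility="ewr1",                 # placeholder facility code
    operating_system="ubuntu_18_04",
)
print(device.id, device.state)
```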

The company was founded in 2014 by Zac Smith, Jacob Smith and Aaron Welch, and it has raised a total of $12 million in venture capital financing, according to Crunchbase, with its last round led by SoftBank. “I took the usual path, I went to Juilliard,” Zac Smith, who is CEO, said to me at his office, which overlooks the World Trade Center in downtown Manhattan. Double bass was a first love, but he eventually found his way into internet hosting, working as COO of New York-based Voxel.

At Voxel, Smith said that he grew up in hosting just as the cloud started taking off. “We saw this change in the user from essentially a sysadmin who cared about Tom’s Hardware, to a developer who had never opened a computer but who was suddenly orchestrating infrastructure,” he said.

Innovation is the lifeblood of developers, yet public clouds were increasingly abstracting away any details of the underlying infrastructure from them. Smith explained that “infrastructure was becoming increasingly proprietary, the land of few companies.” While he once thought about leaving the hosting world post-Voxel, he and his co-founders saw an opportunity to rethink cloud infrastructure from the metal up.

“Our customer is a millennial developer, 32 years old, and they have never opened an ATX case, and how could you possibly give them IT in the same way,” Smith asked. The idea of Packet was to bring back choice in infrastructure to these developers, while abstracting away the actual data center logistics that none of them wanted to work on. “You can choose your own opinion — we are hardware independent,” he said.

Giving developers more bare metal options is an interesting proposition, but it is Packet’s long-term vision that I think is most striking. In short, the company wants to completely change the model of hardware development worldwide.

VCs are increasingly investing in specialized chips and memory to handle unique processing loads, from machine learning to quantum computing applications. In some cases, these chips can process their workloads exponentially faster compared to general purpose chips, which at scale can save companies millions of dollars.

Packet’s mission is to encourage that ecosystem by essentially becoming a marketplace, connecting original equipment manufacturers with end-user developers. “We use the WeWork model a lot,” Smith said. What he means is that Packet lets you rent space in its global network of data centers and handles all the logistics of installing and monitoring hardware boxes, much as WeWork lets companies rent real estate while it handles minutiae like resetting the coffee filter.

In this vision, Packet would create more discerning and diverse buyers, allowing manufacturers to start targeting more specialized niches. Gone are the generic x86 processors from Intel driving nearly all cloud purchases, and in their place could be dozens of new hardware vendors who can build up their brands among developers and own segments of the compute and storage workload.

In this way, developers can hack their infrastructure much as an earlier generation may have tricked out their personal computer. They can now test new hardware more easily, and when they find a particular piece of hardware they like, they can get it running in the cloud in short order. Packet becomes not just the infrastructure operator — but the channel connecting buyers and sellers.

That’s Packet’s big vision. Realizing it will require that hardware manufacturers increasingly build differentiated chips. More importantly, companies will have to have unique workflows, be at a scale where optimizing those workflows is imperative, and realize that they can match those workflows to specific hardware to maximize their cost performance.

That may sound like a tall order, but Packet’s dream is to create exactly that kind of marketplace. If successful, it could transform how hardware and cloud vendors work together and, ultimately, unlock innovation for any 32-year-old millennial developer who doesn’t like plugging in a box but wants to plug into innovation.
