Sep
05
2021
--

The time Animoto almost brought AWS to its knees

Today, Amazon Web Services is a mainstay of the cloud infrastructure services market, a $60 billion juggernaut of a business. But in 2008, it was still new, working to keep its head above water and handle growing demand for its cloud servers. The company had launched Amazon EC2 in beta only two years earlier, 15 years ago last week. From that point forward, AWS offered startups effectively unlimited compute power, a primary selling point at the time.

EC2 was one of the first real attempts to sell elastic computing at scale — that is, server resources that would scale up as you needed them and go away when you didn’t. As Jeff Bezos said in an early sales presentation to startups back in 2008, “you want to be prepared for lightning to strike, […] because if you’re not that will really generate a big regret. If lightning strikes, and you weren’t ready for it, that’s kind of hard to live with. At the same time you don’t want to prepare your physical infrastructure, to kind of hubris levels either in case that lightning doesn’t strike. So, [AWS] kind of helps with that tough situation.”

An early test of that value proposition came in 2008, when one of its startup customers, Animoto, scaled from 25,000 to 250,000 users over a four-day period shortly after launching its Facebook app at South by Southwest.

At the time, Animoto was an app aimed at consumers that allowed users to upload photos and turn them into a video with a backing music track. While that product may sound tame today, it was state of the art back in those days, and it used up a fair amount of computing resources to build each video. It was an early representation of not only Web 2.0 user-generated content, but also the marriage of mobile computing with the cloud, something we take for granted today.

For Animoto, launched in 2006, choosing AWS was a risky proposition, but the company found that trying to run its own infrastructure was an even bigger gamble because of the dynamic nature of demand for its service. Spinning up its own servers would have involved huge capital expenditures. Because it started building before attracting any funding, Animoto initially went that route before turning its attention to AWS, explained Brad Jefferson, the company’s co-founder and CEO.

“We started building our own servers, thinking that we had to prove out the concept with something. And as we started to do that and got more traction from a proof-of-concept perspective and started to let certain people use the product, we took a step back, and were like, well it’s easy to prepare for failure, but what we need [is] to prepare for success,” Jefferson told me.

Going with AWS may seem like an easy decision knowing what we know today, but in 2007 the company was really putting its fate in the hands of a mostly unproven concept.

“It’s pretty interesting just to see how far AWS has gone and EC2 has come, but back then it really was a gamble. I mean we were talking to an e-commerce company [about running our infrastructure]. And they’re trying to convince us that they’re going to have these servers and it’s going to be fully dynamic and so it was pretty [risky]. Now in hindsight, it seems obvious but it was a risk for a company like us to bet on them back then,” Jefferson told me.

Animoto had to not only trust that AWS could do what it claimed, but also had to spend six months rearchitecting its software to run on Amazon’s cloud. But as Jefferson crunched the numbers, the choice made sense. At the time, Animoto’s business model was free for a 30-second video, $5 for a longer clip or $30 for a year. As he tried to model the level of resources his company would need to make that model work, it got really difficult, so he and his co-founders decided to bet on AWS and hope it held up if and when a surge of usage arrived.

That test came the following year at South by Southwest when the company launched a Facebook app, which led to a surge in demand, in turn pushing the limits of AWS’s capabilities at the time. A couple of weeks after the startup launched its new app, interest exploded and Amazon was left scrambling to find the appropriate resources to keep Animoto up and running.

Dave Brown, who today is Amazon’s VP of EC2 and was an engineer on the team back in 2008, said that “every [Animoto] video would initiate, utilize and terminate a separate EC2 instance. For the prior month they had been using between 50 and 100 instances [per day]. On Tuesday their usage peaked at around 400, Wednesday it was 900, and then 3,400 instances as of Friday morning.” Animoto was able to keep up with the surge of demand, and AWS was able to provide the necessary resources to do so. Its usage eventually peaked at 5,000 instances before it settled back down, proving in the process that elastic computing could actually work.
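
That launch-per-job pattern (start an instance, do the work, tear it down) is easy to picture in today’s terms. As a rough, hedged illustration only (Animoto’s actual 2008 stack isn’t described here, and the AMI ID and instance type below are placeholders), a minimal Python sketch using the current boto3 library might look like this:

# Minimal sketch of the launch-per-job pattern Brown describes: one EC2
# instance per video, created for the job and terminated when it finishes.
# The AMI ID and instance type are placeholders, not Animoto's real setup.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def render_video_on_fresh_instance(ami_id="ami-0123456789abcdef0"):
    # 1. Initiate: launch a single instance for this one job.
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="c5.large",  # placeholder size for the render workload
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    # 2. Utilize: wait until the instance is running, then hand it the job
    #    (a real system would pull work from a queue and run the render here).
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # 3. Terminate: give the capacity back as soon as the job is done.
    ec2.terminate_instances(InstanceIds=[instance_id])
    return instance_id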

At that point, though, Jefferson said his company wasn’t merely trusting EC2’s marketing. It was on the phone regularly with AWS executives, making sure the service wouldn’t collapse under the increasing demand. “And the biggest thing was, can you get us more servers, we need more servers. To their credit, I don’t know how they did it — if they took away processing power from their own website or others — but they were able to get us where we needed to be. And then we were able to get through that spike and then sort of things naturally calmed down,” he said.

The story of keeping Animoto online became a main selling point for AWS, and Amazon was actually the first company besides friends and family to invest in the startup. Animoto raised a total of $30 million along the way, with its last funding coming in 2011. Today, the company is more of a B2B operation, helping marketing departments easily create videos.

While Jefferson didn’t discuss specifics concerning costs, he pointed out that the price of trying to maintain servers that would sit dormant much of the time was not a tenable approach for his company. Cloud computing turned out to be the perfect model and Jefferson says that his company is still an AWS customer to this day.

While the goal of cloud computing has always been to provide as much computing as you need on demand whenever you need it, this particular set of circumstances put that notion to the test in a big way.

Today the idea of having trouble generating 3,400 instances seems quaint, especially when you consider that Amazon processes 60 million instances every day now, but back then it was a huge challenge and helped show startups that the idea of elastic computing was more than theory.

Aug
28
2021
--

How Amazon EC2 grew from a notion into a foundational element of cloud computing

Fifteen years ago this week, on August 25, 2006, AWS turned on the very first beta instance of EC2, its cloud-based virtual computers. Today cloud computing, and more specifically infrastructure as a service, is a staple of how businesses use computing, but at that moment it wasn’t a well-known or widely understood concept.

The EC in EC2 stands for Elastic Compute, and that name was chosen deliberately. The idea was to provide as much compute power as you needed to do a job, then shut it down when you no longer needed it — making it flexible like an elastic band. The launch of EC2 in beta was preceded by the beta release of S3 storage six months earlier, and both services marked the starting point in AWS’ cloud infrastructure journey.

You really can’t overstate what Amazon was able to accomplish with these moves. It was able to anticipate an entirely different way of computing and create a market and a substantial side business in the process. It took vision to recognize what was coming and the courage to forge ahead and invest the resources necessary to make it happen, something that every business could learn from.

The AWS origin story is complex, but at its core it was about bringing the IT power of the Amazon business to others. Amazon at the time was not the business it is today, but it was still rather substantial and had to deal with massive fluctuations in traffic, such as Black Friday, when its website would be flooded for a short but sustained period. While the goal of an e-commerce site, and indeed every business, is to attract as many customers as possible, keeping the site up under that kind of stress takes some doing, and Amazon was learning how to do it well.

Those lessons and a desire to bring the company’s internal development processes under control would eventually lead to what we know today as Amazon Web Services, and that side business would help fuel a whole generation of startups. We spoke to Dave Brown, who is VP of EC2 today, and who helped build the first versions of the tech, to find out how this technological shift went down.

Sometimes you get a great notion

The idea behind AWS dates to around 2000, when the company began looking at creating a set of services to simplify how it produced software internally. Eventually, it developed a set of foundational services — compute, storage and database — that every internal developer could tap into.

But the idea of selling that set of services really began to take shape at an executive offsite at Jeff Bezos’ house in 2003. A 2016 TechCrunch article on the origins of AWS described how that started to come together:

As the team worked, Jassy recalled, they realized they had also become quite good at running infrastructure services like compute, storage and database (due to those previously articulated internal requirements). What’s more, they had become highly skilled at running reliable, scalable, cost-effective data centers out of need. As a low-margin business like Amazon, they had to be as lean and efficient as possible.

They realized that those skills and abilities could translate into a side business that would eventually become AWS. It would take a while to put these initial ideas into action, but by December 2004, the company had opened an engineering office in South Africa to begin building what would become EC2. As Brown explains it, Amazon was looking to expand outside of Seattle at the time, and Chris Pinkham, who was a director at the company in those days, hailed from South Africa and wanted to return home.

Feb
14
2019
--

AWS announces new bare metal instances for companies that want more cloud control

When you think about Infrastructure as a Service, you typically pay for a virtual machine that resides in a multi-tenant environment. That means it’s using a set of shared resources. For many companies that approach is fine, but when a customer wants more control, they may prefer a single-tenant system where they control the entire set of hardware resources. This approach is also known as “bare metal” in the industry, and today AWS announced five new bare metal instances.

You end up paying more for this kind of service because you are getting more control over the processor, storage and other resources on your own dedicated underlying server. This is part of the range of products that all cloud vendors offer: you can have a vanilla virtual machine with very little control over the hardware, or you can go with bare metal and get much finer-grained control over the underlying hardware, something that some companies require if they are going to move certain workloads to the cloud.

As AWS describes it in the blog post announcing these new instances, these are for highly specific use cases. “Bare metal instances allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads not supported in virtual environments, and licensing-restricted Tier 1 business critical applications,” the company explained.

The five new products, called m5.metal, m5d.metal, r5.metal, r5d.metal and z1d.metal (catchy names there, Amazon), offer a variety of resources:

Chart courtesy of Amazon

These new offerings are available starting today as on-demand, reserved or spot instances, depending on your requirements.
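
To make those purchasing options concrete, here is a hedged sketch (not AWS’s documentation) of requesting one of the new types with the boto3 Python library; the AMI ID is a placeholder, and reserved capacity is a billing arrangement bought separately rather than a flag at launch, so it doesn’t appear in the call:

# Sketch: launching one of the new bare metal types on demand, and the same
# request flagged for the spot market. The AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

# On-demand m5.metal instance (billed at the regular hourly rate).
on_demand = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.metal",
    MinCount=1,
    MaxCount=1,
)

# The same launch requested as a spot instance instead of on-demand.
spot = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.metal",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},
)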

Sep
11
2018
--

Packet hauls in $25M Series B as customized cloud vision takes shape

In a world where large hyperscale companies like Amazon, Microsoft and Google dominate the public cloud, it would seem foolhardy for a startup to try to carve out a space, but Packet has an alternative customized cloud vision, and investors have taken notice. Today, the company announced a $25 million Series B led by Third Point Ventures.

An interesting mix of strategic and traditional investors joined the round including Battery Ventures, JA Mitsui Leasing and Samsung Next. Existing investors SoftBank Corp. and Dell Technologies Capital also participated. The company has now raised over $36 million.

The company also showed some signs of maturing by bringing in Ihab Tarazi as CTO and George Karidis as COO. Tarazi, who came over from Equinix, likes what he sees in Packet.

He says Packet offers several advantages over the public providers. First of all, customers can buy whatever hardware they want. “We offer the most diverse hardware options,” he said. That means they could get servers equipped with Intel, ARM or AMD processors, or with specific Nvidia GPUs, in whatever configurations they want. By contrast, public cloud providers tend to offer a more off-the-shelf approach. It’s cheap and abundant, but you have to take what they offer, and that doesn’t always work for every customer.

Another advantage Packet brings to the table, according to Tarazi, is that it supports a range of open source software options, letting customers build whatever applications they want on top of that custom hardware.

They currently have 18 locations around the world, but Tarazi said they will soon add 50 more, bringing greater geographic diversity to the mix.

Finally, each customer gets their own bare metal offering, providing them with a single-tenant private option inside Packet’s data center. This gives them the advantages of a privately run data center, with Packet handling all of the management, configuration and upkeep.

Tarazi doesn’t see Packet competing directly with the hyperscale players. Instead, he believes there will be room for both approaches. “I think you have a combination of both happening where people are trying to take advantage of all these hardware options to optimize performance across specific applications,” he explained.

The company, which launched in 2014, currently has about 50 employees with headquarters in New York City and offices in Palo Alto. They are also planning on opening an operations center in Dallas soon. The number should swell to 100 employees over the next year as they expand operations.

Jun
07
2018
--

Google Cloud announces the Beta of single-tenant instances

One of the characteristics of cloud computing is that when you launch a virtual machine, it gets distributed wherever it makes the most sense for the cloud provider. That usually means sharing servers with other customers in what is known as a multi-tenant environment. But what about times when you want a physical server dedicated just to you?

To help meet those kinds of demands, Google announced the Beta of Google Compute Engine sole-tenant nodes, which have been designed for use cases such as regulatory or compliance requirements, where you need full control of the underlying physical machine and sharing is not desirable.

“Normally, VM instances run on physical hosts that may be shared by many customers. With sole-tenant nodes, you have the host all to yourself,” Google wrote in a blog post announcing the new offering.

Diagram: Google

Google has tried to be as flexible as possible, letting customers choose exactly what configuration they want in terms of CPU and memory. Customers can also let Google choose the dedicated server that’s best at any particular moment, or they can manually select the server if they want that level of control. In either case, they will be assigned a dedicated machine.

If you want to play with this, there is a free tier and then various pricing tiers for a variety of computing requirements. Regardless of your choice, you will be charged on a per-second basis with a one-minute minimum charge, according to Google.
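
As a quick illustration of that billing rule (a sketch based only on the per-second, one-minute-minimum description above; the hourly rate used here is a made-up placeholder, not a real Google price), the charge for a node can be estimated like this:

# Rough illustration of per-second billing with a one-minute minimum charge.
# The $1.00/hour rate is a placeholder, not an actual sole-tenant node price.
def estimated_charge(seconds_used: float, hourly_rate: float = 1.00) -> float:
    billable_seconds = max(seconds_used, 60)  # one-minute minimum
    return billable_seconds * (hourly_rate / 3600)

print(round(estimated_charge(45), 4))    # 45s is billed as 60s -> 0.0167
print(round(estimated_charge(3600), 4))  # a full hour -> 1.0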

Since this feature is still in Beta, it’s worth noting that it is not covered under any SLA. Microsoft and Amazon have similar offerings.

Nov
07
2017
--

New tools could help prevent Amazon S3 data leaks

If you do a search for Amazon S3 breaches due to customer error, such as leaving data unencrypted, you’ll see a long list that includes a DoD contractor, Verizon (the owner of this publication) and Accenture, among the more high-profile examples. Today, AWS announced a new set of five tools designed to protect customers from themselves and ensure (to the extent possible) that the data in S3… Read More

Oct
25
2017
--

Spotinst delivers spot cloud infrastructure services at discount prices

Cloud infrastructure vendors offer a range of prices for their services, including aggressively priced virtual machines available on the spot market on an as-available basis. The catch is that the vendor can shut down these VMs whenever it happens to need them. Spotinst, a Tel Aviv-based startup, developed a service to buy and distribute that available excess capacity. Today, the company… Read More

Mar
01
2017
--

The day Amazon S3 storage stood still

Jeff Bezos, CEO of Amazon. By now you’ve probably heard that Amazon’s S3 storage service went down in its Northern Virginia data center for the better part of four hours yesterday, taking parts of a bunch of prominent websites and services with it. It’s worth noting that as of this morning, the Amazon dashboard was showing everything operating normally. While yesterday’s outage was a big deal… Read More

Oct
14
2016
--

AWS gets richer with VMware partnership

VMware signed deals with Microsoft, Google and IBM earlier this year as it has shifted firmly to a hybrid cloud strategy, but it was the deal it signed with AWS this week that has had everybody talking. The cloud infrastructure market breaks down to AWS with around a third of the market — and everybody else. Microsoft is the closest competitor with around 10 percent. While VMware has… Read More
