Sep 15, 2018

Why the Pentagon’s $10 billion JEDI deal has cloud companies going nuts

By now you’ve probably heard of the Defense Department’s massive winner-take-all $10 billion cloud contract dubbed the Joint Enterprise Defense Infrastructure (or JEDI for short).
Star Wars references aside, this contract is huge, even by government standards. The Pentagon would like a single cloud vendor to build out its enterprise cloud, believing rightly or wrongly that this is the best approach to maintain focus and control of its cloud strategy.

Department of Defense (DOD) spokesperson Heather Babb tells TechCrunch the department sees a lot of upside by going this route. “Single award is advantageous because, among other things, it improves security, improves data accessibility and simplifies the Department’s ability to adopt and use cloud services,” she said.

Whichever company it chooses to fill this contract, this is about modernizing the department’s computing infrastructure and its combat forces for a world of IoT, artificial intelligence and big data analysis, while consolidating some of its older infrastructure. “The DOD Cloud Initiative is part of a much larger effort to modernize the Department’s information technology enterprise. The foundation of this effort is rationalizing the number of networks, data centers and clouds that currently exist in the Department,” Babb said.

Setting the stage

It’s possible that whoever wins this DOD contract could have a leg up on other similar projects in the government. After all, it’s not easy to pass muster on security and reliability with the military, and if one company can prove it is capable in that regard, it could be set up well beyond this one deal.

As Babb explains it though, it’s really about figuring out the cloud long-term. “JEDI Cloud is a pathfinder effort to help DOD learn how to put in place an enterprise cloud solution and a critical first step that enables data-driven decision making and allows DOD to take full advantage of applications and data resources,” she said.


The single-vendor component, however, could explain why the various cloud vendors bidding on the deal have lost their minds a bit over it — everyone except Amazon, that is, which has been mostly silent, apparently happy to let the process play out.

The belief among the various other players is that Amazon is in the driver’s seat for this bid, possibly because it won a $600 million cloud contract for the government in 2013, standing up a private cloud for the CIA. It was a big deal back in the day on a couple of levels. First of all, it was the first large-scale example of an intelligence agency using a public cloud provider. And of course the amount of money was pretty impressive for the time: not $10 billion impressive, but a nice contract.

For what it’s worth, Babb dismisses such talk, saying that the process is open and no vendor has an advantage. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said.

Complaining loudly

As the Pentagon moves toward selecting its primary cloud vendor for the next decade, Oracle in particular has been complaining to anyone who will listen that Amazon has an unfair advantage in the deal, going so far as to file a formal complaint last month, even before bids were in and long before the Pentagon made its choice.

Photo: mrdoomits for Getty Images (cropped)

Somewhat ironically, given its own past business model, Oracle complained, among other things, that the deal would lock the department into a single platform over the long term. It also questioned whether the bidding process adhered to procurement regulations for this kind of deal, according to a report in the Washington Post. In April, Bloomberg reported that co-CEO Safra Catz complained directly to the president that the deal was tailor-made for Amazon.

Microsoft hasn’t been happy about the one-vendor idea either, pointing out that by limiting itself to a single vendor, the Pentagon could be missing out on innovation from the other companies in a cloud market where the players constantly leapfrog one another, especially when we’re talking about a contract that stretches out for so long.

As Microsoft’s Leigh Madden told TechCrunch in April, the company is prepared to compete, but doesn’t necessarily see a single vendor approach as the best way to go. “If the DOD goes with a single award path, we are in it to win, but having said that, it’s counter to what we are seeing across the globe where 80 percent of customers are adopting a multi-cloud solution,” he said at the time.

He has a valid point, but the Pentagon seems hell-bent on going forward with the single-vendor idea, even though the cloud offers much greater interoperability than the proprietary stacks of the 1990s (of which Oracle and Microsoft were prime examples at the time).

Microsoft has its own large DOD contract in place for almost a billion dollars, although this deal from 2016 was for Windows 10 and related hardware for DOD employees, rather than a pure cloud contract like Amazon has with the CIA.

Microsoft also recently released Azure Stack for government, a product that lets government customers install a private version of Azure with all the same tools and technologies found in the public version, and one that could prove attractive as part of its JEDI bid.

Cloud market dynamics

It’s also possible that Amazon’s control of the largest chunk of the cloud infrastructure market comes into play here at some level. While Microsoft has been gaining fast, it still has roughly a third of Amazon’s share, as Synergy Research’s Q4 2017 data clearly shows.

The market hasn’t shifted dramatically since this data came out. While market share alone wouldn’t be a deciding factor, Amazon came to market first and is much bigger in terms of market share than the next four providers combined, according to Synergy. That could explain why the other players are lobbying so hard and see Amazon as the biggest threat here; it’s probably the biggest threat in almost every deal where they come up against one another, due to its sheer size.

Consider also that Oracle, which seems to be complaining the loudest, was rather late to the cloud after years of dismissing it. It could see JEDI as a chance to establish a foothold in government that it could use to build out its cloud business in the private sector too.

10 years might not be 10 years

It’s worth pointing out that the actual deal has the complexity and opt-out clauses of a sports contract, with only an initial two-year term guaranteed. A couple of three-year options follow, with a final two-year option closing things out. The idea is that if this turns out to be a bad bet, the Pentagon has several points at which it can back out.


In spite of the winner-take-all approach of JEDI, Babb indicated that the agency will continue to work with multiple cloud vendors no matter what happens. “DOD has and will continue to operate multiple clouds and the JEDI Cloud will be a key component of the department’s overall cloud strategy. The scale of our missions will require DOD to have multiple clouds from multiple vendors,” she said.

The DOD had been set to accept final bids in August, but extended the Request for Proposal deadline to October 9th. Unless the deadline gets extended again, we’re probably going to finally hear who the lucky company is sometime in the coming weeks, and chances are there is going to be a lot of whining and continued maneuvering from the losers when that happens.

Sep 5, 2018

Elastic’s IPO filing is here

Elastic, the provider of subscription-based data search software used by Dell, Netflix, The New York Times and others, has unveiled its IPO filing after confidentially submitting paperwork to the SEC in June. The company will be the latest in a line of enterprise SaaS businesses to hit the public markets in 2018.

Headquartered in Mountain View, Elastic plans to raise $100 million in its NYSE listing, though that’s likely a placeholder amount. The timing of the filing suggests the company will transition to the public markets this fall; we’ve reached out to the company for more details. 

Elastic will trade under the symbol ESTC.

The business is known for its core product, an open-source search tool called Elasticsearch. It also offers a range of analytics and visualization tools meant to help businesses organize large data sets, competing directly with companies like Splunk and even Amazon — a name it mentions 14 times in the filing.

“Amazon offers some of our open source features as part of its Amazon Web Services offering. As such, Amazon competes with us for potential customers, and while Amazon cannot provide our proprietary software, the pricing of Amazon’s offerings may limit our ability to adjust,” the company wrote in the filing, which also lists Endeca, FAST, Autonomy and several others as key competitors.

This is our first look at Elastic’s financials. The company brought in $159.9 million in revenue in the 12 months ended July 30, 2018, up more than 80 percent from $88.1 million the year prior. Losses are growing at about the same rate: Elastic reported a net loss of $18.5 million in the second quarter of 2018, up from $9.9 million in the same period in 2017.

Founded in 2012, the company has raised about $100 million in venture capital funding, garnering a $700 million valuation the last time it raised VC, all the way back in 2014. Its investors include Benchmark, NEA and Future Fund, which hold pre-IPO stakes of 17.8 percent, 10.2 percent and 8.2 percent, respectively.

A flurry of business software companies have opted to go public this year. Domo, a business analytics company based in Utah, went public in June, raising $193 million in the process. On top of that, subscription biller Zuora had a positive debut in April in what was a “clear sign post on the road to SaaS maturation,” according to TechCrunch’s Ron Miller. DocuSign and Smartsheet are other recent examples of high-profile, successful SaaS IPOs.

Aug 25, 2018

Amazon isn’t the only tech company getting tax breaks

Amazon has a big target on its back these days, and because of its size, scope and impact on local businesses, critics are right to look closely at the tax breaks and other subsidies it receives. There is nothing wrong with digging into these breaks to see whether they achieve the goals governments set in terms of net new jobs. But Amazon isn’t alone here by any means. Many states have a big tech subsidy story to tell, and it isn’t always a tale that ends well for the subsidizing government.

In fact, a recent study by the watchdog group Good Jobs First found that states are willing to throw millions at high-tech companies to lure them into building in their communities. The report cited three examples: Tesla’s $1.25 billion, 20-year deal to build a battery factory in Nevada; Foxconn’s $3 billion break to build a display factory in Wisconsin; and Apple’s data center deal in Iowa, which resulted in a $214 million tax break.

Good Jobs First executive director Greg LeRoy doesn’t think these subsidies are justifiable, arguing that they divert business development dollars from smaller businesses that tend to build more sustainable jobs in a community.

“The ‘lots of eggs in one basket’ strategy is especially ill-suited. But many public leaders haven’t switched gears yet, often putting taxpayers at great risk, especially because some tech companies have become very aggressive about demanding big tax breaks. Companies with famous names are even more irresistible to politicians who want to look active on jobs,” LeRoy and his colleague Maryann Feldman wrote in a Guardian commentary last month.

It doesn’t always work the way you hope

While these deals are designed to attract the company to an area and generate jobs, that doesn’t always happen. The Apple-Iowa deal, for example, involved 550 construction jobs to build the $1.3 billion state-of-the-art facility, but will ultimately generate only 50 full-time jobs. It’s worth noting that in this case, Apple further sweetened the pot by contributing “up to $100 million” to a local public improvement fund, according to information supplied by the company.

One thing many laypeople don’t realize, however, is that in spite of their size, cost and physical footprint, these mega data centers are highly automated and don’t require a whole lot of people to run. While Apple is giving back to the community around the data center, in the end, if the goal of the subsidy is permanent high-paying jobs, there aren’t very many involved in running a data center.

It’s not hard to find projects that didn’t work out. A $2 million tax subsidy deal between Massachusetts and Nortel Networks in 2008 to keep 2,200 jobs in place and add 800 more failed miserably. By 2010, there were just 145 jobs left at the facility, yet the tax incentive lasted another four years, according to a Boston.com report.

More recent deals come at a much higher price. The $3 billion Foxconn deal in Wisconsin was expected to generate 3,000 direct jobs (and another 22,000 related ones). That comes out to an estimated cost of between $15,000 and $19,000 per job annually, much higher than the typical cost of $2,457 per job, according to data in the New York Times.

Be careful what you wish for

Meanwhile, states are falling all over themselves, offering billions in subsidies to give Amazon whatever its little heart desires to land HQ2, which could generate up to 50,000 jobs over a decade if all goes according to plan. The question, as with the Foxconn deal, is whether the states can truly justify the cost per job and the impact on infrastructure and housing to make it worth it.

What’s more, how do you ensure that you get at least a modest return on that investment? In the case of the Nortel example in Massachusetts, shouldn’t the Commonwealth have protected itself against a catastrophic failure instead of continuing to give the tax break for years after it was clear Nortel wasn’t able to live up to its side of the agreement?

Not every deal needs to be a home run, but you want to at least ensure you get a decent number of net new jobs out of it, and that there is some fairness in the end, regardless of the outcome. States also need to figure out the impact of any subsidy on other economic development plans, and not simply fall for name recognition over common sense.

These are questions every state needs to consider as it pours money into these companies. It’s understandable, in a post-industrial America where many factory jobs have been automated away, that states want to lure high-paying, high-tech jobs to their communities, but it’s still incumbent upon officials to do due diligence on the total impact of the deal to be certain the cost is justified in the end.

Jul 31, 2018

The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open-source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open-source projects out there right now. It sits at the intersection of a number of industry trends, like containers, microservices and serverless computing, and makes it easier for enterprises to embrace them. Istio now has more than 200 contributors and the code has seen more than 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
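To make the canary idea concrete, here is a minimal sketch of what such a traffic-splitting rule might look like, assuming an Istio 1.0 mesh and a workload (hypothetically named "reviews") that already has v1 and v2 subsets defined in a DestinationRule. Istio rules are normally written as YAML and applied with kubectl; the Python below simply creates the same VirtualService object through the Kubernetes API.

```python
# A sketch of Istio-style canary routing, assuming the "reviews" service
# and its v1/v2 subsets already exist in the mesh (illustrative names).
from kubernetes import client, config

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews-canary"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                # 90 percent of traffic stays on the stable version...
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                # ...while 10 percent goes to the canary.
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

config.load_kube_config()  # reads the local kubeconfig
client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```

Shifting those weights over time (90/10, then 50/50, then 0/100) while watching the telemetry Istio collects is the canary rollout in a nutshell.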

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open-source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem a lot of businesses are facing today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and while we hadn’t yet declared 1.0, we heard a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open-source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open-source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

Jul 26, 2018

Amazon’s AWS continues to lead its performance highlights

Amazon Web Services continues to be the highlight of the company’s quarterly results, once again showing in the second quarter the kind of growth Amazon is looking for in a newer business — especially one with dramatically better margins than its core retail business.

Despite the company now running a grocery chain, its AWS division — which has an operating margin over 25 percent, compared to the tiny margins on retail — grew 49 percent in the quarter compared to last year’s second quarter. It’s also up 49 percent when comparing the most recent six months to the same period last year. AWS is now on a run rate well north of $10 billion annually, generating more than $6 billion in revenue in the second quarter this year. Meanwhile, Amazon’s retail operations generated nearly $47 billion in revenue with net income of just over $1.3 billion (unaudited). AWS generated $1.6 billion in operating income on its $6.1 billion in revenue.
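As a quick back-of-the-envelope check on those figures (and nothing more than that), the margins and run rate work out roughly like this:

```python
# Rough math on the numbers quoted above, in billions of dollars (unaudited).
# Note the article gives operating income for AWS but net income for retail,
# so the two margins aren't strictly comparable.
aws_revenue, aws_operating_income = 6.1, 1.6
retail_revenue, retail_net_income = 47.0, 1.3

print(f"AWS operating margin:    {aws_operating_income / aws_revenue:.1%}")  # ~26%
print(f"Retail net margin:       {retail_net_income / retail_revenue:.1%}")  # ~2.8%
print(f"AWS annualized run rate: ${aws_revenue * 4:.1f}B")                   # ~$24B
```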

So, in short, Amazon’s dramatically more efficient AWS business is the biggest contributor to its actual net income. The company reported earnings of $5.07 per share, compared to analyst estimates of around $2.50 per share, on revenue of $52.9 billion. That revenue number fell short of what investors were looking for, so the stock isn’t doing much in after-hours trading, and Amazon still remains in the race to become a company with a market cap of $1 trillion, alongside Google, Apple and Microsoft.

This isn’t extremely surprising, as Amazon was one of the original harbingers of the move to a cloud computing-focused world, and, as a result, Microsoft and Google are now chasing it to capture as much share as possible. While Microsoft doesn’t break out Azure, the company says it’s one of its fastest-growing businesses, and Google’s “other revenue” segment, which includes Google Cloud Platform, also continues to be one of its fastest-growing divisions. Running a bunch of servers that offer on-demand compute, it turns out, is a pretty efficient business, and one that helps offset the very slim margins Amazon has on the rest of its core business.

Jul 17, 2018

Walmart enlists Microsoft cloud in battle against Amazon

Once a seemingly unstoppable retail juggernaut, Walmart’s been scrambling to define itself digitally in this Amazon-defined era. This morning, the company announced that it’s struck a five-year deal with Microsoft, Amazon’s chief cloud competitor.

These sorts of partnerships are a regular occurrence for AWS — in fact, it announced one with Fortnite maker Epic Games just this morning. The companies involved tend to put on a big show in return for a discount on services, but Walmart and Microsoft are happily playing into the concept of teaming up to take on Amazon.

Microsoft’s certainly not making any bones about the competition. In an interview, Satya Nadella told The Wall Street Journal that the fight against Amazon “is absolutely core to this,” adding, “How do we get more leverage as two organizations that have depth and breadth and investment to be able to outrun our respective competition?”

Of course, neither Walmart nor Microsoft can be framed as an underdog in any respect, but Amazon’s stranglehold on online retail also can’t be overstated. Not even a massive outage at the height of Prime Day could do much to ruffle the company’s feathers.

Included in the deal are AI/ML technologies designed to help optimize the in-store experience — one of the key factors Walmart brings to the table in a battle against Amazon, which has mostly just dabbled in brick and mortar. For its part, Walmart has been testing cashier-less stores along the lines of Amazon’s, but the company has yet to officially unveil its plans in that space.

Jul 17, 2018

Standard Cognition raises another $5.5M to create a cashier-less checkout experience

As Amazon looks to increasingly expand its cashier-less grocery stores — called Amazon Go — across different regions, there’s at least one startup hoping to end up everywhere else beyond Amazon’s empire.

Standard Cognition aims to help businesses create that kind of checkout experience based on machine vision, using image recognition to figure out that a specific person is picking up and walking out the door with a bag of Cheetos. The company said it has raised an additional $5.5 million from CRV in what it’s calling a seed round extension. The play here, as with many startups, is to build something a massive company is going after — like image recognition for cashier-less checkout — for the long tail of businesses, rather than locking them into a single ecosystem.

Standard Cognition works with security cameras that have a bit more power than typical cameras to identify people who walk into a store. Those customers use an app, and the cameras identify everything they are carrying and bill them as they exit the store. The company has said it works to anonymize that data, so there isn’t the kind of product tracking that might chase you around the internet, as you might find on other platforms.

“The platform is built at this point – we are now focused on releasing the platform to each retail partner that signs on with us,” Michael Suswal, co-founder and COO, said. “Most of the surprises coming our way come from learning about how each retailer prefers to run their operations and store experiences. They are all a little different and require us to be flexible with how we deploy.”

It’s a toolkit that makes sense for both larger and smaller retailers, especially as the cost of cameras and other devices that can capture high-quality video or offer more processing power goes down over time. Baking that into smaller retailers or mom-and-pop stores could help them get more foot traffic, or make it easier to keep tabs on which inventory is most popular or selling out most quickly. It offers an added layer of data about how their stores work, which could become increasingly important as Amazon looks to take over the grocery experience with stores like Amazon Go and its massive acquisition of Whole Foods.

“While we save no personal data in the cloud, and the system is built for privacy (no facial recognition among other safety features that come with being a non-cloud solution), we do use the internet for a couple of things,” Suswal said. “One of those things is to update our models and push them fleet wide. This is not a data push. It is light and allows us to make updates to models and add new features. We refer to it as the Tesla model, inspired by the way a driver can have a new feature when they wake up in the morning. We are also able to offer cross-store analytics to the retailer using the cloud, but no personal data is ever stored there.”

Advances in machine learning — and the frameworks and hardware that support it — have made this kind of technology easier for smaller companies to build. Already there are other companies that look to be third-party providers for popular applications like voice recognition (think SoundHound) or machine vision (think Clarifai). All of them aim to be an option outside of whatever the larger companies offer, like Alexa. It also means there is probably going to be a land grab, and there will be other interpretations of what the cashier-less checkout experience looks like, but Standard Cognition is hoping it’ll be able to get into enough stores to be an actual challenger to Amazon Go.

May 31, 2018

AWS launches pay-per-session pricing for its QuickSight BI tool

Amazon QuickSight, the company’s business intelligence tool for AWS, launched back in 2015, but it’s hard to say how much impact the service has made in the highly competitive BI market. The company is far from giving up on this project, though, and today it’s introducing a new pay-per-session pricing plan for access to QuickSight dashboards that is surely meant to give it a bit of a lift in a market where Tableau and Microsoft’s Power BI have captured much of the mindshare.

Under the new pricing plan, creating and publishing dashboards will still cost $18 per user per month. For readers who only need access to those dashboards, though, AWS now offers a very simple option: they pay $0.30 per session, up to a maximum of $5 per user per month. Under this scheme, a session is defined as the first 30 minutes from login.
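For a sense of how the reader math works out, here is a small sketch of the per-session pricing as described above; the break-even point is an inference from those numbers, not an AWS-published figure.

```python
# Reader pricing as described: $0.30 per session, capped at $5 per reader
# per month, where a session is the first 30 minutes after login.
def monthly_reader_cost(sessions: int) -> float:
    """Cost for one reader in one month under pay-per-session pricing."""
    return min(sessions * 0.30, 5.00)

# An occasional reader with 8 sessions pays $2.40; from the 17th session
# onward the $5 cap applies, so heavy readers never pay more than $5.
for sessions in (8, 16, 17, 40):
    print(sessions, round(monthly_reader_cost(sessions), 2))
```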

Previously, AWS offered two tiers of QuickSight plans: a $9 per user/month standard plan and a $24/user/month enterprise edition with support for Active Directory and encryption at rest.

That $9/user/month is still available and probably still makes sense for smaller companies where those who build dashboards and consume them are often the same person. The new pricing plan replaces the existing enterprise edition.

QuickSight already significantly undercuts the pricing of services like Tableau and others, though we’re also talking about a somewhat more limited feature set. This new pay-per-session offering only widens the pricing gap.

“With highly scalable object storage in Amazon Simple Storage Service (Amazon S3), data warehousing at one-tenth the cost of traditional solutions in Amazon Redshift, and serverless analytics offered by Amazon Athena, customers are moving data into AWS at an unprecedented pace,” said Dorothy Nicholls, Vice President of Amazon QuickSight at AWS, in a canned comment. “What’s changed is that virtually all knowledge workers want easy access to that data and the insights that can be derived. It’s been cost-prohibitive to enable that access for entire companies until the Amazon QuickSight pay-per-session pricing — this is a game-changer in terms of information and analytics access.”

Current QuickSight users include the NFL, Siemens, Volvo and AutoTrader.

May 14, 2018

AWS introduces 1-click Lambda functions app for IoT

When Amazon introduced AWS Lambda in 2015, the notion of serverless computing was relatively unknown. It enables developers to deliver software without having to manage a server to do it. Instead, Amazon manages it all and the underlying infrastructure only comes into play when an event triggers a requirement. Today, the company released an app in the iOS App Store called AWS IoT 1-Click to bring that notion a step further.

The 1-click part of the name may be a bit optimistic, but the app is designed to give developers even quicker access to Lambda event triggers. These are designed specifically for simple single-purpose devices like a badge reader or a button. When you press the button, you could be connected to customer service or maintenance or whatever makes sense for the given scenario.

One particularly good example from Amazon is the Dash Button. These are simple buttons that users push to reorder goods like laundry detergent or toilet paper. Pushing the button connects the device to the internet via the home or business’s WiFi and sends a signal to the vendor to order the product in the pre-configured amount. AWS IoT 1-Click extends this capability to any developer, so long as they are using a supported device.

To use the new feature, you need to enter your existing account information. You configure your WiFi, and you can choose from a pre-configured list of devices and Lambda functions for the given device. Supported devices in this early release include the AWS IoT Enterprise Button, a commercialized version of the Dash Button, and the AT&T LTE-M Button.

Once you select a device, you define the project to trigger a Lambda function, or send an SMS or email, as you prefer. Choose Lambda for an event trigger, then touch Next to move to the configuration screen where you configure the trigger action. For instance, if pushing the button triggers a call to IT from the conference room, the trigger would send a page to IT that there was a call for help in the given conference room.

Finally, choose the appropriate Lambda function, which should work correctly based on your configuration information.
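As a rough illustration of that conference-room scenario, here is a minimal sketch (not AWS’s reference implementation) of a Lambda handler such a button might trigger; it pages IT through an SNS topic. The event field names and the "conferenceRoom" placement attribute are assumptions for illustration, and the topic ARN comes from an environment variable you would set on the function.

```python
# Hypothetical handler for a 1-Click button: read the click event and
# publish a help request to an SNS topic that pages the IT team.
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["IT_PAGER_TOPIC_ARN"]  # set on the Lambda function


def lambda_handler(event, context):
    # Parse defensively in case the event shape differs from what we assume.
    click_type = (event.get("deviceEvent", {})
                       .get("buttonClicked", {})
                       .get("clickType", "UNKNOWN"))
    room = (event.get("placementInfo", {})
                 .get("attributes", {})
                 .get("conferenceRoom", "an unknown room"))

    message = f"Help requested ({click_type} press) in {room}."
    sns.publish(TopicArn=TOPIC_ARN,
                Subject="Conference room assistance",
                Message=message)
    return {"statusCode": 200, "body": json.dumps({"sent": message})}
```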

All of this obviously requires more than one click and probably involves some testing and reconfiguring to make sure you’ve entered everything correctly, but the idea of having an app to create simple Lambda functions could help people with non-programming backgrounds configure buttons with simple functions, given some training on the configuration process.

It’s worth noting that the service is still in Preview, so you can download the app today, but you have to apply to participate at this time.

Apr 11, 2018

With Fargate, AWS wants to make containers more cloud native

At its re:Invent developer conference, AWS made so many announcements that even some of the company’s biggest launches only got a small amount of attention. While the company’s long-awaited Elastic Container Service for Kubernetes got quite a bit of press, the launch of the far more novel Fargate container service stayed under the radar.

When I talked to him earlier this week, AWS VP and Amazon CTO (and EDM enthusiast) Werner Vogels admitted as much. “I think some of the Fargate stuff got a bit lost in all the other announcements that there were,” he told me. “I think it is a major step forward in making containers more cloud native and we see quite a few of our customers jumping on board with Fargate.”

Fargate, if you haven’t followed along, is a technology for AWS’ Elastic Container Service (ECS) and Elastic Container Service for Kubernetes (EKS) that abstracts away all of the underlying infrastructure for running containers. You pick your container orchestration engine and the service does the rest. There’s no need to manage individual servers or clusters. Instead, you simply tell ECS or EKS that you want to launch a container with Fargate, define the CPU and memory requirements of your application and let the service handle the rest.
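For a sense of what that looks like in practice, here is a rough sketch using boto3 (the AWS SDK for Python): register a task definition that states only CPU and memory, then ask ECS to run it with the Fargate launch type. The cluster name, subnet and container image are placeholders, and a real deployment would typically also need an execution role and security groups.

```python
# A sketch of the Fargate workflow: describe the container and its resource
# needs, then let ECS run it without provisioning any servers.
import boto3

ecs = boto3.client("ecs")

# 1. Register a task definition -- CPU and memory, but no instances.
task_def = ecs.register_task_definition(
    family="web-demo",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",          # required for Fargate tasks
    cpu="256",                     # 0.25 vCPU
    memory="512",                  # 512 MiB
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "portMappings": [{"containerPort": 80}],
    }],
)

# 2. Ask ECS to run it on Fargate; AWS provisions the capacity behind the scenes.
ecs.run_task(
    cluster="demo-cluster",        # placeholder cluster name
    launchType="FARGATE",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```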

To Vogels, who also published a longer blog post on Fargate today, the service is part of the company’s mission to help developers focus on their applications — and not the infrastructure. “I always compare it a bit to the early days of cloud,” said Vogels. “Before we had AWS, there were only virtual machines. And many companies build successful businesses around it. But when you run virtual machines, you still have to manage the hardware. […] One of the things that happened when we introduced EC2 [the core AWS cloud computing service] in the early days, was sort of that it decoupled things from the hardware. […] I think that tremendously improved developer productivity.”

But even with the early containers tools, if you wanted to run them directly on AWS or even in ECS, you still had to do a lot of work that had little to do with actually running the containers. “Basically, it’s the same story,” Vogels said. “VMs became the hardware for the containers. And a significant amount of work for developers went into that orchestration piece.”

What Amazon’s customers wanted, however, was to be able to focus on running their containers — not what Vogels called the “hands-on hardware-type of management.” “That was so pre-cloud,” he added, and in his blog post today he also notes that “container orchestration has always seemed to me to be very not cloud native.”

In Vogels’ view, it seems, if you are still worried about infrastructure, you’re not really cloud native. He also noted that the original promise of AWS was that AWS would worry about running the infrastructure while developers got to focus on what mattered for their businesses. It’s services like Fargate and maybe also Lambda that take this overall philosophy the furthest.

Even with a container service like ECS or EKS, though, the clusters still don’t run completely automatically and you still end up provisioning capacity that you don’t need all the time. The promise of Fargate is that it will auto-scale for you and that you only pay for the capacity you actually need.

“Our customers, they just want to build software, they just want to build their applications. They don’t want to be bothered with how to exactly map this container down to that particular virtual machine — which is what they had to do,” Vogels said. “With Fargate, you select the type of CPUs you want to use for a particular task and it will autoscale this for you. Meaning that you actually only have to pay for the capacity you use.”

When it comes to abstracting away infrastructure, though, Fargate does this for containers, but it’s worth noting that a serverless product like AWS Lambda takes it even further. For Vogels, this is a continuum, driven by customer demand. While AWS is clearly placing big bets on containers, he is also quite realistic about the fact that many companies will continue to use virtual machines for the foreseeable future. “VMs won’t go away,” he said.

With a serverless product like Lambda, you don’t even think about the infrastructure at all anymore, not even containers — you get to fully focus on the code and only pay for the execution of that code. And while Vogels sees the landscape of VMs, containers and serverless as a continuum, where customers move from one to the next, he also noted that AWS is seeing enterprises that are skipping over the container step and going all in on serverless right away.
