Jan 06, 2020

Despite JEDI loss, AWS retains dominant market position

AWS took a hard blow last year when it lost the $10 billion, decade-long JEDI cloud contract to rival Microsoft. Yet even without that mega deal for building out the nation’s Joint Enterprise Defense Infrastructure, the company remains firmly in control of the cloud infrastructure market (and it intends to fight the JEDI decision).

In fact, AWS still owns almost twice as much cloud infrastructure market share as Microsoft, its closest rival. While the two will battle over the next decade for big contracts like JEDI, for now, AWS doesn’t have much to worry about.

There was a lot more to AWS’s year than simply losing JEDI. Per usual, the company closed out the year with a flurry of announcements and enhancements to its vast product set. Among the more interesting moves were a shift to the edge, a more serious push into the chip business and a big dose of machine learning product announcements.

The fact is that AWS has such market momentum now, it’s a legitimate question to ask if anyone, even Microsoft, can catch up. The market is continuing to expand though, and the next battle is for that remaining market share. AWS CEO Andy Jassy spent more time than in the past trashing Microsoft at 2019’s re:Invent customer conference in December, imploring customers to move to the cloud faster and showing that his company is preparing for a battle with its rivals in the years ahead.

Numbers, please

AWS closed 2019 on a $36 billion run rate, with quarterly revenue growing from $7.43 billion in its first report of the year in January to $9 billion in its most recent earnings report in October. Believe it or not, according to CNBC, that number failed to meet analysts’ expectations of $9.1 billion, but it still accounted for 13% of Amazon’s revenue in the quarter.

Regardless, AWS is a juggernaut, which is fairly amazing when you consider that it started as a side project for Amazon.com in 2006. In fact, if AWS were a stand-alone company, it would be a substantial business. While growth slowed a bit last year, that’s inevitable when you get as large as AWS, says John Dinsdale, VP, chief analyst and general manager at Synergy Research, a firm that follows all aspects of the cloud market.

“This is just math and the law of large numbers. On average over the last four quarters, it has incremented its revenues by well over $500 million per quarter. So it has grown its quarterly revenues by well over $2 billion in a twelve-month period,” he said.

Dinsdale added, “To put that into context, this growth in quarterly revenue is bigger than Google’s total revenues in cloud infrastructure services. In a very large market that is growing at over 35% per year, AWS market share is holding steady.”

Dinsdale says the cloud infrastructure market didn’t quite break $100 billion last year; even without full Q4 results, his firm’s models project a total of around $95 billion, up 37% over 2018. AWS has more than a third of that. Microsoft is way back at around 17%, with Google in third at around 8 or 9%.

While this is from Q1, it illustrates the relative positions of companies in the cloud market. Chart: Synergy Research

JEDI disappointment

It would be hard to do any year-end review of AWS without discussing JEDI. From the moment the Department of Defense announced its decade-long, $10 billion cloud RFP, it has been one big controversy after another.

Jan 02, 2020

Moving storage in-house helped Dropbox thrive

Back in 2013, Dropbox was scaling fast.

The company had grown quickly by taking advantage of cloud infrastructure from Amazon Web Services (AWS), but when you grow rapidly, infrastructure costs can skyrocket, especially when approaching the scale Dropbox was at the time. The company decided to build its own storage system and network — a move that turned out to be a wise decision.

In a time when going from on-prem to cloud and closing private data centers was typical, Dropbox took a big chance by going the other way. The company still uses AWS for certain services, regional requirements and bursting workloads, but ultimately when it came to the company’s core storage business, it wanted to control its own destiny.

Storage is at the heart of Dropbox’s service, leaving it with scale issues like few other companies, even in an age of massive data storage. With 600 million users and 400,000 teams currently storing more than 3 exabytes of data (and growing), if it hadn’t taken this step, the company might have been squeezed by its growing cloud bills.

Controlling infrastructure helped control costs, which improved the company’s key business metrics. A look at historical performance data tells a story about the impact that taking control of storage costs had on Dropbox.

The numbers

In March of 2016, Dropbox announced that it was “storing and serving” more than 90% of user data on its own infrastructure for the first time, completing a 3-year journey to get to this point. To understand what impact the decision had on the company’s financial performance, you have to examine the numbers from 2016 forward.

There is good financial data from Dropbox going back to the first quarter of 2016 thanks to its IPO filing, but not before. So the view into the impact of bringing storage in-house begins after the migration was already largely complete. By examining the company’s 2016 and 2017 financial results, it’s clear that Dropbox’s revenue quality increased dramatically. Even better for the company, its revenue quality improved as its aggregate revenue grew.

Dec 09, 2019

AWS is sick of waiting for your company to move to the cloud

AWS held its annual re:Invent customer conference last week in Las Vegas. Being Vegas, there was pageantry aplenty, of course, but this year’s edition felt a bit different than in years past, lacking the onslaught of major announcements we are used to getting at this event.

Perhaps the pace of innovation could finally be slowing, but the company still had a few messages for attendees. For starters, AWS CEO Andy Jassy made it clear he’s tired of the slow pace of change inside the enterprise. In Jassy’s view, the time for incremental change is over, and it’s time to start moving to the cloud faster.

AWS also placed a couple of big bets this year in Vegas to help make that happen. The first involves AI and machine learning. The second involves moving computing to the edge, closer to the business than the traditional cloud allows.

The question is what is driving these strategies? AWS had a clear head start in the cloud, and owns a third of the market, more than double its closest rival, Microsoft. The good news is that the market is still growing and will continue to do so for the foreseeable future. The bad news for AWS is that it can probably see Google and Microsoft beginning to resonate with more customers, and it’s looking for new ways to get a piece of the untapped part of the market to choose AWS.

Dec 03, 2019

Verizon and AWS announce 5G Edge computing partnership

Just as Qualcomm was starting to highlight its 5G plans for the coming years, Verizon CEO Hans Vestberg hit the stage at AWS re:Invent to discuss the carrier’s team-up with the cloud computing giant.

As part of Verizon’s (TechCrunch’s parent company, disclosure, disclosure, disclosure) upcoming focus on 5G edge computing, the carrier will be the first to use the newly announced AWS Wavelength. The platform is designed to let developers build super-low-latency apps for 5G devices.

Currently, it’s being piloted in Chicago with a handful of high-profile partners, including the NFL and Bethesda, the game developer behind Fallout and Elder Scrolls. No details yet on those specific applications (though remote gaming and live streaming seem like the obvious ones), but potential future uses include things like smart cars, IoT devices, AR/VR — you know, the sorts of things people cite when discussing 5G’s life beyond the smartphone.

“AWS Wavelength provides the same AWS environment — APIs, management console and tools — that they’re using today at the edge of the 5G network,” AWS CEO Andy Jassy said onstage. “Starting with Verizon’s 5G network locations in the U.S., customers will be able to deploy the latency-sensitive portions of an application at the edge to provide single-digit millisecond latency to mobile and connected devices.”

While Vestberg joined Jassy onstage at re:Invent, Verizon CNO Nicki Palmer joined Qualcomm in Hawaii to discuss the carrier’s mmWave approach to next-gen wireless. The technology has raised some questions around its coverage area, which Verizon has addressed to some degree through partnerships with third parties like Boingo.

The company plans to have coverage in 30 U.S. cities by the end of the year; that number currently stands at 18.

Dec 03, 2019

AWS announces new enterprise search tool powered by machine learning

Today at AWS re:Invent in Las Vegas, the company announced a new search tool called Kendra, which provides natural language search across a variety of content repositories using machine learning.

Matt Wood, AWS VP of artificial intelligence, said the new search tool uses machine learning, but doesn’t actually require machine learning expertise of any kind. Amazon is taking care of that for customers under the hood.

You start by identifying your content repositories. This could be anything from an S3 storage repository to OneDrive to Salesforce — anywhere you store content. You can use pre-built connectors from AWS, provide your credentials and connect to all of these different tools.

Kendra then builds an index based on the content it finds in the connected repositories, and users can begin to interact with the search tool using natural language queries. The tool understands concepts like time, so if the question is something like “When is the IT help desk open?,” the search engine understands that this is about time, checks the index and delivers the right information to the user.
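As a rough sketch of what such a query looks like from code (assuming the AWS SDK for Python, boto3, and a hypothetical index ID; the helper function and placeholder values below are illustrative, not part of the Kendra API itself):

```python
# Sketch of a natural-language query against a Kendra index.
# The index ID and question below are hypothetical placeholders.

def build_kendra_query(index_id: str, query_text: str) -> dict:
    """Assemble the request parameters for a Kendra Query call."""
    return {
        "IndexId": index_id,
        "QueryText": query_text,  # a plain natural-language question
    }

params = build_kendra_query("example-index-id", "When is the IT help desk open?")

# With AWS credentials configured, the actual call would look roughly like:
#   import boto3
#   kendra = boto3.client("kendra")
#   response = kendra.query(**params)
#   for result in response["ResultItems"]:
#       print(result.get("DocumentTitle"))
print(params["QueryText"])
```

The point is that the caller supplies only the question; relevance ranking and the natural-language understanding happen inside the managed service.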

The beauty of this search tool is not only that it uses machine learning, but that it can learn, based on simple feedback from a user like a smiley-face or sad-face emoji, which answers are good and which ones require improvement, and it does this automatically for the search team.

Once you have it set up, you can drop the search box on your company intranet or embed it inside an application, and it behaves as you would expect a search tool to, with features like type-ahead.

Dec 03, 2019

AWS’ CodeGuru uses machine learning to automate code reviews

AWS today announced CodeGuru, a new machine learning-based service that automates code reviews based on the data the company has gathered from doing code reviews internally.

Developers write their code as usual and simply add CodeGuru to their pull requests. For the time being, it supports GitHub and CodeCommit. CodeGuru uses its knowledge of reviews from Amazon and about 10,000 open-source projects to find issues, then comments on the pull request as needed. It not only identifies the issues, but also suggests remediations and offers links to the relevant documentation.
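Hooking a repository up to the reviewer is a one-time association step. As a hedged sketch (assuming boto3’s "codeguru-reviewer" client; the repository name and helper function are hypothetical placeholders):

```python
# Sketch of onboarding a CodeCommit repository to CodeGuru Reviewer.
# The repository name is a hypothetical placeholder.

def build_association_request(repo_name: str) -> dict:
    """Assemble an AssociateRepository request for a CodeCommit repo."""
    return {"Repository": {"CodeCommit": {"Name": repo_name}}}

request = build_association_request("example-service-repo")

# With AWS credentials configured, the actual call would be roughly:
#   import boto3
#   reviewer = boto3.client("codeguru-reviewer")
#   reviewer.associate_repository(**request)
# After that, CodeGuru comments on new pull requests automatically.
print(request["Repository"]["CodeCommit"]["Name"])
```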

Encoded in CodeGuru are AWS’s own best practices. Among other things, it also finds concurrency issues, incorrect handling of resources and issues with input validation.

AWS and Amazon’s consumer side have used the profiler part of CodeGuru for the last few years to find the “most expensive line of code.” Over the last few years, even as some of the company’s applications grew, some teams were able to increase their CPU utilization by more than 325% at 36% lower cost.

Dec 03, 2019

AWS AutoPilot gives you more visible AutoML in SageMaker Studio

Today at AWS re:Invent in Las Vegas, the company announced AutoPilot, a new tool that gives you greater visibility into automated machine learning model creation, known as AutoML. This new tool is part of the new SageMaker Studio also announced today.

As AWS CEO Andy Jassy pointed out onstage today, one of the problems with AutoML is that it’s basically a black box. If you want to improve a mediocre model, or just evolve it for your business, you have no idea how it was built.

The idea behind AutoPilot is to give you the ease of model creation you get from an AutoML-generated model, but also give you much deeper insight into how the system built the model. “AutoPilot is a way to create a model automatically, but give you full visibility and control,” Jassy said.

“Using a single API call, or a few clicks in Amazon SageMaker Studio, SageMaker Autopilot first inspects your data set, and runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms and hyperparameters. Then, it uses this combination to train an Inference Pipeline, which you can easily deploy either on a real-time endpoint or for batch processing. As usual with Amazon SageMaker, all of this takes place on fully-managed infrastructure,” the company explained in a blog post announcing the new feature.
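That single API call is SageMaker’s CreateAutoMLJob. A minimal sketch of assembling its parameters, assuming boto3 and using hypothetical bucket paths, job name, target column and role ARN:

```python
# Sketch of kicking off a SageMaker Autopilot job.
# All names, S3 URIs and the role ARN are hypothetical placeholders.

def build_automl_job(job_name: str, train_uri: str, target: str,
                     output_uri: str, role_arn: str) -> dict:
    """Assemble the parameters for a SageMaker CreateAutoMLJob call."""
    return {
        "AutoMLJobName": job_name,
        "InputDataConfig": [{
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": train_uri,          # training data in S3
            }},
            "TargetAttributeName": target,   # the column Autopilot should predict
        }],
        "OutputDataConfig": {"S3OutputPath": output_uri},
        "RoleArn": role_arn,
    }

job = build_automl_job(
    "churn-autopilot-demo",
    "s3://example-bucket/train/",
    "churned",
    "s3://example-bucket/output/",
    "arn:aws:iam::123456789012:role/ExampleSageMakerRole",
)

# With AWS credentials configured:
#   import boto3
#   sm = boto3.client("sagemaker")
#   sm.create_auto_ml_job(**job)
```

Everything past this point (preprocessing, algorithm selection, hyperparameter tuning) is handled by the service, which is what makes the job a single call.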

You can look at each model’s parameters across the roughly 50 automated models it generates, and it provides a leaderboard of which models performed best. What’s more, you can look at a model’s underlying notebook and see what trade-offs were made to generate it. For instance, the best model may be the most accurate, but it sacrifices speed to achieve that.

Your company may have its own set of unique requirements and you can choose the best model based on whatever parameters you consider to be most important, even though it was generated in an automated fashion.

Once you have the model you like best, you can go into SageMaker Studio, select it and launch it with a single click. The tool is available now.
Dec 03, 2019

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to do analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. To handle this, AQUA is essentially a hardware-accelerated cache, and it promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS today announced next-generation compute instances for Redshift: the RA3 instances, with 48 vCPUs, 384 GiB of memory and up to 64 TB of storage per node. You can build clusters of up to 128 of these instances.
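As a hedged sketch of provisioning such a cluster programmatically (assuming boto3’s Redshift client; the cluster identifier, credentials and node count are hypothetical placeholders):

```python
# Sketch of provisioning a Redshift cluster on RA3 nodes.
# Identifier, credentials and sizing below are hypothetical placeholders.

def build_ra3_cluster(identifier: str, nodes: int) -> dict:
    """Assemble parameters for a Redshift CreateCluster call on RA3 nodes."""
    return {
        "ClusterIdentifier": identifier,
        "NodeType": "ra3.16xlarge",   # the large RA3 size: 48 vCPUs, 384 GiB
        "ClusterType": "multi-node",
        "NumberOfNodes": nodes,       # clusters can scale to 128 instances
        "MasterUsername": "admin",
        "MasterUserPassword": "example-password-1A",  # placeholder only
    }

cluster = build_ra3_cluster("analytics-ra3", 4)

# With AWS credentials configured:
#   import boto3
#   redshift = boto3.client("redshift")
#   redshift.create_cluster(**cluster)
```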

Dec 02, 2019

CircleCI launches improved AWS support

For about a year now, continuous integration and delivery service CircleCI has offered Orbs, a way to easily reuse commands and integrations with third-party services. Unsurprisingly, some of the most popular Orbs focus on AWS, as that’s where many of its users are either testing their code or deploying it. Today, just in time for AWS’s annual re:Invent developer conference in Las Vegas, the company announced that it has added Orb support for the AWS Serverless Application Model (SAM), which makes setting up automated CI/CD pipelines for testing and deploying to AWS Lambda significantly easier.

In total, the company says, more than 11,000 organizations have started using Orbs since they launched a year ago. Among the AWS-centric Orbs are those for building and updating images for the Amazon Elastic Container Service (ECS) and the Elastic Container Service for Kubernetes (EKS), for example, as well as an Orb for AWS CodeDeploy support, one for installing and configuring the AWS command line interface, one for working with the S3 storage service and more.

“We’re just seeing a momentum of more and more companies being ready to adopt [managed services like Lambda, ECS and EKS], so this became really the ideal time to do most of the work with the product team at AWS that manages their serverless ecosystem and to add in this capability to leverage that serverless application model and really have this out of the box CI/CD flow ready for users who wanted to start adding these into to Lambda,” CircleCI VP of business development Tom Trahan told me. “I think when Lambda was in its earlier days, a lot of people would use it and they would use it and not necessarily follow the same software patterns and delivery flow that they might have with their traditional software. As they put more and more into Lambda and are really putting a lot more what I would call ‘production quality code’ out there to leverage. They realize they do want to have that same software delivery capability and discipline for Lambda as well.”

Trahan stressed that he’s still talking about early adopters and companies that started out as cloud-native companies, but these days, this group includes a lot of traditional companies, as well, that are now rapidly going through their own digital transformations.

Dec 02, 2019

New Amazon tool simplifies delivery of containerized machine learning models

As part of the flurry of announcements coming this week out of AWS re:Invent, Amazon announced the release of Amazon SageMaker Operators for Kubernetes, a way for data scientists and developers to simplify training, tuning and deploying containerized machine learning models.

Packaging machine learning models in containers can help put them to work inside organizations faster, but getting there often requires a lot of extra management to make it all work. Amazon SageMaker Operators for Kubernetes is supposed to make it easier to run and manage those containers, the underlying infrastructure needed to run the models and the workflows associated with all of it.

“While Kubernetes gives customers control and portability, running ML workloads on a Kubernetes cluster brings unique challenges. For example, the underlying infrastructure requires additional management such as optimizing for utilization, cost and performance; complying with appropriate security and regulatory requirements; and ensuring high availability and reliability,” AWS’ Aditya Bindal wrote in a blog post introducing the new feature.

When you combine that with the workflows associated with delivering a machine learning model inside an organization at scale, it becomes part of a much bigger delivery pipeline, one that is challenging to manage across departments and a variety of resource requirements.

This is precisely what Amazon SageMaker Operators for Kubernetes has been designed to help DevOps teams do. “Amazon SageMaker Operators for Kubernetes bridges this gap, and customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale,” Bindal wrote.

The promise of Kubernetes is that it can orchestrate the delivery of containers at the right moment, but if you haven’t automated delivery of the underlying infrastructure, you can over (or under) provision and not provide the correct amount of resources required to run the job. That’s where this new tool, combined with SageMaker, can help.
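With the operators installed, a training job becomes just another Kubernetes resource you apply to the cluster. As a rough sketch of what such a manifest might look like, expressed here as a Python dict (you would normally write it as YAML and apply it with kubectl); the field names and values are illustrative assumptions, not the authoritative CRD schema:

```python
# Illustrative sketch of a TrainingJob custom resource for the
# SageMaker Operators for Kubernetes. All names, the role ARN and
# S3 paths are hypothetical placeholders.
import json

manifest = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",  # assumed operator API group
    "kind": "TrainingJob",
    "metadata": {"name": "example-training-job"},
    "spec": {
        "region": "us-east-1",
        "roleArn": "arn:aws:iam::123456789012:role/ExampleSageMakerRole",
        "resourceConfig": {
            "instanceType": "ml.m5.xlarge",
            "instanceCount": 1,
            "volumeSizeInGB": 5,
        },
        "outputDataConfig": {"s3OutputPath": "s3://example-bucket/output/"},
    },
}

# After dumping this to YAML, you would run something like:
#   kubectl apply -f training-job.yaml
print(json.dumps(manifest, indent=2))
```

The operator watches for resources of this kind and translates them into SageMaker API calls, so the training infrastructure itself is provisioned and torn down by the managed service rather than by the cluster.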

“With workflows in Amazon SageMaker, compute resources are pre-configured and optimized, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete, offering near 100% utilization,” Bindal wrote.

Amazon SageMaker Operators for Kubernetes are available today in select AWS regions.
