Dec 09, 2019

AWS is sick of waiting for your company to move to the cloud

AWS held its annual re:Invent customer conference last week in Las Vegas. Being Vegas, there was pageantry aplenty, of course, but this year’s model felt a bit different than in years past, lacking the onslaught of major announcements we are used to getting at this event.

Perhaps the pace of innovation could finally be slowing, but the company still had a few messages for attendees. For starters, AWS CEO Andy Jassy made it clear he’s tired of the slow pace of change inside the enterprise. In Jassy’s view, the time for incremental change is over, and it’s time to start moving to the cloud faster.

AWS also placed a couple of big bets this year in Vegas to help make that happen. The first involves AI and machine learning. The second involves moving computing to the edge, closer to the business than the traditional cloud allows.

The question is what’s driving these strategies. AWS had a clear head start in the cloud and owns a third of the market, more than double its closest rival, Microsoft. The good news is that the market is still growing and will continue to do so for the foreseeable future. The bad news for AWS is that it can probably see Google and Microsoft beginning to resonate with more customers, and it’s looking for new ways to convince the untapped part of the market to choose AWS.

Dec 03, 2019

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers, which supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes Service.

Like spot instances for the EC2 compute platform, Fargate Spot pricing is significantly cheaper, both for compute and memory, than regular Fargate pricing. In return, though, you have to accept that your tasks may get terminated when AWS needs additional capacity. While that means Fargate Spot may not be perfect for every workload, there are plenty of applications that can easily handle an interruption.

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here, and an hour of vCPU time on Fargate Spot will only cost $0.01245364 (yes, AWS is pretty precise there), compared to $0.04048 for the on-demand price.

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances, after all. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then keep that capacity on spot as instances come and go, and if there are no spot instances available, it will move it to on-demand instances and back to spot once instances are available again.
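
To make that concrete, here’s a minimal boto3 sketch of the Fargate flavor of a capacity provider strategy. The cluster, service and task definition names are hypothetical, and the cluster would need the FARGATE and FARGATE_SPOT providers enabled; treat this as a sketch rather than a definitive recipe.

```python
# Minimal sketch: weighting an ECS service across Fargate and Fargate Spot.
# Cluster, service and task definition names below are hypothetical, and the
# cluster must already have the FARGATE and FARGATE_SPOT capacity providers
# enabled (e.g. via put_cluster_capacity_providers).
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-task:1",
    desiredCount=10,
    # Keep a baseline of 3 tasks on regular Fargate; split everything above
    # that 1:3 in favor of the cheaper, interruptible Fargate Spot capacity.
    capacityProviderStrategy=[
        {"capacityProvider": "FARGATE", "base": 3, "weight": 1},
        {"capacityProvider": "FARGATE_SPOT", "base": 0, "weight": 3},
    ],
)
```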

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outposts, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.

Dec 03, 2019

Verizon and AWS announce 5G Edge computing partnership

Just as Qualcomm was starting to highlight its 5G plans for the coming years, Verizon CEO Hans Vestberg hit the stage at AWS re:Invent to discuss the carrier’s team-up with the cloud computing giant.

As part of Verizon’s (TechCrunch’s parent company, disclosure, disclosure, disclosure) upcoming focus on 5G edge computing, the carrier will be the first to use the newly announced AWS Wavelength. The platform is designed to let developers build super-low-latency apps for 5G devices.

Currently, it’s being piloted in Chicago with a handful of high-profile partners, including the NFL and Bethesda, the game developer behind Fallout and Elder Scrolls. No details yet on those specific applications (though remote gaming and live streaming seem like the obvious ones), but potential future uses include things like smart cars, IoT devices, AR/VR — you know, the sorts of things people cite when discussing 5G’s life beyond the smartphone.

“AWS Wavelength provides the same AWS environment — APIs, management console and tools — that they’re using today at the edge of the 5G network,” AWS CEO Andy Jassy said onstage. “Starting with Verizon’s 5G network locations in the U.S., customers will be able to deploy the latency-sensitive portions of an application at the edge to provide single-digit millisecond latency to mobile and connected devices.”
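
In practice, a Wavelength Zone is meant to show up like any other zone: you put a subnet in it and launch instances there. Here’s a rough boto3 sketch of that flow; the zone name, VPC ID, AMI and instance type are placeholders, and the subnet would additionally need a route through a carrier gateway to reach devices on the carrier network.

```python
# Rough sketch: launching an EC2 instance into a Wavelength Zone subnet.
# The VPC ID, AMI ID and zone name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A Wavelength Zone is addressed like a regular Availability Zone.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.2.0/24",
    AvailabilityZone="us-east-1-wl1-bos-wlz-1",
)["Subnet"]

# Launch the latency-sensitive piece of the app at the edge; a carrier IP
# (rather than a public IP) fronts the instance on the carrier's network.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": subnet["SubnetId"],
        "AssociateCarrierIpAddress": True,
    }],
)
```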

As Verizon’s CEO joined Jassy onstage, CNO Nicki Palmer joined Qualcomm in Hawaii to discuss the carrier’s mmWave approach to next-gen wireless. The technology has raised some questions around its coverage area, which Verizon has addressed to some degree through partnerships with third parties like Boingo.

The company plans to have coverage in 30 U.S. cities by the end of the year; that number currently stands at 18.

Dec 03, 2019

AWS announces new enterprise search tool powered by machine learning

Today at AWS re:Invent in Las Vegas, the company announced a new search tool called Kendra, which provides natural language search across a variety of content repositories using machine learning.

Matt Wood, AWS VP of artificial intelligence, said the new search tool uses machine learning, but doesn’t actually require machine learning expertise of any kind. Amazon is taking care of that for customers under the hood.

You start by identifying your content repositories. This could be anything from an S3 storage repository to OneDrive to Salesforce — anywhere you store content. You can use pre-built connectors from AWS, provide your credentials and connect to all of these different tools.

Kendra then builds an index based on the content it finds in the connected repositories, and users can begin to interact with the search tool using natural language queries. The tool understands concepts like time, so if the question is something like “When is the IT help desk open?” the search engine understands that this is about time, checks the index and delivers the right information to the user.
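
Here’s a minimal sketch of what such a query could look like through boto3’s Kendra client; the index ID is a placeholder, and the response handling is simplified.

```python
# Minimal sketch: a natural language query against a Kendra index.
# The index ID is a placeholder created earlier via create_index/console.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

response = kendra.query(
    IndexId="12345678-1234-1234-1234-123456789012",
    QueryText="When is the IT help desk open?",
)

# Kendra returns ranked results: direct answers, FAQ matches and documents.
for item in response["ResultItems"]:
    print(item["Type"], item.get("DocumentExcerpt", {}).get("Text"))
```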

The beauty of this search tool is not only that it uses machine learning, but that, based on simple feedback from a user, like a smiley face or sad face emoji, it can learn which answers are good and which require improvement, and it does this automatically, without extra work from the search team.

Once you have it set up, you can drop the search tool into your company intranet, or use it internally inside an application, and it behaves as you would expect a search tool to, with features like type-ahead.

Dec 03, 2019

AWS’ CodeGuru uses machine learning to automate code reviews

AWS today announced CodeGuru, a new machine learning-based service that automates code reviews based on the data the company has gathered from doing code reviews internally.

Developers write their code and simply add CodeGuru to their pull requests; for the time being, it supports GitHub and CodeCommit. CodeGuru uses its knowledge of reviews from Amazon and about 10,000 open-source projects to find issues, then comments on the pull request as needed. It will obviously identify the issues, but it will also suggest remediations and offer links to the relevant documentation.
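
For CodeCommit, wiring a repository up to CodeGuru Reviewer comes down to a single association call; here’s a hedged boto3 sketch with a hypothetical repository name. (GitHub repositories are instead connected through the console’s OAuth flow.)

```python
# Hedged sketch: associating a CodeCommit repository with CodeGuru Reviewer
# so that new pull requests get automated review comments. The repository
# name is hypothetical.
import boto3

reviewer = boto3.client("codeguru-reviewer", region_name="us-east-1")

reviewer.associate_repository(
    Repository={"CodeCommit": {"Name": "my-service-repo"}}
)
```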

Encoded in CodeGuru are AWS’s own best practices. Among other things, it also finds concurrency issues, incorrect handling of resources and issues with input validation.

AWS and Amazon’s consumer side have used the profiler part of CodeGuru for the last few years to find the “most expensive line of code.” In that time, even as some of the company’s applications grew, some teams were able to increase their CPU utilization by more than 325% at 36% lower cost.

Dec 03, 2019

AWS AutoPilot gives you more visible AutoML in SageMaker Studio

Today at AWS re:Invent in Las Vegas, the company announced AutoPilot, a new tool that gives you greater visibility into automated machine learning model creation, known as AutoML. This new tool is part of the new SageMaker Studio also announced today.

As AWS CEO Andy Jassy pointed out onstage today, one of the problems with AutoML is that it’s basically a black box. If you want to improve a mediocre model, or just evolve it for your business, you have no idea how it was built.

The idea behind AutoPilot is to give you the ease of model creation you get from an AutoML-generated model, but also give you much deeper insight into how the system built the model. “AutoPilot is a way to create a model automatically, but give you full visibility and control,” Jassy said.

“Using a single API call, or a few clicks in Amazon SageMaker Studio, SageMaker Autopilot first inspects your data set, and runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms and hyperparameters. Then, it uses this combination to train an Inference Pipeline, which you can easily deploy either on a real-time endpoint or for batch processing. As usual with Amazon SageMaker, all of this takes place on fully-managed infrastructure,” the company explained in a blog post announcing the new feature.
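
Here’s a rough sketch of that single API call in boto3; the job name, S3 paths and role ARN are placeholders.

```python
# Rough sketch: kicking off an Autopilot job with one API call. The job
# name, bucket paths and IAM role ARN are placeholders.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/churn/train/",
        }},
        # The column Autopilot should learn to predict.
        "TargetAttributeName": "churned",
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/churn/output/"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

# The generated candidates (and their notebooks) can then be compared:
# sm.list_candidates_for_auto_ml_job(AutoMLJobName="churn-autopilot-demo")
```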

You can look at the models’ parameters across the 50 automatically generated candidates, and the tool provides you with a leaderboard of which models performed best. What’s more, you can look at a model’s underlying notebook and see what trade-offs were made to generate it. For instance, the best model may be the most accurate, but sacrifice speed to get that accuracy.

Your company may have its own set of unique requirements and you can choose the best model based on whatever parameters you consider to be most important, even though it was generated in an automated fashion.

Once you have the model you like best, you can go into SageMaker Studio, select it and launch it with a single click. The tool is available now.

Dec 03, 2019

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to do analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. To handle this, AQUA is essentially a hardware-accelerated cache that promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS also today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs, 384 GiB of memory and up to 64 TB of managed storage per node. You can build clusters of up to 128 of these instances.
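
For reference, here’s a hedged boto3 sketch of spinning up an RA3-based cluster; the identifiers and credentials are placeholders.

```python
# Hedged sketch: creating a small RA3 Redshift cluster. Identifiers and
# credentials are placeholders; in practice the password would come from
# a secrets store, not source code.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.create_cluster(
    ClusterIdentifier="analytics-ra3",
    ClusterType="multi-node",
    NodeType="ra3.16xlarge",  # 48 vCPUs, 384 GiB of memory per node
    NumberOfNodes=4,          # RA3 clusters can scale up to 128 nodes
    DBName="analytics",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
)
```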

Dec 02, 2019

New Amazon tool simplifies delivery of containerized machine learning models

As part of the flurry of announcements coming this week out of AWS re:Invent, Amazon announced the release of Amazon SageMaker Operators for Kubernetes, a way for data scientists and developers to simplify training, tuning and deploying containerized machine learning models.

Packaging machine learning models in containers can help put them to work inside organizations faster, but getting there often requires a lot of extra management to make it all work. Amazon SageMaker Operators for Kubernetes is supposed to make it easier to run and manage those containers, the underlying infrastructure needed to run the models and the workflows associated with all of it.

“While Kubernetes gives customers control and portability, running ML workloads on a Kubernetes cluster brings unique challenges. For example, the underlying infrastructure requires additional management such as optimizing for utilization, cost and performance; complying with appropriate security and regulatory requirements; and ensuring high availability and reliability,” AWS’ Aditya Bindal wrote in a blog post introducing the new feature.

When you combine that with the workflows associated with delivering a machine learning model inside an organization at scale, it becomes part of a much bigger delivery pipeline, one that is challenging to manage across departments and a variety of resource requirements.

This is precisely what Amazon SageMaker Operators for Kubernetes has been designed to help DevOps teams do. “Amazon SageMaker Operators for Kubernetes bridges this gap, and customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale,” Bindal wrote.
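
To give a flavor of the Kubernetes side, here’s a hedged sketch of submitting a training job through the operator’s TrainingJob custom resource, using the official Python Kubernetes client. The CRD group and field names follow the operator’s published samples as best as can be reconstructed here, and the image, role ARN and S3 path are placeholders.

```python
# Hedged sketch: creating a SageMaker TrainingJob custom resource, which
# the operator reconciles into an actual SageMaker training job. The CRD
# group/version, image URI, role ARN and S3 path are assumptions.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside a pod

training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "xgboost-demo"},
    "spec": {
        "trainingJobName": "xgboost-demo",
        "roleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:1",
            "trainingInputMode": "File",
        },
        "outputDataConfig": {"s3OutputPath": "s3://my-bucket/output/"},
        "resourceConfig": {
            "instanceType": "ml.m5.xlarge",
            "instanceCount": 1,
            "volumeSizeInGB": 30,
        },
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com",
    version="v1",
    namespace="default",
    plural="trainingjobs",
    body=training_job,
)
```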

The promise of Kubernetes is that it can orchestrate the delivery of containers at the right moment, but if you haven’t automated delivery of the underlying infrastructure, you can over- (or under-) provision and not provide the correct amount of resources required to run the job. That’s where this new tool, combined with SageMaker, can help.

“With workflows in Amazon SageMaker, compute resources are pre-configured and optimized, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete, offering near 100% utilization,” Bindal wrote.

Amazon SageMaker Operators for Kubernetes are available today in select AWS regions.
