December 2, 2018

AWS wants to rule the world

AWS, once a nice little side hustle for Amazon’s eCommerce business, has grown over the years into a behemoth that’s on a $27 billion run rate, one that’s still growing at around 45 percent a year. That’s a highly successful business by any measure, but as I listened to AWS executives last week at their AWS re:Invent conference in Las Vegas, I didn’t hear a group that was content to sit still and let the growth speak for itself. Instead, I heard one that wants to dominate every area of enterprise computing.

Whether it was hardware like the new Inferentia chip, Outposts (the new on-prem servers), blockchain or a base station service for satellites, if AWS saw an opportunity, they were not ceding an inch to anyone.

Last year, AWS announced an astonishing 1,400 new features, and word was that they are on pace to exceed that this year. They get a lot of credit for not resting on their laurels and continuing to innovate like a much smaller company, even as they own gobs of market share.

The feature inflation probably can’t go on forever, but for now at least they show no signs of slowing down, as the announcements came at a furious pace once again. While they will tell you that every decision they make is about meeting customer needs, it’s clear that some of these announcements were also about answering competitive pressure.

Going after competitors harder

In the past, AWS kept criticism of competitors to a minimum, maybe giving a little jab to Oracle here and there, but this year they seemed to ratchet it up. In their keynotes, AWS CEO Andy Jassy and Amazon CTO Werner Vogels continually flogged Oracle, a competitor in the database market, but hardly a major threat as a cloud company right now.

They went right for Oracle’s market, though, with a new on-prem system called Outposts, which allows AWS customers to operate on premises and in the cloud using a single AWS control panel, or one from VMware if customers prefer. That is the kind of cloud vision that Larry Ellison might have put forth, but Jassy didn’t necessarily see it as going after Oracle or anyone else. “I don’t see Outposts as a shot across the bow of anyone. If you look at what we are doing, it’s very much informed by customers,” he told reporters at a press conference last week.

AWS CEO Andy Jassy at a press conference at AWS Re:Invent last week.

Yet AWS didn’t reserve its criticism just for Oracle. It also took aim at Microsoft, jabbing at Microsoft SQL Server and announcing Amazon FSx for Windows File Server, a service designed specifically to move Windows file workloads to the AWS cloud.

Google wasn’t spared either: the launch of Inferentia and Elastic Inference put Google on notice that AWS wasn’t going to yield the AI market to Google’s TPU infrastructure. All of these tools, and many more, were about more than answering customer demand; they were about putting the competition on notice in every aspect of enterprise computing.

Upward growth trajectory

The cloud market is continuing to grow at a dramatic pace, and as the market leader, AWS has been able to press that advantage. Jassy, echoing Google’s Diane Greene and Oracle’s Larry Ellison, says the industry as a whole is still very early in its cloud adoption, which means there is still plenty of market share left to capture.

“I think we’re just in the early stages of enterprise and public sector adoption in the US. Outside the US I would say we are 12-36 months behind. So there are a lot of mainstream enterprises that are just now starting to plan their approach to the cloud,” Jassy said.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, says that AWS has been using its market position to keep expanding into different areas. “AWS has the scale right now to do many things others cannot, particularly lesser players like Google Cloud Platform and Oracle Cloud. They are trying to make a point with the thousands of new products and features they bring out. This serves as a disincentive longer-term for other players, and I believe will result in a shakeout,” he told TechCrunch.

As for the frenetic pace of innovation, Moorhead believes it can’t go on forever. “To me, the question is, when do we reach a point where 95% of the needs are met, and the innovation rate isn’t required. Every market, literally every market, reaches a point where this happens, so it’s not a matter of if but when,” he said.

Announcements like AWS Ground Station showed that AWS is willing to expand beyond the conventional confines of enterprise computing, all the way into outer space to help companies process satellite data. That ability to think beyond traditional uses of cloud computing resources shows a level of creativity that suggests there could be other untapped markets for AWS that we haven’t yet imagined.

As AWS moves into more areas of the enterprise computing stack, whether on premises or in the cloud, they are showing their desire to dominate every aspect of the enterprise computing world. Last week they demonstrated that there is no area that they are willing to surrender to anyone.


November 29, 2018

New AWS tool helps customers understand best cloud practices

Since 2015, AWS has had a team of solutions architects working with customers to make sure they are using AWS services in a way that meets best practices around a set of defined criteria. Today, the company announced a new AWS Well-Architected Tool that helps customers do this themselves in an automated way, without the help of a human consultant.

As Amazon CTO Werner Vogels said in his keynote address at AWS re:Invent in Las Vegas, it’s hard to scale a human team inside the company to meet the needs of thousands of customers, especially when so many want to be sure they are complying with these best practices. He indicated that they even brought on a network of certified partners to help, but it still has not been enough to meet demand.

In typical AWS fashion, they decided to create a service to help customers measure how well they are doing against the five Well-Architected pillars: operational excellence, security, reliability, cost optimization and performance efficiency. Customers can run the tool against the AWS services they are using and get a full report of how they measure up on each of those factors.

“I think of it as a way to make sure that you are using the cloud right, and that you are using it well,” Jeff Barr wrote in a blog post introducing the new service.

Instead of working with a human consultant to analyze your systems, you answer a series of questions about how your workloads are set up. When the process is complete, the tool generates a PDF report with the recommendations for your particular situation.


While it’s doubtful that such an approach can be as comprehensive as a conversation between client and consultant, it is a starting point that at least gets you thinking about these questions, and since the service is free, you have little to lose by trying the tool and seeing what it tells you.


November 29, 2018

AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code that responds to event triggers, and AWS deals with whatever compute, memory and storage is needed to make it work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make Lambda more developer-friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all of that for you, serving up whatever resources you need to run your event and no more. That means you no longer have to worry about provisioning all of your own infrastructure, and you only pay for the computing you need at any given moment to make the application work.
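To make that concrete, here is a minimal sketch of what an event-driven Lambda function looks like, written in Python for illustration. The event field and response shape are placeholders rather than the payload of any particular trigger.

```python
# handler.py -- a minimal sketch of a Lambda function in Python.
import json

def handler(event, context):
    # "event" carries the trigger payload (an S3 notification, an API
    # Gateway request, etc.); AWS provisions the compute that runs this.
    name = event.get("name", "world")   # placeholder field for illustration
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Everything else, from scaling to patching the underlying hosts, is AWS's problem rather than yours.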

AWS tends to release a base service, then build more functionality on top of it as customers use it and requirements grow. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate endlessly about tools, and everyone has their own idea of which tools to bring to the task each day.

For starters, they decided to please the language folks by introducing support for new languages. Developers who use Ruby can now take advantage of Ruby Support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced the C++ Lambda Runtime. If neither of those matches your programming language tastes, AWS opened things up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”
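For a sense of how that interface works, here is a hedged sketch of the polling loop a custom runtime runs against the Runtime API's documented HTTP endpoints. It is written in Python purely for illustration (a real custom runtime would typically embed whichever language you are bringing), and the handler logic is a placeholder.

```python
# bootstrap.py -- a sketch of a custom runtime's event loop against the
# Lambda Runtime API (the documented 2018-06-01 interface). Error reporting
# and initialization failures are omitted for brevity.
import json
import os
import urllib.request

RUNTIME_API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{RUNTIME_API}/2018-06-01/runtime"

def handle(event):
    # Placeholder for the function logic the runtime would dispatch to.
    return {"echo": event}

while True:
    # 1. Long-poll for the next invocation.
    with urllib.request.urlopen(f"{BASE}/invocation/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # 2. Run the handler and post the result back for this request id.
    result = json.dumps(handle(event)).encode()
    req = urllib.request.Request(
        f"{BASE}/invocation/{request_id}/response", data=result)
    with urllib.request.urlopen(req):
        pass  # the Runtime API acknowledges the posted response
```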

AWS didn’t want to stop with languages, though. The company also recognizes that even though Lambda (and serverless in general) is designed to remove a layer of complexity for developers, that doesn’t mean all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.
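As a rough illustration, here is a hedged boto3 sketch of publishing shared code as a layer and attaching it to an existing function. The bucket, key, layer and function names are placeholders, not real resources.

```python
# layers_sketch.py -- publishing shared code as a Lambda Layer and
# attaching it to a function with boto3. All names are hypothetical.
import boto3

lam = boto3.client("lambda")

# Publish a zip of shared helpers (e.g. common business logic) as a layer.
layer = lam.publish_layer_version(
    LayerName="shared-business-logic",                       # placeholder
    Content={"S3Bucket": "my-artifacts", "S3Key": "shared-logic.zip"},
    CompatibleRuntimes=["python3.6"],
)

# Attach the layer to a function; its contents are unpacked for the
# function to import at runtime.
lam.update_function_configuration(
    FunctionName="orders-processor",                         # placeholder
    Layers=[layer["LayerVersionArn"]],
)
```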

As Lambda matures, developer requirements grow, and these announcements, along with the others, are part of the effort to meet those needs.


November 29, 2018

AWS launches a managed Kafka service

Kafka is an open-source tool for handling incoming streams of data. Like virtually all powerful tools, it’s somewhat hard to set up and manage. Today, Amazon’s AWS is making this all a bit easier for its users with the launch of Amazon Managed Streaming for Kafka (MSK). That’s a mouthful, but it’s essentially Kafka as a fully managed, highly available service on AWS. It’s now available as a public preview.

As AWS CTO Werner Vogels noted in his AWS re:Invent keynote, Kafka users traditionally had to do a lot of heavy lifting to set up a cluster on AWS and to ensure that it could scale and handle failures. “It’s a nightmare having to restart all the cluster and the main nodes,” he said. “This is what I would call the traditional heavy lifting that AWS is really good at solving for you.”

It’s interesting to see AWS launch this service, given that it already offers Kinesis, a very similar service that also focuses on ingesting streaming data. There are plenty of applications on the market today that already use Kafka, though, and AWS is clearly interested in giving those users a pathway to move either to a managed Kafka service or to AWS in general.
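That pathway is mostly a matter of repointing existing clients. Here is a minimal sketch using the third-party kafka-python client; the broker hostnames are placeholders standing in for the bootstrap broker list a managed cluster hands back.

```python
# producer_sketch.py -- an existing Kafka producer pointed at a managed
# cluster: only the bootstrap broker addresses change. Hostnames below
# are placeholders, not real endpoints.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=[
        "b-1.example.kafka.us-east-1.amazonaws.com:9092",
        "b-2.example.kafka.us-east-1.amazonaws.com:9092",
        "b-3.example.kafka.us-east-1.amazonaws.com:9092",
    ]
)

# Application code that already produces to Kafka keeps working unchanged.
producer.send("clickstream", b'{"page": "/checkout", "user": 42}')
producer.flush()
```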

As with all things AWS, the pricing is a bit complicated, but a basic Kafka broker instance starts at $0.21 per hour. You’re not likely to use just one instance, so for a somewhat useful setup with three brokers (roughly $460 per month in instance hours alone), a good amount of storage and some other fees, you’ll quickly pay well over $500 per month.


November 28, 2018

AWS is bringing the cloud on prem with Outposts

AWS has always been the pure cloud vendor, and even though it has given a nod to hybrid in the past, it is now fully embracing it. Today, in conjunction with VMware, the company announced a pair of options for bringing AWS into the data center.

Yes, you read that correctly. You can now put AWS in your data center on AWS hardware, the same design the company uses in its own data centers. The two new products are part of AWS Outposts.

There are two Outposts variations: VMware Cloud on AWS Outposts, which uses the VMware control plane, and a native AWS Outposts option, which allows customers to run compute and storage on premises using the same AWS APIs that are used in the AWS cloud.

In fact, VMware CEO Pat Gelsinger joined AWS CEO Andy Jassy onstage at AWS re:Invent for a joint announcement. The two companies have been working together for some time to bring VMware to the AWS cloud. Part of this announcement flips that on its head, bringing the AWS cloud on prem to work with VMware. In both cases, AWS sells you their hardware, installs it if you wish, and will even maintain it for you.

This is an area where AWS has lagged, preferring the vision of a pure cloud to a return to the data center, but the move is a tacit acknowledgment that customers want to operate in both places for the foreseeable future.

The announcement also extends the company’s emerging hybrid vision. On Monday, the company announced AWS Transit Gateway, which is designed to provide a single way to manage network resources, whether they live in the cloud or on prem.

Now AWS is bringing its cloud on prem, something that Microsoft, Canonical, Oracle and others have had for some time. It’s worth noting that today’s announcement is a public preview. The actual release is expected in the second half of next year.


November 28, 2018

Amazon Textract brings intelligence to OCR

One of the challenges just about every business faces is converting forms into a useful digital format, a task that has typically fallen to human data entry clerks. The state of the art has been to use OCR to read forms automatically, but as AWS CEO Andy Jassy explained, OCR is basically just a dumb text reader; it doesn’t recognize text types. Amazon wanted to change that, and today it announced Amazon Textract, an intelligent OCR tool that moves data from forms into a more usable digital format.

As an example, he showed a form with tables. Regular OCR didn’t recognize the tables and interpreted them as one long string of text. Textract is designed to recognize common page elements like tables and pull out the data in a sensible way.

Jassy said that forms also often change, and if you are using a template as a workaround for OCR’s lack of intelligence, the template breaks if you move anything. To fix that, Textract is smart enough to understand common data types like Social Security numbers, dates of birth and addresses, and interprets them correctly no matter where they fall on the page.

“We have taught Textract to recognize this set of characters is a date of birth and this is a Social Security number. If forms change, Textract won’t miss it,” Jassy explained.
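For developers, the difference shows up in the shape of the response. Here is a hedged boto3 sketch of asking Textract for tables and form fields in a scanned document; the bucket and file names are placeholders, and this assumes access to the service, which launches in preview.

```python
# textract_sketch.py -- asking Textract for document structure, not just
# raw text, with boto3. Bucket and object names are placeholders.
import boto3

textract = boto3.client("textract")

resp = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-forms", "Name": "tax-form.png"}},
    FeatureTypes=["TABLES", "FORMS"],   # request tables and key-value pairs
)

# The response is a list of typed blocks rather than a flat string, so
# tables and form fields can be handled differently from plain lines.
for block in resp["Blocks"]:
    if block["BlockType"] in ("TABLE", "CELL", "KEY_VALUE_SET"):
        print(block["BlockType"], block.get("Text", ""))
```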


November 28, 2018

AWS announces new Inferentia machine learning chip

AWS is not content to cede any part of any market to any company. When it comes to machine learning chips, names like Nvidia or Google come to mind, but today at AWS re:Invent in Las Vegas, the company announced a new dedicated machine learning chip of its own called Inferentia.

“Inferentia will be a very high-throughput, low-latency, sustained-performance very cost-effective processor,” AWS CEO Andy Jassy explained during the announcement.

Holger Mueller, an analyst with Constellation Research, says that while Amazon is far behind, this is a good step for them as companies try to differentiate their machine learning approaches in the future.

“The speed and cost of running machine learning operations — ideally in deep learning — are a competitive differentiator for enterprises. Speed advantages will make or break success of enterprises (and nations when you think of warfare). That speed can only be achieved with custom hardware, and Inferentia is AWS’s first step to get in to this game,” Mueller told TechCrunch. As he pointed out, Google has a 2-3 year head start with its TPU infrastructure.

Inferentia supports popular data types like INT8 and FP16, along with mixed precision. What’s more, it supports multiple machine learning frameworks, including TensorFlow, Caffe2 and ONNX.

Of course, being an Amazon product, it will also work with popular AWS services such as EC2 and SageMaker, as well as the new Amazon Elastic Inference service announced today.

While the chip was announced today, AWS CEO Andy Jassy indicated it won’t actually be available until next year.


November 28, 2018

AWS launches a managed blockchain service

It was only a year ago that AWS CEO Andy Jassy said that he wasn’t all that interested in blockchain services. Clearly something has changed over the course of the last year, because today the company is launching two new blockchain services: Amazon Quantum Ledger Database (QLDB) and Amazon Managed Blockchain.

As the name implies, Amazon Managed Blockchain is a managed blockchain service. It supports Ethereum and Hyperledger Fabric.

“This service is going to make it much easier for you to use the two most popular blockchain frameworks,” said AWS CEO Andy Jassy. He noted that companies tend to use Hyperledger Fabric when they know the number of members in their blockchain network and want robust private operations and capabilities. AWS promises that the service will scale to thousands of applications and will allow users to run millions of transactions (though the company didn’t say with what kind of latency).

Support for Hyperledger Fabric is available today. Ethereum support is launching a few months from now.

Getting started with Managed Blockchain is a matter of using the AWS Console and configuring nodes, adding members and deploying applications.

“When we heard people saying ‘blockchain,’ we felt like there was this weird convoluting and conflating of what they really wanted,” said Jassy. “And as we spent time working with customers and figuring out the jobs they were really trying to solve, this is what we think people are trying to do with blockchain.”


November 28, 2018

AWS launches new time series database

AWS announced a new time series database today at AWS re:Invent in Las Vegas. The new product, called Amazon Timestream, is a fully managed database designed to track how items change over time, which can be particularly useful for Internet of Things scenarios.

“With time series data each data point consists of a timestamp and one or more attributes and it really measures how things change over time and helps drive real time decisions,” AWS CEO Andy Jassy explained.
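To make that data model concrete, here is a plain-Python illustration of the kind of timestamp-plus-attributes records and time-window queries Jassy is describing. It is a generic sketch, not the new service’s API, and the device and measure names are made up.

```python
# timeseries_sketch.py -- the time series data model in miniature: each
# point is a timestamp plus one or more attributes, and value comes from
# asking how a measure changes over a window of time.
from datetime import datetime, timedelta

points = [
    {"time": datetime(2018, 11, 28, 10, 0), "device": "sensor-1", "temp_c": 21.4},
    {"time": datetime(2018, 11, 28, 10, 1), "device": "sensor-1", "temp_c": 21.9},
    {"time": datetime(2018, 11, 28, 10, 2), "device": "sensor-1", "temp_c": 23.1},
]

# A typical query: average temperature over the last five minutes.
cutoff = datetime(2018, 11, 28, 10, 2) - timedelta(minutes=5)
recent = [p["temp_c"] for p in points if p["time"] >= cutoff]
print(sum(recent) / len(recent))
```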

He sees a problem, though, with existing open-source and commercial solutions, which he says don’t scale well and are hard to manage. This is, of course, exactly the kind of problem a cloud service like AWS likes to solve.

Not surprisingly, with customers looking for a good time series database solution, AWS decided to create one itself.

Jassy said AWS built Timestream from the ground up, with an architecture that organizes data by time intervals and enables time-series-specific data compression, which leads to less scanning and faster performance.

He claims it will be a thousand times faster at a tenth of the cost, and of course it scales up and down as required and includes the analytics capabilities you need to understand the data you are tracking.

This new service is available across the world starting today.


November 28, 2018

AWS Lake Formation makes setting up data lakes easier

The concept of data lakes has been around for a long time, but setting up one of these systems, which store vast amounts of raw data in their native formats, has never been easy. AWS wants to change that with the launch of AWS Lake Formation. At its core, this new service, which is available today, allows developers to create a secure data lake within a few days.

While “a few days” may still sound like a long time in this age of instant gratification, it’s nothing in the world of enterprise software.

“Everybody is excited about data lakes,” said AWS CEO Andy Jassy in today’s AWS re:Invent keynote. “People realize that there is significant value in moving all that disparate data that lives in your company in different silos and make it much easier by consolidating it in a data lake.”

Setting up a data lake today means you have to, among other things, configure your storage (on AWS, that means S3 buckets), move your data, add metadata and register it all in a catalog. Then you have to clean up that data and set up the right security policies for the data lake. “This is a lot of work and for most companies, it takes them several months to set up a data lake. It’s frustrating,” said Jassy.

Lake Formation is meant to handle all of these complications with just a few clicks. It sets up the right tags and cleans up and dedupes the data automatically. And it provides admins with a list of security policies to help secure that data.
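For a sense of what that central management looks like in practice, here is a hedged boto3 sketch of registering an S3 location with the lake and granting an analyst access to a cataloged table. The calls reflect the lakeformation client as it later shipped rather than anything demonstrated in the keynote, and the ARNs, database and table names are placeholders.

```python
# lakeformation_sketch.py -- registering data lake storage and granting
# access from a central set of permissions. All names are hypothetical.
import boto3

lf = boto3.client("lakeformation")

# Register an existing S3 bucket as storage that belongs to the data lake.
lf.register_resource(
    ResourceArn="arn:aws:s3:::my-data-lake-bucket",
    UseServiceLinkedRole=True,
)

# Grant an analyst role SELECT access to one cataloged table.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analyst"},
    Resource={"Table": {"DatabaseName": "sales", "Name": "orders"}},
    Permissions=["SELECT"],
)
```

The point of the service is that these permissions live in one place instead of being scattered across bucket policies and IAM rules.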

“This is a step-level change for how easy it is to set up data lakes,” said Jassy.

