Jan 06, 2020

Despite JEDI loss, AWS retains dominant market position

AWS took a hard blow last year when it lost the $10 billion, decade-long JEDI cloud contract to rival Microsoft. Yet even without that mega deal for building out the nation’s Joint Enterprise Defense Infrastructure, the company remains fully in control of the cloud infrastructure market — and it intends to fight that decision.

In fact, AWS still owns almost twice as much cloud infrastructure market share as Microsoft, its closest rival. While the two will battle over the next decade for big contracts like JEDI, for now, AWS doesn’t have much to worry about.

There was a lot more to AWS’s year than simply losing JEDI. Per usual, the company closed out the year with a flurry of announcements and enhancements to its vast product set. Among the more interesting moves were a push to the edge, a growing seriousness about the chip business and a big dose of machine learning product announcements.

The fact is that AWS has such market momentum now, it’s a legitimate question to ask if anyone, even Microsoft, can catch up. The market is continuing to expand though, and the next battle is for that remaining market share. AWS CEO Andy Jassy spent more time than in the past trashing Microsoft at 2019’s re:Invent customer conference in December, imploring customers to move to the cloud faster and showing that his company is preparing for a battle with its rivals in the years ahead.

Numbers, please

AWS closed 2019 on a $36 billion run rate, with quarterly revenue growing from $7.43 billion in its first report in January to $9 billion in its most recent report in October. Believe it or not, according to CNBC, that number failed to meet analysts’ expectations of $9.1 billion, but it still accounted for 13% of Amazon’s revenue in the quarter.

Regardless, AWS is a juggernaut, which is fairly amazing when you consider that it started as a side project for Amazon.com in 2006. In fact, if AWS were a stand-alone company, it would be a substantial business. While growth slowed a bit last year, that’s inevitable when you get as large as AWS, says John Dinsdale, VP, chief analyst and general manager at Synergy Research, a firm that follows all aspects of the cloud market.

“This is just math and the law of large numbers. On average over the last four quarters, it has incremented its revenues by well over $500 million per quarter. So it has grown its quarterly revenues by well over $2 billion in a twelve-month period,” he said.

Dinsdale added, “To put that into context, this growth in quarterly revenue is bigger than Google’s total revenues in cloud infrastructure services. In a very large market that is growing at over 35% per year, AWS market share is holding steady.”

Dinsdale says the cloud infrastructure market didn’t quite break $100 billion last year; even without full Q4 results, his firm’s models project a total of around $95 billion, up 37% over 2018. AWS has more than a third of that. Microsoft is way back at around 17%, with Google in third at around 8 or 9%.

While this is from Q1, it illustrates the relative positions of companies in the cloud market. Chart: Synergy Research

JEDI disappointment

It would be hard to do any year-end review of AWS without discussing JEDI. From the moment the Department of Defense announced its decade-long, $10 billion cloud RFP, the process has been one big controversy after another.

Dec 03, 2019

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers, which supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes Service.

Like spot instances for the EC2 compute platform, Fargate Spot pricing is significantly cheaper, for both compute and memory, than regular Fargate pricing. In return, though, you have to accept that your tasks may be terminated when AWS needs additional capacity. While that means Fargate Spot may not be a fit for every workload, there are plenty of applications that can easily handle an interruption.

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here, and an hour of CPU time on Fargate Spot costs just $0.01245364 (yes, AWS is pretty precise there) compared to $0.04048 for the on-demand price, a discount of roughly 69%.

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot capacity, after all. You simply write a config that says, for example, that you want to run 70% of your capacity on on-demand instances and the rest on spot. The scheduler will then maintain that split as instances come and go, and if no spot capacity is available, it will move workloads to on-demand instances and back to spot once capacity is available again.
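Here’s a minimal sketch, assuming boto3’s ECS API, of what such a strategy can look like when attached to a cluster; the cluster name is hypothetical, and the weights mirror the 70/30 idea, split here between regular Fargate and Fargate Spot:

```python
# Sketch: attach a default capacity provider strategy to an ECS cluster
# so the scheduler splits tasks between regular Fargate and Fargate Spot.
import boto3

ecs = boto3.client("ecs")

ecs.put_cluster_capacity_providers(
    cluster="my-cluster",  # hypothetical cluster name
    capacityProviders=["FARGATE", "FARGATE_SPOT"],
    defaultCapacityProviderStrategy=[
        # Run a baseline of 1 task on regular Fargate, then distribute
        # remaining tasks roughly 70/30 in favor of on-demand capacity.
        {"capacityProvider": "FARGATE", "base": 1, "weight": 7},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
)
```

Services launched into the cluster without an explicit strategy then inherit this split automatically.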

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outposts, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.

Dec 03, 2019

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to run analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. To handle this, AQUA is essentially a hardware-accelerated cache that promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS also today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs, 384 GiB of memory and up to 64 TB of storage. You can build clusters of these with up to 128 instances.
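For those who want to try the new node type, here is a hedged sketch using boto3’s Redshift API; identifiers and credentials are placeholders:

```python
# Sketch: provision a small cluster of the new RA3 nodes with boto3.
import boto3

redshift = boto3.client("redshift")

redshift.create_cluster(
    ClusterIdentifier="analytics-ra3",   # placeholder identifier
    NodeType="ra3.16xlarge",             # the 48 vCPU / 384 GiB node size
    NumberOfNodes=4,                     # clusters can grow to 128 nodes
    MasterUsername="admin",              # placeholder credentials
    MasterUserPassword="REPLACE_ME",
    DBName="warehouse",
)
```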

Dec 02, 2019

CircleCI launches improved AWS support

For about a year now, continuous integration and delivery service CircleCI has offered Orbs, a way to easily reuse commands and integrations with third-party services. Unsurprisingly, some of the most popular Orbs focus on AWS, as that’s where most of its users are either testing their code or deploying it. Today, just in time for AWS’s annual re:Invent developer conference in Las Vegas, the company announced that it has added Orb support for the AWS Serverless Application Model (SAM), which makes setting up automated CI/CD pipelines for testing and deploying to AWS Lambda significantly easier.

In total, the company says, more than 11,000 organizations have started using Orbs since the feature launched a year ago. Among the AWS-centric Orbs are those for building and updating images for the Amazon Elastic Container Service and the Elastic Container Service for Kubernetes (EKS), for example, as well as AWS CodeDeploy support, an Orb for installing and configuring the AWS command-line interface, an Orb for working with the S3 storage service and more.

“We’re just seeing a momentum of more and more companies being ready to adopt [managed services like Lambda, ECS and EKS], so this became really the ideal time to do most of the work with the product team at AWS that manages their serverless ecosystem and to add in this capability to leverage that serverless application model and really have this out of the box CI/CD flow ready for users who wanted to start adding these into Lambda,” CircleCI VP of business development Tom Trahan told me. “I think when Lambda was in its earlier days, a lot of people would use it and not necessarily follow the same software patterns and delivery flow that they might have with their traditional software. As they put more and more into Lambda and are really putting a lot more of what I would call ‘production quality code’ out there to leverage, they realize they do want to have that same software delivery capability and discipline for Lambda as well.”

Trahan stressed that he’s still talking about early adopters and companies that started out as cloud-native companies, but these days, this group includes a lot of traditional companies, as well, that are now rapidly going through their own digital transformations.

Nov 26, 2019

New Amazon capabilities put machine learning in reach of more developers

Today, Amazon announced a new approach that it says will put machine learning technology in reach of more developers and line of business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas.

While the company offers plenty of tools for data scientists to build machine learning models and to process, store and visualize data, it wants to put that capability directly in the hands of developers with the help of the popular database query language, SQL.

By taking advantage of tools like Amazon QuickSight, Aurora and Athena in combination with SQL queries, developers can get much more direct access to machine learning models and underlying data without any additional coding, says Matt Wood, VP of artificial intelligence at AWS.

“This announcement is all about making it easier for developers to add machine learning predictions to their products and their processes by integrating those predictions directly with their databases,” Wood told TechCrunch.

For starters, Wood says, developers can take advantage of Aurora, the company’s MySQL- (and Postgres-) compatible database, to build a simple SQL query into an application, which will automatically pull the data into the application and run whatever machine learning model the developer associates with it.

The second piece involves Athena, the company’s serverless query service. As with Aurora, developers can write a SQL query — in this case, against any data store — and based on a machine learning model they choose, return a set of data for use in an application.
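As a rough illustration of the Athena piece, here is a sketch using boto3; the function name, SageMaker endpoint, database and result bucket are all hypothetical, and the inline external-function form reflects how the announced Athena ML integration exposes models to SQL:

```python
# Sketch: an Athena query that calls a SageMaker model inline.
import boto3

athena = boto3.client("athena")

query = """
USING EXTERNAL FUNCTION predict_churn(monthly_spend DOUBLE, tenure INT)
RETURNS DOUBLE
SAGEMAKER 'churn-endpoint'
SELECT customer_id, predict_churn(monthly_spend, tenure) AS churn_risk
FROM customers
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "sales"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
)
```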

The final piece is QuickSight, which is Amazon’s data visualization tool. Using one of the other tools to return some set of data, developers can use that data to create visualizations based on it inside whatever application they are creating.

“By making sophisticated ML predictions more easily available through SQL queries and dashboards, the changes we’re announcing today help to make ML more usable and accessible to database developers and business analysts. Now anyone who can write SQL can make — and importantly use — predictions in their applications without any custom code,” Amazon’s Matt Asay wrote in a blog post announcing these new capabilities.

Asay added that this approach is far easier than what developers had to do in the past to achieve this. “There is often a large amount of fiddly, manual work required to take these predictions and make them part of a broader application, process or analytics dashboard,” he wrote.

As an example, Wood offers a lead-scoring model you might use to pick the most likely sales targets to convert. “Today, in order to do lead scoring you have to go off and wire up all these pieces together in order to be able to get the predictions into the application,” he said. With this new capability, you can get there much faster.

“Now, as a developer I can just say that I have this lead scoring model which is deployed in SageMaker, and all I have to do is write literally one SQL statement that I do all day long into Aurora, and I can start getting back that lead scoring information. And then I just display it in my application and away I go,” Wood explained.
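In code, the flow Wood describes might look something like the following; this is a minimal sketch assuming the Aurora Data API via boto3, with a hypothetical leadScore() SQL function standing in for the SageMaker-backed function a developer would define, and placeholder ARNs:

```python
# Sketch: one ordinary SQL statement returns lead scores; the model
# invocation hides behind the (hypothetical) leadScore() function.
import boto3

rds_data = boto3.client("rds-data")

response = rds_data.execute_statement(
    resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:crm-cluster",        # placeholder
    secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:crm-creds",  # placeholder
    database="crm",
    sql="SELECT id, name, leadScore(industry, deal_size, last_contact) AS score"
        " FROM leads ORDER BY score DESC LIMIT 10",
)

for record in response["records"]:
    print(record)
```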

As for the machine learning models, these can come pre-built from Amazon, be developed by an in-house data science team or purchased in a machine learning model marketplace on Amazon, says Wood.

Today’s announcements from Amazon are designed to simplify machine learning and data access, and reduce the amount of coding to get from query to answer faster.

Nov 25, 2019

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and just 1MB of RAM.

Until now, you needed at least 100MB of RAM and an Arm Cortex-A-class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All the device needs to do is detect the wake word that triggers the Alexa functionality, which is a comparably simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you’re saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they need connection management, data retention policies and the like, but can instead rely on this new functionality to handle that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.
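As a rough sketch of what that looks like in practice, assuming the Python Stream Manager SDK that ships with Greengrass (class and parameter names here are best-effort assumptions, and the stream and Kinesis names are hypothetical):

```python
# Sketch: define a managed local stream that Greengrass exports to
# Kinesis, instead of hand-rolling connection and retention logic.
from stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    StrategyOnFull,
    StreamManagerClient,
)

client = StreamManagerClient()

client.create_message_stream(
    MessageStreamDefinition(
        name="SensorReadings",  # hypothetical stream name
        strategy_on_full=StrategyOnFull.OverwriteOldestData,
        export_definition=ExportDefinition(
            kinesis=[KinesisConfig(
                identifier="SensorExport",
                kinesis_stream_name="sensor-readings",  # hypothetical Kinesis stream
            )]
        ),
    )
)

# Appended messages are buffered locally and uploaded by Stream Manager.
client.append_message(stream_name="SensorReadings", data=b'{"temp": 21.5}')
```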

Also new for AWS IoT Greengrass is fleet provisioning, which makes it easier for businesses to quickly and automatically set up lots of new devices, as well as secure tunneling for AWS IoT Device Management, which makes it easier for developers to remotely access a device and troubleshoot it. In addition, AWS IoT Core now features configurable endpoints.

Nov 07, 2019

AWS announces new savings plans to reduce complexity of reserved instances

Reserved Instances (RIs) have given companies that expect to use a certain level of AWS infrastructure resources a way to get some cost certainty. But as AWS’ Jeff Barr points out, they are on the complex side. To fix that, the company announced a new model called Savings Plans.

“Today we are launching Savings Plans, a new and flexible discount model that provides you with the same discounts as RIs, in exchange for a commitment to use a specific amount (measured in dollars per hour) of compute power over a one or three year period,” Barr wrote in a blog post announcing the new program.

Amazon charges customers in a couple of ways. First, there is an on-demand price, which is basically the equivalent of the rack rate at a hotel. You are going to pay more for this because you’re walking up and ordering it on the fly.

Most organizations know they are going to need a certain level of resources over a period of time, and in these cases, they can save some money by buying in bulk up front. This gives them cost certainty as an organization, and it helps Amazon because it knows it’s going to have a certain level of usage and can plan accordingly.

While Reserved Instances aren’t going away yet, it sounds like Amazon is trying to steer customers to the new Savings Plans. “We will continue to sell RIs, but Savings Plans are more flexible and I think many of you will prefer them,” Barr wrote.

The Savings Plans come in two flavors. Compute Savings Plans provide up to 66% savings, comparable to Convertible RIs. The aspect customers should like is that the savings apply broadly across AWS compute usage, and you can even move workloads between regions and maintain the same discounted rate.

The other is an EC2 Instance Savings Plan. With this one, comparable to a Standard RI, you can save up to 72% over the on-demand price, but you are limited to a single region. It does offer a measure of flexibility, though, allowing you to select different sizes of the same instance family or even switch operating systems from Windows to Linux without affecting your discount within your region of choice.
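Before committing, you can ask Cost Explorer what it would recommend based on your recent usage; here is a small sketch, assuming boto3’s Cost Explorer API:

```python
# Sketch: fetch a Compute Savings Plan recommendation from Cost Explorer
# based on the last 30 days of usage, for a one-year, no-upfront plan.
import boto3

ce = boto3.client("ce")

rec = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

details = rec["SavingsPlansPurchaseRecommendation"].get(
    "SavingsPlansPurchaseRecommendationDetails", []
)
for d in details:
    print(d["HourlyCommitmentToPurchase"], d["EstimatedSavingsAmount"])
```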

You can sign up today through the AWS Cost Explorer.

Oct 15, 2019

Amazon migrates more than 100 consumer services from Oracle to AWS databases

AWS and Oracle love to take shots at each other, but as much as Amazon has knocked Oracle over the years, it had to admit that it was in fact an Oracle customer. Today, in a blog post, the company announced that it has finished shedding Oracle for AWS databases and has effectively turned off its final Oracle database.

The move involved 75 petabytes of internal data stored in nearly 7,500 Oracle databases, according to the company. “I am happy to report that this database migration effort is now complete. Amazon’s Consumer business just turned off its final Oracle database (some third-party applications are tightly bound to Oracle and were not migrated),” AWS’s Jeff Barr wrote in the company blog post announcing the migration.

Over the last several years, the company has been working to move off of Oracle databases, but it’s no easy task to migrate projects at Amazon’s scale. Barr wrote that there were lots of reasons the company wanted to make the move. “Over the years we realized that we were spending too much time managing and scaling thousands of legacy Oracle databases. Instead of focusing on high-value differentiated work, our database administrators (DBAs) spent a lot of time simply keeping the lights on while transaction rates climbed and the overall amount of stored data mounted,” he wrote.

More than 100 consumer services have been moved to AWS databases, including customer-facing tools like Alexa, Amazon Prime and Twitch, among others. It also moved internal tools like AdTech, its fulfillment system, external payments and ordering. These are not minor matters. They are the heart and soul of Amazon’s operations.

Each team moved its Oracle database to an AWS database service such as Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (RDS) or Amazon Redshift. Each group was allowed to choose the service it wanted, based on its individual needs and requirements.

Oracle declined to comment on this story.

 

Oct 04, 2019

Annual Extra Crunch members can receive $1,000 in AWS credits

We’re excited to announce a new partnership with Amazon Web Services for annual members of Extra Crunch. Starting today, qualified annual members can receive $1,000 in AWS credits; you must also be a startup founder to claim this Extra Crunch community perk.

AWS is the premier service for your application hosting needs, and we want to make sure our community is well-resourced to build. We understand that hosting and infrastructure costs can be a major hurdle for tech startups, and we’re hoping that this offer will help better support your team.

What’s included in the perk:

  • $1,000 in AWS Promotional Credit valid for 1 year
  • 2 months of AWS Business Support
  • 80 credits for self-paced labs

Applications are processed within 7-10 days of receipt. Companies may not be eligible for AWS Promotional Credits if they previously received a similar or greater amount of credit. Companies that previously received a lower credit may be eligible to be “topped up” to a higher credit amount.

In addition to the AWS community perk, Extra Crunch members also get access to how-tos and guides on company building, intelligence on what’s happening in the startup ecosystem, stories about founders and exits, transcripts from panels at TechCrunch events, discounts on TechCrunch events, no banner ads on TechCrunch.com and more. To see a full list of the types of articles you get with Extra Crunch, head here.

You can sign up for annual Extra Crunch membership here.

Once you are signed up, you’ll receive a welcome email with a link to the AWS offer. If you are already an annual Extra Crunch member, you will receive an email with the offer at some point today. If you are currently a monthly Extra Crunch subscriber and want to upgrade to annual in order to claim this deal, head over to the “my account” section on TechCrunch.com and click the “upgrade” button.

This is one of several new community perks we’ve been working on for Extra Crunch members. Extra Crunch members also get 20% off all TechCrunch event tickets (email extracrunch@techcrunch.com with the event name to receive a discount code for event tickets). You can learn more about our events lineup here. You also can read about our Brex community perk here.

Aug 26, 2019

Nvidia and VMware team up to make GPU virtualization easier

Nvidia today announced that it has been working with VMware to bring its virtual GPU technology (vGPU) to VMware’s vSphere and VMware Cloud on AWS. The company’s core vGPU technology isn’t new, but its new vComputeServer technology now supports server virtualization, enabling enterprises to run their hardware-accelerated AI and data science workloads in environments like VMware’s vSphere.

Traditionally (as far as that’s a thing in AI training), GPU-accelerated workloads have tended to run on bare-metal servers, which are typically managed separately from the rest of a company’s servers.

“With vComputeServer, IT admins can better streamline management of GPU accelerated virtualized servers while retaining existing workflows and lowering overall operational costs,” Nvidia explains in today’s announcement. This also means that businesses will reap the cost benefits of GPU sharing and aggregation, thanks to the improved utilization this technology promises.

Note that vComputeServer works with VMware vSphere, vCenter and vMotion, as well as VMware Cloud. Indeed, the two companies are using the same vComputeServer technology to bring accelerated GPU services to VMware Cloud on AWS. This allows enterprises to move their containerized applications from their own data centers to the cloud as needed, and then hook into AWS’s other cloud-based technologies.


“From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line,” said Nvidia founder and CEO Jensen Huang. “Together with VMware, we’re designing the most advanced and highest performing GPU-accelerated hybrid cloud infrastructure to foster innovation across the enterprise.”
