Mar
26
2020
--

Tech giants should let startups defer cloud payments

Google, Amazon and Microsoft are the landlords. Amidst the coronavirus economic crisis, startups need a break from paying rent. They’re in a cash crunch. Revenue has stopped flowing in, capital markets like venture debt are hesitant, and startups and small-to-medium sized businesses are at risk of having to lay off huge numbers of employees, shut down, or both.

Meanwhile, the tech giants are cash rich. Their success this decade means they’re able to weather the storm for a few months. Their customers cannot.

Cloud infrastructure costs are among many startups’ top expenses besides payroll. The option to pay these cloud bills later could save some from going out of business or axing huge parts of their staff. Both would hurt the tech industry, the economy and the individuals laid off. But most worryingly for the giants, it could destroy their customer base.

The mass layoffs have already begun. Soon we’re sure to start hearing about sizable companies shutting down, upended by COVID-19. But there’s still an opportunity to stop a larger bloodbath from ensuing.

That’s why I have a proposal: cloud relief.

The platform giants should let startups and small businesses defer their cloud infrastructure payments for three to six months until they can pay them back in installments. Amazon AWS, Google Cloud, Microsoft Azure, these companies’ additional infrastructure products, and other platform providers should let customers pause payment until the worst of the first wave of the COVID-19 economic disruption passes. Profitable SaaS providers like Salesforce could give customers an extension too.

There are plenty of altruistic reasons to do this. They have the resources to help businesses in need. We all need to support each other in these tough times. This could protect tons of families. Some of these startups are providing important services to the public and even discounting them, thereby ramping up their bills while decreasing revenue.

Then there are the PR reasons. After years of techlash and antitrust scrutiny, here’s the chance for the giants to prove their size can be beneficial to the world. Recruiters could use it as a talking point. “We’re the company that helped save Silicon Valley.” There’s an explanation for them squirreling away so much cash: the rainy day has finally arrived.

But the capitalistic truth, and the story they could sell to Wall Street, is that it’s not good for our business if our customers go out of business. Look at what happened to infrastructure providers in the dot-com crash. When tons of startups vaporized, so did the profits for those selling them hosting and tools. Any government stimulus businesses receive would be better spent on paying employees than on paying cloud companies that aren’t in danger. Saving one future Netflix from shutting down could cover any short-term loss from helping 100 other businesses.

This isn’t a handout. These startups will still owe the money. They’d just be able to pay it a little later, spread out over their monthly bills for a year or so. Once mass shelter-in-place orders subside, businesses can operate at least a little closer to normal, investors can get less cautious and customers will have the cash they need to pay their dues. Plus interest, if necessary.

Meanwhile, they’ll be locked in and loyal customers for the foreseeable future. Cloud vendors could gate the deferment to only customers that have been with them for X amount of months or that have already spent Y amount on the platform. The vendors also could offer the deferment on the condition that customers add a year or more to their existing contracts. Founders will remember who gave them the benefit of the doubt.


Consider it a marketing expense. Platforms often offer discounts or free trials to new customers. Now it’s existing customers that need a reprieve. Instead of airport ads, the giants could spend the money ensuring they’ll still have plenty of developers building atop them by the end of 2020.

Beyond deferred payment, platforms could just push the due date on all outstanding bills to three or six months from now. Alternatively, they could offer a deep discount, such as 50% off for three months, if they didn’t want to deal with accruing debt and then servicing it. Customers with multi-year contracts could be offered the opportunity to downgrade or renegotiate their contracts without penalties. Any of these might require giving sales quota forgiveness to their account executives.

It would likely be far too complicated and risky to accept equity in lieu of cash, a cut of revenue going forward or to provide loans or credit lines to customers. The clearest and simplest solution is to let startups skip a few payments, then pay more every month later until they clear their debt. When asked for comment or about whether they’re considering payment deferment options, Microsoft declined, and Amazon and Google did not respond.

To be clear, administering payment deferment won’t be simple or free. There are sure to be holes that cloud economists can poke in this proposal, but my goal is to get the conversation started. It could require the giants to change their earnings guidance. Rewriting deals with significantly sized customers will take work on both ends, and there’s a chance of breach of contract disputes. Giants would face the threat of customers recklessly using cloud resources before shutting down or skipping town.

Most taxing would be determining and enforcing the criteria of who’s eligible. The vendors would need to lay out which customers are too big so they don’t accidentally give a cloud-intensive but healthy media company a deferment they don’t need. Businesses that get questionably excluded could make a stink in public. Executing on the plan will require staff when giants are stretched thin trying to handle logistics disruptions, misinformation and accelerating work-from-home usage.

Still, this is the moment when the fortunate need to lend a hand to the vulnerable. Not a hand out, but a hand up. Companies with billions in cash in their coffers could save those struggling to pay salaries. All the fundraisers and info centers and hackathons are great, but this is how the tech giants can live up to their lofty mission statements.

We all live in the cloud now. Don’t evict us. #CloudRelief

Thanks to Falon Fatemi, Corey Quinn, Ilya Fushman, Jason Kim, Ilya Sukhar and Michael Campbell for their ideas and feedback on this proposal.

Mar
18
2020
--

Big opening for startups that help move entrenched on-prem workloads to the cloud

AWS CEO Andy Jassy showed signs of frustration at his AWS re:Invent keynote address in December.

Customers weren’t moving to the cloud nearly fast enough for his taste, and he prodded them to move along. Some of their hesitation, as Jassy pointed out, was due to institutional inertia, but some of it also was due to a technology problem related to getting entrenched, on-prem workloads to the cloud.

When a challenge of this magnitude presents itself and you have the head of the world’s largest cloud infrastructure vendor imploring customers to move faster, you can be sure any number of players will start paying attention.

Sure enough, cloud infrastructure vendors have developed new migration solutions to help break that big data logjam. Large systems integrators like Accenture and Deloitte are also happy to help your company deal with migration issues, but this opportunity also offers a big opening for startups aiming to solve the hard problems associated with moving certain workloads to the cloud.

Think about problems like getting data off of a mainframe and into the cloud or moving an on-prem data warehouse. We spoke to a number of experts to figure out where this migration market is going and if the future looks bright for cloud-migration startups.

Cloud-migration blues

It’s hard to nail down exactly the percentage of workloads that have been moved to the cloud at this point, but most experts agree there’s still a great deal of growth ahead. Some of the more optimistic projections have pegged it at around 20%, with the U.S. far ahead of the rest of the world.

Mar
11
2020
--

AWS launches Bottlerocket, a Linux-based OS for container hosting

AWS has launched its own open-source operating system for running containers on both virtual machines and bare metal hosts. Bottlerocket, as the new OS is called, is basically a stripped-down Linux distribution that’s akin to projects like CoreOS’s now-defunct Container Linux and Google’s container-optimized OS. The OS is currently in its developer preview phase, but you can test it as an Amazon Machine Image for EC2 (and by extension, under Amazon EKS, too).
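For the curious, here is a minimal sketch of what taking the preview for a spin could look like with boto3. Note that the AMI name filter and the instance type are assumptions for illustration, not documented specifics; check the project’s docs for the actual preview image IDs in your region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Assumed name pattern; the real preview AMI names may differ.
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[{"Name": "name", "Values": ["bottlerocket-*"]}],
)
latest = max(images["Images"], key=lambda img: img["CreationDate"])

# Launch a single test instance from the newest matching image.
ec2.run_instances(
    ImageId=latest["ImageId"],
    InstanceType="t3.small",  # arbitrary choice for a smoke test
    MinCount=1,
    MaxCount=1,
)
```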

As AWS chief evangelist Jeff Barr notes in his announcement, Bottlerocket supports Docker images and images that conform to the Open Container Initiative image format, which means it’ll basically run all Linux-based containers you can throw at it.

One feature that makes Bottlerocket stand out is that it does away with a package-based update system. Instead, it uses an image-based model that, as Barr notes, “allows for a rapid & complete rollback if necessary.” The idea here is that this makes updates easier. At the core of this update process is “The Update Framework,” an open-source project hosted by the Cloud Native Computing Foundation.

AWS says it will provide three years of support (after General Availability) for its own builds of Bottlerocket. As of now, the project is very much focused on AWS, of course, but the code is available on GitHub and chances are we will see others expand on AWS’ work.

The company is launching the project in cooperation with a number of partners, including Alcide, Armory, CrowdStrike, Datadog, New Relic, Sysdig, Tigera, Trend Micro and Weaveworks.

“Container-optimized operating systems will give dev teams the additional speed and efficiency to run higher throughput workloads with better security and uptime,” said Michael Gerstenhaber, director of Product Management at Datadog. “We are excited to work with AWS on Bottlerocket, so that as customers take advantage of the increased scale they can continue to monitor these ephemeral environments with confidence.”

Jan
06
2020
--

Despite JEDI loss, AWS retains dominant market position

AWS took a hard blow last year when it lost the $10 billion, decade-long JEDI cloud contract to rival Microsoft. Yet even without that mega deal for building out the nation’s Joint Enterprise Defense Infrastructure, the company remains fully in control of the cloud infrastructure market — and it intends to fight that decision.

In fact, AWS still owns almost twice as much cloud infrastructure market share as Microsoft, its closest rival. While the two will battle over the next decade for big contracts like JEDI, for now, AWS doesn’t have much to worry about.

There was a lot more to AWS’s year than simply losing JEDI. Per usual, the company closed the year with a flurry of announcements and enhancements to its vast product set. Among the more interesting moves were a shift to the edge, a more serious push into the chip business and a big dose of machine learning product announcements.

The fact is that AWS has such market momentum now, it’s a legitimate question to ask if anyone, even Microsoft, can catch up. The market is continuing to expand though, and the next battle is for that remaining market share. AWS CEO Andy Jassy spent more time than in the past trashing Microsoft at 2019’s re:Invent customer conference in December, imploring customers to move to the cloud faster and showing that his company is preparing for a battle with its rivals in the years ahead.

Numbers, please

AWS closed 2019 on a $36 billion run rate, growing from $7.43 billion in revenue in its first quarterly report in January to $9 billion in its most recent earnings report in October. Believe it or not, according to CNBC, that number failed to meet analysts’ expectations of $9.1 billion, but still accounted for 13% of Amazon’s revenue in the quarter.

Regardless, AWS is a juggernaut, which is fairly amazing when you consider that it started as a side project for Amazon.com in 2006. In fact, if AWS were a stand-alone company, it would be a substantial business. While growth slowed a bit last year, that’s inevitable when you get as large as AWS, says John Dinsdale, VP, chief analyst and general manager at Synergy Research, a firm that follows all aspects of the cloud market.

“This is just math and the law of large numbers. On average over the last four quarters, it has incremented its revenues by well over $500 million per quarter. So it has grown its quarterly revenues by well over $2 billion in a twelve-month period,” he said.

Dinsdale added, “To put that into context, this growth in quarterly revenue is bigger than Google’s total revenues in cloud infrastructure services. In a very large market that is growing at over 35% per year, AWS market share is holding steady.”
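A quick back-of-the-envelope check of those figures, using only the numbers reported above:

```python
# All figures in $ billions, taken from the reporting above.
q_recent = 9.0               # most recent quarterly revenue (Oct 2019 report)
run_rate = q_recent * 4      # -> 36.0, the annualized "run rate"

quarterly_increment = 0.5    # "well over $500 million per quarter"
yearly_growth = quarterly_increment * 4  # -> 2.0, i.e. quarterly revenue
                                         # grew by $2B+ over twelve months

print(run_rate, yearly_growth)
```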

Dinsdale says the cloud infrastructure market didn’t quite break $100 billion last year, but even without full Q4 results, his firm’s models project a total of around $95 billion, up 37% over 2018. AWS has more than a third of that. Microsoft is way back at around 17% with Google in third with around 8 or 9%.

While this chart is from Q1, it illustrates the relative positions of the companies in the cloud market. (Chart: Synergy Research)

JEDI disappointment

It would be hard to do any year-end review of AWS without discussing JEDI. From the moment the Department of Defense announced its decade-long, $10 billion cloud RFP, it has been one big controversy after another.

Dec
03
2019
--

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers that supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes Service.

Like spot instances for the EC2 compute platform, Fargate Spot pricing is significantly cheaper, both for storage and compute, than regular Fargate pricing. In return, though, you have to be able to accept the fact that your instance may get terminated when AWS needs additional capacity. While that means Fargate Spot may not be perfect for every workload, there are plenty of applications that can easily handle an interruption.
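To make “handle an interruption” concrete: when the platform reclaims capacity, a containerized task is sent a stop signal before it is killed. Here is a minimal sketch of a worker that checkpoints and winds down cleanly on SIGTERM; the exact warning window is whatever grace period AWS grants the task.

```python
import signal
import sys
import time

stop_requested = False

def on_sigterm(signum, frame):
    # ECS stops a task by sending SIGTERM first, then SIGKILL after
    # a grace period, so we just flag the work loop to wind down.
    global stop_requested
    stop_requested = True

signal.signal(signal.SIGTERM, on_sigterm)

while not stop_requested:
    # ... process one small unit of work and checkpoint progress ...
    time.sleep(1)

# Flush any in-flight state before the grace period runs out.
sys.exit(0)
```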

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here, and an hour of CPU time on Fargate Spot will only cost $0.01245364 (yes, AWS is pretty precise there) compared to $0.04048 for the on-demand price.
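Using those two published prices, that works out to roughly a 69% discount on CPU time:

```python
spot = 0.01245364    # Fargate Spot, per vCPU-hour
on_demand = 0.04048  # regular Fargate, per vCPU-hour

print(f"{1 - spot / on_demand:.1%}")  # -> 69.2% cheaper
```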

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances, after all. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then keep that capacity on spot as instances come and go, and if there are no spot instances available, it will move it to on-demand instances and back to spot once instances are available again.
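As a rough sketch of what that looks like for Fargate itself, a weighted capacity provider strategy can split a service’s tasks roughly 70/30 between the FARGATE and FARGATE_SPOT providers. The cluster, service and subnet names below are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical service that runs ~70% of its tasks on regular Fargate
# and ~30% on Fargate Spot; the scheduler maintains the ratio.
ecs.create_service(
    cluster="my-cluster",
    serviceName="web",
    taskDefinition="web:1",
    desiredCount=10,
    capacityProviderStrategy=[
        {"capacityProvider": "FARGATE", "weight": 7},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # hypothetical
            "assignPublicIp": "ENABLED",
        }
    },
)
```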

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outposts, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.

Dec
03
2019
--

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to do analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. So to handle this, AQUA is essentially a hardware-accelerated cache and promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out as needed across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS also today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs and 384GiB of memory and up to 64 TB of storage. You can build clusters of these with up to 128 instances.
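For reference, a hedged sketch of spinning up such a cluster with boto3; the identifiers and credentials here are placeholders.

```python
import boto3

redshift = boto3.client("redshift")

# ra3.16xlarge matches the announced specs: 48 vCPUs and 384 GiB of
# memory per node, with clusters of up to 128 nodes.
redshift.create_cluster(
    ClusterIdentifier="analytics-cluster",  # placeholder
    NodeType="ra3.16xlarge",
    NumberOfNodes=4,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",        # supply a real secret
)
```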

Dec
02
2019
--

CircleCI launches improved AWS support

For about a year now, continuous integration and delivery service CircleCI has offered Orbs, a way to easily reuse commands and integrations with third-party services. Unsurprisingly, some of the most popular Orbs focus on AWS, as that’s where most of the company’s developers are either testing their code or deploying it. Today, right in time for AWS’s annual re:Invent developer conference in Las Vegas, the company announced that it has now added Orb support for the AWS Serverless Application Model (SAM), which makes setting up automated CI/CD platforms for testing and deploying to AWS Lambda significantly easier.

In total, the company says, more than 11,000 organizations have started using Orbs since the feature launched a year ago. Among the AWS-centric Orbs are those for building and updating images for the Amazon Elastic Container Service and the Elastic Container Service for Kubernetes (EKS), for example, as well as AWS CodeDeploy support, an Orb for installing and configuring the AWS command line interface, an Orb for working with the S3 storage service and more.

“We’re just seeing a momentum of more and more companies being ready to adopt [managed services like Lambda, ECS and EKS], so this became really the ideal time to do most of the work with the product team at AWS that manages their serverless ecosystem and to add in this capability to leverage that serverless application model and really have this out-of-the-box CI/CD flow ready for users who wanted to start adding these into Lambda,” CircleCI VP of business development Tom Trahan told me. “I think when Lambda was in its earlier days, a lot of people would use it and not necessarily follow the same software patterns and delivery flow that they might have with their traditional software. As they put more and more into Lambda and are really putting a lot more what I would call ‘production quality code’ out there, they realize they do want to have that same software delivery capability and discipline for Lambda as well.”

Trahan stressed that he’s still talking about early adopters and companies that started out as cloud-native companies, but these days, this group includes a lot of traditional companies, as well, that are now rapidly going through their own digital transformations.

Nov
26
2019
--

New Amazon capabilities put machine learning in reach of more developers

Today, Amazon announced a new approach that it says will put machine learning technology in reach of more developers and line of business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas.

While the company offers plenty of tools for data scientists to build machine learning models and to process, store and visualize data, it wants to put that capability directly in the hands of developers with the help of the popular database query language, SQL.

By taking advantage of tools like Amazon QuickSight, Aurora and Athena in combination with SQL queries, developers can have much more direct access to machine learning models and underlying data without any additional coding, says Matt Wood, VP of artificial intelligence at AWS.

“This announcement is all about making it easier for developers to add machine learning predictions to their products and their processes by integrating those predictions directly with their databases,” Wood told TechCrunch.

For starters, Wood says developers can take advantage of Aurora, the company’s MySQL (and Postgres)-compatible database to build a simple SQL query into an application, which will automatically pull the data into the application and run whatever machine learning model the developer associates with it.

The second piece involves Athena, the company’s serverless query service. As with Aurora, developers can write a SQL query — in this case, against any data store — and based on a machine learning model they choose, return a set of data for use in an application.

The final piece is QuickSight, which is Amazon’s data visualization tool. Using one of the other tools to return some set of data, developers can use that data to create visualizations based on it inside whatever application they are creating.

“By making sophisticated ML predictions more easily available through SQL queries and dashboards, the changes we’re announcing today help to make ML more usable and accessible to database developers and business analysts. Now anyone who can write SQL can make — and importantly use — predictions in their applications without any custom code,” Amazon’s Matt Asay wrote in a blog post announcing these new capabilities.

Asay added that this approach is far easier than what developers had to do in the past to achieve this. “There is often a large amount of fiddly, manual work required to take these predictions and make them part of a broader application, process or analytics dashboard,” he wrote.

As an example, Wood offers a lead-scoring model you might use to pick the most likely sales targets to convert. “Today, in order to do lead scoring you have to go off and wire up all these pieces together in order to be able to get the predictions into the application,” he said. With this new capability, you can get there much faster.

“Now, as a developer I can just say that I have this lead scoring model which is deployed in SageMaker, and all I have to do is write literally one SQL statement that I do all day long into Aurora, and I can start getting back that lead scoring information. And then I just display it in my application and away I go,” Wood explained.
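For contrast, here is a sketch of the “fiddly, manual” path this replaces: calling a deployed SageMaker endpoint directly from application code via boto3. The endpoint name and feature layout are made up for illustration.

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

# One lead's features, serialized as CSV; the endpoint name and the
# feature columns are hypothetical.
features = "3,12000,0.42"

response = runtime.invoke_endpoint(
    EndpointName="lead-scoring",  # assumed name
    ContentType="text/csv",
    Body=features,
)
score = response["Body"].read().decode()
print(score)  # the lead score, to be wired into the app by hand
```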

As for the machine learning models, these can come pre-built from Amazon, be developed by an in-house data science team or purchased in a machine learning model marketplace on Amazon, says Wood.

Today’s announcements from Amazon are designed to simplify machine learning and data access, and reduce the amount of coding to get from query to answer faster.

Nov
25
2019
--

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an ARM Cortex A-class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All it needs to be able to do is detect the wake word to start the Alexa functionality, which is a comparably simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you’re saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.
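A rough sketch of what that looks like with the Stream Manager SDK for Python, as I understand it; the stream and Kinesis names are hypothetical, and the exact API may differ by SDK version.

```python
from stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    StrategyOnFull,
    StreamManagerClient,
)

client = StreamManagerClient()

# Define a local stream with a retention policy (overwrite oldest data
# when full) that exports automatically to a Kinesis stream.
client.create_message_stream(
    MessageStreamDefinition(
        name="SensorReadings",  # hypothetical
        strategy_on_full=StrategyOnFull.OverwriteOldestData,
        export_definition=ExportDefinition(
            kinesis=[KinesisConfig(
                identifier="KinesisExport",
                kinesis_stream_name="sensor-stream",  # hypothetical
            )]
        ),
    )
)

# Edge code just appends messages; Stream Manager handles buffering,
# retention and the upload to the cloud.
client.append_message(stream_name="SensorReadings", data=b'{"temp": 22.5}')
```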

Also new for AWS IoT Greengrass are fleet provisioning, which makes it easier for businesses to quickly set up lots of new devices automatically, as well as secure tunneling for AWS IoT Device Management, which makes it easier for developers to remotely access a device and troubleshoot it. In addition, AWS IoT Core now features configurable endpoints.

Nov
07
2019
--

AWS announces new savings plans to reduce complexity of reserved instances

Reserved instances (RIs) have provided a mechanism for companies that expect to use a certain level of AWS infrastructure resources to get some cost certainty. But as AWS’ Jeff Barr points out, they are on the complex side. To fix that, the company announced a new method called Savings Plans.

“Today we are launching Savings Plans, a new and flexible discount model that provides you with the same discounts as RIs, in exchange for a commitment to use a specific amount (measured in dollars per hour) of compute power over a one or three year period,” Barr wrote in a blog post announcing the new program.

Amazon charges customers in a couple of ways. First, there is an on-demand price, which is basically the equivalent of the rack rate at a hotel. You are going to pay more for this because you’re walking up and ordering it on the fly.

Most organizations know they are going to need a certain level of resources over a period of time, and in these cases, they can save some money by buying in bulk up front. This gives them cost certainty as an organization, and it helps Amazon because it knows it’s going to have a certain level of usage and can plan accordingly.

While Reserved Instances aren’t going away yet, it sounds like Amazon is trying to steer customers to the new savings plans. “We will continue to sell RIs, but Savings Plans are more flexible and I think many of you will prefer them,” Barr wrote.

The Savings Plans come in two flavors. Compute Savings Plans provide up to 66% savings and are similar to RIs in this regard. The aspect that customers should like is that the savings are broadly applicable across AWS products, and you can even move workloads between regions and maintain the same discounted rate.

The other is an EC2 Instance Savings Plan. With this one, also similar to a reserved instance, you can save up to 72% over the on-demand price, but you are limited to a single region. It does offer a measure of flexibility though, allowing you to select different sizes of the same instance type or even switch operating systems from Windows to Linux without affecting your discount within your region of choice.
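To make the mechanics concrete, here is an illustrative model of how an hourly bill works under a dollars-per-hour commitment. The commitment and the 40% discount are assumed values; actual discounts vary by instance family, region and term.

```python
def hourly_bill(on_demand_cost, commitment=10.0, discount=0.40):
    """All values in $/hour; the commitment and discount are assumptions."""
    # Usage is charged at the discounted Savings Plan rate, up to the commitment.
    plan_cost = on_demand_cost * (1 - discount)
    if plan_cost <= commitment:
        return commitment  # unused commitment is still billed
    # Usage beyond what the commitment covers falls back to on-demand pricing.
    covered_on_demand = commitment / (1 - discount)
    return commitment + (on_demand_cost - covered_on_demand)

print(hourly_bill(8.0))   # light hour: you pay the $10 commitment anyway
print(hourly_bill(20.0))  # heavy hour: $10 commitment + $3.33 on-demand overage
```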

You can sign up today through the AWS Cost Explorer.
