Jan 17, 2020
--

DigitalOcean is laying off staff, sources say 30-50 affected

After appointing a new CEO and CFO last summer, cloud infrastructure provider DigitalOcean is embarking on a wider reorganization: the startup has announced a round of layoffs, with potentially between 30 and 50 people affected.

DigitalOcean has confirmed the news with the following statement:

“DigitalOcean recently announced a restructuring to better align its teams to its go-forward growth strategy. As part of this restructuring, some roles were, unfortunately, eliminated. DigitalOcean continues to be a high-growth business with $275M in [annual recurring revenues] and more than 500,000 customers globally. Under this new organizational structure, we are positioned to accelerate profitable growth by continuing to serve developers and entrepreneurs around the world.”

Before the confirmation was sent to us this morning, footprints began to emerge last night, when the layoffs first hit: people on Twitter started talking about them, some announcing that they are looking for new opportunities and some offering help to those impacted. Inbound tips that we received estimate the cuts at between 30 and 50 people. With around 500 employees (an estimate from PitchBook), that would work out to up to 10% of staff affected.

It’s not clear what is going on here — we’ll update as and when we hear more — but when Yancey Spruill and Bill Sorenson were respectively appointed CEO and CFO in July 2019 (Spruill replacing someone who was only in the role for a year), the incoming CEO put out a short statement that, in hindsight, hinted at a refocus of the business in the near future:

“My aspiration is for us to continue to provide everything you love about DO now, but to also enhance our offerings in a way that is meaningful, strategic and most helpful for you over time.”

The company provides a range of cloud infrastructure services to developers, including scalable compute services (“Droplets” in DigitalOcean terminology), managed Kubernetes clusters, object storage, managed database services, Cloud Firewalls, Load Balancers and more, with 12 data centers globally. It says it works with more than 1 million developers across 195 countries. It has also been expanding the services that it offers to developers, including more enhancements in its managed database services, and a free hosting option for continuous code testing in partnership with GitLab.

All the same, as my colleague Frederic pointed out when DigitalOcean appointed its latest CEO, while developers have generally been happy with the company, it isn’t as hyped as it once was, and is a smallish player nowadays.

And in an area of business where economies of scale are essential for making good margins, it competes against some of the biggest leviathans in tech: Google (with its Google Cloud Platform), Amazon (with AWS) and Microsoft (with Azure). That could mean that DigitalOcean is trimming down as it talks to investors for a new round; or conserving cash as it sizes up how best to compete against these bigger, deep-pocketed players; or perhaps starting to think about another kind of exit.

In that context, it’s notable that the company not only appointed a new CFO last summer, but also a CEO with prior CFO experience. It’s been a while since DigitalOcean has raised capital. According to PitchBook data, DigitalOcean last raised money in 2017, an undisclosed amount from Mighty Capital, Glean Capital, Viaduct Ventures, Black River Ventures, Hanaco Venture Capital, Torch Capital and EG Capital Advisors. Before that, it took out $130 million in debt in 2016. Altogether it has raised $198 million, and its last reported valuation, $683 million, dates to a 2015 round.

It’s been an active week for layoffs among tech startups. Mozilla laid off 70 employees this week; and the weed delivery platform Eaze is also gearing up for more cuts amid an emergency push for funding.

We’ll update this post as we learn more. Best wishes to those affected by the news.

Jan 15, 2020
--

Google Cloud gets a premium support plan with 15-minute response times

Google Cloud today announced the launch of its premium support plans for enterprise and mission-critical needs. This new plan brings Google’s support offerings for the Google Cloud Platform (GCP) in line with its premium G Suite support options.

“Premium Support has been designed to better meet the needs of our customers running modern cloud technology,” writes Google’s VP of Cloud Support, Atul Nanda. “And we’ve made investments to improve the customer experience, with an updated support model that is proactive, unified, centered around the customer, and flexible to meet the differing needs of their businesses.”

The premium plan, which Google will charge for based on your monthly GCP spend (with a minimum cost of what looks to be about $12,500 per month), promises a 15-minute response time for P1 cases. Those are situations when an application or infrastructure is unusable in production. Other features include training and new product reviews, as well as support for troubleshooting third-party systems.

Google stresses that the team that will answer a company’s calls will consist of “context-aware experts” who know your application stack and architecture. As with similar premium plans from other vendors, enterprises will have a Technical Account Manager who works through these issues with them. Companies with global operations can opt to have (and pay for) technical account managers available during business hours in multiple regions.

The idea here, however, is also to give GCP users more proactive support, which will soon include, for example, a site reliability engineering engagement that is meant to help customers “design a wrapper of supportability around the Google Cloud customer projects that have the highest sensitivity to downtime.” The support team will also work with customers to get them ready for events like Black Friday or other peaks in their industry. Over time, the company plans to add more features and additional support plans.

As with virtually all of Google’s recent cloud moves, today’s announcement is part of the company’s efforts to get more enterprises to move to its cloud. Earlier this week, for example, it launched support for IBM’s Power Systems architecture, as well as new infrastructure solutions for retailers. In addition, it also acquired no-code service AppSheet.

Jan 14, 2020
--

Equinix is acquiring bare metal cloud provider Packet

Equinix announced today that it is acquiring bare metal cloud provider Packet, the New York City startup that had raised over $36 million on a $100 million valuation, according to PitchBook data.

Equinix operates a set of data centers and co-location facilities around the world. Companies that want more control over their hardware can use its facilities, including their space, power and cooling systems, instead of running their own data centers.

Equinix is getting a unique cloud infrastructure vendor in Packet, one that can provide more customized kinds of hardware configurations than you can get from mainstream infrastructure vendors like AWS and Azure. Company COO George Karidis described what separated his company from the pack in a September 2018 TechCrunch article:

“We offer the most diverse hardware options,” he said. That means customers could get servers equipped with Intel, ARM, AMD or specific Nvidia GPUs in whatever configurations they want. By contrast, public cloud providers tend to offer a more off-the-shelf approach. It’s cheap and abundant, but you have to take what they offer, and that doesn’t always work for every customer.

In a blog post announcing the deal, company co-founder and CEO Zachary Smith had a message for his customers, who may be worried about the change in ownership. “When the transaction closes later this quarter, Packet will continue operating as before: same team, same platform, same vision,” he wrote.

He also offered the standard value story for a deal like this, saying the company could scale much faster under Equinix than it could on its own, with access to its new parent’s massive resources, including 200+ data centers in 55 markets and 1,800 networks.

Sara Baack, chief product officer at Equinix, says bringing the two companies together will provide a diverse set of bare metal options for customers moving forward. “Our combined strengths will further empower companies to be everywhere they need to be, to interconnect everyone and integrate everything that matters to their business,” she said in a statement.

While the companies did not share the purchase price, they did hint that they would have more details on the transaction after it closes, which is expected in the first quarter of this year.

Jan 13, 2020
--

Google brings IBM Power Systems to its cloud

As Google Cloud looks to convince more enterprises to move to its platform, it needs to be able to give businesses an onramp for their existing legacy infrastructure and workloads that they can’t easily replace or move to the cloud. A lot of those workloads run on IBM Power Systems with their Power processors, and, until now, IBM was essentially the only vendor that offered cloud-based Power systems. Now, however, Google is also getting into this game by partnering with IBM to launch IBM Power Systems on Google Cloud.

“Enterprises looking to the cloud to modernize their existing infrastructure and streamline their business processes have many options,” writes Kevin Ichhpurani, Google Cloud’s corporate VP for its global ecosystem, in today’s announcement. “At one end of the spectrum, some organizations are re-platforming entire legacy systems to adopt the cloud. Many others, however, want to continue leveraging their existing infrastructure while still benefiting from the cloud’s flexible consumption model, scalability, and new advancements in areas like artificial intelligence, machine learning, and analytics.”

Power Systems support obviously fits in well here, given that many companies use them for mission-critical workloads based on SAP and Oracle applications and databases. With this, they can take those workloads and slowly move them to the cloud, without having to re-engineer their applications and infrastructure. Power Systems on Google Cloud is obviously integrated with Google’s services and billing tools.

This is very much an enterprise offering, without a published pricing sheet. Chances are, given the cost of a Power-based server, you’re not looking at a bargain per-minute price here.

Because IBM has its own cloud offering, it’s a bit odd to see it work with Google to bring its servers to a competing cloud — though it surely wants to sell more Power servers. The move makes perfect sense for Google Cloud, though, which is on a mission to bring more enterprise workloads to its platform. Any roadblock the company can remove works in its favor, and, as enterprises get comfortable with its platform, they’ll likely bring other workloads to it over time.

Dec 17, 2019
--

Google details its approach to cloud-native security

Over the years, Google’s various whitepapers, detailing how the company solves specific problems at scale, have regularly spawned new startup ecosystems and changed how other enterprises think about scaling their own tools. Today, the company is publishing a new security whitepaper that details how it keeps its cloud-native architecture safe.

The name, BeyondProd, already indicates that this is an extension of the BeyondCorp zero trust system the company first introduced a few years ago. While BeyondCorp is about shifting security away from VPNs and firewalls on the perimeter to the individual users and devices, BeyondProd focuses on Google’s zero trust approach to how it connects machines, workloads and services.

Unsurprisingly, BeyondProd is based on pretty much the same principles as BeyondCorp, including network protection at the edge, no mutual trust between services, trusted machines running known code, automated and standardized change rollout and isolated workloads. All of this, of course, focuses on securing cloud-native applications that generally communicate over APIs and run on modern infrastructure.

“Altogether, these controls mean that containers and the microservices running inside can be deployed, communicate with each other, and run next to each other, securely; without burdening individual microservice developers with the security and implementation details of the underlying infrastructure,” Google explains.
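Of those principles, “no mutual trust between services” is the one that translates most directly into everyday tooling. As a purely illustrative sketch (using Python’s standard ssl module, not Google’s internal stack), a service can refuse any caller that fails to present a certificate signed by an internal CA; the file paths and port below are placeholders:

```python
import socket
import ssl

# Minimal sketch of service-to-service mutual TLS: neither side is trusted
# by default, so both must present certificates signed by the internal CA.
# File paths and port are illustrative placeholders, not a real deployment.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="service-a.crt", keyfile="service-a.key")
ctx.load_verify_locations(cafile="internal-ca.crt")
ctx.verify_mode = ssl.CERT_REQUIRED  # reject callers without a valid client cert

with socket.create_server(("0.0.0.0", 8443)) as server:
    with ctx.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()  # handshake fails for unauthenticated peers
        print(f"authenticated connection from {addr}")
```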

Google, of course, notes that it is making all of these features available to developers through its own services like GKE and Anthos, its hybrid cloud platform. In addition, though, the company also stresses that a lot of its open-source tools allow enterprises to build systems that adhere to the same principles, including the likes of Envoy, Istio, gVisor and others.

“In the same way that BeyondCorp helped us to evolve beyond a perimeter-based security model, BeyondProd represents a similar leap forward in our approach to production security,” Google says. “By applying the security principles in the BeyondProd model to your own cloud-native infrastructure, you can benefit from our experience, to strengthen the deployment of your workloads, how their communications are secured, and how they affect other workloads.”

You can read the full whitepaper here.

Dec 3, 2019
--

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers that supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes Service.

Like spot instances for the EC2 compute platform, Fargate Spot pricing is significantly cheaper, both for storage and compute, than regular Fargate pricing. In return, though, you have to accept that your tasks may get terminated when AWS needs the additional capacity. While that means Fargate Spot may not be perfect for every workload, there are plenty of applications that can easily handle an interruption.
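In practice, handling an interruption mostly means trapping the stop signal: a Fargate Spot task receives a SIGTERM shortly before AWS stops it. Here is a minimal sketch of an interruption-tolerant worker; the work loop itself is a placeholder:

```python
import signal
import time

# Sketch of a Spot-friendly worker: trap SIGTERM (sent when AWS reclaims
# the capacity) and finish or checkpoint the current unit of work instead
# of dying mid-task. The work itself is a placeholder.
shutting_down = False

def handle_sigterm(signum, frame):
    global shutting_down
    shutting_down = True  # stop taking new work; the task will be stopped soon

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    time.sleep(1)  # placeholder for one interruptible unit of work

# checkpoint state / ack in-flight messages here before the container exits
print("drained cleanly after spot interruption notice")
```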

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here: an hour of CPU time on Fargate Spot will only cost $0.01245364 (yes, AWS is pretty precise there) compared to $0.04048 for the on-demand price, a discount of roughly 69%.

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances, after all. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then maintain that split as instances come and go; if no spot instances are available, it will move workloads to on-demand instances and back to spot once spot capacity is available again.
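For ECS, that split maps onto what AWS calls a capacity provider strategy, which can also be set programmatically. A hedged boto3 sketch using the built-in FARGATE and FARGATE_SPOT providers (the same pattern applies to EC2-backed providers; the cluster, service and task definition names are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Sketch of a capacity provider strategy: roughly 70% of tasks on regular
# Fargate, the rest on Fargate Spot. Weights express a ratio; "base" is a
# guaranteed minimum that always runs on that provider.
ecs.create_service(
    cluster="demo-cluster",            # placeholder
    serviceName="demo-service",        # placeholder
    taskDefinition="demo-task:1",      # placeholder
    desiredCount=10,
    capacityProviderStrategy=[
        {"capacityProvider": "FARGATE", "weight": 7, "base": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
)
```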

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outposts, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.

Dec 3, 2019
--

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to do analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. So to handle this, AQUA is essentially a hardware-accelerated cache and promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS also today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs and 384GiB of memory and up to 64 TB of storage. You can build clusters of these with up to 128 instances.

Dec 2, 2019
--

CircleCI launches improved AWS support

For about a year now, continuous integration and delivery service CircleCI has offered Orbs, a way to easily reuse commands and integrations with third-party services. Unsurprisingly, some of the most popular Orbs focus on AWS, as that’s where most of the company’s developers are either testing their code or deploying it. Today, just in time for AWS’s annual re:Invent developer conference in Las Vegas, the company announced that it has added Orb support for the AWS Serverless Application Model (SAM), which makes setting up automated CI/CD pipelines for testing and deploying to AWS Lambda significantly easier.

In total, the company says, more than 11,000 organizations have started using Orbs since the feature launched a year ago. Among the AWS-centric Orbs are those for building and updating images for the Amazon Elastic Container Service and the Elastic Container Service for Kubernetes (EKS), for example, as well as AWS CodeDeploy support, an Orb for installing and configuring the AWS command line interface, an Orb for working with the S3 storage service and more.

“We’re just seeing a momentum of more and more companies being ready to adopt [managed services like Lambda, ECS and EKS], so this became really the ideal time to do most of the work with the product team at AWS that manages their serverless ecosystem and to add in this capability to leverage that serverless application model and really have this out-of-the-box CI/CD flow ready for users who wanted to start adding these into Lambda,” CircleCI VP of business development Tom Trahan told me. “I think when Lambda was in its earlier days, a lot of people would use it and not necessarily follow the same software patterns and delivery flow that they might have with their traditional software. As they put more and more into Lambda and are really putting a lot more of what I would call ‘production quality code’ out there to leverage, they realize they do want to have that same software delivery capability and discipline for Lambda as well.”

Trahan stressed that he’s still talking about early adopters and companies that started out as cloud-native companies, but these days, this group includes a lot of traditional companies, as well, that are now rapidly going through their own digital transformations.

Nov 25, 2019
--

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an ARM Cortex A-class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All the device needs to be able to do is detect the wake word to start the Alexa functionality, which is a comparatively simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.
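To make that division of labor concrete, here is a purely illustrative sketch; the helpers are stand-ins invented for this example, not the actual AVS for AWS IoT Core API:

```python
# Illustrative sketch only: the helpers below are stand-ins, not the real
# AVS for AWS IoT Core API. The point is the division of labor: a tiny
# always-on detector runs on the device; everything else goes to the cloud.

def microphone_frames():
    """Stand-in generator for raw audio frames from the device microphone."""
    while True:
        yield b""  # a real device would yield PCM frames here

def detect_wake_word(frame: bytes) -> bool:
    """Stand-in for the compact keyword-spotting model, the only
    inference that still runs on the device."""
    return b"alexa" in frame

def stream_to_cloud(frames) -> None:
    """Stand-in for the cloud connection that does the heavy lifting:
    speech recognition, the decision engine, media retrieval and decoding."""
    for _ in frames:
        break  # a real client would stream until the utterance ends

mic = microphone_frames()
for frame in mic:
    if detect_wake_word(frame):   # the only on-device inference
        stream_to_cloud(mic)      # hand the rest of the utterance to the cloud
```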

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you’re saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.
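For flavor, here is a sketch of what that looks like from device-side Python, based on the Stream Manager SDK as AWS documents it; the exact module and class names vary by SDK version, so treat the identifiers below as assumptions:

```python
# Hedged sketch based on AWS's documented Stream Manager SDK for Python;
# module and class names may differ by SDK version. Stream and Kinesis
# names are placeholders.
from stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    StrategyOnFull,
    StreamManagerClient,
)

client = StreamManagerClient()

# Define a local stream with a retention policy and a Kinesis export target,
# instead of hand-rolling connection management and retention in Lambda.
client.create_message_stream(MessageStreamDefinition(
    name="SensorReadings",                                # placeholder
    strategy_on_full=StrategyOnFull.OverwriteOldestData,  # retention policy
    export_definition=ExportDefinition(
        kinesis=[KinesisConfig(
            identifier="KinesisExport",                   # placeholder
            kinesis_stream_name="MyKinesisStream",        # placeholder
        )]
    ),
))

# Device-side code just appends; Stream Manager handles buffering,
# retries and the upload to AWS Kinesis.
client.append_message(stream_name="SensorReadings", data=b'{"temp": 21.3}')
```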

Also new for AWS IoT Greengrass are fleet provisioning, which makes it easier for businesses to quickly set up lots of new devices automatically, as well as secure tunneling for AWS IoT Device Management, which makes it easier for developers to remotely access a device and troubleshoot it. In addition, AWS IoT Core now features configurable endpoints.

Nov 22, 2019
--

Cloud Foundry’s Kubernetes bet with Project Eirini hits 1.0

Cloud Foundry, the open-source platform-as-a-service that, with the help of lots of commercial backers, is currently in use by the majority of Fortune 500 companies, launched well before containers, and especially the Kubernetes orchestrator, were a thing. Instead, the project built its own container service, but the rise of Kubernetes obviously created a lot of interest in using it for managing Cloud Foundry’s container implementation. To do so, the organization launched Project Eirini last year; today, it’s officially launching version 1.0, which means it’s ready for production usage.

Eirini/Kubernetes doesn’t replace the old architecture. Instead, for the foreseeable future, the two will operate side by side, with operators deciding which one to use.

The team working on this project shipped a first technical preview earlier this year, and a number of commercial vendors have since built their own commercial products around it and shipped them as beta products.

“It’s one of the things where I think Cloud Foundry sometimes comes at things from a different angle,” IBM’s Julz Friedman told me. “Because it’s not about having a piece of technology that other people can build on in order to build a platform. We’re shipping the end thing that people use. So 1.0 for us — we have to have a thing that ticks all those boxes.”

He also noted that Diego, Cloud Foundry’s existing container management system, had been battle-tested over the years and had always been designed to be scalable to run massive multi-tenant clusters.

“If you look at people doing similar things with Kubernetes at the moment,” said Friedman, “they tend to run lots of Kubernetes clusters to scale to that kind of level. And Kubernetes, although it’s going to get there, right now there are challenges around multi-tenancy, and super big multi-tenant scale.”

But even without that massive scale, Friedman argues, you can already get a lot of value out of even a small Kubernetes cluster. Most companies don’t need to run enormous clusters, after all, and they still get the value of Cloud Foundry with the power of Kubernetes underneath it (all without having to write YAML files for their applications).

As Cloud Foundry CTO Chip Childers also noted, once the transition to Eirini gets to the point where the Cloud Foundry community can start applying less effort to its old container engine, those resources can go back to fulfilling the project’s overall mission, which is about providing the best possible developer experience for enterprise developers.

“We’re in this phase in the industry where Kubernetes is the new infrastructure and [Cloud Foundry] has a very battle-tested developer experience around it,” said Childers. “But there’s also really interesting ideas that are out there that are coming from our community, so one of the things that I’ve suggested to the community writ large is, let’s use this time as an opportunity to not just evolve what we have, but also make sure that we’re paying attention to new workflows, new models, and figure out what’s going to provide benefit to that enterprise developer that we’re so focused on — and bring those types of capabilities in.”

Those new capabilities may be around technologies like functions and serverless, for example, though Friedman at least is more focused on Eirini 1.1 for the time being, which will include closing the gaps with what’s currently available in Cloud Foundry’s old scheduler, like Docker image support and support for the Cloud Foundry v3 API.
