Jul 09, 2020

Docker partners with AWS to improve container workflows

Docker and AWS today announced a new collaboration that introduces a deep integration between Docker’s Compose and Desktop developer tools and AWS’s Elastic Container Service (ECS) and ECS on AWS Fargate. Previously, the two companies note, the workflow to take Compose files and run them on ECS was often challenging for developers. Now, the two companies have simplified this process to make switching between running containers locally and running them on ECS far easier.

“With a large number of containers being built using Docker, we’re very excited to work with Docker to simplify the developer’s experience of building and deploying containerized applications to AWS,” said Deepak Singh, the VP for compute services at AWS. “Now customers can easily deploy their containerized applications from their local Docker environment straight to Amazon ECS. This accelerated path to modern application development and deployment allows customers to focus more effort on the unique value of their applications, and less time on figuring out how to deploy to the cloud.”

In a bit of a surprise move, Docker last year sold off its enterprise business to Mirantis to focus solely on cloud-native developer experiences.

“In November, we separated the enterprise business, which was very much focused on operations, CXOs and a direct sales model, and we sold that business to Mirantis,” Docker CEO Scott Johnston told TechCrunch’s Ron Miller earlier this year. “At that point, we decided to focus the remaining business back on developers, which was really Docker’s purpose back in 2013 and 2014.”

Today’s move is an example of this new focus, given that the workflow issues this partnership addresses had been around for quite a while already.

It’s worth noting that Docker also recently engaged in a strategic partnership with Microsoft to integrate the Docker developer experience with Azure’s Container Instances.

Apr 22, 2020

AWS launches Amazon AppFlow, its new SaaS integration service

AWS today launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. As with similar services, such as Microsoft’s Power Automate, developers can trigger these flows based on specific events, at pre-set times or on demand.

Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than as a way to automate workflows, and while the data flow can be bi-directional, AWS’s announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming the data as it moves through the service.
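These flows can also be driven programmatically. Here is a minimal sketch, using boto3 and a hypothetical flow name, of kicking off a single on-demand run of a flow that has already been configured in AppFlow:

```python
# Minimal sketch: trigger one on-demand run of an existing AppFlow flow.
# The flow name is hypothetical; the flow itself (source, destination and
# any transformation tasks) would already have been defined in AppFlow.
import boto3

appflow = boto3.client("appflow", region_name="us-east-1")

response = appflow.start_flow(flowName="salesforce-to-s3")
print(response["flowStatus"], response["executionId"])
```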

“Developers spend huge amounts of time writing custom integrations so they can pass data between SaaS applications and AWS services so that it can be analysed; these can be expensive and can often take months to complete,” said AWS principal advocate Martin Beeby in today’s announcement. “If data requirements change, then costly and complicated modifications have to be made to the integrations. Companies that don’t have the luxury of engineering resources might find themselves manually importing and exporting data from applications, which is time-consuming, risks data leakage, and has the potential to introduce human error.”

Every flow (which AWS defines as a call to a source application to transfer data to a destination) costs $0.001 per run. In typical AWS fashion, there is also a charge for data processing (starting at $0.02 per GB). A flow that runs hourly, for example, would cost roughly $0.73 a month in run charges, plus data processing fees based on the volume of data it moves.

“Our customers tell us that they love having the ability to store, process, and analyze their data in AWS. They also use a variety of third-party SaaS applications, and they tell us that it can be difficult to manage the flow of data between AWS and these applications,” said Kurt Kufeld, Vice President, AWS. “Amazon AppFlow provides an intuitive and easy way for customers to combine data from AWS and SaaS applications without moving it across the public Internet. With Amazon AppFlow, our customers bring together and manage petabytes, even exabytes, of data spread across all of their applications – all without having to develop custom connectors or manage underlying API and network connectivity.”

At this point, the number of supported services remains comparatively low, with only 14 possible sources and four destinations (Amazon Redshift and S3, as well as Salesforce and Snowflake). Sometimes, depending on the source you select, the only possible destination is Amazon’s S3 storage service.

Over time, the number of integrations will surely increase, but for now, it feels like there’s still quite a bit more work to do for the AppFlow team to expand the list of supported services.

AWS has long left this market to competitors, even though it has tools like AWS Step Functions for building serverless workflows across AWS services and EventBridge for connecting applications. Interestingly, EventBridge currently supports a far wider range of third-party sources, but as the name implies, its focus is more on triggering events in AWS than moving data between applications.

Dec 03, 2019

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers that supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes Service.

Like spot instances for the EC2 compute platform, Fargate Spot pricing is significantly cheaper, for both compute and memory, than regular Fargate pricing. In return, though, you have to accept that your tasks may be terminated when AWS needs the capacity back. While that means Fargate Spot may not be perfect for every workload, there are plenty of applications that can easily handle an interruption.

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here: an hour of CPU time on Fargate Spot costs just $0.01245364 (yes, AWS is pretty precise there), compared to $0.04048 for the on-demand price, a discount of roughly 69 percent.

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances, after all. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then keep that capacity on spot as instances come and go, and if there are no spot instances available, it will move it to on-demand instances and back to spot once instances are available again.
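To sketch that idea concretely (the names and the 70/30 split below are illustrative, not taken from AWS’s documentation), a capacity provider strategy for a Fargate-based service might look like this with boto3:

```python
# Illustrative sketch: run an ECS service with roughly 70% of its tasks on
# on-demand Fargate and the rest on Fargate Spot. Cluster, service and task
# definition names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_service(
    cluster="demo-cluster",
    serviceName="web",
    taskDefinition="web:1",
    desiredCount=10,
    capacityProviderStrategy=[
        # 'base' guarantees a minimum number of tasks on a provider;
        # 'weight' sets the relative split for everything above that.
        {"capacityProvider": "FARGATE", "base": 1, "weight": 7},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
)
```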

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outposts, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.

Dec 03, 2019

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to do analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. To handle this, AQUA is essentially a hardware-accelerated cache that promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS also today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs, 384 GiB of memory and up to 64 TB of storage. You can build clusters of these with up to 128 instances.
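For a sense of what that looks like in practice, here is a minimal, hypothetical boto3 sketch for provisioning a cluster on the 48 vCPU RA3 node type (all identifiers and credentials are placeholders):

```python
# Hypothetical sketch: provision a small Redshift cluster on the new RA3
# node type. All identifiers and credentials here are placeholders.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.create_cluster(
    ClusterIdentifier="analytics-ra3",
    NodeType="ra3.16xlarge",   # 48 vCPUs and 384 GiB of memory per node
    NumberOfNodes=4,           # RA3 clusters can grow to as many as 128 nodes
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",  # use a secrets manager in real deployments
)
```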

Nov 22, 2019

Cloud Foundry’s Kubernetes bet with Project Eirini hits 1.0

Cloud Foundry, the open-source platform-as-a-service that, with the help of lots of commercial backers, is currently in use by the majority of Fortune 500 companies, launched well before containers, and especially the Kubernetes orchestrator, were a thing. Instead, the project built its own container service, but the rise of Kubernetes obviously created a lot of interest in using it for managing Cloud Foundry’s container implementation. To do so, the organization launched Project Eirini last year; today, it’s officially launching version 1.0, which means it’s ready for production usage.

Eirini/Kubernetes doesn’t replace the old architecture. Instead, for the foreseeable future, the two will operate side by side, with operators deciding which one to use.

The team working on this project shipped a first technical preview earlier this year, and a number of commercial vendors have since built their own products around it and shipped them as betas.

“It’s one of the things where I think Cloud Foundry sometimes comes at things from a different angle,” IBM’s Julz Friedman told me. “Because it’s not about having a piece of technology that other people can build on in order to build a platform. We’re shipping the end thing that people use. So 1.0 for us — we have to have a thing that ticks all those boxes.”

He also noted that Diego, Cloud Foundry’s existing container management system, had been battle-tested over the years and had always been designed to be scalable to run massive multi-tenant clusters.

“If you look at people doing similar things with Kubernetes at the moment,” said Friedman, “they tend to run lots of Kubernetes clusters to scale to that kind of level. And Kubernetes, although it’s going to get there, right now, there are challenges around multi-tenancy, and super big multi-tenant scale.”

But even without being able to get to this massive scale, Friedman argues that you can already get a lot of value even out of a small Kubernetes cluster. Most companies don’t need to run enormous clusters, after all, and they still get the value of Cloud Foundry with the power of Kubernetes underneath it (all without having to write YAML files for their applications).

As Cloud Foundry CTO Chip Childers also noted, once the transition to Eirini gets to the point where the Cloud Foundry community can start applying less effort to its old container engine, those resources can go back to fulfilling the project’s overall mission, which is about providing the best possible developer experience for enterprise developers.

“We’re in this phase in the industry where Kubernetes is the new infrastructure and [Cloud Foundry] has a very battle-tested developer experience around it,” said Childers. “But there’s also really interesting ideas that are out there that are coming from our community, so one of the things that I’ve suggested to the community writ large is, let’s use this time as an opportunity to not just evolve what we have, but also make sure that we’re paying attention to new workflows, new models, and figure out what’s going to provide benefit to that enterprise developer that we’re so focused on — and bring those types of capabilities in.”

Those new capabilities may be around technologies like functions and serverless, for example, though Friedman at least is more focused on Eirini 1.1 for the time being, which will include closing the gaps with what’s currently available in Cloud Foundry’s old scheduler, like Docker image support and support for the Cloud Foundry v3 API.

Aug 14, 2019

VMware says it’s looking to acquire Pivotal

VMware today confirmed that it is in talks to acquire software development platform Pivotal Software, the service best known for commercializing the open-source Cloud Foundry platform. The proposed transaction would see VMware acquire all outstanding Pivotal Class A stock for $15 per share, a significant markup over Pivotal’s current share price (which unsurprisingly shot up right after the announcement).

Pivotal’s shares have struggled since the company’s IPO in April 2018. The company was originally spun out of EMC Corporation (now DellEMC) and VMware in 2012 to focus on Cloud Foundry, an open-source software development platform that is currently in use by the majority of Fortune 500 companies. A lot of these enterprises are working with Pivotal to support their Cloud Foundry efforts. Dell itself continues to own the majority of VMware and Pivotal, and VMware also owns an interest in Pivotal already and sells Pivotal’s services to its customers, as well. It’s a bit of an ouroboros of a transaction.

Pivotal Cloud Foundry was always the company’s main product, but it also offered additional consulting services on top of that. Despite improving its execution since going public, Pivotal still lost $31.7 million in its last financial quarter as its stock price traded at just over half of the IPO price. Indeed, the $15 per share VMware is offering is identical to Pivotal’s IPO price.

An acquisition by VMware would bring Pivotal’s journey full circle, though this is surely not the journey the Pivotal team expected. VMware is a Cloud Foundry Foundation platinum member, together with Pivotal, DellEMC, IBM, SAP and Suse, so I wouldn’t expect any major changes in VMware’s support of the overall open-source ecosystem behind Pivotal’s core platform.

It remains to be seen whether the acquisition will indeed happen, though. In a press release, VMware acknowledged the discussion between the two companies but noted that “there can be no assurance that any such agreement regarding the potential transaction will occur, and VMware does not intend to communicate further on this matter unless and until a definitive agreement is reached.” That’s the kind of sentence lawyers like to write. I would be quite surprised if this deal didn’t happen, though.

Buying Pivotal would also make sense in the grand scheme of VMware’s recent acquisitions. Earlier this year, the company acquired Bitnami, and last year it acquired Heptio, the startup founded by two of the three co-founders of the Kubernetes project, which now forms the basis of many new enterprise cloud deployments and, most recently, Pivotal Cloud Foundry.

Apr 21, 2018

Full-Metal Packet is hosting the future of cloud infrastructure

Cloud computing has been a revolution for the data center. Rather than investing in expensive hardware and managing a data center directly, companies are relying on public cloud providers like AWS, Google Cloud, and Microsoft Azure to provide general-purpose and high-availability compute, storage, and networking resources in a highly flexible way.

Yet as workflows have moved to the cloud, companies are increasingly realizing that those abstracted resources can be enormously expensive compared to the hardware they used to own. Few companies want to go back to managing hardware directly themselves, but they also yearn to have the price-to-performance level they used to enjoy. Plus, they want to take advantage of a whole new ecosystem of customized and specialized hardware to process unique workflows — think Tensor Processing Units for machine learning applications.

That’s where Packet comes in. The New York City-based startup’s platform offers highly customizable infrastructure for running bare metal in the cloud. Rather than sharing an instance with other users, Packet’s customers “own” the hardware they select, so they can use all the resources of that hardware.

Even more interesting is that Packet will also deploy custom hardware to its data centers, which currently number eighteen around the world. So, for instance, if you want to deploy a quantum computing box redundantly in half of those centers, Packet will handle the logistics of installing those boxes, setting them up, and managing that infrastructure for you.

The company was founded in 2014 by Zac Smith, Jacob Smith, and Aaron Welch, and it has raised a total of $12 million in venture capital financing, according to Crunchbase, with its last round led by SoftBank. “I took the usual path, I went to Juilliard,” Zac Smith, who is CEO, said to me at his office, which overlooks the World Trade Center in downtown Manhattan. Double bass was a first love, but he eventually found his way into internet hosting, working as COO of New York-based Voxel.

At Voxel, Smith said that he grew up in hosting just as the cloud started taking off. “We saw this change in the user from essentially a sysadmin who cared about Tom’s Hardware, to a developer who had never opened a computer but who was suddenly orchestrating infrastructure,” he said.

Innovation is the lifeblood of developers, yet public clouds were increasingly abstracting away any details of the underlying infrastructure from them. Smith explained that “infrastructure was becoming increasingly proprietary, the land of few companies.” While he once thought about leaving the hosting world post-Voxel, he and his co-founders saw an opportunity to rethink cloud infrastructure from the metal up.

“Our customer is a millennial developer, 32 years old, and they have never opened an ATX case, and how could you possibly give them IT in the same way,” Smith asked. The idea of Packet was to bring back choice in infrastructure to these developers, while abstracting away the actual data center logistics that none of them wanted to work on. “You can choose your own opinion — we are hardware independent,” he said.

Giving developers more bare metal options is an interesting proposition, but it is Packet’s long-term vision that I think is most striking. In short, the company wants to completely change the model of hardware development worldwide.

VCs are increasingly investing in specialized chips and memory to handle unique processing loads, from machine learning to quantum computing applications. In some cases, these chips can process their workloads exponentially faster compared to general purpose chips, which at scale can save companies millions of dollars.

Packet’s mission is to encourage that ecosystem by essentially becoming a marketplace, connecting original equipment manufacturers with end-user developers. “We use the WeWork model a lot,” Smith said. What he means is that Packet lets you rent space in its global network of data centers and handles all the logistics of installing and monitoring hardware boxes, much as WeWork lets companies rent real estate while it handles minutiae like resetting the coffee filter.

In this vision, Packet would create more discerning and diverse buyers, allowing manufacturers to start targeting more specialized niches. Gone are the generic x86 processors from Intel driving nearly all cloud purchases, and in their place could be dozens of new hardware vendors who can build up their brands among developers and own segments of the compute and storage workload.

In this way, developers can hack their infrastructure much as an earlier generation may have tricked out their personal computer. They can now test new hardware more easily, and when they find a particular piece of hardware they like, they can get it running in the cloud in short order. Packet becomes not just the infrastructure operator — but the channel connecting buyers and sellers.

That’s Packet’s big vision. Realizing it will require that hardware manufacturers increasingly build differentiated chips. More importantly, companies will have to have unique workflows, be at a scale where optimizing those workflows is imperative, and realize that they can match those workflows to specific hardware to maximize their cost performance.

That may sound like a tall order, but Packet’s dream is to create exactly that kind of marketplace. If successful, it could transform how hardware and cloud vendors work together and, ultimately, how any 32-year-old millennial developer who doesn’t like plugging in a box can still plug in to innovation.

Dec 06, 2016

GoDaddy is buying rival Host Europe Group for $1.8B to accelerate its international expansion

GoDaddy is on a shopping spree. Yesterday we reported that the domain and hosting company had bought WP Curve, a WordPress services startup, to expand its WordPress support team. And today the company has just announced a much bigger deal. GoDaddy has acquired European rival Host Europe Group (HEG) for $1.8 billion — including €605 million paid to existing Host Europe…

Dec 01, 2016

AWS Batch simplifies batch computing in the cloud

Amazon’s new AWS Batch allows engineers to execute a series of jobs automatically in the cloud. The tool lets you run apps and container images on whatever EC2 instances are required to accomplish a given task.
Amazon recognized that many of its customers were bootstrapping their own batch computing systems. Users were stringing together EC2 instances, containers, notifications, and…
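In practice, the service boils down to defining a container-based job, pointing it at a queue and letting AWS pick the instances. A minimal sketch with boto3, using hypothetical job and queue names, looks like this:

```python
# Minimal sketch: submit a containerized job to an AWS Batch queue and let
# the service provision whatever EC2 capacity it needs. All names here are
# hypothetical; the job definition and queue would be created ahead of time.
import boto3

batch = boto3.client("batch", region_name="us-east-1")

response = batch.submit_job(
    jobName="nightly-report",
    jobQueue="default-queue",
    jobDefinition="report-runner:1",  # references a container image and resource requirements
)
print("Submitted job", response["jobId"])
```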

Dec 01, 2016

AWS Personal Health Dashboard helps developers monitor the state of their cloud apps

DevOps teams will be happy to hear that Amazon is launching its own dashboard for Amazon Web Services. Personal Health Dashboard, as the company calls it, is its latest release from the stage of re:Invent 2016 to support more advanced monitoring of cloud apps. The tool puts critical infrastructure data in one place. The dashboard will automatically notify teams of failures and allow them…
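The same event data is also exposed programmatically through the AWS Health API that backs the dashboard. A minimal sketch with boto3 (assuming an account with access to the Health API) might look like this:

```python
# Minimal sketch: list currently open AWS issues affecting this account via
# the AWS Health API, which backs the Personal Health Dashboard. Assumes the
# account has API access to AWS Health.
import boto3

# The Health API is served from a single global endpoint in us-east-1.
health = boto3.client("health", region_name="us-east-1")

events = health.describe_events(
    filter={"eventStatusCodes": ["open"], "eventTypeCategories": ["issue"]}
)
for event in events["events"]:
    print(event["service"], event["region"], event["statusCode"])
```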
