Jan 16, 2019

AWS launches Backup, a fully managed backup service for AWS

Amazon’s AWS cloud computing service today launched Backup, a new tool that makes it easier for developers on the platform to back up their data from various AWS services and their on-premises apps. Out of the box, the service, which is now available to all developers, lets you set up backup policies for services like Amazon EBS volumes, RDS databases, DynamoDB tables, EFS file systems and AWS Storage Gateway volumes. Support for more services is planned, too. To back up on-premises data, businesses can use the AWS Storage Gateway.

The service allows users to define their various backup policies and retention periods, including the ability to move backups to cold storage (for EFS data) or delete them completely after a certain time. By default, the data is stored in Amazon S3 buckets.
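
For a sense of what defining such a plan looks like in practice, here is a rough sketch using the boto3 SDK; the vault name, schedule and lifecycle numbers are placeholder values, not anything AWS prescribes.

```python
import boto3

backup = boto3.client("backup")

# Create a vault to hold the recovery points (the name is a placeholder).
backup.create_backup_vault(BackupVaultName="example-vault")

# Define a plan: daily backups, moved to cold storage after 30 days
# and deleted after a year (illustrative lifecycle values).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-example",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "example-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {
                    "MoveToColdStorageAfterDays": 30,
                    "DeleteAfterDays": 365,
                },
            }
        ],
    }
)
print(plan["BackupPlanId"])
```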

Most of the supported services, except for EFS file systems, already feature the ability to create snapshots. Backup essentially automates that process and creates rules around it, so it’s no surprise that pricing for Backup is the same as for using those snapshot features (with the exception of the file system backup, which will have a per-GB charge). It’s worth noting that you’ll also pay a per-GB fee for restoring data from EFS file systems and DynamoDB backups.

Currently, Backup’s scope is limited to a given AWS region, but the company says that it plans to offer cross-region functionality later this year.

“As the cloud has become the default choice for customers of all sizes, it has attracted two distinct types of builders,” writes Bill Vass, AWS’s VP of Storage, Automation, and Management Services. “Some are tinkerers who want to tweak and fine-tune the full range of AWS services into a desired architecture, and other builders are drawn to the same breadth and depth of functionality in AWS, but are willing to trade some of the service granularity to start at a higher abstraction layer, so they can build even faster. We designed AWS Backup for this second type of builder who has told us that they want one place to go for backups versus having to do it across multiple, individual services.”

Early adopters of AWS Backup include State Street Corporation, Smile Brands and Rackspace, and the service will surely attract its fair share of users, since it makes admins’ lives quite a bit easier. AWS does have quite a few backup and storage partners who may not be all that excited to see AWS jump into this market, too, though they often offer a wider range of functionality than AWS’s service, including cross-region and offsite backups.


Dec 19, 2018

Google’s Cloud Spanner database adds new features and regions

Cloud Spanner, Google’s globally distributed relational database service, is getting a bit more distributed today with the launch of a new region and new ways to set up multi-region configurations. The service is also getting a new feature that gives developers deeper insights into their most resource-consuming queries.

With this update, Google is adding Hong Kong (asia-east2), its newest data center location, to the Cloud Spanner lineup. Cloud Spanner is now available in 14 of the 18 Google Cloud Platform (GCP) regions, including seven the company added this year alone. The plan is to bring Cloud Spanner to every new GCP region as it comes online.

The other new region-related news is the launch of two new configurations for multi-region coverage. One, called eur3, focuses on the European Union and is obviously meant for users there who mostly serve a local customer base. The other, called nam6, focuses on North America, with coverage across both coasts and the middle of the country, using data centers in Oregon, Los Angeles, South Carolina and Iowa. Previously, the service only offered a North American configuration with three regions and a global configuration with three data centers spread across North America, Europe and Asia.
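
Picking one of these configurations is simply a matter of pointing a new instance at it. As a minimal sketch with the google-cloud-spanner Python client, where the project ID, instance name and node count are placeholders:

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")  # placeholder project ID

# Create a three-node instance in the new nam6 multi-region configuration.
instance = client.instance(
    "example-instance",
    configuration_name="projects/my-project/instanceConfigs/nam6",
    display_name="Example nam6 instance",
    node_count=3,
)
operation = instance.create()
operation.result()  # block until the instance is ready
```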

While Cloud Spanner is obviously meant for global deployments, these new configurations are great for users who only need to serve certain markets.

As far as the new query features are concerned, Cloud Spanner is now making it easier for developers to view, inspect and debug queries. The idea here is to give developers better visibility into their most frequent and expensive queries (and maybe make them less expensive in the process).

In addition to the Cloud Spanner news, Google Cloud today announced that its Cloud Dataproc Hadoop and Spark service now supports the R language, in addition to Python 3.7 support on App Engine.

Dec 11, 2018

The Cloud Native Computing Foundation adds etcd to its open-source stable

The Cloud Native Computing Foundation (CNCF), the open-source home of projects like Kubernetes and Vitess, today announced that its technical committee has voted to bring a new project on board. That project is etcd, the distributed key-value store that was first developed by CoreOS (now owned by Red Hat, which in turn will soon be owned by IBM). Red Hat has now contributed this project to the CNCF.

Etcd, which is written in Go, is already a major component of many Kubernetes deployments, where it functions as a source of truth for coordinating clusters and managing the state of the system. Other open-source projects that use etcd include Cloud Foundry, and companies that use it in production include Alibaba, ING, Pinterest, Uber, The New York Times and Nordstrom.
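
For those who have never touched it directly, etcd’s surface area is small: keys, values, reads, writes and watches. A minimal sketch using the community python-etcd3 client (the endpoint and keys are placeholders):

```python
import etcd3

# Connect to a local etcd member (endpoint is a placeholder).
client = etcd3.client(host="127.0.0.1", port=2379)

# Store and read back a piece of cluster state.
client.put("/config/feature-flag", "enabled")
value, metadata = client.get("/config/feature-flag")
print(value)  # b'enabled'

# Watch a prefix for changes; this is the pattern systems like
# Kubernetes use to react to updates in the store.
events_iterator, cancel = client.watch_prefix("/config/")
```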

“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF, in today’s announcement. “Etcd is a fantastic addition to our community of projects.”

Today, etcd has well over 450 contributors and nine maintainers from eight different companies. The fact that it ended up at the CNCF is only logical, given that the foundation is also the host of Kubernetes. With this, the CNCF now plays host to 17 projects that fall under its “incubated technologies” umbrella. In addition to etcd, these include OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor. Kubernetes, Prometheus and Envoy have already graduated from this incubation stage.

That’s a lot of projects for one foundation to manage, but the CNCF community is also extraordinarily large. This week alone about 8,000 developers are converging on Seattle for KubeCon/CloudNativeCon, the organization’s biggest event yet, to talk all things containers. It surely helps that the CNCF has managed to bring competitors like AWS, Microsoft, Google, IBM and Oracle under a single roof to collaboratively work on building these new technologies. There is a risk of losing focus here, though, something that happened to the OpenStack project when it went through a similar growth and hype phase. It’ll be interesting to see how the CNCF will manage this as it brings on more projects (with Istio, the increasingly popular service mesh, being a likely candidate for coming over to the CNCF as well).

Dec 10, 2018

Kong launches its fully managed API platform

API platform Kong, which you may remember under its previous name of Mashape, is launching its new Kong Cloud service today. Kong Cloud is the company’s fully managed platform for securing, connecting and orchestrating APIs. Enterprises can deploy it to virtually any major cloud platform, including AWS, Azure and Google Cloud, and Kong will handle all of the daily drudgery of managing it for them.

At the core of Kong Cloud is Kong, the company’s open source microservices gateway. The company already offers an enterprise version of Kong under the Kong Enterprise brand, but it’s up to enterprises to manage this version by themselves.
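
To make that concrete, here is a rough sketch of registering a service and a route through Kong’s Admin API using Python’s requests library; the upstream URL and path are placeholders, not anything specific to Kong Cloud.

```python
import requests

ADMIN = "http://localhost:8001"  # Kong's default Admin API address

# Register an upstream service behind the gateway (URL is a placeholder).
requests.post(f"{ADMIN}/services", data={
    "name": "orders",
    "url": "http://orders.internal:8080",
}).raise_for_status()

# Expose it on a public path; clients now call the gateway, which can
# apply auth, rate limiting and other plugins before proxying upstream.
requests.post(f"{ADMIN}/services/orders/routes", data={
    "paths[]": "/orders",
}).raise_for_status()
```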

“Customers running Kong Enterprise on-prem and self-managed are often running it multi-cloud. They are running it from AWS to Azure, Google Cloud, Pivotal Cloud Foundry or bare metal. It’s all over the place,” Kong co-founder, president and CEO Augusto Marietti told me. “But not all of them have massive engineering organizations, so Kong multi-cloud is our managed version of Kong as a service that can run on any cloud.”

With Kong Cloud, the company monitors and manages the service, giving enterprises an end-to-end API platform and developer portal. The company handles updates and all the other operational tasks. In terms of the overall functionality (think governance, security features etc.), this is essentially Kong Enterprise. Indeed, Marietti stressed that the two are meant to be one-to-one compatible, in part because he expects that some companies will use both versions, depending on their teams’ needs.

Marietti told me that Kong now has more than 85 employees and more than 100 enterprise customers. These include the likes of Zillow, Soulcycle and Expedia. Year-over-year, the company tells me, its bookings have grown 9x and the Kong open-source tool has now been downloaded more than 54 million times.

The company rebranded as Kong in October 2017, in part to signify that its ongoing focus would be on microservices in the enterprise and the Kong tool, which it open sourced in 2015. Ahead of its rebranding exercise, Mashape/Kong sold off its API marketplace to RapidAPI. The marketplace was the company’s first product — and Kong was in part developed to support it — but in the end, the company decided that its focus was going to be on Kong itself. That move seems to be paying off now, as enterprises are moving to adopt microservices and often need partners to do so.

Dec 6, 2018

Contentful raises $33.5M for its headless CMS platform

Contentful, a Berlin- and San Francisco-based startup that provides content management infrastructure for companies like Spotify, Nike, Lyft and others, today announced that it has raised a $33.5 million Series D funding round led by Sapphire Ventures, with participation from OMERS Ventures and Salesforce Ventures, as well as existing investors General Catalyst, Benchmark, Balderton Capital and Hercules. In total, the company has now raised $78.3 million.

It’s been less than a year since the company raised its Series C round and, as Contentful co-founder and CEO Sascha Konietzke told me, the company didn’t really need to raise right now. “We had just raised our last round about a year ago. We still had plenty of cash in our bank account and we didn’t need to raise as of now,” said Konietzke. “But we saw a lot of economic uncertainty, so we thought it might be a good moment in time to recharge. And at the same time, we already had some interesting conversations ongoing with Sapphire [formerly SAP Ventures] and Salesforce. So we saw the opportunity to add more funding and also start getting into a tight relationship with both of these players.”

The original plan for Contentful was to focus almost exclusively on mobile. As it turns out, though, the company’s customers also wanted to use the service to handle their web-based applications, and these days Contentful happily supports both. “What we’re seeing is that everything is becoming an application,” he told me. “We started with native mobile application, but even the websites nowadays are often an application.”
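
In practice, “headless” means the content lives behind an API and every application, web or mobile, pulls what it needs. A minimal sketch with Contentful’s Python SDK, where the space ID, token and ‘article’ content type are placeholders:

```python
import contentful

# Space ID and delivery token are placeholders for your own values.
client = contentful.Client("space_id", "delivery_api_token")

# Fetch published entries of a hypothetical 'article' content type; the
# same content could back a website, a native app or anything else.
for entry in client.entries({"content_type": "article", "limit": 10}):
    print(entry.title)
```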

In its early days, Contentful focused only on developers. Now, however, that’s changing, and having these connections to large enterprise players like SAP and Salesforce surely isn’t going to hurt the company as it looks to bring on larger enterprise accounts.

Currently, the company’s focus is very much on Europe and North America, which account for about 80 percent of its customers. For now, Contentful plans to continue to focus on these regions, though it obviously supports customers anywhere in the world.

Contentful only exists as a hosted platform. As of now, the company doesn’t have any plans for offering a self-hosted version, though Konietzke noted that he does occasionally get requests for this.

What the company is planning to do in the near future, though, is to enable more integrations with existing enterprise tools. “Customers are asking for deeper integrations into their enterprise stack,” Konietzke said. “And that’s what we’re beginning to focus on and where we’re building a lot of capabilities around that.” In addition, support for GraphQL and an expanded rich text editing experience are coming up. The company also recently launched a new editing experience.

Nov 27, 2018

Red Hat acquires hybrid cloud data management service NooBaa

Red Hat is in the process of being acquired by IBM for a massive $34 billion, but that deal hasn’t closed yet and, in the meantime, Red Hat is still running independently and making its own acquisitions, too. As the company today announced, it has acquired Tel Aviv-based NooBaa, an early-stage startup that helps enterprises manage their data more easily and access their various data providers through a single API.

NooBaa’s technology makes it a good fit for Red Hat, which has recently emphasized its ability to help enterprises more effectively manage their hybrid and multicloud deployments. At its core, NooBaa is all about bringing together various data silos, which should make it a good fit in Red Hat’s portfolio. With OpenShift and the OpenShift Container Platform, as well as its Ceph Storage service, Red Hat already offers a range of hybrid cloud tools, after all.

“NooBaa’s technologies will augment our portfolio and strengthen our ability to meet the needs of developers in today’s hybrid and multicloud world,” writes Ranga Rangachari, the VP and general manager for storage and hyperconverged infrastructure at Red Hat, in today’s announcement. “We are thrilled to welcome a technical team of nine to the Red Hat family as we work together to further solidify Red Hat as a leading provider of open hybrid cloud technologies.”

While virtually all of Red Hat’s technology is open source, NooBaa’s code is not. The company says that it plans to open source NooBaa’s technology in due time, though the exact timeline has yet to be determined.

NooBaa was founded in 2013. The company has raised some venture funding from the likes of Jerusalem Venture Partners and OurCrowd, with a strategic investment from Akamai Capital thrown in for good measure. The company never disclosed the size of that round, though, and neither Red Hat nor NooBaa are disclosing the financial terms of the acquisition.

Nov 6, 2018

VMware acquires Heptio, the startup founded by 2 co-founders of Kubernetes

During its big customer event in Europe, VMware announced another acquisition to step up its game in helping enterprises build and run containerised, Kubernetes-based architectures: it has acquired Heptio, a startup out of Seattle that was co-founded by Joe Beda and Craig McLuckie, who were two of the three people who co-created Kubernetes back at Google in 2014 (it has since been open sourced).

Beda and McLuckie and their team will all be joining VMware in the transaction.

Terms of the deal are not being disclosed — VMware said in a release that they are not material to the company — but as a point of reference, when Heptio last raised money — a $25 million Series B in 2017, with investors including Lightspeed, Accel and Madrona — it was valued at $117 million post-money, according to data from PitchBook.

Given the pedigree of Heptio’s founders, this is a signal of the big bet that VMware is taking on Kubernetes, and of the belief that it will become an increasingly important cornerstone in how enterprises run their businesses. The larger company already works with 500,000+ customers globally, and 75,000 partners. It’s not clear how many customers Heptio worked with, but they included large, tech-forward businesses like Yahoo Japan.

It’s also another endorsement of the ongoing rise of open source and its role in cloud architectures, a paradigm that got its biggest boost at the end of October with IBM’s acquisition of Red Hat, one of the biggest tech acquisitions of all time at $34 billion.

Heptio provides professional services for enterprises that are adopting or already use Kubernetes, offering training, support and open-source projects for managing specific aspects of Kubernetes and related container clusters. For VMware, this deal is about using that expertise to expand the business funnel and margins for Kubernetes within its wider cloud, on-premises and hybrid storage and computing services.

“Kubernetes is emerging as an open framework for multi-cloud infrastructure that enables enterprise organizations to run modern applications,” said Paul Fazzone, senior vice president and general manager, Cloud Native Apps Business Unit, VMware, in a statement. “Heptio products and services will reinforce and extend VMware’s efforts with PKS to establish Kubernetes as the de facto standard for infrastructure across clouds upon closing. We are thrilled that the Heptio team led by Craig and Joe will be joining VMware to help us guide customers as they move to a multi-cloud world.”

VMware and its Pivotal business already offer Kubernetes-related services by way of PKS, which lets organizations run cloud-agnostic apps. Heptio will become a part of that wider portfolio.

“The team at Heptio has been focused on Kubernetes, creating products that make it easier to manage multiple clusters across multiple clouds,” said Craig McLuckie, CEO and co-founder of Heptio. “And now we will be tapping into VMware’s cloud native resources and proven ability to execute, amplifying our impact. VMware’s interest in Heptio is a recognition that there is so much innovation happening in open source. We are jointly committed to contribute even more to the community—resources, ideas and support.”

VMware has made some 33 acquisitions overall, according to Crunchbase, but this appears to have been the first specifically to boost its position in Kubernetes.

The deal is expected to close by fiscal Q4 2019, VMware said.

Oct 28, 2018

Forget Watson, the Red Hat acquisition may be the thing that saves IBM

With its latest $34 billion acquisition of Red Hat, IBM may have found something more elementary than “Watson” to save its flagging business.

Though the acquisition of Red Hat is by no means a guaranteed victory for the Armonk, N.Y.-based computing company, which has had more downs than ups over the past five years, it seems to be a better bet for “Big Blue” than an artificial intelligence program that was always more hype than reality.

Indeed, commentators are already noting that this may be a case where IBM finally hangs up the Watson hat and returns to the enterprise software and services business that has always been its core competency (albeit one that has been weighted far more heavily on consulting services — to the detriment of the company’s business).

Watson, the business division focused on artificial intelligence whose public claims were always more marketing than actually market-driven, has not performed as well as IBM had hoped and investors were losing their patience.

Critics — including analysts at the investment bank Jefferies (as early as one year ago) — were skeptical of Watson’s ability to deliver IBM from its business woes.

As we wrote at the time:

Jefferies pulls from an audit of a partnership between IBM Watson and MD Anderson as a case study for IBM’s broader problems scaling Watson. MD Anderson cut its ties with IBM after wasting $60 million on a Watson project that was ultimately deemed “not ready for human investigational or clinical use.”

The MD Anderson nightmare doesn’t stand on its own. I regularly hear from startup founders in the AI space that their own financial services and biotech clients have had similar experiences working with IBM.

The narrative isn’t the product of any single malfunction, but rather the result of overhyped marketing, deficiencies in operating with deep learning and GPUs and intensive data preparation demands.

That’s not the only trouble IBM has had with Watson’s healthcare results. Earlier this year, the online medical journal Stat reported that Watson was giving clinicians recommendations for cancer treatments that were “unsafe and incorrect” — based on the training data it had received from the company’s own engineers and doctors at Sloan-Kettering who were working with the technology.

All of these woes were reflected in the company’s latest earnings call, where it reported falling revenues primarily from the Cognitive Solutions business, which includes Watson’s artificial intelligence and supercomputing services. Though IBM’s chief financial officer pointed to “mid-to-high” single-digit growth from Watson’s health business in the quarter, the transaction processing software business fell by 8% and the company’s suite of hosted software services is basically an afterthought for businesses gravitating to Microsoft, Alphabet and Amazon for cloud services.

To be sure, Watson is only one of the segments that IBM had been hoping to tap for its future growth; and while it was a huge investment area for the company, the company always had its eyes partly fixed on the cloud computing environment as it looked for areas of growth.

It’s this area of cloud computing where IBM hopes that Red Hat can help it gain ground.

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” said Ginni Rometty, IBM Chairman, President and Chief Executive Officer, in a statement announcing the acquisition. “IBM will become the world’s number-one hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.”

The acquisition also puts an incredible amount of marketing power behind Red Hat’s various open source services business — giving all of those IBM project managers and consultants new projects to pitch and maybe juicing open source software adoption a bit more aggressively in the enterprise.

As Red Hat chief executive Jim Whitehurst told TheStreet in September, “The big secular driver of Linux is that big data workloads run on Linux. AI workloads run on Linux. DevOps and those platforms, almost exclusively Linux,” he said. “So much of the net new workloads that are being built have an affinity for Linux.”

Oct 22, 2018

Pulumi raises $15M for its infrastructure as code platform

Pulumi, a Seattle-based startup that lets developers specify and manage their cloud infrastructure using the programming language they already know, today announced that it has raised a $15 million Series A funding round led by Madrona Venture Group. Tola Capital also participated in this round and Tola managing director Sheila Gulati will get a seat on the Pulumi board, where she’ll join former Microsoft exec and Madrona managing director S. Somasegar.
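
The pitch is that infrastructure becomes ordinary code. As a rough illustration using Pulumi’s Python SDK (the bucket name is just an example), a complete program that provisions an S3 bucket looks roughly like this:

```python
import pulumi
import pulumi_aws as aws

# Declare an S3 bucket as an ordinary Python object; running `pulumi up`
# computes the diff against the current stack and creates or updates it.
bucket = aws.s3.Bucket("example-bucket")

# Export the generated bucket name as a stack output.
pulumi.export("bucket_name", bucket.id)
```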

In addition to announcing its raise, the company also today launched its commercial platform, which builds upon Pulumi’s open-source work.

“Since launch, we’ve had a lot of inbound interest, both on the community side — so you’re seeing a lot of open source contributions, and they’re really impactful contributions, including, for example, community-led support for VMware and OpenStack,” Pulumi co-founder and CEO Eric Rudder told me. “So we’re actually seeing a lot of vibrancy in the open-source community. And at the same time, we have a lot of inbound interest on the commercial side of things. That is, teams wanting to operationalize Pulumi and put it into production and wanting to purchase the product.”

So to meet that opportunity, the team decided to raise a new round to scale out both its team and product. That product now includes a commercial offering of Pulumi, the company’s new ‘team edition.’ This enterprise version includes support for unlimited users, integrations with third-party tools like GitHub and Slack, role-based access controls, onboarding and 12×5 support. Like the free, single-user community edition, the team edition is delivered as a SaaS product and supports deployments to all of the major public and private cloud platforms.

“We’re all seeing the same things — the cloud is a foregone conclusion,” Tola’s Gulati told me when I asked her why she was investing in Pulumi. “Enterprises have a lot of complexity as they come over to the cloud. And so dealing with VMs, containers and serverless is a reality for these enterprises. And the ability to do that in a way that there’s a single toolset, letting developers use real programming languages, letting them exist where they have skills today, but then allows them to bring the best of cloud into their organization. Frankly, Pulumi really has thought through the existing complexity, the developer reality, the IT and developer relationship from both a runtime and deployment perspective. And they are the best that we’ve seen.”

Pulumi will, of course, continue to develop its open-source tools, too. Indeed, the company noted that it would invest heavily in building out the community around its tools. The team told me that it is already seeing a lot of momentum, but with the new funding, it’ll redouble its efforts.

With the new funding, the company will also work on making the onboarding process much easier, up to the point where it will become a full self-serve experience. But that doesn’t work for most large organizations, so Pulumi will also invest heavily in its pre- and post-sales organization. Right now, like most companies at this stage, the team is mostly composed of engineers.

Oct 10, 2018

Google expands its identity management portfolio for businesses and developers

Over the course of the last year, Google has launched a number of services that bring to other companies the same BeyondCorp model for managing access to a company’s apps and data without a VPN that it uses internally. Google’s flagship product for this is Cloud Identity, which is essentially Google’s BeyondCorp, but packaged for other businesses.

Today, at its Cloud Next event in London, it’s expanding this portfolio of Cloud Identity services with three new products and features that let developers adopt this way of thinking about identity and access in their own apps, and that make it easier for enterprises to adopt Cloud Identity and integrate it with their existing solutions.

The highlight of today’s announcements, though, is Cloud Identity for Customers and Partners, which is now in beta. While Cloud Identity is very much meant for employees at a larger company, this new product allows developers to build into their own applications the same kind of identity and access management services.

“Cloud Identity is how we protect our employees and you protect your workforce,” Karthik Lakshminarayanan, Google’s product management director for Cloud Identity, said in a press briefing ahead of the announcement. “But what we’re increasingly finding is that developers are building applications and are also having to deal with identity and access management. So if you’re building an application, you might be thinking about accepting usernames and passwords, or you might be thinking about accepting social media as an authentication mechanism.”

This new service allows developers to build in multiple ways of authenticating the user, including through email and password, Twitter, Facebook, their phones, SAML, OIDC and others. Google then handles all of that authentication work. Google will offer both client-side (web, iOS and Android) and server-side SDKs (with support for Node.js, Java, Python and other languages).
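
As a rough sketch of the server side of that flow, assuming a backend that receives the ID token a client SDK produces after sign-in, verification might look like this with the Firebase Admin SDK (treat the exact SDK choice as an assumption; Google’s consumer-facing identity tooling builds on Firebase Authentication):

```python
import firebase_admin
from firebase_admin import auth

# Assumes a service account is configured via GOOGLE_APPLICATION_CREDENTIALS.
firebase_admin.initialize_app()

def get_user_id(id_token: str) -> str:
    """Verify the ID token sent by the client SDK and return the user's UID."""
    decoded = auth.verify_id_token(id_token)
    return decoded["uid"]
```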

“They no longer have to worry about getting hacked and their passwords and their user credentials getting compromised,” added Lakshminarayanan. “They can now leave that to Google and the exact same scale that we have, the security that we have, the reliability that we have — that we are using to protect employees in the cloud — can now be used to protect that developer’s applications.”

In addition to Cloud Identity for Customers and Partners, Google is also launching a new feature for the existing Cloud Identity service, which brings support for traditional LDAP-based applications and IT services like VPNs to Cloud Identity. This feature is, in many ways, an acknowledgment that most enterprises can’t simply turn on a new security paradigm like BeyondCorp/Cloud Identity. With support for secure LDAP, these companies can still make it easy for their employees to connect to these legacy applications while still using Cloud Identity.

“As much as Google loves the cloud, a mantra that Google has is ‘let’s meet customers where they are.’ We know that customers are embracing the cloud, but we also know that they have a massive, massive footprint of traditional applications,” Lakshminarayanan explained. He noted that most enterprises today run two solutions: one that provides access to their on-premise applications and another that provides the same services for their cloud applications. Cloud Identity now natively supports access to many of these legacy applications, including Aruba Networks (HPE), Itopia, JAMF, Jenkins (Cloudbees), OpenVPN, Papercut, pfSense (Netgate), Puppet, Sophos and Splunk. Indeed, as Google notes, virtually any application that supports LDAP over SSL can work with this new service.

Finally, the third new feature Google is launching today is context-aware access for those enterprises that already use its Cloud Identity-Aware Proxy (yes, those names are all a mouthful). The idea here is to help enterprises provide access to cloud resources based on the identity of the user and the context of the request — all without using a VPN. That’s pretty much the promise of BeyondCorp in a nutshell, and this implementation, which is now in beta, allows businesses to manage access based on the user’s identity and a device’s location and its security status, for example. Using this new service, IT managers could restrict access to one of their apps to users in a specific country, for example.
