Oct
11
2018
--

New Relic acquires Belgium’s CoScale to expand its monitoring of Kubernetes containers and microservices

New Relic, which provides analytics and monitoring for companies’ internal and external-facing apps and services to help optimise their performance, is making an acquisition today as it continues to expand a newer area of its business: containers and microservices. The company has announced that it has purchased CoScale, a provider of monitoring for containers and microservices, with a specific focus on Kubernetes.

Terms of the deal — which will include the team and technology — are not being disclosed, as it will not have a material impact on New Relic’s earnings. The larger company is traded on the NYSE (ticker: NEWR), has been on a strong upswing over the last two years, and its current market cap is around $4.6 billion.

Originally founded in Belgium, CoScale had raised $6.4 million and was last valued at $7.9 million, according to PitchBook. Investors included Microsoft (via its ScaleUp accelerator) and two Belgian firms, PMV and the Qbic Fund.

“We are thrilled to bring CoScale’s knowledge and deeply technical team into the New Relic fold,” noted Ramon Guiu, senior director of product management at New Relic. “The CoScale team members joining New Relic will focus on incorporating CoScale’s capabilities and experience into continuing innovations for the New Relic platform.”

The deal underscores how New Relic has had to shift in the last couple of years: when the company was founded a decade ago, application monitoring was a relatively simple task, with the web and a fixed set of services defining the limits of what needed attention. But services, apps and functions have become increasingly complex, tapping data stored across a range of locations and devices, and processing everything generates a lot of computing demand.

New Relic first added container and microservices monitoring to its stack in 2016. That’s a somewhat late arrival to the area, but New Relic CEO Lew Cirne believes it came at just the right time, dovetailing New Relic’s changes with wider shifts in the market.

“We think those changes have actually been an opportunity for us to further differentiate and further strengthen our thesis that the New Relic way is really the most logical way to address this,” he told my colleague Ron Miller last month. As Ron wrote, Cirne’s take is that New Relic has always been centered on the code, as opposed to the infrastructure where it’s delivered, and that has helped it make adjustments as the delivery mechanisms have changed.

New Relic already provides monitoring for Kubernetes, Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (EKS), Microsoft Azure Kubernetes Service (AKS) and Red Hat OpenShift, and the idea is that CoScale will help it ramp up across that range, while also adding Docker and OpenShift to the mix, as well as offering new services down the line to serve the DevOps community.

“The visions of New Relic and CoScale are remarkably well aligned, so our team is excited that we get to join New Relic and continue on our journey of helping companies innovate faster by providing them visibility into the performance of their modern architectures,” said CoScale CEO Stijn Polfliet, in a statement. “[Co-founder] Fred [Ryckbosch] and I feel like this is such an exciting space and time to be in this market, and we’re thrilled to be teaming up with the amazing team at New Relic, the leader in monitoring modern applications and infrastructure.”

Oct
10
2018
--

Google’s Apigee officially launches its API monitoring service

It’s been about two years since Google acquired API management service Apigee. Today, the company is announcing new extensions that make it easier to integrate the service with a number of Google Cloud services, as well as the general availability of the company’s API monitoring solution.

Apigee API monitoring allows operations teams to get more insight into how their APIs are performing. The idea here is to make it easy for these teams to figure out when there’s an issue and what its root cause is by giving them very granular data. “APIs are now part of how a lot of companies are doing business,” Ed Anuff, Apigee’s former SVP of product strategy and now Google’s product and strategy lead for the service, told me. “So that tees up the need for API monitoring.”

Anuff also told me that he believes that it’s still early days for enterprise API adoption — but that also means that Apigee is currently growing fast as enterprise developers now start adopting modern development techniques. “I think we’re actually still pretty early in enterprise adoption of APIs,” he said. “So what we’re seeing is a lot more customers going into full production usage of their APIs. A lot of what we had seen before was people using it for maybe an experiment or something that they were doing with a couple of partners.” He also attributed part of the recent growth to customers launching more mobile applications where APIs obviously form the backbone of much of the logic that drives those apps.

API Monitoring was already available as a beta, but it’s now generally available to all Apigee customers.

Given that it’s now owned by Google, it’s no surprise that Apigee is also launching deeper integrations with Google’s cloud services — specifically services like BigQuery, Cloud Firestore, Pub/Sub, Cloud Storage and Spanner. Some Apigee customers are already using this to store every message passed through their APIs to create extensive logs, often for compliance reasons. Others use Cloud Firestore to personalize content delivery for their web users or collect data from their APIs and then send it to BigQuery for analysis.
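As a rough sketch of that logging pattern (not Apigee’s actual extension code), the Python snippet below publishes one record per API call to Cloud Pub/Sub, from where a subscriber or a Dataflow pipeline can land it in a BigQuery table; the project, topic and field names are all hypothetical:

```python
import json

from google.cloud import pubsub_v1

# Hypothetical identifiers, for illustration only.
PROJECT_ID = "my-project"
TOPIC_ID = "api-call-log"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

# One record per API call; a downstream subscriber or Dataflow job
# can write these into BigQuery for compliance logging and analysis.
record = {
    "api_proxy": "orders-v1",
    "status_code": 200,
    "latency_ms": 42,
}
future = publisher.publish(topic_path, data=json.dumps(record).encode("utf-8"))
print(f"published message {future.result()}")
```

The point of the managed extensions is that Apigee handles this plumbing for you; the sketch just shows the kind of work happening under the hood.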

Anuff stressed that Apigee remains just as open to third-party integrations as it always was. That is part of the core promise of APIs, after all.

Oct
10
2018
--

Google expands its identity management portfolio for businesses and developers

Over the course of the last year, Google has launched a number of services that bring to other companies the same BeyondCorp model for managing access to a company’s apps and data without a VPN that it uses internally. Google’s flagship product for this is Cloud Identity, which is essentially Google’s BeyondCorp, but packaged for other businesses.

Today, at its Cloud Next event in London, Google is expanding this portfolio of Cloud Identity services with three new products and features: they enable developers to adopt this way of thinking about identity and access for their own apps, and they make it easier for enterprises to adopt Cloud Identity and make it work with their existing solutions.

The highlight of today’s announcements, though, is Cloud Identity for Customers and Partners, which is now in beta. While Cloud Identity is very much meant for employees at a larger company, this new product allows developers to build the same kind of identity and access management services into their own applications.

“Cloud Identity is how we protect our employees and you protect your workforce,” Karthik Lakshminarayanan, Google’s product management director for Cloud Identity, said in a press briefing ahead of the announcement. “But what we’re increasingly finding is that developers are building applications and are also having to deal with identity and access management. So if you’re building an application, you might be thinking about accepting usernames and passwords, or you might be thinking about accepting social media as an authentication mechanism.”

This new service allows developers to build in multiple ways of authenticating the user, including through email and password, Twitter, Facebook, their phones, SAML, OIDC and others. Google then handles all of that authentication work. Google will offer both client-side (web, iOS and Android) and server-side SDKs (with support for Node.js, Java, Python and other languages).
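Google hasn’t published the full SDK surface here, but the service builds on the same foundations as Firebase Authentication, so a minimal server-side verification step might look roughly like this in Python, assuming the Firebase Admin SDK’s API applies:

```python
import firebase_admin
from firebase_admin import auth

# Initialize once per process, using application-default credentials.
firebase_admin.initialize_app()

def get_verified_uid(id_token: str) -> str:
    """Verify an ID token minted on the client (via email/password,
    SAML, OIDC or a social provider) and return the user's UID.

    Google runs the actual sign-in flows; the app server only has to
    validate the resulting token.
    """
    decoded = auth.verify_id_token(id_token)
    return decoded["uid"]
```

That is the division of labor Lakshminarayanan describes: credentials and sign-in flows live with Google, and the application only ever sees a verified identity.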

“They no longer have to worry about getting hacked and their passwords and their user credentials getting compromised,” added Lakshminarayanan. “They can now leave that to Google, and the exact same scale that we have, the security that we have, the reliability that we have — that we are using to protect employees in the cloud — can now be used to protect that developer’s applications.”

In addition to Cloud Identity for Customers and Partners, Google is also launching a new feature for the existing Cloud Identity service that brings support for traditional LDAP-based applications and IT services like VPNs. This feature is, in many ways, an acknowledgment that most enterprises can’t simply turn on a new security paradigm like BeyondCorp/Cloud Identity overnight. With support for secure LDAP, these companies can still make it easy for their employees to connect to these legacy applications while still using Cloud Identity.

“As much as Google loves the cloud, a mantra that Google has is ‘let’s meet customers where they are.’ We know that customers are embracing the cloud, but we also know that they have a massive, massive footprint of traditional applications,” Lakshminarayanan explained. He noted that most enterprises today run two solutions: one that provides access to their on-premises applications and another that provides the same services for their cloud applications. Cloud Identity now natively supports access to many of these legacy applications, including Aruba Networks (HPE), Itopia, JAMF, Jenkins (CloudBees), OpenVPN, Papercut, pfSense (Netgate), Puppet, Sophos and Splunk. Indeed, as Google notes, virtually any application that supports LDAP over SSL can work with this new service.
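To give a sense of what that looks like from a legacy application’s side, here is a minimal Python sketch using the ldap3 library; it assumes Google’s documented ldap.google.com endpoint and certificate-based client authentication, and the certificate paths, base DN and search filter are placeholders:

```python
import ssl

from ldap3 import Connection, Server, Tls

# The client certificate and key are generated per app in the Cloud
# Identity admin console; the file paths here are placeholders.
tls = Tls(
    local_private_key_file="ldap-client.key",
    local_certificate_file="ldap-client.crt",
    validate=ssl.CERT_REQUIRED,
)
server = Server("ldap.google.com", port=636, use_ssl=True, tls=tls)

# Some configurations also require a generated username/password pair
# in addition to the certificate.
with Connection(server, auto_bind=True) as conn:
    # The legacy app keeps issuing ordinary LDAP searches.
    conn.search(
        "dc=example,dc=com",
        "(&(objectClass=person)(uid=jdoe))",
        attributes=["cn", "mail"],
    )
    print(conn.entries)
```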

Finally, the third new feature Google is launching today is context-aware access for those enterprises that already use its Cloud Identity-Aware Proxy (yes, those names are all a mouthful). The idea here is to help enterprises provide access to cloud resources based on the identity of the user and the context of the request — all without using a VPN. That’s pretty much the promise of BeyondCorp in a nutshell, and this implementation, which is now in beta, allows businesses to manage access based on the user’s identity and a device’s location and security status. Using this new service, IT managers could, for example, restrict access to one of their apps to users in a specific country.


Oct
10
2018
--

Google Cloud expands its networking feature with Cloud NAT

It’s a busy week for news from Google Cloud, which is hosting its Next event in London. Today, the company used the event to launch a number of new networking features. The marquee launch today is Cloud NAT, a new service that makes it easier for developers to build cloud-based services that don’t have public IP addresses and can only be accessed from applications within a company’s virtual private cloud.

As Google notes, building this kind of setup was already possible, but it wasn’t easy. Obviously, this is a pretty common use case, though, so with Cloud NAT, Google now offers a fully managed service that handles all the network address translation (hence the NAT) and provides access to these private instances behind the Cloud NAT gateway.

Cloud NAT supports Google Compute Engine virtual machines as well as Google Kubernetes Engine containers, and offers both a manual mode where developers can specify their IPs and an automatic mode where IPs are automatically allocated.
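Since Cloud NAT hangs off a Cloud Router in the Compute Engine API, configuring it programmatically might look roughly like the following Python sketch using the google-api-python-client; the project, region and router names are hypothetical:

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# AUTO_ONLY lets Google allocate the NAT IPs (the automatic mode);
# MANUAL_ONLY pins them to addresses you reserve yourself.
nat_config = {
    "name": "outbound-nat",
    "natIpAllocateOption": "AUTO_ONLY",
    "sourceSubnetworkIpRangesToNat": "ALL_SUBNETWORKS_ALL_IP_RANGES",
}

compute.routers().patch(
    project="my-project",
    region="europe-west1",
    router="my-router",
    body={"nats": [nat_config]},
).execute()
```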

Also new in today’s release is Firewall Rules Logging, which is now in beta. Using this feature, admins can audit, verify and analyze the effects of their firewall rules. That means when there are repeated connection attempts that the firewall blocked, you can now analyze those and see whether somebody was up to no good or whether somebody misconfigured the firewall. Because the data is only delayed by about five seconds, the service provides near real-time access to it — and you can obviously tie this in with other services like Stackdriver Logging, Cloud Pub/Sub and BigQuery to create alerts and further analyze the data.
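For instance, once the blocked-connection logs have been exported to BigQuery, finding the noisiest sources is a single query; in the Python sketch below, the table and field names are hypothetical and depend on how the export is configured:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Table and field names are hypothetical; they depend on the
# Stackdriver Logging export configuration.
query = """
    SELECT jsonPayload.connection.src_ip AS src_ip,
           COUNT(*) AS blocked_attempts
    FROM `my-project.firewall_logs.compute_firewall`
    WHERE jsonPayload.disposition = 'DENIED'
    GROUP BY src_ip
    ORDER BY blocked_attempts DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.src_ip, row.blocked_attempts)
```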

Also new today are managed TLS certificates for HTTPS load balancers. The idea here is to take the hassle out of managing TLS certificates (the kind of certificates that ensure that your users’ browsers create a secure connection to your app) when there is a load balancer in play. This feature, too, is now in beta.

Oct
10
2018
--

Nvidia launches Rapids to help bring GPU acceleration to data analytics

Nvidia, together with partners like IBM, HPE, Oracle, Databricks and others, is launching a new open-source platform for data science and machine learning today. Rapids, as the company is calling it, is all about making it easier for large businesses to use the power of GPUs to quickly analyze massive amounts of data and then use that to build machine learning models.

“Businesses are increasingly data-driven,” Nvidia’s VP of Accelerated Computing Ian Buck told me. “They sense the market and the environment and the behavior and operations of their business through the data they’ve collected. We’ve just come through a decade of big data and the output of that data is using analytics and AI. But most of it is still using traditional machine learning to recognize complex patterns, detect changes and make predictions that directly impact their bottom line.”

The idea behind Rapids then is to work with the existing popular open-source libraries and platforms that data scientists use today and accelerate them using GPUs. Rapids integrates with these libraries to provide accelerated analytics, machine learning and — in the future — visualization.

Rapids is based on Python, Buck noted; it has interfaces that are similar to Pandas and Scikit, two very popular machine learning and data analysis libraries, and it’s based on Apache Arrow for in-memory database processing. It can scale from a single GPU to multiple nodes, and IBM notes that the platform can achieve improvements of up to 50x for some specific use cases when compared to running the same algorithms on CPUs (though that’s not all that surprising, given what we’ve seen from other GPU-accelerated workloads in the past).
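To make that concrete, here is a minimal sketch of what analysis code looks like under those Pandas- and Scikit-style interfaces; the input file and column names are hypothetical:

```python
import cudf  # GPU DataFrame library with a Pandas-like interface
from cuml.cluster import KMeans  # Scikit-learn-style estimators on the GPU

# Everything below runs on the GPU, but reads like ordinary Pandas.
df = cudf.read_csv("transactions.csv")
per_customer = df.groupby("customer_id")["amount"].sum()
print(per_customer.sort_values(ascending=False).head(10))

# The estimator API mirrors Scikit-learn's fit/predict conventions.
clusters = KMeans(n_clusters=5).fit_predict(df[["amount", "quantity"]])
```

That interface compatibility is the whole pitch: existing data science code ports over with few changes and picks up the GPU speedup.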

Buck noted that Rapids is the result of a multi-year effort to develop a rich enough set of libraries and algorithms, get them running well on GPUs and build the relationships with the open-source projects involved.

“It’s designed to accelerate data science end-to-end,” Buck explained. “From the data prep to machine learning and for those who want to take the next step, deep learning. Through Arrow, Spark users can easily move data into the Rapids platform for acceleration.”

Indeed, Spark is surely going to be one of the major use cases here, so it’s no wonder that Databricks, the company founded by the team behind Spark, is one of the early partners.

“We have multiple ongoing projects to integrate Spark better with native accelerators, including Apache Arrow support and GPU scheduling with Project Hydrogen,” said Spark founder Matei Zaharia in today’s announcement. “We believe that RAPIDS is an exciting new opportunity to scale our customers’ data science and AI workloads.”

Nvidia is also working with Anaconda, BlazingDB, PyData, Quansight and scikit-learn, as well as Wes McKinney, the head of Ursa Labs and the creator of Apache Arrow and Pandas.

Another partner is IBM, which plans to bring Rapids support to many of its services and platforms, including its PowerAI tools for running data science and AI workloads on GPU-accelerated Power9 servers, IBM Watson Studio and Watson Machine Learning and the IBM Cloud with its GPU-enabled machines. “At IBM, we’re very interested in anything that enables higher performance, better business outcomes for data science and machine learning — and we think Nvidia has something very unique here,” Rob Thomas, the GM of IBM Analytics, told me.

“The main benefit to the community is that through an entirely free and open-source set of libraries that are directly compatible with the existing algorithms and subroutines that they’re used to — they now get access to GPU-accelerated versions of them,” Buck said. He also stressed that Rapids isn’t trying to compete with existing machine learning solutions. “Part of the reason why Rapids is open source is so that you can easily incorporate those machine learning subroutines into their software and get the benefits of it.”

Oct
09
2018
--

Cloud Foundry expands its support for Kubernetes

Not too long ago, the Cloud Foundry Foundation was all about Cloud Foundry, the open source platform as a service (PaaS) project that’s now in use by most of the Fortune 500 enterprises. This project is the Cloud Foundry Application Runtime. A year ago, the Foundation also announced the Cloud Foundry Container Runtime, which helps businesses run the Application Runtime and their container-based applications in parallel. In addition, Cloud Foundry has also long been the force behind BOSH, a tool for building, deploying and managing cloud applications.

The addition of the Container Runtime a year ago seemed to muddle the organization’s mission a bit, but now that the dust has settled, the intent here is starting to become clearer. As Cloud Foundry CTO Chip Childers told me, what enterprises are mostly using the Container Runtime for is running the pre-packaged applications they get from their vendors. “The Container Runtime — or really any deployment of Kubernetes — when used next to or in conjunction with the App Runtime, that’s where people are largely landing packaged software being delivered by an independent software vendor,” he told me. “Containers are the new CD-ROM. You just want to land it in a good orchestration platform.”

Because the Application Runtime launched well before Kubernetes was a thing, the Cloud Foundry project built its own container service, called Diego.

Today, the Cloud Foundry Foundation is launching two new Kubernetes-related projects that take the integration between the two to a new level. The first is Project Eirini, which was started by IBM and is now being worked on by Suse and SAP as well. This project has been a long time in the making and it’s something the community has expected for a while. It basically allows developers to choose between the existing Diego orchestrator and Kubernetes when it comes to deploying applications written for the Application Runtime. That’s a big deal for Cloud Foundry.

“What Eirini does, is it takes that Cloud Foundry Application Runtime — that core PaaS experience that the [Cloud Foundry] brand is so tied to and it allows the underlying Diego scheduler to be replaced with Kubernetes as an option for those use cases that it can cover,” Childers explained. He added that there are still some use cases the Diego container management system is better suited for than Kubernetes. One of those is better Windows support — something that matters quite a bit to the enterprise companies that use Cloud Foundry. Childers also noted that the multi-tenancy guarantees of Kubernetes are a bit less stringent than Diego’s.

The second new project is CF Containerization, which was initially developed by Suse. As the name implies, CF Containerization allows you to package the core Cloud Foundry Application Runtime and deploy it in Kubernetes clusters with the help of the BOSH deployment tool. This is pretty much what Suse is already using to ship its own Cloud Foundry distribution.

Clearly then, Kubernetes is becoming part and parcel of what the Cloud Foundry PaaS service will sit on top of and what developers will use to deploy the applications they write for it in the near future. At first glance, this focus on Kubernetes may look like it’s going to make Cloud Foundry superfluous, but it’s worth remembering that, at its core, the Cloud Foundry Application Runtime isn’t about infrastructure but about a developer experience and methodology that aims to manage the whole application development lifecycle. If Kubernetes can be used to help manage that infrastructure, then the Cloud Foundry project can focus on what it does best, too.

Oct
04
2018
--

GitHub gets a new and improved Jira Software Cloud integration

Atlassian’s Jira has become a standard for managing large software projects in many companies. Many of those same companies also use GitHub as their source code repository and, unsurprisingly, there has long been an official way to integrate the two. That old way, however, was often slow, limited in its capabilities and unable to cope with the large code bases that many enterprises now manage on GitHub.

Almost as if to prove that GitHub remains committed to an open ecosystem, even after the Microsoft acquisition, the company today announced a new and improved integration between the two products.

“Working with Atlassian on the Jira integration was really important for us,” GitHub’s director of ecosystem engineering Kyle Daigle told me ahead of the announcement. “Because we want to make sure that our developer customers are getting the best experience of our open platform that they can have, regardless of what tools they use.”

So a couple of months ago, the team decided to build its own Jira integration from the ground up, and it’s committed to maintaining and improving it over time. As Daigle noted, the improvements here include better performance and a better user experience.

The new integration now also makes it easier to view all the pull requests, commits and branches from GitHub that are associated with a Jira issue, search for issues based on information from GitHub and see the status of the development work right in Jira, too. And because changes in GitHub trigger an update to Jira, too, that data should remain up to date at all times.

The old Jira integration over the so-called Jira DVCS connector will be deprecated and GitHub will start prompting existing users to do the upgrade over the next few weeks. The new integration is now a GitHub app, so that also comes with all of the security features the platform has to offer.

Sep
25
2018
--

Chef launches deeper integration with Microsoft Azure

DevOps automation service Chef today announced a number of new integrations with Microsoft Azure. The news, which was announced at the Microsoft Ignite conference in Orlando, Florida, focuses on helping enterprises bring their legacy applications to Azure and ranges from the public preview of Chef Automate Managed Service for Azure to the integration of Chef’s InSpec compliance product with Microsoft’s cloud platform.

With Chef Automate as a managed service on Azure, which provides ops teams with a single tool for managing and monitoring their compliance and infrastructure configurations, developers can now easily deploy and manage Chef Automate and the Chef Server from the Azure Portal. It’s a fully managed service and the company promises that businesses can get started with using it in as little as thirty minutes (though I’d take those numbers with a grain of salt).

When those configurations need to change, Chef users on Azure can also now use the Chef Workstation with Azure Cloud Shell, Azure’s command line interface. Workstation is one of Chef’s newest products and focuses on making ad-hoc configuration changes, no matter whether the node is managed by Chef or not.

And to remain in compliance, Chef is also launching an integration of its InSpec security and compliance tools with Azure. InSpec works hand in hand with Microsoft’s new Azure Policy Guest Configuration (who comes up with these names?) and allows users to automatically audit all of their applications on Azure.

“Chef gives companies the tools they need to confidently migrate to Microsoft Azure so users don’t just move their problems when migrating to the cloud, but have an understanding of the state of their assets before the migration occurs,” said Corey Scobie, the senior vice president of products and engineering at Chef, in today’s announcement. “Being able to detect and correct configuration and security issues to ensure success after migrations gives our customers the power to migrate at the right pace for their organization.”


Sep
24
2018
--

Salesforce partners with Apple to roll deeper into mobile enterprise markets

Apple and Salesforce are both highly successful, iconic brands that like to put on a big show when they make product announcements. Today, the two companies announced they were forming a strategic partnership with an emphasis on mobile strategy, ahead of Salesforce’s enormous customer conference, Dreamforce, which starts tomorrow in San Francisco.

For Apple, which has been establishing partnerships with key enterprise brands for the last several years, today’s news is another big step toward solidifying its enterprise strategy by involving the largest enterprise SaaS vendor in the world.

“We’re forming a strategic partnership with Salesforce to change the way people work and to empower developers of all abilities to build world-class mobile apps,” Susan Prescott, vice president of markets, apps and services at Apple, told TechCrunch.


Bret Taylor, president and chief product officer at Salesforce, who came over in the Quip deal a couple of years ago, says working together, the two companies can streamline mobile development for customers. “Every single one of our customers is on mobile. They all want world-class mobile experiences, and this enables us when we’re talking to a customer about their mobile strategy, that we can be in that conversation together,” he explained.

For starters, the partnership is going to involve three main components. The two companies are going to work together to bring key iOS features such as Siri Shortcuts and integration with Apple’s Business Chat into the Salesforce mobile app. Much like the partnership between Apple and IBM, Apple and Salesforce will also work together to build industry-specific iOS apps on the Salesforce platform.

The companies are also working together on a new mobile SDK built specifically for Swift, Apple’s popular programming language. The plan is to provide a way to build Swift apps for iOS and deploy them natively on Salesforce’s Lightning platform.

The final component involves deeper integration with Trailhead, Salesforce’s education platform. That will involve a new Trailhead Mobile app on iOS, as well as adding Swift education courses to the Trailhead catalogue to help drive adoption of the mobile SDK.

While Apple has largely been perceived as a consumer-focused organization, it has benefited from the shift, over the last six or seven years, toward companies encouraging employees to bring their own devices to work. As that has happened, it has been able to sell more products and services into the enterprise, and it has partnered with a number of other well-known enterprise brands, including IBM, Cisco, SAP and GE, along with systems integrators Accenture and Deloitte.

The move gives Salesforce a formidable partner to continue its incredible growth trajectory. Just last year the company passed a $10 billion run rate, putting it in rarefied company with some of the most successful software companies in the world. In its most recent earnings call, at the end of August, it reported $3.28 billion in revenue for the quarter, placing it on a run rate of over $13 billion. Connecting with Apple could help keep that momentum growing.

The two companies will show off the partnership at Dreamforce this week. It’s a deal that has the potential to work out well for both companies, giving Salesforce a more integrated iOS experience and helping Apple increase its reach into the enterprise.

Sep
24
2018
--

Microsoft Azure gets new high-performance storage options

Microsoft Azure is getting a number of new storage options today that mostly focus on use cases where disk performance matters.

The first of these is Azure Ultra SSD Managed Disks, which are now in public preview. Microsoft says that these drives will offer “sub-millisecond latency,” which unsurprisingly makes them ideal for workloads where latency matters.

Earlier this year, Microsoft launched its Premium and Standard SSD Managed Disks offerings for Azure into preview. These ‘ultra’ SSDs represent the next tier up from the Premium SSDs, with even lower latency and higher throughput. They’ll offer up to 160,000 IOPS with less than a millisecond of read/write latency, and they’ll come in sizes ranging from 4GB to 64TB.

And speaking of Standard SSD Managed Disks, that service is now generally available after only three months in preview. To top things off, all of Azure’s storage tiers (Premium and Standard SSD, as well as Standard HDD) now offer 8, 16 and 32 TB storage capacity options.

Also new today is Azure Premium Files, which is now in preview. This, too, is an SSD-based service. Azure Files itself isn’t new, though: it offers users access to cloud storage using the standard SMB protocol. The new premium offering promises higher throughput and lower latency for these kinds of SMB operations.

