Sep 22, 2020

Microsoft challenges Twilio with the launch of Azure Communication Services

Microsoft today announced the launch of Azure Communication Services, a new set of features in its cloud that enable developers to add voice and video calling, chat and text messages to their apps, as well as old-school telephony.

The company describes the new set of services as the “first fully managed communication platform offering from a major cloud provider,” and that seems right: Google and AWS offer some of these features (the AWS notification service, for example), but not as part of a cohesive communication platform. Indeed, Azure Communication Services looks more like a competitor to the core features of Twilio or the up-and-coming MessageBird.

Over the course of the last few years, Microsoft has built up a lot of experience in this area, in large part thanks to the success of its Teams service. Unsurprisingly, that’s something Microsoft is also playing up in its announcement.

“Azure Communication Services is built natively on top of a global, reliable cloud — Azure. Businesses can confidently build and deploy on the same low latency global communication network used by Microsoft Teams to support over 5 billion meeting minutes in a single day,” writes Scott Van Vliet, corporate vice president for Intelligent Communication at the company.

Microsoft also stresses that it offers a set of additional smart services that developers can tap into to build out their communication services, including its translation tools, for example. The company also notes that its services are encrypted to meet HIPAA and GDPR standards.

Like similar services, developers access the various capabilities through a set of new APIs and SDKs.

As for the core services, the capabilities here are pretty much what you’d expect. There’s voice and video calling (and the ability to shift between them). There’s support for chat and, starting in October, users will also be able to send text messages. Microsoft says developers will be able to send these to users anywhere, positioning the offering as a global service.
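
To get a feel for the shape of those APIs, below is a minimal sketch of sending an SMS with the Python SDK. It assumes the azure-communication-sms package and its SmsClient; parameter names have differed between the preview and later releases, and the connection string and phone numbers are placeholders.

```python
# Minimal sketch: sending an SMS through Azure Communication Services with
# the (assumed) azure-communication-sms Python package. Exact parameter
# names shifted between the preview and later releases; connection string
# and phone numbers are placeholders.
from azure.communication.sms import SmsClient

# The connection string comes from the Communication Services resource
# created in the Azure portal.
sms_client = SmsClient.from_connection_string(
    "endpoint=https://<resource>.communication.azure.com/;accesskey=<key>")

results = sms_client.send(
    from_="+18005550100",   # a number provisioned through the service
    to="+18005550123",      # the recipient
    message="Your order has shipped.")

for result in results:
    print(result.message_id, result.successful)
```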

Provisioning phone numbers, too, is part of the service, and developers will be able to provision numbers for inbound and outbound calls, port existing numbers, request new ones and — most importantly for contact-center users — integrate them with existing on-premises equipment and carrier networks.

“Our goal is to meet businesses where they are and provide solutions to help them be resilient and move their business forward in today’s market,” writes Van Vliet. “We see rich communication experiences – enabled by voice, video, chat, and SMS – continuing to be an integral part in how businesses connect with their customers across devices and platforms.”

Sep 22, 2020

Microsoft brings data services to its Arc multi-cloud management service

Microsoft today launched a major update to its Arc multi-cloud service that allows Azure customers to run and manage workloads across clouds — including those of Microsoft’s competitors — and their on-premises data centers. First announced at Microsoft Ignite in 2019, Arc was always meant to not just help users manage their servers but also allow them to run data services like Azure SQL and Azure Database for PostgreSQL, close to where their data sits.

Today, the company is making good on this promise with the preview launch of Azure Arc-enabled data services with support for, as expected, Azure SQL and Azure Database for PostgreSQL.

In addition, Microsoft is making the core feature of Arc, Arc-enabled servers, generally available. These are the tools at the core of the service that let enterprises use the standard Azure Portal to manage and monitor their Windows and Linux servers across their multi-cloud and edge environments.

Image Credits: Microsoft

“We’ve always known that enterprises are looking to unlock the agility of the cloud — they love the app model, they love the business model — while balancing a need to maintain certain applications and workloads on premises,” Rohan Kumar, Microsoft’s corporate VP for Azure Data said. “A lot of customers actually have a multi-cloud strategy. In some cases, they need to keep the data specifically for regulatory compliance. And in many cases, they want to maximize their existing investments. They’ve spent a lot of CapEx.”

As Kumar stressed, Microsoft wants to meet customers where they are, without forcing them to adopt a container architecture, for example, or replace their specialized engineered appliances to use Arc.

“Hybrid is really [about] providing that flexible choice to our customers, meeting them where they are, and not prescribing a solution,” he said.

He admitted that this approach makes engineering the solution more difficult, but the team decided the baseline should be a container endpoint and nothing more. And for the most part, Microsoft packaged up the tools its own engineers were already using to run Azure services on the company’s own infrastructure to manage these services in a multi-cloud environment.

“In hindsight, it was a little challenging at the beginning, because, you can imagine, when we initially built them, we didn’t imagine that we’ll be packaging them like this. But it’s a very modern design point,” Kumar said. But the result is that supporting customers is now relatively easy because it’s so similar to what the team does in Azure, too.

Kumar noted that one of the selling points of the Azure data services is also that the version of Azure SQL is essentially evergreen, allowing customers to stop worrying about SQL Server licensing and end-of-life support questions.
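
A practical upshot of that design is that applications need no Arc-specific client code: an Arc-enabled Azure SQL instance is reached like any other SQL Server endpoint. A minimal sketch, assuming a standard pyodbc setup and placeholder connection details:

```python
# Minimal sketch: connecting to an Arc-enabled Azure SQL instance. From the
# application's point of view it behaves like any SQL Server endpoint; the
# host, database and credentials below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=arc-sql.internal.example.com,1433;"  # endpoint exposed on the Arc-managed cluster
    "DATABASE=inventory;"
    "UID=app_user;PWD=<password>;"
    "Encrypt=yes;TrustServerCertificate=yes;")

cursor = conn.cursor()
cursor.execute("SELECT @@VERSION;")
print(cursor.fetchone()[0])
conn.close()
```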

Sep 16, 2020

Pure Storage acquires data service platform Portworx for $370M

Pure Storage, the public enterprise data storage company, today announced that it has acquired Portworx, a well-funded startup that provides a cloud-native storage and data-management platform based on Kubernetes, for $370 million in cash. This marks Pure Storage’s largest acquisition to date and shows how important this market for multicloud data services has become.

Current Portworx enterprise customers include the likes of Carrefour, Comcast, GE Digital, Kroger, Lufthansa, and T-Mobile. At the core of the service is its ability to help users migrate their data and create backups. It creates a storage layer that allows developers to then access that data, no matter where it resides.
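
In Kubernetes terms, applications typically consume that storage layer through ordinary primitives. The rough sketch below uses the official Kubernetes Python client to request a volume from a Portworx-backed StorageClass; the class and claim names are placeholders.

```python
# Rough sketch: requesting Portworx-backed storage through a standard
# Kubernetes PersistentVolumeClaim. The StorageClass name "px-replicated"
# and the claim name are placeholders for whatever the cluster defines.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="px-replicated",
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```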

Pure Storage will use Portworx’s technology to expand its hybrid and multicloud services and provide Kubernetes-based data services across clouds.

Image Credits: Portworx

“I’m tremendously proud of what we’ve built at Portworx: An unparalleled data services platform for customers running mission-critical applications in hybrid and multicloud environments,” said Portworx CEO Murli Thirumale. “The traction and growth we see in our business daily shows that containers and Kubernetes are fundamental to the next-generation application architecture and thus competitiveness. We are excited for the accelerated growth and customer impact we will be able to achieve as a part of Pure.”

When the company raised its Series C round last year, Thirumale told me that Portworx had expanded its customer base by over 100% and that its bookings increased by 376% from 2018 to 2019.

“As forward-thinking enterprises adopt cloud-native strategies to advance their business, we are thrilled to have the Portworx team and their groundbreaking technology joining us at Pure to expand our success in delivering multicloud data services for Kubernetes,” said Charles Giancarlo, chairman and CEO of Pure Storage. “This acquisition marks a significant milestone in expanding our Modern Data Experience to cover traditional and cloud native applications alike.”

Aug 17, 2020

Suse contributes EiriniX to the Cloud Foundry Foundation

Suse today announced that it has contributed EiriniX, a framework for building extensions for Eirini, to the Cloud Foundry Foundation. Eirini is a technology that brings support for Kubernetes-based container orchestration to the Cloud Foundry platform-as-a-service project.

Earlier this year, Suse also contributed the KubeCF project to the foundation, which itself allows the Cloud Foundry Application Runtime — the core of Cloud Foundry — to run on top of Kubernetes.

Image Credits: Suse

“At Suse we are developing upstream first as much as possible,” said Thomas Di Giacomo, president of Engineering and Innovation at Suse. “So, after experiencing the value of contributing KubeCF to the Foundation earlier this year, we decided it would be beneficial to both the Cloud Foundry community and the EiriniX team to do it again. We have seen an uptick in contributions to and usage of KubeCF since it became a Foundation project, indicating that more organizations are investing developer time into the upstream. Contributing EiriniX to the Foundation is a surefire way to get the broader community involved.”

Suse first demonstrated EiriniX a year ago. The tool implements features like the ability to SSH into a container to debug it, or to use alternative logging solutions for KubeCF.

“There is significant value in contributing this project to the Foundation, as it ensures that other project teams looking for a similar solution to creating Extensions around Eirini will not reinvent the wheel,” said Chip Childers, executive director, Cloud Foundry Foundation. “Now that EiriniX exists within the Foundation, developers can take full advantage of its library of add-ons to Eirini and modify core features of Cloud Foundry. I’m excited to see all of the use cases for this project that have not yet been invented.” 

Aug 13, 2020

Mirantis acquires Lens, an IDE for Kubernetes

Mirantis, the company that recently bought Docker’s enterprise business, today announced that it has acquired Lens, a desktop application that the team describes as a Kubernetes-integrated development environment. Mirantis previously acquired the team behind the Finnish startup Kontena, the company that originally developed Lens.

Lens itself, though, was most recently owned by Lakend Labs, which describes itself as “a collective of cloud native compute geeks and technologists” that is “committed to preserving and making available the open-source software and products of Kontena.” Lakend open-sourced Lens a few months ago.

Image Credits: Mirantis

“The mission of Mirantis is very simple: We want to be — for the enterprise — the fastest way to [build] modern apps at scale,” Mirantis CEO Adrian Ionel told me. “We believe that enterprises are constantly undergoing this cycle of modernizing the way they build applications from one wave to the next — and we want to provide products to the enterprise that help them make that happen.”

Right now, that means a focus on helping enterprises build cloud-native applications at scale and, almost by default, that means providing these companies with all kinds of container infrastructure services.

“But there is another piece of the story that’s always been going through our minds, which is, how do we become more developer-centric and developer-focused, because, as we’ve all seen in the past 10 years, developers have become more and more in charge of what services and infrastructure they’re actually using,” Ionel explained. And that’s where the Kontena and Lens acquisitions fit in. Managing Kubernetes clusters, after all, isn’t trivial — yet now developers are often tasked with managing and monitoring how their applications interact with their company’s infrastructure.

“Lens makes it dramatically easier for developers to work with Kubernetes, to build and deploy their applications on Kubernetes, and it’s just a huge obstacle-remover for people who are turned off by the complexity of Kubernetes to get more value,” he added.

“I’m very excited to see that we found a common vision with Adrian for how to incorporate Lens and how to make life for developers more enjoyable in this cloud-native technology landscape,” Miska Kaipiainen, the former CEO of Kontena and now Mirantis’ director of Engineering, told me.

He describes Lens as an IDE for Kubernetes. While you could obviously replicate Lens’ functionality with existing tools, Kaipiainen argues that it would take 20 different tools to do this. “One of them could be for monitoring, another could be for logs. A third one is for command-line configuration, and so forth and so forth,” he said. “What we have been trying to do with Lens is that we are bringing all these technologies [together] and provide one single, unified, easy to use interface for developers, so they can keep working on their workloads and on their clusters, without ever losing focus and the context of what they are working on.”

Among other things, Lens includes a context-aware terminal, multi-cluster management capabilities that work across clouds and support for the open-source Prometheus monitoring service.

For Mirantis, Lens is a very strategic investment and the company will continue to develop the service. Indeed, Ionel said the Lens team now basically has unlimited resources.

Looking ahead, Kaipiainen said the team is looking at adding extensions to Lens through an API within the next couple of months. “Through this extension API, we are actually able to collaborate and work more closely with other technology vendors within the cloud technology landscape so they can start plugging directly into the Lens UI and visualize the data coming from their components, so that will make it very powerful.”

Ionel also added that the company is working on adding more features for larger software teams to Lens, which is currently a single-user product. A lot of users are already using Lens in the context of very large development teams, after all.

While the core Lens tools will remain free and open source, Mirantis will likely charge for some new features that require a centralized service for managing them. What exactly that will look like remains to be seen, though.

If you want to give Lens a try, you can download the Windows, macOS and Linux binaries here.

Aug 5, 2020

Microsoft launches Open Service Mesh

Microsoft today announced the launch of a new open-source service mesh based on the Envoy proxy. The Open Service Mesh is meant to be a reference implementation of the Service Mesh Interface (SMI) spec, a standard interface for service meshes on Kubernetes that has the backing of most of the players in this ecosystem.

The company plans to donate Open Service Mesh to the Cloud Native Computing Foundation (CNCF) to ensure that it is community-led and has open governance.

“SMI is really resonating with folks and so we really thought that there was room in the ecosystem for a reference implementation of SMI where the mesh technology was first and foremost implementing those SMI APIs and making it the best possible SMI experience for customers,” Microsoft director of partner management for Azure Compute (and CNCF board member) Gabe Monroy told me.

Image Credits: Microsoft

He also added that, because SMI provides the lowest common denominator API design, Open Service Mesh gives users the ability to “bail out” to raw Envoy if they need some more advanced features. This “no cliffs” design, Monroy noted, is core to the philosophy behind Open Service Mesh.

As for its feature set, Open Service Mesh covers the standard capabilities you’d expect from a service mesh, including securing communication between services with mutual TLS (mTLS), managing access control policies, service monitoring and more.
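
The SMI APIs that Open Service Mesh implements are ordinary Kubernetes custom resources. As a rough illustration, the sketch below uses the Kubernetes Python client to create an SMI TrafficSplit that routes 10% of traffic to a canary; the API group and version and the service names are assumptions that may differ by SMI release.

```python
# Rough sketch: creating an SMI TrafficSplit, the kind of custom resource
# Open Service Mesh consumes. The API group/version (split.smi-spec.io/
# v1alpha2) and the service names are assumptions and vary by SMI release.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

traffic_split = {
    "apiVersion": "split.smi-spec.io/v1alpha2",
    "kind": "TrafficSplit",
    "metadata": {"name": "checkout-rollout", "namespace": "shop"},
    "spec": {
        "service": "checkout",  # the root service clients address
        "backends": [
            {"service": "checkout-v1", "weight": 90},
            {"service": "checkout-v2", "weight": 10},  # canary gets 10%
        ],
    },
}

custom.create_namespaced_custom_object(
    group="split.smi-spec.io", version="v1alpha2", namespace="shop",
    plural="trafficsplits", body=traffic_split)
```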

Image Credits: Microsoft

There are plenty of other service mesh technologies in the market today, though. So why would Microsoft launch this?

“What our customers have been telling us is that solutions that are out there today, Istio being a good example, are extremely complex,” he said. “It’s not just me saying this. We see the data in the AKS support queue of customers who are trying to use this stuff — and they’re struggling right here. This is just hard technology to use, hard technology to build at scale. And so the solutions that were out there all had something that wasn’t quite right and we really felt like something lighter weight and something with more of an SMI focus was what was going to hit the sweet spot for the customers that are dabbling in this technology today.”

Monroy also noted that Open Service Mesh can sit alongside other solutions like Linkerd, for example.

A lot of pundits expected Google to also donate its Istio service mesh to the CNCF. That move didn’t materialize. “It’s funny. A lot of people are very focused on the governance aspect of this,” he said. “I think when people over-focus on that, you lose sight of how are customers doing with this technology. And the truth is that customers are not having a great time with Istio in the wild today. I think even folks who are deep in that community will acknowledge that and that’s really the reason why we’re not interested in contributing to that ecosystem at the moment.”

Jul 14, 2020

Google Cloud launches Confidential VMs

At its virtual Cloud Next ’20 event, Google Cloud today announced Confidential VMs, a new type of virtual machine that makes use of the company’s work around confidential computing to ensure that data isn’t just encrypted at rest but also while it is in memory.

“We already employ a variety of isolation and sandboxing techniques as part of our cloud infrastructure to help make our multi-tenant architecture secure,” the company notes in today’s announcement. “Confidential VMs take this to the next level by offering memory encryption so that you can further isolate your workloads in the cloud. Confidential VMs can help all our customers protect sensitive data, but we think it will be especially interesting to those in regulated industries.”

Under the hood, Confidential VMs make use of AMD’s Secure Encrypted Virtualization (SEV) feature, available in its second-generation EPYC CPUs. With SEV, data stays encrypted while it is in use, and the encryption keys that make this happen are generated in hardware and can’t be exported, which means not even Google has access to them.

Image Credits: Google

Developers who want to shift their existing VMs to a Confidential VM can do so with just a few clicks. Google notes that it built Confidential VMs on top of its Shielded VMs, which already provide protection against rootkits and other exploits.
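
For teams driving this through the API rather than the console, a rough sketch of what creating a Confidential VM could look like with the Compute Engine Python client follows; the project, zone, image and instance names are placeholders, and the relevant pieces are the N2D machine type, the enableConfidentialCompute flag and the TERMINATE maintenance policy.

```python
# Rough sketch: creating a Confidential VM through the Compute Engine API
# (google-api-python-client). Project, zone, image and names are
# placeholders; the relevant pieces are the N2D machine type, the
# enableConfidentialCompute flag and the TERMINATE maintenance policy.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone = "my-project", "us-central1-a"

body = {
    "name": "confidential-demo",
    "machineType": f"zones/{zone}/machineTypes/n2d-standard-2",  # AMD EPYC (N2D) required
    "confidentialInstanceConfig": {"enableConfidentialCompute": True},
    "scheduling": {"onHostMaintenance": "TERMINATE"},  # SEV VMs can't live-migrate
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

compute.instances().insert(project=project, zone=zone, body=body).execute()
```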

“With built-in secure encrypted virtualization, 2nd Gen AMD EPYC processors provide an innovative hardware-based security feature that helps secure data in a virtualized environment,” said Raghu Nambiar, corporate vice president, Data Center Ecosystem, AMD. “For the new Google Compute Engine Confidential VMs in the N2D series, we worked with Google to help customers both secure their data and achieve performance of their workloads.”

That last part is obviously important, given that the extra encryption and decryption steps do incur at least a minor performance penalty. Google says it worked with AMD and developed new open-source drivers to ensure that “the performance metrics of Confidential VMs are close to those of non-confidential VMs.” At least according to the benchmarks Google itself has disclosed so far, both startup times and memory read and throughput performance are virtually the same for regular VMs and Confidential VMs.

Jul 14, 2020

Google Cloud’s new BigQuery Omni will let developers query data in GCP, AWS and Azure

At its virtual Cloud Next ’20 event, Google today announced a number of updates to its cloud portfolio, but the private alpha launch of BigQuery Omni is probably the highlight of this year’s event. Powered by Google Cloud’s Anthos hybrid-cloud platform, BigQuery Omni allows developers to use the BigQuery engine to analyze data that sits in multiple clouds, including those of Google Cloud competitors like AWS and Microsoft Azure — though for now, the service only supports AWS, with Azure support coming later.

Using a unified interface, developers can analyze this data locally without having to move data sets between platforms.

“Our customers store petabytes of information in BigQuery, with the knowledge that it is safe and that it’s protected,” said Debanjan Saha, the GM and VP of Engineering for Data Analytics at Google Cloud, in a press conference ahead of today’s announcement. “A lot of our customers do many different types of analytics in BigQuery. For example, they use the built-in machine learning capabilities to run real-time analytics and predictive analytics. […] A lot of our customers who are very excited about using BigQuery in GCP are also asking, ‘how can they extend the use of BigQuery to other clouds?’ ”

Image Credits: Google

Google has long said that it believes that multi-cloud is the future — something that most of its competitors would probably agree with, though they all would obviously like you to use their tools, even if the data sits in other clouds or is generated off-platform. After all, it’s the tools and services that help businesses make use of all of this data where the different vendors can differentiate themselves from one another. Maybe it’s no surprise then, given Google Cloud’s expertise in data analytics, that BigQuery is now joining the multi-cloud fray.

“With BigQuery Omni customers get what they wanted,” Saha said. “They wanted to analyze their data no matter where the data sits and they get it today with BigQuery Omni.”

Image Credits: Google

He noted that Google Cloud believes that this will help enterprises break down their data silos and gain new insights into their data, all while allowing developers and analysts to use a standard SQL interface.
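
From the analyst’s side, that means the workflow looks the same regardless of where the table physically lives: a standard SQL query submitted through the usual BigQuery client. A minimal sketch with the Python client library, using placeholder project, dataset and table names:

```python
# Minimal sketch: the same standard-SQL workflow applies whether the table
# lives in GCP or, via BigQuery Omni, in another cloud. Project, dataset
# and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT country, COUNT(*) AS orders
    FROM `my-project.sales.orders`
    WHERE order_date >= '2020-01-01'
    GROUP BY country
    ORDER BY orders DESC
    LIMIT 10
"""

for row in client.query(query):  # iterating waits for the job to finish
    print(row["country"], row["orders"])
```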

Today’s announcement is also a good example of how Google’s bet on Anthos is paying off by making it easier for the company to not just allow its customers to manage their multi-cloud deployments but also to extend the reach of its own products across clouds. This also explains why BigQuery Omni isn’t available for Azure yet, given that Anthos for Azure is still in preview, while AWS support became generally available in April.

Jul 9, 2020

Freshworks acquires IT orchestration service Flint

Customer engagement company Freshworks today announced that it has acquired Flint, an IT orchestration and cloud management platform based in India. The acquisition will help Freshworks strengthen its Freshservice IT support service by bringing a number of new automation tools to it. Maybe just as importantly, though, it will also bolster Freshworks’ ambitions around cloud management.

Freshworks CPO Prakash Ramamurthy, who joined the company last October, told me that while the company was already looking at expanding its IT service management (ITSM) and IT operations management (ITOM) capabilities before the COVID-19 pandemic hit, having those capabilities has now become even more important, given that a lot of these teams are now working remotely.

“If you take ITSM, we allow for customers to create their own workflow for service catalog items and so on and so forth, but we found that there’s a lot of things which were repetitive tasks,” Ramamurthy said. “For example, I lost my password or new employee onboarding, where you need to auto-provision them in the same set of accounts. Flint had integrated with Freshservice to help automate and orchestrate some of these routine tasks and a lot of customers were using it and there’s a lot of interest in it.”

He noted that while the company was already seeing increased demand for these tools earlier in the year, the pandemic made that need even more obvious. And given that pressing need, Freshworks decided that it would be far easier to acquire an existing company than to build its own solution.

“Even in early January, we felt this was a space where we had to have a time-to-market advantage,” he said. “So acquiring and aggressively integrating it into our product lines seemed to be the most optimal thing to do than take our time to build it — and we are super fortunate that we placed the right bet because of what has happened since then.”

The acquisition helps Freshworks build out some of its existing services, but Ramamurthy also stressed that it will really help the company build out its operations management capabilities to go from alert management to also automatically solving common IT issues. “We feel there’s natural synergy and [Flint’s] orchestration solution and their connectors come in super handy because they have connectors to all the modern SaaS applications and the top five cloud providers and so on.”

But Flint’s technology will also help Freshworks build out its ability to help its users manage workloads across multiple clouds, an area where it is going to compete with a number of startups and incumbents. Since the company decided that it wants to play in this field, an acquisition also made a lot of sense given how long it would take to build out expertise in this area, too.

“Cloud management is a natural progression for our product line,” Ramamurthy noted. “As more and more customers have a multi-cloud strategy, we want to give them a single pane of glass for all the workloads they’re running. And if they wanted to do cost optimization, if you want to build on top of that, we need the basic plumbing to be able to do discovery, which is kind of foundational for that.”

Freshworks will integrate Flint’s tools into Freshservice and likely offer it as part of its existing tiered pricing structure, with service orchestration likely being the first new capability it will offer.

Jul 8, 2020

Suse acquires Kubernetes management platform Rancher Labs

Suse, which describes itself as “the world’s largest independent open source company,” today announced that it has acquired Rancher Labs, a company that has long focused on making it easier for enterprises to manage their container clusters.

The two companies did not disclose the price of the acquisition, but Rancher was well funded, with a total of $95 million in investments. It’s also worth mentioning that it has only been a few months since the company announced its $40 million Series D round led by Telstra Ventures. Other investors include the likes of Mayfield and Nexus Venture Partners, GRC SinoGreen and F&G Ventures.

Like similar companies, Rancher originally focused on Docker infrastructure before shifting its emphasis to Kubernetes once that became the de facto standard for container orchestration. Unsurprisingly, that is also why Suse is now acquiring the company. After a number of ups and downs — and various ownership changes — Suse has found its footing again, and today’s acquisition shows that it’s aiming to capitalize on its current strengths.

Just last month, the company reported that the annual contract value of its bookings increased by 30% year over year and that it saw a 63% increase in customer deals worth more than $1 million in the last quarter, with its cloud revenue growing 70%. While it is still in the Linux distribution business the company was founded on, today’s Suse is a very different company, offering various enterprise platforms (including its Cloud Foundry-based Cloud Application Platform), solutions and services. And while it already offered a Kubernetes-based container platform, Rancher’s expertise will only help it build out this business.

“This is an incredible moment for our industry, as two open source leaders are joining forces. The merger of a leader in Enterprise Linux, Edge Computing and AI with a leader in Enterprise Kubernetes Management will disrupt the market to help customers accelerate their digital transformation journeys,” said Suse CEO Melissa Di Donato in today’s announcement. “Only the combination of SUSE and Rancher will have the depth of a globally supported and 100% true open source portfolio, including cloud native technologies, to help our customers seamlessly innovate across their business from the edge to the core to the cloud.”

The company describes today’s acquisition as the first step in its “inorganic growth strategy” and Di Donato notes that this acquisition will allow the company to “play an even more strategic role with cloud service providers, independent hardware vendors, systems integrators and value-added resellers who are eager to provide greater customer experiences.”
