Jul 08, 2021

Achieving digital transformation through RPA and process mining

Understanding what you will change is most important to achieve a long-lasting and successful robotic process automation transformation. There are three pillars that will be most impacted by the change: people, process and digital workers (also referred to as robots). The interaction of these three pillars executes workflows and tasks, and if integrated cohesively, determines the success of an enterprisewide digital transformation.

Robots are not coming to replace us, they are coming to take over the repetitive, mundane and monotonous tasks that we’ve never been fond of. They are here to transform the work we do by allowing us to focus on innovation and impactful work. RPA ties decisions and actions together. It is the skeletal structure of a digital process that carries information from point A to point B. However, the decision-making capability to understand and decide what comes next will be fueled by RPA’s integration with AI.

We are seeing software vendors adopt vertical technology capabilities and offer a wide range of solutions to address the three pillars mentioned above. These include powerhouses like UiPath, which recently went public; Microsoft, with its Softomotive acquisition; and Celonis, which recently became a unicorn with a $1 billion Series D round. RPA firms call it “intelligent automation,” whereas Celonis calls its offering an execution management system. Both camps are aiming to be a one-stop shop for all things related to process.

We have seen investments in various product categories for each stage of the intelligent automation journey: process and task mining for process discovery, centralized business process repositories for CoEs and executives to manage the pipeline and measure cost versus benefit, and artificial intelligence solutions for intelligent document processing.

For your transformation journey to be successful, you need to develop a deep understanding of your goals, people and the process.

Define goals and measurements of success

From a strategic standpoint, success measures for automating, optimizing and redesigning work should not be solely centered around metrics like decreasing fully loaded costs or FTE reduction, but should put people at the center. To measure improved customer and employee experiences, give special attention to metrics like decreases in throughput time or rework rate, and use them to identify vendors that deliver late, find missed invoice payments, or flag loan requests that are more likely to be paid back late. These provide more targeted success measures for specific business units.
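To make those metrics concrete, here is a small illustrative sketch — not tied to any particular process mining product — of how throughput time and rework rate could be computed from a toy event log; the field names and data are assumptions made for the example.

```python
from datetime import datetime
from collections import Counter

# Toy event log: one row per activity execution in a case (field names are assumptions).
events = [
    {"case": "INV-1", "activity": "Receive invoice", "ts": datetime(2021, 6, 1, 9, 0)},
    {"case": "INV-1", "activity": "Approve",         "ts": datetime(2021, 6, 2, 15, 0)},
    {"case": "INV-1", "activity": "Approve",         "ts": datetime(2021, 6, 4, 10, 0)},  # repeated step = rework
    {"case": "INV-1", "activity": "Pay",             "ts": datetime(2021, 6, 5, 12, 0)},
    {"case": "INV-2", "activity": "Receive invoice", "ts": datetime(2021, 6, 1, 11, 0)},
    {"case": "INV-2", "activity": "Approve",         "ts": datetime(2021, 6, 1, 16, 0)},
    {"case": "INV-2", "activity": "Pay",             "ts": datetime(2021, 6, 2, 9, 0)},
]

cases = {}
for e in events:
    cases.setdefault(e["case"], []).append(e)

# Throughput time: elapsed time from the first to the last event of each case.
throughput = {
    case: max(e["ts"] for e in evs) - min(e["ts"] for e in evs)
    for case, evs in cases.items()
}

# Rework rate: share of cases in which any activity occurs more than once.
def has_rework(evs):
    counts = Counter(e["activity"] for e in evs)
    return any(n > 1 for n in counts.values())

rework_rate = sum(has_rework(evs) for evs in cases.values()) / len(cases)

for case, delta in throughput.items():
    print(case, delta)   # INV-1: 4 days, 3:00:00 / INV-2: 22:00:00
print(rework_rate)       # 0.5
```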

The returns realized with an automation program are not limited to metrics like time or cost savings. The overall performance of an automation program can be more thoroughly measured with the sum of successes of the improved CX/EX metrics in different business units. For each business process you will be redesigning, optimizing or automating, set a definitive problem statement and try to find the right solution to solve it. Do not try to fit predetermined solutions into the problems. Start with the problem and goal first.

Understand the people first

To accomplish enterprise digital transformation via RPA, executives should put people at the heart of their program. Understanding the skill sets and talents of the workforce within the company can yield better knowledge of how well each employee can contribute to the automation economy within the organization. A workforce that is continuously retrained and upskilled learns how to automate and flexibly complete tasks together with robots and is better equipped to achieve transformation at scale.

Jul 07, 2021

The single vendor requirement ultimately doomed the DoD’s $10B JEDI cloud contract

When the Pentagon killed the JEDI cloud program yesterday, it was the end of a long and bitter road for a project that never seemed to have a chance. The question is why it didn’t work out in the end, and ultimately I think you can blame the DoD’s stubborn adherence to a single vendor requirement, a condition that never made sense to anyone, even the vendor that ostensibly won the deal.

In March 2018, the Pentagon announced a mega $10 billion, decade-long cloud contract to build the next generation of cloud infrastructure for the Department of Defense. It was dubbed JEDI, which aside from the Star Wars reference, was short for Joint Enterprise Defense Infrastructure.

The idea was a 10-year contract with a single vendor that started with an initial two-year option. If all was going well, a five-year option would kick in and finally a three-year option would close things out with earnings of $1 billion a year.

While the total value of the contract had it been completed was quite large, a billion a year for companies the size of Amazon, Oracle or Microsoft is not a ton of money in the scheme of things. It was more about the prestige of winning such a high-profile contract and what it would mean for sales bragging rights. After all, if you passed muster with the DoD, you could probably handle just about anyone’s sensitive data, right?

Regardless, the idea of a single-vendor contract went against conventional wisdom that the cloud gives you the option of working with best-in-class vendors. Microsoft, the eventual winner of the ill-fated deal, acknowledged that the single-vendor approach was flawed in an interview in April 2018:

Leigh Madden, who heads up Microsoft’s defense effort, says he believes Microsoft can win such a contract, but it isn’t necessarily the best approach for the DoD. “If the DoD goes with a single award path, we are in it to win, but having said that, it’s counter to what we are seeing across the globe where 80% of customers are adopting a multicloud solution,” Madden told TechCrunch.

Perhaps it was doomed from the start because of that. Yet even before the requirements were fully known there were complaints that it would favor Amazon, the market share leader in the cloud infrastructure market. Oracle was particularly vocal, taking its complaints directly to the former president before the RFP was even published. It would later file a complaint with the Government Accountability Office and file a couple of lawsuits alleging that the entire process was unfair and designed to favor Amazon. It lost every time — and of course, Amazon wasn’t ultimately the winner.

While there was a lot of drama along the way, in April 2019 the Pentagon named two finalists, and it was probably not too surprising that they were the two cloud infrastructure market leaders: Microsoft and Amazon. Game on.

The former president inserted himself directly into the process in August that year, when he ordered the Defense Secretary to review the matter over concerns that the process favored Amazon, a complaint that to that point had been refuted several times over by the DoD, the Government Accountability Office and the courts. To further complicate matters, a book by former defense secretary Jim Mattis claimed the president told him to “screw Amazon out of the $10 billion contract.” The president’s goal appeared to be to get back at Bezos, who also owns the Washington Post newspaper.

In spite of all these claims that the process favored Amazon, when the winner was finally announced in October 2019, late on a Friday afternoon no less, the winner was not in fact Amazon. Instead, Microsoft won the deal, or at least it seemed that way. It wouldn’t be long before Amazon would dispute the decision in court.

By the time AWS re:Invent hit a couple of months after the announcement, former AWS CEO Andy Jassy was already pushing the idea that the president had unduly influenced the process.

“I think that we ended up with a situation where there was political interference. When you have a sitting president, who has shared openly his disdain for a company, and the leader of that company, it makes it really difficult for government agencies, including the DoD, to make objective decisions without fear of reprisal,” Jassy said at that time.

Then came the litigation. In November, Amazon indicated it would challenge the decision to choose Microsoft, charging that it was driven by politics and not technical merit. In January 2020, Amazon filed a request with the court to halt the project until the legal challenges were settled. In February, a federal judge agreed with Amazon and stopped the project. It would never restart.

In April the DoD completed its own internal investigation of the contract procurement process and found no wrongdoing. As I wrote at the time:

While controversy has dogged the $10-billion, decade-long JEDI contract since its earliest days, a report by the DoD’s inspector general’s office concluded today that, while there were some funky bits and potential conflicts, overall the contract procurement process was fair and legal and the president did not unduly influence the process in spite of public comments.

Last September, the DoD completed a review of the selection process and once again concluded that Microsoft was the winner, but it didn’t really matter, as the litigation was still in motion and the project remained stalled.

The legal wrangling continued into this year, and yesterday the Pentagon finally pulled the plug on the project once and for all, saying it was time to move on as times have changed since 2018 when it announced its vision for JEDI.

The DoD finally came to the conclusion that a single-vendor approach wasn’t the best way to go, and not because it could never get the project off the ground, but because it makes more sense from a technology and business perspective to work with multiple vendors and not get locked into any particular one.

“JEDI was developed at a time when the Department’s needs were different and both the CSPs’ (cloud service providers) technology and our cloud conversancy was less mature. In light of new initiatives like JADC2 (the Pentagon’s initiative to build a network of connected sensors) and AI and Data Acceleration (ADA), the evolution of the cloud ecosystem within DoD, and changes in user requirements to leverage multiple cloud environments to execute mission, our landscape has advanced and a new way ahead is warranted to achieve dominance in both traditional and nontraditional warfighting domains,” said John Sherman, acting DoD chief information officer in a statement.

In other words, the DoD would benefit more from adopting a multicloud, multivendor approach like pretty much the rest of the world. That said, the department also indicated it would limit the vendor selection to Microsoft and Amazon.

“The Department intends to seek proposals from a limited number of sources, namely the Microsoft Corporation (Microsoft) and Amazon Web Services (AWS), as available market research indicates that these two vendors are the only Cloud Service Providers (CSPs) capable of meeting the Department’s requirements,” the department said in a statement.

That’s not going to sit well with Google, Oracle or IBM, but the department further indicated it would continue to monitor the market to see if other CSPs had the chops to handle their requirements in the future.

In the end, the single-vendor requirement contributed greatly to an overly competitive and politically charged atmosphere that resulted in the project never coming to fruition. Now the DoD has to play technology catch-up, having lost three years to the histrionics of the entire JEDI procurement process, and that could be the most lamentable part of this long, sordid technology tale.

Jul 06, 2021

Nobody wins as DoD finally pulls the plug on controversial $10B JEDI contract

After several years of fighting and jockeying for position by the biggest cloud infrastructure companies in the world, the Pentagon finally pulled the plug on the controversial winner-take-all, $10 billion JEDI contract today. In the end, nobody won.

“With the shifting technology environment, it has become clear that the JEDI cloud contract, which has long been delayed, no longer meets the requirements to fill the DoD’s capability gaps,” a Pentagon spokesperson stated.

The contract procurement process began in 2018 with a call for RFPs for a $10 billion, decade-long contract to handle the cloud infrastructure strategy for the Pentagon. Pentagon spokesperson Heather Babb told TechCrunch why they were going with the single-winner approach: “Single award is advantageous because, among other things, it improves security, improves data accessibility and simplifies the Department’s ability to adopt and use cloud services,” she said at the time.

From the start, though, companies objected to the single-winner approach, believing that the Pentagon would be better served by a multi-vendor approach. Some companies, particularly Oracle, believed the procurement process was designed to favor Amazon.

It came down to a pair of finalists — Amazon and Microsoft — and in the end, Microsoft won. But Amazon believed that it had superior technology and only lost the deal because of direct interference by the previous president, who had open disdain for then-CEO Jeff Bezos (who is also the owner of the Washington Post newspaper).

Amazon decided to fight the decision in court, and after months of delay, the Pentagon made the decision that it was time to move on. In a blog post, Microsoft took a swipe at Amazon for precipitating the delay.

“The 20 months since DoD selected Microsoft as its JEDI partner highlights issues that warrant the attention of policymakers: When one company can delay, for years, critical technology upgrades for those who defend our nation, the protest process needs reform. Amazon filed its protest in November 2019 and its case was expected to take at least another year to litigate and yield a decision, with potential appeals afterward,” Microsoft wrote in its blog post about the end of the deal.

But in a statement of its own, Amazon reiterated its belief that the process was not fairly executed. “We understand and agree with the DoD’s decision. Unfortunately, the contract award was not based on the merits of the proposals and instead was the result of outside influence that has no place in government procurement. Our commitment to supporting our nation’s military and ensuring that our warfighters and defense partners have access to the best technology at the best price is stronger than ever. We look forward to continuing to support the DoD’s modernization efforts and building solutions that help accomplish their critical missions,” a company spokesperson said.

It seems like a fitting end to a project that I felt was doomed from the beginning. From the moment the Pentagon announced this contract with the cutesy twist on the Star Wars name, the procurement process has taken more twists and turns than a TV soap.

In the beginning, there was a lot of sound and fury, and it led to a lot of nothing. We move on to whatever cloud procurement process happens next.

Jun 30, 2021

Dispense with the chasm? No way!

Jeff Bussgang, a co-founder and general partner at Flybridge Capital, recently wrote an Extra Crunch guest post that argued it is time for a refresh when it comes to the technology adoption life cycle and the chasm. His argument went as follows:

  1. VCs in recent years have drastically underestimated the size of SAMs (serviceable addressable markets) for their startup investments because they were “trained to think only a portion of the SAM is obtainable within any reasonable window of time because of the chasm.”
  2. The chasm is no longer the barrier it once was because businesses have finally understood that software is eating the world.
  3. As a result, the early majority has joined up with the innovators and early adopters to create an expanded early market. Effectively, they have defected from the mainstream market to cross the chasm in the other direction, leaving only the late majority and the laggards on the other side.
  4. That is why we now are seeing multiple instances of very large high-growth markets that appear to have no limit to their upside. There is no chasm to cross until much later in the life cycle, and it isn’t worth much effort to cross it then.

Now, I agree with Jeff that we are seeing remarkable growth in technology adoption at levels that would have astonished investors from prior decades. In particular, I agree with him when he says:

The pandemic helped accelerate a global appreciation that digital innovation was no longer a luxury but a necessity. As such, companies could no longer wait around for new innovations to cross the chasm. Instead, everyone had to embrace change or be exposed to an existential competitive disadvantage.

But this is crossing the chasm! Pragmatic customers are being forced to adopt because they are under duress. It is not that they buy into the vision of software eating the world. It is because their very own lunches are being eaten. The pandemic created a flotilla of chasm-crossings because it unleashed a very real set of existential threats.

The key here is to understand the difference between two buying decision processes, one governed by visionaries and technology enthusiasts (the early adopters and innovators), the other by pragmatists (the early majority). The early group makes their decisions based on their own analyses. They do not look to others for corroborative support. Pragmatists do. Indeed, word-of-mouth endorsements are by far the most impactful input not only about what to buy and when but also from whom.

Jun 29, 2021

GitHub previews new AI tool that makes coding suggestions

GitHub has unveiled a new product that leverages artificial intelligence to help you write code more efficiently. Named GitHub Copilot, the new product can suggest lines of code and sometimes even entire functions.

GitHub has partnered with OpenAI to develop this tool. It isn’t meant to replace developers; it’s a tool that should improve productivity and make it easier to learn how to code. GitHub frames this new tool as an AI pair programmer.

The model behind GitHub Copilot has been trained on billions of lines of code — many of them hosted and publicly available on GitHub itself. When you’re writing code, GitHub Copilot suggests code as you type. You can cycle through suggestions and accept or reject them.

In order to figure out what you’re currently coding, GitHub Copilot tries to parse the meaning of a comment, the name of the function you are writing or the past couple of lines. The company shows a few demos on its website.

Image Credits: GitHub

In particular, you can describe a function in plain English in a comment and then convert it to actual code. If you’re getting started with a new language or you’ve been using no-code or low-code tools in the past, that feature could be useful.
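As a hypothetical illustration of that comment-to-code flow (not an actual Copilot transcript — the comment, function name and completion below are invented for the example), a prompt and a plausible suggestion might look like this:

```python
# Prompt written by the developer: a plain-English comment plus a function signature.
# Compute the average rating of a list of reviews, ignoring reviews without a rating.
def average_rating(reviews):
    # A completion in this spirit is the kind of body Copilot might suggest:
    ratings = [r["rating"] for r in reviews if r.get("rating") is not None]
    return sum(ratings) / len(ratings) if ratings else None
```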

If you’re writing code every day, GitHub Copilot can be used to work with a new framework or library. You don’t have to read the documentation from start to finish as GitHub Copilot already knows the specific functions and features of the framework you’re working with. It could also replace many Stack Overflow queries.

GitHub Copilot integrates directly with Visual Studio Code. You can install it as an extension or use it in the cloud with GitHub Codespaces. Over time, the service should improve based on how you interact with GitHub Copilot. As you accept and reject suggestions, those suggestions should get better.

GitHub Copilot is currently available as a technical preview, and GitHub plans to launch a commercial product based on it. It currently works best with Python, JavaScript, TypeScript, Ruby and Go.

Image Credits: GitHub

Jun 25, 2021

Edge Delta raises $15M Series A to take on Splunk

Seattle-based Edge Delta, a startup that is building a modern distributed monitoring stack that is competing directly with industry heavyweights like Splunk, New Relic and Datadog, today announced that it has raised a $15 million Series A funding round led by Menlo Ventures and Tim Tully, the former CTO of Splunk. Previous investors MaC Venture Capital and Amity Ventures also participated in this round, which brings the company’s total funding to date to $18 million.

“Our thesis is that there’s no way that enterprises today can continue to analyze all their data in real time,” said Edge Delta co-founder and CEO Ozan Unlu, who has worked in the observability space for about 15 years already (including at Microsoft and Sumo Logic). “The way that it was traditionally done with these primitive, centralized models — there’s just too much data. It worked 10 years ago, but gigabytes turned into terabytes and now terabytes are turning into petabytes. That whole model is breaking down.”

Image Credits: Edge Delta

He acknowledges that traditional big data warehousing works quite well for business intelligence and analytics use cases. But that’s not real-time and also involves moving a lot of data from where it’s generated to a centralized warehouse. The promise of Edge Delta is that it can offer all of the capabilities of this centralized model by allowing enterprises to start to analyze their logs, metrics, traces and other telemetry right at the source. This, in turn, also allows them to get visibility into all of the data that’s generated there, unlike many of today’s systems, which only provide insight into a small slice of this information.

Competing services tend to have agents that run on a customer’s machine but typically only compress the data, encrypt it and send it on to its final destination; Edge Delta’s agent starts analyzing the data right at the local level. With that, if you want to, for example, graph error rates from your Kubernetes cluster, you wouldn’t have to gather all of this data and send it off to your data warehouse, where it has to be indexed before it can be analyzed and graphed.

With Edge Delta, you could instead have every single node draw its own graph, which Edge Delta can then combine later on. With this, Edge Delta argues, its agent is able to offer significant performance benefits, often by orders of magnitude. This also allows businesses to run their machine learning models at the edge.
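A rough sketch of that pre-aggregate-then-merge idea (purely illustrative — this is not Edge Delta’s agent code, and the log fields are assumptions): each node reduces its own logs to small per-minute counts, and a central service only has to sum the summaries to get a cluster-wide error rate.

```python
from collections import Counter

def summarize_node(log_lines):
    """Run on each node: reduce raw logs to per-minute error and total counts."""
    errors, totals = Counter(), Counter()
    for line in log_lines:
        minute, level = line["minute"], line["level"]  # assumed log fields
        totals[minute] += 1
        if level == "ERROR":
            errors[minute] += 1
    return {"errors": errors, "totals": totals}

def merge_summaries(summaries):
    """Run centrally: combine tiny per-node summaries into a cluster-wide error rate."""
    errors, totals = Counter(), Counter()
    for s in summaries:
        errors.update(s["errors"])
        totals.update(s["totals"])
    return {minute: errors[minute] / totals[minute] for minute in totals}

# Each node ships only its summary, not the raw log volume.
node_a = summarize_node([{"minute": "12:00", "level": "ERROR"}, {"minute": "12:00", "level": "INFO"}])
node_b = summarize_node([{"minute": "12:00", "level": "INFO"}, {"minute": "12:01", "level": "INFO"}])
print(merge_summaries([node_a, node_b]))  # {'12:00': 0.33..., '12:01': 0.0}
```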

Image Credits: Edge Delta

“What I saw before I was leaving Splunk was that people were sort of being choosy about where they put workloads for a variety of reasons, including cost control,” said Menlo Ventures’ Tim Tully, who joined the firm only a couple of months ago. “So this idea that you can move some of the compute down to the edge and lower latency and do machine learning at the edge in a distributed way was incredibly fascinating to me.”

Edge Delta is able to offer a significantly cheaper service, in large part because it doesn’t have to run a lot of compute and manage huge storage pools itself since a lot of that is handled at the edge. And while the customers obviously still incur some overhead to provision this compute power, it’s still significantly less than what they would be paying for a comparable service. The company argues that it typically sees about a 90 percent improvement in total cost of ownership compared to traditional centralized services.

Image Credits: Edge Delta

Edge Delta charges based on volume, and it is not shy about comparing its prices with Splunk’s — it does so right on its pricing calculator. Indeed, in talking to Tully and Unlu, Splunk was clearly on everybody’s mind.

“There’s kind of this concept of unbundling of Splunk,” Unlu said. “You have Snowflake and the data warehouse solutions coming in from one side, and they’re saying, ‘hey, if you don’t care about real time, go use us.’ And then we’re the other half of the equation, which is: actually there’s a lot of real-time operational use cases and this model is actually better for those massive stream processing datasets that you’re required to analyze in real time.”

But despite this competition, Edge Delta can still integrate with Splunk and similar services. Users can still take their data, ingest it through Edge Delta and then pass it on to the likes of Sumo Logic, Splunk, AWS’s S3 and other solutions.

Image Credits: Edge Delta

“If you follow the trajectory of Splunk, we had this whole idea of building this business around IoT and Splunk at the Edge — and we never really quite got there,” Tully said. “I think what we’re winding up seeing collectively is the edge actually means something a little bit different. […] The advances in distributed computing and sophistication of hardware at the edge allows these types of problems to be solved at a lower cost and lower latency.”

The Edge Delta team plans to use the new funding to expand its team and support all of the new customers that have shown interest in the product. For that, it is building out its go-to-market and marketing teams, as well as its customer success and support teams.

Jun 02, 2021

Iterative raises $20M for its MLOps platform

Iterative, an open-source startup that is building an enterprise AI platform to help companies operationalize their models, today announced that it has raised a $20 million Series A round led by 468 Capital and Mesosphere co-founder Florian Leibert. Previous investors True Ventures and Afore Capital also participated in this round, which brings the company’s total funding to $25 million.

The core idea behind Iterative is to provide data scientists and data engineers with a platform that closely resembles a modern GitOps-driven development stack.

After spending time in academia, Iterative co-founder and CEO Dmitry Petrov joined Microsoft as a data scientist on the Bing team in 2013. He noted that the industry has changed quite a bit since then. While early on, the questions were about how to build machine learning models, today the problem is how to build predictable processes around machine learning, especially in large organizations with sizable teams. “How can we make the team productive, not the person? This is a new challenge for the entire industry,” he said.

Big companies (like Microsoft) were able to build their own proprietary tooling and processes to build their AI operations, Petrov noted, but that’s not an option for smaller companies.

Currently, Iterative’s stack consists of a couple of different components that sit on top of tools like GitLab and GitHub. These include DVC, for running experiments and versioning data and models; CML, the company’s CI/CD platform for machine learning; and the company’s newest product, Studio, its SaaS platform for enabling collaboration between teams. Instead of reinventing the wheel, Iterative essentially gives data scientists who already use GitHub or GitLab to collaborate on their source code a tool like DVC Studio that extends that collaboration to data and metrics, too.
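For readers unfamiliar with the stack, here is a minimal sketch of reading a versioned artifact through DVC’s Python API; the repository URL, file path and revision are placeholders for illustration, not Iterative’s own examples.

```python
import dvc.api

# Read a specific, pinned version of a tracked dataset straight from a Git+DVC repo.
# The repo URL, path and revision below are placeholders.
data = dvc.api.read(
    "data/train.csv",
    repo="https://github.com/example-org/example-repo",
    rev="v1.2",  # Git tag, branch or commit that pins the data version
    mode="r",
)
print(len(data.splitlines()))
```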

Image Credits: Iterative

“DVC Studio enables machine learning developers to run hundreds of experiments with full transparency, giving other developers in the organization the ability to collaborate fully in the process,” said Petrov. “The funding today will help us bring more innovative products and services into our ecosystem.”

Petrov stressed that he wants to build an ecosystem of tools, not a monolithic platform. When the company closed this current funding round about three months ago, Iterative had about 30 employees, many of whom were previously active in the open-source community around its projects. Today, that number is already closer to 60.

“Data, ML and AI are becoming an essential part of the industry and IT infrastructure,” said Leibert, general partner at 468 Capital. “Companies with great open-source adoption and bottom-up market strategy, like Iterative, are going to define the standards for AI tools and processes around building ML models.”

May 25, 2021

Microsoft launches new tools for Teams developers

At its (virtual) Build conference today, Microsoft launched a number of new features, tools and services for developers who want to integrate their services with Teams, the company’s Slack competitor. It’s no secret that Microsoft basically looks at Teams, which now has about 145 million daily active users, as the new hub for employees to get work done, so it’s no surprise that it wants third-party developers to bring their services right to Teams as well. And to do so, it’s now offering a set of new tools that will make this easier and enable developers to build new user experiences in Teams.

There’s a lot going on here, but maybe the most important news is the launch of the enhanced Microsoft Teams Toolkit for Visual Studio and Visual Studio Code.

“This essentially enables developers to build apps easier and faster — and to build very powerful apps tapping into the rich Microsoft stack,” Microsoft group program manager Archana Saseetharan explained. “With the updated toolkit […], we enable flexibility for developers. We want to meet developers where they are.”

Image Credits: Microsoft

The toolkit offers support for tools and frameworks like React, SharePoint and .NET. Some of the updates the team enabled with this release are integration with Azure Functions, integration with the SharePoint Framework and a single-line integration with the Microsoft Graph. Microsoft is also making it easier for developers to integrate an authorization workflow into their Teams apps. “Login is the first kind of experience of any user with an app — and most of the drop-offs happen there,” Saseetharan said. “So [single sign-on] is something we completely are pushing hard on.”

The team also launched a new Developer Portal for Microsoft Teams that makes it easier for developers to register and configure their apps from a single tool. ISVs will also be able to use the new portal to offer their apps for in-Teams purchases.

Other new Teams features for developers include ways to build real-time multi-user experiences such as whiteboards and project boards, a new meeting event API to build workflows around when a meeting starts and ends, and new features for the Teams Together mode that will let developers design their own Together experiences.

There are a few other new features here as well, but what it all comes down to is that Microsoft wants developers to consider Teams as a viable platform for their services — and with 145 million daily active users, that’s potentially a lucrative way for software firms to get their services in front of a new audience.

“Teams is enabling a new class of apps called collaborative apps,” said Karan Nigam, Microsoft’s director of product marketing for Teams. “We are uniquely positioned to bring the richness to the collaboration space — a ton of innovation to the extensibility side to make apps richer, making it easier with the toolkit update, and then have a single-stop shop with the developer portal where the entire lifecycle can be managed. Ultimately, for a developer, they don’t have to go to multiple places, it’s one single flow from the business perspective for them as well.”

Apr 30, 2021

Cloud infrastructure market keeps rolling in Q1 with almost $40B in revenue

Conventional wisdom over the last year has suggested that the pandemic has driven companies to the cloud much faster than they ever would have gone without that forcing event, with some suggesting it has compressed years of transformation into months. This quarter’s cloud infrastructure revenue numbers appear to be proving that thesis correct.

With The Big Three — Amazon, Microsoft and Google — all reporting this week, the market generated almost $40 billion in revenue, according to Synergy Research data. That’s up $2 billion from last quarter and up 37% over the same period last year. Canalys’s numbers were slightly higher at $42 billion.

As you might expect if you follow this market, AWS led the way with $13.5 billion for the quarter, up 32% year over year. That’s a run rate of $54 billion. While that is an eye-popping number, what’s really remarkable is the yearly revenue growth, especially for a company the size and maturity of Amazon. The law of large numbers would suggest this isn’t sustainable, but the pie keeps growing and Amazon continues to take a substantial chunk.

Overall AWS held steady with 32% market share. While the revenue numbers keep going up, Amazon’s market share has remained firm for years at around this number. It’s the other companies down market that are gaining share over time, most notably Microsoft, which is now at around 20% share — good for about $7.8 billion this quarter.
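As a back-of-envelope check on those figures (the rounded $40 billion market total means the share comes out slightly above Synergy’s reported 32%):

```python
aws_quarter = 13.5      # $B, AWS Q1 2021 revenue cited above
market_quarter = 40.0   # $B, approximate total market per Synergy (rounded)

run_rate = aws_quarter * 4              # annualized run rate
aws_share = aws_quarter / market_quarter

print(run_rate)                # 54.0 -> the $54 billion run rate
print(round(aws_share * 100))  # 34 -> roughly in line with the ~32% share Synergy reports
```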

Google continues to show signs of promise under Thomas Kurian, hitting $3.5 billion, good for 9% as it makes a steady march toward double digits. Even IBM had a positive quarter, led by Red Hat and cloud revenue, good for 5% or about $2 billion overall.

Synergy Research cloud infrastructure bubble map for Q1 2021. AWS is leader, followed by Microsoft and Google.

Image Credits: Synergy Research

John Dinsdale, chief analyst at Synergy, says that even though AWS and Microsoft have firm control of the market, that doesn’t mean there isn’t money to be made by the companies playing behind them.

“These two don’t have to spend too much time looking in their rearview mirrors and worrying about the competition. However, that is not to say that there aren’t some excellent opportunities for other players. Taking Amazon and Microsoft out of the picture, the remaining market is generating over $18 billion in quarterly revenues and growing at over 30% per year. Cloud providers that focus on specific regions, services or user groups can target several years of strong growth,” Dinsdale said in a statement.

Canalys, another firm that watches the same market as Synergy, had similar numbers with slight variations, certainly close enough to confirm one another’s findings. It has AWS with 32%, Microsoft with 19% and Google with 7%.

Canalys market share chart: Amazon 32%, Microsoft 19% and Google 7%

Image Credits: Canalys

Canalys analyst Blake Murray says that there is still plenty of room for growth, and we will likely continue to see big numbers in this market for several years. “Though 2020 saw large-scale cloud infrastructure spending, most enterprise workloads have not yet transitioned to the cloud. Migration and cloud spend will continue as customer confidence rises during 2021. Large projects that were postponed last year will resurface, while new use cases will expand the addressable market,” he said.

The numbers we see are hardly a surprise anymore, and as companies push more workloads into the cloud, the numbers will continue to impress. The only question now is if Microsoft can continue to close the market share gap with Amazon.

Apr 22, 2021

Google’s Anthos multicloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multicloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and (in preview) on Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called this project the “Google Cloud Services Platform,” which launched three years ago). Hybrid and multicloud, it’s fair to say, takes a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. Recently, Google brought on industry veteran Jeff Reed to become the VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call “an anchor in the cloud” to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.
