Apr 30, 2021

Analytics as a service: Why more enterprises should consider outsourcing

With an increasing number of enterprise systems, growing teams, the expanding reach of the web and multiple digital initiatives, companies of all sizes are generating enormous volumes of data every day. This data holds valuable business insights and immense opportunities, but its sheer volume makes it impossible for companies to consistently derive actionable insights from it.

According to Verified Market Research, the analytics-as-a-service (AaaS) market is expected to grow to $101.29 billion by 2026. Organizations that have not started on their analytics journey or are spending scarce data engineer resources to resolve issues with analytics implementations are not identifying actionable data insights. Through AaaS, managed services providers (MSPs) can help organizations get started on their analytics journey immediately without extravagant capital investment.

MSPs can take ownership of the company’s immediate data analytics needs, resolve ongoing challenges and integrate new data sources to manage dashboard visualizations, reporting and predictive modeling — enabling companies to make data-driven decisions every day.

AaaS typically comes bundled with multiple business-intelligence-related services. Primarily, the service includes (1) services for data warehouses; (2) services for visualizations and reports; and (3) services for predictive analytics, artificial intelligence (AI) and machine learning (ML). When a company partners with an MSP for analytics as a service, it can tap into business intelligence easily, instantly and at a lower cost of ownership than doing it in-house. This empowers the enterprise to focus on delivering better customer experiences, make decisions unencumbered and build data-driven strategies.

In today’s world, where customers value experiences over transactions, AaaS helps businesses dig deeper into their customers’ psyche and tap insights to build long-term winning strategies. It also enables enterprises to forecast and predict business trends from their data and allows employees at every level to make informed decisions.

Apr 30, 2021

Cloud infrastructure market keeps rolling in Q1 with almost $40B in revenue

Conventional wisdom over the last year has suggested that the pandemic has driven companies to the cloud much faster than they ever would have gone without that forcing event, with some suggesting it has compressed years of transformation into months. This quarter’s cloud infrastructure revenue numbers appear to be proving that thesis correct.

With The Big Three — Amazon, Microsoft and Google — all reporting this week, the market generated almost $40 billion in revenue, according to Synergy Research data. That’s up $2 billion from last quarter and up 37% over the same period last year. Canalys’s numbers were slightly higher at $42 billion.

As you might expect if you follow this market, AWS led the way with $13.5 billion for the quarter, up 32% year over year. That’s a run rate of $54 billion. While that is an eye-popping number, what’s really remarkable is the yearly revenue growth, especially for a company the size and maturity of Amazon. The law of large numbers would suggest this isn’t sustainable, but the pie keeps growing and Amazon continues to take a substantial chunk.

Overall AWS held steady with 32% market share. While the revenue numbers keep going up, Amazon’s market share has remained firm for years at around this number. It’s the other companies down market that are gaining share over time, most notably Microsoft, which is now at around 20% share — good for about $7.8 billion this quarter.

Google continues to show signs of promise under Thomas Kurian, hitting $3.5 billion, good for 9% as it makes a steady march toward double digits. Even IBM had a positive quarter, led by Red Hat and cloud revenue, good for 5% or about $2 billion overall.

Synergy Research cloud infrastructure bubble map for Q1 2021. AWS is leader, followed by Microsoft and Google.

Image Credits: Synergy Research

John Dinsdale, chief analyst at Synergy, says that even though AWS and Microsoft have firm control of the market, that doesn’t mean there isn’t money to be made by the companies playing behind them.

“These two don’t have to spend too much time looking in their rearview mirrors and worrying about the competition. However, that is not to say that there aren’t some excellent opportunities for other players. Taking Amazon and Microsoft out of the picture, the remaining market is generating over $18 billion in quarterly revenues and growing at over 30% per year. Cloud providers that focus on specific regions, services or user groups can target several years of strong growth,” Dinsdale said in a statement.

Canalys, another firm that watches the same market as Synergy, had similar findings with slight variations, certainly close enough to confirm one another’s findings. They have AWS with 32%, Microsoft 19% and Google with 7%.

Canalys market share chart with Amazon with 32%, Microsoft 19% and Google 7%

Image Credits: Canalys

Canalys analyst Blake Murray says that there is still plenty of room for growth, and we will likely continue to see big numbers in this market for several years. “Though 2020 saw large-scale cloud infrastructure spending, most enterprise workloads have not yet transitioned to the cloud. Migration and cloud spend will continue as customer confidence rises during 2021. Large projects that were postponed last year will resurface, while new use cases will expand the addressable market,” he said.

The numbers we see are hardly a surprise anymore, and as companies push more workloads into the cloud, the numbers will continue to impress. The only question now is if Microsoft can continue to close the market share gap with Amazon.

 

Apr 30, 2021

The health data transparency movement is birthing a new generation of startups

In the early 2000s, Jeff Bezos gave a seminal TED Talk titled “The Electricity Metaphor for the Web’s Future.” In it, he argued that the internet will enable innovation on the same scale that electricity did.

We are at a similar inflection point in healthcare, with the recent movement toward data transparency birthing a new generation of innovation and startups.

Those who follow the space closely may have noticed that there are twin struggles taking place: a push for more transparency on provider and payer data, including anonymous patient data, and another for strict privacy protection for personal patient data. What’s the main difference?

This sector is still somewhat nascent — we are in the first wave of innovation, with much more to come.

Anonymized data is much more freely available, while personal data is being locked even tighter (as it should be) due to regulations like GDPR, CCPA and their equivalents around the world.

The former trend is enabling a host of new vendors and services that will ultimately make healthcare better and more transparent for all of us.

These new companies could not have existed five years ago. The Affordable Care Act was the first step toward making anonymized data more available. It required healthcare institutions (such as hospitals and healthcare systems) to publish data on costs and outcomes. This included the release of detailed data on providers.

Later legislation required biotech and pharma companies to disclose monies paid to research partners. And every physician in the U.S. is now required to have a National Provider Identifier (NPI), part of a comprehensive public database of providers.

All of this allowed the creation of new types of companies that give both patients and providers more control over their data. Here are some key examples of how.

Allowing patients to access all their own health data in one place

This is a key benefit of patients’ newfound access to health data. Think of how often, as a patient, you’ve seen a provider who wasn’t aware of a treatment or test you had elsewhere. Often you end up repeating a test simply because the provider has no record of the one already conducted.

Apr 30, 2021

Developer-focused video platform Mux achieves unicorn status with $105M funding

Barely more than eight months after announcing a $37 million funding round, Mux has another $105 million.

The Series D was led by Coatue and values the company at more than $1 billion (Mux isn’t disclosing the specific valuation). Existing investors Accel, Andreessen Horowitz and Cobalt also participated, as did new investor Dragoneer.

Co-founder and CEO Jon Dahl told me that Mux didn’t need to raise more funding. But after last year’s Series C, the company’s leadership kept in touch with Coatue and other investors who’d expressed interest, and they ultimately decided that more money could help fuel faster growth during “this inflection moment in video.”

Building on the thesis popularized by a16z co-founder Marc Andreessen, Dahl said, “I think video’s eating software, the same way software was eating the world 10 years ago.” In other words, where video was once something we watched at our desks and on our sofas, it’s now everywhere, whether we’re scrolling through our social media feeds or exercising on our Pelotons.

“We’re at the early days of a five- or 10-year major transition, where video is moving into being a first-class part of every software project,” he said.

Dahl argued that Mux is well-suited for this transition because it’s “a video platform for developers,” with an API-centric approach that results in faster publishing and reliable streaming for viewers. Its first product was a monitoring and analytics tool called Mux Data, followed by its streaming video product Mux Video.

“If you’re going to build a video platform and do it data-first, you need heavy data and monitoring and analytics,” Dahl explained. “We built the data layer [and then] we built the streaming platform.”

Customers include Robinhood, PBS, ViacomCBS, Equinox Media and VSCO — Dahl said that while Mux works with digital media companies, “our core market is software.” He suggested that back when the company was founded in 2015, video was largely seen as a “niche,” or “something you needed if you were ESPN or Netflix.” But the last few years have illustrated that “video is a fundamental part of how we communicate” and that “every software company should have video as a core part of its products.”

Mux founders Adam Brown, Steven Heffernan, Matt McClure and Jon Dahl. Image Credits: Mux

Not surprisingly, demand increased dramatically during the pandemic. During the past year, on-demand streaming via the Mux platform grew by 300%, while live video streaming grew 3,700% and revenue quadrupled.

“Which is a lot of work,” Dahl said with a laugh. “We definitely spent a lot of the last year ramping and scaling and investing in the platform.”

This new funding will allow Mux (which has now raised a total of $175 million) to continue that investment. Dahl said he plans to grow the team from 80 to 200 employees and to explore potential acquisitions.

“We were impressed by Mux’s laser focus on the developer community, and saw impressive customer retention and expansion indicative of the strong value their solutions provide,” said Coatue General Partner David Schneider in a statement. “This funding will enable Mux to continue to build on their customer-centric platform and we are proud to partner with Mux as it leads the way to this hybrid future.”

Apr 29, 2021

Talking Drupal #292 – New Relic

Today we are joined by David Needham to learn the basics of New Relic.

www.talkingdrupal.com/292

Topics

  • Nic – Netlify
  • Mike – Water computer
  • David – Spring cleaning
  • John – UPC
  • What is New Relic?
  • Where New Relic fits with other tools
  • New Relic tools
  • What information is provided
  • Data points and insight
  • Common issues to help resolve

Resources

David’s .d.o

Pantheon

Steve Mould Water Computer

UPC codes and SKUs

New Relic’s learning portal

New Relic FutureStack conference

Guest

David Needham – davidneedham.me @davidneedham

Hosts

Stephen Cross – www.stephencross.com @stephencross

John Picozzi – www.oomphinc.com @johnpicozzi

Nic Laflin – www.nLighteneddevelopment.com @nicxvan

Mike Miles – www.mike-miles.com @mikemiles86

 

Apr 29, 2021

Healthcare is the next wave of data liberation

Why can we see all our bank, credit card and brokerage data on our phones instantaneously in one app, yet walk into a doctor’s office blind to our healthcare records, diagnoses and prescriptions? Our health status should be as accessible as our checking account balance.

The liberation of financial data enabled by startups like Plaid is beginning to happen with healthcare data, which will have an even more profound impact on society; it will save and extend lives. This accessibility is quickly approaching.

As early investors in Quovo and PatientPing, two pioneering companies in financial and healthcare data, respectively, it’s evident to us the winners of the healthcare data transformation will look different than they did with financial data, even as we head toward a similar end state.

For over a decade, government agencies and consumers have pushed for this liberation.

In 2009, the Health Information Technology for Economic and Clinical Health Act (HITECH) gave the first big industry push, catalyzing a wave of digitization through electronic health records (EHR). Today, over 98% of medical records are digitized. This market is dominated by multibillion-dollar vendors like Epic, Cerner and Allscripts, which control 70% of patient records. However, these giant vendors have yet to make these records easily accessible.

A second wave of regulation has begun to address the problem of trapped data to make EHRs more interoperable and valuable. Agencies within the Department of Health and Human Services have mandated data sharing among payers and providers using a common standard, the Fast Healthcare Interoperability Resources (FHIR) protocol.

Image Credits: F-Prime Capital

This push for greater data liquidity coincides with demand from consumers for better information about cost and quality. Employers have been steadily shifting a greater share of healthcare expenses to consumers through high-deductible health plans — from 30% in 2012 to 51% in 2018. As consumers pay for more of the costs, they care more about the value of different health options, yet are unable to make those decisions without real-time access to cost and clinical data.

Image Credits: F-Prime Capital

Tech startups have an opportunity to ease the transmission of healthcare data and address the push of regulation and consumer demands. The lessons from fintech make it tempting to assume that a Plaid for healthcare data would be enough to address all of the challenges within healthcare, but it is not the right model. Plaid’s aggregator model benefited from a relatively high concentration of banks, a limited number of data types and low barriers to data access.

By contrast, healthcare data is scattered across tens of thousands of healthcare providers, stored in multiple data formats and systems per provider, and is rarely accessed by patients directly. Many people log into their bank apps frequently, but few log into their healthcare provider portals, if they even know one exists.

HIPAA regulations and strict patient consent requirements also meaningfully increase friction in data access and sharing. Financial data serves mostly one-to-one use cases, while healthcare data is a many-to-many problem. A single patient’s data is spread across many doctors and facilities and is needed by just as many for care coordination.

Because of this landscape, winning healthcare technology companies will need to build around four propositions:

Apr 29, 2021

Improving Percona Monitoring and Management EC2 Instance Resilience Using CloudWatch Alarm Actions

Nothing lasts forever, including the hardware running your EC2 instances. You will usually receive advance warning of hardware degradation and subsequent instance retirement, but sometimes hardware fails unexpectedly. Percona Monitoring and Management (PMM) currently doesn’t have an HA setup, and such failures can leave wide gaps in monitoring if not resolved quickly.

In this post, we’ll see how to set up automatic recovery for PMM instances deployed in AWS through Marketplace. The automation will take care of the instance following an underlying systems failure. We’ll also set up an automatic restart procedure in case the PMM instance itself is experiencing issues. These simple automatic actions go a long way in improving the resilience of your monitoring setup and minimizing downtime.

Some Background

Each EC2 instance has two associated status checks: System and Instance. You can read about them in more detail on the “Types of status checks” AWS documentation page. The gist of it: the System check fails when there are infrastructure issues, and the Instance check fails if there’s anything wrong on the instance side, like its OS having issues. You can normally see the results of these checks as “2/2 checks passed” markings on your instances in the EC2 console.

ec2 instance overview showing 2/2 status checks

CloudWatch, an AWS monitoring system, can react to the status check state changes. Specifically, it is possible to set up a “recover” action for an EC2 instance where a system check is failing. The recovered instance is identical to the original, but will not retain the same public IP unless it’s assigned an Elastic IP. I recommend that you use Elastic IP for your PMM instances (see also this note in PMM documentation). For the full list of caveats related to instance recovery check out the “Recover your instance” page in AWS documentation.

According to CloudWatch pricing, having two alarms set up will cost $0.20/month, an acceptable price for higher availability.

Automatically Recovering PMM on System Failure

Let’s get to actually setting up the automation. There are at least two ways to set up the alarm through GUI: from the EC2 console, and from the CloudWatch interface. The latter option is a bit involved, but it’s very easy to set up alarms from the EC2 console. Just right-click on your instance, pick the “Monitor and troubleshoot” section in the drop-down menu, and then choose the “Manage CloudWatch alarms” item.

EC2 console dropdown menu navigation to CloudWatch Alarms

Once there, choose “Status check failed” as the “Type of data to sample” and specify the “Recover” alarm action. You should see something like this.

Adding CloudWatch alarm through EC2 console interface

You may notice that the GUI offers to set up a notification channel for the alarm. If you want to get a message when your PMM instance is recovered automatically, feel free to set that up.

The alarm will fire when the maximum value of the StatusCheckFailed_System metric is >= 0.99 over two consecutive 300-second periods. Once it fires, it will recover the PMM instance. We can check out our new alarm in the CloudWatch GUI.

EC2 instance restart alarm

The EC2 console will also show that the alarm is set up.

Single alarm set in the instance overview

This example uses a pretty conservative alarm check duration of 10 minutes, spread over two 5-minute intervals. If you want to recover a PMM instance sooner, at the risk of triggering on false positives, you can reduce the alarm period and the number of evaluation periods. We also use the “maximum” of the metric over a 5-minute interval, which means a check could be in a failed state for only one minute out of five and still count toward alarm activation. The assumption here is that checks don’t flap for ten minutes without a reason.
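
To make that evaluation logic concrete, here is a toy re-creation of it in plain shell and awk. This is only a sketch of how the thresholds combine, not CloudWatch itself: per-minute StatusCheckFailed_System samples are rolled up as the maximum over each 300-second period, and the alarm fires once two consecutive periods breach.

```shell
#!/bin/sh
# Toy simulation of the alarm evaluation described above.
# Input: ten space-separated per-minute samples (0 = check passed, 1 = failed).
alarm_sim() {
    echo "$1" | awk '{
        breaching = 0
        for (p = 0; p < 2; p++) {            # two 5-minute evaluation periods
            max = 0
            for (i = 1; i <= 5; i++)         # MAX statistic over the period
                if ($(p * 5 + i) > max) max = $(p * 5 + i)
            if (max >= 1) breaching++        # period breaches the threshold
        }
        print (breaching == 2 ? "ALARM" : "OK")
    }'
}

# One failed minute in each 5-minute window is enough to fire:
alarm_sim "0 0 1 0 0 0 0 0 1 0"
# Failures confined to a single window are not:
alarm_sim "0 1 1 0 0 0 0 0 0 0"
```

This illustrates why the setup tolerates brief flapping only if it stops within one period.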

Automatically Restarting PMM on Instance Failure

While we’re at it, we can also set up an automatic action to execute when the “Instance status check” is failing. As mentioned, that usually happens when there’s something wrong within the instance: really high load, a configuration issue, etc. Whenever a system check fails, the instance check fails too, so we’ll have this alarm watch a longer period of time before firing. That also helps minimize false-positive restarts, for example due to a spike in load. We use the same 300-second period here but only alarm after three periods show instance failure, so the restart will happen after roughly 15 minutes. Again, this is pretty conservative, so adjust as you think works best for you.

In the GUI, repeat the same actions that we did last time, but pick the “Status check failed: instance” data type, and “Reboot” alarm action.

Adding CloudWatch alarm for instance failure through EC2 console interface

Once done, you can see both alarms reflected in the EC2 instance list.

EC2 instance overview showing 2 alarms set

Setting up Alarms Using AWS CLI

Setting up our alarms is also very easy using the AWS CLI. You’ll need an AWS account with permissions to create CloudWatch alarms, plus some EC2 permissions to list instances and change their state. A fairly minimal set of permissions can be found in the policy document below; it is enough for a user to invoke the CLI commands in this post.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "cloudwatch:PutMetricAlarm",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "cloudwatch:DescribeAlarms"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeInstances",
        "ec2:StopInstances",
        "ec2:StartInstances"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    }
  ]
}

And here are the specific AWS CLI commands. Don’t forget to change the instance id (i-xxx) and region in each command. First, set up recovery after a system status check failure.

$ aws cloudwatch put-metric-alarm \
    --alarm-name restart_PMM_on_system_failure_i-xxx \
    --alarm-actions arn:aws:automate:REGION:ec2:recover \
    --metric-name StatusCheckFailed_System --namespace AWS/EC2 \
    --dimensions Name=InstanceId,Value=i-xxx \
    --statistic Maximum --period 300 --evaluation-periods 2 \
    --datapoints-to-alarm 2 --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --tags Key=System,Value=PMM

Second, set up a restart after an instance status check failure.

$ aws cloudwatch put-metric-alarm \
    --alarm-name restart_PMM_on_instance_failure_i-xxx \
    --alarm-actions arn:aws:automate:REGION:ec2:reboot \
    --metric-name StatusCheckFailed_Instance --namespace AWS/EC2 \
    --dimensions Name=InstanceId,Value=i-xxx \
    --statistic Maximum --period 300 --evaluation-periods 3 \
    --datapoints-to-alarm 3 --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --tags Key=System,Value=PMM
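
After running both commands, it’s worth confirming that the alarms registered. A small helper (using the alarm-naming convention from the commands above; the describe-alarms query itself requires working AWS credentials, so it is shown commented out) makes the check easy to script:

```shell
#!/bin/sh
# Print the two alarm names this post creates for a given instance id.
alarm_names() {
    printf 'restart_PMM_on_system_failure_%s\n' "$1"
    printf 'restart_PMM_on_instance_failure_%s\n' "$1"
}

alarm_names i-xxx

# With credentials configured, feed the names straight into a status query
# (substitute your real instance id and region):
#   aws cloudwatch describe-alarms \
#       --alarm-names $(alarm_names i-xxx) \
#       --query 'MetricAlarms[].[AlarmName,StateValue]' \
#       --output text --region REGION
```

Both alarms should report the state OK once they have collected enough data points.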

Testing New Alarms

Unfortunately, it doesn’t seem to be possible to simulate a system status check failure (if there’s a way, let us know in the comments), so we’ll test our alarms using an instance failure instead. The simplest way to fail the Instance status check is to break networking on the instance. Let’s just bring down a network interface; in this particular instance, it’s ens5.

# ip link
...
2: ens5:...
...
# date
Sat Apr 17 20:06:26 UTC 2021
# ip link set ens5 down

The stuff of nightmares: a command that never returns. Within 15 minutes, we can check that the PMM instance is available again. We can see that the alarm triggered and executed the restart action at 20:21:37 UTC.

CloudWatch showing Alarm fired actions

And we can access the server now.

$ date
Sat Apr 17 20:22:59 UTC 2021

The alarm itself takes a little more time to return to OK state. Finally, let’s take a per-minute look at the instance check metric.

CloudWatch metric overview

Instance failure was detected at 20:10 and resolved at 20:24, according to the AWS data. In reality, the server rebooted and was accessible even earlier. PMM instances deployed from a Marketplace image have all the required services in autostart, so PMM is fully available after a restart without the need for an operator to take any action.

Summary

Even though Percona Monitoring and Management itself doesn’t offer high availability at this point, you can utilize built-in AWS features to make your PMM installation more resilient to failure. PMM in the Marketplace is set up in a way that should not have problems with recovery from the instance retirement events. After your PMM instance is recovered or rebooted, you’ll be able to access monitoring without any manual intervention.

Note: While this procedure is universal and can be applied to any EC2 instance, there are some caveats explained in the “What do I need to know when my Amazon EC2 instance is scheduled for retirement?” article on the AWS site. Note specifically the Warning section. 


Apr 29, 2021

IBM is acquiring cloud app and network management firm Turbonomic for up to $2B

IBM today made another acquisition to deepen its reach into providing enterprises with AI-based services to manage their networks and workloads. It announced that it is acquiring Turbonomic, a company that provides tools to manage application performance (specifically resource management), along with Kubernetes and network performance — part of its bigger strategy to bring more AI into IT ops, or as it calls it, AIOps.

Financial terms of the deal were not disclosed, but according to data in PitchBook, Turbonomic was valued at nearly $1 billion — $963 million, to be exact — in its last funding round in September 2019. A Reuters report earlier today on the rumored deal valued it at between $1.5 billion and $2 billion, and a source tells us that figure is accurate.

The Boston-based company’s investors included General Atlantic, Cisco, Bain, Highland Capital Partners, and Red Hat. The last of these, of course, is now a part of IBM (so it was theoretically also an investor), and together Red Hat and IBM have been developing a range of cloud-based tools addressing telco, edge and enterprise use cases.

This latest deal extends that work further, into an area where IBM has been aggressive recently. Last November, IBM acquired Instana to bring application performance management into its stable, and it said today that the Turbonomic deal will complement that acquisition, with the two technologies’ tools to be integrated.

Turbonomic’s tools are particularly useful in hybrid cloud architectures, which involve not just on-premise and cloud workloads but workloads extended across multiple cloud environments. While this may be the architecture people adopt for resilience, cost, location or other practical reasons, the fact of the matter is that it can be a challenge to manage. Turbonomic’s tools automate management, analyze performance and suggest changes for network operations engineers to make to meet usage demands.

“Businesses are looking for AI-driven software to help them manage the scale and complexity challenges of running applications cross-cloud,” said Ben Nye, CEO, Turbonomic, in a statement. “Turbonomic not only prescribes actions, but allows customers to take them. The combination of IBM and Turbonomic will continuously assure target application response times even during peak demand.”

The bigger picture for IBM is that it’s another sign of how the company is continuing to move away from its legacy business based around servers and deeper into services, and specifically services on the infrastructure of the future, cloud-based networks.

“IBM continues to reshape its future as a hybrid cloud and AI company,” said Rob Thomas, SVP, IBM Cloud and Data Platform, in a statement. “The Turbonomic acquisition is yet another example of our commitment to making the most impactful investments to advance this strategy and ensure customers find the most innovative ways to fuel their digital transformations.”

A large part of the AI promise in the world of network operations and IT ops is how it will afford companies to rely more on automation, another area where IBM has been very active. (In a very different application of this technology — in business services — this month, it acquired MyInvenio in Italy to bring process mining technology in house.)

The promise of automation, meanwhile, is lower operation costs, a critical issue for managing network performance and availability in hybrid cloud deployments.

“We believe that AI-powered automation has become inevitable, helping to make all information-centric jobs more productive,” said Dinesh Nirmal, General Manager, IBM Automation, in a statement. “That’s why IBM continues to invest in providing our customers with a one-stop shop of AI-powered automation capabilities that spans business processes and IT. The addition of Turbonomic now takes our portfolio another major step forward by ensuring customers will have full visibility into what is going on throughout their hybrid cloud infrastructure, and across their entire enterprise.”

Apr 29, 2021

Wasabi scores $112M Series C on $700M valuation to take on cloud storage hyperscalers

Taking on Amazon S3 in the cloud storage game would seem to be a foolhardy proposition, but Wasabi has found a way to build storage cheaply and pass the savings on to customers. Today the Boston-based startup announced a $112 million Series C investment on a $700 million valuation.

Fidelity Management & Research Company led the round with participation from previous investors. The company reports that it has now raised $219 million in equity so far, along with additional debt financing; it takes a lot of money to build a storage business.

CEO David Friend says that business is booming and he needed the money to keep it going. “The business has just been exploding. We achieved a roughly $700 million valuation on this round, so you can imagine that business is doing well. We’ve tripled in each of the last three years and we’re ahead of plan for this year,” Friend told me.

He says that demand continues to grow and he’s been getting requests internationally. That was one of the primary reasons he went looking for more capital. What’s more, data sovereignty laws require that certain types of sensitive data like financial and healthcare be stored in-country, so the company needs to build more capacity where it’s needed.

He says they have nailed down the process of building storage, typically inside co-location facilities, and during the pandemic they actually became more efficient as they hired a firm to put together the hardware for them onsite. They also put channel partners like managed service providers (MSPs) and value added resellers (VARs) to work by incentivizing them to sell Wasabi to their customers.

Wasabi storage starts at $5.99 per terabyte per month. That’s a heck of a lot cheaper than Amazon S3, whose standard tier starts at $0.023 per gigabyte for the first 50 terabytes, or about $23.00 a terabyte, considerably more than Wasabi’s offering.
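
To put those list prices in perspective, here is a back-of-the-envelope comparison. It is only a sketch: it uses Wasabi’s $5.99/TB-month figure against S3’s standard $0.023/GB-month rate, assumes decimal terabytes (1,000 GB), and ignores S3’s volume tiers and egress charges.

```shell
#!/bin/sh
# Monthly storage cost at the quoted list prices.
# Wasabi: $5.99 per TB-month; S3 standard: $0.023 per GB-month (~$23/TB).
storage_cost() {
    # $1: terabytes stored
    awk -v tb="$1" 'BEGIN {
        printf "Wasabi: $%.2f  S3: $%.2f\n", tb * 5.99, tb * 1000 * 0.023
    }'
}

storage_cost 100   # Wasabi: $599.00  S3: $2300.00
```

At 100 TB, the gap is already close to 4x on storage alone, which is the wedge Wasabi is betting on.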

But Friend admits that Wasabi still faces headwinds as a startup. No matter how cheap it is, companies want to be sure it’s going to be there for the long haul and a round this size from an investor with the pedigree of Fidelity will give the company more credibility with large enterprise buyers without the same demands of venture capital firms.

“Fidelity to me was the ideal investor. […] They don’t want a board seat. They don’t want to come in and tell us how to run the company. They are obviously looking toward an IPO or something like that, and they are just interested in being an investor in this business because cloud storage is a virtually unlimited market opportunity,” he said.

He sees his company as the typical kind of market irritant. He says that his company has run away from competitors in his part of the market and the hyperscalers are out there not paying attention because his business remains a fraction of theirs for the time being. While an IPO is far off, he took on an institutional investor this early because he believes it’s possible eventually.

“I think this is a big enough market we’re in, and we were lucky to get in at just the right time with the right kind of technology. There’s no doubt in my mind that Wasabi could grow to be a fairly substantial public company doing cloud infrastructure. I think we have a nice niche cut out for ourselves, and I don’t see any reason why we can’t continue to grow,” he said.

Apr
29
2021
--

Vectra AI picks up $130M at a $1.2B valuation for its network approach to threat detection and response

Cybersecurity nightmares like the SolarWinds hack highlight how malicious hackers continue to exploit vulnerabilities in software and apps to do their dirty work. Today a startup that’s built a platform to help organizations protect themselves from this by running threat detection and response at the network level is announcing a big round of funding to continue its growth.

Vectra AI, which provides a cloud-based service that uses artificial intelligence technology to monitor both on-premise and cloud-based networks for intrusions, has closed a round of $130 million at a post-money valuation of $1.2 billion.

The challenge that Vectra is looking to address is that applications, and the people who use them, will continue to be weak links in a company’s security setup, not least because malicious hackers keep finding new ways to piece together small movements within them to build, lay and finally spring their traps. While there will continue to be an interesting, and mostly effective, game of cat and mouse around those applications, a service that works at the network layer is essential as an alternative line of defense, one that can find those traps before they are used.

“Think about where the cloud is. We are in the wild west,” Hitesh Sheth, Vectra’s CEO, said in an interview. “The attack surface is so broad and attacks happen at such a rapid rate that the security concerns have never been higher at the enterprise. That is driving a lot of what we are doing.”

Sheth said that the funding will be used in two areas. First, to continue expanding its technology to meet the demands of an ever-growing threat landscape — it also has a team of researchers who work across the business to detect new activity and build algorithms to respond to it. And second, for acquisitions to bring in new technology and potentially more customers.

(Indeed, there has been a proliferation of AI-based cybersecurity startups in recent years, in areas like digital forensics, application security and specific sectors like SMBs, all of which complement the platform that Vectra has built, so you could imagine a number of interesting targets.)

The funding is being led by funds managed by Blackstone Growth, with unnamed existing investors participating (past backers include Accel, Khosla and TCV, among other financial and strategic investors). Vectra today largely focuses on enterprises, highly demanding ones with a lot to lose. Blackstone was initially a customer of Vectra’s, using the company’s flagship Cognito platform, Viral Patel, the senior managing director who led the investment for the firm, pointed out to me.

The company has built some specific products that have been very prescient in anticipating vulnerabilities in specific applications and services. While it said that sales of its Cognito platform grew 100% last year, Cognito Detect for Microsoft Office 365 (a separate product) sales grew over 700%. Not coincidentally, Microsoft’s cloud apps have faced a wave of malicious threats. Sheth said that implementing Cognito (or indeed other network security protection) “could have prevented the SolarWinds hack” for those using it.

“Through our experience as a client of Vectra, we’ve been highly impressed by their world-class technology and exceptional team,” John Stecher, CTO at Blackstone, said in a statement. “They have exactly the types of tools that technology leaders need to separate the signal from the noise in defending their organizations from increasingly sophisticated cyber threats. We’re excited to back Vectra and Hitesh as a strategic partner in the years ahead supporting their continued growth.”

Looking ahead, Sheth said that endpoint security will not be a focus for the moment because “in cloud there is so much open territory.” Instead, it partners with the likes of CrowdStrike, SentinelOne, Carbon Black and others.

In terms of what is emerging as a stronger entry point, social media is increasingly coming to the fore, he said. “Social media tends to be an effective vector to get in and will remain to be for some time,” he said, with people impersonating others and suggesting conversations over encrypted services like WhatsApp. “The moment you move to encryption and exchange any documents, it’s game over.”
