May
03
2021
--

Changes to Percona Monitoring and Management on AWS Marketplace

Percona Monitoring and Management AWS Marketplace

Percona Monitoring and Management has been available for single-click deployment from AWS Marketplace for several years now. We have hundreds of instances concurrently active, and that number is growing rapidly thanks to the unparalleled ease of deployment.

Today we are announcing a pricing change for Percona Monitoring and Management on AWS Marketplace. Currently, Percona Monitoring and Management (PMM) is available on AWS Marketplace at no added cost; effective June 1, 2021, we will add a surcharge equal to 10% of the PMM AWS EC2 costs.

Why are we making this change?

Making Percona Monitoring and Management available as a one-click deployment on AWS Marketplace is a considerable resource investment, yet under the current model only AWS directly benefits from the value we jointly provide to users who choose to run PMM on AWS. With the addition of this surcharge, both companies will benefit.

How does this reflect on Percona’s Open Source Commitment?

Percona Monitoring and Management remains a fully Open Source Project.  We’re changing how commercial offerings jointly provided by AWS and Percona will operate.  

I do not want to pay this surcharge. Are there free options?

Using AWS Marketplace is not the only way to deploy PMM on AWS. Many users deploy PMM on Amazon EC2 using Docker, and this option continues to require no additional spend beyond your infrastructure costs.

What are the benefits of running Percona Monitoring and Management through AWS Marketplace compared to alternative deployment methods?

The main benefit of running Percona Monitoring and Management through the AWS Marketplace is convenience; you can easily change the instance type or add more storage as your PMM load grows. You also have an easy path to high availability with CloudWatch Alarm Actions.
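
Whether you resize through the console or script it, changing the instance type comes down to a stop/modify/start cycle. A minimal sketch with the AWS CLI (not from the original post; i-xxx and the target instance type are placeholders, and the instance must be stopped before its type can be changed):

$ aws ec2 stop-instances --instance-ids i-xxx
$ aws ec2 modify-instance-attribute --instance-id i-xxx \
    --instance-type '{"Value": "m5.xlarge"}'
$ aws ec2 start-instances --instance-ids i-xxx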

 

How does the 10% surcharge compare?

We believe 10% extra for software on top of the infrastructure costs is a very modest charge. Amazon RDS, for example, carries a surcharge ranging from 30% to more than 70%, depending on the instance type.

How will I know the exact amount of such a surcharge?

Your bill from AWS will include a separate line item for this charge, in addition to the infrastructure costs consumed by PMM.

What does it mean for Percona Monitoring and Management on AWS Marketplace?

Having a revenue stream that is directly tied to AWS Marketplace deployments will increase the amount of resources we can spend on making Percona Monitoring and Management work even better with AWS. If you’re using PMM with AWS, deploying it through AWS Marketplace will be a great way to support PMM development.

Will Percona Monitoring and Management started through AWS Marketplace be entitled to any additional Support options?

No, Percona Monitoring and Management commercial support is available with Percona Support for Open Source Databases.  If you do not have a commercial support subscription, you can get help from the community at the Percona Forums.

What will happen to Percona Monitoring and Management instances started from AWS Marketplace which are already up and running?

Although the new pricing goes into effect on June 1st, AWS will give you 90 days’ notice before applying the new prices. If you want to avoid the surcharge, you can move your installation to a Docker-based EC2 install.

What Could AWS Do Better?

It would be great if AWS would develop some sort of affiliate program for Open Source projects, which would allow them to get a share from the value they create for AWS by driving additional infrastructure spend without having to resort to added costs. I believe this would be a win-win for Open Source projects, especially smaller ones, and AWS.

Apr
29
2021
--

Improving Percona Monitoring and Management EC2 Instance Resilience Using CloudWatch Alarm Actions

Percona Monitoring and Management EC2 Instance Resilience Using CloudWatch

Nothing lasts forever, including hardware running your EC2 instances. You will usually receive an advance warning on hardware degradation and subsequent instance retirement, but sometimes hardware fails unexpectedly. Percona Monitoring and Management (PMM) currently doesn’t have an HA setup, and such failures can leave wide gaps in monitoring if not resolved quickly.

In this post, we’ll see how to set up automatic recovery for PMM instances deployed in AWS through Marketplace. The automation will take care of the instance following an underlying systems failure. We’ll also set up an automatic restart procedure in case the PMM instance itself is experiencing issues. These simple automatic actions go a long way in improving the resilience of your monitoring setup and minimizing downtime.

Some Background

Each EC2 instance has two associated status checks: System and Instance. You can read in more detail about them on the “Types of status checks” AWS documentation page. The gist of it is, the System check fails to pass when there are some infrastructure issues. The Instance check fails to pass if there’s anything wrong on the instance side, like its OS having issues. You can normally see the results of these checks as “2/2 checks passed” markings on your instances in the EC2 console.

ec2 instance overview showing 2/2 status checks
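
The same status information is also available from the CLI; a quick check might look like this (i-xxx is a placeholder instance ID):

$ aws ec2 describe-instance-status --instance-ids i-xxx \
    --query 'InstanceStatuses[].{System:SystemStatus.Status,Instance:InstanceStatus.Status}'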

CloudWatch, an AWS monitoring system, can react to the status check state changes. Specifically, it is possible to set up a “recover” action for an EC2 instance where a system check is failing. The recovered instance is identical to the original, but will not retain the same public IP unless it’s assigned an Elastic IP. I recommend that you use Elastic IP for your PMM instances (see also this note in PMM documentation). For the full list of caveats related to instance recovery check out the “Recover your instance” page in AWS documentation.
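
If you want to script the Elastic IP part, it comes down to two calls; a sketch (i-xxx is a placeholder, and eipalloc-xxx stands for the AllocationId returned by the first command):

$ aws ec2 allocate-address --domain vpc
$ aws ec2 associate-address --instance-id i-xxx \
    --allocation-id eipalloc-xxx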

According to CloudWatch pricing, having two alarms set up will cost $0.2/month. An acceptable cost for higher availability.

Automatically Recovering PMM on System Failure

Let’s get to actually setting up the automation. There are at least two ways to set up the alarm through GUI: from the EC2 console, and from the CloudWatch interface. The latter option is a bit involved, but it’s very easy to set up alarms from the EC2 console. Just right-click on your instance, pick the “Monitor and troubleshoot” section in the drop-down menu, and then choose the “Manage CloudWatch alarms” item.

EC2 console dropdown menu navigation to CloudWatch Alarms

Once there, choose the “Status check failed” as the “Type of data to sample”, and specify the “Recover” Alarm action. You should see something like this.

Adding CloudWatch alarm through EC2 console interface

You may notice that the GUI offers to set up a notification channel for the alarm. If you want to get a message when your PMM instance is recovered automatically, feel free to set that up.

The alarm will fire when the maximum value of the StatusCheckFailed_System metric is >=0.99 during two consecutive 300-second periods. Once the alarm fires, it will recover the PMM instance. We can check out our new alarm in the CloudWatch GUI.

EC2 instance restart alarm

EC2 console will also show that the alarm is set up.

Single alarm set in the instance overview

This example uses a pretty conservative alarm check duration of 10 minutes, spread over two 5-minute intervals. If you want to recover a PMM instance sooner, at the risk of triggering on false positives, you can bring down the alarm period and the number of evaluation periods. We also use the “maximum” of the metric over a 5-minute interval. That means a check could be in a failed state for only one minute out of five and still count towards alarm activation. The assumption here is that checks don’t flap for ten minutes without a reason.

Automatically Restarting PMM on Instance Failure

While we’re at it, we can also set up an automatic action to execute when “Instance status check” is failing. As mentioned, usually that happens when there’s something wrong within the instance: really high load, configuration issue, etc. Whenever a system check fails, an instance check is going to be failing, too, so we’ll set this alarm to check for a longer period of time before firing. That’ll also help us to minimize the rate of false-positive restarts, for example, due to a spike in load. We use the same period of 300 seconds here but will only alarm after three periods show instance failure. The restart will thus happen after ~15 minutes. Again, this is pretty conservative, so adjust as you think works best for you.

In the GUI, repeat the same actions that we did last time, but pick the “Status check failed: instance” data type, and “Reboot” alarm action.

Adding CloudWatch alarm for instance failure through EC2 console interface

Once done, you can see both alarms reflected in the EC2 instance list.

EC2 instance overview showing 2 alarms set

Setting up Alarms Using AWS CLI

Setting up our alarm is also very easy to do using the AWS CLI.  You’ll need an AWS account with permissions to create CloudWatch alarms, and some EC2 permissions to list instances and change their state. A pretty minimal set of permissions can be found in the policy document below. This set is enough for a user to invoke CLI commands in this post.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "cloudwatch:PutMetricAlarm",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "cloudwatch:DescribeAlarms"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeInstances",
        "ec2:StopInstances",
        "ec2:StartInstances"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    }
  ]
}
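
One way to attach this policy is as an inline user policy; the user name, policy name, and file name below are only illustrative:

$ aws iam put-user-policy --user-name pmm-alarm-admin \
    --policy-name pmm-alarm-permissions \
    --policy-document file://pmm-alarm-policy.json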

And here are the specific AWS CLI commands. Do not forget to change the instance ID (i-xxx) and the region (REGION) in the commands. First, setting up recovery after a system status check failure.

$ aws cloudwatch put-metric-alarm \
    --alarm-name restart_PMM_on_system_failure_i-xxx \
    --alarm-actions arn:aws:automate:REGION:ec2:recover \
    --metric-name StatusCheckFailed_System --namespace AWS/EC2 \
    --dimensions Name=InstanceId,Value=i-xxx \
    --statistic Maximum --period 300 --evaluation-periods 2 \
    --datapoints-to-alarm 2 --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --tags Key=System,Value=PMM
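
To confirm the alarm was created and see its current state, you can query CloudWatch (the alarm name matches the one used above):

$ aws cloudwatch describe-alarms \
    --alarm-names restart_PMM_on_system_failure_i-xxx \
    --query 'MetricAlarms[].{Name:AlarmName,State:StateValue}'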

Second, setting up restart after instance status check failure.

$ aws cloudwatch put-metric-alarm \
    --alarm-name restart_PMM_on_instance_failure_i-xxx \
    --alarm-actions arn:aws:automate:REGION:ec2:reboot \
    --metric-name StatusCheckFailed_Instance --namespace AWS/EC2 \
    --dimensions Name=InstanceId,Value=i-xxx \
    --statistic Maximum --period 300 --evaluation-periods 3 \
    --datapoints-to-alarm 3 --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --tags Key=System,Value=PMM

Testing New Alarms

Unfortunately, it doesn’t seem to be possible to simulate system status check failure. If there’s a way, let us know in the comments. Because of that, we’ll be testing our alarms using instance failure instead. The simplest way to fail the Instance status check is to mess up networking on an instance. Let’s just bring down a network interface. In this particular instance, it’s ens5.

# ip link
...
2: ens5:...
...
# date
Sat Apr 17 20:06:26 UTC 2021
# ip link set ens5 down

The stuff of nightmares: having no response to a command. After about 15 minutes, we can check that the PMM instance is available again. We can see that the alarm triggered and executed the restart action at 20:21:37 UTC.

CloudWatch showing Alarm fired actions
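
The same history is available from the CLI; for example, listing the actions taken by the instance-failure alarm set up earlier:

$ aws cloudwatch describe-alarm-history \
    --alarm-name restart_PMM_on_instance_failure_i-xxx \
    --history-item-type Action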

And we can access the server now.

$ date
Sat Apr 17 20:22:59 UTC 2021

The alarm itself takes a little more time to return to OK state. Finally, let’s take a per-minute look at the instance check metric.

CloudWatch metric overview

Instance failure was detected at 20:10 and resolved at 20:24, according to the AWS data. In reality, the server rebooted and was accessible even earlier. PMM instances deployed from a Marketplace image have all the required services set to start automatically, so PMM is fully available after a restart without an operator having to take any action.

Summary

Even though Percona Monitoring and Management itself doesn’t offer high availability at this point, you can utilize built-in AWS features to make your PMM installation more resilient to failure. PMM in the Marketplace is set up in a way that should not have problems with recovery from the instance retirement events. After your PMM instance is recovered or rebooted, you’ll be able to access monitoring without any manual intervention.

Note: While this procedure is universal and can be applied to any EC2 instance, there are some caveats explained in the “What do I need to know when my Amazon EC2 instance is scheduled for retirement?” article on the AWS site. Note specifically the Warning section. 

Apr
27
2021
--

Arm launches its latest chip design for HPC, data centers and the edge

Arm today announced the launch of two new platforms, Arm Neoverse V1 and Neoverse N2, as well as a new mesh interconnect for them. As you can tell from the name, V1 is a completely new product and maybe the best example yet of Arm’s ambitions in the data center, high-performance computing and machine learning space. N2 is Arm’s next-generation general compute platform that is meant to span use cases from hyperscale clouds to SmartNICs and running edge workloads. It’s also the first design based on the company’s new Armv9 architecture.

Not too long ago, high-performance computing was dominated by a small number of players, but the Arm ecosystem has scored its fair share of wins here recently, with supercomputers in South Korea, India and France betting on it. The promise of V1 is that it will vastly outperform the older N1 platform, with a 2x gain in floating-point performance, for example, and a 4x gain in machine learning performance.

Image Credits: Arm

“The V1 is about how much performance can we bring — and that was the goal,” Chris Bergey, SVP and GM of Arm’s Infrastructure Line of Business, told me. He also noted that the V1 is Arm’s widest architecture yet, and that while V1 wasn’t specifically built for the HPC market, it was definitely a target market. And while the current Neoverse V1 platform isn’t based on the new Armv9 architecture yet, the next generation will be.

N2, on the other hand, is all about getting the most performance per watt, Bergey stressed. “This is really about staying in that same performance-per-watt-type envelope that we have within N1 but bringing more performance,” he said. In Arm’s testing, NGINX saw a 1.3x performance increase versus the previous generation, for example.

Image Credits: Arm

In many ways, today’s release is also a chance for Arm to highlight its recent customer wins. AWS Graviton2 is obviously doing quite well, but Oracle is also betting on Ampere’s Arm-based Altra CPUs for its cloud infrastructure.

“We believe Arm is going to be everywhere — from edge to the cloud. We are seeing N1-based processors deliver consistent performance, scalability and security that customers want from Cloud infrastructure,” said Bev Crair, senior VP, Oracle Cloud Infrastructure Compute. “Partnering with Ampere Computing and leading ISVs, Oracle is making Arm server-side development a first-class, easy and cost-effective solution.”

Meanwhile, Alibaba Cloud and Tencent are both investing in Arm-based hardware for their cloud services as well, while Marvell will use the Neoverse N2 architecture for its OCTEON networking solutions.

Apr
22
2021
--

Google’s Anthos multicloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multicloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and (in preview) on Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called this project the “Google Cloud Services Platform,” which launched three years ago). Hybrid and multicloud, it’s fair to say, takes a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. Recently, Google brought on industry veteran Jeff Reed to become the VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call “an anchor in the cloud” to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example. 

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.

Mar
24
2021
--

Why Adam Selipsky was the logical choice to run AWS

When AWS CEO Andy Jassy announced in an email to employees yesterday that Tableau CEO Adam Selipsky was returning to run AWS, it was probably not the choice most considered. But to the industry watchers we spoke to over the last couple of days, it was a move that made absolute sense once you thought about it.

Gartner analyst Ed Anderson says that the cultural fit was probably too good for Jassy to pass up. Selipsky spent 11 years helping build the division, so he was someone Jassy knew well and had worked side by side with for over a decade. He could slide into the new role and be trusted to continue building the lucrative division.

Anderson says that even though the size and scope of AWS have changed dramatically since Selipsky left in 2016, when the company closed the year on a $16 billion run rate, the organization’s cultural dynamics haven’t changed all that much.

“Success in this role requires a deep understanding of the Amazon/AWS culture in addition to a vision for AWS’s future growth. Adam already knows the AWS culture from his previous time at AWS. Yes, AWS was a smaller business when he left, but the fundamental structure and strategy was in place and the culture hasn’t notably evolved since then,” Anderson told me.

Matt McIlwain, managing director at Madrona Venture Group, says the experience Selipsky had after he left AWS will prove invaluable when he returns.

“Adam transformed Tableau from a desktop, licensed software company to a cloud, subscription software company that thrived. As the leader of AWS, Adam is returning to a culture he helped grow as the sales and marketing leader that brought AWS to prominence and broke through from startup customers to become the leading enterprise solution for public cloud,” he said.

Holger Mueller, an analyst with Constellation Research, says that Selipsky’s business experience gave him the edge over other candidates. “His business acumen won out over [internal candidates] Matt Garman and Peter DeSantis. Insight on how Salesforce works may be helpful and valued as well,” Mueller pointed out.

As for leaving Tableau and with it Salesforce, the company that purchased it for $15.7 billion in 2019, Brent Leary, founder and principal analyst at CRM Essentials, believes that it was only a matter of time before some of these acquired company CEOs left to do other things. In fact, he’s surprised it didn’t happen sooner.

“Given Salesforce’s growing stable of top notch CEOs accumulated by way of a slew of high-profile acquisitions, you really can’t expect them all to stay forever, and given Adam Selipsky’s tenure at AWS before becoming Tableau’s CEO, this move makes a whole lot of sense. Amazon brings back one of their own, and he is also a wildly successful CEO in his own right,” Leary said.

While the consensus is that Selipsky is a good choice, he is going to have awfully big shoes to fill. The fact is that the division continues to grow rapidly and is currently on a run rate of over $50 billion. With a track record like that to follow, and Jassy still close at hand, Selipsky has to simply continue letting the unit do its thing while putting his own unique stamp on it.

Any kind of change is disconcerting though, and it will be up to him to put customers and employees at ease and plow ahead into the future. Same mission. New boss.

Mar
23
2021
--

Tableau CEO Adam Selipsky is returning to AWS to replace Andy Jassy as CEO

When Amazon announced last month that Jeff Bezos was moving into the executive chairman role, and AWS CEO Andy Jassy would be taking over the entire Amazon operation, speculation began about who would replace Jassy.

People considered a number of internal candidates viable, such as Peter DeSantis, vice president of global infrastructure at AWS and Matt Garman, who is vice president of sales and marketing. Not many would have chosen Tableau CEO Adam Selipsky, but sure enough he is returning home to run the division he left in 2016.

In an email to employees, Jassy wasted no time getting to the point that Selipsky was his choice, saying that the former employee helped launch the division when they hired him in 2005, then spent 11 years helping Jassy build the unit before taking the job at Tableau. Through that lens, the choice makes perfect sense.

“Adam brings strong judgment, customer obsession, team building, demand generation, and CEO experience to an already very strong AWS leadership team. And, having been in such a senior role at AWS for 11 years, he knows our culture and business well,” Jassy wrote in the email.

Jassy has run AWS since its earliest days, taking it from humble beginnings as a kind of internal experiment on running a storage web service to building a mega division currently on a $51 billion run rate. It is that juggernaut that will be Selipsky’s to run, but he seems well-suited for the job.

He is a seasoned executive, and while he’s been away from AWS since before it really began to grow into a huge operation, he should still understand the culture well enough to step smoothly into the role.  At the same time, he’s leaving Tableau, a company he helped transform from a desktop software company into one firmly based in the cloud.

Salesforce bought Tableau in June 2019 for a cool $15.7 billion and Selipsky has remained at the helm since then, but perhaps the lure of running AWS was too great and he decided to take the leap to the new job.

When we wrote a story at the end of last year about Salesforce’s deep bench of executive talent, Selipsky was one of the CEOs we pointed at as a possible replacement should CEO and chairman Marc Benioff step down. But with it looking more like president and COO Bret Taylor would be the heir apparent, perhaps Selipsky was ready for a new challenge.

Selipsky will make his return to AWS on May 17th and spend a few weeks with Jassy in a transitional time before taking over the division to run on his own. As Jassy slides into the Amazon CEO role, it’s clear the two will continue to work closely together, just as they did for over a decade.

Mar
17
2021
--

Amazon will expand its Amazon Care on-demand healthcare offering U.S.-wide this summer

Amazon is apparently pleased with how its Amazon Care pilot in Seattle has gone, since it announced this morning that it will be expanding the offering across the U.S. this summer, and opening it up to companies of all sizes, in addition to its own employees. The Amazon Care model combines on-demand and in-person care, and is meant as a solution from the e-commerce giant to address shortfalls in current employer-sponsored healthcare offerings.

In a blog post announcing the expansion, Amazon touted the speed of access to care made possible for its employees and their families via the remote, chat and video-based features of Amazon Care. These are facilitated via a dedicated Amazon Care app, which provides direct, live chats with a nurse or doctor. Issues that require in-person care are then handled via a house call, so a medical professional is actually sent to your home to take care of things like administering blood tests or doing a chest exam, and prescriptions are delivered to your door as well.

The expansion is being handled differently across the in-person and remote variants of care: remote services will be available starting this summer to both Amazon’s own employees and other companies that sign on as customers. The in-person side will be rolling out more slowly, starting with availability in Washington, D.C., Baltimore, and “other cities in the coming months,” according to the company.

As of today, Amazon Care is expanding in its home state of Washington to begin serving other companies. The idea is that others will sign on to make Amazon Care part of their overall benefits packages for employees. Amazon is touting the speed advantages of testing services, including results delivery, for things including COVID-19 as a major strength of the service.

The Amazon Care model has a surprisingly Amazon twist, too – when using the in-person care option, the app will provide an updating ETA for when to expect your physician or medical technician, which is eerily similar to how its primary app treats package delivery.

While the Amazon Care pilot in Washington only launched a year-and-a-half ago, the company has had its collective mind set on upending the corporate healthcare industry for some time now. It announced a partnership with Berkshire Hathaway and JPMorgan back at the very beginning of 2018 to form a joint venture specifically to address the gaps they saw in the private corporate healthcare provider market.

That deep-pocketed all-star team ended up officially disbanding at the outset of this year, after having done a whole lot of not very much in the three years in between. One of the stated reasons that Amazon and its partners gave for unpartnering was that each had made a lot of progress on its own in addressing the problems it had faced anyway. While Berkshire Hathaway and JPMorgan’s work in that regard might be less obvious, Amazon was clearly referring to Amazon Care.

It’s not unusual for large tech companies with lots of cash on the balance sheet and a need to attract and retain top-flight talent to spin up their own healthcare benefits for their workforces. Apple and Google both have their own on-campus wellness centers staffed by medical professionals, for instance. But Amazon’s ambitions have clearly exceeded those of its peers, and it looks intent on making a business line out of the work it did to improve its own employee care services — a strategy that isn’t too dissimilar from what happened with AWS, by the way.

Mar
02
2021
--

Microsoft Azure expands its NoSQL portfolio with Managed Instances for Apache Cassandra

At its Ignite conference today, Microsoft announced the launch of Azure Managed Instance for Apache Cassandra, its latest NoSQL database offering and a competitor to Cassandra-centric companies like Datastax. Microsoft describes the new service as a “semi-managed” offering that will help companies bring more of their Cassandra-based workloads into its cloud.

“Customers can easily take on-prem Cassandra workloads and add limitless cloud scale while maintaining full compatibility with the latest version of Apache Cassandra,” Microsoft explains in its press materials. “Their deployments gain improved performance and availability, while benefiting from Azure’s security and compliance capabilities.”

Like its counterpart, Azure SQL Managed Instance, the idea here is to give users access to a scalable, cloud-based database service. To use Cassandra in Azure before, businesses had to either move to Cosmos DB, its highly scalable database service that supports the Cassandra, MongoDB, SQL and Gremlin APIs, or manage their own fleet of virtual machines or on-premises infrastructure.

Cassandra was originally developed at Facebook and then open-sourced in 2008. A year later, it joined the Apache Foundation, and today it’s used widely across the industry, with companies like Apple and Netflix betting on it for some of their core services, for example. AWS launched a managed Cassandra-compatible service at its re:Invent conference in 2019 (it’s called Amazon Keyspaces today), while Microsoft launched the Cassandra API for Cosmos DB in September 2018. With today’s announcement, though, the company can now offer a full range of Cassandra-based services for enterprises that want to move these workloads to its cloud.


Feb
17
2021
--

Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

And a lot of the development involves re-inventing the wheel to make their applications reliably talk to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools that developers need to build event-driven microservices. Among other things, Dapr provides various building blocks for things like service-to-service communications, state management, pub/sub and secrets management.

Image Credits: Dapr
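
As a rough illustration of what those building blocks look like in practice (not taken from the post), an application can call its local Dapr sidecar over HTTP. This sketch assumes the default sidecar port of 3500 and a state store component named statestore:

$ curl -X POST http://localhost:3500/v1.0/state/statestore \
    -H "Content-Type: application/json" \
    -d '[{ "key": "order-1", "value": { "status": "shipped" } }]'
$ curl http://localhost:3500/v1.0/state/statestore/order-1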

“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft), and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers that is already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.

 

Feb
17
2021
--

TigerGraph raises $105M Series C for its enterprise graph database

TigerGraph, a well-funded enterprise startup that provides a graph database and analytics platform, today announced that it has raised a $105 million Series C funding round. The round was led by Tiger Global and brings the company’s total funding to over $170 million.

“TigerGraph is leading the paradigm shift in connecting and analyzing data via scalable and native graph technology with pre-connected entities versus the traditional way of joining large tables with rows and columns,” said TigerGraph founder and CEO, Yu Xu. “This funding will allow us to expand our offering and bring it to many more markets, enabling more customers to realize the benefits of graph analytics and AI.”

Current TigerGraph customers include the likes of Amgen, Citrix, Intuit, Jaguar Land Rover and UnitedHealth Group. Using a SQL-like query language (GSQL), these customers can use the company’s services to store and quickly query their graph databases. At the core of its offerings is the TigerGraphDB database and analytics platform, but the company also offers a hosted service, TigerGraph Cloud, with pay-as-you-go pricing, hosted either on AWS or Azure. With GraphStudio, the company also offers a graphical UI for creating data models and visually analyzing them.

The promise for the company’s database services is that they can scale to tens of terabytes of data with billions of edges. Its customers use the technology for a wide variety of use cases, including fraud detection, customer 360, IoT, AI and machine learning.

Like so many other companies in this space, TigerGraph is enjoying a tailwind thanks to the fact that many enterprises have accelerated their digital transformation projects during the pandemic.

“Over the last 12 months with the COVID-19 pandemic, companies have embraced digital transformation at a faster pace driving an urgent need to find new insights about their customers, products, services, and suppliers,” the company explains in today’s announcement. “Graph technology connects these domains from the relational databases, offering the opportunity to shrink development cycles for data preparation, improve data quality, identify new insights such as similarity patterns to deliver the next best action recommendation.”
