Nov
19
2020
--

Amazon S3 Storage Lens gives IT visibility into complex S3 usage

As your S3 storage requirements grow, it gets harder to understand exactly what you have, and this is especially true when your storage crosses multiple regions. This has broad implications for administrators, who have been forced to build their own solutions to get that missing visibility. AWS changed that this week when it announced a new product called Amazon S3 Storage Lens, a way to understand highly complex S3 storage environments.

The tool provides analytics that help you understand what’s happening across your S3 object storage installations and take action when needed. “This is the first cloud storage analytics solution to give you organization-wide visibility into object storage, with point-in-time metrics and trend lines as well as actionable recommendations,” the company wrote in a blog post describing the new service.

Amazon S3 Storage Lens Console

Image Credits: Amazon

The idea is to present a set of 29 metrics in a dashboard that help you “discover anomalies, identify cost efficiencies and apply data protection best practices,” according to the company. IT administrators can get a view of their storage landscape and can drill down into specific instances when necessary, such as if there is a problem that requires attention. The product comes out of the box with a default dashboard, but admins can also create their own customized dashboards, and even export S3 Lens data to other Amazon tools.

For companies with complex storage requirements, as in thousands or even tens of thousands of S3 buckets, that have had to kludge together ways to understand what’s happening across those systems, this provides a single view across it all.

S3 Storage Lens is now available in all AWS regions, according to the company.

Nov
18
2020
--

IBM is acquiring APM startup Instana as it continues to expand hybrid cloud vision

As IBM transitions from a software and services company to one fully focused on hybrid cloud management, it announced its intention to buy Instana, an application performance management startup whose cloud-native approach fits firmly within that strategy.

The companies did not reveal the purchase price.

With Instana, IBM can build on its internal management tools, giving it a way to monitor containerized environments running Kubernetes. It hopes by adding the startup to the fold it can give customers a way to manage complex hybrid and multi-cloud environments.

“Our clients today are faced with managing a complex technology landscape filled with mission-critical applications and data that are running across a variety of hybrid cloud environments – from public clouds, private clouds and on-premises,” Rob Thomas, senior vice president for cloud and data platform, said in a statement. He believes Instana will help ease that load, while using machine learning to provide deeper insights.

At the time of the company’s $30 million Series C in 2018, TechCrunch’s Frederic Lardinois described the company this way: “What really makes Instana stand out is its ability to automatically discover and monitor the ever-changing infrastructure that makes up a modern application, especially when it comes to running containerized microservices.” That would seem to be precisely the type of solution that IBM would be looking for.

As for Instana, the founders see a good fit for the two companies, especially in light of the Red Hat acquisition in 2018 that is core to IBM’s hybrid approach. “The combination of Instana’s next generation APM and Observability platform with IBM’s Hybrid Cloud and AI technologies excited me from the day IBM approached us with the idea of joining forces and combining our technologies,” CEO Mirko Novakovic wrote in a blog post announcing the deal.

Indeed, in a recent interview, IBM CEO Arvind Krishna told CNBC’s Jon Fortt that the company is betting the farm on hybrid cloud management with Red Hat at the center. When you combine that with the decision to spin out the company’s managed infrastructure services business, this purchase shows that IBM intends to pursue every angle.

“The Red Hat acquisition gave us the technology base on which to build a hybrid cloud technology platform based on open-source, and based on giving choice to our clients as they embark on this journey. With the success of that acquisition now giving us the fuel, we can then take the next step, and the larger step, of taking the managed infrastructure services out. So the rest of the company can be absolutely focused on hybrid cloud and artificial intelligence,” Krishna told CNBC.

Instana, which is based in Chicago with offices in Munich, was founded in 2015 in the early days of Kubernetes, and the startup’s APM solution has evolved to focus on the needs of monitoring in a cloud-native environment. The company raised $57 million along the way, with the most recent round being that Series C in 2018.

The deal, per usual, is subject to regulatory approval, but the company believes it should close in the next few months.

Nov
18
2020
--

Abacus.AI raises another $22M and launches new AI modules

AI startup RealityEngines.AI changed its name to Abacus.AI in July. At the same time, it announced a $13 million Series A round. Today, only a few months later, it is not changing its name again, but it is announcing a $22 million Series B round, led by Coatue, with Decibel Ventures and Index Partners participating as well. With this, the company, which was co-founded by former AWS and Google exec Bindu Reddy, has now raised a total of $40.3 million.

Abacus.AI co-founders Bindu Reddy, Arvind Sundararajan and Siddartha Naidu. Image Credits: Abacus.AI

In addition to the new funding, Abacus.AI is also launching a new product today, which it calls Abacus.AI Deconstructed. Originally, the idea behind RealityEngines/Abacus.AI was to provide its users with a platform that would simplify building AI models by using AI to automatically train and optimize them. That hasn’t changed, but as it turns out, a lot of (potential) customers had already invested in their own workflows for building and training deep learning models but were looking for help in putting them into production and managing them throughout their lifecycle.

“One of the big pain points [businesses] had was, ‘look, I have data scientists and I have my models that I’ve built in-house. My data scientists have built them on laptops, but I don’t know how to push them to production. I don’t know how to maintain and keep models in production.’ I think pretty much every startup now is thinking of that problem,” Reddy said.

Image Credits: Abacus.AI

Since Abacus.AI had already built those tools anyway, the company decided to also break its service down into three parts that users can adopt without relying on the full platform. That means you can now bring your own model to the service and have the company host and monitor it for you. The service will manage the model in production and, for example, monitor for model drift.

Another area Abacus.AI has long focused on is model explainability and de-biasing, so it’s making that available as a module too, along with its real-time machine learning feature store, which helps organizations create, store and share their machine learning features and deploy them into production.

As for the funding, Reddy tells me the company didn’t really have to raise a new round at this point. After the company announced its first round earlier this year, there was quite a lot of interest from others to also invest. “So we decided that we may as well raise the next round because we were seeing adoption, we felt we were ready product-wise. But we didn’t have a large enough sales team. And raising a little early made sense to build up the sales team,” she said.

Reddy also stressed that unlike some of the company’s competitors, Abacus.AI is trying to build a full-stack self-service solution that can essentially compete with the offerings of the big cloud vendors. That — and the engineering talent to build it — doesn’t come cheap.

Image Credits: Abacus.AI

It’s no surprise then that Abacus.AI plans to use the new funding to increase its R&D team, but it will also increase its go-to-market team from two to ten in the coming months. While the company is betting on a self-service model — and is seeing good traction with small- and medium-sized companies — you still need a sales team to work with large enterprises.

Come January, the company also plans to launch support for more languages and more machine vision use cases.

“We are proud to be leading the Series B investment in Abacus.AI, because we think that Abacus.AI’s unique cloud service now makes state-of-the-art AI easily accessible for organizations of all sizes, including start-ups,” Yanda Erlich, a partner at Coatue Ventures, told me. “Abacus.AI’s end-to-end autonomous AI service powered by their Neural Architecture Search invention helps organizations with no ML expertise easily deploy deep learning systems in production.”


Nov
18
2020
--

Grouparoo snares $3M seed to build open source customer data integration framework

Creating a great customer experience requires a lot of data from a variety of sources, and pulling that disparate data together has captured the attention of companies big and small, from Salesforce and Adobe to Segment and Klaviyo. Today, Grouparoo, a new startup from three industry vets, is the next company up, with an open source framework designed to make it easier for developers to access and make use of customer data.

The company announced a $3 million seed investment led by Eniac Ventures and Fuel Capital with participation from Hack VC, Liquid2, SCM Advisors and several unnamed angel investors.

Grouparoo CEO and co-founder Brian Leonard says that his company created this open source customer data framework based on his own experience with the difficulty of getting customer data into the various tools he has used since he was the technical founder at TaskRabbit in 2008.

“We’re an open source data framework that helps companies easily sync their customer data from their database or warehouse to all of the SaaS tools where they need it. [After you] install it, you teach it about your customers, like what properties are important in each of those profiles. And then it allows you to segment them into the groups that matter,” Leonard explained.

This could be something like high earners in San Francisco, along with names and addresses. Grouparoo can grab this data and transfer it to a marketing tool like Marketo or Zendesk, and these tools could then learn who your VIP customers are.

For now, the company is just the three founders: Leonard, CTO Evan Tahler and COO Andy Jih. While Leonard wasn’t ready to commit to how many people he might hire in the next 12 months, he sees it being fewer than 10. Even at this early stage, the three co-founders have been considering how to build a diverse and inclusive company, something Leonard helped contribute to while he was at TaskRabbit.

“So, coming from [what we built at TaskRabbit] and starting something new, it’s important to all three of us to start [building a diverse company] from the beginning, and especially combined with this notion that we’re building something open source. We’ve been talking a lot about being open about our culture and what’s important to us,” he said.

TaskRabbit also comes into play in the investment: Fuel GP Leah Solivan was also the founder of TaskRabbit. “Grouparoo is solving a real and acute issue that companies grapple with as they scale — giving every member of the team access to the data they need to drive revenue, acquire customers and improve real-time decision making. Brian, Andy and Evan have developed an elegant solution to an issue we experienced firsthand at TaskRabbit,” she said.

For now, the company is taking an open source approach to build a community around the tool. It is still pre-revenue, but the plan is to build something commercial on top of the open source tooling. The founders are considering an open-core license, under which they could add features or support, or offer the tool as a service. Leonard says that is something they intend to work out in 2021.

Nov
18
2020
--

CloudBolt announces $35M Series B debt/equity investment to help manage hybrid cloud

CloudBolt, a Bethesda, Maryland startup that helps companies manage hybrid cloud environments, announced a $35 million Series B investment today. It was split between $15 million in equity investment and $20 million in debt.

Insight Partners provided the equity side of the equation, while Hercules Capital and Bridge Bank supplied the venture debt. The company has now raised more than $61 million in equity and debt, according to Crunchbase data.

CEO Jeff Kukowski says that his company helps customers with cloud and DevOps management including cost control, compliance and security. “We help [our customers] take advantage of the fact that most organizations are already hybrid cloud, multi cloud and/or multi tool. So you have all of this innovation happening in the world, and we make it easier for them to take advantage of it,” he said.

As he sees it, the move to cloud and DevOps, which was supposed to simplify everything, has actually created new complexity, and the tools his company sells are designed to help companies reduce some of that added complexity. What they do is provide a way to automate, secure and optimize their workloads, regardless of the tools or approach to infrastructure they are using.

The company closed the funding round at the end of last quarter and put it to work with a couple of acquisitions — Kumolus and SovLabs — to help accelerate and fill in the roadmap. Kumolus, which was founded in 2011 and raised $1.7 million, according to Crunchbase, helps CloudBolt extend its vision from managing on-premises environments to the public cloud.

SovLabs was an early-stage startup working on a very specific problem: creating a framework for extending VMware automation.

CloudBolt currently has 170 employees. While Kukowski didn’t want to get specific about the number of additional employees he might be adding to that in the next 12 months, he says that as he does, he thinks about diversity in three ways.

“One is just pure education. So we as a company regularly meet and educate on issues around inclusion, social justice and diversity. We also recruit with those ideas in mind. And then we also have a standing committee within the company that continues to look at issues not only for discussion, but quite frankly for investment in terms of time and fundraising,” he said.

Kukowski says that going remote because of COVID has allowed the company to hire from anywhere, but he still looks forward to a time when he can meet face-to-face with his employees and customers, and sees that as always being part of his company’s culture.

CloudBolt was founded in 2012 and has around 200 customers. Kukowski says that the company is growing between 40% and 50% year over year, although he wouldn’t share specific revenue numbers.

Nov
17
2020
--

Wondering How to Run Percona XtraDB Cluster on Kubernetes? Try Our Operator!

Run Percona XtraDB Cluster on Kubernetes

Kubernetes has been a big trend for a while now, particularly well-suited for microservices. Running your main databases on Kubernetes is probably NOT what you are looking for. However, there’s a niche market for it. My colleague Stephen Thorn did a great job explaining this in The Criticality of a Kubernetes Operator for Databases. If you are considering running your database on Kubernetes, have a look at that post first. And if, after reading it, you start wondering how the Operator works, Stephen also wrote an Introduction to Percona Kubernetes Operator for Percona XtraDB Cluster (PXC), which presents the Kubernetes architecture and how the Percona Operator simplifies the deployment of a full HA PXC cluster in this environment, proxies included!

Now, if you are curious about how it actually works in practice but are afraid the entry barrier is too high, I can help you with that. In fact, this technology is widespread now, with most cloud providers offering a dedicated Kubernetes engine. In this blog post, I’ll walk you through the steps to deploy a Percona XtraDB Cluster (PXC) using the Percona Operator for Kubernetes on Google Cloud Platform (GCP).

Creating a Virtual Environment to Run Kubernetes on GCP

Google Cloud Platform includes among its products the Google Kubernetes Engine (GKE). We can take advantage of their trial offer to create our test cluster there: https://cloud.google.com.

After you sign up, you can access all the bells and whistles in their web interface. Note that the Kubernetes Engine API is not enabled by default; you need to enable it by visiting the Kubernetes Engine section in the left menu, under COMPUTE.

For the purpose of deploying our environment, we should install their SDK and work from the command line: see https://cloud.google.com/sdk/docs/install and follow the respective installation instructions for your OS (you will probably want to install the SDK on your personal computer).

With the SDK installed, we can initialize our environment, which requires authenticating to the Google Cloud account:

gcloud init

You will be prompted to choose a cloud project to use: there’s one created by default when the account is activated, named “My First Project”. It will receive a unique id, which you can verify in the Google Cloud interface, but usually, it is displayed as the first option presented in the prompt.

Alternatively, you can use gcloud config set to configure your default project and zone, among other settings.
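For instance, you can pin the defaults used in the rest of this walkthrough up front, so later commands can omit the corresponding flags (the project id below is a placeholder; substitute the one assigned to your account):

```shell
# Set a default project and zone (my-first-project-123456 is a placeholder id)
gcloud config set project my-first-project-123456
gcloud config set compute/zone us-central1-b

# Verify the active configuration
gcloud config list
```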

For this exercise, we will be creating a 3-node cluster named k8-test-cluster with n1-standard-4 instances in the us-central1-b zone:

gcloud container clusters create --machine-type n1-standard-4 --num-nodes 3 --zone us-central1-b --cluster-version latest k8-test-cluster

If the command above was successful, you should see your newly created cluster in the list returned by:

gcloud container clusters list

Getting Ready to Work with Kubernetes

Besides the Google Cloud SDK that is used to manage the cloud instances, we also need the Kubernetes command-line tool, kubectl, to manage the Kubernetes cluster. One way to install it is through gcloud itself:

gcloud components install kubectl

This method won’t work for everyone though, as the Cloud SDK component manager is disabled for certain kinds of installation, such as through apt or yum in Linux. I find myself in this group, using Ubuntu, but the failed attempt to install kubectl through gcloud suggested another approach that worked for me:

sudo apt install kubectl

Deploying a PXC Cluster Using the Percona Kubernetes Operator

The Percona operators are available on GitHub. The most straightforward way to obtain a copy is by cloning the operator’s repository. The latest version of the PXC operator is 1.6.0, and we can clone it with the following command:

git clone -b v1.6.0 https://github.com/percona/percona-xtradb-cluster-operator

Move inside the created directory:

cd percona-xtradb-cluster-operator

and run the following sequence of commands:

  1. Define the Custom Resource Definitions for PXC:

    kubectl apply -f deploy/crd.yaml
  2. Create a namespace on Kubernetes and associate it to your current context:

    kubectl create namespace pxc
    kubectl config set-context $(kubectl config current-context) --namespace=pxc
  3. Define Role-Based Access Control (RBAC) for PXC:

    kubectl apply -f deploy/rbac.yaml
  4. Start the operator within Kubernetes:

    kubectl apply -f deploy/operator.yaml
  5. Configure PXC users and their credentials:

    kubectl apply -f deploy/secrets.yaml
  6. Finally, deploy the cluster:

    kubectl apply -f deploy/cr.yaml

You can find a more detailed explanation of each of these steps, as well as how to customize your installation, in the Percona Kubernetes Operator for Percona XtraDB Cluster online documentation, which includes a quickstart guide for GKE.

Now, it is a matter of waiting for the deployment to complete, which you can monitor with:

kubectl get pods

A successful deployment will show output for the above command similar to:

NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-haproxy-0                                 2/2     Running   0          4m21s
cluster1-haproxy-1                                 2/2     Running   0          2m47s
cluster1-haproxy-2                                 2/2     Running   0          2m21s
cluster1-pxc-0                                     1/1     Running   0          4m22s
cluster1-pxc-1                                     1/1     Running   0          2m52s
cluster1-pxc-2                                     1/1     Running   0          111s
percona-xtradb-cluster-operator-79d786dcfb-9lthw   1/1     Running   0          4m37s

As you can see above, the operator deploys seven pods with the default settings, and those are distributed across the three GKE n1-standard-4 machines we created earlier:

kubectl get nodes
NAME                                             STATUS   ROLES    AGE    VERSION
gke-k8-test-cluster-default-pool-02c370e1-gvfg   Ready    <none>   152m   v1.17.13-gke.1400
gke-k8-test-cluster-default-pool-02c370e1-lvh7   Ready    <none>   152m   v1.17.13-gke.1400
gke-k8-test-cluster-default-pool-02c370e1-qn3p   Ready    <none>   152m   v1.17.13-gke.1400

Accessing the Cluster

One way to access the cluster is by creating an interactive shell in the Kubernetes cluster:

kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- bash -il

From there, we can access MySQL through the cluster’s HAProxy writer node:

mysql -h cluster1-haproxy -uroot -proot_password

Note that the hostname used above is an alias; the connection is routed to one of the HAProxy servers available in the cluster. It is also possible to connect to a specific node by changing the host option -h to the node’s name:

mysql -h cluster1-pxc-0 -uroot -proot_password
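A quick first experiment is to confirm that all three PXC nodes joined the cluster by querying Galera’s wsrep_cluster_size status variable; this is a sketch that assumes the default root credentials shipped in deploy/secrets.yaml:

```shell
# From the percona-client pod: check the Galera cluster size through HAProxy
# (assumes the default root password from deploy/secrets.yaml)
mysql -h cluster1-haproxy -uroot -proot_password \
  -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"
# A healthy default deployment should report a value of 3
```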

This is where all the fun and experimentation starts: you can test and break things without worrying too much as you can easily and quickly start again from scratch.

Destroying the Cluster and Deleting the Test Environment

Once you are done playing with your Kubernetes cluster, you can destroy it with:

gcloud container clusters delete --zone=us-central1-b k8-test-cluster

It’s important to note that the command above will not discard the persistent disk volumes that were created and used by the nodes, which you can check with the command:

gcloud compute disks list

A final purging command is required to remove those as well:

gcloud compute disks delete <disk_name_1> <disk_name_2> <disk_name_3> --zone=us-central1-b
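If listing and typing each disk name feels tedious, the list and delete steps can be combined; this is a sketch that assumes the GKE-created disks contain the cluster name in their names, which is worth verifying with the list command first since the deletion is destructive:

```shell
# Delete every leftover disk in the zone whose name matches the cluster name
# (inspect the filtered list before piping it into the delete command)
gcloud compute disks list \
  --filter="name~k8-test-cluster" --format="value(name)" \
  | xargs gcloud compute disks delete --zone=us-central1-b --quiet
```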

If you are feeling overzealous, you can double-check that all has been deleted:

gcloud container clusters list

gcloud compute disks list

Learn More About Percona Kubernetes Operator for Percona XtraDB Cluster

Interested In Hands-On Learning?

Be sure to get in touch with Percona’s Training Department to schedule your PXC Kubernetes training engagement. Our expert trainers will first guide your team through the basics, cover all the configuration noted above (and then some), and then dive deeper into how the operator functions, along with high-availability exercises, disaster recovery scenarios, backups, restores, and much more.

Nov
17
2020
--

Marketing automation platform Klaviyo scores $200M Series C on $4.15B valuation

Boston-based marketing automation firm Klaviyo wants to change the way marketers interact with data, giving them direct access to their data and their customers. It believes that makes it easier to customize the messages and produce better results. Investors apparently agree, awarding the company a $200 million Series C on a hefty $4.15 billion valuation today.

The round was led by Accel with help from Summit Partners. It comes on the heels of last year’s $150 million Series B and brings the total raised to $385.5 million, according to the company. Accel’s Ping Li will also be joining the company’s board under the terms of today’s announcement.

Marketing automation and communication take on a special significance as we find ourselves in the midst of this pandemic, and companies need to find ways to communicate meaningfully with customers who can’t come into brick-and-mortar establishments. Company CEO and co-founder Andrew Bialecki says that his company’s unique use of data helps in this regard.

“I think our success is because we are a hybrid customer data and marketing platform. We think about what it takes to create these owned experiences. They’re very contextual and you need all of that customer data, not some of it, all of it, and you need that to be tightly coupled with how you’re building customer experiences,” Bialecki explained.

Andrew Bialecki, CEO and co-founder at Klaviyo

Image Credits: Klaviyo

He believes that by providing a platform of this scope that combines the data, the ability to customize messages and the use of machine learning to keep improving that, it will help them compete with the largest platforms. In fact his goal is to help companies understand that they don’t have to give up their customer data to Amazon, Google and Facebook.

“The flip side of that is growing through Amazon, where you give up all your customer data, or Facebook or Google, where you kind of are delegated to wherever their algorithms decide where you get to show up,” he said. With Klaviyo, the company retains its own data, and Ping Li, who is leading the investment at Accel, says that is where the e-commerce market is going.

“So the question is, is there a tool that allows you to do that as easily as going on Facebook and Google, and I think that’s the vision and the promise that Klaviyo is delivering on,” Li said. He believes that this will allow Klaviyo’s customers to build that kind of fidelity with their own customers by going directly to them, instead of through a third-party intermediary.

The company has seen some significant success with 50,000 customers in 125 countries along with that lofty valuation. The customer number has doubled year over year, even during the economic malaise brought on by the pandemic.

Today, the company has 500 employees with plans to double that in the next year. As he grows his company, Bialecki believes diversity is not just the right thing to do, it’s also smart business. “I think the competitive advantages that tech companies are going to have going forward, especially for the tech companies that are not the leaders today, but [could be] leaders in the coming decades, it’s because they have the most diverse teams and inclusive culture and those are both big focuses for us,” he said.

As they move forward flush with this cash, the company wants to continue to build out the platform, giving customers access to a set of tools that allow them to know their own customers on an increasingly granular level, while delivering more meaningful interactions. “It’s all about accelerating product development and getting into new markets,” Bialecki said. They certainly have plenty of runway to do that now.

Nov
17
2020
--

Dropbox shifts business product focus to remote work with Spaces update

In a September interview at TechCrunch Disrupt, Dropbox co-founder and CEO Drew Houston talked about how the pandemic had forced the company to rethink what work means, and how his company is shifting with the new requirements of a work-from-home world. Today, the company announced broad changes to Dropbox Spaces, the product introduced last year, to make it a collaboration and project management tool designed with these new requirements in mind.

Dropbox president Timothy Young says that the company has always been about making it easy to access files wherever you happen to be and whatever device you happen to be on, whether in a consumer or business context. As the company has built out its business products over the last several years, that has involved sharing content internally or externally. Today’s announcement is about helping teams plan and execute around the content they create, with a strong project focus.

“Now what we’re basically trying to do is really help distributed teams stay organized, collaborate together and keep moving along, but also do so in a really secure way and support IT, administrators and companies with some features around that as well, while staying true to Dropbox principles,” Young said.

This involves updating Spaces to be a full-fledged project management tool designed with a distributed workforce in mind. Spaces connects to other tools like your calendar, people directory and project management software — and of course files. You can create a project, add people and files, then set up a timeline and assign and track tasks. In addition, you can access meetings directly from Spaces and communicate with team members, who can be inside or outside the company.

Houston suggested a product like this could be coming in his September interview when he said:

“Back in March we started thinking about this, and how [the rapid shift to distributed work] just kind of happened. It wasn’t really designed. What if you did design it? How would you design this experience to be really great? And so starting in March we reoriented our whole product road map around distributed work,” he said.

Along these same lines, Young says the company itself plans to continue to be a remote first company even after the pandemic ends, and will continue to build tools to make it easier to collaborate and share information with that personal experience in mind.

Today’s announcement is a step in that direction. Dropbox Spaces has been in private beta and should be available at the beginning of next year.

Nov
17
2020
--

Cato Networks snags $130M Series E on $1B valuation as cloud wide area networking thrives

Cato Networks has spent the last five years building a cloud-based wide area network that lets individuals connect to network resources regardless of where they are. When the pandemic hit and many businesses shifted to work from home, it was the perfect moment for technology like this. Today, the company was rewarded with a $130 million Series E investment on a $1 billion valuation.

Lightspeed Venture Partners led the round, with participation from new investor Coatue and existing investors Greylock, Aspect Ventures/Acrew Capital, Singtel Innov8 and Shlomo Kramer (who is the co-founder and CEO of the company). The company reports it has now raised $332 million since inception.

Kramer is a serial entrepreneur. He co-founded Check Point Software, which went public in 1996, and Imperva, which went public in 2011 and was later acquired by private equity firm Thoma Bravo in 2018. He helped launch Cato in 2015. “In 2015, we identified that the wide area networks (WANs), which is a tens of billions of dollars market, was still built on the same technology stack […] that connects physical locations, and appliances that protect physical locations and was primarily sold by the telcos and MSPs for many years,” Kramer explained.

The idea with Cato was to take that technology and redesign it for a mobile and cloud world, not one that was built for the previous generation of software that lived in private data centers and was mostly accessed from an office. Today they have a cloud-based network of 60 Points of Presence (PoPs) around the world, giving customers access to networking resources and network security no matter where they happen to be.

The bet they made was a good one because the world has changed, and that became even more pronounced this year when COVID hit and forced many people to work from home. Now suddenly having the ability to sign in from anywhere became more important than ever, and they have been doing well, with 2x growth in ARR this year (although he wouldn’t share specific revenue numbers).

As the head of a company raising a Series E, Kramer doesn't shy away from the idea of eventually going public, especially since he's done it twice before, but neither is he ready to commit to any timetable. For now, he says, the company is growing rapidly, with almost 700 customers, which is why it decided to take such a large capital infusion right now.

Cato currently has 270 employees, with plans to grow to 400 by the end of next year. Kramer says that Cato is a global company headquartered in Israel, where the diversity conversation centers on religion, but he is trying to build a diverse and inclusive culture regardless of location.

“My feeling is that inclusion needs to happen in the earlier stages of the funnel. I’m personally involved in these efforts, at the educational sector level, and when students are ready to be recruited by startups, we are already competitive, and if you look at our employee base it’s very diverse,” Kramer said.

With the new funds, he plans to keep building the company and the product. “There’s a huge opportunity and we want to move as fast as possible. We are also going to make very big investments on the engineering side to take the solution and go to the next level,” he said.

Nov
16
2020
--

Harbr raises $38.5M to help enterprises exchange and share big data troves securely

Organizations today are sitting on mountains of data that they amass and use in their own businesses, but many are also looking to share those troves with other parties to expand their prospects. It's a model that comes with challenges, privacy and data protection being two key ones, and these days, due to COVID-19 and the push toward digital transformation, a sense of urgency. But it also promises big rewards if you can pull it off well.

Today, a new London startup called Harbr, which has built a secure platform to enable big data exchange, is announcing a big round of funding to tap into that demand.

The company has raised $38.5 million in a Series A round of funding, just six months since emerging from stealth mode. It plans to use the money to hire more people to meet the demand of serving more enterprise customers, and for R&D.

Led jointly by new backers Dawn Capital and Tiger Global Management, the round also had participation from past investors Mike Chalfen, Boldstart Ventures, Crane Venture Partners, Backed and Seedcamp, alongside UiPath’s founder and CEO Daniel Dines and head of strategy Brandon Deer. Harbr has now raised over $50 million, and it’s not disclosing its valuation.

Harbr has been around since 2017, but it only came out of stealth mode earlier this year, in May. Its approach has mirrored that of a lot of other enterprise startups that spend a long time building their product under wraps. Identifying the market opportunity when it was still nascent, Harbr then worked directly (and quietly) with enterprises to figure out what they needed and built it, before launching it as a commercial product (with customers already in hand).

“Back in 2017 no one was talking about enterprise data exchanges,” Harbr’s CSO Anthony Cosgrove (who co-founded the company with Gary Butler, the CEO) told me in an interview. “So we worked with big companies to understand their needs and built Harbr based on that.”

Customers include those in financial and enterprise services such as Moody’s Analytics and WinterCorp, as well as governments. Cosgrove noted that nearly 100% of Harbr’s clients are in the U.S., where the startup’s chairman Leo Spiegel is based. Spiegel is also an investor, with an extensive enterprise data services resume to his name.

“This is a team that has worked together for a long time,” Spiegel said in an interview. “Gary [the CEO] and I have worked together for 20 years before Harbr. I have been in data a very long time, and we have a lot of relationships with U.S. companies.” (That is one sign of why this enterprise startup has raised a substantial amount of funding so early in its public life.)

Cosgrove, an MBE, has a background in banking and, before that, the U.K. government.

The platform today provides enterprises with a way to tap into data that an organization may already hold in data lakes and warehouses and already uses for analytics and business intelligence. The idea is to make that data ready and secure for enterprise data exchange, either with other parts of your own large organization or with third parties. That involves creating a "clean room," providing tools for making the data accessible to third parties and, if that is your goal, potentially turning it into a data marketplace.

Image Credits: Harbr

The challenges that Harbr addresses come from a few different angles. The first is technical: putting data troves from disparate sources into a format that others can use. The second is commercial: creating a data product you can provide to others, and making that marketplace findable and usable. The third is security.

Cosgrove said that he doesn’t think of Harbr as a security company first, but he points out that these days this has become as much of a concern (if not more) than simply making a data product usable. Being able to protect your data as valuable IP is important, but on top of that, you have the roles of privacy and data protection.

These have moved from being fringe concerns to a priority for many users, and, in an increasing number of cases, a legal requirement. So, as companies look for ways to tap into the big data opportunity while keeping those principles in mind, they are looking for companies built with privacy and data protection from the ground up.

“We’re really focused on helping people to treat data as a product. They bring assets into a platform and turn them into data products that are easy to consume, use and merge,” said Cosgrove. “We see security as a by-product of that: you have to consider security as part of it.” The name Harbr is a play on "harbor," itself a reference to safe harbor principles and regulations.

Harbr is not the only company looking at this opportunity. InfoSum, also out of the U.K., is also tackling the concept of a privacy-first approach to federated data, providing a way to share data across organizations without compromising data protection in any way. DataFleets out of the Bay Area is another startup also building a platform and tools to help enterprises with this challenge and opportunity.

“For data to become truly powerful, we need more automation and collaboration. Today, human efforts are consumed by finding and preparing data, rather than focused on high-value activities that drive real productivity gains,” said Evgenia Plotnikova, partner at Dawn Capital, in a statement. “Harbr is in the vanguard of companies changing this reality, and we are incredibly excited to be partnering with them. Customers we’ve spoken to find Harbr’s enterprise data exchange transformative, and their engagement across Fortune 1000 companies substantiates this.”
