Aug 25, 2021
--

Cribl raises $200M to help enterprises do more with their data

At a time when remote work, cyberattacks and increased privacy and compliance requirements threaten a company’s data, more companies are collecting and storing their observability data, but they are being locked in with vendors or having difficulty accessing the data.

Enter Cribl. The San Francisco-based company is developing an “open ecosystem of data” for enterprises that utilizes unified data pipelines, called “observability pipelines,” to parse and route any type of data that flows through a corporate IT system. Users can then choose their own analytics tools and storage destinations like Splunk, Datadog and Exabeam, but without becoming dependent on a vendor.
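The routing model described here can be pictured as a small dispatch loop: parse events once, then deliver each one to every destination whose rule matches. The sketch below is purely illustrative; the route names and filter functions are assumptions, not Cribl's actual configuration or API.

```python
# Hypothetical sketch of an observability pipeline: parse events once,
# then route each one to every configured destination whose filter matches.
# Route names and filters are invented for illustration.

def route_events(events, routes):
    """Deliver each parsed event to every destination whose filter matches."""
    deliveries = []
    for event in events:
        for name, matches in routes:
            if matches(event):
                deliveries.append((name, event))
    return deliveries

routes = [
    ("splunk",  lambda e: e.get("type") == "security"),
    ("datadog", lambda e: e.get("type") == "metrics"),
    ("s3",      lambda e: True),  # everything also lands in cheap storage
]

events = [{"type": "security", "src": "10.0.0.1"},
          {"type": "metrics", "cpu": 0.71}]
```

Because the routing table, not the analytics tool, decides where data goes, any destination can be swapped out without touching the sources.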

The company announced Wednesday a $200 million Series C round that values Cribl at $1.5 billion, according to a source close to the company. Greylock and Redpoint Ventures co-led the round and were joined by new investor IVP, existing investors Sequoia and CRV, and strategic investments from Citi Ventures and CrowdStrike. The new capital infusion brings Cribl to a total of $254 million in funding since the company was started in 2017, Cribl co-founder and CEO Clint Sharp told TechCrunch.

Sharp did not discuss the valuation; however, he believes that the round is “validation that the observability pipeline category is legit.” Data is growing at a compound annual growth rate of 25%, and organizations are collecting five times more data today than they did 10 years ago, he explained.

“Ultimately, they want to ask and answer questions, especially for IT and security people,” Sharp added. “When Zoom sends data on who started a phone call, that might be data I need to know so I know who is on the call from a security perspective and who they are communicating with. Also, who is sending files to whom and what machines are communicating together in case there is a malicious actor. We can also find out who is having a bad experience with the system and what resources they can access to try and troubleshoot the problem.”

Cribl also enables users to choose how they want to store their data, which is different from competitors that often lock companies into using only their products. Instead, customers can buy the best products from different categories and they will all talk to each other through Cribl, Sharp said.

Though Cribl is developing a pipeline for data, Sharp sees it more as an “observability lake,” as more companies have differing data storage needs. The lake, he explains, is where all of the data that doesn’t need to go into an existing storage solution ends up. The pipelines send data to specific tools and then collect it, and whatever doesn’t fit goes back into the lake so companies can return to it later, keeping the data longer and more cost-effectively.

Cribl said it is seven times more efficient at processing event data and boasts a customer list that includes Whole Foods, Vodafone, FINRA, Fannie Mae and Cox Automotive.

Sharp went after additional funding after seeing huge traction in the company’s existing customer base, saying that “when you see that kind of traction, you want to keep doubling down.” His aims are to have a presence in every North American city and in Europe, to continue launching new products and to grow the engineering team.

Up next, the company is focusing on go-to-market and engineering growth. Its headcount is 150 currently, and Sharp expects to grow that to 250 by the end of the year.

Over the last fiscal year, Cribl grew its revenue 293%, and Sharp expects that same trajectory for this year. The company is now at a growth stage, and with the new investment, he believes Cribl is the “future leader in observability.”

“This is a great investment for us, and every dollar, we believe, is going to create an outsized return as we are the only commercial company in this space,” he added.

Scott Raney, managing director at Redpoint Ventures, said his firm is a big investor in enterprise software, particularly in companies that help organizations leverage data to protect themselves, a sweet spot that Cribl falls into.

He feels Sharp, who came from Splunk, is leading a team that has accomplished a lot, has a vision and a handle on the business, and knows the market well. Where Splunk captures machine data and uses its own systems to extract it, Cribl does something similar in directing the data where it needs to go, while also enabling companies to utilize multiple vendors and build apps that sit on top of its infrastructure.

“Cribl is adding opportunity by enriching the data flowing through, and the benefits are going to be meaningful in cost reduction,” Raney said. “The attitude out there is to put data in cheaper places, and afford more flexibility to extract data. Step one is to make that transition, and step two is how to drive the data sitting there. Cribl is doing something that will go from being a big business to a legacy company 30 years from now.”

Jun 25, 2021
--

Edge Delta raises $15M Series A to take on Splunk

Seattle-based Edge Delta, a startup building a modern distributed monitoring stack that competes directly with industry heavyweights like Splunk, New Relic and Datadog, today announced that it has raised a $15 million Series A funding round led by Menlo Ventures and Tim Tully, the former CTO of Splunk. Previous investors MaC Venture Capital and Amity Ventures also participated in this round, which brings the company’s total funding to date to $18 million.

“Our thesis is that there’s no way that enterprises today can continue to analyze all their data in real time,” said Edge Delta co-founder and CEO Ozan Unlu, who has worked in the observability space for about 15 years already (including at Microsoft and Sumo Logic). “The way that it was traditionally done with these primitive, centralized models — there’s just too much data. It worked 10 years ago, but gigabytes turned into terabytes and now terabytes are turning into petabytes. That whole model is breaking down.”

Image Credits: Edge Delta

He acknowledges that traditional big data warehousing works quite well for business intelligence and analytics use cases. But that’s not real time and also involves moving a lot of data from where it’s generated to a centralized warehouse. The promise of Edge Delta is that it can offer all of the capabilities of this centralized model by allowing enterprises to start analyzing their logs, metrics, traces and other telemetry right at the source. This, in turn, also gives them visibility into all of the data that’s generated there, unlike many of today’s systems, which only provide insights into a small slice of this information.

Competing services tend to have agents that run on a customer’s machine but typically only compress the data, encrypt it and send it on to its final destination; Edge Delta’s agent starts analyzing the data right at the local level. With that, if you want to, for example, graph error rates from your Kubernetes cluster, you don’t have to gather all of this data and send it off to your data warehouse, where it has to be indexed before it can be analyzed and graphed.

With Edge Delta, every single node instead draws its own graph, which Edge Delta can then combine later on. Because of this, Edge Delta argues, its agent offers significant performance benefits, often by orders of magnitude. It also allows businesses to run their machine learning models at the edge.
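The per-node idea can be sketched roughly as follows: each node reduces its raw logs to a tiny summary locally, and a central process merges those summaries instead of ingesting the raw lines. This is an illustrative sketch of the general technique, not Edge Delta's agent; the log format and function names are assumptions.

```python
# Sketch of "every node draws its own graph": the edge keeps only small
# per-minute error counts, and the central service merges them.
from collections import Counter

def local_summary(log_lines):
    """Run at the edge: reduce raw log lines to per-minute error counts."""
    counts = Counter()
    for line in log_lines:
        minute, level = line.split(" ", 2)[:2]
        if level == "ERROR":
            counts[minute] += 1
    return counts

def merge(summaries):
    """Run centrally: combine the tiny summaries from every node."""
    total = Counter()
    for summary in summaries:
        total.update(summary)
    return total

node_a = local_summary(["12:00 ERROR disk full", "12:00 INFO ok", "12:01 ERROR timeout"])
node_b = local_summary(["12:00 ERROR oom"])
combined = merge([node_a, node_b])
```

Only the counters cross the network, which is where the claimed performance and cost advantages of edge-side analysis come from.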

Image Credits: Edge Delta

“What I saw before I was leaving Splunk was that people were sort of being choosy about where they put workloads for a variety of reasons, including cost control,” said Menlo Ventures’ Tim Tully, who joined the firm only a couple of months ago. “So this idea that you can move some of the compute down to the edge and lower latency and do machine learning at the edge in a distributed way was incredibly fascinating to me.”

Edge Delta is able to offer a significantly cheaper service, in large part because it doesn’t have to run a lot of compute and manage huge storage pools itself since a lot of that is handled at the edge. And while the customers obviously still incur some overhead to provision this compute power, it’s still significantly less than what they would be paying for a comparable service. The company argues that it typically sees about a 90 percent improvement in total cost of ownership compared to traditional centralized services.

Image Credits: Edge Delta

Edge Delta charges based on volume, and it is not shy about comparing its prices with Splunk’s, doing so right on its pricing calculator. Indeed, in talking to Tully and Unlu, Splunk was clearly on everybody’s mind.

“There’s kind of this concept of unbundling of Splunk,” Unlu said. “You have Snowflake and the data warehouse solutions coming in from one side, and they’re saying, ‘hey, if you don’t care about real time, go use us.’ And then we’re the other half of the equation, which is: actually, there are a lot of real-time operational use cases, and this model is actually better for those massive stream processing datasets that you need to analyze in real time.”

But despite this competition, Edge Delta can still integrate with Splunk and similar services. Users can take their data, ingest it through Edge Delta and then pass it on to the likes of Sumo Logic, Splunk, AWS S3 and other solutions.

Image Credits: Edge Delta

“If you follow the trajectory of Splunk, we had this whole idea of building this business around IoT and Splunk at the Edge — and we never really quite got there,” Tully said. “I think what we’re winding up seeing collectively is the edge actually means something a little bit different. […] The advances in distributed computing and sophistication of hardware at the edge allows these types of problems to be solved at a lower cost and lower latency.”

The Edge Delta team plans to use the new funding to expand its team and support all of the new customers that have shown interest in the product. For that, it is building out its go-to-market and marketing teams, as well as its customer success and support teams.


Jun 22, 2021
--

Vantage raises $4M to help businesses understand their AWS costs

Vantage, a service that helps businesses analyze and reduce their AWS costs, today announced that it has raised a $4 million seed round led by Andreessen Horowitz. A number of angel investors, including Brianne Kimmel, Julia Lipton, Stephanie Friedman, Calvin French-Owen, Ben and Moisey Uretsky, Mitch Wainer and Justin Gage, also participated in this round.

Vantage started out with a focus on making the AWS console a bit easier to use — and helping businesses figure out what they are spending their cloud infrastructure budgets on in the process. But as Vantage co-founder and CEO Ben Schaechter told me, it was the cost transparency features that really caught on with users.

“We were advertising ourselves as being an alternative AWS console with a focus on developer experience and cost transparency,” he said. “What was interesting is — even in the early days of early access before the formal GA launch in January — I would say more than 95% of the feedback that we were getting from customers was entirely around the cost features that we had in Vantage.”

Image Credits: Vantage

Like any good startup, the Vantage team looked at this and decided to double down on those features and highlight them in its marketing, though it kept the existing AWS Console-related tools as well. The reason the other tools didn’t quite take off, Schaechter believes, is that AWS users have increasingly become accustomed to using infrastructure-as-code for their own automatic provisioning, and with that, they spend a lot less time in the AWS Console anyway.

“But one consistent thing — across the board — was that people were having a really, really hard time 12 times a year, where they would get a shocking AWS bill and had to figure out what happened. What Vantage is doing today is providing a lot of value on the transparency front there,” he said.

Over the course of the last few months, the team added a number of new features to its cost transparency tools, including machine learning-driven predictions (both on the overall account level and service level) and the ability to share reports across teams.
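A cost prediction of this kind can, at its simplest, be an extrapolation of the trend in past monthly totals. The sketch below shows that general idea with a plain least-squares fit; Vantage's actual models are not public, so everything here is an assumption for illustration only.

```python
# Illustrative only: forecast next month's cloud bill from a linear
# trend over past monthly totals. Not Vantage's actual model.

def forecast_next(monthly_costs):
    """Least-squares fit of month index -> cost, extrapolated one month out."""
    n = len(monthly_costs)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_costs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_costs))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept

print(forecast_next([100.0, 110.0, 120.0]))  # steady +10/month trend
```

Real predictions would account for seasonality, per-service breakdowns and one-off spikes, but the shape of the problem is the same: fit past spend, project forward, alert on divergence.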

Image Credits: Vantage

While Vantage expects to add support for other clouds in the future, likely starting with Azure and then GCP, that’s actually not what the team is focused on right now. Instead, Schaechter noted, the team plans to add support for bringing in data from third-party cloud services.

“The number one line item for companies tends to be AWS, GCP, Azure,” he said. “But then, after that, it’s Datadog, Cloudflare, Sumo Logic, things along those lines. Right now, there’s no way to see a P&L or an ROI from a cloud usage-based perspective. Vantage can be the tool that shows you essentially all of your cloud costs in one space.”

That is likely the vision the investors bought into, as well, and even though Vantage is now going up against enterprise tools like Apptio’s Cloudability and VMware’s CloudHealth, Schaechter doesn’t seem to be all that worried about the competition. He argues that these are tools that were born in a time when AWS had only a handful of services and only a few ways of interacting with those. He believes that Vantage, as a modern self-service platform, will have quite a few advantages over these older services.

“You can get up and running in a few clicks. You don’t have to talk to a sales team. We’re helping a large number of startups at this stage all the way up to the enterprise, whereas Cloudability and CloudHealth are, in my mind, kind of antiquated enterprise offerings. No startup is choosing to use those at this point, as far as I know,” he said.

The team, which until now mostly consisted of Schaechter and his co-founder and CTO Brooke McKim, bootstrapped the company up to this point. Now they plan to use the new capital to build out the team (the company is actively hiring right now), both on the development and go-to-market sides.

The company offers a free starter plan for businesses that track up to $2,500 in monthly AWS cost, with paid plans starting at $30 per month for those who need to track larger accounts.

Mar 09, 2021
--

YL Ventures sells its stake in cybersecurity unicorn Axonius for $270M

YL Ventures, the Israel-focused cybersecurity seed fund, today announced that it has sold its stake in cybersecurity asset management startup Axonius, which only a week ago announced a $100 million Series D funding round that now values it at around $1.2 billion.

ICONIQ Growth, Alkeon Capital Management, DTCP and Harmony Partners acquired YL Ventures’ stake for $270 million. This marks YL’s first return from its third fund, a $75 million vehicle it raised in 2017, and the largest return in the firm’s history.

With this sale, the company’s third fund still has six portfolio companies remaining. It closed its fourth fund with $120 million in committed capital in the middle of 2019.

Unlike YL, which focuses on early-stage companies — though it also tends to participate in some later-stage rounds — the investors that are buying its stake specialize in later-stage companies that are often on an IPO path. ICONIQ Growth has invested in the likes of Adyen, CrowdStrike, Datadog and Zoom, for example, and has also regularly partnered with YL Ventures on its later-stage investments.

“The transition from early-stage to late-stage investors just makes sense as we drive toward IPO, and it allows each investor to focus on what they do best,” said Dean Sysman, co-founder and CEO of Axonius. “We appreciate the guidance and support the YL Ventures team has provided during the early stages of our company and we congratulate them on this successful journey.”

To put this sale into perspective for the Silicon Valley and Tel Aviv-based YL Ventures, it’s worth noting that it currently manages about $300 million. Its current portfolio includes the likes of Orca Security, Hunters and Cycode. This sale is a huge win for the firm.

Its most headline-grabbing exit so far was Twistlock, which was acquired by Palo Alto Networks for $410 million in 2019, but it has also seen exits of its portfolio companies to Microsoft, Proofpoint, CA Technologies and Walmart, among others. The fund participated in Axonius’ $4 million seed round in 2017 up to its $58 million Series C round a year ago.

It seems like YL Ventures is taking a very pragmatic approach here. It doesn’t specialize in late-stage firms — and until recently, Israeli startups always tended to sell long before they got to a late-stage round anyway. And it can generate a nice — and guaranteed — return for its own investors, too.

“This exit netted $270 million in cash directly to our third fund, which had $75 million total in capital commitments, and this fund still has six outstanding portfolio companies remaining,” Yoav Leitersdorf, YL Ventures’ founder and managing partner, told me. “Returning multiple times that fund now with a single exit, with the rest of the portfolio companies still there for the upside is the most responsible — yet highly profitable path — we could have taken for our fund at this time. And all this while diverting our energies and means more towards our seed-stage companies (where our help is more impactful), and at the same time supporting Axonius by enabling it to bring aboard such excellent late-stage investors as ICONIQ and Alkeon — a true win-win-win situation for everyone involved!”

He also noted that this sale achieved a top-decile return for the firm’s limited partners and allows it to focus its resources and attention toward the younger companies in its portfolio.

Dec 02, 2020
--

Fylamynt raises $6.5M for its cloud workflow automation platform

Fylamynt, a new service that helps businesses automate their cloud workflows, today announced both the official launch of its platform as well as a $6.5 million seed round. The funding round was led by Google’s AI-focused Gradient Ventures fund. Mango Capital and Point72 Ventures also participated.

At first glance, the idea behind Fylamynt may sound familiar. Workflow automation has become a pretty competitive space, after all, and the service helps developers connect their various cloud tools to create repeatable workflows. We’re not talking about your standard IFTTT- or Zapier-like integrations between SaaS products, though. The focus of Fylamynt is squarely on building infrastructure workflows. While that may sound familiar, too, with tools like Ansible and Terraform automating a lot of that already, Fylamynt sits on top of those and integrates with them.

Image Credits: Fylamynt

“Some time ago, we used to do Bash and scripting — and then [ … ] came Chef and Puppet in 2006, 2007. SaltStack, as well. Then Terraform and Ansible,” Fylamynt co-founder and CEO Pradeep Padala told me. “They have all done an extremely good job of making it easier to simplify infrastructure operations so you don’t have to write low-level code. You can write a slightly higher-level language. We are not replacing that. What we are doing is connecting that code.”

So if you have a Terraform template, an Ansible playbook and maybe a Python script, you can now use Fylamynt to connect those. In the end, Fylamynt becomes the orchestration engine that runs all of your infrastructure code and then connects all of it to the likes of Datadog, Splunk, PagerDuty, Slack and ServiceNow.
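Conceptually, that orchestration is a chain of steps sharing a context. The following is a hypothetical sketch of the pattern, not Fylamynt's API; the step names and the context-passing convention are invented for illustration (the lambdas stand in for real Terraform, Ansible and notification steps).

```python
# Hypothetical workflow engine: run steps in order, threading each
# step's output into the next via a shared context dict.

def run_workflow(steps, context=None):
    """Run (name, step) pairs in order; each step receives and returns
    the shared context, and completed step names are recorded."""
    context = dict(context or {})
    for name, step in steps:
        context = step(context)
        context.setdefault("completed", []).append(name)
    return context

workflow = [
    # Stand-ins for a Terraform apply, an Ansible playbook and a Slack ping.
    ("terraform_apply",   lambda ctx: {**ctx, "instance_ip": "10.0.0.5"}),
    ("ansible_configure", lambda ctx: {**ctx, "configured": True}),
    ("notify_slack",      lambda ctx: {**ctx, "notified": ctx["configured"]}),
]

result = run_workflow(workflow)
```

In a real engine each step would shell out to the underlying tool and the context would carry its outputs (IPs, resource IDs) into later steps.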

Image Credits: Fylamynt

The service currently connects to Terraform, Ansible, Datadog, Jira, Slack, Instance, CloudWatch, CloudFormation and your Kubernetes clusters. The company notes that some of the standard use cases for its service are automated remediation, governance and compliance, as well as cost and performance management.

The company is already working with a number of design partners, including Snowflake.

Fylamynt CEO Padala has quite a bit of experience in the infrastructure space. He co-founded ContainerX, an early container-management platform, which later sold to Cisco. Before starting ContainerX, he was at VMware and DOCOMO Labs. His co-founders, VP of Engineering Xiaoyun Zhu and CTO David Lee, also have deep expertise in building out cloud infrastructure and operating it.

“If you look at any company — any company building a product — let’s say a SaaS product, and they want to run their operations, infrastructure operations very efficiently,” Padala said. “But there are always challenges. You need a lot of people, it takes time. So what is the bottleneck? If you ask that question and dig deeper, you’ll find that there is one bottleneck for automation: that’s code. Someone has to write code to automate. Everything revolves around that.”

Fylamynt aims to take the effort out of that by allowing developers to either write Python and JSON to automate their workflows (think “infrastructure as code” but for workflows) or to use Fylamynt’s visual no-code drag-and-drop tool. As Padala noted, this gives developers a lot of flexibility in how they want to use the service. If you never want to see the Fylamynt UI, you can go about your merry coding ways, but chances are the UI will allow you to get everything done as well.

One area the team is currently focusing on — and will use the new funding for — is building out its analytics capabilities that can help developers debug their workflows. The service already provides log and audit trails, but the plan is to expand its AI capabilities to also recommend the right workflows based on the alerts you are getting.

“The eventual goal is to help people automate any service and connect any code. That’s the holy grail. And AI is an enabler in that,” Padala said.

Gradient Ventures partner Muzzammil “MZ” Zaveri echoed this. “Fylamynt is at the intersection of applied AI and workflow automation,” he said. “We’re excited to support the Fylamynt team in this uniquely positioned product with a deep bench of integrations and a nonprescriptive builder approach. The vision of automating every part of a cloud workflow is just the beginning.”

The team, which now includes about 20 employees, plans to use the new round of funding, which closed in September, to focus on its R&D, build out its product and expand its go-to-market team. On the product side, that specifically means building more connectors.

The company offers both a free plan as well as enterprise pricing and its platform is now generally available.

Nov 10, 2020
--

With $29M in funding, Isovalent launches its cloud-native networking and security platform

Isovalent, a startup that aims to bring networking into the cloud-native era, today announced that it has raised a $29 million Series A round led by Andreessen Horowitz and Google. In addition, the company today officially launched its Cilium Enterprise platform (which was in stealth until now) to help enterprises connect, observe and secure their applications.

The open-source Cilium project is already seeing growing adoption, with Google choosing it for its new GKE dataplane, for example. Other users include Adobe, Capital One, Datadog and GitLab. Isovalent is following what is now the standard model for commercializing open-source projects by launching an enterprise version.

Image Credits: Cilium

The founding team of CEO Dan Wendlandt and CTO Thomas Graf has deep experience in working on the Linux kernel and building networking products. Graf spent 15 years working on the Linux kernel and created the Cilium open-source project, while Wendlandt worked on Open vSwitch at Nicira (and then VMware).

Image Credits: Isovalent

“We saw that first wave of network intelligence be moved into software, but I think we both shared the view that the first wave was about replicating the traditional network devices in software,” Wendlandt told me. “You had IPs, you still had ports, you created virtual routers, and this and that. We both had that shared vision that the next step was to go beyond what the hardware did in software — and now, in software, you can do so much more. Thomas, with his deep insight in the Linux kernel, really saw this eBPF technology as something that was just obviously going to be groundbreaking technology, in terms of where we could take Linux networking and security.”

As Graf told me, when Docker, Kubernetes and containers in general became popular, what he saw was that networking companies at first were simply trying to reapply what they had already done for virtualization. “Let’s just treat containers as many miniature VMs. That was incredibly wrong,” he said. “So we looked around, and we saw eBPF and said: this is just out there and it is perfect, how can we shape it forward?”

And while Isovalent’s focus is on cloud-native networking, the added benefit of how it uses the eBPF Linux kernel technology is that it also gains deep insights into how data flows between services and hence allows it to add advanced security features as well.

As the team noted, though, users definitely don’t need to understand or program eBPF (essentially the next generation of Linux kernel modules) themselves.

Image Credits: Isovalent

“I have spent my entire career in this space, and the North Star has always been to go beyond IPs + ports and build networking visibility and security at a layer that is aligned with how developers, operations and security think about their applications and data,” said Martin Casado, partner at Andreessen Horowitz (and the founder of Nicira). “Until just recently, the technology did not exist. All of that changed with Kubernetes and eBPF. Dan and Thomas have put together the best team in the industry and, given the traction around Cilium, they are well on their way to upending the world of networking yet again.”

As more companies adopt Kubernetes, they are now reaching a stage where they have the basics down but are now facing the next set of problems that come with this transition. Those, almost by default, include figuring out how to isolate workloads and get visibility into their networks — all areas where Isovalent/Cilium can help.

The team tells me its focus, now that the product is out of stealth, is on building out its go-to-market efforts and, of course, continuing to build out its platform.

Mar 31, 2020
--

Amid shift to remote work, application performance monitoring is IT’s big moment

In recent weeks, millions have started working from home, putting unheard-of pressure on services like video conferencing, online learning, food delivery and e-commerce platforms. While some verticals have seen a marked reduction in traffic, others are being asked to scale to new heights.

Services that were previously nice to have are now necessities, but how do organizations track pressure points that can add up to a critical failure? There is actually a whole class of software to help in this regard.

Monitoring tools like Datadog, New Relic and Elastic are designed to help companies understand what’s happening inside their key systems and warn them when things may be going sideways. That’s absolutely essential as these services are being asked to handle unprecedented levels of activity.

At a time when performance is critical, application performance monitoring (APM) tools are helping companies stay up and running. They also help track down root causes should the worst happen and a service go down, with the goal of getting it going again as quickly as possible.

We spoke to a few monitoring vendor CEOs to understand better how they are helping customers navigate this demand and keep systems up and running when we need them most.

IT’s big moment

Feb 12, 2019
--

Datadog acquires app testing company Madumbo

Datadog, the popular monitoring and analytics platform, today announced that it has acquired Madumbo, an AI-based application testing platform.

“We’re excited to have the Madumbo team join Datadog,” said Olivier Pomel, Datadog’s CEO. “They’ve built a sophisticated AI platform that can quickly determine if a web application is behaving correctly. We see their core technology strengthening our platform and extending into many new digital experience monitoring capabilities for our customers.”

Paris-based Madumbo, which was incubated at Station F and launched in 2017, offers its users a way to test their web apps without having to write any additional code. It promises to let developers build tests by simply interacting with the site, using the Madumbo test recorder, and to help them generate test emails, passwords and testing data on the fly. The Madumbo system then watches your site and adapts its checks to whatever changes you make. The bot also watches for JavaScript errors and other warnings and can be integrated into a deployment script.

The team will join Datadog’s existing Paris office and will work on new products, which Datadog says will be announced later this year. Datadog will phase out the Madumbo platform over the course of the next few months.

“Joining Datadog and bringing Madumbo’s AI-powered testing technology to its platform is an amazing opportunity,” said Gabriel-James Safar, CEO of Madumbo. “We’ve long admired Datadog and its leadership, and are excited to expand the scope of our existing technology by integrating tightly with Datadog’s other offerings.”

Jul 12, 2018
--

Datadog launches Watchdog to help you monitor your cloud apps

Your typical cloud monitoring service integrates with dozens of services and provides a pretty dashboard and some automation to help you keep tabs on how your applications are doing. Datadog has long done that, but today it is adding a new service called Watchdog, which uses machine learning to automatically detect anomalies for you.

The company notes that a traditional monitoring setup involves defining parameters based on how you expect the application to behave and then setting up dashboards and alerts to monitor them. Given the complexity of modern cloud applications, that approach has its limits, so an additional layer of automation becomes necessary.

That’s where Watchdog comes in. The service observes all of the performance data it can get its paws on, learns what’s normal, and then provides alerts when something unusual happens and — ideally — provides insights into where exactly the issue started.
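The underlying technique (learn what normal looks like, flag large deviations) can be sketched in a few lines. This is a generic rolling-window detector for illustration, not Datadog's actual algorithm; the window size and sigma threshold are assumptions.

```python
# Generic anomaly detection sketch: flag points that stray more than
# `threshold` standard deviations from the mean of the preceding window.
import statistics

def anomalies(series, window=10, threshold=3.0):
    """Return (index, value) pairs that deviate sharply from the
    rolling baseline learned over the preceding `window` points."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # guard a flat baseline
        if abs(series[i] - mean) / stdev > threshold:
            flagged.append((i, series[i]))
    return flagged

# A metric oscillating around 10.5, then a sudden spike to 50.
series = [10.0, 11.0] * 5 + [10.0, 50.0]
```

Production systems layer seasonality models and learned thresholds on top, but the core contract is the same: no hand-set alert values, just deviation from a learned baseline.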

“Watchdog builds upon our years of research and training of algorithms on our customers’ data sets. This technology is unique in that it not only identifies an issue programmatically, but also points users to probable root causes to kick off an investigation,” Datadog’s head of data science Homin Lee notes in today’s announcement.

The service is now available to all Datadog customers in its Enterprise APM plan.

Jun 27, 2018
--

Intermix.io looks to help data engineers find their worst bottlenecks

For any company built on top of machine learning operations, the more data it has, the better off it is, as long as it can keep it all under control. But as more and more information pours in from disparate sources, gets logged in obscure databases and is generally hard (or slow) to query, the process of getting it all into one neat place where a data scientist can actually start running the statistics is quickly becoming one of machine learning’s biggest bottlenecks.

That’s the problem Intermix.io and its founders, Paul Lappas and Lars Kamp, hope to solve. With Intermix.io, engineers get a granular look at all of the different nuances behind what’s happening with a specific function, from the query all the way through all of the paths it takes to get to its end result. The end product helps data engineers monitor the flow of information going through their systems, regardless of the source, to isolate bottlenecks early and see where processes are breaking down. The company also said it has raised seed funding from Uncork Capital, S28 Capital and PAUA Ventures, along with Bastian Lehmann, CEO of Postmates, and Hasso Plattner, founder of SAP.

“Companies realize being data driven is a key to success,” Kamp said. “The cloud makes it cheap and easy to store your data forever, machine learning libraries are making things easy to digest. But a company that wants to be data driven wants to hire a data scientist. This is the wrong first hire. To do that they need access to all the relevant data, and have it be complete and clean. That falls to data engineers who need to build data assembly lines where they are creating meaningful types to get data usable to the data scientist. That’s who we serve.”

Intermix.io works in a few ways: First, it tags all of that data, giving the service a meta-layer for understanding what does what and where it goes; second, it taps every input to gather metrics on performance and help identify potential bottlenecks; and last, it tracks that performance all the way from the query to the thing that ends up on a dashboard. The idea is that if, say, some server is about to run out of space or is showing performance degradation, that will start showing up in the performance of the actual operations pretty quickly, and needs to be addressed.
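The bottleneck-isolation step can be pictured as aggregating tagged, per-stage timings and surfacing the slowest stage on each query path. This sketch is illustrative only; the data shape and function name are assumptions, not Intermix.io's API.

```python
# Illustrative bottleneck finder: given tagged per-stage timings,
# report the stage that consumed the most total time per query path.
from collections import defaultdict

def slowest_stage(timings):
    """timings: iterable of (query_id, stage, seconds).
    Returns, per query, the stage with the largest total time."""
    per_query = defaultdict(lambda: defaultdict(float))
    for query_id, stage, seconds in timings:
        per_query[query_id][stage] += seconds
    return {q: max(stages, key=stages.get) for q, stages in per_query.items()}

timings = [
    ("q1", "extract",   1.0),
    ("q1", "load",      5.0),
    ("q1", "load",      2.0),  # repeated stages accumulate
    ("q2", "transform", 0.5),
    ("q2", "extract",   0.1),
]
```

With the meta-layer of tags in place, the same aggregation can roll up by source, table or dashboard instead of by query.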

All of this is an efficiency play that might not seem to make sense at a smaller scale. But with the waterfall of new devices that come online every day, as well as more and more ways of understanding how people use tools online, even the smallest companies can quickly start building massive data sets. And if a company’s business depends on some machine learning happening in the background, it’s dependent on all that training and tracking happening as quickly and smoothly as possible, with any hiccups leading to real repercussions for its own business.

Intermix.io isn’t the first company to try to create application performance management software. There are others, like Datadog and New Relic, though Lappas says the primary competition from them comes in the form of traditional APM software with some additional scripts tacked on. Data flows, however, are a different layer altogether, which means they require a more customized approach.
