Mar 25, 2020
--

Humio announces $20M Series B to advance unlimited logging tool

Humio, a startup that has built a modern unlimited logging solution, announced a $20 million Series B investment today.

Dell Technologies Capital led the round with participation from previous investor Accel. Today’s investment brings the total raised to $32 million, according to the company.

Humio co-founder and CEO Geeta Schmidt says the startup wanted to build a solution that would allow companies to log everything while reducing the overall cost of doing so, a tough problem given the resources and data volumes involved. The company deals with customers who are processing multiple terabytes of data per day.

“We really wanted to build an infrastructure where it’s easy to log everything and answer anything in real time. So we built an index-free logging solution which allows you to ask […] ad hoc questions over large volumes of data,” Schmidt told TechCrunch.

They are able to ingest so much data by using streaming technology, says company EVP of sales Morten Gram. “We have this real time streaming engine that makes it possible for customers to monitor whatever they know they want to be looking at. So they can build dashboards and alerts for these [metrics] that will be running in real time,” Gram explained.

What’s more, because the solution enables companies to log everything, rather than picking and choosing what to log, they can ask questions about things they didn’t know to look for, such as an ongoing security incident or a major outage, and trace the answer from the log data as the incident is happening.

Perhaps more importantly, the company has come up with technology to reduce the cost associated with processing and storing such high volumes of data. “We have thought a lot about trying to do a lot more with a lot less resources. And so, for example, one of our customers, who moved from a competitor, has gone from 80 servers to 14 doing the same volumes of data,” she said.

Deepak Jeevankumar, managing director and lead investor at Dell Technologies Capital, says that his firm recognized that Humio was solving these issues in a creative and modern way.

“Humio’s team has created a new log analysis architecture for the microservices age. This can support real-time analysis at full-speed ingest, while decreasing cost of storage and analysis by at least an order of magnitude,” he explained. “In a short period of time, Humio has won the confidence of many Fortune 500 customers who have shifted their log platforms to Humio from legacy, decade-old architectures that do not scale for the cloud world.”

The company’s customers include Netlify, Bloomberg, HP Aruba and Michigan State University. It offers on-prem, cloud and hosted SaaS products. Today, the company also announced it was introducing an unlimited ingest plan for hosted SaaS customers.

Feb 6, 2020
--

Datree announces $8M Series A as it joins Y Combinator

Datree, the early-stage startup building a DevOps policy engine on GitHub, announced an $8 million Series A today. It also announced it has joined the Y Combinator Winter 20 cohort.

Blumberg Capital and TLV Partners led the round, with participation from Y Combinator. The company has now raised $11 million, including the $3 million seed round announced in 2018.

Since that seed round, co-founder and CEO Shimon Tolts says, the company learned that while scanning code for issues was something DevOps teams found useful, they wanted help defining the rules. So Datree has created a series of rules packages you can run against your code to find any gaps or issues.

“We offer development best practices, coding standards and security and compliance policies. What happens today is that, as you connect to Datree, we connect to your source code and scan the entire code base, and we recommend development best practices based on your technology stack,” Tolts explained.

He says that they build these rules packages based on the company’s own expertise, as well as getting help from the community, and in some cases partnering with experts. For its Docker security package, it teamed up with Aqua Security.

The focus remains on applying these rules in GitHub where developers are working. Before committing the code, they can run the appropriate rules packages against it to ensure they are in compliance with best practices.

Datree rules packages. Screenshot: Datree

Tolts says they began looking at Y Combinator after the seed round because they wanted more guidance on building out the business. “We knew that Y Combinator could really help us because our product is relevant to 95 percent of all YC companies, and the program has helped us go and work on six-figure deals with more mature YC companies,” he said.

Datree is working directly with Y Combinator CEO Michael Seibel, and Tolts says being part of the Winter 20 cohort has helped him refine the company’s go-to-market motion. He admits Datree is not a typical YC company, having been around since 2017 with an existing product and 12 employees, but he thinks the program will help propel the company in the long run.

Feb 5, 2020
--

Where top VCs are investing in open source and dev tools (Part 2 of 2)

In part two of a survey that asks top VCs about exciting opportunities in open source and dev tools, we dig into responses from 10 leading open-source-focused investors at firms that span early to growth stage across software-specific firms, corporate venture arms and prominent generalist firms.

In the conclusion to our survey, we’ll hear from:

These responses have been edited for clarity and length.

Feb 5, 2020
--

Where top VCs are investing in open source and dev tools (Part 1 of 2)

The once-polarizing world of open-source software has recently become one of the hotter destinations for VCs.

As the popularity of open source increases among organizations and developers, startups in the space have reached new heights and monstrous valuations.

Over the past several years, we’ve seen surging open-source companies like Databricks reach unicorn status, as well as VCs who cashed out behind a serious number of exits involving open-source and dev tool companies, deals like IBM’s Red Hat acquisition or Elastic’s late-2018 IPO. Last year, the exit spree continued with transactions like F5 Networks’ acquisition of NGINX and a number of high-profile acquisitions from mainstays like Microsoft and GitHub.

Similarly, venture investment in new startups in the space has continued to swell. More investors are taking shots at finding the next big payout, with annual invested capital in open-source and dev tool startups increasing at a roughly 10% compounded annual growth rate (CAGR) over the last five years, according to data from Crunchbase. Furthermore, attractive returns in the space seem to be adding more fuel to the fire, as open-source and dev tool startups saw more than $2 billion invested in the space in 2019 alone, per Crunchbase data.

As we close out another strong year for innovation and venture investing in the sector, we asked 18 of the top open-source-focused VCs who work at firms spanning early to growth stages to share what’s exciting them most and where they see opportunities. For purposes of length and clarity, responses have been edited and split (in no particular order) into part one and part two of this survey. In part one of our survey, we hear from:

Sep 17, 2019
--

GitLab hauls in $268M Series E on $2.75B valuation

GitLab is a company that doesn’t pull any punches or try to be coy. It actually has had a page on its website for some time stating it intends to go public on November 18, 2020. You don’t see that level of transparency from late-stage startups all that often. Today, the company announced a huge $268 million Series E on a tidy $2.75 billion valuation.

Investors include Adage Capital Management, Alkeon Capital, Altimeter Capital, Capital Group, Coatue Management, D1 Capital Partners, Franklin Templeton, Light Street Capital, Tiger Management Corp. and Two Sigma Investments.

The company seems to be primed and ready for that eventual IPO. Last year, GitLab co-founder and CEO Sid Sijbrandij said that his CFO, Paul Machle, told him he wanted to begin planning to go public, and that he would need two years to prepare the company. As Sijbrandij tells it, he told Machle to pick a date.

“He said, I’ll pick the 16th of November because that’s the birthday of my twins. It’s also the last week before Thanksgiving, and after Thanksgiving, the stock market is less active, so that’s a good time to go out,” Sijbrandij told TechCrunch.

He said that he considered it a done deal and put the date on the GitLab Strategy page, a page that outlines the company’s plans for everything it intends to do. It turned out that he was a bit too quick on the draw. Machle had checked the date in the interim and realized that it was a Monday, which is not traditionally a great day to go out, so they decided to do it two days later. Now the target date is officially November 18, 2020.


GitLab has the date it’s planning to go public listed on its Strategy page.

As for that $268 million, it gives the company considerable runway ahead of that planned event, but Sijbrandij says it also gives him flexibility in how to take the company public. “One other consideration is that there are two options to go public. You can do an IPO or direct listing. We wanted to preserve the optionality of doing a direct listing next year. So if we do a direct listing, we’re not going to raise any additional money, and we wanted to make sure that this is enough in that case,” he explained.

Sijbrandij says that the company made a deliberate decision to be transparent early on. Being based on an open-source project, it’s sometimes tricky to make that transition to a commercial company, and sometimes that has a negative impact on the community and the number of contributions. Transparency was a way to combat that, and it seems to be working.

He reports that the community contributes 200 improvements to the GitLab open-source product every month, and that’s double the amount of just a year ago, so the community is still highly active in spite of the parent company’s commercial success.

It did not escape his notice that Microsoft acquired GitHub last year for $7.5 billion; GitLab is a similar kind of company, one that helps developers manage and distribute code in a DevOps environment. He claims that in spite of that eye-popping number, his goal is to remain an independent company and take this through to the next phase.

“Our ambition is to stay an independent company. And that’s why we put out the ambition early to become a listed company. That’s not totally in our control, as the majority of the company is owned by investors, but as long as we’re more positive about the future than the people around us, I think we have a shot at not getting acquired,” he said.

The company was founded in 2014 and was a member of Y Combinator in 2015. It has been on a steady growth trajectory ever since, hauling in more than $426 million. The last round before today’s announcement was a $100 million Series D last September.

May 21, 2019
--

Praqma puts Atlassian’s Data Center products into containers

It’s KubeCon + CloudNativeCon this week, and in the slew of announcements, one name stood out: Atlassian. The company is best known as the maker of tools that allow developers to work more efficiently, and now as a cloud infrastructure provider. In this age of containerization, though, even Atlassian can bask in the glory that is Kubernetes: the company today announced that its channel partner Praqma is launching Atlassian Software in Kubernetes (ASK), a new solution that allows enterprises to run and manage on-premises applications like Jira Data Center as containers, with the help of Kubernetes.

Praqma is now making ASK available as open source.

As the company notes in today’s announcement, running a Data Center application and ensuring high availability can be a lot of work using today’s methods. With ASK and by containerizing the applications, scaling and management should become easier, and downtime more avoidable.

“Availability is key with ASK. Automation keeps mission-critical applications running whatever happens,” Praqma’s team explains. “If a Jira server fails, Data Center will automatically redirect traffic to healthy servers. If an application or server crashes, Kubernetes automatically reconciles by bringing up a new application. There’s also zero downtime upgrades for Jira.”

ASK handles the scaling and most admin tasks, in addition to offering a monitoring solution based on the open-source Grafana and Prometheus projects.

Containers are slowly becoming the distribution medium of choice for a number of vendors. As enterprises move their existing applications to containers, it makes sense for them to also expect to manage their existing on-premises applications from third-party vendors in the same systems. For some vendors, that may mean a shift away from per-server licensing to per-seat licensing, so there are business implications, but in general it’s a logical move for most.

Apr 23, 2019
--

Harness hauls in $60M Series B investment on $500M valuation

Series B rounds used to be about establishing product-market fit, but for some startups the whole process seems to be accelerating. Harness, the startup founded by AppDynamics co-founder and CEO Jyoti Bansal, is one of those companies putting the pedal to the metal: with his second startup, Bansal is taking his learnings, plus a $60 million round, to build the company much more quickly.

Harness already has an eye-popping half-billion-dollar valuation. It’s not terribly often I hear valuations in a Series B discussion; more typically, CEOs want to talk growth rates. But Bansal volunteered the information, excited by the startup’s rapid development.

The round was led by IVP, GV (formerly Google Ventures) and ServiceNow Ventures. Existing investors Big Labs, Menlo Ventures and Unusual Ventures also participated. Today’s investment brings the total raised to $80 million, according to Crunchbase data.

Bansal obviously made a fair bit of money when he sold AppDynamics to Cisco in 2017 for $3.7 billion and he could have rested after his great success. Instead he turned his attention almost immediately to a new challenge, helping companies move to a new continuous delivery model more rapidly by offering Continuous Delivery as a Service.

As companies move to containers and the cloud, they face challenges implementing new software delivery models. As is often the case, large web scale companies like Facebook, Google and Netflix have the resources to deliver these kinds of solutions quickly, but it’s much more difficult for most other companies.

Bansal saw an opportunity here to package continuous delivery approaches as a service. “Our approach in the market is Continuous Delivery as a Service, and instead of you trying to engineer this, you get this platform that can solve this problem and bring you the best tooling that a Google or Facebook or Netflix would have,” Bansal explained.

The approach has gained traction quickly. The company has grown from 25 employees at launch in 2017 to 100 today. It boasts 50 enterprise customers including Home Depot, Santander Bank and McAfee.

He says that the continuous delivery piece could just be a starting point, and the money from the round will be plowed back into engineering efforts to expand the platform and solve other problems DevOps teams face with a modern software delivery approach.

Bansal admits that it’s unusual to have this kind of traction this early, and he says that his growth is much faster than it was at AppDynamics at the same stage, but he believes the opportunity here is huge as companies look for more efficient ways to deliver software. “I’m a little bit surprised. I thought this was a big problem when I started, but it’s an even bigger problem than I thought and how much pain was out there and how ready the market was to look at a very different way of solving this problem,” he said.

Mar 20, 2019
--

Blameless emerges from stealth with $20M investment to help companies transition to SRE

Site Reliability Engineering (SRE) is an extension of DevOps designed for more complex environments. The problem is that this approach is difficult to implement and has usually been within reach only of large companies able to build custom software. Blameless, a Bay Area startup, wants to put it in reach of everyone. It emerged from stealth today with an SRE platform for the masses and around $20 million in funding.

For starters, the company announced two rounds of funding: $3.6 million in seed money last April and a $16.5 million Series A investment more recently, in January. Investors included Accel, Lightspeed Venture Partners and others.

Company co-founder and CEO Ashar Rizqi knows first-hand just how difficult it is to implement an SRE system. He built custom systems for Box and Mulesoft before launching Blameless two years ago. He and his co-founder COO Lyon Wong saw a gap in the market where companies who wanted to implement SRE were being limited because of a lack of tooling and decided to build it themselves.

Rizqi says SRE changes the way you work and interact, and Blameless gives structure to that change. “It changes the way you communicate, prioritize and work, but we’re adding data and metrics to support that shift,” he said.

Screenshot: Blameless

As companies move to containers and continuous delivery models, it brings a level of complexity to managing the developers, who are working to maintain the delivery schedule, and operations, who must make sure the latest builds get out with a minimum of bugs. It’s not easy to manage, especially given the speed involved.

Over time, the bugs build up, and blame circulates around the DevOps team as they surface. The company name comes from the idea that its platform should remove blame from the equation by providing the tooling to get deeper visibility into all aspects of the delivery model.

At that point, companies can understand more clearly the kinds of compromises they need to make to get products out the door, rather than randomly building up this technical debt over time. This is exacerbated by the fact that companies are building their software from a variety of sources, whether open source or API services, and it’s hard to know the impact that external code is having on your product.

“Technical debt is accelerating as there is greater reliance on microservices. It’s a black box. You don’t own all the lines of code you are executing,” Rizqi explained. His company’s solution is designed to help with that problem.

The company currently has 23 employees and 20 customers including DigitalOcean and Home Depot.

Feb 21, 2019
--

JFrog acquires Shippable, adding continuous integration and delivery to its DevOps platform

JFrog, the popular DevOps startup now valued at more than $1 billion after raising $165 million last October, is making a move to expand the tools and services it provides to developers on its software operations platform: it has acquired Shippable, a cloud-based continuous integration and delivery platform (CI/CD) that developers use to ship code and deliver app and microservices updates, and plans to integrate it into its Enterprise+ platform.

Terms of the deal — JFrog’s fifth acquisition — are not being disclosed, said Shlomi Ben Haim, JFrog’s co-founder and CEO, in an interview. From what I understand, though, it was in the ballpark of Shippable’s most recent valuation, which was $42.6 million back in 2014 when it raised $8 million, according to PitchBook data. (And that was the last time it raised money.)

Shippable employees are joining JFrog and plan to release the first integrations with Enterprise+ this coming summer, and a full integration by Q3 of this year.

Shippable, founded in 2013, made its name early on as a provider of a containerized continuous integration and delivery platform based on Docker containers, but as Kubernetes has overtaken Docker in containerized deployments, the startup had also shifted its focus beyond Docker containers.

The acquisition speaks to the consolidation that is afoot in the world of DevOps, where developers and organizations are looking for more end-to-end toolkits, not just to help develop, update and run their apps and microservices, but to provide security and more — or at least, makers of DevOps tools hope they will be, as they themselves look to grow their margins and business.

As more organizations run ever more of their operations as apps and microservices, DevOps tools have risen in prominence, offered both by standalone businesses and by providers whose infrastructure those tools touch and use. That means a company like JFrog has an expanding pool of competitors that includes not just the likes of Docker, Sonatype and GitLab, but also AWS, Google Cloud Platform and Azure, and “the Red Hats of the world,” in the words of Ben Haim.

For Shippable customers, the integration will give them access to security, binary management and other enterprise development tools.

“We’re thrilled to join the JFrog family and further the vision around Liquid Software,” said Avi Cavale, founder and CEO of Shippable, in a statement. “Shippable users and customers have long enjoyed our next-generation technology, but now will have access to leading security, binary management and other high-powered enterprise tools in the end-to-end JFrog Platform. This is truly exciting, as the combined forces of JFrog and Shippable can make full DevOps automation from code to production a reality.”

On the part of JFrog, the company will be using Shippable to provide a native CI/CD tool directly within JFrog.

“Before, most of our users would use Jenkins, CircleCI and other CI/CD automation tools,” Ben Haim said. “But what you are starting to see in the wider market is a gradual consolidation of CI tools into the code repository.”

He emphasized that this will not mean any changes for developers who are already happy using Jenkins or other integrations: just that it will now be offering a native solution that will be offered alongside these (presumably both with easier functionality and with competitive pricing).

JFrog today has 5,000 paying customers, up from 4,500 in October, including “most of the Fortune 500,” with marquee customers including the likes of Apple and Adobe, but also banks, healthcare organizations and insurance companies — “conservative businesses,” said Ben Haim, that are also now realizing the importance of using DevOps.

Oct 23, 2018
--

Reclaiming space on your Docker PMM server deployment

Recently we had a customer with a filled disk on the server hosting their Docker pmm-server environment. They were not able to access the web UI, or even stop the pmm-server container, because they had filled the /var/ mount point.

Setting correct expectations

The best way to avoid these kinds of issues in the first place is to plan ahead, and to know exactly what you are dealing with in terms of disk space requirements. Michael Coburn has written a great blogpost on this matter:

https://www.percona.com/blog/2017/05/04/how-much-disk-space-should-i-allocate-for-percona-monitoring-and-management/

PMM Server now uses Prometheus version 2, so you should take the numbers there with a pinch of salt. On the other hand, the post shows how to plan ahead and think about the “steady state” disk usage, so it’s still a good read.

That’s the first step to make sure you won’t get into trouble down the line. But what happens if you are already in trouble? We’ll look at two quick ways that may help reclaim space.
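Before reclaiming anything, it helps to confirm where the space is actually going. A minimal check, assuming a default Docker setup on Linux where data lives under /var/lib/docker/:

```shell
# See how full the affected mount point is
df -h /var

# Break down Docker's usage by subdirectory (images, containers, volumes);
# sort -h orders the human-readable sizes from smallest to largest
du -sh /var/lib/docker/* 2>/dev/null | sort -h
```

Whichever subdirectory dominates tells you which of the cleanup steps below will pay off most.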

Before anything else, you should stop any and all running PMM clients, so that you don’t hit a race condition after recovering some space, in which metrics coming from the running clients fill up whatever disk space you had freed.

If pmm-admin stop --all won’t work, you can stop the services manually, or even manually kill the running processes as a last resort:

shell> systemctl list-unit-files | grep enabled | grep pmm | awk '{print $1}' | xargs -n 1 systemctl stop
shell> ps ax | egrep "exporter|qan-agent|pmm" | grep -v "ssh" | awk '{print $1}' | xargs kill

Removing unused containers

In order for the next steps to be as effective as possible, make sure there are no unused containers, running or stopped:

shell> docker ps -a

If you see any container that you know you don’t need anymore:

shell> docker stop <container_name>
shell> docker rm -v <container_name>

WARNING! Do not remove the pmm-data container!

Reclaiming space from unused Docker images

After you are done cleaning up unused containers, we can move on to removing unused images. Unless you are manually building your own Docker images, it’s really easy to get them again if needed, so you shouldn’t be afraid of deleting the ones that are not being used. In fact, you don’t need to explicitly download images at all: if you run docker run … image_name and the image is not found locally, Docker will automatically pull it for you.
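Before pruning, you can also ask Docker itself how much space is in use and how much of it is reclaimable. This check is not part of the original workflow, but docker system df (available since Docker 1.13) gives a quick summary:

```shell
# Summarize disk usage for images, containers, local volumes and build cache,
# including the share Docker considers reclaimable
docker system df
```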

shell> docker image prune -a
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
Deleted Images:
...
Total reclaimed space: 3.97GB

Not too bad: we just reclaimed almost 4 GB of disk space. This alone should be enough to restart the Docker service and bring the pmm-server container back up. But we want more, just because we can!

Reclaiming space from orphaned Docker volumes

By default, when removing a container with docker rm, Docker will not delete the associated volumes unless you use the -v switch as we did above. This means that, unless you were aware of this fact, you probably have some more gigabytes’ worth of data occupying disk space. We can easily reclaim it with the volume prune command:

shell> docker volume prune
WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
...
Total reclaimed space: 115GB

Yeah… that’s a significant amount of disk space we just reclaimed! Again, before doing this, make sure you don’t need any of the volumes from your past containers, since there is no turning back, obviously.

For earlier versions of Docker where this command is not available, you can check this link.
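As a sketch of what the workaround boils down to, assuming a Docker version that at least supports the dangling=true volume filter (1.9 and later):

```shell
# List volumes not referenced by any container, then remove them.
# xargs -r skips the rm entirely when the list is empty (GNU/busybox xargs).
docker volume ls -qf dangling=true | xargs -r docker volume rm
```

Running just the docker volume ls half first is a safe dry run to review what would be deleted.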

Planning ahead

As mentioned before, you should now revisit Michael’s blogpost, and set the metrics retention and queries retention variables to whatever makes sense for your environment. Even if you plan ahead, you may not be counting on the additional variable overhead of images and orphaned volumes, so you may want to (warning: shameless plug for my own blogpost ahead) use different mount points for your PMM deployment, and avoid using the shared /var/lib/docker/ mount point for it.

PMM also includes a Disk Space usage dashboard that you can use to monitor this.

Don’t forget to start back up your PMM clients, and continue to monitor them 24×7!

Photo by Andrew Wulf on Unsplash
