Nov 01, 2018

HashiCorp scores $100M investment on $1.9 billion valuation

HashiCorp, the company that has made hay developing open-source tools for managing cloud infrastructure, obviously has a pretty hefty commercial business going too. Today the company announced an enormous $100 million round on a unicorn valuation of $1.9 billion.

The round was led by IVP, whose investments include AppDynamics, Slack and Snap. Newcomer Bessemer Venture Partners joined existing investors GGV Capital, Mayfield, Redpoint Ventures and True Ventures in the round. Today’s investment brings the total raised to $179 million.

The company’s open-source tools have been downloaded 45 million times, according to data provided by the company. It has used that open-source base to fuel the business (as many have done before).

“Because practitioners choose technologies in the cloud era, we’ve taken an open source-first approach and partnered with the cloud providers to enable a common workflow for cloud adoption. Commercially, we view our responsibility as a strategic partner to the Global 2000 as they adopt hybrid and multi-cloud. This round of funding will help us accelerate our efforts,” company CEO Dave McJannet said in a statement.

To keep growing, the company needs to build out its worldwide operations, and that requires big bucks. In addition, scaling means adding staff to beef up the customer success, support and training teams. The company plans to make investments in these areas with the new funding.

HashiCorp launched in 2012. It was the brainchild of two college students, Mitchell Hashimoto and Armon Dadgar, who came up with the idea of what would become HashiCorp while they were still at the University of Washington. As I wrote in 2014 on the occasion of their $10 million Series A round:

After graduating and getting jobs, Hashimoto and Dadgar reunited in 2012 and launched HashiCorp. They decided to break their big problem down into smaller, more manageable pieces and eventually built the five open source tools currently on offer. In fact, they found as they developed each one, the community let them know about adjacent problems and they layered on each new tool to address a different need.

HashiCorp has continued to build on that early vision, layering on new tools over the years. It is not alone in building a business on top of open source and getting rewarded for its efforts. Just this morning, Neo4j, a company that built a business on top of its open-source graph database project, announced an $80 million Series E investment.

Oct 24, 2017

HashiCorp raises $40M for its cloud infrastructure automation services

HashiCorp is probably best known for Terraform, its open-source tool for automatically provisioning infrastructure by describing it as code. But the company also offers a whole range of additional open-source security tools and products that enable multi-cloud deployments, as well as enterprise versions of these tools that add features for larger teams on top of these free versions.

Apr 28, 2015

HashiCorp Attacks Credentials Security With Open Source Secrets Manager

Once upon a time, when you wanted to secure something of value, you put it in a vault and distributed the keys. Today, when you want to secure software credentials, especially as you move across services, you can use a digital secrets manager and distribute the virtual keys. HashiCorp announced an early release of an open source secrets manager today appropriately called Vault. The tool…

Dec 09, 2014

HashiCorp Announces New DevOps Management Tool And $10M In Funding

On the heels of closing a $10 million Series A funding, previously stealthy open source DevOps toolkit provider, HashiCorp, unveiled its new commercial offering that connects all of their open source products. Called Atlas, the new service provides a single place for developing, building, deploying and monitoring distributed applications. The funding was led by Mayfield with participation…

Dec 05, 2014

Streamlined Percona XtraDB Cluster (or anything) testing with Consul and Vagrant

Introducing Consul

I’m always interested in what Mitchell Hashimoto and HashiCorp are up to; I typically find their projects valuable. If you’ve heard of Vagrant, you know their work.

I recently became interested in a newer project of theirs called ‘Consul’. Consul is a bit hard to describe. It is (in part) all of the following; a quick command-line illustration follows this list:

  • A highly consistent metadata store (a bit like ZooKeeper)
  • A monitoring system (a lightweight Nagios)
  • A service discovery system, both DNS- and HTTP-based (think of something like HAProxy, but instead of TCP load balancing, it provides DNS lookups that return only healthy services)
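
To make that list a bit more concrete, here is a minimal sketch of my own (not part of the original setup) of poking at a local Consul agent from the shell, assuming the default HTTP API port 8500 and DNS port 8600:

# Key/value store: write and read a key through the HTTP API
curl -X PUT --data 'bar' http://localhost:8500/v1/kv/test/foo
curl http://localhost:8500/v1/kv/test/foo

# Service discovery over DNS: resolve only the healthy instances of a service
dig @127.0.0.1 -p 8600 pxc.service.consul +short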

What this has to do with Percona XtraDB Cluster

I’ve had some more complex testing for Percona XtraDB Cluster (PXC) on my plate for quite a while, and I started to explore Consul as a tool to help with it. I already have Vagrant setups for PXC, but ensuring all the nodes are healthy, kicking off tests, gathering results, etc. were still difficult.

So, my loose goals for Consul are:

  • A single dashboard to ensure my testing environment is healthy
  • Ability to adapt to any size environment — 3 node clusters up to 20+
  • Coordinate starting and stopping load tests running on any number of test clients
  • Have the ability to collect distributed test results

I’ve succeeded on some of these fronts with a Vagrant environment I’ve been working on. This spins up:

  • A Consul cluster (default is a single node)
  • Test server(s)
  • A PXC cluster

Additionally, it integrates the test servers and PXC nodes with Consul such that:

  • The servers set up a Consul agent in client mode, joined to the Consul cluster
  • They also set up a local DNS forwarder that sends all DNS requests for the ‘.consul’ domain to the local agent, to be serviced by the Consul cluster
  • The servers register services with Consul that run local health checks
  • The test server(s) set up a ‘watch’ in Consul that waits for a Consul ‘event’ before starting sysbench (a rough sketch of such a service definition and watch follows this list)
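
For illustration, here is roughly what such a service definition and event watch could look like on a node. This is a hypothetical sketch of my own; the config file paths and the handler script name are placeholders, not what the Vagrant provisioning actually writes:

# Register a 'pxc' service whose health check periodically runs clustercheck
cat > /etc/consul.d/pxc.json <<'EOF'
{
  "service": {
    "name": "pxc",
    "port": 3306,
    "check": { "script": "/usr/bin/clustercheck", "interval": "10s" }
  }
}
EOF

# Watch for the 'sysbench_update_index' event and hand it off to a local script
cat > /etc/consul.d/sysbench_watch.json <<'EOF'
{
  "watches": [
    { "type": "event", "name": "sysbench_update_index", "handler": "/usr/local/bin/start_sysbench.sh" }
  ]
}
EOF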

Seeing it in action

Once I run my ‘vagrant up’, I get a Consul UI I can connect to on my localhost at port 8501:

Consul’s Node Overview

I can see all 5 of my nodes.  I can check the services and see that test1 is failing one health check because sysbench isn’t running yet:

Consul reporting sysbench is not running.

This is expected, because I haven’t started testing yet.  I can see that my PXC cluster is healthy:

Health checks are using clustercheck from the PXC package
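
Outside the UI, the same information is available from Consul’s CLI and HTTP API. As a rough sketch of my own (assuming the default API port 8500 and the service name ‘pxc’):

# List all agents (servers and clients) known to the local Consul agent
consul members

# List only the instances of the 'pxc' service whose health checks are passing
curl 'http://localhost:8500/v1/health/service/pxc?passing'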

Involving Percona Cloud Tools in the system

So far, so good.  This Vagrant configuration (if I provide a PERCONA_AGENT_API_KEY in my environment) also registers my test servers with Percona Cloud Tools, so I can see data being reported there for my nodes:

Percona Cloud Tools’ Dashboard for a single node

So now I am ready to begin my test.  To do so, I simply need to issue a consul event from any of the nodes:

jayj@~/Src/pxc_consul [507]$ vagrant ssh consul1
Last login: Wed Nov 26 14:32:38 2014 from 10.0.2.2
[root@consul1 ~]# consul event -name='sysbench_update_index'
Event ID: 7c8aab42-fd2e-de6c-cb0c-1de31c02ce95

The pre-configured watcher on my test node knows what to do with that event and launches sysbench. Consul shows that sysbench is indeed running:

Consul showing the sysbench check passing

And I can indeed see traffic start to come in on Percona Cloud Tools:

Traffic starting to appear in Percona Cloud Tools

I have the testing traffic limited for my example, but that’s easily tunable via the Vagrantfile. To show something a little more impressive, here’s a 5-node cluster hitting around 2,500 tps total throughput:

A 5-node cluster sustaining roughly 2,500 tps total throughput

So to summarize thus far:

  • I can spin up any size cluster I want and verify it is healthy with Consul’s UI
  • I can spin up any number of test servers and kick off sysbench on all of them simultaneously

Another big trick of Consul’s

So far so good, but let me point out a few things that may not be obvious. If you check the Vagrantfile, I use a Consul hostname in a few places. First, on the test servers:

# sysbench setup
            'tables' => 1,
            'rows' => 1000000,
            'threads' => 4 * pxc_nodes,
            'tx_rate' => 10,
            'mysql_host' => 'pxc.service.consul'

then again on the PXC server configuration:

# PXC setup
          "percona_server_version"  => pxc_version,
          'innodb_buffer_pool_size' => '1G',
          'innodb_log_file_size' => '1G',
          'innodb_flush_log_at_trx_commit' => '0',
          'pxc_bootstrap_node' => (i == 1 ? true : false ),
          'wsrep_cluster_address' => 'gcomm://pxc.service.consul',
          'wsrep_provider_options' => 'gcache.size=2G; gcs.fc_limit=1024',

Notice ‘pxc.service.consul’. This hostname is provided by Consul and resolves to the IPs of all current servers that both register the ‘pxc’ service and are passing its health check:

[root@test1 ~]# host pxc.service.consul
pxc.service.consul has address 172.28.128.7
pxc.service.consul has address 172.28.128.6
pxc.service.consul has address 172.28.128.5

So I am using this to my advantage in two ways:

  1. My PXC cluster bootstraps the first node automatically, but all the other nodes use this hostname for their wsrep_cluster_address.  This means no specific hostnames or IPs in the my.cnf file, and this hostname will always be up to date with whichever nodes are active in the cluster, which is precisely the list that should be in wsrep_cluster_address at any given moment.
  2. My test servers connect to this hostname, so they always know where to connect, and they will round-robin (if I have enough sysbench threads and PXC nodes) across different nodes based on the response of the DNS lookup, which returns three of the active nodes in a different order each time (a rough sysbench invocation along these lines is sketched after this list).
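
As a rough illustration of the second point, pointing sysbench at the Consul-provided name is just a matter of using it as the MySQL host. This is my own sketch; the actual invocation is generated by the Vagrant provisioning, and the lua script path, credentials and sizes below are placeholders:

# Hypothetical sysbench OLTP run against the Consul-provided service name
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
         --mysql-host=pxc.service.consul \
         --mysql-user=test --mysql-password=test \
         --oltp-table-size=1000000 \
         --num-threads=12 --tx-rate=10 \
         run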

(Some of) The Issues

This is still a work in progress and there are many improvements that could be made:

  • I’m relying on Percona Cloud Tools (PCT) to collect my data, but it’d be nice to use Consul’s central key/value store to store the results of the independent sysbench runs.
  • Consul’s leader election could be used to help the cluster determine which node should bootstrap on first startup. I am assuming node1 should bootstrap.
  • A variety of bugs in various software still make this a bit clunky to manage at times.  Here is a sample:
    • Consul events sometimes don’t fire in the current release (though it looks to be fixed soon)
    • PXC joining nodes sometimes get stuck, putting speed bumps into the automated deploy.
    • Automated installation of percona-agent (which sends data to Percona Cloud Tools) is straightforward, except when different cluster nodes clobber each other’s credentials.

So, in summary, I am happy with how easily Consul integrates, and I’m already finding it useful for a product that is only at its 0.4.1 release.

