Sep 10, 2019

HashiCorp announces fully managed service mesh on Azure

Service mesh is just beginning to take hold in the cloud-native world, and as it does, vendors are looking for ways to help customers understand it. One way to simplify the complexity of dealing with the growing number of service mesh products out there is to package the technology as a service. Today, HashiCorp announced a new service on Azure to address that need, building it into the Consul product.

HashiCorp co-founder and CTO Armon Dadgar says it’s a fully managed service. “We’ve partnered closely with Microsoft to offer a native Consul [service mesh] service. At the highest level, the goal here is, how do we make it basically push-button,” Dadgar told TechCrunch.

He adds that there is extremely tight integration in terms of billing and permissions, as well as other management functions, as you would expect with a managed service in the public cloud. Brendan Burns, one of the original Kubernetes developers, who is now a distinguished engineer at Microsoft, says the HashiCorp solution really strips away a lot of the complexity associated with running a service mesh.

“In this case, HashiCorp is using some integration into the Azure control plane to run Consul for you. So you just consume the service mesh. You don’t have to worry about the operations of the service mesh,” Burns said. He added, “This is really turning it into a service instead of a do-it-yourself exercise.”

Service meshes are tools used in conjunction with containers and Kubernetes in a dynamic cloud-native environment to help microservices communicate and interoperate with one another. A growing number of them, including Istio, Envoy and Linkerd, are jockeying for position right now.

Burns makes it clear that while Microsoft is working closely with HashiCorp on this project, it’s also working with other vendors, as well. “Our goal with the service mesh interface specification was really to let a lot of partners be successful on the platform. You know, there’s a bunch of different service meshes. It’s a place where we feel like there’s a lot of evolution and experimentation happening, so we want to make sure that our customers can find the right solution for them,” Burns explained.

The HashiCorp Consul service is currently in private beta.

Sep 10, 2019

HashiCorp expands Terraform free version, adds paid tier for SMBs

HashiCorp has had a free tier for its Terraform product in the past, but it was basically for a single user. Today, the company announced it was expanding that free tier to allow up to five users, while also increasing the range of functions that are available before you have to pay.

“We’re announcing a pretty large expansion of the Terraform Cloud free tier. So many of the capabilities that used to be exclusively in our Terraform enterprise product, we’re now bringing down into the Terraform free tier. It allows you to do central execution of Terraform and apply the full lifecycle as part of the free tier,” HashiCorp co-founder and CTO Armon Dadgar explained.

In addition, the company announced a middle tier aimed at SMBs. Dadgar says the new pricing tier addresses some obvious gaps in the pricing catalog for a large set of users who have outgrown the free product yet aren’t ready for the enterprise version.

“We were seeing a lot of friction with our SMB customers trying to figure out how to go from one-user Terraform to a team of five people or a team of 20 people. And I think the challenge was that we had the enterprise product, which in terms of deployment and pricing, is really geared toward Global 2000 kinds of companies,” Dadgar told TechCrunch.

He said this left a huge gap for smaller teams of between five and 100 users, which forced those teams to kludge together solutions to fit their requirements. The company thought it would make more sense to have a paid tier specifically geared to this group, creating a logical path for all users on the platform while solving a known problem.

“It’s a logical path, but it also just answers the constant questions on forums and mailing lists regarding how to collaborate [with smaller teams]. Before, we didn’t have a prescriptive answer, and so there was a lot of DIY, and this is our attempt at a prescriptive answer of how you should do this,” he said.

Terraform is the company’s tool for defining, deploying and managing infrastructure as code. There is an open-source product, an on-prem version and a SaaS version.

May 21, 2019

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud-native computing world. At KubeCon, the biannual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He noted that the Service Mesh Interface (SMI), as this new specification is called, does not provide any implementations of these features itself. It only defines a common set of APIs.

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

Nov 1, 2018

HashiCorp scores $100M investment on $1.9 billion valuation

HashiCorp, the company that has made hay developing open-source tools for managing cloud infrastructure, obviously has a pretty hefty commercial business going too. Today the company announced an enormous $100 million round on a unicorn valuation of $1.9 billion.

The round was led by IVP, whose investments include AppDynamics, Slack and Snap. Newcomer Bessemer Venture Partners joined existing investors GGV Capital, Mayfield, Redpoint Ventures and True Ventures in the round. Today’s investment brings the total raised to $179 million.

The company’s open-source tools have been downloaded 45 million times, according to data provided by the company. It has used that open-source base to fuel the business (as many have done before).

“Because practitioners choose technologies in the cloud era, we’ve taken an open source-first approach and partnered with the cloud providers to enable a common workflow for cloud adoption. Commercially, we view our responsibility as a strategic partner to the Global 2000 as they adopt hybrid and multi-cloud. This round of funding will help us accelerate our efforts,” company CEO Dave McJannet said in a statement.

To keep growing, it needs to build out its worldwide operations, and that requires big bucks. In addition, scaling means adding staff to beef up the customer success, support and training teams. The company plans on making investments in these areas with the new funding.

HashiCorp launched in 2012. It was the brainchild of two college students, Mitchell Hashimoto and Armon Dadgar, who came up with the idea of what would become HashiCorp while they were still at the University of Washington. As I wrote in 2014 on the occasion of their $10 million Series A round:

After graduating and getting jobs, Hashimoto and Dadgar reunited in 2012 and launched HashiCorp. They decided to break their big problem down into smaller, more manageable pieces and eventually built the five open source tools currently on offer. In fact, they found as they developed each one, the community let them know about adjacent problems and they layered on each new tool to address a different need.

HashiCorp has continued to build on that early vision, layering on new tools over the years. It is not alone in building a business on top of open source and getting rewarded for its efforts. Just this morning, Neo4j, a company that built a business on top of its open-source graph database project, announced an $80 million Series E investment.

Oct 24, 2017

HashiCorp raises $40M for its cloud infrastructure automation services

HashiCorp is probably best known for Terraform, its open-source tool for automatically provisioning infrastructure by describing it as code. But the company also offers a whole range of additional open-source security tools and products that enable multi-cloud deployments, as well as enterprise versions of these tools that add features for larger teams on top of these free versions.

Apr 28, 2015

HashiCorp Attacks Credentials Security With Open Source Secrets Manager

Once upon a time, when you wanted to secure something of value, you put it in a vault and distributed the keys. Today, when you want to secure software credentials, especially as you move across services, you can use a digital secrets manager and distribute the virtual keys. HashiCorp announced an early release of an open-source secrets manager today, appropriately called Vault. The tool…

Dec 9, 2014

HashiCorp Announces New DevOps Management Tool And $10M In Funding

On the heels of closing a $10 million Series A funding round, the previously stealthy open-source DevOps toolkit provider HashiCorp unveiled its new commercial offering, which connects all of its open-source products. Called Atlas, the new service provides a single place for developing, building, deploying and monitoring distributed applications. The funding was led by Mayfield with participation…

Dec 5, 2014

Streamlined Percona XtraDB Cluster (or anything) testing with Consul and Vagrant

Introducing Consul

I’m always interested in what Mitchell Hashimoto and HashiCorp are up to; I typically find their projects valuable.  If you’ve heard of Vagrant, you know their work.

I recently became interested in a newer project of theirs called ‘Consul’.  Consul is a bit hard to describe.  It is, in part, all of the following (each facet is sketched briefly after the list):

  • A highly consistent metadata store (a bit like ZooKeeper)
  • A monitoring system (a lightweight Nagios)
  • A service discovery system, both DNS- and HTTP-based (think of something like HAProxy, but instead of TCP load balancing, it answers DNS lookups with only the healthy services)
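
To make those facets concrete, here is a minimal sketch against a single local agent, assuming the default ports (8500 for HTTP, 8600 for DNS) and the ‘pxc’ service that gets registered later in this post:

# Consistent metadata store: write and then read a key via the KV HTTP API
curl -X PUT -d 'bar' http://localhost:8500/v1/kv/foo
curl http://localhost:8500/v1/kv/foo

# Monitoring: list every health check currently in the 'passing' state
curl http://localhost:8500/v1/health/state/passing

# Service discovery over HTTP: which nodes provide the 'pxc' service?
curl http://localhost:8500/v1/catalog/service/pxc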

What this has to do with Percona XtraDB Cluster

I’ve had some more complex testing for Percona XtraDB Cluster (PXC) on my plate for quite a while, and I started to explore Consul as a tool to help with it.  I already have Vagrant setups for PXC, but ensuring all the nodes are healthy, kicking off tests, gathering results, etc. was still difficult.

So, my loose goals for Consul are:

  • A single dashboard to ensure my testing environment is healthy
  • Ability to adapt to any size environment — 3-node clusters up to 20+
  • Coordinate starting and stopping load tests running on any number of test clients
  • Have the ability to collect distributed test results

I’ve succeeded on some of these fronts with a Vagrant environment I’ve been working on. This spins up:

  • A Consul cluster (default is a single node)
  • Test server(s)
  • A PXC cluster

Additionally, it integrates the test servers and PXC nodes with Consul such that:

  • The servers run a Consul agent in client mode, joined to the Consul cluster
  • They also set up a local DNS forwarder that sends all DNS requests for the ‘.consul’ domain to the local agent, to be serviced by the Consul cluster
  • The servers register services with Consul that run local health checks
  • The test server(s) set up a ‘watch’ in Consul that launches sysbench on a Consul ‘event’ (see the sketch after this list)
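
As a rough sketch of that glue (the paths, file names and handler script here are my own illustrative choices, not what the Vagrant provisioner literally writes):

# 1. Register the 'pxc' service with a clustercheck-based health check,
#    via a definition in the agent's config directory:
cat > /etc/consul.d/pxc.json <<'EOF'
{
  "service": {
    "name": "pxc",
    "port": 3306,
    "check": { "script": "/usr/bin/clustercheck", "interval": "10s" }
  }
}
EOF

# 2. dnsmasq rule that forwards only the '.consul' domain to the local
#    agent's DNS port, leaving all other lookups alone:
echo 'server=/consul/127.0.0.1#8600' > /etc/dnsmasq.d/10-consul

# 3. On the test servers, block waiting for the sysbench event and hand
#    it to a handler script when it fires:
consul watch -type=event -name=sysbench_update_index /usr/local/bin/run_sysbench.sh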

Seeing it in action

Once I run my ‘vagrant up’, I get a Consul UI I can connect to on my localhost at port 8501:

[Screenshot: Consul’s Node Overview]

I can see all 5 of my nodes.  I can check the services and see that test1 is failing one health check because sysbench isn’t running yet:

[Screenshot: Consul reporting that sysbench is not running]

This is expected, because I haven’t started testing yet.  I can see that my PXC cluster is healthy:

[Screenshot: PXC service health; the checks use clustercheck from the PXC package]

Involving Percona Cloud Tools in the system

So far, so good.  This Vagrant configuration (if I provide a PERCONA_AGENT_API_KEY in my environment) also registers my test servers with Percona Cloud Tools, so I can see data being reported there for my nodes:

[Screenshot: Percona Cloud Tools dashboard for a single node]

So now I am ready to begin my test.  To do so, I simply need to issue a consul event from any of the nodes:

jayj@~/Src/pxc_consul [507]$ vagrant ssh consul1
Last login: Wed Nov 26 14:32:38 2014 from 10.0.2.2
[root@consul1 ~]# consul event -name='sysbench_update_index'
Event ID: 7c8aab42-fd2e-de6c-cb0c-1de31c02ce95

My pre-configured watcher on the test node knows what to do with that event and launches sysbench.  Consul shows that sysbench is indeed running:

[Screenshot: the sysbench health check passing in Consul]
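
For reference, a Consul watch handler is just an executable that receives the triggering event payload as JSON on stdin.  A hypothetical minimal run_sysbench.sh (the script name and sysbench flags are illustrative and vary by sysbench version) might look like:

#!/bin/sh
# Hypothetical watch handler: drain the JSON event payload from stdin,
# since all we care about is that the event fired.
cat > /dev/null

# Run a sysbench OLTP workload against the service-discovered cluster.
sysbench --test=oltp \
         --mysql-host=pxc.service.consul \
         --mysql-user=test --mysql-password=test \
         --oltp-table-size=1000000 \
         --num-threads=4 \
         run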

 

And I can indeed see traffic start to come in on Percona Cloud Tools:

[Screenshot: traffic appearing in Percona Cloud Tools]

I have testing traffic limited for my example, but that’s easily tunable via the Vagrantfile.  To show something a little more impressive, here’s a 5-node cluster hitting around 2,500 tps of total throughput:

[Screenshot: a 5-node cluster sustaining roughly 2,500 tps in Percona Cloud Tools]

So to summarize thus far:

  • I can spin up any size cluster I want and verify it is healthy with Consul’s UI
  • I can spin up any number of test servers and kick off sysbench on all of them simultaneously

Another big trick of Consul’s

So far so good, but let me point out a few things that may not be obvious.  If you check the Vagrantfile, I use a Consul hostname in a few places.  First, on the test servers:

# sysbench setup
'tables'     => 1,
'rows'       => 1000000,
'threads'    => 4 * pxc_nodes,
'tx_rate'    => 10,
'mysql_host' => 'pxc.service.consul'

then again on the PXC server configuration:

# PXC setup
'percona_server_version'         => pxc_version,
'innodb_buffer_pool_size'        => '1G',
'innodb_log_file_size'           => '1G',
'innodb_flush_log_at_trx_commit' => '0',
'pxc_bootstrap_node'             => (i == 1),
'wsrep_cluster_address'          => 'gcomm://pxc.service.consul',
'wsrep_provider_options'         => 'gcache.size=2G; gcs.fc_limit=1024',

Notice ‘pxc.service.consul’.  This hostname is provided by Consul and resolves to the IPs of all servers currently registering the ‘pxc’ service and passing its health check:

[root@test1 ~]# host pxc.service.consul
pxc.service.consul has address 172.28.128.7
pxc.service.consul has address 172.28.128.6
pxc.service.consul has address 172.28.128.5

So I am using this to my advantage in two ways:

  1. My PXC cluster bootstraps the first node automatically, but all the other nodes use this hostname for their wsrep_cluster_address.  This means no specific hostnames or IPs in the my.cnf file, and the hostname is always up to date with the nodes currently active in the cluster, which is precisely the list that should be in wsrep_cluster_address at any given moment.
  2. My test servers connect to this hostname, so they always know where to connect, and they will round-robin (given enough sysbench threads and PXC nodes) across different nodes based on the DNS response, which returns three of the active nodes in a different order each time (see the dig sketch after this list).
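
You can watch that rotation directly by querying the agent’s DNS interface (port 8600 by default); repeated lookups return the healthy nodes in a shuffled order:

# Ask the local agent's DNS server directly; run it twice and compare the order.
dig @127.0.0.1 -p 8600 pxc.service.consul +short
dig @127.0.0.1 -p 8600 pxc.service.consul +short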

(Some of) The Issues

This is still a work in progress and there are many improvements that could be made:

  • I’m relying on PCT to collect my data, but it’d be nice to utilize Consul’s central key/value store to store the results of the independent sysbench runs (a minimal sketch follows this list).
  • Consul’s leader election could be used to help the cluster determine which node should bootstrap on first startup.  For now, I am assuming node1 should bootstrap.
  • A variety of bugs in various software still make this a bit clunky to manage at times.  Here is a sample:
    • Consul events sometimes don’t fire in the current release (though a fix looks to be coming soon)
    • PXC joining nodes sometimes get stuck, putting speed bumps into the automated deploy
    • Automated installs of percona-agent (which sends data to Percona Cloud Tools) are straightforward, except when different cluster nodes clobber each other’s credentials
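
For the first item, a minimal sketch of what result collection through the KV store could look like (the key layout under ‘results/’ is hypothetical):

# Each test node writes its own numbers under a shared prefix...
curl -X PUT -d '2500' "http://localhost:8500/v1/kv/results/$(hostname)/tps"

# ...and any node can then pull the whole result set back out in one call.
curl "http://localhost:8500/v1/kv/results/?recurse"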

So, in summary, I am happy with how easily Consul integrates here, and I’m already finding it useful even though the product is only at its 0.4.1 release.

The post Streamlined Percona XtraDB Cluster (or anything) testing with Consul and Vagrant appeared first on MySQL Performance Blog.
