Sep
09
2020
--

Snyk bags another $200M at $2.6B valuation 9 months after last raise

When we last reported on Snyk in January, eons ago in COVID time, the company announced a $150 million investment at a valuation of over $1 billion. Today, barely nine months later, it announced another $200 million, and its valuation has expanded to $2.6 billion.

The company is obviously drawing some serious investor attention, and even a pandemic is not diminishing that interest. Addition led today’s round, bringing the total raised to $450 million with $350 million coming this year alone.

Snyk has a unique approach to security, building it into the development process instead of offloading it to a separate security team. If you want to build a secure product, you need to think about it as you’re developing the product, and that’s what Snyk’s product set is designed to do — check for security as you’re committing your build to your git repository.
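
The article doesn’t go into mechanics, but as a rough, non-authoritative illustration of that commit-time model, Snyk’s CLI can be run on a developer’s machine or wired into a git hook or CI step:

```
# install the Snyk CLI (distributed via npm) and authenticate
npm install -g snyk
snyk auth

# inside the project directory: check dependencies for known vulnerabilities,
# e.g. from a pre-commit hook or CI job
snyk test

# optionally snapshot the project so Snyk keeps monitoring it for new issues
snyk monitor
```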

With an open-source product at the top of the funnel to drive interest in the platform, CEO Peter McKay says the pandemic has only accelerated the company’s appeal. In fact, the startup’s annual recurring revenue (ARR) is growing at a remarkable 275% year over year.

McKay says even with the pandemic his company has been accelerating, adding 100 employees in the last 12 months to take advantage of the increasing revenue. “When others were kind of scaling back we invested and it worked out well because our business never slowed down. In fact, in a lot of the industries it really picked up,” he said.

That’s because, as many other founders have pointed out, COVID is speeding up the rate at which many companies are moving to the cloud, and that’s working in Snyk’s favor. “We’ve just capitalized on this accelerated shift to the cloud and modern cloud-native applications,” he said.

The company currently has 375 employees, with plans to add 100 more in the next year. As it grows, McKay says that he is looking to build a diverse and inclusive culture, something he learned about as he moved through his career at VMware and Veeam.

He says one of the keys at Snyk is putting every employee through unconscious bias training to help limit bias in the hiring process, and the executive team has taken a pledge to make the company’s hiring practices more diverse. Still, he recognizes it takes work to achieve these goals, and it’s always easy for an experienced team to go back to the network instead of digging deeper for a more diverse candidate pool.

“I think we’ve put all the pieces in place to get there, but I think like a lot of companies, there’s still a long way to go,” he said. But he recognizes the sooner you embed diversity into the company culture, the better because it’s hard to go back after the fact and do it.

Addition founder Lee Fixel says he sees a company that’s accelerating rapidly, and that’s why he was willing to make such a large investment. “Snyk’s impressive growth is a signal that the market is ready to embrace a change from traditional security and empower developers to tackle the new security risk that comes with a software-driven digital world,” he said in a statement.

Snyk was founded in 2015. The founders brought McKay on board for some experienced leadership in 2018 to help lead the company through its rapid growth. Prior to the $350 million in new money this year, the company raised $70 million in 2019.

May
27
2020
--

Docker expands relationship with Microsoft to ease developer experience across platforms

When Docker sold off its enterprise division to Mirantis last fall, that didn’t mark the end of the company. In fact, Docker still exists and has refocused as a cloud-native developer tools vendor. Today it announced an expanded partnership with Microsoft around simplifying running Docker containers in Azure.

As its new mission suggests, the partnership involves tighter integration between Docker and Microsoft developer tools, specifically Visual Studio Code and Azure Container Instances (ACI). According to Docker, it can take developers hours or even days to set up their containerized environment across the two sets of tools.

The idea of the integration is to make it easier, faster and more efficient to include Docker containers when developing applications with the Microsoft tool set. Docker CEO Scott Johnston says it’s a matter of giving developers a better experience.

“Extending our strategic relationship with Microsoft will further reduce the complexity of building, sharing and running cloud-native, microservices-based applications for developers. Docker and VS Code are two of the most beloved developer tools and we are proud to bring them together to deliver a better experience for developers building container-based apps for Azure Container Instances,” Johnston said in a statement.

Among the features being announced is the ability to log into Azure directly from the Docker command-line interface, a simplification that reduces the back-and-forth between the two sets of tools. What’s more, developers can set up a Microsoft ACI environment complete with a set of configuration defaults. Developers will also be able to switch easily between their local desktop instance and the cloud to run applications.
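
Based on the features described above, the developer workflow presumably looks something like the following sketch (the context name, resource group, and region are placeholder values, and exact flags may differ in the beta):

```
# log into Azure straight from the Docker command line
docker login azure

# create a Docker context backed by Azure Container Instances
docker context create aci myacicontext --resource-group myrg --location eastus

# switch between the local desktop engine and the cloud
docker context use myacicontext
docker run -d -p 80:80 nginx    # runs as an ACI container instance
docker context use default      # back to the local Docker engine
```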

These and other integrations are designed to make it easier for developers who use both Azure and Docker to work in the Microsoft cloud service without having to jump through a lot of extra hoops to do it.

It’s worth noting that these integrations are launching in beta, but the company says they should be generally released sometime in the second half of this year.

May
13
2020
--

VMware to acquire Kubernetes security startup Octarine and fold it into Carbon Black

VMware announced today that it intends to buy early-stage Kubernetes security startup Octarine and fold it into Carbon Black, a security company it bought last year for $2.1 billion. The company did not reveal the price of today’s acquisition.

According to a blog post announcing the deal from Patrick Morley, general manager and senior vice president of VMware’s Security Business Unit, Octarine should fit in with what Carbon Black calls its “intrinsic security strategy” — that is, protecting content and applications wherever they live. In the case of Octarine, that is cloud-native containers in Kubernetes environments.

“Acquiring Octarine enables us to advance intrinsic security for containers (and Kubernetes environments), by embedding the Octarine technology into the VMware Carbon Black Cloud, and via deep hooks and integrations with the VMware Tanzu platform,” Morley wrote in a blog post.

This also fits in with VMware’s Kubernetes strategy: the company previously purchased Heptio, an early Kubernetes company started by Craig McLuckie and Joe Beda, two of the people who helped develop Kubernetes at Google before starting their own company.

We covered Octarine last year when it released a couple of open-source tools to help companies define their Kubernetes security parameters. As we quoted head of product Julien Sobrier at the time:

Kubernetes gives a lot of flexibility and a lot of power to developers. There are over 30 security settings, and understanding how they interact with each other, which settings make security worse, which make it better, and the impact of each selection is not something that’s easy to measure or explain.

As for the startup, it now gets folded into VMware’s security business. While the CEO tried to put a happy face on the acquisition in a blog post, it seems its days as an independent entity are over. “VMware’s commitment to cloud native computing and intrinsic security, which have been demonstrated by its product announcements and by recent acquisitions, makes it an ideal home for Octarine,” company CEO Shemer Schwarz wrote in the post.

Octarine was founded in 2017 and has raised $9 million, according to PitchBook data.

Aug
27
2019
--

Kubernetes – Introduction to Containers

Here at Percona’s Training and Education department, we are always working to keep our materials up-to-date and relevant to current technologies. We also keep an eye out for topics that are “hot” and generating a lot of buzz. Unless you’ve been living under a rock for the past year, you know Kubernetes is the current buzzword/technology. This is the first post in a blog series in which we will dive into Kubernetes and explore using it with Percona XtraDB Cluster.

Editor Disclaimer: This post is not intended to convince you to switch to a containerized environment. It simply aims to educate readers about a growing trend in the industry.

In The Beginning…

Let’s start at the beginning. First, there were hardware-based servers; we still have these today. Despite the prevalence of “the cloud”, many large companies still use private datacenters and run their databases and applications “the traditional way.” Stepping up from hardware, we venture into virtual machine territory. Popular solutions here are VMware (a commercial product), QEMU, KVM, and VirtualBox (the latter three are open source).

Virtual machines (VMs) abstract away the hardware they run on using emulation software, generally referred to as the hypervisor. This methodology allows an entirely separate operating system (OS) to run “on top of” the OS running natively against the hardware. For example, you may have CentOS 7 running inside a VM, which in turn is running on your Mac laptop, or you could have Windows 10 running inside a VM, which is running on your Ubuntu 18 desktop PC.

There are several nice things about using VMs in your infrastructure. One is the ability to move a VM from one server to another. Likewise, deploying 10 more VMs just like the first one is usually just a couple of clicks or a few commands. Another is better utilization of hardware resources: if your servers have 8 CPU cores and 128GB of memory, running one application on that server might not fully utilize all that hardware. Instead, you could run multiple VMs on that one server, taking advantage of all the resources. Additionally, using a VM allows you to bundle your application and all associated libraries and configs into one “unit” that can be deployed and copied very easily (we’ll come back to this concept in a minute).

One of the major downsides of the VM solution is that you have an entire, separate OS running. It is a huge waste of resources to run a CentOS 7 VM on top of a CentOS 7 hardware server. If you have five CentOS 7 VMs running, you have five separate Linux kernels (in addition to the host kernel), five separate copies of /usr, and five separate copies of the same libraries and application code taking up precious memory. Some might say that, in this setup, 90-99% of each VM is duplicating effort already being made by the base OS. Additionally, each VM is dealing with its own CPU context-switching management. And what happens when one VM needs a bit more CPU than another? There is the concept of CPU “stealing,” where one VM can take CPU from another. Imagine if that were your critical DB that had CPU stolen!

What if you could eliminate all that duplication but still give your application the ability to be bundled and deployed across tens or hundreds of servers with just a few commands? Enter: containers.

What is a Container?

Simply put, a container is a type of “lightweight” virtual machine without all the overhead of running an independent operating system on top of a hypervisor. A container can be as small as a single-line script or as large as <insert super complicated application with lots of libraries, code, images, etc.>.

With containers, you essentially get all of the “pros” of VMs with far fewer “cons,” mainly in the area of performance. A container is not an emulation the way a VM is. On Linux, the kernel provides control groups (cgroups) functionality, which can limit and isolate CPU, memory, disk, and network usage. This isolation becomes the container: everything needed is ‘contained’ in one place/group. In more advanced setups, cgroups can even limit/throttle the amount of resources given to each container. With containers, there is no need for an additional OS install, no emulation, and no hypervisor. Because of this, you can achieve “near bare-metal performance” while using containers.
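
As a rough illustration (not from the original post, and assuming the older cgroup v1 filesystem layout), you can see cgroups at work directly from a shell:

```
# create a memory cgroup and cap it at 256MB
sudo mkdir /sys/fs/cgroup/memory/demo
echo $((256 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes

# move the current shell into the group; everything it launches is now capped
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs
```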

Containers, and any applications running inside them, run inside their own isolated namespaces; thus, a container can only see its own processes, disk contents, assigned devices, and granted network access.
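
Namespaces can be demonstrated directly with the unshare tool from util-linux (an illustrative aside, not part of the original post):

```
# start a shell in new PID and mount namespaces; --mount-proc remounts /proc
# so the shell sees only its own process tree
sudo unshare --pid --fork --mount-proc bash

# inside the namespace, ps shows only bash and ps itself, not the host's processes
ps aux
```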

Much like VMs, containers can be started, stopped, and copied from machine to machine, making deployment of applications across hundreds of servers quite easy.

One potential downside to containers is that they are inherently stateless. If you have a running container and make a change to its configuration or update a library, that state is not saved if the container is restarted; the container’s initial state will be restored on restart. That said, most container implementations provide a method for supplying external storage, which can hold stateful information that survives a restart or redeployment.
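
For example, with Docker (introduced below), external storage takes the form of volumes. A minimal sketch, where the password value is a placeholder:

```
# create a named volume and mount it into a container; data written under
# /var/lib/mysql survives container restarts and redeployments
docker volume create mysql-data
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql-data:/var/lib/mysql \
  percona/percona-server:8.0
```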

Dock the Ship, Capt’n

The concept of “containers” is generic. Many different implementations of containers exist, much like different distributions of Linux. The most popular implementation in the Linux space is Docker. Docker is free, open-source software, but is also sold with a commercial license for more advanced features. Docker was built upon one of the earlier Linux container implementations, LXC.

Docker uses the concept of “images” to distribute containers. For example, you can download and install the Percona MySQL 8.0 image on your server, and from that image create as many separate, independent ‘Percona MySQL 8’ containers as your hardware can handle. Without getting too deep into the specifics, Docker images are the end result of multiple compressed layers used during the build process. Herein lies another pro for Docker: if multiple images share common layers, only one copy of each shared layer needs to exist on a system, so the potential space savings can be huge.

Another nice feature of Docker containers is built-in versioning. If you are familiar with the version control software Git, then you already know the concept of “tags,” which is extremely similar in Docker. Tags allow you to track and specify Docker images when creating and deploying them. For example, using the same image (percona/percona-server), you can deploy percona/percona-server:5.7 or percona/percona-server:8.0. In this example, “5.7” and “8.0” are the tags used.
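
Pulling both tags side by side shows the tag and layer mechanics described above (illustrative commands, not from the original post):

```
docker pull percona/percona-server:5.7
docker pull percona/percona-server:8.0

# the two tags appear as separate images, but any layers they share
# are stored only once on disk
docker images percona/percona-server
```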

Docker also has a huge community ecosystem for sharing images. Visit https://hub.docker.com/ and search for just about any popular OSS project, and you will probably find that someone has already created an image for you to use.

Too Many Ships at Port

Docker containers are extremely simple to maintain, deploy, and use within your infrastructure. But what happens when you have hundreds of servers and you’re trying to manage thousands of Docker containers? Sure, if one server goes down, all you need to do, technically speaking, is re-launch those downed containers on another server. But do you have another server in your sea of servers with enough free capacity to handle the additional load? What if you were using persistent storage and need that volume moved along with the container? Running and managing large installations of containers can quickly become overwhelming.

Now we will discuss the topic of container orchestration.

Kubernetes

Kubernetes, pronounced koo-ber-net-ies and often abbreviated K8S (there are 8 letters between the ‘K’ and the final ‘S’), is the current mainstream project for managing Docker containers at very large scale. The name is Greek for “governor”, “helmsman”, or “captain”. Rather than repeating it here, you can read more about the history of K8S on your own.

Terminology

As you might have imagined, K8S has many moving parts. Take a look at this image.

[Diagram: Kubernetes architecture. Source: Wikipedia (modified)]

Nodes

At the bottom of the image are two ‘Kubernetes Nodes’. These are usually your physical hardware servers, but could also be VMs at your cloud provider. These nodes will have a minimum OS installed, along with other necessary packages like K8S and Docker.

Kubelet is a management binary used by K8S to watch existing containers, run health checks, and launch new containers.

Kube-proxy is a simple port-mapping proxy service. Let’s say you had a simple webserver running as a container on 172.14.11.109:34223. Your end users would access an external/public IP over port 443 as normal, and kube-proxy would map/route that external access to the correct internal container.
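
As a hedged sketch of how that mapping is typically wired up (the names here are illustrative, and kubectl specifics vary by version):

```
# run a webserver and expose it through a Service; kube-proxy programs each
# node so traffic arriving on the service port reaches the right container
kubectl create deployment webserver --image=nginx
kubectl expose deployment webserver --port=443 --target-port=80 --type=LoadBalancer
```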

Pods

The next concept to understand is a pod. This is typically the base term used when discussing what is running on a node. A pod can be a single container, or it can be an entire setup running multiple containers with mapped external storage that should be grouped together and managed as a single unit.
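
A minimal single-container pod definition looks like this (an illustrative example, not from the original post; multi-container pods simply list more entries under “containers”):

```
# create a pod from an inline manifest; names and image are illustrative
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
EOF
```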

Networking

Networking within K8S is a complex topic, and a deep discussion is not suitable for this post. At an entry level, what you need to know is that K8S creates an “overlay” network, allowing a container in a pod on node1 to talk to a container in a pod on node322, which may be in another datacenter. Should your company policy require it, you can create multiple overlay networks across your entire K8S cluster and use them to isolate traffic between pods (i.e., think old-school VLANs).

Master Node

At the top of the diagram, you can see the developer/operator interacting with the K8S master node. This node runs several processes which are critical to K8S operation:

kube-apiserver: A simple server that handles/routes/processes API requests from other K8S tools and even your own custom tools.

etcd: A highly available key-value store. This is where K8S stores all of its stateful data.

kube-scheduler: Manages inventory and decides where to launch new pods based on available node resources.

You might have noticed that there is only one master node in the above diagram. This is the default installation, but it is not highly available: should the master node go down, the entire K8S infrastructure will go down with it. In a true production environment, it is highly recommended that you run the master node in a highly available manner.
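
On a typical kubeadm-style installation, you can inspect these control-plane pieces yourself (illustrative commands, not from the original post):

```
kubectl get nodes                 # master and worker nodes with their status
kubectl get pods -n kube-system   # kube-apiserver, etcd, kube-scheduler, etc.
```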

Wrap It Up

In this post, we discussed how traditional hardware-based setups have been slowly migrating, first towards virtual machine setups and now to containerized deployments. We brought up the issues surrounding managing large container environments and introduced the leading container orchestration platform, Kubernetes. Lastly, we dove into K8S’s various components and architecture.

Once you have your K8S cluster set up with a master node and many worker nodes, you’ll finally be able to launch some pods and make services accessible. That is what we will do in part 2 of this blog series.

May
23
2019
--

Takeaways from KubeCon; the latest on Kubernetes and cloud native development

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller discuss major announcements that came out of the Linux Foundation’s European KubeCon/CloudNativeCon conference and discuss the future of Kubernetes and cloud-native technologies.

Nearly doubling in size year-over-year, this year’s KubeCon conference brought big news and big players, with major announcements coming from some of the world’s largest software vendors including Google, AWS, Microsoft, Red Hat, and more. Frederic and Ron discuss how the Kubernetes project grew to such significant scale and which new initiatives in cloud-native development show the most promise from both a developer and enterprise perspective.

“This ecosystem starts sprawling, and we’ve got everything from security companies to service mesh companies to storage companies. Everybody is here. The whole hall is full of them. Sometimes it’s hard to distinguish between them because there are so many competing start-ups at this point.

I’m pretty sure we’re going to see a consolidation in the next six months or so where some of the bigger players, maybe Oracle, maybe VMware, will start buying some of these smaller companies. And I’m sure the show floor will look quite different about a year from now. All the big guys are here because they’re all trying to figure out what’s next.”

Frederic and Ron also dive deeper into the startup ecosystem rapidly developing around Kubernetes and other cloud-native technologies and offer their take on what areas of opportunity may prove to be most promising for new startups and founders down the road.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

Apr
23
2019
--

Harness hauls in $60M Series B investment on $500M valuation

Series B rounds used to be about establishing product-market fit, but for some startups the whole process seems to be accelerating. Harness, the startup founded by AppDynamics co-founder and CEO Jyoti Bansal, is one of those companies. Bansal is putting the pedal to the metal with his second startup, taking his learnings and a $60 million round to build the company much more quickly.

Harness already has an eye-popping half-billion-dollar valuation. It’s not terribly often that I hear valuations come up in a Series B discussion; more typically, CEOs want to talk growth rates. But Bansal volunteered the information, excited by the startup’s rapid development.

The round was led by IVP, GV (formerly Google Ventures) and ServiceNow Ventures. Existing investors Big Labs, Menlo Ventures and Unusual Ventures also participated. Today’s investment brings the total raised to $80 million, according to Crunchbase data.

Bansal obviously made a fair bit of money when he sold AppDynamics to Cisco in 2017 for $3.7 billion and he could have rested after his great success. Instead he turned his attention almost immediately to a new challenge, helping companies move to a new continuous delivery model more rapidly by offering Continuous Delivery as a Service.

As companies move to containers and the cloud, they face challenges implementing new software delivery models. As is often the case, large web scale companies like Facebook, Google and Netflix have the resources to deliver these kinds of solutions quickly, but it’s much more difficult for most other companies.

Bansal saw an opportunity here to package continuous delivery approaches as a service. “Our approach in the market is Continuous Delivery as a Service, and instead of you trying to engineer this, you get this platform that can solve this problem and bring you the best tooling that a Google or Facebook or Netflix would have,” Bansal explained.

The approach has gained traction quickly. The company has grown from 25 employees at launch in 2017 to 100 today. It boasts 50 enterprise customers including Home Depot, Santander Bank and McAfee.

He says that the continuous delivery piece could just be a starting point, and the money from the round will be plowed back into engineering efforts to expand the platform and solve other problems DevOps teams face with a modern software delivery approach.

Bansal admits that it’s unusual to have this kind of traction this early, and he says that his growth is much faster than it was at AppDynamics at the same stage, but he believes the opportunity here is huge as companies look for more efficient ways to deliver software. “I’m a little bit surprised. I thought this was a big problem when I started, but it’s an even bigger problem than I thought and how much pain was out there and how ready the market was to look at a very different way of solving this problem,” he said.

Apr
03
2019
--

Container security startup Aqua lands $62M Series C

Aqua Security, a startup that helps customers launch containers securely, announced a $62 million Series C investment today led by Insight Partners.

Existing investors Lightspeed Venture Partners, M12 (Microsoft’s venture fund), TLV Partners and Shlomo Kramer also participated. With today’s investment, the startup’s total funding since inception is over $100 million, according to the company.

Early investors took a chance on the company when it was founded in 2015. Containers were barely a thing back then, but the founders had a vision of what was coming down the pike and their bet has paid off in a big way as the company now has first-mover advantage. As more companies turn to Kubernetes and containers, the need for a security product built from the ground up to secure this kind of environment is essential.

While co-founder and CEO Dror Davidoff says the company has 60 Fortune 500 customers, he’s unable to share names, though he can provide some clues, such as the fact that five of the world’s top banks are customers. As companies like that turn to new technology like containers, they aren’t going to go whole hog without a solid security option. Aqua gives them that.

“Our customers are all taking very dramatic steps towards adoption of those new technologies, and they know that existing security tools that they have in place will not solve the problems,” Davidoff told TechCrunch. He said that most customers have started small, but then have expanded as container adoption increases.

You may think that an ephemeral concept like a container would be less of a security threat, but Davidoff says that the open nature of containerization actually leaves containers vulnerable to tampering. “Container lives long enough to be dangerous,” he said. He added, “They are structured in an open way, making it simple to hack, and once in, to do lateral movement. If the container holds sensitive info, it’s easy to have access to that information.”

Aqua scans container images for malware and makes sure only certified images can run, making it difficult for a bad actor to insert an insecure image, but the ephemeral nature of containers also helps if something slips through: DevOps teams can simply take down the faulty container and quickly put up a newly certified, clean one.

The company has 150 employees, with offices in the Boston area and R&D in Tel Aviv, Israel. With the new influx of cash, the company plans to expand quickly, growing sales and marketing and customer support, and extending the platform to cover emerging areas like serverless computing. Davidoff says the company could double in size in the next 12-18 months, and he’s expecting 3x to 4x customer growth.

All of that money should provide fuel to grow the company as containerization spreads and companies look for a security solution to keep containers in production safe.

Mar
20
2019
--

Blameless emerges from stealth with $20M investment to help companies transition to SRE

Site Reliability Engineering (SRE) is an extension of DevOps designed for more complex environments. The problem is that this type of approach is difficult to implement, and has usually only been within reach of large companies that can build custom software. Blameless, a Bay Area startup, wants to put it within reach of everyone. It emerged from stealth today with an SRE platform for the masses and around $20 million in funding.

For starters, the company announced two rounds of funding: $3.6 million in seed money last April and a $16.5 million Series A investment in January. Investors included Accel, Lightspeed Venture Partners and others.

Company co-founder and CEO Ashar Rizqi knows firsthand just how difficult it is to implement an SRE system. He built custom systems for Box and MuleSoft before launching Blameless two years ago. He and his co-founder, COO Lyon Wong, saw a gap in the market where companies that wanted to implement SRE were limited by a lack of tooling, and decided to build it themselves.

Rizqi says SRE changes the way you work and interact, and Blameless gives structure to that change. “It changes the way you communicate, prioritize and work, but we’re adding data and metrics to support that shift,” he said.

[Screenshot: Blameless]

As companies move to containers and continuous delivery models, it brings a level of complexity to managing the developers, who are working to maintain the delivery schedule, and operations, who must make sure the latest builds get out with a minimum of bugs. It’s not easy to manage, especially given the speed involved.

Over time, the bugs build up and blame circulates around the DevOps team as they surface. The company’s name comes from the idea that its platform removes blame from the equation by providing the tooling to get deeper visibility into all aspects of the delivery model.

At that point, companies can understand more clearly the kinds of compromises they need to make to get products out the door, rather than randomly building up this technical debt over time. This is exacerbated by the fact that companies are building their software from a variety of sources, whether open source or API services, and it’s hard to know the impact that external code is having on your product.

“Technical debt is accelerating as there is greater reliance on microservices. It’s a black box. You don’t own all the lines of code you are executing,” Rizqi explained. His company’s solution is designed to help with that problem.

The company currently has 23 employees and 20 customers including DigitalOcean and Home Depot.

Aug
07
2018
--

Evolute debuts enterprise container migration and management platform

Evolute, a three-year-old startup out of Mountain View, officially launched the Evolute platform today with the goal of helping large organizations migrate applications to containers and manage those containers at scale.

Evolute founder and CEO Kristopher Francisco says he wants to give all Fortune 500 companies access to the same technology that big companies like Apple and Google enjoy because of their size and scale.

“We’re really focused on enabling enterprise companies to do two things really well. The first thing is to be able to systematically move into the container technology. And the second thing is to be able to run operationally at scale with existing and new applications that they’re creating in their enterprise environment,” Francisco explained.

While there are a number of sophisticated competing technologies out there, he says that his company has come up with some serious differentiators. For starters, getting legacy tech into containers has proven a time-consuming and challenging process. In fact, he says manually moving a legacy app and all its dependencies to a container has typically taken 3-6 months per application.

He claims his company has reduced that process to minutes, putting containerization within reach of just about any large organization that wants to move their existing applications to container technology, while reducing the total ramp-up time to convert a portfolio of existing applications from years to a couple of weeks.

[Evolute management console. Screenshot: Evolute]

The second part of the equation is managing the containers, and Francisco acknowledges that there are other platforms out there for running containers in production including Kubernetes, the open source container orchestration tool, but he says his company’s ability to manage containers at scale separates him from the pack.

“In the enterprise, the reason that you see the [containerization] adoption numbers being so low is partially because of the scale challenge they face. In the Evolute platform, we actually provide them the native networking, security and management capabilities to be able to run at scale,” he said.

The company also announced that it has been invited to join Chevron Technology Ventures’ Catalyst Program, which provides support for early-stage companies like Evolute. This could help push Evolute to business units inside Chevron looking to move into containerization technology, and be a big boost for the startup.

The company has been around since 2015 and boasts several other Fortune 500 companies beyond Chevron as customers, although it is not in a position to name them publicly just yet. The company has five full-time employees and has raised $500,000 in seed money across two rounds, according to data on Crunchbase.

Nov
21
2017
--

Capital One begins journey as a software vendor with the release of Critical Stack Beta

If every company is truly a software company, Capital One is out to prove it. It was one of the early users of Critical Stack, a tool designed to help build security into the container orchestration process. In fact, it liked the tool so much it bought the company in 2016, and today it’s releasing Critical Stack in beta. This is a critical step toward becoming a commercial product, giving…
