Aug 15, 2018

Twistlock snares $33 million Series C investment to secure cloud native environments

As the world shifts to a cloud native approach, the way you secure applications as they get deployed is changing too. Twistlock, a company built from the ground up to secure cloud native environments, announced a $33 million Series C round today led by Iconiq Capital.

Previous investors YL Ventures, TenEleven, Rally Ventures, Polaris Partners and Dell Technologies Capital also participated in the round. The company reports it has received a total of $63 million in venture investment to date.

Twistlock is solving a hard problem around securing containers and serverless functions, which are by their nature ephemeral. They can live for fractions of a second, making it hard to track problems when they happen. According to company CEO and co-founder Ben Bernstein, his company came out of the gate building a security product designed to protect a cloud-native environment with the understanding that while containers and serverless computing may be ephemeral, they are still exploitable.

“It’s not about how long they live, but about the fact that the way they live is more predictable than a traditional computer, which could be running for a very long time and might have humans actually using it,” Bernstein said.

Screenshot: Twistlock

As companies move to a cloud native environment using Dockerized containers and managing them with Kubernetes and other tools, they create a highly automated system to deal with the deployment volume. While automation simplifies deployment, it can also leave companies vulnerable to a host of issues. For example, if a malicious actor were to get control of the process via a code injection attack, they could cause a lot of problems without anyone knowing about it.

Twistlock is built to help prevent that, while also helping customers recognize when an exploit happens and performing forensic analysis to figure out how it happened.

It’s not a traditional software-as-a-service product as we’ve come to think of it. Instead, it is a service that gets installed on whatever public or private cloud the customer is using. So far, the company counts just over 200 customers, including Walgreens, Aetna and a slew of other companies you would definitely recognize, but that it couldn’t name publicly.

The company, which was founded in 2015, is based in Portland, Oregon, with its R&D arm in Israel. It currently has 80 employees. Bernstein said that, from a competitive standpoint, the traditional security vendors are having trouble reacting to cloud native, and while he sees some startups working at it, he believes his company has the most mature offering, at least for now.

“We don’t have a lot of competition right now, but as we start progressing we will see more,” he said. He plans to use the new funding to expand the company’s marketing and sales arm to continue growing its customer base, as well as engineering, to stay ahead of that competition as the cloud-native security market continues to develop.

Jul 31, 2018

The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open-source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open-source projects out there right now. It sits at the intersection of a number of industry trends, like containers, microservices and serverless computing, and makes it easier for enterprises to embrace them. Istio now has more than 200 contributors, and the code has seen more than 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
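
To make the canary mechanism concrete, here is a minimal sketch of the kind of weighted routing rule Istio uses for this, built as a Python dict and rendered to the YAML manifest you would apply to the mesh. The “reviews” service and its v1/v2 subsets are hypothetical stand-ins, not anything from the release itself:

```python
# A sketch of an Istio VirtualService that splits traffic for a canary
# release: 90 percent to the stable subset, 10 percent to the new one.
# Service name and subsets ("reviews", v1, v2) are hypothetical.
import yaml  # PyYAML

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"},
                 "weight": 90},  # stable version keeps most traffic
                {"destination": {"host": "reviews", "subset": "v2"},
                 "weight": 10},  # the canary gets a small slice
            ],
        }],
    },
}

# Print the manifest; in practice you would feed this to kubectl apply.
print(yaml.safe_dump(virtual_service, default_flow_style=False))
```

Ramping the second weight from 10 toward 100 shifts users onto the new version without touching application code, which is what makes canary releases cheap to run.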

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open-source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem a lot of businesses are facing today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and while we hadn’t yet declared 1.0, we had a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open-source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open-source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

Jul 24, 2018

Google Cloud goes all-in on hybrid with its new Cloud Services Platform

The cloud isn’t right for every business, be it because of latency constraints at the edge, regulatory requirements or because it’s simply cheaper to own and operate their own data centers for their specific workloads. Given this, it’s maybe no surprise that the vast majority of enterprises today use both public and private clouds in parallel. That’s something Microsoft has long been betting on as part of its strategy for its Azure cloud, and Google, too, is now taking a number of steps in this direction.

With the open-source Kubernetes project, Google launched one of the fundamental building blocks that make running and managing applications in hybrid environments easier for large enterprises. What Google hadn’t done until today, though, is launch a comprehensive solution that includes all of the necessary parts for this kind of deployment. With its new Cloud Services Platform, though, the company is now offering businesses an integrated set of cloud services that can be deployed on both the Google Cloud Platform and in on-premise environments.

As Google Cloud engineering director Chen Goldberg noted in a press briefing ahead of today’s announcement, many businesses also simply want to be able to manage their own workloads on-premise but still be able to access new machine learning tools in the cloud, for example. “Today, to achieve this, use cases involve a compromise between cost, consistency, control and flexibility,” she said. “And this all negatively impacts the desired result.”

Goldberg stressed that the idea behind the Cloud Services Platform is to meet businesses where they are and then allow them to modernize their stack at their own pace. But she also noted that businesses want more than just the ability to move workloads between environments. “Portability isn’t enough,” she said. “Users want consistent experiences so that they can train their team once and run anywhere — and have a single playbook for all environments.”

The two services at the core of this new offering are the Kubernetes container orchestration tool and Istio, a relatively new but quickly growing tool for connecting, managing and securing microservices. Istio is about to hit its 1.0 release.

We’re not simply talking about a collection of open-source tools here. The core of the Cloud Services Platform, Goldberg noted, is “custom configured and battle-tested for enterprises by Google.” In addition, it is deeply integrated with other services in the Google Cloud, including the company’s machine learning tools.

GKE On-Prem

Among these custom-configured tools are a number of new offerings, all part of the larger platform. Maybe the most interesting of these is GKE On-Prem. GKE, the Google Kubernetes Engine, is the core Google Cloud service for managing containers in the cloud. And now Google is essentially bringing this service to the enterprise data center, too.

The service includes access to all of the usual features of GKE in the cloud, including the ability to register and manage clusters and monitor them with Stackdriver, as well as identity and access management. It also includes a direct line to the GCP Marketplace, which recently launched support for Kubernetes-based applications.

Using the GCP Console, enterprises can manage both their on-premise and GKE clusters without having to switch between different environments. GKE On-Prem connects seamlessly to a Google Cloud Platform environment and looks and behaves exactly like the cloud version.

Enterprise users can also get access to professional services and enterprise-grade support to help with managing the service.

“Google Cloud is the first and only major cloud vendor to deliver managed Kubernetes on-prem,” Goldberg argued.

GKE Policy Management

Related to this, Google also today announced GKE Policy Management, which is meant to provide Kubernetes administrators with a single tool for managing all of their security policies across clusters. It’s agnostic as to where the Kubernetes cluster is running, but you can use it to port your existing Google Cloud identity-based policies to these clusters. This new feature will soon launch in alpha.

Managed Istio

The other major new service Google is launching is Managed Istio (together with Apigee API Management for Istio) to help businesses manage and secure their microservices. The open source Istio service mesh gives admins and operators the tools to manage these services and, with this new managed offering, Google is taking the core of Istio and making it available as a managed service for GKE users.

With this, users get access to Istio’s service discovery mechanisms and its traffic management tools for load balancing and routing traffic to containers and VMs, as well as its tools for getting telemetry back from the workloads that run on these clusters.

In addition to these three main new services, Google is also launching a couple of auxiliary tools around GKE and the serverless computing paradigm today. The first of these is the GKE serverless add-on, which makes it easy to run serverless workloads on GKE with a single-step deploy process. This, Google says, will allow developers to go from source code to container “instantaneously.” The tool is currently available as a preview, and Google is making parts of the underlying technology available under the umbrella of its new Knative open-source components, the same components that make the serverless add-on possible.

And to wrap it all up, Google also today mentioned a new fully managed continuous integration and delivery service, Google Cloud Build, though the details around this service remain under wraps.

So there you have it. By themselves, all of those announcements may seem a bit esoteric. As a whole, though, they show how Google’s bet on Kubernetes is starting to pay off. As businesses opt for containers to deploy and run their new workloads (and maybe even bring older applications into the cloud), GKE has put Google Cloud on the map as a hosted environment to run them. Now it makes sense for Google to extend this to its users’ data centers, too. With managed Kubernetes offerings from companies large and small, including SUSE, Platform9 and Containership, this is starting to become a big business. It’s no surprise the company that started it all wants to get a piece of this pie, too.

Jul 8, 2018

Serverless computing could unleash a new startup ecosystem

While serverless computing isn’t new, it has reached an interesting place in its development. As developers begin to see the value of serverless architecture, a whole new startup ecosystem could begin to develop around it.

Serverless isn’t exactly serverless at all, but it does enable a developer to set event triggers and leave the infrastructure requirements completely to the cloud provider. The vendor delivers exactly the right amount of compute, storage and memory and the developer doesn’t even have to think about it (or code for it).

That sounds ideal on its face, but as with every new technology, each solution brings a set of new problems, and those issues tend to represent openings for enterprising entrepreneurs. That could mean big opportunities in the coming years for companies building the security, tooling, libraries, APIs, monitoring and the whole host of other pieces serverless will likely require as it evolves.

Building layers of abstraction

In the beginning we had physical servers, but there was lots of wasted capacity. That led to the development of virtual machines, which enabled IT to take a single physical server and divide it into multiple virtual ones. While that was a huge breakthrough for its time, one that helped launch successful companies like VMware and paved the way for cloud computing, it was only the beginning.

Then came containers, which really began to take off with the development of Docker and Kubernetes, two open source platforms. Containers enable the developer to break down a large monolithic program into discrete pieces, which helps it run more efficiently. More recently, we’ve seen the rise of serverless or event-driven computing. In this case, the whole idea of infrastructure itself is being abstracted away.

While it’s not truly serverless, since you need underlying compute, storage and memory to run a program, it is removing the need for developers to worry about servers. Today, so much coding goes into connecting the program’s components to run on whatever hardware (virtual or otherwise) you have designated. With serverless, the cloud vendor handles all of that for the developer.

All of the major vendors have launched serverless products, with AWS Lambda, Google Cloud Functions and Microsoft Azure Functions all offering a similar approach. But serverless has the potential to be more than just another way to code. It could eventually shift the way we think about programming and its relation to the underlying infrastructure altogether.

It’s important to understand that we aren’t quite there yet, and a lot of work still needs to happen for serverless to really take hold, but it has enormous potential to be a startup feeder system in coming years and it’s certainly caught the attention of investors looking for the next big thing.

Removing another barrier to entry

Tim Wagner, general manager for AWS Lambda, says the primary advantage of serverless computing is that it allows developers to strip away all of the challenges associated with managing servers. “So there is no provisioning, deploying, patching or monitoring — all those details at the server and operating system level go away,” he explained.

He says this allows developers to reduce the entire coding process to the function level. The programmer defines the event or function and the cloud provider figures out the exact amount of underlying infrastructure required to run it. Mind you, this can be as little as a single line of code.
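
For illustration, here is what that looks like in practice: a complete AWS Lambda handler in Python. Only the (event, context) signature is something Lambda actually requires; the function name, the event field and the API Gateway-style response shape are arbitrary choices for this sketch.

```python
import json

def handler(event, context):
    # Lambda invokes this function once per trigger event and provisions
    # the underlying compute, memory and scaling on its own.
    name = event.get("name", "world")
    return {
        "statusCode": 200,  # shaped like an API Gateway proxy response
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```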

Sarah Guo, a partner at Greylock Partners who invests in early-stage companies, sees serverless computing as offering a way for developers to concentrate on just the code by leaving the infrastructure management to the provider. “If you look at one of the amazing things cloud computing platforms have done, it has just taken a lot of the expertise and cost that you need to build a scalable service and shifted it to [the cloud provider],” she said. Serverless takes that concept and shifts it even further by allowing developers to concentrate solely on the user’s needs without having to worry about what it takes to actually run the program.

Survey says…

Cloud computing company Digital Ocean recently surveyed over 4,800 IT pros, 55 percent of whom identified themselves as developers. When asked about serverless, nearly half of respondents reported they didn’t fully understand the concept. On the other hand, they certainly recognized the importance of learning more about it, with 81 percent reporting that they plan to do further research this year.

When asked if they had deployed a serverless application in the last year, not surprisingly about two-thirds reported they hadn’t. This was consistent across regions with India reporting a slightly higher rate of serverless adoption.

Graph: Digital Ocean

Of those using serverless, Digital Ocean found that AWS was by far the most popular service with 58 percent of respondents reporting Lambda was their chosen tool, followed by Google Cloud Functions with 23 percent and Microsoft Azure Functions further back at 10 percent.

Interestingly enough, one of the reasons respondents reported a reluctance to begin adopting serverless was a lack of tooling. “One of the biggest challenges developers report when it comes to serverless is monitoring and debugging,” the report stated. That lack of visibility, however, could also represent an opening for startups.

Creating ecosystems

The thing about abstraction is that it simplifies operations on one level, but it also creates a new set of requirements, some expected and some that might surprise as a new way of programming scales. The current lack of tooling could hinder serverless adoption, but more often than not, necessity ends up stimulating the development of a new set of instrumentation.

This is certainly something that Guo recognizes as an investor. “I think there is a lot of promise as we improve a bunch of things around making it easier for developers to access serverless, while expanding the use cases, and concentrating on issues like visibility and security, which are all [issues] when you give more and more control of [the infrastructure] to someone else,” she said.

Ping Li, general partner at Accel, also sees an opportunity here for investors. “I think the reality is that anytime there’s a kind of shift from a developer application perspective, there’s an opportunity to create a new set of tools or products that help you enable those platforms,” he said.

Li says the promise is there, but it won’t happen right away because there needs to be a critical mass of developers using serverless methodologies first. “I would say that we are definitely interested in serverless in that we believe it’s going to be a big part of how applications will be built in the future, but it’s still in its early stages,” Li said.

S. Somasegar, managing director at Madrona Venture Group, says that even as serverless removes complexity, it creates a new set of issues, which in turn creates openings for startups. “It is complicated because we are trying to create this abstraction layer over the underlying infrastructure and telling the developers that you don’t need to worry about it. But that means there are a lot of tools that have to exist in place — whether it is development tools, deployment tools, debugging tools or monitoring tools — that enable the developer to know certain things are happening when you’re operating in a serverless environment.”

Beyond tooling

Having that visibility in a serverless world is a real challenge, but it is not the only opening here. There are also opportunities for trigger or function libraries, or for companies akin to Twilio or Stripe, which offer easy API access to a set of functionality without requiring particular expertise in areas like communications or payment gateways. There could be analogous needs in the serverless world.

Companies are beginning to take advantage of serverless computing to find new ways of solving problems. Over time, we should begin to see more developer momentum toward this approach and more tools develop.

While it is early days, as Guo says, it’s not as though developers love running infrastructure; it has just been a necessity. “I think [it] will be very interesting. I just think we’re still very early in the ecosystem,” she said. Yet the potential is certainly there: if the pieces fall into place and programmer momentum builds around this way of developing applications, serverless could really take off, and a startup ecosystem would follow.

Apr 3, 2018

Stackery lands $5.5 million for serverless platform

When Stackery’s founders were still at New Relic in 2014, they recognized there was an opportunity to provide instrumentation for the emerging serverless tech market. They left the company after New Relic’s IPO and founded Stackery with the goal of providing a governance and management layer for serverless architecture.

The company had a couple of big announcements today, starting with its $5.5 million round, which it is calling a “seed plus,” and a new tool for tracking serverless performance called the Health Metrics Dashboard.

Let’s start with the funding round. Why the “seed plus” designation? Company co-founder and CEO Nathan Taggart says they could have done an A round, but the designation reflects the reality of where their potential market is today. “From our perspective, there was an appetite for an A, but the Seed Plus represents the current stage of the market,” he said. That stage is still emerging, as companies are only beginning to see the benefits of the serverless approach.

HWVP led the round. Voyager Capital, Pipeline Capital Partners, and Founders’ Co-op also participated. Today’s investment brings the total raised to $7.3 million since the company was founded in 2016.

Serverless computing platforms like AWS Lambda or Azure Functions are a bit of a misnomer. There is a server underlying the program, but instead of maintaining a dedicated server for your particular application, you pay only when there is a trigger event. Like the cloud computing that came before it, developers love serverless because it saves them a ton of time configuring (or begging) for resources for their applications.
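
As a sketch of that trigger model, here is a function wired to a storage upload. The event shape follows AWS’s documented S3 notification format, while the handler name and the print-based “processing” are placeholders:

```python
def handle_upload(event, context):
    # Runs only when the configured bucket receives a new object; there is
    # no dedicated server to keep warm, or to pay for, between uploads.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"Processing new object s3://{bucket}/{key}")
```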

But as with traditional cloud computing — serverless is actually a cloud service — developers can easily access it. If you think back to the Consumerization of IT phenomenon that began around 2011, it was this ability to procure cloud services so easily that resulted in a loss of control inside organizations.

As it was back then, companies want the advantages of serverless technology, but they also want to know how much they are paying, who’s using it, and that it’s secure and in compliance with all the rules of the organization. That’s where Stackery comes in.

As for the new Health Metrics Dashboard, that’s an extension of this vision, one that fits in quite well with the founders’ monitoring roots. Serverless often involves containers, which can encompass many functions, and when something goes wrong, it’s hard to trace the root cause.

Stackery Health Metrics Dashboard. Photo: Stackery

“We are showing architecture-wide throughput and performance at each resource point, and [developers] can figure out where there are bottlenecks, performance problems or failures,” Taggart said.

Based in Portland, Oregon, the company currently has nine employees, five of whom are engineers, and plans to bring on three more by the end of the year.
