Jun 27, 2019

We’re talking Kubernetes at TC Sessions: Enterprise with Google’s Aparna Sinha and VMware’s Craig McLuckie

Over the past five years, Kubernetes has grown from a project inside of Google to an open source powerhouse with an ecosystem of products and services, attracting billions of dollars in venture investment. In fact, we’ve already seen some successful exits, including one from one of our panelists.

On September 5th at TC Sessions: Enterprise, we’re going to be discussing the rise of Kubernetes with two industry veterans. For starters, we have Aparna Sinha, director of product management for Kubernetes and the newly announced Anthos product. Sinha was in charge of several early Kubernetes releases and has worked on the Kubernetes team at Google since 2016. Prior to joining Google, she had 15 years of experience in enterprise software settings.

Craig McLuckie will also be joining the conversation. He’s one of the original developers of Kubernetes at Google. He went on to found his own Kubernetes startup, Heptio, with Joe Beda, another Google Kubernetes alum. They sold the company to VMware last year for $505 million after raising $33.5 million, according to Crunchbase data.

The two bring a vast reservoir of knowledge and will be discussing the history of Kubernetes, why Google decided to open source it and how it came to grow so quickly. Two other Kubernetes luminaries will be joining them. We’ll have more about them in another post soon.

Kubernetes is a container orchestration engine. Instead of building large monolithic applications that sit on virtual machines, developers break an application into containers, each running a small part of it. As the components get smaller, an orchestration layer is needed to spin containers up when they are needed and shut them down when they are no longer required. Kubernetes acts as the orchestra leader.
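
To make the orchestration idea concrete, here is a minimal sketch using the official Kubernetes Python client: you declare how many copies of a container you want running, and Kubernetes starts or removes containers to match that declared state. The names and image below are illustrative placeholders, and the snippet assumes access to a running cluster.

    from kubernetes import client, config

    config.load_kube_config()  # reads cluster credentials from ~/.kube/config

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # desired state: keep three copies of the container running
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )

    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="default", body=deployment)

    # Scaling up or down is just a change to the declared state; Kubernetes
    # starts or removes containers to converge on it.
    apps.patch_namespaced_deployment(
        name="web", namespace="default", body={"spec": {"replicas": 5}}
    )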

As Kubernetes, containerization and the cloud-native ethos they encompass have grown, they have helped drive the enterprise shift to the cloud in general. If you can write your code once and run it in the cloud or on-prem, you don’t have to manage applications using different tool sets, and that has had broad appeal for enterprises making the move to the cloud.

TC Sessions: Enterprise (September 5 at San Francisco’s Yerba Buena Center) will take on the big challenges and promise facing enterprise companies today. TechCrunch’s editors will bring to the stage founders and leaders from established and emerging companies to address rising questions, like the promised revolution from machine learning and AI, intelligent marketing automation and the inevitability of the cloud, as well as the outer reaches of technology, like quantum computing and blockchain.

Tickets are now available for purchase on our website at the early-bird rate of $395; student tickets are just $75.

We have a limited number of Startup Demo Packages available for $2,000, each of which includes four tickets to attend the event.

For each ticket purchased for TC Sessions: Enterprise, you will also be registered for a complimentary Expo Only pass to TechCrunch Disrupt SF on October 2-4.

May 29, 2019

Logz.io lands $52M to keep growing open source-based logging tools

Logz.io announced a $52 million Series D investment today. The round was led by General Catalyst.

Other investors participating in the round included OpenView Ventures, 83North, Giza Venture Capital, Vintage Investment Partners, Greenspring Associates and Next47. Today’s investment brings the total raised to nearly $100 million, according to Crunchbase data.

Logz.io is a company built on top of the open-source tools Elasticsearch, Logstash and Kibana (collectively known by the acronym ELK) and Grafana. It takes a typical open-source business approach to those tools, packaging them up and offering them as a service. This approach enables large organizations to take advantage of these tools without having to deal with the raw open-source projects.

The company’s solution intelligently scans logs looking for anomalies. When it finds them, it surfaces the problem and informs IT or security, depending on the scenario, using a tool like PagerDuty. This area of the market has been dominated in recent years by vendors like Splunk and Sumo Logic, but company founder and CEO Tomer Levy saw a chance to disrupt that space by packaging a set of open-source logging tools that were rapidly increasing in popularity. He and his co-founders believed they could build on that growing popularity while solving a pain point they had experienced in previous positions, which is always a good starting point for a startup idea.

Screenshot: Logz.io

“We saw that the majority of the market is actually using open source. So we said, we want to solve this problem, a problem we have faced in the past and didn’t have a solution. What we’re going to do is we’re going to provide you with an easy-to-use cloud service that is offering an open-source compatible solution,” Levy explained. In other words, they wanted to build on that open-source idea, but offer it in a form that was easier to consume.

Larry Bohn, who is leading the investment for General Catalyst, says that his firm liked the idea of a company building on top of open source because it provides a built-in community of developers to drive the startup’s growth — and it appears to be working. “The numbers here were staggering in terms of how quickly people were adopting this and how quickly it was growing. It was very clear to us that the company was enjoying great success without much of a commercial orientation,” Bohn explained.

In fact, Logz.io already has 700 customers, including large names like Schneider Electric, The Economist and British Airways. The company has 175 employees today, but Levy says they expect to grow that by 250 by the end of this year, as they use this money to accelerate their overall growth.

May 28, 2019

The challenges of truly embracing cloud native

There is a tendency at any conference to get lost in the message. Spending several days immersed in any subject tends to do that. The purpose of such gatherings is, after all, to sell the company or technologies being featured.

Against the beautiful backdrop of the city of Barcelona last week, we got the full cloud native message at KubeCon and CloudNativeCon. The Cloud Native Computing Foundation (CNCF), which houses Kubernetes and related cloud native projects, had certainly honed the message, along with the community that came to celebrate the project’s fifth anniversary. The large crowds that wandered the long hallways of the Fira Gran Via conference center proved it was getting through, at least to a specific group.

Cloud native computing involves a combination of software containerization along with Kubernetes and a growing set of adjacent technologies to manage and understand those containers. It also involves the idea of breaking down applications into discrete parts known as microservices, which in turn leads to a continuous delivery model, where developers can create and deliver software more quickly and efficiently. At the center of all this is the notion of writing code once and being able to deliver it on any public cloud, or even on-prem. These approaches were front and center last week.

Five years in, many developers have embraced these concepts, but cloud native projects have reached a size and scale where they need to move beyond the early adopters and true believers and make their way deep into the enterprise. It turns out that it might be a bit harder for larger companies with hardened systems to make wholesale changes in the way they develop applications, just as it is difficult for large organizations to take on any type of substantive change.

Putting up stop signs

May 23, 2019

Takeaways from KubeCon; the latest on Kubernetes and cloud native development

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller discuss the major announcements that came out of the Linux Foundation’s European KubeCon/CloudNativeCon conference and the future of Kubernetes and cloud-native technologies.

Nearly doubling in size year-over-year, this year’s KubeCon conference brought big news and big players, with major announcements coming from some of the world’s largest software vendors including Google, AWS, Microsoft, Red Hat, and more. Frederic and Ron discuss how the Kubernetes project grew to such significant scale and which new initiatives in cloud-native development show the most promise from both a developer and enterprise perspective.

“This ecosystem starts sprawling, and we’ve got everything from security companies to service mesh companies to storage companies. Everybody is here. The whole hall is full of them. Sometimes it’s hard to distinguish between them because there are so many competing start-ups at this point.

“I’m pretty sure we’re going to see a consolidation in the next six months or so where some of the bigger players, maybe Oracle, maybe VMware, will start buying some of these smaller companies. And I’m sure the show floor will look quite different about a year from now. All the big guys are here because they’re all trying to figure out what’s next.”

Frederic and Ron also dive deeper into the startup ecosystem rapidly developing around Kubernetes and other cloud-native technologies and offer their take on what areas of opportunity may prove to be most promising for new startups and founders down the road.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

May 23, 2019

Serverless and containers: Two great technologies that work better together

Cloud native models, which use containerized software in a continuous delivery approach, could benefit from serverless computing, in which the cloud vendor provisions exactly the amount of resources required to run a workload, on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, it may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on prem or any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the right amount of resources to run a particular workload for the right amount of time and no more.
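
In the function flavor of serverless, the developer’s entire unit of deployment is a handler; the platform allocates compute only while it runs and bills per invocation. Here is a minimal sketch using AWS Lambda’s Python handler signature (the payload fields are illustrative):

    import json

    def lambda_handler(event, context):
        # `event` carries the trigger payload (an HTTP request, a queue message, etc.);
        # `context` describes the invocation. No servers are provisioned or managed here.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"}),
        }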

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and vendors have recognized the beauty of the approach. But as one engineer pointed out, there is never a free lunch in processes this complex, and it won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she says that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition. “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as instructing them that you need a minimum container load time, a certain container kill time or delivery in a specific location. He says in reality it won’t be fully automated, at least while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.
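
Hints of that kind already have rough analogues in a Kubernetes pod spec. As a hedged sketch with illustrative values, this is where minimum resources, shutdown time and placement get declared, here using the Kubernetes Python client:

    from kubernetes import client

    pod_spec = client.V1PodSpec(
        termination_grace_period_seconds=30,  # how long a container gets to shut down cleanly
        node_selector={"topology.kubernetes.io/zone": "us-west1-a"},  # pin to a location
        containers=[
            client.V1Container(
                name="api",
                image="example/api:1.0",  # placeholder image
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "250m", "memory": "256Mi"},  # minimum guaranteed resources
                    limits={"cpu": "1", "memory": "512Mi"},       # hard ceiling
                ),
            )
        ],
    )

There is no direct knob for container start-up time, which is part of Whittington’s point: some tuning still falls to the developer.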

Vendors bringing solutions

The vendors are putting in their two cents, trying to create tools that realize this ideal. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open-source Knative project and, in essence, brings the benefits of serverless to developers running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring these two technologies together in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud-native technologies. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions, and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open-source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.

May 21, 2019

Praqma puts Atlassian’s Data Center products into containers

It’s KubeCon + CloudNativeCon this week and in the slew of announcements, one name stood out: Atlassian. The company is best known as the maker of tools that allow developers to work more efficiently, and now as a cloud infrastructure provider. In this age of containerization, though, even Atlassian can bask in the glory that is Kubernetes, because the company today announced that its channel partner Praqma is launching Atlassian Software in Kubernetes (ASK), a new solution that allows enterprises to run and manage on-premises applications like Jira Data Center as containers, with the help of Kubernetes.

Praqma is now making ASK available as open source.

As the company notes in today’s announcement, running a Data Center application and ensuring high availability can be a lot of work using today’s methods. With ASK and by containerizing the applications, scaling and management should become easier, and downtime more avoidable.

“Availability is key with ASK. Automation keeps mission-critical applications running whatever happens,” Praqma’s team explains. “If a Jira server fails, Data Center will automatically redirect traffic to healthy servers. If an application or server crashes Kubernetes automatically reconciles by bringing up a new application. There’s also zero downtime upgrades for Jira.”

ASK handles the scaling and most admin tasks, in addition to offering a monitoring solution based on the open-source Grafana and Prometheus projects.
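
To ground what automatic reconciliation and zero-downtime upgrades mean in Kubernetes terms, here is a hedged sketch of the mechanisms involved, a liveness probe and a rolling-update strategy, expressed with the Kubernetes Python client. The names, image and port are illustrative placeholders, not ASK’s actual configuration.

    from kubernetes import client

    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "jira"}),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="jira",
                    image="example/jira-dc:8.0",  # placeholder image
                    liveness_probe=client.V1Probe(  # replace the container if this check fails
                        http_get=client.V1HTTPGetAction(path="/status", port=8080),
                        initial_delay_seconds=120,
                        period_seconds=15,
                    ),
                )
            ]
        ),
    )

    spec = client.V1DeploymentSpec(
        replicas=2,  # keep two instances so traffic can shift away from a failed one
        selector=client.V1LabelSelector(match_labels={"app": "jira"}),
        template=template,
        strategy=client.V1DeploymentStrategy(  # upgrade pods one at a time, never all at once
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(max_unavailable=0, max_surge=1),
        ),
    )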

Containers are slowly becoming the distribution medium of choice for a number of vendors. As enterprises move their existing applications to containers, it makes sense for them to also expect to manage their existing on-premises applications from third-party vendors in the same systems. For some vendors, that may mean a shift away from per-server licensing to per-seat licensing, so there are business implications, but in general, it’s a logical move for most.

May 15, 2019

Solo.io wants to bring order to service meshes with centralized management hub

As containers and microservices have proliferated, a new kind of tool called the service mesh has developed to help manage and understand interactions between services. While Kubernetes has emerged as the clear container orchestration tool of choice, there is much less certainty in the service mesh market. Solo.io today announced a new open-source tool called Service Mesh Hub, designed to help companies manage multiple service meshes in a single interface.

It is early days for the service mesh concept, but there are already multiple offerings, including Istio, Linkerd (pronounced Linker-Dee) and Envoy. While the market sorts itself out, it requires a new set of tools, a management layer, so that developers and operations can monitor and understand what’s happening inside the various service meshes they are running.

Idit Levine, founder and CEO at Solo, says she formed the company because she saw an opportunity to develop a set of tooling for a nascent market. Since founding the company in 2017, it has developed several open-source tools to fill that service mesh tool vacuum.

Levine says that she recognized that companies would be using multiple service meshes for multiple situations and that not every company would have the technical capabilities to manage this. That is where the idea for the Service Mesh Hub was born.

It’s a centralized place for companies to add the different service mesh tools they are using, understand the interactions happening within the mesh and add extensions to each one from a kind of extension app store. Solo wants to make adding these tools a simple matter of pointing and clicking. While it obviously still requires a certain level of knowledge about how these tools work, it removes some of the complexity around managing them.

Solo.io Service Mesh Hub (Screenshot: Solo.io)

“The reason we created this is because we believe service mesh is something big, and we want people to use it, and we feel it’s hard to adopt right now. We believe by creating that kind of framework or platform, it will make it easier for people to actually use it,” Levine told TechCrunch.

The vision is that eventually companies will be able to add extensions to the store for free, or even at some point for a fee, and it is through these paid extensions that the company will be able to make money. She recognized that some companies will be creating extensions for internal use only, and in those cases they can add them to the hub and mark them as private so that only that company can see them.

For every abstraction, it seems, there is a new set of problems to solve. The service mesh is a response to the problem of managing multiple services. It solves three key issues, according to Levine: it allows a company to route traffic between its microservices, it provides visibility into them through the mesh’s logs and metrics, and it provides security by managing which services can talk to each other.
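
As a concrete illustration of the routing piece, here is a hedged sketch using an Istio VirtualService (one of the meshes named above) to split traffic between two versions of a service, applied through the Kubernetes API. The service and subset names are placeholders, and the subsets would be defined in a companion DestinationRule.

    from kubernetes import client, config

    config.load_kube_config()

    virtual_service = {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "VirtualService",
        "metadata": {"name": "checkout"},
        "spec": {
            "hosts": ["checkout"],
            "http": [{
                "route": [
                    # send 90% of traffic to v1 and 10% to a canary v2
                    {"destination": {"host": "checkout", "subset": "v1"}, "weight": 90},
                    {"destination": {"host": "checkout", "subset": "v2"}, "weight": 10},
                ]
            }],
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1alpha3",
        namespace="default",
        plural="virtualservices",
        body=virtual_service,
    )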

Levine’s company is a response to the issues that have developed around understanding and managing the service meshes themselves. She says she doesn’t worry about a big company coming in and undermining her mission because she says that they are too focused on their own tools to create a set of uber-management tools like these (but that doesn’t mean the company wouldn’t be an attractive acquisition target).

So far, the company has taken more than $13 million in funding, according to Crunchbase data.

May 9, 2019

Cisco open sources MindMeld conversational AI platform

Cisco announced today that it was open-sourcing the MindMeld conversation AI platform, making it available to anyone who wants to use it under the Apache 2.0 license.

MindMeld is the conversational AI company that Cisco bought in 2017. The company put the technology to use in Cisco Spark Assistant later that year to help bring voice commands to meeting hardware, which was just beginning to emerge at the time.

Today, there is a concerted effort to bring voice to enterprise use cases, and Cisco is offering the means for developers to do that with the MindMeld tool set. “Today, Cisco is taking a big step towards empowering developers with more comprehensive and practical tools for building conversational applications by open-sourcing the MindMeld Conversational AI Platform,” Cisco’s head of machine learning Karthik Raghunathan wrote in a blog post.

The company also wants to make it easier for developers to get going with the platform, so it is releasing the Conversational AI Playbook, a step-by-step guide book to help developers get started with conversation-driven applications. Cisco says this is about empowering developers, and that’s probably a big part of the reason.

But it would also be in Cisco’s best interest to have developers outside of Cisco working with and on this set of tools. By open-sourcing them, the hope is that a community of developers, whether Cisco customers or others, will begin using, testing and improving the tools, helping Cisco develop the platform faster and more broadly than it could on its own, even inside an organization as large as Cisco.

Of course, just because Cisco is offering it doesn’t automatically mean a community of interested developers will emerge, but given the growing popularity of voice-enabled use cases, chances are some will give it a look. It will be up to Cisco to keep them engaged.

Cisco is making all of this available on its own DevNet platform starting today.

May 8, 2019

Steve Singh stepping down as Docker CEO

TechCrunch has learned that Docker CEO Steve Singh will be stepping down after two years at the helm, and former Hortonworks CEO Rob Bearden will be taking over. An email announcement went out this morning to Docker employees.

People close to the company confirmed that Singh will be leaving the CEO position, staying on the job for several months to help Bearden with the transition. He will then remain with the organization in his role as chairman of the board. They indicated that Bearden has been working closely with Singh over the last several months as a candidate to join the board and as a consultant to the executive team.

Singh clicked with him and viewed him as a possible successor, especially given his background with leadership positions at several open-source companies, including taking Hortonworks public before selling to Cloudera last year. Singh apparently saw someone who could take the company to the next level as he moved on. As one person put it, he was tired of working 75 hours a week, but he wanted to leave the company in the hands of a capable steward.

Last week in an interview at DockerCon, the company’s annual customer conference in San Francisco, Singh appeared tired, but also like a leader confident in his position, one who saw a bright future for his company. He spoke openly about his leadership philosophy and his efforts to lift the company from the doldrums it was in when he took over two years prior, helping transform it from a mostly free open-source offering into a revenue-generating company with 750 paying enterprise customers.

In fact, he told me that under his leadership the company was on track to become free cash flow positive by the end of this fiscal year, a step he said would mean that Docker would no longer need to seek outside capital. He even talked of the company eventually going public.

Apparently, he felt it was time to pass the torch before the company took those steps, saw a suitable successor in Bearden and offered him the position. While it might have made more sense to announce this at DockerCon with the spotlight focused on the company, it was not a done deal yet by the time the conference was underway in San Francisco, people close to the company explained.

Docker took a $92 million investment last year, which some saw as a sign of continuing struggles for the company, but Singh said he took the money to continue to invest in building revenue-generating enterprise products, some of which were announced at DockerCon last week. He indicated that the company would likely not require any additional investment moving forward.

As for Bearden, he is an experienced executive with a history of successful exits. In addition to his experience at Hortonworks, he was COO at SpringSource, a developer tool suite that was sold to VMware for $420 million in 2009 (and is now part of Pivotal). He was also COO at JBoss, an open-source middleware company acquired by Red Hat in 2006.

Whether he will do the same with Docker remains to be seen, but as the new CEO, it will be up to him to guide the company moving forward to the next steps in its evolution, whether that eventually results in a sale or the IPO that Singh alluded to.

Email to staff from Steve Singh:

Note: Docker has now confirmed this story in a press release.
