May 23, 2019

Takeaways from KubeCon; the latest on Kubernetes and cloud native development

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller discuss the major announcements that came out of the Linux Foundation’s European KubeCon/CloudNativeCon conference, as well as the future of Kubernetes and cloud-native technologies.

Nearly doubling in size year-over-year, this year’s KubeCon conference brought big news and big players, with major announcements coming from some of the world’s largest software vendors including Google, AWS, Microsoft, Red Hat, and more. Frederic and Ron discuss how the Kubernetes project grew to such significant scale and which new initiatives in cloud-native development show the most promise from both a developer and enterprise perspective.

“This ecosystem starts sprawling, and we’ve got everything from security companies to service mesh companies to storage companies. Everybody is here. The whole hall is full of them. Sometimes it’s hard to distinguish between them because there are so many competing start-ups at this point.

“I’m pretty sure we’re going to see a consolidation in the next six months or so where some of the bigger players, maybe Oracle, maybe VMware, will start buying some of these smaller companies. And I’m sure the show floor will look quite different about a year from now. All the big guys are here because they’re all trying to figure out what’s next.”

Frederic and Ron also dive deeper into the startup ecosystem rapidly developing around Kubernetes and other cloud-native technologies and offer their take on what areas of opportunity may prove to be most promising for new startups and founders down the road.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

May 23, 2019

Andreessen pours $22M into PlanetScale’s database-as-a-service

PlanetScale’s founders invented the technology called Vitess that scaled YouTube. Now they’re selling it to any enterprise that wants its data both secure and consistently accessible. And thanks to its ability to re-shard databases while they’re operating, Vitess can solve businesses’ troubles with GDPR, which demands they store some data in the same locality as the user to whom it belongs.

The potential to be a computing backbone that both competes with and complements Amazon’s AWS has now attracted a mammoth $22 million Series A for PlanetScale. Led by Andreessen Horowitz and joined by the firm’s Cultural Leadership Fund, head of the US Digital Service Matt Cutts, plus existing investor SignalFire, the round is a tall step up from the startup’s $3 million seed it raised a year ago. Andreessen general partner Peter Levine will join the PlanetScale board, bringing his enterprise launch expertise.

PlanetScale co-founders (from left): Jitendra Vaidya and Sugu Sougoumarane

“What we’re discovering is that people we thought were at one point competitors, like AWS and hosted relational databases — we’re discovering they may be our partners instead since we’re seeing a reasonable demand for our services in front of AWS’ hosted databases,” says CEO Jitendra Vaidya. “We are growing quite well.” Competing database startups were raising big rounds, so PlanetScale connected with Andreessen in search of more firepower.

Developed at YouTube years before Kubernetes existed, Vitess is horizontal sharding middleware built for MySQL. It lets businesses segment their databases to boost memory efficiency without sacrificing reliable access speeds. PlanetScale sells Vitess in four ways: hosting on its database-as-a-service; licensing the technology to run on-premises or through another cloud provider; professional training for using Vitess; and on-demand support for users of the open-source version. PlanetScale now has 18 customers paying for licenses and services, and plans to release its own multi-cloud hosting to a general audience soon.
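
The core idea behind sharding middleware of this kind can be sketched in a few lines. The example below is purely illustrative, using simple hash-based routing on a user key; it is not Vitess’s actual algorithm or API:

```python
import hashlib

# Illustrative only: real systems pick shard counts and key schemes carefully.
NUM_SHARDS = 4

def shard_for(user_id: str) -> int:
    """Map a sharding key to a shard by hashing, so all rows for one
    user always land on the same underlying MySQL instance."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def route_query(user_id: str, sql: str) -> tuple:
    """Return the shard a single-key query should be sent to,
    along with the query itself."""
    return shard_for(user_id), sql
```

The middleware’s job is to make this routing invisible to the application, so that queries look like ordinary MySQL queries even though the data is spread across many instances.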

With data becoming so valuable and security concerns rising, many companies want cross-data center durability so one failure doesn’t break their app or delete information. But often the trade-off is unevenness in how long data takes to access. “If you take 100 queries, 99 might return results in 10 milliseconds, but one will take 10 seconds. That unpredictability is not something that apps can live with,” Vaidya tells me. PlanetScale’s Vitess gives enterprises the protection of redundancy but consistent speeds. It also allows businesses to continually update their replication logs so they’re only seconds behind what’s in production rather than doing periodic exports that can make it tough to track transactions and other data in real-time.

Now equipped with a ton of cash for a 20-person team, PlanetScale plans to double its staff by adding more sales, marketing and support. “We don’t have any concerns about the engineering side of things, but we need to figure out a go-to-market strategy for enterprises,” Vaidya explains. “As we’re both technical co-founders, about half of our funding is going towards hiring those functions [outside of engineering], and making that part of our organization work well and get results.”

But while a $22 million round from Andreessen Horowitz would be exciting for almost any startup, the funding for PlanetScale could assist the whole startup ecosystem. GDPR was designed to rein in tech giants. In reality, it applied compliance costs to all companies — yet the rich giants have more money to pay for those efforts. For a smaller startup, figuring out how to obey GDPR’s data localization mandate could be a huge engineering detour it can hardly afford. PlanetScale offers them not only databases but compliance-as-a-service too. It shards their data to where it has to be, so the startup can focus on its actual product.

May 23, 2019

Serverless and containers: Two great technologies that work better together

Cloud native models using containerized software in a continuous delivery approach could benefit from serverless computing, where the cloud vendor provisions exactly the resources required to run a workload on the fly. While the major cloud vendors have recognized this and are already creating products to abstract away the infrastructure, the approach may not work for every situation in spite of the benefits.

Cloud native, put simply, involves using containerized applications and Kubernetes to deliver software in small packages called microservices. This enables developers to build and deliver software faster and more efficiently in a continuous delivery model. In the cloud native world, you should be able to develop code once and run it anywhere, on prem or any public cloud, or at least that is the ideal.

Serverless is actually a bit of a misnomer. There are servers underlying the model, but instead of dedicated virtual machines, the cloud vendor delivers exactly the resources needed to run a particular workload, for exactly as long as needed and no more.
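
In the function-driven flavor of serverless, that model reduces to a single handler the platform invokes per event. The sketch below is in the style of AWS Lambda’s Python runtime; the event shape is a simplified assumption, not any vendor’s exact schema:

```python
import json

def handler(event, context=None):
    """Respond to a single event; no server runs between invocations,
    and the vendor bills only for the execution time used."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The appeal is that the developer writes only this function; provisioning, scaling and teardown are the platform’s problem.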

Nothing is perfect

Such an arrangement would seem perfectly suited to a continuous delivery model, and vendors have recognized the beauty of the approach. But as one engineer pointed out, there is never a free lunch in processes this complex, and serverless won’t be a perfect solution for every situation.

Aparna Sinha, director of product management at Google, says the Kubernetes community has really embraced the serverless idea, but she says that it is limited in its current implementation, delivered in the form of functions with products like AWS Lambda, Google Cloud Functions and Azure Functions.

“Actually, I think the functions concept is a limited concept. It is unfortunate that that is the only thing that people associate with serverless,” she said.

She says that Google has tried to be more expansive in its definition. “It’s basically a concept for developers where you are able to seamlessly go from writing code to deployment and the infrastructure takes care of all of the rest, making sure your code is deployed in the appropriate way across the appropriate, most resilient parts of the infrastructure, scaling it as your app needs additional resources, scaling it down as your traffic goes down, and charging you only for what you’re consuming,” she explained.

But Matt Whittington, senior engineer on the Kubernetes team at Atlassian, says that while it sounds good in theory, in practice fully automated infrastructure could be unrealistic in some instances. “Serverless could be promising for certain workloads because it really allows developers to focus on the code, but it’s not a perfect solution. There is still some underlying tuning.”

He says you may not be able to leave it completely up to the vendor unless there is a way to specify the requirements for each container, such as a minimum container load time, a certain container kill time, or deployment to a specific location. He says that in reality it won’t be fully automated, at least while developers fiddle with the settings to make sure they are getting the resources they need without over-provisioning and paying for more than they need.
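
The per-container tuning Whittington describes is typically expressed declaratively and checked by a scheduler. A simplified sketch follows; the field names are illustrative, loosely modeled on Kubernetes resource requests, and are not a real API:

```python
# Hypothetical per-container requirements of the kind a developer
# might still need to tune by hand, even on a "serverless" platform.
requirements = {
    "min_startup_ms": 500,   # acceptable container load time
    "cpu_millicores": 250,
    "memory_mb": 512,
    "region": "us-east-1",   # deliver to a specific location
}

def node_can_run(node: dict, req: dict) -> bool:
    """A scheduler-like check: does this node have the capacity,
    and is it in the requested region?"""
    return (
        node["free_cpu_millicores"] >= req["cpu_millicores"]
        and node["free_memory_mb"] >= req["memory_mb"]
        and node["region"] == req["region"]
    )

node = {"free_cpu_millicores": 1000, "free_memory_mb": 2048, "region": "us-east-1"}
```

Setting these numbers too high means paying for unused capacity; too low means throttled or failing workloads, which is exactly the tuning trade-off described above.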

Vendors bringing solutions

The vendors are adding their two cents, creating tools that bring this ideal together. For instance, Google announced a service called Google Cloud Run at Google Cloud Next last month. It’s based on the open-source Knative project and, in essence, brings the benefits of serverless to developers running containers. Other similar services include AWS Fargate and Azure Container Instances, both of which attempt to bring together these two technologies in a similar package.

In fact, Gabe Monroy, partner program manager at Microsoft, says Azure Container Instances is designed to solve this problem without being dependent on a functions-driven programming approach. “What Azure Container Instances does is it allows you to run containers directly on the Azure compute fabric, no virtual machines, hypervisor isolated, pay-per-second billing. We call it serverless containers,” he said.

While serverless and containers might seem like a good fit, as Monroy points out, there isn’t a one-size-fits-all approach to cloud-native technologies, whatever the approach may be. Some people will continue to use a function-driven serverless approach like AWS Lambda or Azure Functions and others will shift to containers and look for other ways to bring these technologies together. Whatever happens, as developer needs change, it is clear the open-source community and vendors will respond with tools to help them. Bringing serverless and containers together is just one example of that.

May 16, 2019

OpenFin raises $17 million for its OS for finance

OpenFin, the company looking to provide the operating system for the financial services industry, has raised $17 million in funding through a Series C round led by Wells Fargo, with participation from Barclays and existing investors including Bain Capital Ventures, J.P. Morgan and Pivot Investment Partners. Previous investors in OpenFin also include DRW Venture Capital, Euclid Opportunities and NYCA Partners.

Likening itself to “the OS of finance,” OpenFin seeks to be the operating layer on which applications used by financial services companies are built and launched, akin to iOS or Android for your smartphone.

OpenFin’s operating system provides three key capabilities which, while present on your mobile phone, have previously been absent in the financial services industry: easier deployment of apps to end users, fast security assurances for applications and interoperability.

Traders, analysts and other financial service employees often find themselves using several separate platforms simultaneously, as they try to source information and quickly execute multiple transactions. Yet historically, the desktop applications used by financial services firms — like trading platforms, data solutions or risk analytics — haven’t communicated with one another, with functions performed in one application not recognized or reflected in external applications.

“On my phone, I can be in my calendar app and tap an address, which opens up Google Maps. From Google Maps, maybe I book an Uber. From Uber, I’ll share my real-time location on messages with my friends. That’s four different apps working together on my phone,” OpenFin CEO and co-founder Mazy Dar explained to TechCrunch. That cross-functionality has long been missing in financial services.

As a result, employees can find themselves losing precious time — which in the world of financial services can often mean losing money — as they juggle multiple screens and perform repetitive processes across different applications.

Additionally, major banks, institutional investors and other financial firms have traditionally deployed natively installed applications in lengthy processes that can often take months, going through long vendor packaging and security reviews that ultimately don’t prevent the software from actually accessing the local system.

OpenFin CEO and co-founder Mazy Dar (Image via OpenFin)

As former analysts and traders at major financial institutions, Dar and his co-founder Chuck Doerr (now president & COO of OpenFin) recognized these major pain points and decided to build a common platform that would enable cross-functionality and instant deployment. And since apps on OpenFin are unable to access local file systems, banks can better ensure security and avoid prolonged yet ineffective security review processes.

And the value proposition offered by OpenFin seems to be quite compelling. OpenFin boasts an impressive roster of customers using its platform, including more than 1,500 major financial firms, almost 40 leading vendors and 15 of the world’s 20 largest banks.

More than 1,000 applications have been built on the OS, with OpenFin now deployed on more than 200,000 desktops — a noteworthy milestone given that the ever-popular Bloomberg Terminal, which is ubiquitously used across financial institutions and investment firms, is deployed on roughly 300,000 desktops.

Since raising its Series B in February 2017, OpenFin’s deployments have more than doubled. The company’s headcount has also doubled and its European presence has tripled. Earlier this year, OpenFin also launched its OpenFin Cloud Services platform, which allows financial firms to launch their own private local app stores for employees and customers without writing a single line of code.

To date, OpenFin has raised a total of $40 million in venture funding and plans to use the capital from its latest round for additional hiring and to expand its footprint onto more desktops around the world. In the long run, OpenFin hopes to become the vital operating infrastructure upon which all developers of financial applications are innovating.

“Apple and Google’s mobile operating systems and app stores have enabled more than a million apps that have fundamentally changed how we live,” said Dar. “OpenFin OS and our new app store services enable the next generation of desktop apps that are transforming how we work in financial services.”

May 15, 2019

VMware acquires Bitnami to deliver packaged applications anywhere

VMware announced today that it’s acquiring Bitnami, the application packaging company that was a member of the Y Combinator Winter 2013 class. The companies didn’t share the purchase price.

With Bitnami, VMware can now deliver more than 130 popular software packages in a variety of formats, such as Docker containers or virtual machines, an approach that should be attractive for VMware as it transforms into more of a cloud services company.

“Upon close, Bitnami will enable our customers to easily deploy application packages on any cloud — public or hybrid — and in the most optimal format — virtual machine (VM), containers and Kubernetes helm charts. Further, Bitnami will be able to augment our existing efforts to deliver a curated marketplace to VMware customers that offers a rich set of applications and development environments in addition to infrastructure software,” the company wrote in a blog post announcing the deal.

Per usual, Bitnami’s founders see the exit through the prism of being able to build out the platform faster with the help of a much larger company. “Joining forces with VMware means that we will be able to both double-down on the breadth and depth of our current offering and bring Bitnami to even more clouds as well as accelerating our push into the enterprise,” the founders wrote in a blog post on the company website.

Holger Mueller, an analyst at Constellation Research, says the deal fits well with VMware’s overall strategy. “Enterprises want easy, fast ways to deploy packaged applications and providers like Bitnami take the complexity out of this process. So this is a key investment for VMware that wants to position itself not only as the trusted vendor for virtualization across the hybrid cloud, but also as a trusted application delivery vendor,” he said.

The company has raised a modest $1.1 million since its founding in 2011 and says it has been profitable since its early days. In the blog post, the company states that, from customers’ perspective, nothing will change.

“In a way, nothing is changing. We will continue to develop and maintain our application catalog across all the platforms we support and even expand to additional ones. Additionally, if you are a company using Bitnami in production, a lot of new opportunities just opened up.”

Time will tell whether that is the case, but it is likely that Bitnami will be able to expand its offerings as part of a larger organization like VMware. The deal is expected to close by the end of this quarter (which is fiscal Q2 2020 for VMware).

VMware is a member of the Dell federation of products and came over as part of the massive $67 billion EMC deal in 2016. The company operates independently, is sold as a separate company on the stock market and makes its own acquisitions.

May 15, 2019

Solo.io wants to bring order to service meshes with centralized management hub

As containers and microservices have proliferated, a new kind of tool called the service mesh has developed to help manage and understand interactions between services. While Kubernetes has emerged as the clear container orchestration tool of choice, there is much less certainty in the service mesh market. Solo.io today announced a new open-source tool called Service Mesh Hub, designed to help companies manage multiple service meshes in a single interface.

It is early days for the service mesh concept, but there are already multiple offerings, including Istio, Linkerd (pronounced Linker-Dee) and Envoy. While the market sorts itself out, it requires a new set of tools, a management layer, so that developers and operations can monitor and understand what’s happening inside the various service meshes they are running.

Idit Levine, founder and CEO at Solo, says she formed the company because she saw an opportunity to develop a set of tooling for a nascent market. Since founding the company in 2017, it has developed several open-source tools to fill that service mesh tool vacuum.

Levine says that she recognized that companies would be using multiple service meshes for multiple situations and that not every company would have the technical capabilities to manage this. That is where the idea for the Service Mesh Hub was born.

It’s a centralized place for companies to add the different service mesh tools they are using, understand the interactions happening within the mesh and add extensions to each one from a kind of extension app store. Solo wants to make adding these tools a simple matter of pointing and clicking. While it obviously still requires a certain level of knowledge about how these tools work, it removes some of the complexity around managing them.

Solo.io Service Mesh Hub (Screenshot: Solo.io)

“The reason we created this is because we believe service mesh is something big, and we want people to use it, and we feel it’s hard to adopt right now. We believe by creating that kind of framework or platform, it will make it easier for people to actually use it,” Levine told TechCrunch.

The vision is that eventually companies will be able to add extensions to the store for free, or at some point even for a fee, and it is through these paid extensions that the company will be able to make money. She recognized that some companies will be creating extensions for internal use only; in those cases, they can add them to the hub and mark them as private, so that only that company can see them.

For every abstraction, it seems, there is a new set of problems to solve. The service mesh is a response to the problem of managing multiple services. It solves three key issues, according to Levine: it routes traffic between the microservices, it provides visibility into them through the mesh’s logs and metrics, and it provides security by managing which services are allowed to talk to each other.
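
Those three duties can be illustrated with a toy example. This is purely illustrative application-level code; real meshes like Istio or Linkerd implement routing, metrics and policy in sidecar proxies, transparently to the application:

```python
from collections import Counter

# Security: which service-to-service calls are permitted (hypothetical names).
ALLOWED = {("checkout", "payments"), ("checkout", "inventory")}
# Routing: shift traffic for one service to a new version.
ROUTES = {"payments": "payments-v2"}
# Visibility: per-edge request counts, the raw material for mesh metrics.
metrics = Counter()

def mesh_call(src: str, dst: str) -> str:
    """Simulate what a mesh does on each request: enforce policy,
    apply routing rules, and record a metric. Returns the target."""
    if (src, dst) not in ALLOWED:
        raise PermissionError(f"{src} may not call {dst}")
    target = ROUTES.get(dst, dst)
    metrics[(src, target)] += 1
    return target
```

The point of the abstraction is that none of this logic lives in the services themselves, which is also why a separate management layer becomes necessary once several meshes are in play.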

Levine’s company is a response to the issues that have developed around understanding and managing the service meshes themselves. She says she doesn’t worry about a big company coming in and undermining her mission, because those companies are too focused on their own tools to create a set of uber-management tools like these (but that doesn’t mean the company wouldn’t be an attractive acquisition target).

So far, the company has taken more than $13 million in funding, according to Crunchbase data.

May 14, 2019

Algorithmia raises $25M Series B for its AI automation platform

Algorithmia, a Seattle-based startup that offers a cloud-agnostic AI automation platform for enterprises, today announced a $25 million Series B funding round led by Norwest Venture Partners. Madrona, Gradient Ventures, Work-Bench, Osage University Partners and Rakuten Ventures also participated in this round.

While the company started out five years ago as a marketplace for algorithms, it now mostly focuses on machine learning and helping enterprises take their models into production.

“It’s actually really hard to productionize machine learning models,” Algorithmia CEO Diego Oppenheimer told me. “It’s hard to help data scientists to not deal with data infrastructure but really being able to build out their machine learning and AI muscle.”

To help them, Algorithmia essentially built out a machine learning DevOps platform that allows data scientists to train their models on the platform and with the framework of their choice, bring it to Algorithmia — a platform that has already been blessed by their IT departments — and take it into production.

“Every Fortune 500 CIO has an AI initiative but they are bogged down by the difficulty of managing and deploying ML models,” said Rama Sekhar, a partner at Norwest Venture Partners, who has now joined the company’s board. “Algorithmia is the clear leader in building the tools to manage the complete machine learning life cycle and helping customers unlock value from their R&D investments.”

With the new funding, the company will double down on this focus by investing in product development to solve these issues, but also by building out its team, with a plan to double its headcount over the next year. A year from now, Oppenheimer told me, he hopes that Algorithmia will be a household name for data scientists and, maybe more importantly, their platform of choice for putting their models into production.

“How does Algorithmia succeed? Algorithmia succeeds when our customers are able to deploy AI and ML applications,” Oppenheimer said. “And although there is a ton of excitement around doing this, the fact is that it’s really difficult for companies to do so.”

The company previously raised a $10.5 million Series A round led by Google’s AI fund. Its customers now include the United Nations, a number of U.S. intelligence agencies and Fortune 500 companies. In total, more than 90,000 engineers and data scientists are now on the platform.

May 9, 2019

Cisco open sources MindMeld conversational AI platform

Cisco announced today that it was open-sourcing the MindMeld conversation AI platform, making it available to anyone who wants to use it under the Apache 2.0 license.

MindMeld is the conversational AI company that Cisco bought in 2017. The company put the technology to use in Cisco Spark Assistant later that year to help bring voice commands to meeting hardware, which was just beginning to emerge at the time.

Today, there is a concerted effort to bring voice to enterprise use cases, and Cisco is offering the means for developers to do that with the MindMeld tool set. “Today, Cisco is taking a big step towards empowering developers with more comprehensive and practical tools for building conversational applications by open-sourcing the MindMeld Conversational AI Platform,” Cisco’s head of machine learning Karthik Raghunathan wrote in a blog post.

The company also wants to make it easier for developers to get going with the platform, so it is releasing the Conversational AI Playbook, a step-by-step guide book to help developers get started with conversation-driven applications. Cisco says this is about empowering developers, and that’s probably a big part of the reason.

But it would also be in Cisco’s best interest to have developers outside of Cisco working with and on this set of tools. By open-sourcing them, the hope is that a community of developers, whether Cisco customers or others, will begin using, testing and improving the tools, helping Cisco develop the platform faster and more broadly than it could alone, even inside an organization as large as Cisco.

Of course, just because Cisco offers it doesn’t automatically mean a community of interested developers will emerge, but given the growing popularity of voice-enabled use cases, chances are some will give it a look. It will be up to Cisco to keep them engaged.

Cisco is making all of this available on its own DevNet platform starting today.

May 9, 2019

Against the Slacklash

Such hate. Such dismay. “How Slack is ruining work.” “Actually, Slack really sucks.” “Slack may actually be hurting your workplace productivity.” “Slack is awful.” “Slack destroys teams’ ability to think, plan & get complex work out the door.” “Slack is a terrible collaboration tool.” “Face it, Slack is ruining your life.”

Contrarian view: Slack is not inherently bad. Rather, the particular way in which you are misusing it epitomizes your company’s deeper problems. I’m the CTO of a company that uses Slack extensively, successfully and happily — but because we’re a consultancy, I have also been a sometime member of dozens of other companies’ Slack workspaces, where I have witnessed all the various flavors of flaws recounted above. In my experience, those are not actually caused by Slack.

Please note that I am not saying “Slack is just a tool, you have to use it correctly.” Even if that were so, a tool which lends itself so easily to being used so badly would be a bad tool. What I’m saying is something more subtle, and far more damning: that Slack is a mirror which reflects the pathologies inherent in your workplace. The fault, dear Brutus, is not in our Slacks, but in ourselves.

May 7, 2019

Red Hat and Microsoft are cozying up some more with Azure Red Hat OpenShift

It won’t be long before Red Hat becomes part of IBM, the result of the $34 billion acquisition last year that is still making its way toward completion. For now, Red Hat continues as a stand-alone company, and as if to flex its independence muscles, it has announced its second agreement in two days with Microsoft Azure, Redmond’s public cloud infrastructure offering. This one involves running Red Hat OpenShift on Azure.

OpenShift is Red Hat’s Kubernetes offering. The thinking is that you can start with OpenShift in your data center, then as you begin to shift to the cloud, you can move to Azure Red Hat OpenShift — such a catchy name — without any fuss, as you keep the same management tools you are used to.

As Red Hat becomes part of IBM, it sees it as more important than ever to maintain its sense of autonomy in the eyes of developers and operations customers, as it holds its final customer conference as an independent company. Red Hat’s executive vice president and president of products and technologies certainly sees it that way. “I think [the partnership] is a testament to, even with moving to IBM at some point soon, that we are going to be separate and really keep our Switzerland status and give the same experience for developers and operators across anyone’s cloud,” he told TechCrunch.

It’s essential to see this announcement in the context of both IBM’s and Microsoft’s increasing focus on the hybrid cloud, and also in the continuing requirement for cloud companies to find ways to work together, even when it doesn’t always seem to make sense, because as Microsoft CEO Satya Nadella has said, customers will demand it. Red Hat has a big enterprise customer presence and so does Microsoft. If you put them together, it could be the beginning of a beautiful friendship.

Scott Guthrie, executive vice president for the cloud and AI group at Microsoft, understands that. “Microsoft and Red Hat share a common goal of empowering enterprises to create a hybrid cloud environment that meets their current and future business needs. Azure Red Hat OpenShift combines the enterprise leadership of Azure with the power of Red Hat OpenShift to simplify container management on Kubernetes and help customers innovate on their cloud journeys,” he said in a statement.

This news comes on the heels of yesterday’s announcement, also involving Kubernetes. TechCrunch’s own Frederic Lardinois described it this way:

What’s most interesting here, however, is KEDA, a new open-source collaboration between Red Hat and Microsoft that helps developers deploy serverless, event-driven containers. Kubernetes-based event-driven autoscaling, or KEDA, as the tool is called, allows users to build their own event-driven applications on top of Kubernetes. KEDA handles the triggers to respond to events that happen in other services and scales workloads as needed.
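
The event-driven scaling behavior described there can be sketched as a simple rule that derives a replica count from an event source’s backlog, including scale-to-zero. The numbers and names below are illustrative assumptions, not KEDA’s actual configuration or API:

```python
def desired_replicas(queue_depth: int,
                     events_per_replica: int = 10,
                     max_replicas: int = 8) -> int:
    """Toy autoscaling rule in the spirit of KEDA: size the workload
    to the pending events, and scale to zero when there are none."""
    if queue_depth == 0:
        return 0  # no events pending, so run nothing at all
    # Ceiling division: enough replicas to cover the whole backlog.
    needed = -(-queue_depth // events_per_replica)
    return min(needed, max_replicas)
```

The interesting property is the zero case: unlike a conventional autoscaler that keeps a minimum number of pods running, an event-driven one can tear the workload down entirely between bursts.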

Azure Red Hat OpenShift is available now on Azure. The companies are working on some other integrations, too, including Red Hat Enterprise Linux (RHEL) running on Azure and Red Hat Enterprise Linux 8 support in Microsoft SQL Server 2019.
