Jan 23, 2019
--

Open-source leader Confluent raises $125M on $2.5B valuation

Confluent, the commercial company built on top of the open-source Apache Kafka project, announced a $125 million Series D round this morning on an enormous $2.5 billion valuation.

The round was led by existing investor Sequoia Capital, with participation from Index Ventures and Benchmark, which also participated in previous rounds. Today’s investment brings the total raised to $206 million, according to the company.

The valuation soared from the previous round when the company was valued at $500 million. What’s more, the company’s bookings have scaled along with the valuation.

Graph: Confluent

While CEO Jay Kreps wouldn’t comment directly on a future IPO, he hinted that it is something the company is looking to do at some point. “With our growth and momentum so far, and with the latest funding, we are in a very good position to and have a desire to build a strong, independent company,” Kreps told TechCrunch.

Confluent and Kafka have developed a streaming data technology that processes massive amounts of information in real time, something that comes in handy in today’s data-intensive environment. The underlying streaming technology was developed at LinkedIn as a means of moving massive numbers of messages. The company decided to open-source that technology in 2011, and Confluent launched as the commercial arm in 2014.
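
Stripped of the distributed machinery, Kafka’s core abstraction is an append-only log that producers write to and consumers read from at their own pace. A minimal sketch of that idea in plain Python (purely illustrative; this is not the Kafka client API):

```python
class EventLog:
    """A toy append-only log, standing in for a single Kafka topic partition."""
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)
        return len(self._events) - 1  # offset assigned to the new event

    def read_from(self, offset):
        # Consumers pull everything from their own offset onward.
        return self._events[offset:]

# Producers append business events as they happen...
orders = EventLog()
orders.append({"type": "order", "sku": "A1", "qty": 2})
orders.append({"type": "order", "sku": "B7", "qty": 1})

# ...and each consumer tracks its own position in the stream.
consumer_offset = 0
batch = orders.read_from(consumer_offset)
consumer_offset += len(batch)  # both events processed; offset is now 2
```

Because events are retained in the log rather than consumed destructively, any number of downstream systems can replay the same stream independently, which is the property Kreps’ “lifeblood of the operation” framing rests on.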

Kreps, writing in a company blog post announcing the funding, said that the events concept encompasses the basic building blocks of businesses. “These events are the orders, sales and customer experiences, that constitute the operation of the business. Databases have long helped to store the current state of the world, but we think this is only half of the story. What is missing are the continually flowing stream of events that represents everything happening in a company, and that can act as the lifeblood of its operation,” he wrote.

Kreps pointed out that as an open-source project, Confluent depends on the community. “This is not something we’re doing alone. Apache Kafka has a massive community of contributors of which we’re just one part,” he wrote.

While the base open-source component remains available for free download, it doesn’t include the additional tooling the company has built to make it easier for enterprises to use Kafka. Recent additions include a managed cloud version of the product and a marketplace, Confluent Hub, for sharing extensions to the platform.

As we watch the company’s valuation soar, it does so against a backdrop of other companies based on open source selling for big bucks in 2018, including IBM buying Red Hat for $34 billion in October and Salesforce acquiring MuleSoft in June for $6.5 billion.

The company’s previous round was $50 million in March 2017.

Jan 22, 2019
--

Linux Foundation launches Hyperledger Grid to provide framework for supply chain projects

The Linux Foundation’s Hyperledger Project has a singular focus on the blockchain, but this morning it announced a framework for building supply chain projects where it didn’t want blockchain stealing the show.

In fact, the foundation is careful to point out that this project is not specifically about the blockchain, so much as providing the building blocks for a broader view of solving supply chain digitization issues. As it describes in a blog post announcing the project, it is neither an application nor a blockchain project, per se. So what is it?

“Grid is an ecosystem of technologies, frameworks and libraries that work together, letting application developers make the choice as to which components are most appropriate for their industry or market model.”

Hyperledger doesn’t want to get locked down by jargon or preconceived notions of what these projects should look like. It wants to provide developers with a set of tools and libraries and let them loose to come up with ideas and build applications specific to their industry requirements.

Primary contributors to the project to this point have been Cargill, Intel and Bitwise IO.

Supply chain has been a major early use case for distributed ledger applications in the enterprise. In fact, earlier today we covered an announcement from Citizens Reserve, a startup building a Supply Chain as a Service on the blockchain. IBM has been working on several supply chain use cases, including diamond tracking and food supply protection.

But the distributed ledger idea is so new, both for supply chain and for the enterprise in general, that developers are still very much finding their way. By providing a flexible, open-source framework, the Linux Foundation is giving developers an open option and a foundation on which to build applications as this all shakes out.

Jan 9, 2019
--

AWS gives open source the middle finger

AWS launched DocumentDB today, a new database offering that is compatible with the MongoDB API. The company describes DocumentDB as a “fast, scalable, and highly available document database that is designed to be compatible with your existing MongoDB applications and tools.” In effect, it’s a hosted drop-in replacement for MongoDB that doesn’t use any MongoDB code.

AWS argues that while MongoDB is great at what it does, its customers have found it hard to build fast and highly available applications on the open-source platform that can scale to multiple terabytes and hundreds of thousands of reads and writes per second. So what the company did was build its own document database, but made it compatible with the Apache 2.0 open source MongoDB 3.6 API.
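
The “drop-in” part hinges on the API surface, not the engine: any store that answers the same calls can sit behind existing client code. A toy sketch of that pattern (hypothetical names; this is not the DocumentDB or MongoDB driver API):

```python
class MongoLikeStore:
    """A toy document store exposing a MongoDB-flavored call surface."""
    def __init__(self):
        self._docs = []

    def insert_one(self, doc):
        self._docs.append(dict(doc))

    def find_one(self, query):
        # Return the first document matching every field in the query.
        for doc in self._docs:
            if all(doc.get(k) == v for k, v in query.items()):
                return doc
        return None

class DropInStore(MongoLikeStore):
    """Stands in for a separate engine that answers the same calls."""
    pass

def app_code(store):
    # Client code written against the API, unaware of which engine runs it.
    store.insert_one({"user": "ada", "plan": "pro"})
    return store.find_one({"user": "ada"})

# Because the call surface is identical, the app runs unchanged on either.
assert app_code(MongoLikeStore()) == app_code(DropInStore())
```

This is why AWS could target the MongoDB 3.6 API without touching MongoDB’s code, and why MongoDB’s re-licensing of its server code couldn’t prevent it.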

If you’ve been following the politics of open source over the last few months, you’ll understand that the optics of this aren’t great. It’s also no secret that AWS has long been accused of taking the best open-source projects and re-using and re-branding them without always giving back to those communities.

The wrinkle here is that MongoDB was one of the first companies that aimed to put a stop to this by re-licensing its open-source tools under a new license that explicitly stated that companies that wanted to do this had to buy a commercial license. Since then, others have followed.

“Imitation is the sincerest form of flattery, so it’s not surprising that Amazon would try to capitalize on the popularity and momentum of MongoDB’s document model,” MongoDB CEO and president Dev Ittycheria told us. “However, developers are technically savvy enough to distinguish between the real thing and a poor imitation. MongoDB will continue to outperform any impersonations in the market.”

That’s a pretty feisty comment. Last November, Ittycheria told my colleague Ron Miller that he believed that AWS loved MongoDB because it drives a lot of consumption. In that interview, he also noted that “customers have spent the last five years trying to extricate themselves from another large vendor. The last thing they want to do is replay the same movie.”

MongoDB co-founder and CTO Eliot Horowitz echoed this. “In order to give developers what they want, AWS has been pushed to offer an imitation MongoDB service that is based on the MongoDB code from two years ago,” he said. “Our entire company is focused on one thing — giving developers the best way to work with data with the freedom to run anywhere. Our commitment to that single mission will continue to differentiate the real MongoDB from any imitation products that come along.”

A company spokesperson for MongoDB also highlighted that the 3.6 API that DocumentDB is compatible with is now two years old and misses most of the newest features, including ACID transactions, global clusters and mobile sync.

To be fair, AWS has become more active in open source lately and, in a way, it’s giving developers what they want (and not all developers are happy with MongoDB’s own hosted service). Bypassing MongoDB’s licensing by going for API compatibility, given that AWS knows exactly why MongoDB changed its license, was always going to be a controversial move and won’t endear the company to the open-source community.

Jan 9, 2019
--

Baidu Cloud launches its open-source edge computing platform

At CES, the Chinese tech giant Baidu today announced OpenEdge, its open-source edge computing platform. At its core, OpenEdge is the local package component of Baidu’s existing Intelligent Edge (BIE) commercial offering and obviously plays well with that service’s components for managing edge nodes and apps.

Because this is obviously a developer announcement, I’m not sure why Baidu decided to use CES as the venue for this release, but there can be no doubt that China’s major tech firms have become quite comfortable with open source. Companies like Baidu, Alibaba and Tencent are members of the Linux Foundation and its growing stable of projects, for example, and virtually every major open-source organization now looks to China as its growth market. It’s no surprise, then, that we’re also now seeing a wider range of Chinese companies open-source their own projects.

“Edge computing is a critical component of Baidu’s ABC (AI, Big Data and Cloud Computing) strategy,” says Baidu VP and GM of Baidu Cloud Watson Yin. “By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage and ultimately brings real-time and immersive experiences to end users. And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications.”

A company spokesperson tells us that the open-source platform will include features like data collection, message distribution and AI inference, as well as tools for syncing with the cloud.

Baidu also today announced that it has partnered with Intel to launch the BIE-AI-Box and with NXP Semiconductors to launch the BIE-AI-Board. The box is designed for in-vehicle video analysis while the board is small enough for cameras, drones, robots and similar applications.

Dec 7, 2018
--

Pivotal announces new serverless framework

Pivotal has always been about making open-source tools for enterprise developers, but surprisingly, up until now, the arsenal has lacked a serverless component. That changed today with the alpha launch of Pivotal Function Service.

“Pivotal Function Service is a Kubernetes-based, multi-cloud function service. It’s part of the broader Pivotal vision of offering you a single platform for all your workloads on any cloud,” the company wrote in a blog post announcing the new service.

What’s interesting about Pivotal’s flavor of serverless, besides the fact that it’s based on open source, is that it has been designed to work both on-prem and in the cloud in a cloud native fashion, hence the Kubernetes-based aspect of it. This is unusual to say the least.

The idea up until now has been that the large-scale cloud providers like Amazon, Google and Microsoft could dial up whatever infrastructure your functions require, then dial them down when you’re finished without you ever having to think about the underlying infrastructure. The cloud provider deals with whatever compute, storage and memory you need to run the function, and no more.

Pivotal wants to take that same idea and make it available across any cloud service. It also wants to make it available on-prem, which may seem curious at first, but Pivotal’s Onsi Fakhouri says customers want the same abilities both on-prem and in the cloud. “One of the key values that you often hear about serverless is that it will run down to zero and there is less utilization, but at the same time there are customers who want to explore and embrace the serverless programming paradigm on-prem,” Fakhouri said. Of course, it is then up to IT to ensure that there are sufficient resources to meet the demands of the serverless programs.

The new package includes several key components for developers: an environment for building, deploying and managing your functions; a native eventing ability that provides a way to build rich event triggers to call whatever functionality you require; and the ability to do all of this within a Kubernetes-based environment. That last piece is particularly important as companies embrace hybrid use cases and need to manage events across on-prem and cloud in a seamless way.

One of the advantages of Pivotal’s approach is that, as an open product, it can work on any cloud. This is in contrast to cloud providers like Amazon, Google and Microsoft, which provide similar services that run exclusively on their own clouds. Pivotal is not the first to build an open-source Function as a Service, but it is attempting to package one in a way that makes it easier to use.

Serverless doesn’t actually mean there are no underlying servers. Instead, it means that developers don’t have to point to any servers because the cloud provider takes care of whatever infrastructure is required. In an on-prem scenario, IT has to make those resources available.
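
The programming model behind any function service boils down to registering functions against event triggers and letting the platform invoke them on demand. A minimal sketch of that model (purely illustrative; not the Pivotal Function Service API):

```python
# Minimal event-triggered function dispatch: the core of the serverless model.
handlers = {}

def on_event(event_type):
    """Register a function to run whenever an event of this type arrives."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

@on_event("order.created")
def send_confirmation(payload):
    return f"confirmation sent to {payload['email']}"

def dispatch(event_type, payload):
    # The platform's job: route the event to every registered function
    # (and, in a real service, scale instances up and down per invocation).
    return [fn(payload) for fn in handlers.get(event_type, [])]

results = dispatch("order.created", {"email": "dev@example.com"})
```

The developer writes only the decorated functions; everything in `dispatch` is what the cloud provider, or in Pivotal’s on-prem scenario the IT team’s Kubernetes cluster, is responsible for.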

Dec 5, 2018
--

Camunda hauls in $28M investment as workflow automation remains hot

Camunda, a Berlin-based company that builds open-source workflow automation software, announced a €25 million (approximately $28 million) investment from Highland Europe today.

This is the company’s first investment in its 10-year history. CEO and co-founder Jakob Freund says the company has been profitable since Day One, but decided to bring in outside capital now to take on a more aggressive international expansion.

The company launched in 2008 and for the first five years offered business process management consulting services, but they found traditional offerings from companies like Oracle, IBM and Pega weren’t encouraging software developers to really embrace BPM and build new applications.

In 2013 the company decided to solve that problem and began a shift from consulting to software. “We launched our own open-source project, Camunda BPM, in 2013. We also offered a commercial distribution, obviously, because that’s where the revenue came from,” Freund explained.

The project took off and they flipped their revenue sources from 80 percent consulting/20 percent software to 90 percent software/10 percent consulting in the five years since first creating the product. They boast 200 paying customers and have built out an entire stack of products since their initial product launch.

The company expanded from 13 employees in 2013 to 100 today, with offices in Berlin and San Francisco. Freund wants to open more offices and to expand the head count. To do that, he felt the time was right to go out and get some outside money. He said they continue to be profitable and more than doubled their ARR (annual recurring revenue) in the last 12 months, but knowing they wanted to expand quickly, they wanted the investment as a hedge in case revenue slowed down during the expansion.

“However, we also want to invest heavily right now and build up the team very quickly over the next couple of years. And we want to do that in such a quick way that we want to make sure that if the revenue growth doesn’t happen as quickly as the headcount building, we’re not getting into any situation where we would then need to go looking for funding,” he explained. Instead, they struck while the company and the overall workflow automation space are hot.

He says they want to open more sales and support offices on the east coast of the U.S. and move into Asia, as well. Further, they want to keep investing in the open-source products, and the new money gives them room to do all of this.

Nov 15, 2018
--

Uber joins Linux Foundation, cementing commitment to open-source tools

Uber announced today at the 2018 Uber Open Summit that it was joining the Linux Foundation as a Gold Member, making a firm commitment to using and contributing to open-source tools.

Uber CTO Thuan Pham sees the Linux Foundation as a place for companies like his to nurture and develop open-source projects. “Open source technology is the backbone of many of Uber’s core services and as we continue to mature, these solutions will become ever more important,” he said in a blog post announcing the partnership.

What’s surprising is not that Uber joined, but that it took so long. Uber has long been known for making use of open source in its core tools, working on over 320 open-source projects and repositories from 1,500 contributors involving over 70,000 commits, according to data provided by the company.

“Uber has made significant investments in shared software development and community collaboration through open source over the years, including contributing the popular open-source project Jaeger, a distributed tracing system, to the Linux Foundation’s Cloud Native Computing Foundation in 2017,” an Uber spokesperson told TechCrunch.

Linux Foundation Executive Director Jim Zemlin was certainly happy to welcome Uber into the fold. “Their expertise will be instrumental for our projects as we continue to advance open solutions for cloud native technologies, deep learning, data visualization and other technologies that are critical to businesses today,” Zemlin said in a statement.

The Linux Foundation is an umbrella group supporting myriad open-source projects and providing an organizational structure for companies like Uber to contribute and maintain open-source projects. It houses sub-organizations like the Cloud Native Computing Foundation, Cloud Foundry Foundation, The Hyperledger Foundation and the Linux operating system, among others.

These open-source projects provide a base on top of which contributing companies and the community of developers can add value and build a business if they wish. Others, like Uber, use these technologies to fuel their backend systems; they won’t sell additional services, but they can capitalize on the openness to meet their own requirements in the future, while also acting as contributors that give as well as take.

Nov 15, 2018
--

Canonical plans to raise its first outside funding as it looks to a future IPO

It’s been 14 years since Mark Shuttleworth first founded and funded Canonical and the Ubuntu project. At the time, it was mostly a Linux distribution. Today, it’s a major enterprise player that offers a variety of products and services. Throughout the years, Shuttleworth self-funded the project and never showed much interest in taking outside money. Now, however, that’s changing.

As Shuttleworth told me, he’s now looking for investors as he looks to get the company on track to an IPO. It’s no secret that the company’s recent re-focusing on the enterprise — and shutting down projects like the Ubuntu phone and the Unity desktop environment — was all about that, after all. Shuttleworth sees raising money as a step in this direction — and as a way of getting the company in shape for going public.

“The first step would be private equity,” he told me. “And really, that’s because having outside investors with outside members of the board essentially starts to get you to have to report and be part of that program. I’ve got a set of things that I think we need to get right. That’s what we’re working towards now. Then there’s a set of things that private investors are looking for and the next set of things is when you’re doing a public offering, there’s a different level of discipline required.”

It’s no secret that Shuttleworth, who sports an impressive beard these days, was previously resistant to this, and he acknowledged as much. “I think that’s a fair characterization,” he said. “I enjoy my independence and I enjoy being able to make long-term calls. I still feel like I’ll have the ability to do that, but I do appreciate keenly the responsibility of taking other people’s money. When it’s your money, it’s slightly different.”

Refocusing Canonical on the enterprise business seems to be paying off already. “The numbers are looking good. The business is looking healthy. It’s not a charity. It’s not philanthropy,” he said. “There are some key metrics that I’m watching, which are the gate for me to take the next step, which would be growth equity.” Those metrics, he told me, are the size of the business and how diversified it is.

Shuttleworth likens this program of getting the company ready to IPO to getting fit. “There’s no point in saying: I haven’t done any exercise in the last 10 years but I’m going to sign up for tomorrow’s marathon,” he said.

The move from being a private company to taking outside investment and going public — especially after all these years of being self-funded — is treacherous, though, and Shuttleworth admitted as much, especially in terms of being forced to set short-term goals to satisfy investors that aren’t necessarily in the best interest of the company in the long term. Shuttleworth thinks he can negotiate those issues.

Interestingly, he thinks the real danger is quite a different one. “I think the most dangerous thing in making that shift is the kind of shallowness of the unreasonably big impact that stock price has on people’s mood,” he said. “Today, at Canonical, it’s 600 people. You might have some that are having a really great day and some that are having a shitty day. And they have to figure out what’s real about both of those scenarios. But they can kind of support each other. […] But when you have a stock ticker, everybody thinks they’re having a great day, or everybody thinks they’re having a shitty day in a way that may be completely uncorrelated to how well they’re actually doing.”

Shuttleworth does not believe that IBM’s acquisition of Canonical competitor Red Hat will have any immediate effect on Canonical’s business, though. What he does think, however, is that the move is making a lot of people rethink, for the first time in years, the investment they’ve been making in Red Hat and its enterprise Linux distribution. Canonical’s promise is that Ubuntu, as well as its OpenStack tools and services, is not just competitive but also more cost-effective in the long run, and offers enterprises an added degree of agility. And if more businesses start looking at Canonical and Ubuntu, that can only speed up Shuttleworth’s (and his bankers’) schedule for hitting Canonical’s metrics for raising money and going public.

Nov 12, 2018
--

The Ceph storage project gets a dedicated open-source foundation

Ceph is an open source technology for distributed storage that gets very little public attention but that provides the underlying storage services for many of the world’s largest container and OpenStack deployments. It’s used by financial institutions like Bloomberg and Fidelity, cloud service providers like Rackspace and Linode, telcos like Deutsche Telekom, car manufacturers like BMW and software firms like SAP and Salesforce.

These days, you can’t have a successful open-source project without setting up a foundation to manage the many diverging interests of the community, so it’s maybe no surprise that Ceph is now getting its own. Like so many other projects, the Ceph Foundation will be hosted by the Linux Foundation.

“While early public cloud providers popularized self-service storage infrastructure, Ceph brings the same set of capabilities to service providers, enterprises, and individuals alike, with the power of a robust development and user community to drive future innovation in the storage space,” writes Sage Weil, Ceph co-creator, project leader, and chief architect at Red Hat for Ceph. “Today’s launch of the Ceph Foundation is a testament to the strength of a diverse open source community coming together to address the explosive growth in data storage and services.”

Given its broad adoption, it’s also no surprise that there’s a wide-ranging list of founding members. These include Amihan Global, Canonical, CERN, China Mobile, DigitalOcean, Intel, ProphetStor Data Service, OVH Hosting, Red Hat, SoftIron, SUSE, Western Digital, XSKY Data Technology and ZTE. It’s worth noting that many of these founding members were already part of the slightly less formal Ceph Community Advisory Board.

“Ceph has a long track record of success when it comes to helping organizations effectively manage high-growth and expanding data storage demands,” said Jim Zemlin, the executive director of the Linux Foundation. “Under the Linux Foundation, the Ceph Foundation will be able to harness investments from a much broader group to help support the infrastructure needed to continue the success and stability of the Ceph ecosystem.”

Ceph is an important building block for vendors who build both OpenStack- and container-based platforms. Indeed, two-thirds of OpenStack users rely on Ceph and it’s a core part of Rook, a Cloud Native Computing Foundation project that makes it easier to build storage services for Kubernetes-based applications. As such, Ceph straddles many different worlds and it makes sense for the project to get its own neutral foundation now, though I can’t help but think that the OpenStack Foundation would’ve also liked to host the project.

Today’s announcement comes only days after the Linux Foundation also announced that it is now hosting the GraphQL Foundation.

Nov 8, 2018
--

Google Cloud wants to make it easier for data scientists to share models

Today, Google Cloud announced Kubeflow pipelines and AI Hub, two tools designed to help data scientists put the models they create to work across their organizations.

Rajen Sheth, director of product management for Google Cloud’s AI and ML products, says that the company recognized that data scientists too often build models that never get used. He says that if machine learning is really a team sport, as Google believes, models must get passed from data scientists to data engineers and developers who can build applications based on them.

To help fix that, Google is announcing Kubeflow pipelines, which are an extension of Kubeflow, an open-source framework built on top of Kubernetes designed specifically for machine learning. Pipelines are essentially containerized building blocks that people in the machine learning ecosystem can string together to build and manage machine learning workflows.

By placing the model in a container, data scientists can simply adjust the underlying model as needed and relaunch in a continuous delivery kind of approach. Sheth says this opens up even more possibilities for model usage in a company.

“[Kubeflow pipelines] also give users a way to experiment with different pipeline variants to identify which ones produce the best outcomes in a reliable and reproducible environment,” Sheth wrote in a blog post announcing the new machine learning features.
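
At its core, the “building blocks strung together” idea is function composition over pipeline stages. A toy sketch of the concept (illustrative only; this is not the Kubeflow pipelines SDK):

```python
def pipeline(*steps):
    """Chain steps so each one's output becomes the next one's input."""
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

# Each function stands in for one containerized stage of an ML workflow.
def preprocess(rows):
    return [r for r in rows if r is not None]

def train(rows):
    return {"model": "v1", "n_samples": len(rows)}

def evaluate(model):
    return {**model, "score": 0.9 if model["n_samples"] > 2 else 0.5}

workflow = pipeline(preprocess, train, evaluate)
result = workflow([1, None, 2, 3])  # drops the None, trains, then evaluates
```

Because each stage only depends on its input, stages can be swapped out individually, which is what makes the pipeline-variant experimentation Sheth describes practical: rerun the same chain with a different `train` step and compare outcomes.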

The company is also announcing AI Hub, which, as the name implies, is a central place where data scientists can go to find different kinds of ML content, including Kubeflow pipelines, Jupyter notebooks, TensorFlow modules and so forth. This will be a public repository seeded with resources developed by Google Cloud AI, Google Research and other teams across Google, allowing data scientists to take advantage of Google’s own research and development expertise.

But Google wanted the hub to be more than a public library — it also sees it as a place where teams can share information privately inside their organizations, giving it a dual purpose. This should provide another way to extend model usage by making essential building blocks available in a central repository.

AI Hub will be available in Alpha starting today with some initial components from Google, as well as tools for sharing some internal resources, but the plan is to keep expanding the offerings and capabilities over time.

Google believes that the easier it is to share model building blocks across an organization, the more likely they are to be put to work. These tools are a step toward achieving that.
