Feb 21, 2019

Redis Labs changes its open-source license — again

Redis Labs, fresh off its latest funding round, today announced a change to how it licenses its Redis Modules. This may not sound like a big deal, but in the world of open-source projects, licensing is currently a big issue. That’s because companies like Redis Labs, MongoDB, Confluent and others have recently introduced new licenses that make it harder for their competitors to take their products and sell them as rebranded services without contributing back to the community (and most of these companies point directly at AWS as the main offender here).

“Some cloud providers have repeatedly taken advantage of successful open source projects, without significant contributions to their communities,” the Redis Labs team writes today. “They repackage software that was not developed by them into competitive, proprietary service offerings and use their business leverage to reap substantial revenues from these open source projects.”

The point of these new licenses is to put a stop to this.

This is not the first time Redis Labs has changed how it licenses its Redis Modules (and I’m stressing the “Redis Modules” part here because this is only about modules from Redis Labs and does not have any bearing on how the Redis database project itself is licensed). Back in 2018, Redis Labs changed its license from AGPL to Apache 2 modified with Commons Clause. The “Commons Clause” is the part that places commercial restrictions on top of the license.

That created quite a stir, as Redis Labs co-founder and CEO Ofer Bengal told me a few days ago when we spoke about the company’s funding.

“When we came out with this new license, there were many different views,” he acknowledged. “Some people condemned that. But after the initial noise calmed down — and especially after some other companies came out with a similar concept — the community now understands that the original concept of open source has to be fixed because it isn’t suitable anymore to the modern era where cloud companies use their monopoly power to adopt any successful open source project without contributing anything to it.”

The way the code was licensed, though, created a bit of confusion, the company now says, because some users thought they were only bound by the terms of the Apache 2 license. Some terms in the Commons Clause, too, weren’t quite clear (including the meaning of “substantial,” for example).

So today, Redis Labs is introducing the Redis Source Available License. This license, too, only applies to certain Redis Modules created by Redis Labs. Users can still get the code, modify it and integrate it into their applications — but that application can’t be a database product, caching engine, stream processing engine, search engine, indexing engine or ML/DL/AI serving engine.

By definition, an open-source license can’t place restrictions on how the software is used. This new license does, so it’s technically not an open-source license. In practice, the company argues, it’s quite similar to other permissive open-source licenses, though, and shouldn’t really affect most developers who use the company’s modules (and these modules are RedisSearch, RedisGraph, RedisJSON, RedisML and RedisBloom).

This is surely not the last we’ve heard of this. Sooner or later, more projects will follow the same path. By then, we’ll likely see more standard licenses that address this issue so other companies won’t have to change multiple times. Ideally, though, we won’t need it because everybody will play nice — but since we’re not living in a utopia, that’s not likely to happen.

Feb 7, 2019

Google open sources ClusterFuzz

Google today announced that it is open sourcing ClusterFuzz, a scalable fuzzing tool that can run on clusters with more than 25,000 cores.

The company has long used the tool internally, and if you’ve paid particular attention to Google’s fuzzing efforts (and you have, right?), then this may all seem a bit familiar. That’s because Google launched the OSS-Fuzz service a couple of years ago and that service actually used ClusterFuzz. OSS-Fuzz was only available to open-source projects, though, while ClusterFuzz is now available for anyone to use.

The overall concept behind fuzzing is pretty straightforward: you basically throw lots of data (including random inputs) at your application and see how it reacts. Often it’ll simply crash, but those failures can also point you to memory leaks and security flaws. Once you start doing this at scale, though, it becomes more complicated, and you’ll need tools like ClusterFuzz to manage that complexity.
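
To make the core idea concrete, here is a minimal sketch of naive random fuzzing in Python. The `parse_record` target and its bug are invented for illustration; real engines such as libFuzzer and AFL are far more sophisticated, using coverage feedback to mutate inputs rather than generating them blindly.

```python
import os
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical target: a tiny parser with a lurking bug."""
    if not data:
        raise ValueError("empty input")
    # Deliberate flaw: reads data[1] without checking there are two bytes.
    header, length = data[0], data[1]
    payload = data[2:2 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return {"header": header, "payload": payload}

def fuzz(iterations: int = 10_000) -> None:
    """Throw random inputs at the target and record anything unexpected."""
    crashes = []
    for _ in range(iterations):
        data = os.urandom(random.randint(0, 64))
        try:
            parse_record(data)
        except ValueError:
            pass  # well-behaved rejection of malformed input
        except Exception as exc:  # unexpected failure worth triaging
            crashes.append((data, exc))
    print(f"{len(crashes)} unexpected failures out of {iterations} inputs")
    if crashes:
        data, exc = crashes[0]
        print(f"example: {data!r} -> {type(exc).__name__}: {exc}")

if __name__ == "__main__":
    fuzz()
```

Even this toy harness shows the pattern ClusterFuzz automates at scale: generate inputs, run the target, and triage anything that fails in an unexpected way.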

ClusterFuzz automates the fuzzing process all the way from bug detection to reporting — and then retesting the fix. The tool itself also uses open-source libraries like the libFuzzer fuzzing engine and the AFL fuzzer to power some of the core fuzzing features that generate the test cases for the tool.

Google says it has used the tool to find more than 16,000 bugs in Chrome and 11,000 bugs in more than 160 open-source projects that used OSS-Fuzz. Since so much of the software testing and deployment toolchain is now generally automated, it’s no surprise that fuzzing is also becoming a hot topic these days (I’ve seen references to “continuous fuzzing” pop up quite a bit recently).

Jan 29, 2019

Timescale announces $15M investment and new enterprise version of TimescaleDB

It’s a big day for Timescale, makers of the open-source time-series database, TimescaleDB. The company announced a $15 million investment and a new enterprise version of the product.

The investment is technically an extension of the $12.4 million Series A it raised last January, which it’s referring to as A1. Today’s round is led by Icon Ventures, with existing investors Benchmark, NEA and Two Sigma Ventures also participating. With today’s funding, the startup has raised $31 million.

Timescale makes a time-series database. That means it’s built to ingest large volumes of timestamped data and analyze how those values change over time. This comes in handy for a variety of use cases, from financial services to smart homes to self-driving cars — or any data-intensive activity you want to measure over time.

While there are a number of time-series database offerings on the market, Timescale co-founder and CEO Ajay Kulkarni says that what makes his company’s approach unique is that it uses SQL, one of the most popular languages in the world. Timescale wanted to take advantage of that penetration and build its product on top of Postgres, the popular open-source SQL database. This gave it an offering that is based on SQL and is highly scalable.
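
Here is a minimal sketch of what that SQL-first approach looks like in practice, assuming a Postgres instance with the TimescaleDB extension installed; the connection details, table and column names are placeholders for illustration.

```python
import psycopg2

# Assumes a local Postgres server with the TimescaleDB extension available;
# the connection string and the 'conditions' table are illustrative only.
conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS timescaledb;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS conditions (
            time        TIMESTAMPTZ NOT NULL,
            device_id   TEXT        NOT NULL,
            temperature DOUBLE PRECISION
        );
    """)
    # Turn the plain Postgres table into a time-partitioned hypertable.
    cur.execute(
        "SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);"
    )
    # From here on it is ordinary SQL: inserts and time-bucketed aggregates.
    cur.execute("INSERT INTO conditions VALUES (now(), 'sensor-1', 21.5);")
    cur.execute("""
        SELECT time_bucket('1 hour', time) AS hour,
               device_id,
               avg(temperature)
        FROM conditions
        GROUP BY hour, device_id
        ORDER BY hour;
    """)
    for row in cur.fetchall():
        print(row)

conn.close()
```

The point of the sketch is that `create_hypertable` and `time_bucket` are the only Timescale-specific pieces; everything else is plain Postgres SQL.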

Timescale admittedly came late to the market in 2017, but by offering a unique approach and making it open source, it has been able to gain traction quickly. “Despite entering into what is a very crowded database market, we’ve seen quite a bit of community growth because of this message of SQL and scale for time series,” Kulkarni told TechCrunch.

In just over 22 months, the company has more than a million downloads and a range of users, from old-guard companies like Charter, Comcast and Hexagon Mining to more modern companies like Nutanix and TransferWise.

With a strong base community in place, the company believes that it’s now time to commercialize its offering, and in addition to an open-source license, it’s introducing a commercial license. “Up until today, our main business model has been through support and deployment assistance. With this new release, we also will have enterprise features that are available with a commercial license,” Kulkarni explained.

The commercial version will offer a more sophisticated automation layer for larger companies with greater scale requirements. It will also provide better lifecycle management, so companies can get rid of older data or move it to cheaper long-term storage to reduce costs. It’s also offering the ability to reorder data in an automated fashion when that’s required, and, finally, it’s making it easier to turn the time series data into a series of data points for analytics purposes. The company also hinted that a managed cloud version is on the road map for later this year.

The new money should help Timescale continue fueling the growth and development of the product, especially as it builds out the commercial offering. Timescale, which was founded in 2015 in NYC, currently has 30 employees. With the new influx of cash, it expects to double that over the next year.

Jan 23, 2019

Open-source leader Confluent raises $125M on $2.5B valuation

Confluent, the commercial company built on top of the open-source Apache Kafka project, announced a $125 million Series D round this morning on an enormous $2.5 billion valuation.

The round was led by existing investor Sequoia Capital, with participation from Index Ventures and Benchmark, which also participated in previous rounds. Today’s investment brings the total raised to $206 million, according to the company.

The valuation soared from the previous round when the company was valued at $500 million. What’s more, the company’s bookings have scaled along with the valuation.

While CEO Jay Kreps wouldn’t comment directly on a future IPO, he hinted that it is something the company is looking to do at some point. “With our growth and momentum so far, and with the latest funding, we are in a very good position to and have a desire to build a strong, independent company,” Kreps told TechCrunch.

Confluent is built around Kafka, a streaming data technology that processes massive amounts of information in real time, something that comes in handy in today’s data-intensive environment. The underlying streaming technology was developed at LinkedIn as a means of moving massive amounts of messages. LinkedIn decided to open-source that technology in 2011, and Confluent launched as the commercial arm in 2014.

Kreps, writing in a company blog post announcing the funding, said that the events concept encompasses the basic building blocks of businesses. “These events are the orders, sales and customer experiences, that constitute the operation of the business. Databases have long helped to store the current state of the world, but we think this is only half of the story. What is missing are the continually flowing stream of events that represents everything happening in a company, and that can act as the lifeblood of its operation,” he wrote.
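
A minimal sketch of that event-stream model, using the confluent-kafka Python client: one service publishes an order event to a topic, and another service reads the same stream independently. The broker address, topic name and payload are placeholders.

```python
import json
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"   # placeholder broker address
TOPIC = "orders"            # hypothetical topic of business events

# Publish an "order" event to the stream.
producer = Producer({"bootstrap.servers": BROKER})
event = {"order_id": 42, "customer": "acme", "amount": 99.50}
producer.produce(TOPIC, key=str(event["order_id"]), value=json.dumps(event))
producer.flush()

# A downstream service independently consumes the same stream of events.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "billing-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print("consumed event:", json.loads(msg.value()))
consumer.close()
```

Because the stream is durable and shared, any number of downstream consumers (billing, analytics, fraud detection) can replay the same events without the producer knowing about them.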

Kreps pointed out that as an open-source project, Confluent depends on the community. “This is not something we’re doing alone. Apache Kafka has a massive community of contributors of which we’re just one part,” he wrote.

While the base open-source component remains available for free download, it doesn’t include the additional tooling the company has built to make it easier for enterprises to use Kafka. Recent additions include a managed cloud version of the product and a marketplace, Confluent Hub, for sharing extensions to the platform.

As we watch the company’s valuation soar, it does so against a backdrop of other companies based on open source selling for big bucks in 2018, including IBM buying Red Hat for $34 billion in October and Salesforce acquiring MuleSoft in the spring for $6.5 billion.

The company’s previous round was $50 million in March 2017.

Jan 22, 2019

Linux Foundation launches Hyperledger Grid to provide framework for supply chain projects

The Linux Foundation’s Hyperledger Project has a singular focus on the blockchain, but this morning it announced Hyperledger Grid, a framework for building supply chain projects in which blockchain isn’t necessarily the star of the show.

In fact, the foundation is careful to point out that this project is not specifically about the blockchain, so much as providing the building blocks for a broader view of solving supply chain digitization issues. As it describes in a blog post announcing the project, it is neither an application nor a blockchain project, per se. So what is it?

“Grid is an ecosystem of technologies, frameworks and libraries that work together, letting application developers make the choice as to which components are most appropriate for their industry or market model.”

Hyperledger doesn’t want to get locked down by jargon or preconceived notions of what these projects should look like. It wants to provide developers with a set of tools and libraries and let them loose to come up with ideas and build applications specific to their industry requirements.

Primary contributors to the project to this point have been Cargill, Intel and Bitwise IO.

Supply chain has been a major early use case for distributed ledger applications in the enterprise. In fact, earlier today we covered an announcement from Citizens Reserve, a startup building a Supply Chain as a Service on the blockchain. IBM has been working on several supply chain use cases, including diamond tracking and food supply protection.

But the distributed ledger idea is so new, both for supply chain and the enterprise in general, that developers are still very much finding their way. By providing a flexible, open-source framework, the Linux Foundation is giving developers an open option and a foundation on which to build applications as this all shakes out.

Jan 9, 2019

AWS gives open source the middle finger

AWS launched DocumentDB today, a new database offering that is compatible with the MongoDB API. The company describes DocumentDB as a “fast, scalable, and highly available document database that is designed to be compatible with your existing MongoDB applications and tools.” In effect, it’s a hosted drop-in replacement for MongoDB that doesn’t use any MongoDB code.

AWS argues that while MongoDB is great at what it does, its customers have found it hard to build fast and highly available applications on the open-source platform that can scale to multiple terabytes and hundreds of thousands of reads and writes per second. So what the company did was build its own document database, but made it compatible with the Apache 2.0 open source MongoDB 3.6 API.
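
The point of API compatibility is that existing MongoDB client code keeps working once the connection string is swapped. Here is a minimal sketch with pymongo, where the endpoints and credentials are placeholders; anything that relies on features added after the 3.6 API (multi-document transactions, for example) would not carry over.

```python
from pymongo import MongoClient

# Placeholder connection strings; only the URI changes between the two backends.
MONGODB_URI = "mongodb://localhost:27017"
DOCUMENTDB_URI = "mongodb://user:pass@docdb-cluster.example.amazonaws.com:27017/?ssl=true"

def record_signup(uri: str) -> None:
    """The same application code runs against either endpoint."""
    client = MongoClient(uri)
    users = client.appdb.users           # 'appdb' and 'users' are made-up names
    users.insert_one({"name": "Ada", "plan": "pro"})
    print(users.find_one({"name": "Ada"}))
    client.close()

# Code written against the MongoDB 3.6 wire protocol should behave the same
# on either side; post-3.6 features are another story.
record_signup(MONGODB_URI)
```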

If you’ve been following the politics of open source over the last few months, you’ll understand that the optics of this aren’t great. It’s also no secret that AWS has long been accused of taking the best open-source projects and re-using and re-branding them without always giving back to those communities.

The wrinkle here is that MongoDB was one of the first companies that aimed to put a stop to this by re-licensing its open-source tools under a new license that explicitly stated that companies that wanted to do this had to buy a commercial license. Since then, others have followed.

“Imitation is the sincerest form of flattery, so it’s not surprising that Amazon would try to capitalize on the popularity and momentum of MongoDB’s document model,” MongoDB CEO and president Dev Ittycheria told us. “However, developers are technically savvy enough to distinguish between the real thing and a poor imitation. MongoDB will continue to outperform any impersonations in the market.”

That’s a pretty feisty comment. Last November, Ittycheria told my colleague Ron Miller that he believed that AWS loved MongoDB because it drives a lot of consumption. In that interview, he also noted that “customers have spent the last five years trying to extricate themselves from another large vendor. The last thing they want to do is replay the same movie.”

MongoDB co-founder and CTO Eliot Horowitz echoed this. “In order to give developers what they want, AWS has been pushed to offer an imitation MongoDB service that is based on the MongoDB code from two years ago,” he said. “Our entire company is focused on one thing — giving developers the best way to work with data with the freedom to run anywhere. Our commitment to that single mission will continue to differentiate the real MongoDB from any imitation products that come along.”

A company spokesperson for MongoDB also highlighted that the 3.6 API that DocumentDB is compatible with is now two years old and misses most of the newest features, including ACID transactions, global clusters and mobile sync.

To be fair, AWS has become more active in open source lately and, in a way, it’s giving developers what they want (and not all developers are happy with MongoDB’s own hosted service). Bypassing MongoDB’s licensing by going for API compatibility, given that AWS knows exactly why MongoDB did that, was always going to be a controversial move and won’t endear the company to the open-source community.

Jan 9, 2019

Baidu Cloud launches its open-source edge computing platform

At CES, the Chinese tech giant Baidu today announced OpenEdge, its open-source edge computing platform. At its core, OpenEdge is the local package component of Baidu’s existing Intelligent Edge (BIE) commercial offering and obviously plays well with that service’s components for managing edge nodes and apps.

Because this is obviously a developer announcement, I’m not sure why Baidu decided to use CES as the venue for this release, but there can be no doubt that China’s major tech firms have become quite comfortable with open source. Companies like Baidu, Alibaba and Tencent are often members of the Linux Foundation and its growing stable of projects, for example, and virtually every major open-source organization now looks to China as its growth market. It’s no surprise, then, that we’re also now seeing a wider range of Chinese companies open source their own projects.

“Edge computing is a critical component of Baidu’s ABC (AI, Big Data and Cloud Computing) strategy,” says Baidu VP and GM of Baidu Cloud Watson Yin. “By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage and ultimately brings real-time and immersive experiences to end users. And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications.”

A company spokesperson tells us that the open-source platform will include features like data collection, message distribution and AI inference, as well as tools for syncing with the cloud.

Baidu also today announced that it has partnered with Intel to launch the BIE-AI-Box and with NXP Semiconductors to launch the BIE-AI-Board. The box is designed for in-vehicle video analysis while the board is small enough for cameras, drones, robots and similar applications.

Dec 7, 2018

Pivotal announces new serverless framework

Pivotal has always been about making open-source tools for enterprise developers, but surprisingly, up until now, the arsenal has lacked a serverless component. That changed today with the alpha launch of Pivotal Function Service.

“Pivotal Function Service is a Kubernetes-based, multi-cloud function service. It’s part of the broader Pivotal vision of offering you a single platform for all your workloads on any cloud,” the company wrote in a blog post announcing the new service.

What’s interesting about Pivotal’s flavor of serverless, besides the fact that it’s based on open source, is that it has been designed to work both on-prem and in the cloud in a cloud native fashion, hence the Kubernetes-based aspect of it. This is unusual to say the least.

The idea up until now has been that the large-scale cloud providers like Amazon, Google and Microsoft could dial up whatever infrastructure your functions require, then dial them down when you’re finished without you ever having to think about the underlying infrastructure. The cloud provider deals with whatever compute, storage and memory you need to run the function, and no more.

Pivotal wants to take that same idea and make it available in the cloud across any cloud service. It also wants to make it available on-prem, which may seem curious at first, but Pivotal’s Onsi Fakhouri says customers want those same abilities both on-prem and in the cloud. “One of the key values that you often hear about serverless is that it will run down to zero and there is less utilization, but at the same time there are customers who want to explore and embrace the serverless programming paradigm on-prem,” Fakhouri said. Of course, then it is up to IT to ensure that there are sufficient resources to meet the demands of the serverless programs.

The new package includes several key components for developers: an environment for building, deploying and managing your functions; a native eventing ability that provides a way to build rich event triggers to call whatever functionality you require; and the ability to do all of this within a Kubernetes-based environment. This is particularly important as companies embrace a hybrid use case to manage the events across on-prem and cloud in a seamless way.

One of the advantages of Pivotal’s approach is that it can work on any cloud as an open product. This is in contrast to cloud providers like Amazon, Google and Microsoft, which provide similar services that run exclusively on their clouds. Pivotal is not the first to build an open-source Function as a Service, but it is attempting to package it in a way that makes it easier to use.

Serverless doesn’t actually mean there are no underlying servers. Instead, it means that developers don’t have to point to any servers because the cloud provider takes care of whatever infrastructure is required. In an on-prem scenario, IT has to make those resources available.
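
To make the programming model concrete, here is a minimal, generic sketch of the kind of function a developer hands to a platform like this: a small handler that reacts to an event and returns a result, while the platform provisions, scales and tears down whatever runs it. The handler signature and event shape are invented for illustration and are not the actual Pivotal Function Service API.

```python
# A generic function-as-a-service handler sketch. The event shape and the
# handler name are illustrative; each platform defines its own contract.

def handle(event: dict) -> dict:
    """React to a single event; the platform decides where and when this runs."""
    order_total = sum(item["price"] * item["qty"] for item in event.get("items", []))
    return {
        "order_id": event.get("order_id"),
        "total": round(order_total, 2),
        "status": "priced",
    }

if __name__ == "__main__":
    # Locally, the "platform" is just a direct call with a sample event.
    sample_event = {
        "order_id": "A-1001",
        "items": [{"price": 19.99, "qty": 2}, {"price": 5.00, "qty": 1}],
    }
    print(handle(sample_event))
```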

Dec 5, 2018

Camunda hauls in $28M investment as workflow automation remains hot

Camunda, a Berlin-based company that builds open-source workflow automation software, announced a €25 million (approximately $28 million) investment from Highland Europe today.

This is the company’s first investment in its 10-year history. CEO and co-founder Jakob Freund says the company has been profitable since Day One, but decided to bring in outside capital now to take on a more aggressive international expansion.

The company launched in 2008 and for the first five years offered business process management consulting services, but they found traditional offerings from companies like Oracle, IBM and Pega weren’t encouraging software developers to really embrace BPM and build new applications.

In 2013 the company decided to solve that problem and began a shift from consulting to software. “We launched our own open-source project, Camunda BPM, in 2013. We also offered a commercial distribution, obviously, because that’s where the revenue came from,” Freund explained.

The project took off and they flipped their revenue sources from 80 percent consulting/20 percent software to 90 percent software/10 percent consulting in the five years since first creating the product. They boast 200 paying customers and have built out an entire stack of products since their initial product launch.

The company expanded from 13 employees in 2013 to 100 today, with offices in Berlin and San Francisco. Freund wants to open more offices and to expand the head count. To do that, he felt the time was right to go out and get some outside money. He said they continue to be profitable and more than doubled their ARR (annual recurring revenue) in the last 12 months, but knowing they wanted to expand quickly, they wanted the investment as a hedge in case revenue slowed down during the expansion.

“However, we also want to invest heavily right now and build up the team very quickly over the next couple of years. And we want to do that in such a quick way that we want to make sure that if the revenue growth doesn’t happen as quickly as the headcount building, we’re not getting any situation where we would then need to go look for funding,” he explained. Instead, they struck while the company and the overall workflow automation space is hot.

He says they want to open more sales and support offices on the east coast of the U.S. and move into Asia, as well. Further, they want to keep investing in the open-source products, and the new money gives them room to do all of this.

Nov 15, 2018

Uber joins Linux Foundation, cementing commitment to open-source tools

Uber announced today at the 2018 Uber Open Summit that it was joining the Linux Foundation as a Gold Member, making a firm commitment to using and contributing to open-source tools.

Uber CTO Thuan Pham sees the Linux Foundation as a place for companies like his to nurture and develop open-source projects. “Open source technology is the backbone of many of Uber’s core services and as we continue to mature, these solutions will become ever more important,” he said in a blog post announcing the partnership.

What’s surprising is not that they joined, but that it took so long. Uber has long been known for making use of open source in its core tools, working on over 320 open-source projects and repositories from 1,500 contributors involving over 70,000 commits, according to data provided by the company.

“Uber has made significant investments in shared software development and community collaboration through open source over the years, including contributing the popular open-source project Jaeger, a distributed tracing system, to the Linux Foundation’s Cloud Native Computing Foundation in 2017,” an Uber spokesperson told TechCrunch.

Linux Foundation Executive Director Jim Zemlin was certainly happy to welcome Uber into the fold. “Their expertise will be instrumental for our projects as we continue to advance open solutions for cloud native technologies, deep learning, data visualization and other technologies that are critical to businesses today,” Zemlin said in a statement.

The Linux Foundation is an umbrella group supporting myriad open-source projects and providing an organizational structure for companies like Uber to contribute and maintain open-source projects. It houses sub-organizations like the Cloud Native Computing Foundation, Cloud Foundry Foundation, The Hyperledger Foundation and the Linux operating system, among others.

These open-source projects provide a base on top of which contributing companies and the community of developers can add value if they wish and build a business. Others, like Uber, which use these technologies to fuel their backend systems, won’t sell additional services, but can capitalize on the openness to meet their own requirements in the future, while also acting as contributors that give as well as take.
