Apr 09, 2019

Google Cloud Run brings serverless and containers together

Two of the biggest trends in application development in recent years have been the rise of serverless and containerization. Today at Google Cloud Next, Google announced a new product called Cloud Run that is designed to bring the two together. At the same time, the company also announced Cloud Run for GKE, which is specifically designed to run on Google’s version of Kubernetes.

Oren Teich, director of product management for serverless, says these products came out of discussions with customers. As he points out, developers like the flexibility and agility they get using serverless architecture, but have been looking for more than just compute resources. They want to get access to the full stack, and to that end the company is announcing Cloud Run.

“Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We’re taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it’s end-to-end managed,” Teich explained.
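
The developer-facing contract behind that pitch is small: Cloud Run runs any container image that serves HTTP on the port passed in the PORT environment variable. As a rough sketch (the service code and names here are illustrative assumptions, not from Google’s announcement), the entire application side can be this minimal:

```python
# app.py -- minimal HTTP service for a Cloud Run-style container platform.
# Illustrative sketch: Cloud Run injects the listening port via the PORT
# environment variable; the handler logic and response body are made up.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # set by the platform
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Packaged into a Docker image and deployed (at launch, via the gcloud beta run commands), everything above the container — TLS, routing, autoscaling down to zero — is the part Teich describes Google managing for you.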

As for Cloud Run for GKE, it provides the same kinds of benefits, but for developers running their containers on GKE, Google’s managed version of Kubernetes. Keep in mind, developers could be using any version of Kubernetes their organizations happen to have chosen, so it’s not a given that they will be using Google’s flavor of Kubernetes.

“What this means is that a developer can take the exact same experience, the exact same code they’ve written — they have the gcloud command line, the same UI in our console — and with one click they can target the destination they want,” he said.

All of this is made possible through yet another open-source project the company introduced last year called Knative. “Cloud Run is based on Knative, an open API and runtime environment that lets you run your serverless workloads anywhere you choose — fully managed on Google Cloud Platform, on your GKE cluster or on your own self-managed Kubernetes cluster,” Teich and Eyal Manor, VP of engineering, wrote in a blog post introducing Cloud Run.

Serverless, as you probably know by now, is a bit of a misnomer. It’s not really taking away servers, but it is eliminating the need for developers to worry about them. Instead of loading their application on a particular virtual machine, the cloud provider, in this case, Google, provisions the exact level of resources required to run an operation. Once that’s done, these resources go away, so you only pay for what you use at any given moment.

Apr 02, 2019

Chef goes 100% open source

Chef, the popular automation service, today announced that it is open-sourcing all of its software under the Apache 2 license. Until now, Chef used an open core model with a number of proprietary products that complemented its open-source tools. Most of these proprietary tools focused on enterprise users and their security and deployment needs. Now, all of these tools, which represent somewhere between a third and a half of Chef’s total code base, are open source, too.

“We’re moving away from our open core model,” Chef SVP of products and engineering Corey Scobie told me. “We’re now moving to exclusively open-source software development.”

He added that this also includes open product development. Going forward, the company plans to share far more details about its roadmap, feature backlogs and other product development details. All of Chef’s commercial offerings will also be built from the same open-source code that everybody now has access to.

Scobie noted there are a number of reasons why the company is doing this. He believes, for example, that the best way to build software is to collaborate in public with those who are actually using it.

“With that philosophy in mind, it was really easy to justify how we’d take the remainder of the software that we produce and make it open source,” Scobie said. “We believe that that’s the best way to build software that works for people — real people in the real world.”

Another reason, Scobie said, is that it was becoming increasingly difficult for Chef to explain which parts of the software were open source and which were not. “We wanted to make that conversation easier, to be perfectly honest.”

Chef’s decision comes during a bit of a tumultuous time in the open-source world. A number of companies like Redis, MongoDB and Elastic have recently moved to licenses that explicitly disallow the commercial use of their open-source products by large cloud vendors like AWS unless they also buy a commercial license.

But here is Chef, open-sourcing everything. Chef co-founder and board member Adam Jacob doesn’t think that’s a problem. “In the open core model, you’re saying that the value is in this proprietary sliver. The part you pay me for is this sliver of its value. And I think that’s incorrect,” he said. “I think, in fact, the value was always in the totality of the product.”

Jacob also argues that those companies that are moving to these new, more restrictive licenses, are only hurting themselves. “It turns out that the product was what mattered in the first place,” he said. “They continue to produce great enterprise software for their customers and their customers continue to be happy and continue to buy it, which is what they always would’ve done.” He also noted that he doesn’t think AWS will ever be better at running Elasticsearch than Elastic or, for that matter, better at running Chef than Chef.

It’s worth noting that Chef also today announced the launch of its Enterprise Automation Stack, which brings together under a unified umbrella all of Chef’s tools (Chef Automate, Infra, InSpec, Habitat and Workstation).

“Chef is fully committed to enabling organizations to eliminate friction across the lifecycle of all of their applications, ensuring that, whether they build their solutions from our open-source code or license our commercial distribution, they can benefit from collaboration as code,” said Chef CEO Barry Crist. “Chef Enterprise Automation Stack lets teams establish and maintain a consistent path to production for any application, in order to increase velocity and improve efficiency, so deployment and updates of mission-critical software become easier, move faster and work flawlessly.”

Mar 14, 2019

Microsoft open sources its data compression algorithm and hardware for the cloud

The amount of data that the big cloud computing providers now store is staggering, so it’s no surprise that most store all of this information as compressed data in some form or another — just like you used to zip your files back in the days of floppy disks, CD-ROMs and low-bandwidth connections. Typically, those systems are closely guarded secrets, but today, Microsoft open sourced the algorithm, hardware specification and Verilog source code for how it compresses data in its Azure cloud. The company is contributing all of this to the Open Compute Project (OCP).

Project Zipline, as Microsoft calls this project, can achieve 2x higher compression ratios compared to the standard Zlib-L4 64KB model. To do this, the algorithm — and its hardware implementation — were specifically tuned for the kind of large data sets Microsoft sees in its cloud. Because the system works at the systems level, there is virtually no overhead and Microsoft says that it is actually able to manage higher throughput rates and lower latency than other algorithms are currently able to achieve.
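
To make the benchmark concrete: a compression ratio is just original size divided by compressed size, and the “Zlib-L4 64KB” baseline refers to zlib at compression level 4 over 64KB blocks. A rough sketch of measuring that baseline with Python’s zlib module (the sample corpus is invented; Microsoft’s test data is its own cloud workloads):

```python
# Rough sketch of the "Zlib-L4 64KB" baseline: compress data in 64 KB
# blocks with zlib level 4 and report the overall compression ratio.
# The sample corpus below is illustrative, not Microsoft's benchmark set.
import zlib

def zlib_l4_64kb_ratio(data: bytes) -> float:
    block = 64 * 1024
    compressed = sum(
        len(zlib.compress(data[i:i + block], 4))
        for i in range(0, len(data), block)
    )
    return len(data) / compressed

sample = b"timestamp=2019-03-14 level=INFO msg=request served\n" * 20000
print(f"baseline ratio: {zlib_l4_64kb_ratio(sample):.2f}x")
# Zipline's claim is roughly double whatever this baseline achieves.
```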

Microsoft stresses that it is also contributing the Verilog source code for register transfer language (RTL) — that is, the low-level code that makes this all work. “Contributing RTL at this level of detail as open source to OCP is industry leading,” Kushagra Vaid, the general manager for Azure hardware infrastructure, writes. “It sets a new precedent for driving frictionless collaboration in the OCP ecosystem for new technologies and opening the doors for hardware innovation at the silicon level.”

Microsoft is currently using this system in its own Azure cloud, but it is now also partnering with others in the Open Compute Project. Among these partners are Intel, AMD, Ampere, Arm, Marvell, SiFive, Broadcom, Fungible, Mellanox, NGD Systems, Pure Storage, Synopsys and Cadence.

“Over time, we anticipate Project Zipline compression technology will make its way into several market segments and usage models such as network data processing, smart SSDs, archival systems, cloud appliances, general purpose microprocessor, IoT, and edge devices,” writes Vaid.

Feb 27, 2019

Open-source communities fight over telco market

When you think of MWC Barcelona, chances are you’re thinking about the newest smartphones and other mobile gadgets, but that’s only half the story. Actually, it’s probably far less than half the story because the majority of the business that’s done at MWC is enterprise telco business. Not too long ago, that business was all about selling expensive proprietary hardware. Today, it’s about moving all of that into software — and a lot of that software is open source.

It’s maybe no surprise then that this year, the Linux Foundation (LF) has its own booth at MWC. It’s not massive, but it’s big enough to have its own meeting space. The booth is shared by the three LF projects: the Cloud Native Computing Foundation (CNCF), Hyperledger and Linux Foundation Networking, the home of many of the foundational projects like ONAP and the Open Platform for NFV (OPNFV) that power many a modern network. And with the advent of 5G, there’s a lot of new market share to grab here.

To discuss the CNCF’s role at the event, I sat down with Dan Kohn, the executive director of the CNCF.

At MWC, the CNCF launched its testbed for comparing the performance of virtual network functions on OpenStack and what the CNCF calls cloud-native network functions, using Kubernetes (with the help of bare-metal host Packet). The project’s results — at least so far — show that the cloud-native container-based stack can handle far more network functions per second than the competing OpenStack code.

“The message that we are sending is that with Kubernetes as a universal platform that runs on top of bare metal or any cloud, most of your virtual network functions can be ported over to cloud-native network functions,” Kohn said. “All of your operating support system, all of your business support system software can also run on Kubernetes on the same cluster.”

OpenStack, in case you are not familiar with it, is another massive open-source project that helps enterprises manage their own data center software infrastructure. One of OpenStack’s biggest markets has long been the telco industry. There has always been a bit of friction between the two foundations, especially now that the OpenStack Foundation has opened up its organization to projects that aren’t directly related to the core OpenStack projects.

I asked Kohn if he is explicitly positioning the CNCF/Kubernetes stack as an OpenStack competitor. “Yes, our view is that people should be running Kubernetes on bare metal and that there’s no need for a middle layer,” he said — and that’s something the CNCF has never stated quite as explicitly before but that was always playing in the background. He also acknowledged that some of this friction stems from the fact that the CNCF and the OpenStack foundation now compete for projects.

OpenStack Foundation, unsurprisingly, doesn’t agree. “Pitting Kubernetes against OpenStack is extremely counterproductive and ignores the fact that OpenStack is already powering 5G networks, in many cases in combination with Kubernetes,” OpenStack COO Mark Collier told me. “It also reflects a lack of understanding about what OpenStack actually does, by suggesting that it’s simply a virtual machine orchestrator. That description is several years out of date. Moving away from VMs, which makes sense for many workloads, does not mean moving away from OpenStack, which manages bare metal, networking and authentication in these environments through the Ironic, Neutron and Keystone services.”

Similarly, ex-OpenStack Foundation board member (and Mirantis co-founder) Boris Renski told me that “just because containers can replace VMs, this doesn’t mean that Kubernetes replaces OpenStack. Kubernetes’ fundamental design assumes that something else is there that abstracts away low-level infrastructure, and is meant to be an application-aware container scheduler. OpenStack, on the other hand, is specifically designed to abstract away low-level infrastructure constructs like bare metal, storage, etc.”

This overall theme continued with Kohn and the CNCF taking a swipe at Kata Containers, the first project the OpenStack Foundation took on after it opened itself up to other projects. Kata Containers promises to offer a combination of the flexibility of containers with the additional security of traditional virtual machines.

“We’ve got this FUD out there around Kata saying: telcos will need to use Kata, a) because of the noisy neighbor problem and b) because of the security,” said Kohn. “First of all, that’s FUD and second, micro-VMs are a really interesting space.”

He believes it’s an interesting space for situations where you are running third-party code (think AWS Lambda running Firecracker) — but telcos don’t typically run that kind of code. He also argues that Kubernetes handles noisy neighbors just fine because you can constrain how many resources each container gets.
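
The noisy-neighbor point refers to Kubernetes’ per-container resource requests and limits: requests guide scheduling, while limits are a hard ceiling enforced at runtime. A sketch using the official Kubernetes Python client (the names, image and sizes are hypothetical):

```python
# Sketch: constraining a container's CPU and memory with the official
# Kubernetes Python client. Pod/container names, image and sizes are
# hypothetical; requests guide scheduling, limits cap actual usage.
from kubernetes import client

container = client.V1Container(
    name="packet-processor",
    image="example.com/cnf:latest",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "256Mi"},  # scheduler's input
        limits={"cpu": "1", "memory": "512Mi"},       # enforced ceiling
    ),
)
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cnf-0"),
    spec=client.V1PodSpec(containers=[container]),
)
# client.CoreV1Api().create_namespaced_pod("default", pod) would submit it.
```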

It seems both organizations have a fair argument here. On the one hand, Kubernetes may be able to handle some use cases better and provide higher throughput than OpenStack. On the other hand, OpenStack handles plenty of other use cases, too, and this is a very specific use case. What’s clear, though, is that there’s quite a bit of friction here, which is a shame.

Feb 21, 2019

Redis Labs changes its open-source license — again

Redis Labs, fresh off its latest funding round, today announced a change to how it licenses its Redis Modules. This may not sound like a big deal, but in the world of open-source projects, licensing is currently a big issue. That’s because organizations like Redis, MongoDB, Confluent and others have recently introduced new licenses that make it harder for their competitors to take their products and sell them as rebranded services without contributing back to the community (and most of these companies point directly at AWS as the main offender here).

“Some cloud providers have repeatedly taken advantage of successful open source projects, without significant contributions to their communities,” the Redis Labs team writes today. “They repackage software that was not developed by them into competitive, proprietary service offerings and use their business leverage to reap substantial revenues from these open source projects.”

The point of these new licenses is to put a stop to this.

This is not the first time Redis Labs has changed how it licenses its Redis Modules (and I’m stressing the “Redis Modules” part here because this is only about modules from Redis Labs and does not have any bearing on how the Redis database project itself is licensed). Back in 2018, Redis Labs changed its license from AGPL to Apache 2 modified with Commons Clause. The “Commons Clause” is the part that places commercial restrictions on top of the license.

That created quite a stir, as Redis Labs co-founder and CEO Ofer Bengal told me a few days ago when we spoke about the company’s funding.

“When we came out with this new license, there were many different views,” he acknowledged. “Some people condemned that. But after the initial noise calmed down — and especially after some other companies came out with a similar concept — the community now understands that the original concept of open source has to be fixed because it isn’t suitable anymore to the modern era where cloud companies use their monopoly power to adopt any successful open source project without contributing anything to it.”

The way the code was licensed, though, created a bit of confusion, the company now says, because some users thought they were only bound by the terms of the Apache 2 license. Some terms in the Commons Clause, too, weren’t quite clear (including the meaning of “substantial,” for example).

So today, Redis Labs is introducing the Redis Source Available License. This license, too, only applies to certain Redis Modules created by Redis Labs. Users can still get the code, modify it and integrate it into their applications — but that application can’t be a database product, caching engine, stream processing engine, search engine, indexing engine or ML/DL/AI serving engine.

By definition, an open-source license can’t restrict what users may do with the software. This new license does, so it’s technically not an open-source license. In practice, the company argues, it’s quite similar to other permissive open-source licenses, though, and shouldn’t really affect most developers who use the company’s modules (and these modules are RedisSearch, RedisGraph, RedisJSON, RedisML and RedisBloom).

This is surely not the last we’ve heard of this. Sooner or later, more projects will follow the same path. By then, we’ll likely see more standard licenses that address this issue so other companies won’t have to change multiple times. Ideally, though, we won’t need it because everybody will play nice — but since we’re not living in a utopia, that’s not likely to happen.

Feb 07, 2019

Google open sources ClusterFuzz

Google today announced that it is open sourcing ClusterFuzz, a scalable fuzzing tool that can run on clusters with more than 25,000 machines.

The company has long used the tool internally, and if you’ve paid particular attention to Google’s fuzzing efforts (and you have, right?), then this may all seem a bit familiar. That’s because Google launched the OSS-Fuzz service a couple of years ago and that service actually used ClusterFuzz. OSS-Fuzz was only available to open-source projects, though, while ClusterFuzz is now available for anyone to use.

The overall concept behind fuzzing is pretty straightforward: you basically throw lots of data (including random inputs) at your application and see how it reacts. Often, it’ll crash, but sometimes you’ll be able to find memory leaks and security flaws. Once you start doing anything at scale, though, things get more complicated, and you’ll need tools like ClusterFuzz to manage that complexity.
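
At its core, a fuzzer is little more than a loop that generates inputs, runs the target and records whatever crashes. A toy sketch (the target function is hypothetical; ClusterFuzz layers coverage guidance, deduplication and distributed scale on top of this idea):

```python
# Toy fuzz loop: throw random bytes at a target and keep crashing inputs.
# 'parse_record' is a hypothetical stand-in for the code under test; real
# fuzzers add coverage feedback, corpus management and crash triage.
import random

def parse_record(data: bytes) -> None:
    # Hypothetical bug: blows up on a specific two-byte prefix.
    if data[:2] == b"\xff\xfe" and len(data) > 8:
        raise ValueError("malformed header")

crashes = []
for _ in range(100_000):
    size = random.randrange(64)
    data = bytes(random.randrange(256) for _ in range(size))
    try:
        parse_record(data)
    except Exception as exc:     # a "crash" -- save the reproducer
        crashes.append((data, exc))

print(f"{len(crashes)} crashing inputs found")
```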

ClusterFuzz automates the fuzzing process all the way from bug detection to reporting — and then retesting the fix. The tool itself also uses open-source libraries like the libFuzzer fuzzing engine and the AFL fuzzer to power some of the core fuzzing features that generate the test cases for the tool.

Google says it has used the tool to find more than 16,000 bugs in Chrome and 11,000 bugs in more than 160 open-source projects that used OSS-Fuzz. Since so much of the software testing and deployment toolchain is now generally automated, it’s no surprise that fuzzing is also becoming a hot topic these days (I’ve seen references to “continuous fuzzing” pop up quite a bit recently).

Jan 29, 2019

Timescale announces $15M investment and new enterprise version of TimescaleDB

It’s a big day for Timescale, makers of the open-source time-series database, TimescaleDB. The company announced a $15 million investment and a new enterprise version of the product.

The investment is technically an extension of the $12.4 million Series A it raised last January, which it’s referring to as A1. Today’s round is led by Icon Ventures, with existing investors Benchmark, NEA and Two Sigma Ventures also participating. With today’s funding, the startup has raised $31 million.

Timescale makes a time-series database. That means it can ingest large amounts of data and measure how it changes over time. This comes in handy for a variety of use cases, from financial services to smart homes to self-driving cars — or any data-intensive activity you want to measure over time.

While there are a number of time-series database offerings on the market, Timescale co-founder and CEO Ajay Kulkarni says that what makes his company’s approach unique is that it uses SQL, one of the most popular languages in the world. Timescale wanted to take advantage of that penetration and build its product on top of Postgres, the popular open-source SQL database. This gave it an offering that is based on SQL and is highly scalable.
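
Concretely, “SQL on top of Postgres” means a TimescaleDB table is created with ordinary DDL and then converted into a time-partitioned “hypertable” with a single extra call, after which it is queried like any Postgres table. A sketch (assumes a Postgres server with the timescaledb extension installed; the schema is invented for illustration):

```python
# Sketch of TimescaleDB's SQL interface via psycopg2. Assumes Postgres
# with the timescaledb extension; the 'conditions' schema is invented.
import psycopg2

conn = psycopg2.connect("dbname=metrics user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS conditions (
            time        TIMESTAMPTZ NOT NULL,
            device_id   TEXT,
            temperature DOUBLE PRECISION
        );
    """)
    # One call turns the plain table into a partitioned hypertable.
    cur.execute(
        "SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);"
    )
    # Queries stay plain SQL; time_bucket() handles time-series grouping.
    cur.execute("""
        SELECT time_bucket('5 minutes', time) AS bucket, avg(temperature)
        FROM conditions GROUP BY bucket ORDER BY bucket;
    """)
    print(cur.fetchall())
```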

Timescale admittedly came late to the market in 2017, but by offering a unique approach and making it open source, it has been able to gain traction quickly. “Despite entering into what is a very crowded database market, we’ve seen quite a bit of community growth because of this message of SQL and scale for time series,” Kulkarni told TechCrunch.

In just over 22 months, the company has racked up more than a million downloads and a range of users, from old guard companies like Charter, Comcast and Hexagon Mining to more modern companies like Nutanix and TransferWise.

With a strong base community in place, the company believes that it’s now time to commercialize its offering, and in addition to an open-source license, it’s introducing a commercial license. “Up until today, our main business model has been through support and deployment assistance. With this new release, we also will have enterprise features that are available with a commercial license,” Kulkarni explained.

The commercial version will offer a more sophisticated automation layer for larger companies with greater scale requirements. It will also provide better lifecycle management, so companies can get rid of older data or move it to cheaper long-term storage to reduce costs. It’s also offering the ability to reorder data in an automated fashion when that’s required, and, finally, it’s making it easier to turn the time series data into a series of data points for analytics purposes. The company also hinted that a managed cloud version is on the road map for later this year.
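
The open-source version already exposes a manual building block for this kind of lifecycle management; the enterprise layer automates such policies. For instance (a sketch using the TimescaleDB 1.x-era function signature, with the hypothetical table from the sketch above):

```python
# Manual retention sketch: drop_chunks() removes whole chunks of a
# hypertable older than an interval (TimescaleDB 1.x-era signature).
# The enterprise features described above automate policies like this.
import psycopg2

conn = psycopg2.connect("dbname=metrics user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("SELECT drop_chunks(interval '90 days', 'conditions');")
```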

The new money should help Timescale continue fueling the growth and development of the product, especially as it builds out the commercial offering. Timescale, which was founded in 2015 in NYC, currently has 30 employees. With the new influx of cash, it expects to double that over the next year.

Jan 23, 2019

Open-source leader Confluent raises $125M on $2.5B valuation

Confluent, the commercial company built on top of the open-source Apache Kafka project, announced a $125 million Series D round this morning on an enormous $2.5 billion valuation.

The round was led by existing investor Sequoia Capital, with participation from Index Ventures and Benchmark, which also participated in previous rounds. Today’s investment brings the total raised to $206 million, according to the company.

The valuation soared from the previous round when the company was valued at $500 million. What’s more, the company’s bookings have scaled along with the valuation.

While CEO Jay Kreps wouldn’t comment directly on a future IPO, he hinted that it is something the company is looking to do at some point. “With our growth and momentum so far, and with the latest funding, we are in a very good position to and have a desire to build a strong, independent company,” Kreps told TechCrunch.

Confluent and Kafka have developed a streaming data technology that processes massive amounts of information in real time, something that comes in handy in today’s data-intensive environment. The underlying streaming technology was developed at LinkedIn as a means of moving massive numbers of messages. LinkedIn open-sourced the technology in 2011, and Confluent launched as its commercial arm in 2014.

Kreps, writing in a company blog post announcing the funding, said that the events concept encompasses the basic building blocks of businesses. “These events are the orders, sales and customer experiences that constitute the operation of the business. Databases have long helped to store the current state of the world, but we think this is only half of the story. What is missing are the continually flowing stream of events that represents everything happening in a company, and that can act as the lifeblood of its operation,” he wrote.
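
That “stream of events” maps directly onto Kafka’s producer/consumer model: producers append events such as orders to a topic, and any number of consumer groups read the stream independently at their own pace. A sketch using Confluent’s Python client (broker address, topic and payload are placeholders):

```python
# Sketch of the event-stream idea with Confluent's Python client
# (confluent-kafka). Broker address, topic and payload are placeholders.
import json
from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
event = {"type": "order", "sku": "widget-7", "qty": 3}
producer.produce("orders", key="customer-42", value=json.dumps(event))
producer.flush()  # block until the event is durably written

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "billing",            # each group keeps its own cursor
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])
msg = consumer.poll(10.0)             # None on timeout
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```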

Kreps pointed out that as an open-source project, Confluent depends on the community. “This is not something we’re doing alone. Apache Kafka has a massive community of contributors of which we’re just one part,” he wrote.

While the base open-source component remains available for free download, it doesn’t include the additional tooling the company has built to make it easier for enterprises to use Kafka. Recent additions include a managed cloud version of the product and a marketplace, Confluent Hub, for sharing extensions to the platform.

As we watch the company’s valuation soar, it does so against a backdrop of other companies based on open source selling for big bucks in 2018, including IBM buying Red Hat for $34 billion in October and Salesforce acquiring MuleSoft in June for $6.5 billion.

Before this, the company had most recently raised $50 million in March 2017.

Jan 22, 2019

Linux Foundation launches Hyperledger Grid to provide framework for supply chain projects

The Linux Foundation’s Hyperledger Project has a singular focus on the blockchain, but this morning it announced a framework for building supply chain projects where it didn’t want blockchain stealing the show.

In fact, the foundation is careful to point out that this project is not specifically about the blockchain, so much as providing the building blocks for a broader view of solving supply chain digitization issues. As it describes in a blog post announcing the project, it is neither an application nor a blockchain project, per se. So what is it?

“Grid is an ecosystem of technologies, frameworks and libraries that work together, letting application developers make the choice as to which components are most appropriate for their industry or market model.”

Hyperledger doesn’t want to get locked down by jargon or preconceived notions of what these projects should look like. It wants to provide developers with a set of tools and libraries and let them loose to come up with ideas and build applications specific to their industry requirements.

Primary contributors to the project to this point have been Cargill, Intel and Bitwise IO.

Supply chain has been a major early use case for distributed ledger applications in the enterprise. In fact, earlier today we covered an announcement from Citizens Reserve, a startup building a supply chain-as-a-service on the blockchain. IBM has been working on several supply chain use cases, including diamond tracking and food supply protection.

But the distributed ledger idea is so new, both for supply chain and the enterprise in general, that developers are still very much finding their way. By providing a flexible, open-source framework, the Linux Foundation is giving developers an open option and a solid foundation on which to build applications as this all shakes out.

Jan 09, 2019

AWS gives open source the middle finger

AWS launched DocumentDB today, a new database offering that is compatible with the MongoDB API. The company describes DocumentDB as a “fast, scalable, and highly available document database that is designed to be compatible with your existing MongoDB applications and tools.” In effect, it’s a hosted drop-in replacement for MongoDB that doesn’t use any MongoDB code.

AWS argues that while MongoDB is great at what it does, its customers have found it hard to build fast and highly available applications on the open-source platform that can scale to multiple terabytes and hundreds of thousands of reads and writes per second. So what the company did was build its own document database, but made it compatible with the Apache 2.0 open source MongoDB 3.6 API.
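
“Compatible with the MongoDB 3.6 API” means existing drivers and application code run unchanged; only the connection string points somewhere else. A sketch with pymongo (hostnames and credentials are placeholders):

```python
# Sketch of what wire-level API compatibility buys: the same pymongo
# code targets MongoDB or DocumentDB; only the URI differs. Hostnames
# and credentials below are placeholders.
from pymongo import MongoClient

# uri = "mongodb://localhost:27017"  # a self-hosted MongoDB
uri = "mongodb://user:pass@mycluster.docdb.amazonaws.com:27017/?ssl=true"

client = MongoClient(uri)
orders = client.shop.orders
orders.insert_one({"sku": "widget-7", "qty": 3})
print(orders.find_one({"sku": "widget-7"}))
```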

If you’ve been following the politics of open source over the last few months, you’ll understand that the optics of this aren’t great. It’s also no secret that AWS has long been accused of taking the best open-source projects and re-using and re-branding them without always giving back to those communities.

The wrinkle here is that MongoDB was one of the first companies that aimed to put a stop to this by re-licensing its open-source tools under a new license that explicitly stated that companies that wanted to do this had to buy a commercial license. Since then, others have followed.

“Imitation is the sincerest form of flattery, so it’s not surprising that Amazon would try to capitalize on the popularity and momentum of MongoDB’s document model,” MongoDB CEO and president Dev Ittycheria told us. “However, developers are technically savvy enough to distinguish between the real thing and a poor imitation. MongoDB will continue to outperform any impersonations in the market.”

That’s a pretty feisty comment. Last November, Ittycheria told my colleague Ron Miller that he believed that AWS loved MongoDB because it drives a lot of consumption. In that interview, he also noted that “customers have spent the last five years trying to extricate themselves from another large vendor. The last thing they want to do is replay the same movie.”

MongoDB co-founder and CTO Eliot Horowitz echoed this. “In order to give developers what they want, AWS has been pushed to offer an imitation MongoDB service that is based on the MongoDB code from two years ago,” he said. “Our entire company is focused on one thing — giving developers the best way to work with data with the freedom to run anywhere. Our commitment to that single mission will continue to differentiate the real MongoDB from any imitation products that come along.”

A company spokesperson for MongoDB also highlighted that the 3.6 API that DocumentDB is compatible with is now two years old and misses most of the newest features, including ACID transactions, global clusters and mobile sync.

To be fair, AWS has become more active in open source lately and, in a way, it’s giving developers what they want (and not all developers are happy with MongoDB’s own hosted service). But bypassing MongoDB’s licensing by going for API compatibility, given that AWS knows exactly why MongoDB changed its license, was always going to be a controversial move and won’t endear the company to the open-source community.
