May
21
2019
--

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud native computing world. At KubeCon, the twice-yearly festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.
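
To make the traffic-management piece concrete, an SMI resource might look roughly like the sketch below. This follows the early v1alpha1 draft of the spec as published at launch; the service names are placeholders and the exact schema changed in later revisions.

```yaml
# Hypothetical SMI TrafficSplit manifest: shift ~10% of traffic addressed
# to the "checkout" service onto a canary version. Any conforming mesh
# (Istio, Linkerd, ...) implements the actual routing underneath.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: checkout-canary
spec:
  service: checkout          # root service that clients address
  backends:
  - service: checkout-v1
    weight: 900m             # ~90% of traffic stays on v1
  - service: checkout-v2
    weight: 100m             # ~10% goes to the canary
```

Because the resource only expresses intent, the same manifest can be applied unchanged whether Istio or Linkerd is running the mesh, which is the interoperability point Monroy describes.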

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

May
08
2019
--

Steve Singh stepping down as Docker CEO

TechCrunch has learned that Docker CEO Steve Singh will be stepping down after two years at the helm, and former Hortonworks CEO Rob Bearden will be taking over. An email announcement went out this morning to Docker employees.

People close to the company confirmed that Singh will be leaving the CEO position, staying on the job for several months to help Bearden with the transition. He will then remain with the organization in his role as chairman of the board. They indicated that Bearden has been working closely with Singh over the last several months as a candidate to join the board and as a consultant to the executive team.

Singh clicked with him and viewed him as a possible successor, especially given his background with leadership positions at several open-source companies, including taking Hortonworks public before selling to Cloudera last year. Singh apparently saw someone who could take the company to the next level as he moved on. As one person put it, he was tired of working 75 hours a week, but he wanted to leave the company in the hands of a capable steward.

Last week in an interview at DockerCon, the company’s annual customer conference in San Francisco, Singh appeared tired, but also like a leader who was confident in his position and who saw a bright future for his company. He spoke openly about his leadership philosophy and his efforts to lift the company from the doldrums it was in when he took over two years prior, helping transform it from a mostly free open-source offering into a revenue-generating company with 750 paying enterprise customers.

In fact, he told me that under his leadership the company was on track to become free cash flow positive by the end of this fiscal year, a step he said would mean that Docker would no longer need to seek outside capital. He even talked of the company eventually going public.

Apparently, he felt it was time to pass the torch before the company took those steps, saw a suitable successor in Bearden and offered him the position. While it might have made more sense to announce this at DockerCon with the spotlight focused on the company, it was not a done deal yet by the time the conference was underway in San Francisco, people close to the company explained.

Docker took a $92 million investment last year, which some saw as a sign of continuing struggles for the company, but Singh said he took the money to continue to invest in building revenue-generating enterprise products, some of which were announced at DockerCon last week. He indicated that the company would likely not require any additional investment moving forward.

As for Bearden, he is an experienced executive with a history of successful exits. In addition to his experience at Hortonworks, he was COO at SpringSource, a developer tool suite that was sold to VMware for $420 million in 2009 (and is now part of Pivotal). He was also COO at JBoss, an open-source middleware company acquired by Red Hat in 2006.

Whether he will do the same with Docker remains to be seen, but as the new CEO, it will be up to him to guide the company moving forward to the next steps in its evolution, whether that eventually results in a sale or the IPO that Singh alluded to.

Email to staff from Steve Singh:

Note: Docker has now confirmed this story in a press release.

Apr
30
2019
--

Docker looks to partners and packages to ease container implementation

Docker appears to be searching for ways to simplify the core value proposition of the company — creating, deploying and managing containers. While most would agree it has revolutionized software development, like many technology solutions, it takes a certain level of expertise and staffing to pull off. At DockerCon, the company’s customer conference taking place this week in San Francisco, Docker announced several ways it could help customers with the tough parts of implementing a containerized solution.

For starters, the company announced a beta of Docker Enterprise 3.0 this morning. That update is all about making life simpler for developers. As companies move to containerized environments, it’s a challenge for all but the largest organizations like Google, Amazon and Facebook, all of whom have massive resource requirements and correspondingly large engineering teams.

Most companies don’t have that luxury though, and Docker recognizes if it wants to bring containerization to a larger number of customers, it has to create packages and programs that make it easier to implement.

Docker Enterprise 3.0 is a step toward providing a solution that lets developers concentrate on the development aspects, while working with templates and other tools to simplify the deployment and management side of things.

The company sees customers struggling with implementation and how to configure and build a containerized workflow, so it is working with systems integrators to help smooth out the difficult parts. Today, the company announced Docker Enterprise as a Service, with the goal of helping companies through the process of setting up and managing a containerized environment, using the Docker stack and adjacent tooling like Kubernetes.

The service provider will take care of operational details like managing upgrades, rolling out patches, doing backups and undertaking capacity planning — all of those operational tasks that require a high level of knowledge around enterprise container stacks.

Capgemini will be the first go-to-market partner. “Capgemini has a combination of automation, technology tools, as well as services on the back end that can manage the installation, provisioning and management of the enterprise platform itself in cases where customers don’t want to do that, and they want to pay someone to do that for them,” Scott Johnston, chief product officer at Docker told TechCrunch.

The company has released tools in the past to help customers move legacy applications into containers without a lot of fuss. Today, the company announced a solution bundle called Accelerate Greenfield, a set of tools designed to help customers get up and running as container-first development companies.

“This is for those organizations that may be a little further along. They’ve gone all-in on containers committing to taking a container-first approach to new application development,” Johnston explained. He says this could be cloud native microservices or even a LAMP stack application, but the point is that they want to put everything in containers on a container platform.

Accelerate Greenfield is designed to do that. “They get the benefits where they know that from the developer to the production end point, it’s secure. They have a single way to define it all the way through the life cycle. They can make sure that it’s moving quickly, and they have that portability built into the container format, so they can deploy [wherever they wish],” he said.

These programs and products are all about providing a level of hand-holding, either by playing a direct consultative role, working with a systems integrator or providing a set of tools and technologies to walk the customer through the containerization life cycle. Whether they provide a sufficient level of help that customers require is something we will learn over time as these programs mature.

Apr
30
2019
--

Docker updates focus on simplifying containerization for developers

Over the last five years, Docker has become synonymous with software containers, but that doesn’t mean every developer understands the technical details of building, managing and deploying them. At DockerCon this week, the company’s customer conference taking place in San Francisco, it announced new tools that have been designed to make it easier for developers, who might not be Docker experts, to work with containers.

As the technology has matured, the company has seen the market broaden, but in order to take advantage of that, it needs to provide a set of tools that make it easier to work with. “We’ve found that customers typically have a small cadre of Docker experts, but there are hundreds, if not thousands, of developers who also want to use Docker. And we reasoned, how can we help them get productive very, very quickly, without them having to become Docker experts,” Scott Johnston, chief product officer at Docker, told TechCrunch.

To that end, it announced a beta of Docker Enterprise 3.0, which includes several key components. For starters, Docker Desktop Enterprise lets IT set up a Docker environment with the kind of security and deployment templates that make sense for each customer. The developers can then pick the templates that make sense for their implementations, while conforming with compliance and governance rules in the company.

“These templates already have IT-approved container images, and have IT-approved configuration settings. And what that means is that IT can provide these templates through these visual tools that allow developers to move fast and choose the ones they want without having to go back for approval,” Johnston explained.

The idea is to let the developers concentrate on building applications, and the templates provide all the Docker tooling pre-built and ready to go, so they don’t have to worry about all of that.

Another piece of this is Docker Applications, which allows developers to build complex containerized applications as a single package and deploy them to any infrastructure they wish — on-prem or in the cloud. Five years ago, when Docker really got started with containers, they were a simpler idea, often involving just a single container, but as developers broke down those larger applications into microservices, it created a new level of difficulty, especially for operations teams that had to deploy these increasingly large sets of application containers.

“Operations can now programmatically change the parameters for the containers, depending on the environments, without having to go in and change the application. So you can imagine that ability lowers the friction of having to manage all these files in the first place,” he said.

The final piece of that is the orchestration layer, and the popular way to handle that today is with Kubernetes. Docker has created its own flavor of Kubernetes, based on the open-source tool. Johnston says, as with the other two pieces, the goal here is to take a powerful tool like Kubernetes and reduce the overall complexity associated with running it, while making it fully compatible with a Docker environment.

For that, Docker announced Docker Kubernetes Service (DKS), which has been designed with Docker users in mind, including support for Docker Compose, a scripting tool that has been popular with Docker users. While you are free to use any flavor of Kubernetes you wish, Docker is offering DKS as a Docker-friendly version for developers.
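
For readers who haven’t used Compose, a minimal docker-compose.yml looks something like the sketch below. The image names and ports are illustrative; the point is that DKS aims to let the same file developers run locally target a Kubernetes cluster through Docker’s tooling.

```yaml
# docker-compose.yml — illustrative two-service stack. Developers run it
# locally with `docker-compose up`; with a Docker-friendly Kubernetes
# such as DKS, the same file can be deployed to a cluster.
version: "3.7"
services:
  web:
    image: example/web:latest   # placeholder application image
    ports:
      - "8080:80"
    deploy:
      replicas: 3               # scaled by the orchestrator
  cache:
    image: redis:5-alpine
```

Keeping this one description valid from laptop to cluster is the complexity reduction Johnston is describing.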

All of these components have one thing in common besides being part of Docker Enterprise 3.0. They are trying to reduce the complexity associated with deploying and managing containers and to abstract away the most difficult parts, so that developers can concentrate on developing without having to worry about connecting to the technical underpinnings of building and deploying containers. At the same time, Docker is trying to make it easier for the operations team to manage it all. That is the goal, at least. In the end, DevOps teams will be the final judges of how well Docker has done, once these tools become generally available later this year.

The Docker Enterprise 3.0 beta will be available later this quarter.

Apr
24
2019
--

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET Core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.
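
This emulated-build workflow later surfaced in Docker’s `buildx` tooling; a sketch of what it looks like from an x86 laptop is below. The image name is a placeholder, and the exact commands assume a Docker release with the buildx plugin enabled.

```shell
# Create a builder and produce images for both Arm and x86 from one
# Dockerfile. QEMU emulates the Arm chip during the build, so no
# cross-compilation setup is needed.
docker buildx create --name armbuilder --use
docker buildx build \
  --platform linux/arm64,linux/amd64 \
  --tag example.com/myapp:latest \
  --push .
```

The resulting multi-architecture image can then be pulled unchanged by an x86 server, an Arm-based EC2 A1 instance or an Arm edge device.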

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Strategic Alliances David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to remove the dependence of applications from the infrastructure on which they run. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools — at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.

Dec
04
2018
--

Microsoft and Docker team up to make packaging and running cloud-native applications easier

Microsoft and Docker today announced a new joint open-source project, the Cloud Native Application Bundle (CNAB), that aims to make the lifecycle management of cloud-native applications easier. At its core, the CNAB is nothing but a specification that allows developers to declare how an application should be packaged and run. With this, developers can define their resources and then deploy the application to anything from their local workstation to public clouds.
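
As an illustration, a CNAB bundle is described by a JSON document along these lines. The names and images here are invented, and the field set follows the early draft of the spec, so treat it as a sketch rather than the canonical format.

```json
{
  "name": "example-app",
  "version": "0.1.0",
  "description": "Illustrative CNAB bundle definition",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/app-installer:0.1.0"
    }
  ],
  "parameters": {
    "region": {
      "type": "string",
      "defaultValue": "us-west-2"
    }
  }
}
```

The invocation image carries the installation logic, while the parameters let the same bundle be installed against different targets — which is what makes the package portable from a workstation to a public cloud.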

The specification was born inside Microsoft, but as the team talked to Docker, it turned out that the engineers there were working on a similar project. The two decided to combine forces and launch the result as a single open-source project. “About a year ago, we realized we’re both working on the same thing,” Microsoft’s Gabe Monroy told me. “We decided to combine forces and bring it together as an industry standard.”

As part of this, Microsoft is launching its own reference implementation of a CNAB client today. Duffle, as it’s called, allows users to perform all the usual lifecycle steps (install, upgrade, uninstall), create new CNAB bundles and sign them cryptographically. Docker is working on integrating CNAB into its own tools, too.

Microsoft also today launched a Visual Studio extension for building and hosting these bundles, as well as an example implementation of a bundle repository server and an Electron installer that lets you install a bundle with the help of a GUI.

Now it’s worth noting that we’re talking about a specification and reference implementations here. There is obviously a huge ecosystem of lifecycle management tools on the market today that all have their own strengths and weaknesses. “We’re not going to be able to unify that tooling,” said Monroy. “I don’t think that’s a feasible goal. But what we can do is we can unify the model around it, specifically the lifecycle management experience as well as the packaging and distribution experience. That’s effectively what Docker has been able to do with the single-workload case.”

Over time, Microsoft and Docker would like for the specification to end up in a vendor-neutral foundation. Which one remains to be seen, though the Open Container Initiative seems like the natural home for a project like this.

Nov
20
2018
--

How CVE-2018-19039 Affects Percona Monitoring and Management

Grafana Labs has released an important security update, and as you’re aware, PMM uses Grafana internally. You’re probably curious whether this issue affects you. CVE-2018-19039, “File Exfiltration vulnerability Security fix,” covers a recently discovered security flaw that allows any Grafana user with Editor or Admin permissions to read files from the filesystem with the same privileges as the Grafana process.

We have good news: if you’re running PMM 1.10.0 or later (released April 2018), you’re not affected by this security issue.

The reason you’re not affected is an interesting one. CVE-2018-19039 relates to the Grafana component PhantomJS, which Percona omitted when we changed how we build the version of Grafana embedded in Percona Monitoring and Management. We became aware of this via bug PMM-2837, when we discovered that images did not render.

We fixed this image rendering issue and applied the required security update in PMM 1.17. This ensures PMM is not vulnerable to CVE-2018-19039.

Users of PMM who are running release 1.1.0 (February 2017) through 1.9.1 (April 2018) are advised to upgrade ASAP. If you cannot immediately upgrade, we advise that you take two steps:

  1. Convert all Grafana users to Viewer role
  2. Remove all Dashboards that contain text panels
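
For the first step, Grafana’s HTTP API can do the conversion without clicking through the UI. The sketch below assumes admin credentials and Grafana at its default local address; `USER_ID` stands in for each id returned by listing the org’s users.

```shell
# List users in the current org to find their ids.
curl -s http://admin:admin@localhost:3000/api/org/users

# Demote a user to the Viewer role (repeat per user).
curl -s -X PATCH \
  -H "Content-Type: application/json" \
  -d '{"role": "Viewer"}' \
  http://admin:admin@localhost:3000/api/org/users/USER_ID
```

Since the vulnerability requires Editor or Admin permissions, demoting accounts to Viewer closes off the attack path until you can upgrade.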

How to Get PMM Server

PMM is available for installation using three methods:

Nov
15
2018
--

Docker inks partnership with MuleSoft as Salesforce takes a strategic stake

Docker and MuleSoft have announced a broad deal to sell products together and integrate their platforms. As part of it, Docker is getting an investment from Salesforce, the CRM giant that acquired MuleSoft for $6.5 billion last spring.

Salesforce is not disclosing the size of the stake it’s taking in Docker, but it is strategic: the deal will see its newly acquired MuleSoft working with Docker to connect containerized applications to multiple data sources across an organization in a modern way, even when those sources are legacy applications.

The partnership is happening on multiple levels and includes technical integration to help customers more easily use the two toolsets together. It also includes a sales agreement to bring in each company’s sales team when it makes sense, and to work with systems integrators and ISVs, who help companies put these kinds of complex solutions to work inside large organizations.

Docker chief product officer Scott Johnston said it was really about bringing together two companies whose missions were aligned with what they were hearing from customers. That involves tapping into some broad trends around getting more out of their legacy applications and a growing desire to take an API-driven approach to developer productivity, while getting additional value out of their existing data sources. “Both companies have been working separately on these challenges for the last several years, and it just made sense as we listen to the market and listen to customers that we joined forces,” Johnston told TechCrunch.

Uri Sarid, MuleSoft’s CTO, agrees that customers have been using the two products together and that this called for a more formal arrangement. “We have joint customers and the partnership will be fortifying that. So that’s a great motion, but we believe in acceleration. And so if there are things that we can do, and we now have plans for what we will do to make that even faster, to make that even more natural and built-in, we can accelerate the motion to this. Before, you had to think about these two concerns separately, and we are working on interoperability that makes you not have to think about them separately,” he explained.

This announcement comes at a time of massive consolidation in the enterprise. In the last couple of weeks, we have seen IBM buying Red Hat for $34 billion, SAP acquiring Qualtrics for $8 billion and Vista Equity Partners scooping up Apptio for $1.94 billion. Salesforce acquired MuleSoft earlier this year in its own mega deal in an effort to bridge the gap between data in the cloud and on-prem.

The final piece of today’s announcement is that investment from Salesforce Ventures. Johnston would not say how much the investment was for, but did say it was about aligning the two partners.

Docker had raised almost $273 million before today’s announcement. It’s possible it could be looking for a way to exit, and with the trend toward enterprise consolidation, Salesforce’s investment may be a way to test the waters for just that. If it seems like an odd match, remember that Salesforce bought Heroku in 2010 for $212 million.

Sep
11
2018
--

Anaxi brings more visibility to the development process

Anaxi’s mission is to bring more transparency to the software development process. The tool, which is now live for iOS, with web and Android versions planned for the near future, connects to GitHub to give you actionable insights into the state of your projects and to manage those projects and their issues. Support for Atlassian’s Jira is also in the works.

The new company was founded by former Apple engineering manager and Docker EVP of product development Marc Verstaen and former CodinGame CEO John Lafleur. Unsurprisingly, this new tool is all about fixing the issues these two have seen in their daily lives as developers.

“I’ve been doing software for 40 years,” Verstaen told me. “And every time is the same. You start with a small team and it’s fine. Then you grow and you don’t know what’s going on. It’s a black box.” While the rest of the business world now focuses on data and analytics, software development never quite reached that point. Verstaen argues that this was acceptable until 10 or 15 years ago because only software companies were doing software. But now that every company is becoming a software company, that’s not acceptable anymore.

Using Anaxi, you can easily see all issue reports and pull requests from your GitHub repositories, both public and private. But you also get visual status indicators that tell you when a project has too many blockers, for example, as well as the ability to define your own labels. You also can define due dates for issues.

One interesting aspect of Anaxi is that it doesn’t store all of this information on your phone or on a proprietary server. Instead, it only caches as little information as necessary (including your handles) and then pulls the rest of the information from GitHub as needed. That cache is encrypted on the phone, but for the most part, Anaxi simply relies on the GitHub API to pull in data when needed. There’s a bit of a trade-off here in terms of speed, but Verstaen noted that this also means you always get the most recent data and that GitHub’s API is quite fast and easy to work with.

The service is currently available for free. The company plans to introduce pricing plans in the future, with prices based on the number of developers that use the product inside a company.
