Sep
03
2021
--

Installing PostgreSQL using Docker


Following the series of blogs on deploying Docker containers running open source databases, started by Peter Zaitsev with Installing MySQL with Docker, in this article I’ll demonstrate how to install PostgreSQL using Docker.

Before proceeding, it is important to remind you of Peter’s warning from his article, which applies here as well: 

“The following instructions are designed to get a test instance running quickly and easily; you do not want to use these for production deployments.” 

We intend to show how to bring up PostgreSQL instances in Docker containers. You will also find some basic Docker management operations and snippets with examples, always based on the official PostgreSQL image from Docker Hub.

Installing

Starting with the installation, the following example shows how to bring up a Docker container with the latest PostgreSQL release from Docker Hub. You must provide the password for the postgres user at the moment you launch the container; otherwise, the container will not be created:

$ docker run --name dockerPostgreSQL \
-e POSTGRES_PASSWORD=secret \
-d postgres:latest

Once you start the container, you can check whether it started successfully with docker inspect:

$ docker inspect -f '{{.State.Running}}' dockerPostgreSQL
true

If not, you can try starting it manually (you will see how to do it in the Managing the PostgreSQL Containers section).
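
If it shows false, the container logs usually reveal why the instance did not come up; a quick check, assuming the container name used above:

$ docker logs dockerPostgreSQL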

Getting Connected to the Containerized Instance

The snippets below show how to connect to your containerized PostgreSQL instance.

  • Directly from Docker:
$ docker exec -it dockerPostgreSQL psql --user postgres
postgres=# \conninfo

You are connected to database "postgres" as user "postgres" via socket in "/var/run/postgresql" at port "5432".

  • Or from a PostgreSQL client, connecting to the container’s IP address, which you can obtain with the docker inspect command:
$ docker inspect -f '{{.NetworkSettings.IPAddress}}' dockerPostgreSQL
172.16.0.12

$ psql --user postgres --port 5432 --host 172.16.0.12
postgres=# \conninfo
You are connected to database "postgres" as user "postgres" on host "172.16.0.12" at port "5432".
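
Note that the container’s IP address is reachable only from the Docker host itself. If you publish a port when creating the container, you can connect through localhost instead; a minimal sketch, using a hypothetical container name and assuming host port 5433 is free:

$ docker run --name dockerPostgreSQLPub -p 5433:5432 -e POSTGRES_PASSWORD=secret -d postgres:latest
$ psql --user postgres --host 127.0.0.1 --port 5433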

Managing the PostgreSQL Containers

The command below lists all active containers and their respective statuses:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                     NAMES
7f88656c4864        postgres:11         "docker-entrypoint..."   4 minutes ago       Up 4 minutes        0.0.0.0:50432->5432/tcp   dockerPostgreSQL11
8aba8609dabc        postgres:11.5       "docker-entrypoint..."   21 minutes ago      Up 21 minutes       5432/tcp                  dockerPostgreSQL115
15b9e0b789dd        postgres:latest     "docker-entrypoint..."   32 minutes ago      Up 32 minutes       5432/tcp                  dockerPostgreSQL

To stop one of the containers, do:

$ docker stop dockerPostgreSQL

To start a container:

$ docker start dockerPostgreSQL

Finally, to remove a container, first stop it and then:

$ docker rm dockerPostgreSQL

Customizing PostgreSQL Settings in the Container

Let’s say you want to define different parameters for the PostgreSQL server running on Docker. You can create a custom postgresql.conf configuration file to replace the default one used with Docker. Note that, with the official image, you must also tell the server where to find it, by passing config_file as a startup argument:

$ docker run --name dockerPostgreSQL11 -p 50433:5432 \
-v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf \
-e POSTGRES_PASSWORD=secret \
-d postgres:11 \
-c config_file=/etc/postgresql/postgresql.conf
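
If you need a starting point for my-postgres.conf, the image ships a sample configuration file you can copy out, and you can afterwards confirm which file the server actually loaded; a sketch, assuming the dockerPostgreSQL11 container above is up:

$ docker run -i --rm postgres:11 cat /usr/share/postgresql/postgresql.conf.sample > my-postgres.conf
$ docker exec -it dockerPostgreSQL11 psql --user postgres -c 'SHOW config_file;'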

Another example of passing startup configuration is changing the directory that stores the WAL files, this time via an environment variable (remove the previous dockerPostgreSQL11 container first, as container names must be unique):

$ docker run --name dockerPostgreSQL11 -p 50433:5432 \
-v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf \
-e POSTGRES_PASSWORD=secret \
-e POSTGRES_INITDB_WALDIR=/backup/wal \
-d postgres:11 \
-c config_file=/etc/postgresql/postgresql.conf
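
Keep in mind that POSTGRES_INITDB_WALDIR only takes effect when the data directory is first initialized, and /backup/wal above lives inside the container. If you want the WAL files kept on the host, mount a volume at that path as well; a sketch under those assumptions, with a hypothetical container name and host port:

$ docker run --name dockerPostgreSQL11wal -p 50434:5432 \
-v "$PWD/wal":/backup/wal \
-e POSTGRES_PASSWORD=secret \
-e POSTGRES_INITDB_WALDIR=/backup/wal \
-d postgres:11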

Running Different PostgreSQL Versions in Different Containers

You can launch multiple PostgreSQL containers, each running a different PostgreSQL version. In the example below, you will find how to start up a PostgreSQL 10.5 container:

$ docker run --name dockerPostgreSQL105 -p 50433:5432 -e POSTGRES_PASSWORD=secret -d postgres:10.5
7f51187d32f339688c2f450ecfda6b7552e21a93c52f365e75d36238f5905017

Here is how you could launch a PostgreSQL 11 container:

$ docker run --name dockerPostgreSQL11 -p 50432:5432 -e POSTGRES_PASSWORD=secret -d postgres:11
7f88656c4864864953a5491705ac7ca882882df618623b1e5664eabefb662733

After that, by reissuing the status command, you will see all of the containers created so far:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                     NAMES
e06e637ae090        postgres:10.5       "docker-entrypoint..."   4 seconds ago       Up 4 seconds        0.0.0.0:50433->5432/tcp   dockerPostgreSQL105
7f88656c4864        postgres:11         "docker-entrypoint..."   About an hour ago   Up About an hour    0.0.0.0:50432->5432/tcp   dockerPostgreSQL11
15b9e0b789dd        postgres:latest     "docker-entrypoint..."   2 hours ago         Up 2 hours          5432/tcp                  dockerPostgreSQL

You will notice that the 10.5 and 11 containers were created with different port mappings. Inside the containers, both are listening on port 5432, but they are mapped to different external ports, so you can access each of them from outside.
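
To confirm which server version sits behind each mapped port, you can query each instance from the Docker host; a minimal sketch, assuming the port mappings above:

$ psql --user postgres --host 127.0.0.1 --port 50433 -c 'SELECT version();'
$ psql --user postgres --host 127.0.0.1 --port 50432 -c 'SELECT version();'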

You will find even more details on the PostgreSQL page on Docker Hub.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community in a single distribution, designed and tested to work together.

Download Percona Distribution for PostgreSQL Today!

Aug
31
2021
--

Installing MongoDB With Docker


Following the series of blogs written with the intention of describing basic operations combining Docker and open source databases, in this article I will demonstrate how to proceed with installing MongoDB with Docker.

The first one, written by Peter Zaitsev, was Installing MySQL with Docker.

Before proceeding, it is important to repeat the warning from Peter’s article:

“The following instructions are designed to get a test instance running quickly and easily; you do not want to use these for production deployments”. 

Also, a full description of Docker is not the objective of this article. I assume prior knowledge of Docker, and that you already have it installed and configured so you can move on.

Docker quickly deploys standalone MongoDB containers. If you are looking for fast deployments of both replica sets and shards, I suggest looking at the mlaunch tool.

Peter mentioned, in his article, how MySQL has two different “official” repositories, and with MongoDB, it’s the same: MongoDB has one repository maintained by MongoDB and another maintained by Docker. I wrote this article based on the Docker-maintained repository.

Installing the Latest Version of MongoDB

The following snippet is one example of how to initiate a container running the latest MongoDB version from the Docker repository:

docker run --name mongodb_dockerhub \
                -e MONGO_INITDB_ROOT_USERNAME=admin \
                -e MONGO_INITDB_ROOT_PASSWORD=secret \
                -d mongo:latest

Now, if you want to check the container status right after creating it:

docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
fdef23c0f32a        mongo:latest        "docker-entrypoint..."   4 seconds ago       Up 4 seconds        27017/tcp           mongodb_dockerhub

Connecting to MongoDB Server Docker Container

With the container up and running, you will notice that no extra steps or dependency installations were required, apart from the Docker binaries. Now, it is time to access the container’s MongoDB shell and issue a basic command like “show dbs”.

docker exec -it mongodb_dockerhub mongo -u admin -p secret
MongoDB shell version v4.2.5
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("89c3fb4a-a8ed-4724-8363-09d65024e458") }
MongoDB server version: 4.2.5
Welcome to the MongoDB shell.

> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB

> exit
bye

It is also possible to connect to the containerized MongoDB using a mongo shell on the host. The docker ps output includes a field showing each container’s port mapping, and then it is a simple matter of connecting through that port.

In the example below, we connect to mongodb_dockerhub36. It is listening on port 27017 inside the container but is mapped to port 27018 on the host:

docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                      NAMES
60ffc759fab9        mongo:3.6           "docker-entrypoint..."   20 seconds ago      Up 19 seconds       0.0.0.0:27018->27017/tcp   mongodb_dockerhub36

Hence, the mongo shell connection is made against the host’s IP address and port 27018:

mongo admin -u admin -p secret --host 172.16.0.10 --port 27018
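
From the Docker host itself, connecting through 127.0.0.1 and the mapped port works just as well; a minimal sketch, assuming the same credentials and a mongo shell installed on the host:

mongo admin -u admin -p secret --host 127.0.0.1 --port 27018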

Managing MongoDB Server in Docker Container

The following commands will demonstrate basic management operations when managing a MongoDB container. 

  • Starting the container and checking the status
docker start mongodb_dockerhub

docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
fdef23c0f32a        mongo:latest        "docker-entrypoint..."   47 minutes ago      Up 2 seconds        27017/tcp           mongodb_dockerhub

  • Stopping the container and checking the status
docker stop mongodb_dockerhub

docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES

  • Taking a look at the MongoDB log entries
docker logs mongodb_dockerhub
...
about to fork child process, waiting until server is ready for connections.
forked process: 25
2020-04-10T14:04:49.808+0000 I  CONTROL [main] ***** SERVER RESTARTED *****
2020-04-10T14:04:49.812+0000 I  CONTROL [main] Automatically disabling TLS 1.0, to f
...

Passing Command Line Options to MongoDB Server in Docker Container

It is also possible to define the instance’s parameters when launching the MongoDB container. In the example below, I will show how to set the WiredTiger cache size (if the mongodb_dockerhub container from the earlier steps still exists, stop and remove it first, since container names must be unique).

docker run --name mongodb_dockerhub \
                -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=secret \
                -d mongo:latest --wiredTigerCacheSizeGB 1.0
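
To verify the option was picked up, you can ask the running server for its startup arguments from the mongo shell; a sketch, assuming the admin credentials above:

docker exec -it mongodb_dockerhub mongo admin -u admin -p secret --eval 'printjson(db.adminCommand({ getCmdLineOpts: 1 }).argv)'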

Running Different MongoDB Server Versions in Docker

Another possibility is getting two MongoDB containers running in parallel, each under a different version. The below snippet describes how to build that scenario:

  • Launching a 4.0 container
docker run --name mongodb_dockerhub40 \
 -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=secret \
 -d mongo:4.0

  • Launching a 3.6 container
docker run --name mongodb_dockerhub36 \
 -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=secret \
 -d mongo:3.6

  • Checking the container’s status
docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
e9f480497f2d        mongo:3.6           "docker-entrypoint..."   32 seconds ago      Up 32 seconds       27017/tcp           mongodb_dockerhub36
3a799f8a907c        mongo:4.0           "docker-entrypoint..."   41 seconds ago      Up 41 seconds       27017/tcp           mongodb_dockerhub40

If for some reason you need both containers running simultaneously and accessible externally, use different port mappings. In the example below, both MongoDB containers listen on port 27017 inside the container; nevertheless, I am setting a different external port mapping for each one (again, stop and remove the previously created containers of the same names first):

  • MongoDB 4.0 mapping the port 27017
docker run --name mongodb_dockerhub40 \
 -p 27017:27017 \
 -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=secret \
 -d mongo:4.0

  • MongoDB 3.6 mapping the port 27018
docker run --name mongodb_dockerhub36 \
 -p 27018:27017 \
 -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=secret \
 -d mongo:3.6

  • Checking both containers’ status
docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                     NAMES
78a79e5606ae        mongo:4.0           "docker-entrypoint..."   3 seconds ago       Up 2 seconds        0.0.0.0:27017->27017/tcp   mongodb_dockerhub40
60ffc759fab9        mongo:3.6           "docker-entrypoint..."   20 seconds ago      Up 19 seconds       0.0.0.0:27018->27017/tcp   mongodb_dockerhub36
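
To confirm each container is really running the version you expect, you can query them through their mapped ports; a minimal sketch, again assuming a mongo shell on the host:

mongo admin -u admin -p secret --host 127.0.0.1 --port 27017 --eval 'db.version()'
mongo admin -u admin -p secret --host 127.0.0.1 --port 27018 --eval 'db.version()'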

There are a lot of extra details on the MongoDB page on Docker Hub, where you will find more options to use in your MongoDB Docker deployment. Enjoy it!

Percona Distribution for MongoDB is a freely available MongoDB database alternative, giving you a single solution that combines the best and most important enterprise components from the open source community, designed and tested to work together.

Download Percona Distribution for MongoDB Today!

Apr
28
2021
--

Running Percona Monitoring and Management v2 on Windows with Docker


One way to run Percona Monitoring and Management (PMM) v2 on Windows is using the Virtual Appliance installation method. It works well but requires extra software to run virtual appliances, such as Oracle VirtualBox, VMware Workstation Player, etc. Or, you can use Docker.

Docker is not shipped with Windows 10, though you can get it installed for free with Docker Desktop. Modern Docker on Windows can run using either the Hyper-V backend or the Windows Subsystem for Linux v2 (WSL2) backend. Because the WSL2-based Docker installation is more involved, I’m sticking to what is now referred to as the “legacy” Hyper-V backend.

On my Windows 10 Pro (19042) system, I had to change some BIOS settings:

[screenshot: BIOS settings]

And install the Hyper-V component so that Docker Desktop could install correctly:

[screenshot: installing the Hyper-V Windows feature]

With this, we’re good to go to install Percona Monitoring and Management on Windows.

The official Installing PMM with Docker instructions assume you’re running a Linux/Unix-based system, so they need to be slightly modified for Windows. First, there is no sudo on Windows, and second, “Command Prompt” does not interpret “\” the same way a shell does.

With those minor fixes, though, basically the same commands to start PMM work:

docker pull percona/pmm-server:2
docker create --volume /srv --name pmm-data percona/pmm-server:2 /bin/true
docker run --detach --restart always --publish 80:80 --publish 443:443 --volumes-from pmm-data --name pmm-server percona/pmm-server:2
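
Once those commands complete, you can confirm the container is up before opening the UI; a quick check, assuming the container names above:

docker ps --filter name=pmm-server

With ports 80 and 443 published, the PMM web interface should then be reachable at https://localhost (you will likely need to accept the self-signed certificate).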

That’s it! You should have your Percona Monitoring and Management (PMM) v2 running!

Looking at the PMM Server instance through PMM itself, you may notice it does not have all of the host’s resources available by default, as Docker on Linux would.

Note there are only 2 CPU cores, 2GB of memory, and some 60GB of disk space available. This should be enough for a test, but if you want to put some real load on this instance, you may want to adjust those settings. This is done through the “Resources” settings in Docker Desktop:

[screenshot: Docker Desktop resource settings]

Adjust these to suit your load needs and you’re done!  Enjoy your Percona Monitoring and Management!

Mar
18
2021
--

Slapdash raises $3.7M seed to ship a workplace apps command bar

The explosion in productivity software amid a broader remote work boom has been one of the pandemic’s clearest tech impacts. But learning to use a dozen new programs while having to decipher which data is hosted where can sometimes seem to have an adverse effect on worker productivity. It’s all time that users can take for granted, even when carrying out common tasks like navigating to the calendar to view more info to click a link to open the browser to redirect to the native app to open a Zoom call.

Slapdash is aiming to carve a new niche out for itself among workplace software tools, pushing a desire for peak performance to the forefront with a product that shaves seconds off each instance where a user needs to find data hosted in a cloud app or carry out an action. While most of the integration-heavy software suites to emerge during the remote work boom have focused on promoting visibility or re-skinning workflows across the tangled weave of SaaS apps, Slapdash founder Ivan Kanevski hopes that the company’s efforts to engineer a quicker path to information will push tech workers to integrate another tool into their workflow.

The team tells TechCrunch that they’ve raised $3.7 million in seed funding from investors that include S28 Capital, Quiet Capital, Quarry Ventures and Twenty Two Ventures. Angels participating in the round include co-founders at companies like Patreon, Docker and Zynga.

Image Credits: Slapdash

Kanevski says the team sought to emulate the success of popular apps like Superhuman, which have pushed low-latency command line interface navigation while emulating some of the sleek internal tools used at companies like Facebook, where he spent nearly six years as a software engineer.

Slapdash’s command line widget can be pulled up anywhere, once installed, with a quick keyboard shortcut. From there, users can search through a laundry list of indexable apps including Slack, Zoom, Jira and about 20 others. Beyond command line access, users can create folders of files and actions inside the full desktop app or create their own keyboard shortcuts to quickly hammer out a task. The app is available on Mac, Windows, Linux and the web.

“We’re not trying to displace the applications that you connect to Slapdash,” he says. “You won’t see us, for example, building document editing, you won’t see us building project management, just because our sort of philosophy is that we’re a neutral platform.”

The company offers a free tier for users indexing up to five apps and creating 10 commands and spaces; any more than that and you level up into a $12 per month paid plan. Things look more customized for enterprise-wide pricing. As the team hopes to make the tool essential to startups, Kanevski sees the app’s hefty utility for individual users as a clear asset in scaling up.

“If you anticipate rolling this out to larger organizations, you would want the people that are using the software to have a blast with it,” he says. “We have quite a lot of confidence that even at this sort of individual atomic level, we built something pretty joyful and helpful.”

Mar
16
2021
--

Docker nabs $23M Series B as new developer focus takes shape

It was easy to wonder what would become of Docker after it sold its enterprise business in 2019, but it regrouped last year as a cloud native container company focused on developers, and the new approach appears to be bearing fruit. Today, the company announced a $23 million Series B investment.

Tribe Capital led the round with participation from existing investors Benchmark and Insight Partners. Docker has now raised a total of $58 million including the $35 million investment it landed the same day it announced the deal with Mirantis.

To be sure, the company had a tempestuous 2019 when they changed CEOs twice, sold the enterprise division and looked to reestablish itself with a new strategy. While the pandemic made 2020 a trying time for everyone, Docker CEO Scott Johnston says that in spite of that, the strategy has begun to take shape.

“The results we think speak volumes. Not only was the strategy strong, but the execution of that strategy was strong as well,” Johnston told me. He indicated that the company added 1.7 million new developer registrations for the free version of the product for a total of more than 7.3 million registered users on the community edition.

As with any open-source project, the goal is to popularize the community project and turn a small percentage of those users into paying customers, but Docker’s problem prior to 2019 had been finding ways to do that. While he didn’t share specific numbers, Johnston indicated that annual recurring revenue (ARR) grew 170% last year, suggesting that they are beginning to convert more successfully.

Johnston says that’s because they have found a way to convert a certain class of developer in spite of a free version being available. “Yes, there’s a lot of upstream open-source technologies, and there are users that want to hammer together their own solutions. But we are also seeing these eight to 10 person ‘two-pizza teams’ who want to focus on building applications, and so they’re willing to pay for a service,” he said.

That open-source model tends to get the attention of investors because it comes with that built-in action at the top of the sales funnel. Tribe’s Arjun Sethi, whose firm led the investment, says his company actually was a Docker customer before investing in the company and sees a lot more growth potential.

“Tribe focuses on identifying N-of-1 companies — top-decile private tech firms that are exhibiting inflection points in their growth, with the potential to scale toward outsized outcomes with long-term venture capital. Docker fits squarely into this investment thesis [ … ],” Sethi said in a statement.

Johnston says as they look ahead post-pandemic, he’s learned a lot since his team moved out of the office last year. After surveying employees, they were surprised to learn that most have been happier working at home, having more time to spend with family, while taking away a grueling commute. As a result, he sees going virtual first, even after it’s safe to reopen offices.

That said, he is planning to offer a way to get teams together for in-person gatherings and a full company get-together once a year.

“We’ll be virtual first, but then with the savings of the real estate that we’re no longer paying for, we’re going to bring people together and make sure we have that social glue,” he said.


Early Stage is the premier “how-to” event for startup entrepreneurs and investors. You’ll hear firsthand how some of the most successful founders and VCs build their businesses, raise money and manage their portfolios. We’ll cover every aspect of company building: Fundraising, recruiting, sales, product-market fit, PR, marketing and brand building. Each session also has audience participation built in — there’s ample time included for audience questions and discussion. Use code “TCARTICLE” at checkout to get 20% off tickets right here.

Mar
10
2021
--

Aqua Security raises $135M at a $1B valuation for its cloud native security platform

Aqua Security, a Boston- and Tel Aviv-based security startup that focuses squarely on securing cloud-native services, today announced that it has raised a $135 million Series E funding round at a $1 billion valuation. The round was led by ION Crossover Partners. Existing investors M12 Ventures, Lightspeed Venture Partners, Insight Partners, TLV Partners, Greenspring Associates and Acrew Capital also participated. In total, Aqua Security has now raised $265 million since it was founded in 2015.

The company was one of the earliest to focus on securing container deployments. And while many of its competitors were acquired over the years, Aqua remains independent and is now likely on a path to an IPO. When it launched, the industry focus was still very much on Docker and Docker containers. To the detriment of Docker, that quickly shifted to Kubernetes, which is now the de facto standard. But enterprises are also now looking at serverless and other new technologies on top of this new stack.

“Enterprises that five years ago were experimenting with different types of technologies are now facing a completely different technology stack, a completely different ecosystem and a completely new set of security requirements,” Aqua CEO Dror Davidoff told me. And with these new security requirements came a plethora of startups, all focusing on specific parts of the stack.

Image Credits: Aqua Security

What set Aqua apart, Dror argues, is that it managed to 1) become the best solution for container security and 2) realize that to succeed in the long run, it had to become a platform that would secure the entire cloud-native environment. About two years ago, the company made this switch from a product to a platform, as Davidoff describes it.

“There was a spree of acquisitions by CheckPoint and Palo Alto [Networks] and Trend [Micro],” Davidoff said. “They all started to acquire pieces and tried to build a more complete offering. The big advantage for Aqua was that we had everything natively built on one platform. […] Five years later, everyone is talking about cloud-native security. No one says ‘container security’ or ‘serverless security’ anymore. And Aqua is practically the broadest cloud-native security [platform].”

One interesting aspect of Aqua’s strategy is that it continues to bet on open source, too. Trivy, its open-source vulnerability scanner, is the default scanner for GitLab’s Harbor Registry and the CNCF’s Artifact Hub, for example.

“We are probably the best security open-source player there is because not only do we secure from vulnerable open source, we are also very active in the open-source community,” Davidoff said (with maybe a bit of hyperbole). “We provide tools to the community that are open source. To keep evolving, we have a whole open-source team. It’s part of the philosophy here that we want to be part of the community and it really helps us to understand it better and provide the right tools.”

In 2020, Aqua, which mostly focuses on mid-size and larger companies, doubled the number of paying customers and it now has more than half a dozen customers with an ARR of over $1 million each.

Davidoff tells me the company wasn’t actively looking for new funding. Its last funding round came together only a year ago, after all. But the team decided that it wanted to be able to double down on its current strategy and raise sooner than originally planned. ION had been interested in working with Aqua for a while, Davidoff told me, and while the company received other offers, the team decided to go ahead with ION as the lead investor (with all of Aqua’s existing investors also participating in this round).

“We want to grow from a product perspective, we want to grow from a go-to-market [perspective] and expand our geographical coverage — and we also want to be a little more acquisitive. That’s another direction we’re looking at because now we have the platform that allows us to do that. […] I feel we can take the company to great heights. That’s the plan. The market opportunity allows us to dream big.”

 

Aug
13
2020
--

Mirantis acquires Lens, an IDE for Kubernetes

Mirantis, the company that recently bought Docker’s enterprise business, today announced that it has acquired Lens, a desktop application that the team describes as a Kubernetes-integrated development environment. Mirantis previously acquired the team behind the Finnish startup Kontena, the company that originally developed Lens.

Lens itself was most recently owned by Lakend Labs, though, which describes itself as “a collective of cloud native compute geeks and technologists” that is “committed to preserving and making available the open-source software and products of Kontena.” Lakend open-sourced Lens a few months ago.

Image Credits: Mirantis

“The mission of Mirantis is very simple: We want to be — for the enterprise — the fastest way to [build] modern apps at scale,” Mirantis CEO Adrian Ionel told me. “We believe that enterprises are constantly undergoing this cycle of modernizing the way they build applications from one wave to the next — and we want to provide products to the enterprise that help them make that happen.”

Right now, that means a focus on helping enterprises build cloud-native applications at scale and, almost by default, that means providing these companies with all kinds of container infrastructure services.

“But there is another piece of the story that’s always been going through our minds, which is, how do we become more developer-centric and developer-focused, because, as we’ve all seen in the past 10 years, developers have become more and more in charge of what services and infrastructure they’re actually using,” Ionel explained. And that’s where the Kontena and Lens acquisitions fit in. Managing Kubernetes clusters, after all, isn’t trivial — yet now developers are often tasked with managing and monitoring how their applications interact with their company’s infrastructure.

“Lens makes it dramatically easier for developers to work with Kubernetes, to build and deploy their applications on Kubernetes, and it’s just a huge obstacle-remover for people who are turned off by the complexity of Kubernetes to get more value,” he added.

“I’m very excited to see that we found a common vision with Adrian for how to incorporate Lens and how to make life for developers more enjoyable in this cloud-native technology landscape,” Miska Kaipiainen, the former CEO of Kontena and now Mirantis’ director of Engineering, told me.

He describes Lens as an IDE for Kubernetes. While you could obviously replicate Lens’ functionality with existing tools, Kaipiainen argues that it would take 20 different tools to do this. “One of them could be for monitoring, another could be for logs. A third one is for command-line configuration, and so forth and so forth,” he said. “What we have been trying to do with Lens is that we are bringing all these technologies [together] and provide one single, unified, easy to use interface for developers, so they can keep working on their workloads and on their clusters, without ever losing focus and the context of what they are working on.”

Among other things, Lens includes a context-aware terminal, multi-cluster management capabilities that work across clouds and support for the open-source Prometheus monitoring service.

For Mirantis, Lens is a very strategic investment and the company will continue to develop the service. Indeed, Ionel said the Lens team now basically has unlimited resources.

Looking ahead, Kaipiainen said the team is looking at adding extensions to Lens through an API within the next couple of months. “Through this extension API, we are actually able to collaborate and work more closely with other technology vendors within the cloud technology landscape so they can start plugging directly into the Lens UI and visualize the data coming from their components, so that will make it very powerful.”

Ionel also added that the company is working on adding more features for larger software teams to Lens, which is currently a single-user product. A lot of users are already using Lens in the context of very large development teams, after all.

While the core Lens tools will remain free and open source, Mirantis will likely charge for some new features that require a centralized service for managing them. What exactly that will look like remains to be seen, though.

If you want to give Lens a try, you can download the Windows, macOS and Linux binaries here.

Jul
09
2020
--

Docker partners with AWS to improve container workflows

Docker and AWS today announced a new collaboration that introduces a deep integration between Docker’s Compose and Desktop developer tools and AWS’s Elastic Container Service (ECS) and ECS on AWS Fargate. Previously, the two companies note, the workflow to take Compose files and run them on ECS was often challenging for developers. Now, the two companies simplified this process to make switching between running containers locally and on ECS far easier.

[image: Docker/AWS architecture overview]

“With a large number of containers being built using Docker, we’re very excited to work with Docker to simplify the developer’s experience of building and deploying containerized applications to AWS,” said Deepak Singh, the VP for compute services at AWS. “Now customers can easily deploy their containerized applications from their local Docker environment straight to Amazon ECS. This accelerated path to modern application development and deployment allows customers to focus more effort on the unique value of their applications, and less time on figuring out how to deploy to the cloud.”

In a bit of a surprise move, Docker last year sold off its enterprise business to Mirantis to solely focus on cloud-native developer experiences.

“In November, we separated the enterprise business, which was very much focused on operations, CXOs and a direct sales model, and we sold that business to Mirantis,” Docker CEO Scott Johnston told TechCrunch’s Ron Miller earlier this year. “At that point, we decided to focus the remaining business back on developers, which was really Docker’s purpose back in 2013 and 2014.”

Today’s move is an example of this new focus, given that the workflow issues this partnership addresses had been around for quite a while already.

It’s worth noting that Docker also recently engaged in a strategic partnership with Microsoft to integrate the Docker developer experience with Azure’s Container Instances.

May
28
2020
--

Mirantis releases its first major update to Docker Enterprise

In a surprise move, Mirantis acquired Docker’s Enterprise platform business at the end of last year, and while Docker itself is refocusing on developers, Mirantis kept the Docker Enterprise name and product. Today, Mirantis is rolling out its first major update to Docker Enterprise with the release of version 3.1.

For the most part, these updates are in line with what’s been happening in the container ecosystem in recent months. There’s support for Kubernetes 1.17 and improved support for Kubernetes on Windows (something the Kubernetes community has worked on quite a bit in the last year or so). Also new is Nvidia GPU integration in Docker Enterprise through a pre-installed device plugin, as well as support for Istio Ingress for Kubernetes and a new command-line tool for deploying clusters with the Docker Engine.

In addition to the product updates, Mirantis is also launching three new support options for its customers that now give them the option to get 24×7 support for all support cases, for example, as well as enhanced SLAs for remote managed operations, designated customer success managers and proactive monitoring and alerting. With this, Mirantis is clearly building on its experience as a managed service provider.

What’s maybe more interesting, though, is how this acquisition is playing out at Mirantis itself. Mirantis, after all, went through its fair share of ups and downs in recent years, from high-flying OpenStack platform to layoffs and everything in between.

“Why we do this in the first place and why at some point I absolutely felt that I wanted to do this is because I felt that this would be a more compelling and interesting company to build, despite maybe some of the short-term challenges along the way, and that very much turned out to be true. It’s been fantastic,” Mirantis CEO and co-founder Adrian Ionel told me. “What we’ve seen since the acquisition, first of all, is that the customer base has been dramatically more loyal than people had thought, including ourselves.”

Ionel admitted that he thought some users would defect because this is obviously a major change, at least from the customer’s point of view. “Of course we have done everything possible to have something for them that’s really compelling and we put out the new roadmap right away in December after the acquisition — and people bought into it at very large scale,” he said. With that, Mirantis retained more than 90% of the customer base and the vast majority of all of Docker Enterprise’s largest users.

Ionel, who almost seemed a bit surprised by this, noted that this helped the company turn in two “fantastic” quarters and be profitable in the last quarter, despite COVID-19.

“We wanted to go into this acquisition with a sober assessment of risks because we wanted to make it work, we wanted to make it successful because we were well aware that a lot of acquisitions fail,” he explained. “We didn’t want to go into it with a hyper-optimistic approach in any way — and we didn’t — and maybe that’s one of the reasons why we are positively surprised.”

He argues that the reason for the current success is that enterprises are doubling down on their container journeys and because they actually love the Docker Enterprise platform, like infrastructure independence, its developer focus, security features and ease of use. One thing many large customers asked for was better support for multi-cluster management at scale, which today’s update delivers.

“Where we stand today, we have one product development team. We have one product roadmap. We are shipping a very big new release of Docker Enterprise. […] The field has been completely unified and operates as one salesforce, with record results. So things have been extremely busy, but good and exciting.”

May
27
2020
--

Docker expands relationship with Microsoft to ease developer experience across platforms

When Docker sold off its enterprise division to Mirantis last fall, that didn’t mark the end of the company. In fact, Docker still exists and has refocused as a cloud-native developer tools vendor. Today it announced an expanded partnership with Microsoft around simplifying running Docker containers in Azure.

As its new mission suggests, it involves tighter integration between Docker and a couple of Azure developer tools including Visual Studio Code and Azure Container Instances (ACI). According to Docker, it can take developers hours or even days to set up their containerized environment across the two sets of tools.

The idea of the integration is to make it easier, faster and more efficient to include Docker containers when developing applications with the Microsoft tool set. Docker CEO Scott Johnston says it’s a matter of giving developers a better experience.

“Extending our strategic relationship with Microsoft will further reduce the complexity of building, sharing and running cloud-native, microservices-based applications for developers. Docker and VS Code are two of the most beloved developer tools and we are proud to bring them together to deliver a better experience for developers building container-based apps for Azure Container Instances,” Johnston said in a statement.

Among the features they are announcing is the ability to log into Azure directly from the Docker command line interface, a big simplification that reduces going back and forth between the two sets of tools. What’s more, developers can set up a Microsoft ACI environment complete with a set of configuration defaults. Developers will also be able to switch easily between their local desktop instance and the cloud to run applications.

These and other integrations are designed to make it easier for common Azure and Docker users to work in the Microsoft cloud service without having to jump through a lot of extra hoops to do it.

It’s worth noting that these integrations are starting in beta, but the company promises they should be released sometime in the second half of this year.
