Dec 04, 2018

Microsoft and Docker team up to make packaging and running cloud-native applications easier

Microsoft and Docker today announced a new joint open-source project, the Cloud Native Application Bundle (CNAB), that aims to make the lifecycle management of cloud-native applications easier. At its core, the CNAB is nothing but a specification that allows developers to declare how an application should be packaged and run. With this, developers can define their resources and then deploy the application to anything from their local workstation to public clouds.

The specification was born inside Microsoft, but as the team talked to Docker, it turns out that the engineers there were working on a similar project. The two decided to combine forces and launch the result as a single open-source project. “About a year ago, we realized we’re both working on the same thing,” Microsoft’s Gabe Monroy told me. “We decided to combine forces and bring it together as an industry standard.”

As part of this, Microsoft is launching its own reference implementation of a CNAB client today. Duffle, as it’s called, allows users to perform all the usual lifecycle steps (install, upgrade, uninstall), create new CNAB bundles and sign them cryptographically. Docker is working on integrating CNAB into its own tools, too.
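
To make those lifecycle steps concrete, here is a rough sketch of what driving a bundle with a CNAB client such as Duffle might look like. The bundle file and installation name are made-up placeholders, and the exact subcommand syntax is an assumption based on the steps named above rather than something taken from the announcement:

# Hypothetical walkthrough; "myapp" and the bundle file are illustrative only
duffle install myapp ./myapp-bundle.json
duffle upgrade myapp
duffle uninstall myapp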

Microsoft also today launched a Visual Studio extension for building and hosting these bundles, as well as an example implementation of a bundle repository server and an Electron installer that lets you install a bundle with the help of a GUI.

Now it’s worth noting that we’re talking about a specification and reference implementations here. There is obviously a huge ecosystem of lifecycle management tools on the market today that all have their own strengths and weaknesses. “We’re not going to be able to unify that tooling,” said Monroy. “I don’t think that’s a feasible goal. But what we can do is we can unify the model around it, specifically the lifecycle management experience as well as the packaging and distribution experience. That’s effectively what Docker has been able to do with the single-workload case.”

Over time, Microsoft and Docker would like for the specification to end up in a vendor-neutral foundation. Which one remains to be seen, though the Open Container Initiative seems like the natural home for a project like this.

Dec 03, 2018

Percona Live 2019 Call for Papers is Now Open!

Announcing the opening of the Percona Live 2019 Open Source Database Conference call for papers. It will be open from now until January 20, 2019. The Percona Live Open Source Database Conference 2019 takes place May 28-30 in Austin, Texas.

Our theme this year is CONNECT. ACCELERATE. INNOVATE.

As a speaker at Percona Live, you’ll have the opportunity to CONNECT with your peers—open source database experts and enthusiasts who share your commitment to improving knowledge and exchanging ideas. ACCELERATE your projects and career by presenting at the premier open source database event, a great way to build your personal and company brands. And influence the evolution of the open source software movement by demonstrating how you INNOVATE!

Community initiatives remain core to the open source ethos, and we are proud of the contribution we make with Percona Live in showcasing thought-leading practices in the open source database world.

With a nod to innovation, this year we are introducing a business track to benefit those business leaders who are exploring the use of open source and are interested in learning more about its costs and benefits.

Speaking Opportunities

The Percona Live Open Source Database Conference 2019 Call for Papers is open until January 20, 2019. We invite you to submit your speaking proposal for breakout, tutorial or lightning talk sessions. Classes and talks are invited for Foundation (either entry-level or of general interest to all), Core (intermediate), and Masterclass (advanced) levels.

  • Breakout Session. Broadly cover a technology area using specific examples. Sessions should be either 25 minutes or 50 minutes in length (including Q&A).
  • Tutorial Session. Present a technical session that aims for a level between a training class and a conference breakout session. We encourage attendees to bring and use laptops for working on detailed and hands-on presentations. Tutorials will be three or six hours in length (including Q&A).
  • Lightning Talk. Give a five-minute presentation focusing on one key point that interests the open source community: technical, lighthearted or entertaining talks on new ideas, a successful project, a cautionary story, a quick tip or demonstration.

If your proposal is selected for breakout or tutorial sessions, you will receive a complimentary full conference pass.

Topics and Themes

We want proposals that cover the many aspects of application development using all open source databases, as well as new and interesting ways to monitor and manage database environments. Did you just embrace open source databases this year? What are the technical and business values of moving to or using open source databases? How did you convince your company to make the move? Was there tangible ROI?

Best practices and current trends, including design, application development, performance optimization, HA and clustering, cloud, containers and new technologies – what’s holding your focus? Share your case studies, experiences and technical knowledge with an engaged audience of open source peers.

In the submission entry, indicate which of these themes your proposal best fits: tutorial; business needs; case studies/use cases; operations; or development. Also include which track(s) from the list below would be best suited to your talk.

Tracks

The conference committee is looking for proposals that cover the many aspects of using, deploying and managing open source databases, including:

  • MySQL. Do you have an opinion on what is new and exciting in MySQL? With the release of MySQL 8.0, are you using the latest features? How and why? Are they helping you solve any business issues, or making deployment of applications and websites easier, faster or more efficient? Did the new release influence you to change to MySQL? What do you see as the biggest impact of the MySQL 8.0 release? Do you use MySQL in conjunction with other databases in your environment?
  • MariaDB. Talks highlighting MariaDB and MariaDB compatible databases and related tools. Discuss the latest features, how to optimize performance, and demonstrate the best practices you’ve adopted from real production use cases and applications.
  • PostgreSQL. Why do you use PostgreSQL as opposed to other SQL options? Have you done a comparison or benchmark of PostgreSQL vs. other types of databases related to your applications? Why, and what were the results? How does PostgreSQL help you with application performance or deployment? How do you use PostgreSQL in conjunction with other databases in your environment?
  • MongoDB. Has the 4.0 release improved your experience in application development or time-to-market? How are the new features making your database environment better? What is it about MongoDB 4.0 that excites you? What are your experiences with Atlas? Have you moved to it, and has it lived up to its promises? Do you use MongoDB in conjunction with other databases in your environment?
  • Polyglot Persistence. How are you using multiple open source databases together? What tools and technologies are helping you to get them interacting efficiently? In what ways are multiple databases working together helping to solve critical business issues? What are the best practices you’ve discovered in your production environments?
  • Observability and Monitoring. How are you designing your database-powered applications for observability? What monitoring tools and methods are providing you with the best application and database insights for running your business? How are you using tools to troubleshoot issues and bottlenecks? How are you observing your production environment in order to understand the critical aspects of your deployments? 
  • Kubernetes. How are you running open source databases on Kubernetes, OpenShift and other container platforms? What software are you using to facilitate their use? What best practices and processes are making containers a vital part of your business strategy?
  • Automation and AI. How are you using automation to run databases at scale? Are you using automation to create self-running, self-healing, and self-tuning databases? Is machine learning and artificial intelligence (AI) helping you create a new generation of database automation?
  • Migration to Open Source Databases. How are you migrating to open source databases? Are you migrating on-premises or to the cloud? What are the tools and strategies you’ve used that have been successful, and what have you learned during and after the migration? Do you have real-world migration stories that illustrate how best to migrate?
  • Database Security and Compliance. All of us have experienced security and compliance challenges. From new legislation like GDPR and HIPAA, standards like PCI, exploited software bugs, and new threats such as ransomware attacks, when is enough “enough”? What are your best practices for preventing incursions? How do you maintain compliance as you move to the cloud? Are you finding that security and compliance requirements are limiting your ability to be agile?
  • Other Open Source Databases. There are many, many great open source database software and solutions we can learn about. Submit other open source database talk ideas – we welcome talks for both established database technologies as well as the emerging new ones that no one has yet heard about (but should).
  • Business and Enterprise. Has your company seen big improvements in ROI from using Open Source Databases? Are there efficiency levels or interesting case studies you want to share? How did you convince your company to move to Open Source?

How to Respond to the Call for Papers

For information on how to submit your proposal, visit our call for papers page.

Sponsorship

If you would like to obtain a sponsor pack for Percona Live Open Source Database Conference 2019, you will find more information including a prospectus on our sponsorship page. You are welcome to contact me, Bronwyn Campbell, directly.

Nov 06, 2018

VMware acquires Heptio, the startup founded by 2 co-founders of Kubernetes

During its big customer event in Europe, VMware announced another acquisition to step up its game in helping enterprises build and run containerised, Kubernetes-based architectures: it has acquired Heptio, a startup out of Seattle that was co-founded by Joe Beda and Craig McLuckie, who were two of the three people who co-created Kubernetes back at Google in 2014 (it has since been open sourced).

Beda and McLuckie and their team will all be joining VMware in the transaction.

Terms of the deal are not being disclosed — VMware said in a release that they are not material to the company — but as a point of reference, when Heptio last raised money — a $25 million Series B in 2017, with investors including Lightspeed, Accel and Madrona — it was valued at $117 million post-money, according to data from PitchBook.

Given the pedigree of Heptio’s founders, this is a signal of the big bet that VMware is taking on Kubernetes, and the belief that it will become an increasing cornerstone in how enterprises run their businesses. The larger company already works with 500,000+ customers globally, and 75,000 partners. It’s not clear how many customers Heptio worked with but they included large, tech-forward businesses like Yahoo Japan.

It’s also another endorsement of the ongoing rise of open source and its role in cloud architectures, a paradigm that got its biggest boost at the end of October with IBM’s acquisition of Red Hat, one of the biggest tech acquisitions of all time at $34 billion.

Heptio provides professional services for enterprises that are adopting or already use Kubernetes, offering training and support and building open-source projects for managing specific aspects of Kubernetes and related container clusters. This deal is about VMware using that expertise to expand the business funnel and margins for Kubernetes within its wider cloud, on-premise and hybrid storage and computing services.

“Kubernetes is emerging as an open framework for multi-cloud infrastructure that enables enterprise organizations to run modern applications,” said Paul Fazzone, senior vice president and general manager, Cloud Native Apps Business Unit, VMware, in a statement. “Heptio products and services will reinforce and extend VMware’s efforts with PKS to establish Kubernetes as the de facto standard for infrastructure across clouds upon closing. We are thrilled that the Heptio team led by Craig and Joe will be joining VMware to help us guide customers as they move to a multi-cloud world.”

VMware and its Pivotal business already offer Kubernetes-related services by way of PKS, which lets organizations run cloud-agnostic apps. Heptio will become a part of that wider portfolio.

“The team at Heptio has been focused on Kubernetes, creating products that make it easier to manage multiple clusters across multiple clouds,” said Craig McLuckie, CEO and co-founder of Heptio. “And now we will be tapping into VMware’s cloud native resources and proven ability to execute, amplifying our impact. VMware’s interest in Heptio is a recognition that there is so much innovation happening in open source. We are jointly committed to contribute even more to the community—resources, ideas and support.”

VMware has made some 33 acquisitions overall, according to Crunchbase, but this appears to be the first made specifically to boost its position in Kubernetes.

The deal is expected to close by fiscal Q4 2019, VMware said.

Oct 11, 2018

New Relic acquires Belgium’s CoScale to expand its monitoring of Kubernetes containers and microservices

New Relic, a provider of analytics and monitoring around a company’s internal and external facing apps and services to help optimise their performance, is making an acquisition today as it continues to expand a newer area of its business, containers and microservices. The company has announced that it has purchased CoScale, a provider of monitoring for containers and microservices, with a specific focus on Kubernetes.

Terms of the deal — which will include the team and technology — are not being disclosed, as it will not have a material impact on New Relic’s earnings. The larger company is traded on the NYSE (ticker: NEWR) and has been on a strong upswing over the last two years; its current market cap is around $4.6 billion.

Originally founded in Belgium, CoScale had raised $6.4 million and was last valued at $7.9 million, according to PitchBook. Investors included Microsoft (via its ScaleUp accelerator), PMV and the Qbic Fund, two Belgian investors.

“We are thrilled to bring CoScale’s knowledge and deeply technical team into the New Relic fold,” noted Ramon Guiu, senior director of product management at New Relic. “The CoScale team members joining New Relic will focus on incorporating CoScale’s capabilities and experience into continuing innovations for the New Relic platform.”

The deal underscores how New Relic has had to shift in the last couple of years: when the company was founded years ago, application monitoring was a relatively easy task, with the web and a specified number of services the limit of what needed attention. But services, apps and functions have become increasingly complex and now tap data stored across a range of locations and devices, and processing everything generates a lot of computing demand.

New Relic first added container and microservices monitoring to its stack in 2016. That was a somewhat late arrival to the area, but New Relic CEO Lew Cirne believes it came at just the right time, dovetailing New Relic’s changes with wider shifts in the market.

“We think those changes have actually been an opportunity for us to further differentiate and further strengthen our thesis that the New Relic way is really the most logical way to address this,” he told my colleague Ron Miller last month. As Ron wrote, Cirne’s take is that New Relic has always been centered on the code, as opposed to the infrastructure where it’s delivered, and that has helped it make adjustments as the delivery mechanisms have changed.

New Relic already provides monitoring for Kubernetes, Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (EKS), Microsoft Azure Kubernetes Service (AKS), and Red Hat OpenShift, and the idea is that CoScale will help it ramp up across that range, while also adding Docker and OpenShift to the mix, as well as offering new services down the line to serve the DevOps community.

“The visions of New Relic and CoScale are remarkably well aligned, so our team is excited that we get to join New Relic and continue on our journey of helping companies innovate faster by providing them visibility into the performance of their modern architectures,” said CoScale CEO Stijn Polfliet, in a statement. “[Co-founder] Fred [Ryckbosch] and I feel like this is such an exciting space and time to be in this market, and we’re thrilled to be teaming up with the amazing team at New Relic, the leader in monitoring modern applications and infrastructure.”

Aug 15, 2018

Twistlock snares $33 million Series C investment to secure cloud native environments

As the world shifts to a cloud native approach, the way you secure applications as they get deployed is changing too. Twistlock, a company built from the ground up to secure cloud native environments, announced a $33 million Series C round today led by Iconiq Capital.

Previous investors YL Ventures, TenEleven, Rally Ventures, Polaris Partners and Dell Technologies Capital also participated in the round. The company reports it has received a total of $63 million in venture investment to date.

Twistlock is solving a hard problem around securing containers and serverless, which are by their nature ephemeral. They can live for fractions of a second, making it hard to track problems when they happen. According to company CEO and co-founder Ben Bernstein, his company came out of the gate building a security product designed to protect a cloud-native environment with the understanding that while containers and serverless computing may be ephemeral, they are still exploitable.

“It’s not about how long they live, but about the fact that the way they live is more predictable than a traditional computer, which could be running for a very long time and might have humans actually using it,” Bernstein said.

Screenshot: Twistlock

As companies move to a cloud native environment using Dockerized containers and managing them with Kubernetes and other tools, they create a highly automated system to deal with the deployment volume. While automation simplifies deployment, it can also leave companies vulnerable to a host of issues. For example, if a malicious actor were to get control of the process via a code injection attack, they could cause a lot of problems without anyone knowing about it.

Twistlock is built to help prevent that, while also helping customers recognize when an exploit happens and performing forensic analysis to figure out how it happened.

It’s not a traditional Software as a Service as we’ve come to think of it. Instead, it is a service that gets installed on whatever public or private cloud the customer is using. So far, they count just over 200 customers, including Walgreens and Aetna and a slew of other companies you would definitely recognize, but that they couldn’t name publicly.

The company, which was founded in 2015, is based in Portland, Oregon with their R&D arm in Israel. They currently have 80 employees. Bernstein said from a competitive standpoint, the traditional security vendors are having trouble reacting to cloud native, and while he sees some startups working at it, he believes his company has the most mature offering, at least for now.

“We don’t have a lot of competition right now, but as we start progressing we will see more,” he said. He plans to use the money they receive today to help expand their marketing and sales arm to continue growing their customer base, but also engineering to stay ahead of that competition as the cloud-native security market continues to develop.

Aug 09, 2018

How to Change Settings for PMM Deployed via Docker


When deployed through Docker, Percona Monitoring and Management (PMM) uses environment variables for its configuration.

For example, if you want to adjust the metrics resolution, you can pass -e METRICS_RESOLUTION=Ns as an option to the docker run command:

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=2s \
  percona/pmm-server:latest

You would think that if you want to change a setting for an existing installation, you could just stop the container with docker stop and then, when you want to start it again, pass the new environment variable to docker start.

Unfortunately, this is not going to work, as docker start does not support changing environment variables, at least not at the time of writing. I assume the idea is to keep the container immutable: if you want a container with different properties, such as environment variables, you should run a new container instead. Here’s how.

Stop and rename the old container, just in case you want to go back

docker stop pmm-server
docker rename pmm-server pmm-server-old

Refresh the container with the latest version

docker pull percona/pmm-server:latest

Do not miss this step! When you destroy and recreate the container, all the updates you have made through the PMM web interface will be lost. What’s more, the software version will be reset to the one in the Docker image. Running an old PMM version with a data volume modified by a new PMM version may cause unpredictable results, including data loss.

Run the container with the new settings, for example changing METRICS_RESOLUTION

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  -e METRICS_RESOLUTION=5s \
  percona/pmm-server:latest
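
If you want to confirm that the container really picked up the new value, you can inspect its configured environment. This is just a quick sanity check using the container name from the commands above:

docker inspect --format '{{ .Config.Env }}' pmm-server

The output should include METRICS_RESOLUTION=5s among the other environment variables.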

After you’re happy with your new container deployment you can remove the old container

docker rm pmm-server-old

That’s it! You should now be running the latest PMM version with updated configuration settings.


Jun 25, 2018

Running Percona XtraDB Cluster in Kubernetes/OpenShift

Diagram of Percona XtraDB Cluster / MySQL running in Kubernetes Open Shift

Kubernetes, and its most popular distribution OpenShift, receive a lot of interest as container orchestration platforms. However, databases remain a foreign entity, primarily because of their stateful nature, since container orchestration systems prefer stateless applications. That said, there has been good progress in support for StatefulSet applications and persistent storage, to the extent that it might already be comfortable to have a production database instance running in Kubernetes. With this in mind, we’ve been looking at running Percona XtraDB Cluster in Kubernetes/OpenShift.

While there are already many examples on the Internet of how to start a single MySQL instance in Kubernetes, for serious usage we need to provide:

  • High Availability: how can we guarantee availability when an instance (or Pod in Kubernetes terminology) crashes or becomes unresponsive?
  • Persistent storage: we do not want to lose our data in case of instance failure
  • Backup and recovery
  • Traffic routing: in the case of multiple instances, how do we direct an application to the correct one?
  • Monitoring

Percona XtraDB Cluster in Kubernetes/OpenShift

Schematically it looks like this:


Percona XtraDB Cluster in Kubernetes/OpenShift: a possible configuration for a resilient solution

The picture highlights the components we are going to use.

Running this in Kubernetes assumes a high degree of automation and minimal manual intervention.

We provide our proof of concept in this project: https://github.com/Percona-Lab/percona-openshift. Please treat it as a source of ideas and as an alpha-quality project; it is in no way production ready.

Details

In our implementation we rely on Helm, the package manager for Kubernetes. Unfortunately OpenShift does not officially support Helm out of the box, but there is a guide from Red Hat on how to make it work.

In the clustering setup, it is quite typical to use a service discovery software like Zookeeper, etcd or Consul. It may become necessary for our Percona XtraDB Cluster deployment, but for now, to simplify deployment, we are going to use the DNS service discovery mechanism provided by Kubernetes. It should be enough for our needs.
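
As a rough illustration of how the Kubernetes DNS mechanism serves this purpose: every Pod created by a StatefulSet that is backed by a headless Service gets a stable DNS name, so cluster members can find each other by name. The StatefulSet and Service names below are hypothetical, not the ones used in our project, and the lookup assumes the image ships a DNS utility such as nslookup:

# Assuming a StatefulSet "pxc" exposed through a headless Service also named "pxc"
kubectl exec pxc-0 -- nslookup pxc-1.pxc
# pxc-1.pxc resolves to the Pod IP of the second cluster member,
# e.g. pxc-1.pxc.default.svc.cluster.local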

We also expect the Kubernetes deployment to provide Dynamic Storage Provisioning. The major cloud providers (like Google Cloud, Microsoft Azure or Amazon Web Services) should have it. However, it might not be easy to have Dynamic Storage Provisioning for on-premise deployments; you may need to set up GlusterFS or Ceph to provide it.
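
A quick way to check whether your cluster can do this is to list the available storage classes; if one is marked as the default, PersistentVolumeClaims will be provisioned automatically:

kubectl get storageclass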

The challenge with a distributed file system is how many copies of the data you end up having. Percona XtraDB Cluster by itself keeps three copies, and GlusterFS will also require at least two copies of the data, so in the end we would have six copies. This is not good for write-intensive applications, and it’s also not good from a capacity standpoint.

One possible approach is to have local data copies for Percona XtraDB Cluster deployments. This will provide better performance and less impact on the network, but in the case of a big dataset (100GB+) a node failure will require an SST (full State Snapshot Transfer), with a big impact on the cluster and network. So the solution should be tailored to your workload and your requirements.

Now that we have a basic setup working, it would be good to understand the performance impact of running Percona XtraDB Cluster in Kubernetes. Is the network and storage overhead acceptable, or is it too big? We plan to look into this in the future.

Once again, our project is located at https://github.com/Percona-Lab/percona-openshift; we are looking for your feedback and for your experience of running databases in Kubernetes/OpenShift.

Before you leave …

Percona XtraDB Cluster

If this article has interested you and you would like to know more about Percona XtraDB Cluster, you might enjoy our recent series of webinar tutorials that introduce this software and how to use it.


Jun 13, 2018

Docker aims to federate container management across clouds

When Docker burst on the scene in 2013, it brought the idea of containers to a broad audience. Since then, Kubernetes has emerged as a way to orchestrate the delivery of those containerized apps, but Docker saw a gap beyond pure container deployment that wasn’t being addressed, and it is trying to fill it with the next release of Docker Enterprise Edition. Docker made the announcement today at DockerCon in San Francisco.

Scott Johnston, chief product officer at Docker, says that Docker Enterprise Edition’s new federated application management feature helps operations teams manage multiple clusters, whether those clusters are on premise, in the cloud or across different public cloud providers. This allows federated management of applications wherever they live, and supports managed Kubernetes tools from the big three public cloud providers, including Azure AKS, AWS EKS and Google GKE.

Johnston says that deploying the containers is just the first part of the problem. There is a whole set of issues to deal with outside of Kubernetes (and other orchestration tools) once your application begins being deployed. “So, you know, you get portability of containers with the Docker format and the Kubernetes or Compose description files, but once you land on an environment, that environment has deployment scripts, security models, user management and [so forth]. So while the app is portable, the management of these applications is not,” he explained.

He says that can lead to a set of separate deployment tools creating a new level of complexity that using containers was supposed to eliminate. This is especially true when deploying across multiple clouds (and on prem sometimes too). If you need load balancing, security, testing and so forth — the kinds of tasks the operations team has to undertake — and you want to apply these in a consistent way regardless of the environment, Johnston says that Docker EE should help by creating a single place to manage across environments and achieve that cloud native goal of managing all your applications and data and infrastructure in a unified way.

In addition to the federated management component, Docker also announced Windows Server containers on Kubernetes for Docker Enterprise Edition. It had previously announced support for Linux containers last year.

Finally, the company is introducing a template-based approach to Docker deployment to enable people in the organization with a bit less technical sophistication to deploy from a guided graphical process instead of a command line interface.

The federated application management will be available in beta starting in the second half of this year, support for Windows Server Containers will be included in the next release of Docker Enterprise Edition later this year, and Templates will be available in beta in Docker Desktop later this year.

Jun 12, 2018

Sumo Logic brings data analysis to containers

Sumo Logic has long held the goal to help customers understand their data wherever it lives. As we move into the era of containers, that goal becomes more challenging because containers by their nature are ephemeral. The company announced a product enhancement today designed to instrument containerized applications in spite of that.

They are debuting these new features at DockerCon, Docker’s customer conference taking place this week in San Francisco.

Sumo’s CEO Ramin Sayer says containers have begun to take hold over the last 12-18 months with Docker and Kubernetes emerging as tools of choice. Given their popularity, Sumo wants to be able to work with them. “[Docker and Kubernetes] are by far the most standard things that have developed in any new shop, or any existing shop that wants to build a brand new modern app or wants to lift and shift an app from on prem [to the cloud], or have the ability to migrate workloads from Vendor A platform to Vendor B,” he said.

He’s not wrong of course. Containers and Kubernetes have been taking off in a big way over the last 18 months and developers and operations alike have struggled to instrument these apps to understand how they behave.

“But as that standardization of adoption of that technology has come about, it makes it easier for us to understand how to instrument, collect, analyze, and more importantly, start to provide industry benchmarks,” Sayer explained.

They do this by avoiding the use of agents. Regardless of how you run your application, whether in a VM or a container, Sumo is able to capture the data and give you feedback you might otherwise have trouble retrieving.

Screen shot: Sumo Logic (cropped)

The company has built in native support for Kubernetes and Amazon Elastic Container Service for Kubernetes (Amazon EKS). It also supports the open source tool Prometheus favored by Kubernetes users to extract metrics and metadata. The goal of the Sumo tool is to help customers fix issues faster and reduce downtime.

As they work with this technology, they can begin to understand norms and pass that information onto customers. “We can guide them and give them best practices and tips, not just on what they’ve done, but how they compare to other users on Sumo,” he said.

Sumo Logic was founded in 2010 and has raised $230 million, according to data on Crunchbase. Its most recent round was a $70 million Series F led by Sapphire Ventures last June.

May 29, 2018

Deploying PMM on DigitalOcean


It’s very easy to install Percona Monitoring and Management (PMM) on DigitalOcean. If you’ve never used DigitalOcean before, you will find that it is user-friendly and not very expensive. For $5/month you can easily host your PMM on it, letting you monitor your simple infrastructure or try out PMM before implementing it to monitor your production environments.

Let’s prepare the DigitalOcean instance

Log in to DigitalOcean (DO) control panel and click “Create Droplet.”

Log in to DigitalOcean panel and click "Create Droplet."

Thanks to DO, you can skip the boring OS setup and save time by using the Docker “One-click app” in DO and the Docker image from PMM.

Create Droplet on DigitalOcean

Note: After clicking on “Docker…” choose an instance size that accommodates your budget – PMM can run on as little as the 1GB 1vCPU instance!

Choose Droplet Size

Note: Scroll again!

Next step – select a nearby region

Since the next Percona Live Europe 2018 will be in Frankfurt (https://www.percona.com/blog/2018/04/05/percona-live-europe-2018-save-the-date/), for me the location choice is obvious.

Choose DigitalOcean datacenter region

The final step in this section is ‘Set Hostname’

I recommend you add ‘pmm-server-‘ at the beginning so that you can easily find it in your control panel. The name in my case is ‘pmm-server-docker-s-1vcpu-1gb-fra1-01’ and I’ll use it later in this tutorial.

Finalize and create Droplet hostname

Click “Create” and wait a while. You can follow the process on the dashboard:

Creating the instance of DigitalOcean Droplet

When the Droplet is created, you’ll get an email with your login details.

The next step is ‘Set up PMM on the Droplet’

SSH to the server, change the password, and let’s prepare to install the PMM server.

==================
random@random-vb:~$ ssh root@X.X.X.X
...
"ufw" has been enabled. All ports except 22 (SSH), 80 (http) and 443 (https)
have been blocked by default.
...
Changing password for root.
(current) UNIX password:
Enter new UNIX password:
Retype new UNIX password:
root@pmm-server-docker-s-1vcpu-1gb-fra1-01:~#
====================

Note the output for the first login. You are getting Ubuntu 16.04 with pre-installed Docker.

The instructions for installing PMM are very simple. You can read them at https://www.percona.com/doc/percona-monitoring-and-management/deploy/server/docker.html

1) Pull the latest version from Docker Hub:

docker pull percona/pmm-server:latest

Wait for some time (this depends on your internet connection)

2) Create a container for persistent PMM data

docker create \
  -v /opt/prometheus/data \
  -v /opt/consul-data \
  -v /var/lib/mysql \
  -v /var/lib/grafana \
  --name pmm-data \
  percona/pmm-server:latest /bin/true

3) Create and launch PMM Server in one command

docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  percona/pmm-server:latest

Just to confirm that your containers are available, go ahead and run “docker ps.” You’ll see something like this:

root@pmm-server-docker-s-1vcpu-1gb-fra1-01:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5513858041f7 percona/pmm-server:latest "/opt/entrypoint.sh" 2 minutes ago Up 2 minutes 0.0.0.0:80->80/tcp, 443/tcp pmm-server

That’s all! Congratulations! Your PMM server is running.

If you open the IP of your server in the browser, you’ll see something like this:

PMM running in DigitalOcean Droplet instance

There you can see that PMM has already started monitoring itself.
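
If you prefer to check from the terminal first, a simple HTTP request against the Droplet’s public IP (the same X.X.X.X placeholder as above) should confirm that the server is answering:

curl -I http://X.X.X.X/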

Now you need to install the PMM client on your database server and configure it; instructions for this are at https://www.percona.com/doc/percona-monitoring-and-management/deploy/client/index.html
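
For reference, the client-side setup usually comes down to pointing pmm-admin at your server and adding the services you want to monitor. The commands below are a rough sketch for PMM 1.x with placeholder credentials; check the documentation linked above for the exact flags for your version:

pmm-admin config --server X.X.X.X
pmm-admin add mysql --user root --password YourPassword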

Please note: if you also use DO for the database server and connect to it by its external IP, you’ll probably face “the firewall problem.” In this case, you need to open ports using the “ufw” tool (see the welcome message from DigitalOcean). For testing purposes, you can use

ufw allow 42000:42999/tcp

To open only the pmm-client related ports, follow https://www.percona.com/doc/percona-monitoring-and-management/glossary.terminology.html#term-ports. To run ufw, you need to use the terminal, and you can find more information about ufw at https://www.digitalocean.com/community/tutorials/ufw-essentials-common-firewall-rules-and-commands. Once you have opened up the ports, PMM should work correctly for this setup.

Final recommendation: depending on your load, you may need to monitor your System Overview dashboard, which you’ll find at http://X.X.X.X/graph/somesymbols/system-overview

If you are out of space, upgrade your DO Droplet.

