Jun
14
2018
--

Percona Monitoring and Management: Look After Your pmm-data Container

If you have already deployed PMM Server using Docker, you might be aware that we begin by creating a special container for persistent PMM data. In this post, I aim to explain the importance of the pmm-data container when you deploy PMM Server with Docker. By the end of this post, you will have a fair idea of why this Docker container is needed.

Percona Monitoring and Management (PMM) is a free and open-source solution for database troubleshooting and performance optimization that you can run in your own environment. It provides time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

What is the purpose of pmm-data?

Well, as simple as its name suggests, when PMM Server runs via Docker its data is stored in the pmm-data container. It’s a dedicated data-only container, created with the -v option, i.e. with data volumes for holding persistent PMM data. We use pmm-data to compartmentalize the persistent data so you can more easily back up and move data consistently across instances or containers. It acts as a single access point from which other running containers (in this case pmm-server) can access the data volumes.

The pmm-data container does not run, but data from the container is used by pmm-server to build graphs. PMM Server is the core of PMM: it aggregates collected data and presents it in the form of tables, dashboards, and graphs in a web interface.

Why do we use docker create?

The docker create command instructs the Docker daemon to create a writable container layer over the Docker image. When you execute docker create using the steps shown, it creates a Docker container named pmm-data and initializes data volumes using the -v flag in conjunction with the create command (e.g. /opt/prometheus/data).

The -v option is used multiple times in current versions of PMM to mount multiple data volumes. This allows you to create the data volume container, and then use its volumes from another container, i.e. pmm-server. We do not want to run the pmm-data container, only to create it. Note: the number of data volumes mounted may change between versions of PMM.

$ docker create \
   -v /opt/prometheus/data \
   -v /opt/consul-data \
   -v /var/lib/mysql \
   -v /var/lib/grafana \
   --name pmm-data \
   percona/pmm-server:latest /bin/true

Make sure that the data volumes you initialize with the -v option match those given in the example. PMM Server expects you to have mounted those directories exactly as demonstrated in the deployment steps. For using different mount points for your PMM deployment, please refer to this blog post. Data volumes are very useful: once designated and created, you can share them and include them as part of other containers. If you use -v or --volume to bind-mount a file or directory that does not yet exist on the Docker host, -v creates the endpoint for you. It is always created as a directory. Data in the pmm-data volumes is actually hosted on the host’s filesystem.
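To see where those volumes actually live on the host, you can inspect the created container. A quick sketch (the host paths under Docker’s storage area will differ on your system):

```shell
# Map each mount point inside the pmm-data container to the
# directory on the host where Docker actually stores the data.
docker inspect \
  --format '{{ range .Mounts }}{{ .Destination }} -> {{ .Source }}{{ "\n" }}{{ end }}' \
  pmm-data

# Or list all volumes known to the Docker daemon:
docker volume ls
```

Each output line maps a container path (e.g. /var/lib/mysql) to its backing directory on the host filesystem.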

Why does pmm-data not run?

As we used docker create and not docker run for pmm-data, this container does not run. It simply exists to make sure you retain all PMM data when you upgrade to a newer PMM Server image. The data volumes mounted on the pmm-data container are shared with the running pmm-server container, because the --volumes-from option is used when launching pmm-server. Here we persist data using Docker without binding it to pmm-server, by storing the files on the host machine. As long as pmm-data exists, the data exists.
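This is how --volumes-from ties the two containers together when pmm-server is launched, following the standard PMM deployment steps (adjust the port mapping and image tag to your environment):

```shell
# Launch PMM Server, attaching all data volumes defined on pmm-data.
docker run -d \
   -p 80:80 \
   --volumes-from pmm-data \
   --name pmm-server \
   --restart always \
   percona/pmm-server:latest
```

The running pmm-server container now reads and writes /opt/prometheus/data, /var/lib/mysql, etc. through the volumes owned by pmm-data.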

You can stop, destroy, or replace a container. When a non-running container is using a volume, the volume is still available to Docker and is not removed automatically. You can easily replace a running pmm-server container with a newer version without any impact or loss of data. That is precisely why we store persistent data in a data volume. In our case, the pmm-data container does not itself write to those volumes, as concurrent writes could cause corruption.

Why can’t I remove the pmm-data container? What happens if I delete it?

Removing the pmm-data container results in the loss of all collected metrics data.

If you remove containers that mount the volumes, including the initial pmm-server container, or any subsequent containers that mount them, such as pmm-server-2, you do not delete the volumes. This allows you to upgrade, or effectively migrate, data volumes between containers. Your data container might be based on an old image version with known security problems. That is not a big problem, since it doesn’t actually run anything, but it doesn’t feel right.
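The upgrade-by-replacement described above can be sketched as follows; pmm-data is never touched, so all collected metrics survive the swap (a sketch, assuming the standard container names from the deployment steps):

```shell
# Stop and remove the old server container. Note: -v is NOT passed
# to docker rm, so the volumes held by pmm-data remain intact.
docker stop pmm-server
docker rm pmm-server

# Start a new container from the newer image, re-attaching
# the same persistent data volumes via pmm-data.
docker run -d \
   -p 80:80 \
   --volumes-from pmm-data \
   --name pmm-server \
   --restart always \
   percona/pmm-server:latest
```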

As noted earlier, pmm-data stores metrics data according to the retention settings. You should not remove or recreate the pmm-data container unless you need to wipe out all PMM data and start again. To delete the volumes from disk, you must explicitly call docker rm -v against the container that references them.

Some do’s and don’ts

  • Allocate enough disk space on the host for pmm-data to retain data.
    By default, Prometheus stores time-series data for 30 days, and QAN stores query data for 8 days.
  • Manage data retention appropriately as per your available disk space.
    You can back up pmm-data by extracting the data from the container, using the steps mentioned here, to avoid data loss in any situation.
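One simple way to extract the data for backup is docker cp, which works even though pmm-data never runs. A sketch, assuming the volume list from the docker create example above; the backup directory is a hypothetical path, so adapt it to your environment:

```shell
# Hypothetical backup target on the host, stamped with today's date.
BACKUP_DIR="/backup/pmm-data-$(date +%F)"
mkdir -p "$BACKUP_DIR"

# Copy each persistent directory out of the (non-running) pmm-data
# container into the backup location.
for dir in /opt/prometheus/data /opt/consul-data /var/lib/mysql /var/lib/grafana; do
    docker cp "pmm-data:$dir" "$BACKUP_DIR"
done
```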

In case of any issues with metrics, here’s a good blog post regarding troubleshooting.

The post Percona Monitoring and Management: Look After Your pmm-data Container appeared first on Percona Database Performance Blog.

Feb
02
2018
--

This Week in Data with Colin Charles 26: Percona Live Schedule is Near Completion, FOSDEM Underway and a Percona Toolkit Use Case

Colin Charles

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live Santa Clara 2018 update: tutorials have been picked, and the schedule/press release should be announced by next week. We (the committee) have rated over 300 talks, and easily 70% of the schedule should go live next week as well.

There’s a lot happening for FOSDEM this week — so expect a longer report of some sort next week.

A friend, Yanwei Zhou, DBA at Qunar in China, gave an excellent presentation in Chinese on how they use Percona Toolkit. Check it out:

Are you on Twitter? Hope you’re following the @planetmysql account.

Releases

Link List

Upcoming appearances

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

Oct
06
2017
--

This Week in Data with Colin Charles 9: Oracle OpenWorld and Percona Live Europe Post Mortem

Colin Charles

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

This week: a quick roundup of releases, a summary of my thoughts about Percona Live Europe 2017 Dublin, links to look at and upcoming appearances. Oracle OpenWorld happened in San Francisco this past week, and there were lots of MySQL talks there as well (and a good community reception). I have a bit on that as well (from afar).

Look for these updates on Planet MySQL.

Releases

Percona Live Europe Dublin

I arrived on Sunday and chose to rest for my tutorial on Monday. Ronald Bradford and I delivered a tutorial on MySQL Security, and in the morning we chose to rehearse. Percona Live Europe had a full tutorial schedule this year, albeit with one cancellation: MySQL and Docker by Giuseppe Maxia, whom we missed this conference. Check out his blog for further posts about MySQL, Docker, and SQL Roles in MySQL 8!

We had the welcome reception at Sinott’s Bar. There was a large selection of food on each table, as well as two drinks for each of us. It was lively, and I think we overtook most of the basement. Later that evening, there were drinks around the hotel bar, as people started to stream in for Tuesday’s packed schedule!

Tuesday was the conference kickoff, with Peter Zaitsev doing the opening keynote on the state of the open source database ecosystem. A bonus of this keynote was the short 5-minute talks that helped you get a quick look at the important topics and themes around the conference. I heard good things about this from attendees. While most people attended the talks, I spent most of my day in meetings! Then came the Community Dinner (thank you, Oracle, for sponsoring), where we held this year’s Lightning Talks (and plenty more to drink). A summary of the social events is at Percona Live Europe Social.

Wednesday morning we definitely wanted to start a few minutes later, considering people were streaming in slower thanks to the poor weather (yes, it rained all day). The State of the Dolphin keynote ensured we found out lots of new things coming to MySQL 8.0 (exciting!), then came the sponsor keynote by Continuent, given by MC Brown, followed by a database reliability engineering panel with the authors of Database Reliability Engineering, Charity Majors and Laine Campbell. Their book signing went quickly too; they have many fans. We also heard from Pepper Media on their happy journey with Percona. Another great day of talks before the evening reception (which had fewer folks, since people were flying off that evening). Feel free to also read Matthias Crauwels’ Percona Live Europe 2017 Review.

Percona Live Europe 2017 Dublin had over 350 attendees and over 140 speakers, all in a new location! If you have any comments, please feel free to shoot me an email.

Oracle OpenWorld from Afar

At this year’s Oracle OpenWorld there was talk about Oracle’s new self-driving, machine-learning based autonomous database. There was a focus on Amazon SLAs.

It’s unclear if this is also what MySQL will eventually get, but in the MySQL world we have lossless semi-sync replication. Amazon RDS for MySQL is still DRBD-based, and Google Cloud SQL does use semi-sync, but we need to check further whether this is lossless semi-sync or not.

Folk like Panoply.io claim they can do autonomous self-driving databases, and have many platform integrations to boot. Anyone using this?

Nice to see a Percona contribution to remove InnoDB buffer pool mutex get accepted, and apparently it was done the right way. This is sustainable engineering: fix and contribute back upstream!

I was particularly interested in StorageTapper released by Uber to do real-time MySQL change data streaming and transformation. The slide deck is worth a read as well.

Booking.com also gave a talk. My real takeaway from this was about why MySQL is strong: “thousands of instances, a handful of DBAs.” Doug Henschen also talks about a lot of custom automation capabilities, the bonus of which is many are probably already open source. There are some good talks and slide decks to review.

It wouldn’t be complete without Dimitri Kravtchuk doing some performance smackdowns, and I highly recommend you read MySQL Performance: 2.1M QPS on 8.0-rc.

And for a little bit of fun: there was also an award given to Alexander Rubin for fixing MySQL#2: does not make toast. It’s quite common for open source projects to have such bugs, like the famous Ubuntu bug #1. I’ve seen Alexander demo this before, and if you want to read more check out his blog post from over a year ago: Fixing MySQL Bug#2: now MySQL makes toast! (Yes, it says April 1! but really, it was done!) Most recently it was done at Percona Live Santa Clara 2017.

Link List

Upcoming appearances

Percona’s website keeps track of community events, to see where to listen to a Perconian speak. My upcoming appearances are:

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.
