Feb
19
2020
--

Thomas Kurian on his first year as Google Cloud CEO

“Yes.”

That was Google Cloud CEO Thomas Kurian’s simple answer when I asked if he thought he’d achieved what he set out to do in his first year.

A year ago, he took the helm of Google’s cloud operations — which includes G Suite — and set about giving the organization a sharpened focus by expanding on a strategy his predecessor Diane Greene first set during her tenure.

It’s no secret that Kurian, with his background at Oracle, immediately put the entire Google Cloud operation on a course to focus on enterprise customers, with an emphasis on a number of key verticals.

So it’s no surprise, then, that the first highlight Kurian cited is that Google Cloud expanded its feature lineup with important capabilities that were previously missing. “When we look at what we’ve done this last year, first is maturing our products,” he said. “We’ve opened up many markets for our products because we’ve matured the core capabilities in the product. We’ve added things like compliance requirements. We’ve added support for many enterprise things like SAP and VMware and Oracle and a number of enterprise solutions.” Thanks to this, he stressed, analyst firms like Gartner and Forrester now rank Google Cloud “neck-and-neck with the other two players that everybody compares us to.”

If Google Cloud’s previous record made anything clear, though, it’s that technical know-how and great features aren’t enough. One of the first actions Kurian took was to expand the company’s sales team into an organization that looked a bit more like that of a traditional enterprise company. “We were able to specialize our sales teams by industry — added talent into the sales organization and scaled up the sales force very, very significantly — and I think you’re starting to see those results. Not only did we increase the number of people, but our productivity improved as well as the sales organization, so all of that was good.”

He also cited Google’s partner business as a reason for its overall growth. Partner influence revenue increased by about 200% in 2019, and its partners brought in 13 times more new customers in 2019 when compared to the previous year.

Nov
15
2019
--

Don’t Use MongoDB Profiling Level 1

MongoDB Profiling

TL;DR: It is not profile level 1 itself that is the problem; it’s a gotcha with the optional ‘slowms’ argument that causes users to accidentally enable verbose logging and fill their disk with log files.

In MongoDB, there are two ways to see, with individual detail, which operations were executed and how long they took.

  • Profiling. Saves the operation details to the capped collection system.profile. You access this information through a MongoDB connection.
  • Log lines of “COMMAND” component type in the mongod log files (also mongos log files from v4.0 onwards). You have to access the files as a Unix or Windows user and work with them as text.

Profiling is low-cost, but it is limited to keeping only a small snapshot. It has three levels: 0 (off), 1 (slow ops only), 2 (all).

The (log file) logger also has levels of its own, but there is no ‘off’. Even at level 0, it prints any slow operation in a “COMMAND” log line. ‘Slow’ is defined by the configured slowOpThresholdMs option (originally “slowMS”). That is 100ms by default, which is a good default in my opinion.
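
As a quick way to see where you currently stand, here is a minimal sketch (the invocation and example output are illustrative) for checking the current profiling level, slowms, and sampleRate on the mongod you are connected to:

mongo --quiet --eval 'printjson(db.getProfilingStatus())'
# example output: { "was" : 0, "slowms" : 100, "sampleRate" : 1 }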

Usability problem

The log-or-not and profile-or-not decisions are unrelated systems as far as the user is concerned, but they share the same global variable for ‘slow’: serverGlobalParams.slowMS

The basic post-mortem goes like this:

  • Someone used db.setProfilingLevel(1/*slowOp*/, 0/*ms*/) to start profiling all operations (DON’T DO THIS – use db.setProfilingLevel(2/*all*/) instead.)
  • The logger starts writing every command to the log files.
  • They executed db.setProfilingLevel(0/*off*/) to turn the profiler off.
  • The logger continues writing every command to the log files because slowMS is still 0ms.
  • The DBA gets paged after hours because the disk filled up with log files, and thinks ‘oh, I should have set up log rotation on that server; I’ll do it in the morning’.
  • The DBA gets woken up in the small hours of the morning because the new primary node has also crashed due to a full disk.

So that’s the advice: Until MongoDB is enhanced to have separate slow-op threshold options for the profiler, never use db.setProfilingLevel(1, …). Even if you know the gotcha, someone learning over your shoulder won’t see it.

What to do instead:

  • Use only db.setProfilingLevel(0 /*off*/) <-> db.setProfilingLevel(2 /*all*/) and don’t touch slowms.
    • There is still a valid use case for the profiler at level 1, but unless you are prepared to take responsibility for the logger’s interdependence on the slowMS value, don’t go there.
  • If you want the logger to print “COMMAND” lines for every command, including fast ones, use db.setLogLevel(1, "command"). And run db.setLogLevel(-1, "command") to stop it again. ‘Use log level 1, not profile level 1’ could almost be the catchphrase of this article.
    (db.setLogLevel(1) + db.setLogLevel(0) is an alternative to the above, but is an older, blunter method.)
  • If you want to set the sampleRate you can do that without changing level (or ‘slowms’) with the following command: db.setProfilingLevel(db.getProfilingStatus().was, {"sampleRate": <new float value>})
  • If you want a different slowMS threshold permanently, use the slowOpThresholdMs option in the config file; you can also change it dynamically in the same way as the sampleRate instructions above (see the consolidated sketch after this list).
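
Putting the above together, here is a hedged, consolidated sketch of the safe alternatives, run from the operating-system shell via the mongo shell (the values 250 and 0.5 are illustrative):

# log every command in the mongod log, regardless of slowms, then turn it back off
mongo --quiet --eval 'db.setLogLevel(1, "command")'
mongo --quiet --eval 'db.setLogLevel(-1, "command")'

# change slowms dynamically without changing the current profiling level
mongo --quiet --eval 'db.setProfilingLevel(db.getProfilingStatus().was, 250)'

# change sampleRate the same way, leaving the level alone
mongo --quiet --eval 'db.setProfilingLevel(db.getProfilingStatus().was, { "sampleRate": 0.5 })'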

Some other gotchas to do with profiling and logging:

  • The logger level is global; the slowOpThresholdMs (a.k.a. slowMS) value is global; but the profiling level is per database.
  • All of the above apply only to the mongod (or mongos) node the commands are run on (or whose config file is set). Running them on a primary does not change the secondaries, and running them on a mongos does not change the shard or config server replica set nodes.
  • mongos nodes only provide a subset of these diagnostic features. They have no collections of their own, so for starters, they cannot create a system.profile collection.
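
For reference, the abridged server code quoted below shows how both the profile-or-not and the log-or-not decisions read the same serverGlobalParams.slowMS value:
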
bool shouldDBProfile(bool shouldSample = true) {
    // Profile level 2 should override any sample rate or slowms settings.
    if (_dbprofile >= 2)
        return true;

    if (!shouldSample || _dbprofile <= 0)
        return false;

    /* Blog: and by elimination if _dbprofile == 1: */
    return elapsedTimeExcludingPauses() >= Milliseconds{serverGlobalParams.slowMS};
}

bool CurOp::completeAndLogOperation(OperationContext* opCtx,
                                    logger::LogComponent component,
                                    boost::optional<size_t> responseLength,
                                    boost::optional<long long> slowMsOverride,
                                    bool forceLog) {
    // Log the operation if it is eligible according to the current slowMS and sampleRate settings.
    const bool shouldLogOp = (forceLog || shouldLog(component, logger::LogSeverity::Debug(1)));
    const long long slowMs = slowMsOverride.value_or(serverGlobalParams.slowMS);
    ...
    ...
    const bool shouldSample =
        client->getPrng().nextCanonicalDouble() < serverGlobalParams.sampleRate;

    if (shouldLogOp || (shouldSample && _debug.executionTimeMicros > slowMs * 1000LL)) {
        auto lockerInfo = opCtx->lockState()->getLockerInfo(_lockStatsBase);

    ...
    // Return 'true' if this operation should also be added to the profiler.
    return shouldDBProfile(shouldSample);
}

 

Oct
28
2019
--

Setting up MongoDB with Member x509 auth and SSL + easy-rsa

MongoDB Member with x509 auth

Hi everyone! This is one of the most requested subjects to our support team, and I’d like to share the steps as a tutorial blog post. Today, we will set up internal authentication using x.509 certificates as well as enabling TLS/SSL.

When using authentication in MongoDB, there are two ways to configure intra-cluster authentication:

  • Using a Key File
  • Using x509 certs

Key files are very straightforward: just create a random text file and share it with all the members of the replica set or sharded cluster. However, this is not the most secure option, and for this reason it is very common to use certificates instead.
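
For comparison, a minimal sketch of the key file approach might look like the following (the path is illustrative; every member needs the same file and security.keyFile set in its configuration):

openssl rand -base64 756 > /etc/mongodb-keyfile
chmod 400 /etc/mongodb-keyfile
chown mongod:mongod /etc/mongodb-keyfile
# then point each member at it, e.g. security.keyFile: /etc/mongodb-keyfile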

It is perfectly possible to use self-signed certificates, but in this blog we will use easy-rsa to create real certificates signed by a certificate authority. According to its documentation, easy-rsa is a CLI utility to build and manage a PKI CA. In layman’s terms, this means creating a root certificate authority, and requesting and signing certificates, including sub-CAs and certificate revocation lists (CRL). The project is hosted on GitHub at https://github.com/OpenVPN/easy-rsa and we are going to use the release/2.x branch for this tutorial.

We will use Percona Server for MongoDB v3.6 – currently one of the most used versions – but this works for any MongoDB version starting at 3.2. The steps for creating a user are omitted in this blog; we assume the primary is already configured with authentication and the first user has already been created.

Steps:

  1. Download and configure easy-rsa:
    yum install git -y
    mkdir /usr/share/easy-rsa
    git clone -b release/2.x https://github.com/OpenVPN/easy-rsa.git
    cp easy-rsa/easy-rsa/2.0/* /usr/share/easy-rsa
    cd  /usr/share/easy-rsa
  2. Edit the source files with information about your company:
    cd /usr/share/easy-rsa
    nano vars
    
    # These are the default values for fields
    # which will be placed in the certificate.
    # Don't leave any of these fields blank.
    
    export KEY_COUNTRY="US" <your data>
    export KEY_PROVINCE="NC" <your data>
    export KEY_CITY="DURHAM" <your data>
    export KEY_ORG="Percona" <your data>
    export KEY_EMAIL="me@percona.com" <your data>
    export KEY_OU="MongoDB" <your data>
    
    #You may need to add the following variable:
    #Bug: https://bugs.launchpad.net/serverguide/+bug/1504676
    export KEY_ALTNAMES=""
  3. Load the variables with the source command:
    source ./vars
  4. Edit the openssl-1.0.0.cnf file, commenting out the following keys right after [ usr_cert ]:
    #extendedKeyUsage=clientAuth
    #keyUsage = digitalSignature

    More info here on Extended Key Usage

  5. Now everything is prepared to create our CA file. Let’s create the CA and the members’ certificates:
    cd /usr/share/easy-rsa
    # this command will clean all the data in the ./keys folder
    ./clean-all 
    # It will generate a key for the CA as well as a certificate
    ./build-ca
    # It will generate a key for the server as well as a certificate
    ./build-key <server_name>
  6. We suggest keeping the default values for the CA and entering the FQDN or the hostname in the certificates’ Common Name. (It will be validated by MongoDB.) This is the expected output:
    Generating a 2048 bit RSA private key
    ......................................................................................+++
    ............................+++
    writing new private key to 'server_name.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [US]:
    State or Province Name (full name) [NC]:
    Locality Name (eg, city) [Durham]:
    Organization Name (eg, company) [Percona]:
    Organizational Unit Name (eg, section) [MongoDB]:
    Common Name (eg, your name or your server's hostname) [server_name]:
    Name [EasyRSA]:
    Email Address [percona@percona.com]:
    
    Please enter the following 'extra' attributes
    to be sent with your certificate request
    A challenge password []:
    An optional company name []:
    Using configuration from /usr/share/easy-rsa/openssl-1.0.0.cnf
    Check that the request matches the signature
    Signature ok
    ...
    Certificate is to be certified until Mar  9 11:29:40 2028 GMT (3650 days)
    Sign the certificate? [y/n]:y
    
    1 out of 1 certificate requests certified, commit? [y/n]y
    Write out database with 1 new entries
    Data Base Updated
  7. After creating all the certificates, we need to combine each key and its certificate in order to create the .pem file.
    cd keys
    -rw-r--r-- 1 root root 1,7K Mar  9 09:35 ca.crt
    -rw------- 1 root root 1,7K Mar  9 09:35 ca.key
    -rw-r--r-- 1 root root 4,1K Mar 12 08:29 server_name.crt
    -rw-r--r-- 1 root root 1,1K Mar 12 08:29 server_name.csr
    -rw------- 1 root root 1,7K Mar 12 08:29 server_name.key
    
    # combining .key and .crt into a single file.
    
    cat server_name.key server_name.crt > server_name.pem

    Repeat this process for all the server keys.

  8. Now that we have the server .pem files prepared, we need to edit mongod.conf. The example below assumes the keys were moved to /var/lib/mongodb/:
    security:
      clusterAuthMode: x509
      authorization: enabled
    net:
      port: 27017
      bindIp: <ip_number>
      ssl:
        mode: requireSSL
        PEMKeyFile: /var/lib/mongodb/server_name.pem
        CAFile: /var/lib/mongodb/ca.crt
  9. Once the changes are made, restart the services; the members should start normally.
  10. It is now time to configure the clients; otherwise, no one will be able to log in to this environment. Again we need to edit openssl-1.0.0.cnf, this time removing the comments, because client certificates need to include those key-usage extensions.
    cd /usr/share/easy-rsa 
    extendedKeyUsage=clientAuth
    keyUsage = digitalSignature
  11. After editing the file, create the client certificate; it is as simple as creating a new key:
    cd /usr/share/easy-rsa 
    ./build-key <client_name>

    There is a caveat here: the Organizational Unit must be different from MongoDB. I recommend calling it MongoDBClient. Once the files are created, repeat the process of combining the client_name.key and client_name.crt files into a single .pem file, and use it to log in to the environment.

    ./build-key client_name
    …. 
    Country Name (2 letter code) [US]:
    State or Province Name (full name) [NC]:
    Locality Name (eg, city) [DURHAM]:
    Organization Name (eg, company) [Percona]:
    Organizational Unit Name (eg, section) [MongoDB]:MongoDBClient
    Common Name (eg, your name or your server's hostname) [client_name]:
    
    cd keys
    cat client_name.key client_name.crt > client_name.pem
  12. Connecting to the database is simple; we need to specify the CA file along with the client certificate when connecting.
    Please be aware that you’ll need to connect to the server’s address rather than localhost, and you may need to edit /etc/hosts in order for the databases and clients to resolve the hostnames.

    mongo --ssl --host server_name --sslCAFile /usr/share/easy-rsa/keys/ca.crt  \
       --sslPEMKeyFile /usr/share/easy-rsa/keys/client_name.pem --port 27017 \
       -u <user> -p --authenticationDatabase admin

With these steps you should be able to enable SSL and member x.509 authentication in your environment. Please feel free to give us feedback here or tweet @AdamoTonete or @Percona on Twitter!

Oct
10
2019
--

Percona Server for MongoDB 3.6.14-3.4 Now Available

Percona Server for MongoDB 3.6.14-3.4

Percona announces the release of Percona Server for MongoDB 3.6.14-3.4 on October 10, 2019. Download the latest version from the Percona website or the Percona software repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.6 Community Edition. It supports MongoDB 3.6 protocols and drivers.

Percona Server for MongoDB extends Community Edition functionality by including the Percona Memory Engine storage engine, as well as several enterprise-grade features. It also includes the MongoRocks storage engine, which is now deprecated. Percona Server for MongoDB requires no changes to MongoDB applications or code.

Percona Server for MongoDB 3.6.14-3.4 is based on MongoDB 3.6.14. In this release, the license of RPM and DEB packages has been changed from AGPLv3 to SSPL.

Bugs Fixed

  • PSMDB-447: The license for RPM and DEB packages has been changed from AGPLv3 to SSPL.

The Percona Server for MongoDB 3.6.14-3.4 release notes are available in the official documentation.

Oct
09
2019
--

Percona Monitoring and Management (PMM) 2.0.1 Is Now Available

Percona Monitoring and Management 2.0.1

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring your database performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL®, MariaDB®, MongoDB®, and PostgreSQL® servers to ensure that your data works as efficiently as possible.

In this release, we are introducing the following PMM enhancements:

  • Securely share Dashboards with Percona – let Percona engineers see what you see!
  • Improved navigation – PMM now remembers which host and service you were viewing and applies these as filters when navigating to Query Analytics

Securely share Dashboards with Percona

A dashboard snapshot is a way to securely share what you’re seeing with Percona. When created, we strip sensitive data like queries (metrics, template variables, and annotations) along with panel links. The shared dashboard will only be available for Percona engineers to view, as it will be protected by Percona’s two-factor authentication system. The content on the dashboard will assist Percona engineers in troubleshooting your case.

Improved navigation

Now when you transition from looking at metrics into Query Analytics, PMM will remember the host and service, and automatically apply these as filters:

Improvements

  • PMM-4779: Securely share dashboards with Percona
  • PMM-4735: Keep one old slowlog file after rotation
  • PMM-4724: Alt+click on check updates button enables force-update
  • PMM-4444: Return “what’s new” URL with the information extracted from the pmm-update package changelog

Fixed bugs

  • PMM-4758: Remove Inventory rows from dashboards
  • PMM-4757: qan_mysql_perfschema_agent failed querying events_statements_summary_by_digest due to data types conversion
  • PMM-4755: Fixed a typo in the InnoDB AHI Miss Ratio formula
  • PMM-4749: Navigation from Dashboards to QAN when some Node or Service was selected now applies filtering by them in QAN
  • PMM-4742: General information links were updated to go to PMM 2 related pages
  • PMM-4739: Remove request instances list
  • PMM-4734: A fix was made for the collecting node_name formula at MySQL Replication Summary dashboard
  • PMM-4729: Fixes were made for formulas on MySQL Instances Overview
  • PMM-4726: Links to services in MongoDB singlestats didn’t show Node name
  • PMM-4720: machine_id could contain a trailing \n
  • PMM-4640: It was not possible to add MongoDB remotely if password contained a # symbol

Help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Sep
30
2019
--

Avoid Vendor Lock-in with Percona Backup for MongoDB

Percona Backup for MongoDB

Percona Backup for MongoDB v1 is the first GA version of our new MongoDB backup tool. This has been custom-built to assist those users who don’t want to pay for MongoDB Enterprise and Ops Manager but do want a fully-supported community backup tool that can perform cluster-wide consistent backups in MongoDB.

Please visit our webpage to download the latest version of this software.

In a nutshell, what can it do?

Currently, Percona Backup for MongoDB is designed to give you an easy command-line interface which allows you to perform a consistent backup/restore of clusters and non-sharded replica sets. It uses S3 (or S3-compatible) object storage for the remote store. Percona Backup for MongoDB can improve your cluster backup consistency compared to the filesystem snapshot method, and can save you time and effort if you are implementing MongoDB backups for the first time.

Why did we create Percona Backup for MongoDB?

Many people in the community and within our customer base told us that they wanted a tool that could easily back up and restore clusters. This feature was something they felt “locked” them into MongoDB’s enterprise subscription.

Percona is anti-vendor-lock-in and strives to create and provide tools, software, and services to help our customers achieve the freedom they need to use, manage, and move their data easily. We will continue to develop and add new features to Percona Backup for MongoDB, enabling users to have more freedom of choice when selecting a backup software provider.

Couldn’t I do all this myself?

It is possible to build your own scripts and tools which enable you to perform consistent backups across a cluster; in fact, many in the community have done this. But, not all users and enterprises have the technical skill or knowledge required to build something that they can feel confident will consistently back up their databases. Many users are looking for a fully-supported, community-driven tool to fill in the gaps. This is especially important given the steady evolution of MongoDB’s replication and sharding protocols. Keeping up with new features and code can be challenging for DBAs, who usually also have a range of additional responsibilities to meet.

What other features are coming in the future?

Percona Backup for MongoDB v1 met the original goal laid out by the community: to create a tool that can take a consistent backup of a cluster as easily as mongodump does for a non-sharded single replica set. On top of that, restoring is as simple as “pbm list” + “pbm restore <backup-you-want>”.
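
As a rough illustration of that workflow, the sketch below assumes pbm has already been pointed at the cluster (the connection URI and the PBM_MONGODB_URI environment variable are assumptions here) and that remote storage has been configured as described in the documentation:

export PBM_MONGODB_URI="mongodb://pbmuser:secretpwd@localhost:27017/"
pbm backup                      # take a cluster-wide consistent backup
pbm list                        # list the backups in the remote store
pbm restore <backup-you-want>   # restore the backup chosen from the list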

However, there is still a lot more we plan to do in order to extend and enhance this tool. Our short-term feature roadmap includes:

  • Point in time restores
  • Better User Interface: Additional Status and Logging
  • Distributed transaction handling

Help us make it better!

We would love your help in making this tool even better! Yes, we accept code contributions. We know many of you have already solved tricky issues around backup in MongoDB and would appreciate any contributions you have which would improve Percona Backup for MongoDB. We would also love to hear what features you think we need to include in future versions.

For more details on our commitment to MongoDB and our latest software, please visit our website.

Sep
23
2019
--

Percona Backup for MongoDB 1.0.0 GA Is Now Available

Percona Backup for MongoDB

Percona is happy to announce the GA release of our latest software product, Percona Backup for MongoDB 1.0.0, on September 23, 2019.

Percona Backup for MongoDB is a distributed, low-impact solution for consistent backups of MongoDB sharded clusters and replica sets. This is a tool for creating consistent backups across a MongoDB sharded cluster (or a single replica set), and for restoring those backups to a specific point in time. The project was inspired by (and intends to replace) the Percona-Lab/mongodb_consistent_backup tool.

Percona Backup for MongoDB supports Percona Server for MongoDB or MongoDB Community Server version 3.6 or higher with MongoDB replication enabled. Binaries for the supported platforms, as well as the tarball with source code, are available from the Percona Backup for MongoDB download page. For more information and installation steps, see the documentation.

Percona Backup for MongoDB 1.0.0 features the following:

  • The architecture and the authentication of Percona Backup for MongoDB have been simplified.
  • Backup data is stored on Amazon Simple Storage Service or compatible storage, such as MinIO.
  • The output of pbm list shows all backups created from the connected MongoDB sharded cluster or replica set.

Percona Backup for MongoDB, a free open source backup solution, will enable you to manage your backups without third-party involvement or costly licenses. Percona Backup for MongoDB is distributed under the terms of Apache License v2.0.

Please report any bugs you encounter in our bug tracking system.

Sep
20
2019
--

Percona Kubernetes Operator for MongoDB 1.2.0 Is Now Available

Kubernetes for MongoDB

We are glad to announce the 1.2.0 release of the Percona Kubernetes Operator for Percona Server for MongoDB.

The Operator simplifies the deployment and management of the Percona Server for MongoDB in Kubernetes-based environments. It extends the Kubernetes API with a new custom resource for deploying, configuring and managing the application through the whole life cycle.

The Operator source code is available in our GitHub repository. All of Percona’s software is open-source and free.

New features and improvements

  • A Service Broker was implemented for the Operator, allowing a user to deploy Percona Server for MongoDB on the OpenShift Platform, configuring it with a standard GUI, following the Open Service Broker API.
  • The Operator now supports Percona Monitoring and Management 2, which means it is able to detect and register with PMM Server versions 1.x and 2.0.
  • Data-at-rest encryption is now enabled by default unless EnableEncryption=false is explicitly specified in the deploy/cr.yaml configuration file.
  • Now it is possible to set the schedulerName option in the operator parameters. This allows using storage which depends on a custom scheduler, or a cloud provider which optimizes scheduling to run workloads in a cost-effective way.
  • The resource constraint values were refined for all containers to eliminate the possibility of an out of memory error.

Fixed bugs

  • Oscillations of the cluster status between “initializing” and “ready” took place after an update.
  • The Operator was removing other cron jobs when backups were enabled without any defined tasks (contributed by Marcel Heers).

Percona Server for MongoDB is an enhanced, open-source and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB Community Edition. It supports MongoDB® protocols and drivers. Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine, as well as several enterprise-grade features. It requires no changes to MongoDB applications or code.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.

Sep
19
2019
--

A Guide to Installing Percona Monitoring and Management (PMM) 2 For the First Time

Percona Monitoring and Management 2

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL®, MongoDB®, and PostgreSQL® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL, MongoDB, and PostgreSQL servers to ensure that your data works as efficiently as possible.

Please see our previous blog for full information on the new features included in PMM2, our latest software release.

To use PMM2 you will need to remove any earlier versions of PMM Server and Client as there is no in-place upgrade path for historical data, and then download and run our latest software.
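
As a hedged sketch only (the container and package names assume a default PMM 1.x installation; back up any data you want to keep first), removing an earlier version might look like this:

docker stop pmm-server && docker rm pmm-server   # old PMM Server container
docker rm pmm-data                               # old data container
sudo yum remove pmm-client                       # or: sudo apt-get remove pmm-client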

Install PMM Server with docker

The easiest way to install PMM Server is to deploy it with Docker. Running the PMM2 Docker container with PMM Server can be done by the following commands (note the version tag of 2):

docker create -v /srv --name pmm-data percona/pmm-server:2 /bin/true
docker run -d -p 80:80 -p 443:443 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:2

Now you can check the newly installed PMM Server in your browser, going to your server by its IP address and using the default credentials:

username: admin

password: admin

You will be prompted to change the admin password after logging in.

Install PMM Client

You can get the PMM2 Client package from the official download page on the Percona website, or use the official percona-release package from Percona to enable the tools repository, as follows (please see the percona-release official documentation for further details on this new tool).

Ubuntu/Debian instructions

Install the Percona repository:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb

Install the pmm2-client package:

apt-get update
apt-get install pmm2-client

Red Hat/CentOS Instructions

Install the Percona repository:

sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

Install the pmm2-client package:

sudo yum update
sudo yum install pmm2-client

Users who have previously installed pmm2-client pre-release versions should fully remove the package and then install the latest available version.

Configure PMM

Once PMM Client is installed, run the pmm-admin config command with your PMM Server IP address in order to register with the PMM Server:

Note that you need to pass the authentication string as part of the --server-url. These credentials are the same ones you used to log in to PMM Server.

sudo pmm-admin config --server-insecure-tls --server-url=https://admin:admin@<IP Address>:443

You should see the following:

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.

Adding MySQL Metrics and Query Analytics

A MySQL server can be added for monitoring in the usual way. Here is a command which adds it using the PERFORMANCE_SCHEMA source:

pmm-admin add mysql --query-source=perfschema --username=pmm --password=pmm

where username and password are the credentials of the MySQL user used for monitoring access, which will be used locally on the database host.
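
A hedged sketch of creating such a monitoring user follows; the exact privilege list depends on your MySQL version, and the PMM documentation should be treated as authoritative:

mysql -u root -p -e "CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pmm' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'localhost';"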

The syntax to add MySQL services (Metrics and Query Analytics) using the Slow Log source is the following:

pmm-admin add mysql --query-source=slowlog --username=pmm --password=pmm

When the server is added, you can check your MySQL dashboards and Query Analytics in order to view its performance information!

Adding MongoDB Metrics and Query Analytics

Please refer to PMM 2 documentation to find out how to set up the required permissions and enable profiling with MongoDB.

You can add MongoDB services (Metrics and Query Analytics) with a similar command:

pmm-admin add mongodb --username=pmm --password=pmm

Adding PostgreSQL Metrics and Query Analytics

Please refer to the PMM2 documentation to find out how to add the PostgreSQL extension for query monitoring, as well as how to set up the required user permissions and authentication.
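
For query monitoring, PMM typically relies on the pg_stat_statements extension; a hedged sketch of enabling it follows (the configuration path and restart method are illustrative and vary by distribution):

# preload the extension library (requires a PostgreSQL restart)
echo "shared_preload_libraries = 'pg_stat_statements'" | sudo tee -a /var/lib/pgsql/data/postgresql.conf
sudo systemctl restart postgresql
# create the extension in the database to be monitored
sudo -u postgres psql -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"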

Add PostgreSQL monitoring to your PMM Server with the following command:

pmm-admin add postgresql --username=pmm --password=pmm

where the username and password parameters should contain actual PostgreSQL user credentials.

Add ProxySQL monitoring service

You can add ProxySQL service as follows:

pmm-admin add proxysql --username=admin --password=admin

Please help us improve our software quality by reporting any Percona Monitoring and Management bugs you encounter using our bug tracking system.

Sep
19
2019
--

Manage Your Complex Database Environments with Percona Monitoring and Management (PMM) 2 GA

Percona Monitoring and Management 2

We are pleased to announce the General Availability of PMM2, our latest software release!

Created specifically to support open source software users, Percona Monitoring and Management (PMM) is a leading, free, open-source platform that allows you to actively manage and monitor the performance of your MySQL, MongoDB, and PostgreSQL database environments.

A recent Percona survey revealed that 41% of enterprises are running databases on multiple cloud providers and 61% of enterprises are running a combination of hybrid on-premises and cloud setups. Additionally, 92% of companies surveyed have more than one database in their environment, and 89% run more than one open-source database. Complex, multi-database, hybrid, and multi-cloud environments are a reality for many companies.

PMM provides a single pane of glass to manage your complex database environments in public, private, or on-premise environments. Our award-winning database monitoring tool has been built by database performance and scalability experts using best-of-breed tools. It is specifically designed to help DBAs and developers gain deep insight into their applications and databases and is used by thousands of organizations around the globe.

PMM allows users to do the following:

  • View historical data, trends, and analysis of your most critical applications and databases.
    • Examine by default one month of metrics and queries at high resolution – don’t miss rapidly-oscillating conditions that can negatively impact performance.
  • Find database-specific queries that limit scalability and negatively impact your end-users quickly and efficiently.
    • Query Analytics lets you leverage filters and labels to drill down on groups of servers.
  • Customize to reflect your specific workloads and environment – PMM can evolve with your applications and workloads.
    • PMM collects data at different intervals based on the likelihood of the data changing, so that PMM puts as little load as possible on the systems under observation. For example, the MySQL server hostname is collected every minute, whereas MySQL Threads Running is collected more aggressively.
  • Integrate with existing monitoring applications and frameworks, extending those platforms to include the top database monitoring tool on the market.
    • Our new API makes it easier than ever to integrate with your existing configuration management system, easing the lifecycle management of systems and services in your environment.
  • Extend to add custom monitoring and dashboards.
    • PMM allows you to sample your database contents and build custom metric series so that you can plot activity that is meaningful to your application alongside your database metrics. This is available using the Custom Queries functionality for MySQL and PostgreSQL.
    • PMM also provides for custom data collection outside of the database, i.e. if you can script it, you can build a metric series. This is enabled by the textfile collector (see the sketch after this list).
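
As a rough sketch of the textfile collector idea (the directory shown is an assumption about the default pmm-agent layout; adjust it to your installation), a scripted metric can be exposed like this:

# write a Prometheus-format metric that the textfile collector will pick up
echo 'myapp_jobs_pending 42' | sudo tee /usr/local/percona/pmm2/collectors/textfile-collector/high-resolution/myapp.prom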

Our new PMM2 release introduces a number of enhancements and additional feature improvements, including:

  • Detailed query analytics and filtering technologies which enable you to identify issues faster than ever before.
  • A better user experience: Service-level dashboards give you immediate access to the data you need.
  • The new addition of PostgreSQL query tuning.
  • Enhanced security protocols to ensure your data is safe.
  • Our new API allows you to extend and interact with third-party tools.

PMM2 Feature Focus

Query Analytics featuring PostgreSQL

In response to demand, we’ve invested a lot of effort into bringing you an enhanced Query Analytics experience. With Query Analytics you’re able to examine the queries consuming the most time across your entire fleet of servers, and then, using filters and labels, drill down to find and isolate problematic queries requiring additional review. We’ve also added several long-awaited features such as:

  • Add multiple columns – see more than just load and query response time. Want to examine MySQL InnoDB Rows Read, or PostgreSQL rows returned? You can add these columns, and more, to the table layout and see the aggregated values per query.
  • Sorting – you can now sort by any column. Want to know which query is reading the most data via MySQL InnoDB Read Operations? Add the column, then sort by it.
  • Labeling – benefit from a variety of standard and custom labels in order to better categorize your servers. For example use Cluster or Replication Set to unify servers based on your topology. Or take advantage of the ability to define custom labels, for example, Availability Zone or Rack to define a physical location for your server instances.
  • PostgreSQL – In PMM2 you can visualize queries from PostgreSQL alongside your MySQL and MongoDB queries, and isolate them by selecting relevant filter options.

There is a filtering panel on the left, or you can use the search bar to set filters using key:value syntax. For example, drill down to review just the queries related to the pg2 service:

…or the detailed information about the selected query:

Rearranged dashboards and menu structure

PMM2 has a new taxonomy for dashboards, which are now split into the following groups:

  • Overview Dashboards provide a general view of all objects of the selected type (MySQL, OS, MongoDB, etc.).
  • Summary Dashboards are used for a more detailed view of the object.
  • Compare Dashboards show objects of the same type placed side-by-side allowing you to spot differences in their parameters, if any.
  • Details Dashboards supply you with more object-specific information.

Along with this rearrangement, we have a new simplified main menu. Below is the new look for its top-level:

Selecting a specific service (MySQL, MongoDB, PostgreSQL, etc.) will guide you to the next level of the hierarchy, where you can access a Service Compare Dashboard or one of the Details Dashboards related to this Service:

More on Service-level Dashboards

One of the most important navigational changes we’re making with PMM is how we approach visualizing your servers. We are calling these the Service-level dashboards, and they are designed to give you a high-level overview of your fleets of servers by type of service. These dashboards provide summaries of important characteristics such as the number of virtual CPUs, total disk space, number of instances, and more.

In the example below we show how you can navigate from the home dashboard to the MySQL Service dashboard, and then drill down to compare multiple nodes against each other:

PMM Inventory

PMM2 introduces a new data model in which all objects visible to PMM are either nodes, services, or agents. These three types of objects form a hierarchy, with a Node at the top and Services and Agents assigned to a Node. Using the new PMM Inventory Dashboard, you’ll be able to review the objects associated with the PMM Server. This dashboard contains three tabs for nodes, agents, and services:

API

We are pleased to provide an API which enables PMM to be used more easily with popular configuration management tools such as Ansible, Chef, and Puppet. This API lets you programmatically interact with PMM Server so that you can list, add, and remove services.

You can browse, explore, and even test the API using Swagger:

Using the API to execute commands is simple. The following example shows how to get a list of Nodes from PMM Server:

$ curl -k -u admin:admin -X POST "https://192.168.1.3/v1/inventory/Nodes/List"

{
  "generic": [
    {
      "node_id": "/node_id/c9672fa5-d9b4-4db0-9b40-ea8c2e3c800a",
      "node_name": "osboxes",
      "address": "10.0.2.15",
      "machine_id": "/machine_id/8b5f6f78e736e3e29ba6f1505c797297",
      "distro": "linux"
    },
    {
      "node_id": "pmm-server",
      "node_name": "pmm-server",
      "address": "127.0.0.1"
    }
  ]
}

Read our guide on installing PMM2 to learn more.

Contacting Percona

We welcome any thoughts, comments, and suggestions you have on this latest software version.

Percona offers a range of open source software and services to help your business thrive and grow. Organizations interested in ensuring their databases perform at the highest level can also benefit from Percona Managed Services, Support, and Consulting.

To learn how Percona can help you make the most of your data, please contact us at 1-888-316-9775 or 0-800-051-8984 in Europe, or email sales@percona.com
