Mar
30
2022
--

MySQL Shell For VS Code – Your New GUI?

MySQL Shell For VS Code

MySQL Shell For VS Code integrates the MySQL Shell directly into VS Code development workflows and was released last week. This extension to the popular VS Code platform enables interactive editing and execution of SQL for MySQL databases and, optionally, the MySQL Database Service, across several simultaneous sessions. It is a preview release and not ready for production, but it has several features that may make it the MySQL GUI of choice.

Installation

The installation itself is easy, but you will need to download the code from here and not the usual places for MySQL products. You will, of course, have to have VS Code installed first, and be warned that some of the more tantalizing links, such as those for documentation, are not yet connected.

install screen

MySQL Shell for VS Code installation screen and yes, you will need VS Code installed first.

Usage

The interface is similar to that of MySQL Workbench but seems more intuitive. You will need to set up a connection to your server with the usual host, user, and password information. From there you can create a session to that server.

The big change is remembering to use Control-ENTER to send commands to the MySQL instance. The GUI allows easy SQL query editing. But be warned, there are problems with the preview. A familiar query against the world database did not work in the GUI but ran perfectly in the stand-alone version of MySQL Shell. Other queries worked well.
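
Execution otherwise works much like the standalone shell: type a statement into the editor and press Control-ENTER to run it. For example, a trivial statement against the sample world database (an illustrative query, not the exact one from the screenshots below):

-- Run with Control-ENTER in the MySQL Shell for VS Code editor
SELECT Name, Population FROM world.city ORDER BY Population DESC LIMIT 5;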

The first query is wrong

The first query was not correct. The second query went a little better.

For some reason, the first query did not get sent correctly to the server.

correct answer

The correct answer to the first query.

Quick Take

This is an impressive product even though it is a preview. It is far better than the old MySQL CLI, and the GUI editing makes the very good MySQL Shell features even more useful.

Mar
29
2022
--

Migrating to utf8mb4: Things to Consider

Migrating to utf8mb4

The utf8mb4 character set is the new default as of MySQL 8.0, and this change neither affects existing data nor forces any upgrades.

Migration to utf8mb4 has many advantages including:

  • It can store more symbols, including emojis
  • It has new collations for Asian languages
  • It is faster than utf8mb3

Still, you may wonder how migration affects your existing data. This blog covers multiple aspects of it.

Storage Requirements

As the name suggests, the maximum number of bytes that one character can take with the utf8mb4 character set is four. This is larger than the requirement for utf8mb3, which takes up to three bytes per character, and larger than many other MySQL character sets.

Fortunately, utf8mb3 is a subset of utf8mb4, and migration of existing data does not increase the size of the data stored on disk: each character takes as many bytes as needed. For example, any digit or letter in the Latin alphabet will require one byte. Characters from other alphabets can take up to four bytes. This can be verified with a simple test.

mysql> set names utf8mb4;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE TABLE charset_len( name VARCHAR(255), val CHAR(1) ) CHARACTER SET=utf8mb4;
Query OK, 0 rows affected (0.03 sec)

mysql> INSERT INTO charset_len VALUES('Latin A', 'A'),  ('Cyrillic А', 'А'), ('Korean ㉿', '㉿'), ('Dolphin 🐬', '🐬');
Query OK, 4 rows affected (0.02 sec)
Records: 4  Duplicates: 0  Warnings: 0

mysql> SELECT name, val, HEX(val), BIT_LENGTH(val)/8 FROM charset_len;
+--------------+------+----------+-------------------+
| name         | val  | HEX(val) | BIT_LENGTH(val)/8 |
+--------------+------+----------+-------------------+
| Latin A      | A    | 41       |            1.0000 |
| Cyrillic А   | А    | D090     |            2.0000 |
| Korean ㉿     | ㉿    | E389BF   |            3.0000 |
| Dolphin 🐬   | 🐬   | F09F90AC |            4.0000 |
+--------------+------+----------+-------------------+
4 rows in set (0.00 sec)

As a result, all of your existing data that uses at most three bytes per character will not change, and you will additionally be able to store characters that require four-byte encoding.

Maximum Length of the Column

While the data storage does not change, when MySQL calculates the maximum amount of data that the column can store, it may fail for some column size definitions that work fine for utf8mb3. For example, you can have a table with this definition:

mysql> CREATE TABLE len_test(
    -> foo VARCHAR(16384)
    -> ) ENGINE=InnoDB CHARACTER SET utf8mb3;
Query OK, 0 rows affected, 1 warning (0.06 sec)

If you decide to convert this table to use the utf8mb4 character set, the operation will fail:

mysql> ALTER TABLE len_test CONVERT TO CHARACTER SET utf8mb4;
ERROR 1074 (42000): Column length too big for column 'foo' (max = 16383); use BLOB or TEXT instead

The reason is that the maximum number of bytes MySQL can store in a VARCHAR column is 65,535, which works out to 21,845 characters for the utf8mb3 character set but only 16,383 characters for the utf8mb4 character set.

Therefore, if you have columns that could contain more than 16,383 characters, you will need to convert them to the TEXT or LONGTEXT data type.
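
For the len_test table above, a minimal way to do that is to change the column type first and then convert the table (note that MySQL may widen TEXT to MEDIUMTEXT during the conversion to avoid truncation):

-- Change the column to TEXT first, then convert the table
ALTER TABLE len_test MODIFY foo TEXT;
ALTER TABLE len_test CONVERT TO CHARACTER SET utf8mb4;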

You can find all such columns if you run the query:

SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME,
       CHARACTER_MAXIMUM_LENGTH, DATA_TYPE
FROM information_schema.columns
WHERE CHARACTER_MAXIMUM_LENGTH > 16383 AND
      DATA_TYPE NOT LIKE '%text%' AND 
      DATA_TYPE NOT LIKE '%blob%' AND
      TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema');

For example, in my test environment, it returns:

*************************** 1. row ***************************
            TABLE_SCHEMA: test
              TABLE_NAME: setup
             COLUMN_NAME: value
CHARACTER_MAXIMUM_LENGTH: 20000
               DATA_TYPE: varchar
1 row in set (0.02 sec)

Index Storage Requirement

MySQL does not know in advance which characters you will store in the column when you are creating indexes. Therefore, when it calculates the storage required for the index, it takes the maximum value for the chosen character set. As a result, you may hit the index storage limit when converting from another character set to utf8mb4. For InnoDB, the maximum size of the index is 767 bytes for the REDUNDANT and COMPACT row formats, and 3072 bytes for the DYNAMIC and COMPRESSED row formats. See the MySQL Reference Manual for details.

That means you need to check if you have indexes that could grow to exceed these values before performing the update. You can do this with the following query:

WITH indexes AS (
     WITH tables AS  (
          SELECT SUBSTRING_INDEX(t.NAME, '/', 1) AS `database`, SUBSTRING_INDEX(t.NAME, '/', -1) AS `table`, i.NAME AS `index`, ROW_FORMAT
          FROM information_schema.INNODB_INDEXES i JOIN information_schema.INNODB_TABLES t USING(TABLE_ID)
    )
    SELECT `database`, `table`, `index`, ROW_FORMAT, GROUP_CONCAT(kcu.COLUMN_NAME) AS columns,
           SUM(c.CHARACTER_MAXIMUM_LENGTH) * 4 AS index_len_bytes
    FROM tables JOIN information_schema.KEY_COLUMN_USAGE kcu
         ON (`database` = TABLE_SCHEMA AND `table` = kcu.TABLE_NAME AND `index` = kcu.CONSTRAINT_NAME)
         JOIN information_schema.COLUMNS c ON (kcu.COLUMN_NAME = c.COLUMN_NAME AND `database` = c.TABLE_SCHEMA AND `table` = c.TABLE_NAME)
    WHERE c.CHARACTER_MAXIMUM_LENGTH IS NOT NULL
    GROUP BY `database`, `table`, `index`, ROW_FORMAT ORDER BY index_len_bytes
) SELECT * FROM indexes WHERE index_len_bytes >= 768;

Here is the result of running the query in my test environment:

+----------+--------------+---------+------------+------------+-----------------+
| database | table        | index   | ROW_FORMAT | columns    | index_len_bytes |
+----------+--------------+---------+------------+------------+-----------------+
| cookbook | hitcount     | PRIMARY | Dynamic    | path       |            1020 |
| cookbook | phrase       | PRIMARY | Dynamic    | phrase_val |            1020 |
| cookbook | ruby_session | PRIMARY | Dynamic    | session_id |            1020 |
+----------+--------------+---------+------------+------------+-----------------+
3 rows in set (0,04 sec)

Once you have identified such indexes, check the columns and adjust the table definition accordingly.
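
As a hypothetical example, if cookbook.hitcount had to fit its key within the 767-byte limit of the REDUNDANT and COMPACT row formats, shortening the column from 255 to 191 characters would do it (191 × 4 = 764 bytes), provided the stored values never exceed that length:

-- Hypothetical adjustment, assuming path is currently VARCHAR(255) NOT NULL
ALTER TABLE cookbook.hitcount MODIFY path VARCHAR(191) NOT NULL;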

Note: The index query above uses a CTE, available as of MySQL 8.0. If you are still on version 5.7 or earlier, you will need to rewrite it.

Temporary Tables

One more issue you can hit after converting to the utf8mb4 character set is an increased size of the implicit temporary tables that MySQL creates to resolve queries. Since utf8mb4 may store more data than other character sets, the columns of such implicit tables will also be bigger. To figure out whether you are affected by this issue, watch the global status variable Created_tmp_disk_tables. If it starts increasing significantly after the migration, you may consider adding RAM to your machine and increasing the maximum size of the in-memory temporary tables. Note that this issue could also be a symptom that some of your queries are poorly optimized.
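
A quick way to watch for this after the migration, and to raise the in-memory temporary table limits if needed (the 64MB value below is only an example; pick sizes that fit your RAM):

SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
-- If disk-based temporary tables keep climbing, consider raising the limits
SET GLOBAL tmp_table_size      = 67108864;  -- 64MB, example value
SET GLOBAL max_heap_table_size = 67108864;  -- keep both in sync; the lower one wins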

Conclusion

Converting to the utf8mb4 character set brings better performance, a larger range of characters that you can use (including emojis), and new collations (sorting rules). The conversion comes at almost no cost, and it can be done smoothly.

Ensure:

  • You converted all VARCHAR columns that could store more than 16,383 characters to the TEXT or LONGTEXT data type
  • You adjusted index definitions that could take more than 767 bytes for the REDUNDANT and COMPACT row formats, or 3072 bytes for the DYNAMIC and COMPRESSED row formats, after migration
  • You optimized your queries so that they do not start using internal disk-based temporary tables

Mar
29
2022
--

How to create a unicorn toddler bed?

Unicorns are among the most popular imaginary creatures for young children, and what could be more magical than a unicorn toddler bed? The JoJo Siwa Toddler Bed by Delta Children, sporting a large, colorful, and glittery unicorn headboard, is all the rage right now – and sold out in many stores.

But don’t worry: with a bit of imagination, you can turn any toddler bed into a magical unicorn paradise. Here’s how!

Start with a white or pastel-colored toddler bed.

First, you’ll need to purchase a bed frame designed for toddlers. Ensure the frame is made of sturdy materials and has high sides to prevent your child from falling out.

Dream On Me Portland Toddler Bed

Our first pick for the perfect unicorn bed frame! It’s made of durable wood and features two side rails for safety.

The bed sits low to the floor, making it easy to get in & out. The Dream On Me Portland Toddler bed makes it simple for your toddler to transition from a crib to a bed.

The bed frame comes in classic colors, including white and pale pink – ideal for a unicorn makeover!

Delta Children Wood Sleigh Toddler Bed

Delta Children may not have their unicorn toddler bed in stock, but they have plenty of other great toddler beds that would work perfectly for this project.

This toddler bed is made of sustainable New Zealand pine wood and comes in white, grey, and natural wood finishes. It has a low to the ground design, making it easy for your little one to get in and out of bed. It also features two side rails for safety.

The headboard and footboard feature an elegant sleigh design. The Delta Children Wood Sleigh Toddler Bed would be perfect for any little princess – or unicorn enthusiast!

Add a unicorn-themed toddler bed canopy.

A canopy adds an extra touch of magic to any bed, and a unicorn-themed canopy is a perfect way to transform a regular toddler bed into a unicorn toddler bed! There are plenty of options available online, from simple bed canopies to more elaborate ones with lights and tulle. Have a look at our favorites!

Unicorn Princess Pink Canopy

This pink canopy will make your little girl feel like a unicorn princess! The top of the canopy is adorned with a gold unicorn horn and ears and a flower crown. It’s made of extra-long, two-layer chiffon fabric and has hook-and-loop fasteners for easy installation.

The Unicorn Princess Pink Canopy would look great paired with the Delta Children Wood Sleigh Toddler Bed or any other white or pale-colored bed frame.

The Unicorn Princess Pink Canopy can be hung from the ceiling or attached to the bed frame. It’s sure to add some magic to any toddler bedroom!

White Bed Canopy with Glow in The Dark Unicorns, Stars, and Rainbows

This bed canopy is perfect for any unicorn enthusiast! It features 50 different glow-in-the-dark elements: unicorns, stars, and rainbows.

This toddler bed canopy is made of polyester, which is a fire-resistant material. The unicorns and the rest of the design are applied to the net using advanced thermal printing technology, which keeps the designs from ever peeling off.

The White Bed Canopy with Glow in the Dark Unicorns, Stars, and Rainbows will make bedtime even more magical!

Decorate the bed with unicorn toddler bedding.

Now that you have the perfect bed frame and canopy, it’s time to add some unicorn-themed bedding! There are plenty of adorable options available, from quilts and blankets to sheets and pillowcases. Have a look at our favorites!

Funhouse 4 Piece Toddler Bedding Set

This lovely set includes a reversible quilted bedspread, a standard-size pillowcase that may be reversed, and a fitted sheet. Fits most crib/toddler mattresses.

The quilt features a unicorn design on one side and a hearts design on the other. The Funhouse 4 Piece Toddler Bedding Set would make a colorful addition to any unicorn toddler bed!

Carter’s Rainbow Unicorn 4 Piece Toddler Bedding Set

This whimsical toddler set features a double-sided comforter, fitted bottom sheet, flat top sheet, and reversible standard-sized pillowcase in vibrant pink. It’s perfect for any toddler who loves unicorns!

URBONUR 4-Piece Toddler Bedding Set

Made of super-soft microfiber, this set includes a quilt, fitted sheet, flat sheet, and pillowcase. It’s machine washable and dryer safe for easy care.

The quilt’s pink and blue ombre design is adorned with sparkling gold unicorns, making this set extra special.

Wowelife Rainbow Unicorn Toddler Bedding Set 4 Piece

This toddler bedding set combines unicorns and rainbows in pink, creating a warm, dreamy bedroom and bringing more color and fun to everyday life. It includes a quilt, fitted sheet, flat sheet, and pillowcase.

The Wowelife Rainbow Unicorn Toddler Bedding Set 4 Piece is perfect for any little girl who loves unicorns and rainbows!

Complete the look with unicorn-themed bedroom accessories.

Now that you have the perfect bed and bedding, it’s time to accessorize! There are plenty of ways to add a touch of magic to any unicorn toddler bedroom with wall art, rugs, lamps, and more.

Once you have the bed set up, help your child into it and tuck them in with their favorite stuffed animal. Then, tell them a bedtime story about a magical unicorn kingdom where they can fly and gallop among the stars.

The post How to create a unicorn toddler bed? appeared first on Comfy Bummy.

Mar
28
2022
--

Talking Drupal #340 – Storybook

Today we are talking about Storybook with Randy Oest.

www.talkingDrupal.com/340

Topics

  • What is Storybook
  • Why are component libraries so popular
  • Difference between Storybook and Patternlab
  • Why choose Storybook
  • Useful Addons
    • Docs
    • Controls
    • Accessibility
    • Screen Size
    • Figma
    • Zeppelin
    • Write your own
    • Chromatic visual testing
  • Integration with Drupal
  • Headless environments
  • Emulsify
  • When would you not use Storybook
  • Interesting use cases
  • Chromatic (not the Drupal agency)
  • Resources for getting started

Resources

Guests

Randy Oest – randyoest.com @amazingrando

Hosts

Nic Laflin – www.nLighteneddevelopment.com @nicxvan
John Picozzi – www.epam.com @johnpicozzi
Mike Anello – drupaleasy.com @ultimike

MOTW

Perimeter – Basic perimeter defence for a Drupal site. This module bans the IPs that send suspicious requests to the site. The concept is: if you have no business here, go away.

Mar
28
2022
--

MariaDB 10.9 Quick Peek

MariaDB 10.9 Quick Peek

MariaDB 10.9 is a preview release of the popular open source database server and is considered alpha-level code (pronounced: not for production). It offers a glimpse of the evolution of the product and introduces some new features, at least for MariaDB. Since I took a peek at the MySQL 8.0.28 release notes recently, it is time to see what MariaDB announced (https://mariadb.com/kb/en/mdb-1090-rn/) for their next release. My own comments are in italics and do not reflect anyone else’s opinion.

The 10.9 server is offered in a few varieties, à la the old MySQL Labs releases, where you can try some of the new features. This allows the new features to be developed independently without having to integrate them all at once. This iteration has four choices.

MariaDB 10.9 Alpha Download Options

You have a few choices for testing new MariaDB features to choose from for this alpha version of the database.

The TL;DR

The TL;DR synopsis is that there is a lot of work being done on MariaDB, and this is a first glimpse of what will become 10.9. As such, you will probably want to wait to download and evaluate it unless one of the highlights below catches your fancy. Nothing here seems revolutionary, but it is the first step of an evolutionary process for the MariaDB 10.9 server.

The New Stuff

JSON_OVERLAPS() is a new function that returns true if there is any commonality between two JSON documents. JSON path expressions also gain range notation and negative indexes. It will be interesting to contrast these with the long-existing MySQL versions.
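
For a quick sense of what JSON_OVERLAPS() does, here is how the MySQL function of the same name behaves (the MariaDB preview is expected to work the same way, though I have not verified it on 10.9):

-- Objects overlap when they share at least one key/value pair
SELECT JSON_OVERLAPS('{"a": 1, "b": 2}', '{"b": 2, "c": 3}');  -- returns 1
-- Arrays overlap when they share at least one element
SELECT JSON_OVERLAPS('[1, 2, 3]', '[4, 5]');                   -- returns 0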

SHOW EXPLAIN adds a JSON formatted output and EXPLAIN FOR CONNECTION gains syntax support for SHOW EXPLAIN.

Writes to the redo log can now be performed asynchronously. You might want to test this under simulated duress just in case, as the redo log looks like a good thing to keep synchronous, but it may work ‘well enough’ in most cases.

Better GTID filtering for mysqlbinlog by adding the --do-domain-ids, --ignore-domain-ids, and --ignore-server-ids options. This should be handy for point-in-time recovery.

Local temporary tables now appear in information_schema.tables.table_type.

The merger of the old variable into the old_mode SQL variable. Hopefully, there will be no really_old_mode in the future.

There is a Vault Key Management Plugin for HashiCorp’s Vault.

There is now a JSON file interface to wsrep node state / SST progress logging. Apparently, Codership is adding a new feature to Galera cluster nodes to allow access to some wsrep status variables from a dedicated JSON file, which can then be read by an external monitoring tool – or by a human, for that matter. I probably would not want to be that human.

The innodb_log_file_size variable can now be changed dynamically. Handy and long desired.
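
Assuming the dynamic behavior works as announced, resizing the redo log would become a single statement instead of an edit-config-and-restart exercise (hypothetical usage, not verified on the 10.9 alpha):

-- Hypothetical usage on the 10.9 preview: resize the redo log on the fly
SET GLOBAL innodb_log_file_size = 2147483648;  -- 2GB, example value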

Please send comments and questions to the author.

Mar
25
2022
--

Okta – Percona’s statement

On 22nd March 2022 08:43 UTC, we became aware of the issue affecting Okta, a third-party identity provider that Percona uses for https://id.percona.com. Initially, there was no statement from Okta, so our Security Operations team reviewed the information available from LAPSUS$ and other public sources.

Based on the public information available about the issue, we evaluated the potential exposure to Percona and determined that the impact was minimal. Percona uses Okta integrations so https://id.percona.com can be used to authenticate against Percona’s deployments of:

  • forums.percona.com (Discourse)
  • percona.service-now.com (ServiceNow) 
  • portal.percona.com (Dashboard portal interface where users & clients can add their PMM integration). 

Integrations of PMM with Percona’s portal do not at this time allow for management from the portal.percona.com interface (read: no commands may be issued to the PMM server).

At the time of writing, Percona is aware that the level of compromise allowed LAPSUS$ to force a reset of both passwords and MFA secrets for individual users. Information released by Okta noted that passwords were not discoverable and stated that only 2.5% of Okta’s customers had been affected.

On 2022-03-24 20:04 GMT/UTC Percona received notice of no impact from Okta.

Whilst the notice states that Percona was not impacted, we strongly urge users of https://id.percona.com to follow best practices by updating their password with a wholly unique password that is not shared with other platforms and is of sufficient complexity and length, and by deploying 2FA/MFA wherever possible.

Even though the impact on Percona is minimal, we are taking actions to further strengthen the Percona services and projects that use Okta for identity management. The Security Operations team will continue to monitor public information and Okta’s response as it becomes available. We will further assess additional security actions that need to be taken and, if necessary, alternative identity management providers.

Percona’s clients’ and users’ security is at the core of our Security Operations team’s values and will continue to remain our core focus. This means we will always strive to ensure that our chosen third-party vendors introduce minimal viable risk. However, when a service provider creates a risk to our customers and its response is not provided in a timely manner, we will explore all aspects of the information being made available, arrive at our own conclusions, and strengthen our security posture.

If you have any concerns or questions related to this or other security matters at Percona, please review https://www.percona.com/security for the appropriate channels of enquiry.

Kind Regards,

David Busby
Information Security Architect, Percona

Okta’s update links:

https://www.okta.com/blog/2022/03/oktas-investigation-of-the-january-2022-compromise/
https://www.okta.com/blog/2022/03/updated-okta-statement-on-lapsus/

Thanks:

Tibor Korocz (Percona) – for raising the issue early and getting this to the top of my backlog.
John Lionis (Percona) – for assisting with the review, deep dive, and evidence collection for this issue.

Mar
25
2022
--

Percona Monitoring and Management Security Threat Tool

Percona Monitoring and Management Security Threat Tool

Percona Monitoring and Management (PMM) is a multi-faceted tool that includes the Security Threat Tool, which provides the ability to check your databases for potential configuration or performance problems. And boy, was I shocked to find I had problems when I installed PMM for testing.

The complete list of checks that PMM runs daily can be found at https://docs.percona.com/percona-platform/checks.html, and they range from unsecured log file permissions to low cache rates, depending on the database. PMM checks MySQL, MongoDB, and PostgreSQL instances. The checks are categorized as critical, major, or trivial depending on their impact, and you can silence a check if the issue is chronic but has been deemed tolerable.

I installed PMM and Percona Distribution for MySQL on a test server and enabled the PMM security checks. On the home screen, the alert menu was instantly displayed.

Security Alert

It is a little shocking to find your new server has potential security issues.

Yup, my test system had problems! I clicked on that section of the PMM home page, fearful of what horrors awaited me.

The Security Warnings

The warnings from PMM’s Security Threat Tool are clear and concise, often offering setting recommendations

There were no ‘critical’ problems, but there were two ‘major’ and three ‘trivial’ issues. The first of the ‘major’ problems was that master_verify_checksum is disabled. The master_verify_checksum variable is used in replication. Since this was a test machine and not part of any replication topology, there really is no need to have a replication source verify events read from the binary log by examining checksums and stopping in the case of a mismatch. By the way, master_verify_checksum is disabled by default.

The second ‘major’ issue is that the binary log files are rotated too quickly, and PMM suggested a value for this setting. Once again, for an ephemeral test system I could live with this issue, as nobody else depended on this system.

The ‘trivial’ issues were things that may not be considered trivial by all. The first of these is the InnoDB flush method. My test system was set up to use fsync, while PMM recommends O_DIRECT, which is usually the choice for local disks. I will not review all the options and opinions (and there are many), but it was nice to get a gentle prodding from PMM about this issue. If this test system were going to be around for a while, I would definitely change the current setting to the recommended one.

My second ‘trivial’ problem was more of a warning about a user who had DBA privileges. And the last problem was a recommendation to change binlog_row_image, which was set to full when minimal would provide better performance. You might consider this nagging by PMM, but both are issues a DBA or a reliability engineer would gladly be reminded of.
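
If you want to check the flagged settings yourself, all three are ordinary server variables (binlog_row_image and master_verify_checksum are dynamic; innodb_flush_method can only be changed in the configuration file and requires a restart):

SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('innodb_flush_method', 'binlog_row_image', 'master_verify_checksum');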

To enable the Security Threat Tool, select the gear icon on the left side of the main PMM display for the Configuration option, and then the second gear icon for Settings.

Config Settings

Please pick the configuration ‘gear’ icon

Then select advanced settings

Security Threat Tool

And finally, enable the Security Threat Tool. I would stick with the default interval values when you begin to explore this tool.

Conclusion

The Percona Monitoring and Management Security Threat Tool is a handy way to gain insight into your MySQL, PostgreSQL, or MongoDB databases. It provides information that general security tools will not, presented in Percona’s easy-to-use PMM interface. This is an open source tool that you need to have at your disposal.

Mar
24
2022
--

PostgreSQL From the Perspective of a MySQL DBA

PostgreSQL From the Perspective of a MySQL DBA

DBAs can be set in their ways. Oftentimes, we start with a particular flavor, and from that moment until the end of time, it will always be “the best”. In some cases, the debate is actually based on matching the use case to the proper tech (I’m looking at you, SQL vs. NoSQL). Lately, I’m starting to see many teams working with multiple flavors of the same general technology, which begs the question: which tech “is better”?

In the spirit of full disclosure, I’m one of those “set in my ways” DBAs that has worked extensively with one flavor. MySQL has been my focus for the better part of the last 15 years. While part of me wants to sit on the porch, waving my cane, and tell other flavors to “get off my lawn”, I’m instead using this as an opportunity to really examine PostgreSQL vs MySQL in an objective manner.

This is the first post in a series exploring PostgreSQL from the perspective of a MySQL DBA. I’ll start at a very high level – what are both technologies, how are they similar, how are they different, etc. In later posts, I’ll look at operational, schema, and other aspects as I work to learn more about PostgreSQL.

MySQL

MySQL is an RDBMS (Relational Database Management System). This means that it has all the standard features one would expect – tables, views, foreign keys, stored procedures, and ACID compliance (when using InnoDB). It works very well for most OLTP workloads and some OLAP workloads as well.

While I’ve seen some implementations doing very complicated and non-trivial workloads, MySQL tends to shine with standard, relational schemas, and web-based workloads. Simple asynchronous replication allows for easy read-scaling and report query offloading. Synchronous replication allows for straightforward HA while still maintaining ACID and high throughput.

PostgreSQL

PostgreSQL is an ORDBMS (Object Relational Database Management System). This takes all the standard RDBMS features and adds support for complex objects, table inheritance, and additional data types beyond JSON. This may seem like a small difference, but it allows PostgreSQL to support much more complex workloads and schema designs.

Similar to MySQL, replication allows teams to build out varying architectures. This can help with HA and read scalability. While it can definitely support standard OLTP/OLAP workloads, the vast community is constantly developing new features and functionality. This allows for easier adoption for a wider variety of workloads.

Key Feature Differences

A major difference between the two flavors is that PostgreSQL offers custom object definitions and table inheritance. This greatly extends the standard relational database model and provides support for very complex workloads.

Some other differences include:

  • PostgreSQL supports more modern data types (JSON, XML, etc.) compared to MySQL, which only supports JSON
  • PostgreSQL supports materialized views (see the short example after this list)
  • PostgreSQL uses an open source license similar to BSD/MIT, whereas MySQL uses the GPL license
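
As a small illustration of the materialized view difference, creating and refreshing one in PostgreSQL looks like this (the table and view names are made up for the example):

-- Store the result of an aggregate query as a materialized view
CREATE MATERIALIZED VIEW monthly_sales AS
SELECT date_trunc('month', ordered_at) AS month,
       SUM(amount) AS total
FROM orders
GROUP BY 1;

-- Re-run the underlying query and replace the stored result
REFRESH MATERIALIZED VIEW monthly_sales;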

Initial Takeaways

After looking through the high-level differences and similarities, both flavors definitely have their place in the open source database ecosystem. While both options work for basic relational workloads, MySQL excels in web-based applications while PostgreSQL shines with complex workloads. This matches a key trend I’ve been seeing – new applications/microservices tend to launch on MySQL as they are designed with very basic data structures and relationships. In contrast, PostgreSQL is commonly a target for migration from large, enterprise databases moving to open source.

So which is “better”? Like every good consulting answer, it depends. Both have strengths and weaknesses which make the target application the key deciding factor. This also leads to more hybrid environments as large enterprises modernize (aka rewrite) applications while also migrating legacy monolithic applications to open source to avoid licensing costs.

If you are having trouble deciding which flavor suits your needs, Percona’s Professional Services team can help analyze your application and requirements to look for the best fit. With support and distributions for both flavors, we don’t play favorites and will help get your business into the right solution!

Mar
24
2022
--

A Dive Into MySQL Multi-Threaded Replication

MySQL Multi-Threaded Replication

For a very long part of its history, MySQL replication has been limited in terms of performance. Because there was no way of knowing whether transactions or updates were independent, the updates had to be executed on a replica following the exact same sequence of operations as on the primary server. The only way to guarantee the same sequence of operations on the replica was to use a single thread. In this post, we’ll dive into the MySQL multi-threaded replication (MTR) implementation and explore the available tuning options.

MTR is the culmination of the evolution in the development of parallel replication which followed the path:

  1. Single-threaded replication
  2. Per-database replication
  3. Logical clock replication

We’ll leave aside, for now, the recent dependency tracking feature.

Context

Before we discuss the multi-threaded implementation, let’s review in detail the limitations of replication with a single thread. The most obvious one is CPU processing capacity. With a single thread, a process is bound to a single CPU core. If the updates the replica has to execute are CPU intensive, the replica is likely to lag behind. This situation is, however, quite exceptional; replication loads are rarely CPU intensive, or at least they shouldn’t be. With row-based replication, the only way a replica can run with a high CPU is if the schema is improperly indexed. This is obviously a pathological case.

The latency of IO operations is the most common performance limitation of a single thread. Spinning disks have a latency of a few milliseconds, essentially related to the time it takes for the platters to turn and the head to move to the desired cylinder. If this latency is 10ms, a single thread won’t be able to perform more than 100 IO operations per second. The latency of a flash storage device is much lower, but such devices are often accessed over the network, so the latency is still significant.

To illustrate the single-thread replication issue, here’s a simple sysbench indexed update benchmark with a setup sensitive to IO latency. In yellow are the rates of com_update for the primary server with one to 16 threads for short, two-minute executions of sysbench. In green are the corresponding com_update rates of the replica. The area under each curve, for a given number of sysbench threads, is the same: the replica has to execute the same updates, and it is allowed to catch up with the primary before the next run is started. While at one thread the rates are the same, at 16 threads the difference is huge.

sysbench indexed update benchmark

In real-world scenarios, we rarely see single-threaded traffic running on the primary server. Modern applications rely on concurrent scalability. This means that single-threaded replication is very likely to suffer from lag, which leads to problems. For one, lag can prevent load balancing because the replica has outdated data. Also, a failover can potentially take longer because the replica needs to recover from the lag, etc. Until the arrival of MySQL 5.6 and especially 5.7, these were the all too familiar issues with MySQL replication.

Per-Database Replication

Before jumping into the analysis of the actual topic, it is worth mentioning the parallel replication type introduced in version 5.6. This is called per-database replication or per-schema replication. It assumes that transactions running in different schemas can be applied in parallel on the replica. This was an important performance improvement, especially in sharded environments where multiple databases receive writes in parallel on the primary server.

Group Commit

The single-threaded limitation of MySQL replication was not the only performance-related issue. The rate of transactions MySQL/InnoDB could handle was quite limited in a fully durable setup. Every commit, even if implicit, needed to accomplish three fsync operations. fsync is probably the slowest type of IO operation; even on flash devices, it can take more than a millisecond.

When a file is fsynced, all the pending dirty buffers of the file are flushed to the storage, not just the ones written by the thread doing the fsync. If there are 10 threads at the commit stage, the first thread to fsync the InnoDB log file and the binary log file flushes the data of all the other nine threads. This process is called a group commit. The group commit was introduced in 5.6 and it considerably improved the scalability of MySQL for such workloads. It will also have a huge impact on replication.

Logical_clock Replication

In order to understand parallel replication, one has to realize that transactions within a group commit are independent. A transaction that is dependent on another one at the commit stage is locked by InnoDB. That information is extremely important, as it allows the transactions within a group commit to be applied in parallel.

MySQL 5.7 added markers in binary logs to indicate the group commit boundary and a new replication mode to benefit from these markers, logical_clock. You can see these markers in the binary logs with the mysqlbinlog tool. Here’s an example:

root@ip-172-31-10-84:~# mysqlbinlog ip-172-31-10-84-bin.000047 | grep last_committed | awk '{ print $11" "$12}' | more
last_committed=0 sequence_number=1
last_committed=0 sequence_number=2
last_committed=0 sequence_number=3
last_committed=0 sequence_number=4
last_committed=0 sequence_number=5
last_committed=0 sequence_number=6
last_committed=0 sequence_number=7
last_committed=0 sequence_number=8
last_committed=0 sequence_number=9
last_committed=0 sequence_number=10
last_committed=0 sequence_number=11
last_committed=0 sequence_number=12
last_committed=0 sequence_number=13 
last_committed=1 sequence_number=14 <= trx 1 committed alone
last_committed=1 sequence_number=15
last_committed=1 sequence_number=16
last_committed=1 sequence_number=17
last_committed=8 sequence_number=18 <= trx 2 to 8 committed together
last_committed=8 sequence_number=19
last_committed=8 sequence_number=20
last_committed=8 sequence_number=21
last_committed=8 sequence_number=22
last_committed=8 sequence_number=23
last_committed=17 sequence_number=24 <= trx 9 to 17 committed together
last_committed=17 sequence_number=25

There is one important thing to remember with multi-threaded replication and logical_clock: if you want the replica to apply transactions in parallel, there must be group commits on the primary.

Test Configuration

It is not as easy as it may seem to show clean replication results, even with something as simple as an indexed updates benchmark with sysbench. We must wait for replication to sync up, and flushing easily messes with the results, adding a lot of noise. To get cleaner results, you need short sysbench runs, delayed flushing, and to wait for flushing to complete before starting a new run. Even then, the results are not as clean as we would have hoped for.

We have used AWS EC2 instances for our benchmark along with the new gp3 general-purpose, SSD-based EBS volumes. Our setup can be summarized as follows:

  • Primary and Replica servers: r5b.large (2vcpu, 16GB of RAM)
  • gp3 EBS of 128GB (Stable 3000 IOPS, IO latency of about 0.7ms)
  • Dataset of 56GB, 12 tables of 4.7GB (20M rows)
  • Percona server 8.0.26 on Ubuntu 20.04

The size of the dataset was chosen to be more than four times the size of the InnoDB buffer pool (12GB). Since the updates are random, the benchmarks are IO-bound: 79% of the time, a page needs to be read from storage. The benchmark runs were chosen to be short, one or two minutes, in order to avoid issues with page flushing.

Sysbench itself was executed from a third EC2 VM, a c6i.xlarge instance, to avoid the competition for resources with the database servers. This instance also hosted Percona Monitoring and Management (PMM) to collect performance metrics of the database servers. The figures of this post are directly taken from a custom dashboard in the PMM.

How Good is MTR?

We saw earlier the limitations of replication with a single thread. For the same fully durable database cluster, let’s re-run the sysbench indexed updates benchmarks, but this time the replica has replica_parallel_workers set to 16.

replica_parallel_workers

As we can see, the replica is fully able to keep up with the primary. It starts to lag only when sysbench uses nearly as many threads as we have defined in replica_parallel_workers. Keep in mind, though, that this is a simple and well-defined workload; a real-world workload may not experience the same level of improvement.
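
For reference, the replica-side settings behind such a test boil down to something like the following (a sketch of the configuration; the post only states the worker count, replica_preserve_commit_order is an optional addition, and replica_parallel_type and replica_parallel_workers can only be changed while the replication SQL thread is stopped):

STOP REPLICA SQL_THREAD;
SET GLOBAL replica_parallel_type    = 'LOGICAL_CLOCK';
SET GLOBAL replica_parallel_workers = 16;
SET GLOBAL replica_preserve_commit_order = ON;  -- optional, keeps commit order identical to the primary
START REPLICA SQL_THREAD;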

The Cost of Durability

The durability is the “D” of the ACID acronym. By now you should be well aware of the importance of group commit for replication scalability and the prime way of achieving group commit is durability. Two variables control durability with MySQL/InnoDB:

innodb_flush_log_at_trx_commit (1 is durable, 0 and 2 are not)

sync_binlog (1 is durable, 0 is not)
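
Both variables are dynamic, so a fully durable configuration is just:

SET GLOBAL innodb_flush_log_at_trx_commit = 1;  -- flush and sync the InnoDB log at every commit
SET GLOBAL sync_binlog = 1;                     -- fsync the binary log at every commit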

There are cases, however, where a durable primary server is not desirable or, put otherwise, where durability induces too much latency. An example of such a case is ad-click recording. What matters with this workload is recording clicks on web advertisements as fast as possible. When a new promotion is out, there can be a huge spike in clicks, and being able to capture the spike is more important than potentially losing some clicks because of a crash.

The absence of durability on the primary server makes it faster for small writes but it essentially disables group commit. It becomes very difficult for the replica to keep up and we are back to the original replication performance issue.

The following figure shows four sysbench executions of one minute each with 1, 2, 3, and 4 threads. The primary server (yellow) is not durable. Although the replica (green) is configured with 16 parallel worker threads, it is obviously unable to keep up. So, between a “slow” durable primary server that allows replicas to keep up and a “fast” non-durable primary server that replicas can’t follow, is there the possibility of a compromise?

Multi-Threaded Replication

Sync Delay

Instead of relying on the hardware (fsync), what if we had the possibility to define the duration of the transaction grouping window? This is exactly the purpose of the binlog_group_commit_sync_delay variable. It defines a time, in microseconds, during which transactions are grouped. The first transaction starts a timer, and until it expires, the following transactions are grouped with it. The main advantage of this variable compared to the original behavior is that the interval can be tuned. The following figure shows the same non-durable results as the previous one, but this time a small grouping delay of 5us was added.

Sync Delay

The figure shows the same com_update rates of the primary (yellow) and replica (green). For our very simple workload, the impact of that small delay on the replica is significant. With four sysbench threads, the replica execution rate is close to 2000/s, an increase of close to 30%. There is also an impact on the primary, especially for the lower sysbench thread counts. The rate of sysbench updates with one thread is lower by close to 30% with the delay.

So, the cost of adding a delay is somewhat similar to the cost of durability. The transaction latency is increased and this affects mostly low concurrency workloads. One could argue that such workloads are not the most likely to cause replication lag issues. At a higher concurrency level, when more threads are active at the same time, the delay is still present but less visible. For a given throughput, the database will just use a few more threads to overcome the delay impacts.

Sync No Delay Count

Although a few more running threads is normally not a big issue, database workloads often present bursts during which the number of running threads rises suddenly. The addition of a grouping delay makes things worse, and contention can rise. To avoid this problem, it is possible to configure the primary so that it gives up waiting if too many transactions are already delayed. The variable controlling this behavior is binlog_group_commit_sync_no_delay_count. The meaning of the variable is: if the number of transactions waiting for the sync delay reaches the value of the variable, stop waiting and proceed.

This variable provides a safeguard against the worst impacts of a sync delay. One could set an aggressive delay to help the replica keep up with the normal workload and have a low no-delay count to absorb the peaks. As a rule of thumb, if the no delay count is used, it should be set to the number of replica worker threads.
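
Putting the two variables together for a replica with 16 worker threads might look like this (the 5-microsecond delay matches the earlier test; both settings are dynamic and should be tuned against your own workload):

SET GLOBAL binlog_group_commit_sync_delay = 5;            -- group transactions for up to 5 microseconds
SET GLOBAL binlog_group_commit_sync_no_delay_count = 16;  -- but stop waiting once 16 transactions are queued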

In order to better understand the impacts of the no-delay count value, the following figure shows the sysbench update rates for a high sync delay (12ms) but with various values of no delay count.

In all cases, sysbench used 16 threads and the replica had 16 worker threads. A value of “0” (left) disables the no-delay count feature. This leads to a very low rate on the primary but a perfectly synchronized replica. A value of “1” essentially disables the sync delay feature. This leads to a high rate of transactions on the primary, but the replica struggles. For higher values, the stability of the primary transaction rates surprised me; I was expecting a decrease in performance with higher no-delay count values. The replica, on the other hand, behaves as expected: the larger the grouping, the higher its transaction rates.

Conclusion

In this post, we discussed the MySQL LOGICAL_CLOCK multi-threaded replication implementation introduced in 5.7 and its relation to group commit. Although outstanding, group commit relies on durability, and not all workloads can deal with the additional latency. The LOGICAL_CLOCK implementation also supports a more flexible grouping delay algorithm, along with a maximum grouping count, for these latency-sensitive workloads.

The improvements to replication performance since 5.7 have been simply amazing. These helped to unlock the performance bottleneck that has plagued replication for years. Hopefully, this post will help you understand this piece of technology and how to tune it.

Mar
23
2022
--

Winnie The Pooh Baby Clothes – You Can’t Go Wrong With These!

You can’t go wrong when you dress your baby in Winnie the Pooh baby clothes. After all, who can resist that lovable, huggable bear? Pooh is one of the most popular cartoon characters for babies, and with good reason – he’s irresistibly cute!

There are many styles and designs of Winnie the Pooh baby clothes to choose from. You can find everything from sleepers and rompers to shirts and hats. No matter what you’re looking for, you’ll find it in Pooh’s clothing line.

One of the great things about Winnie the Pooh baby clothes is that they are very affordable, and you can find some fantastic deals on quality clothes that will keep your child looking cute all year long. So what are you waiting for? Dress your baby in Pooh and watch them light up with happiness!

One of the great things about Winnie the Pooh baby clothes is that they’re not just for babies. You can also find toddler and even adult sizes. So, if you want to dress your whole family in Winnie the Pooh clothes, you can!

The Best Winnie The Pooh Baby Clothes On Amazon

If you’re looking for a great deal on Winnie the Pooh clothes, you’ll definitely want to check out Amazon. They have a vast selection of Pooh clothes for both babies and adults, and they often have sales or discounts available. Plus, if you have Amazon Prime, you can get free shipping on your order!

We’ve scoured the website to find the cutest, most stylish, and most affordable Pooh clothes for your little one.

Take a look at our top picks, and get ready to dress your baby in the cutest clothes!

Amazon Essentials Disney Family Matching Pajama Sleep Sets

These pajamas are soft, comfortable, and perfect for a lazy day at home. You can find rompers and one-piece pajamas with snaps for easy dressing for a baby or a toddler.

For adults, you have long sleeve top and pants. Your child will love being able to match Mommy and Daddy!

The fabric is very soft and breathable. These PJs are machine-washable and come in a variety of sizes. They’re also affordably priced so that you can stock up on a few sets.

Disney Winnie The Pooh Sleeper for Baby

If you’re looking for a super-soft, comfy sleeper for your baby, look no further than this one from Disney. It’s made of super soft fleece and has a cute Pooh bear on the front.

The sleeper is designed to keep your baby warm and comfortable all night long. Long sleeves and legs help to keep them cozy, and the front zipper makes it easy to get your baby in and out.

Non-skid feet help to prevent your baby from slipping and sliding. This sleeper stole our hearts with its design, comfort, and affordability.

Winnie The Pooh Baby Toddler Girls Fit and Flare Ultra Soft Dress

Winnie the Pooh dress will make any little girl smile! This adorable dress includes her favorite Winnie the Pooh characters: Pooh Bear, Piglet, Eeyore, Owl, Rabbit, and Tigger! Pooh and friends are printed all over the dress, and it has a ruffle trim. The dress’s bodice is fitted, while a skirt flares out from the waist to create a flattering silhouette.

This dress is perfect for any special occasion or just because. It’s machine-washable and comes in a variety of sizes. We love that it’s both cute and affordable!

The Winnie The Pooh dress is made from 95% polyester and 5% spandex. This lovely dress is made of buttery soft polyester fabric with a stretchy elastic waistband that is inside-lined. Matching bloomer diaper covers are available for the baby sizes!

Komar Kids Girls’ Disney Baby Footed Sleep & Play

The Disney Winnie the Pooh Sleeper is lovely, comfy, and has a zipper guard to keep your infant’s skin safe while playing and sleeping.

The quality of this sleeper is exceptional, and it’s made of 100% cotton that’s smooth, comfy, and will help your baby get a good night’s sleep. It is also ideal for relaxing in comfort!

This sleeper adheres to safety regulations that are so important when baby products are considered.

Disney Winnie The Pooh First Birthday Layette Set

The first birthday is easily the most important milestone in a baby’s life, so make sure you’re prepared with this adorable Winnie the Pooh set!

Your little honey will be celebrating in style with this Winnie the Pooh first birthday set. The set includes a bodysuit and socks to keep them comfy and cozy and a bib with self-stick fabric closure to keep them neat when the cake is served. Long sleeve bodysuit features Winnie the Pooh with a balloon and “1” applique.

This set is machine washable and super affordable, so you can use it again for your next baby if you wish!

LLmoway Kids Baby Toddler Infant Knit Hat Beanie Cap

This LLmoway baby beanie is too cute! It features 3D ears that will make your baby look like they’ve transformed into Winnie the Pooh!

We love the adorable design. It’s made of high-quality, soft knit material to keep your little one’s head warm all winter long.

A Summary – Winnie The Pooh Clothing For Infants And Toddlers

If you’re looking for clothing for your infant or toddler that features everyone’s favorite honey-loving bear, Winnie the Pooh, we’ve got you covered. We’ve found some of the cutest and most affordable items available today.

Our top picks include a super-soft sleeper, an adorable footed sleep and play, a first birthday set, and a cozy knit beanie. These items are made from high-quality materials and adhere to safety regulations, so you can rest assured that your child is dressed in the best of the best.

We hope you enjoy these products as much as we do!

The post Winnie The Pooh Baby Clothes – You Can’t Go Wrong With These! appeared first on Comfy Bummy.
