Sep 30, 2019

AWS IQ matches AWS customers with certified service providers

AWS has a lot going on, and it’s not always easy for customers to deal with the breadth of its service offerings on their own. Today, the company announced a new service called AWS IQ that is designed to connect customers with certified service providers.

“Today I would like to tell you about AWS IQ, a new service that will help you to engage with AWS Certified third party experts for project work,” AWS’s Jeff Barr wrote in a blog post introducing the new feature. This could involve training, support, managed services, professional services or consulting. All of the companies available to help have received associate, specialty or professional certification from AWS, according to the post.

You start by selecting the type of service you are looking for, such as training or professional services; the tool then walks you through the process of defining your needs, including providing a title, a description and what you are willing to pay for these services. The service then connects the requestor with a set of providers that match the requirements. From there, the requestor can review expert profiles and compare the ratings and offerings in a kind of online marketplace.

AWS IQ start screen: you start by selecting the type of service you want to engage.

Swami Sivasubramanian, vice president at AWS, says they wanted to offer a way for customers and service providers to get together. “We built AWS IQ to serve as a bridge between our customers and experts, enabling them to get to work on new projects faster and easier, and removing many of the hassles and roadblocks that both groups usually encounter when dealing with project-based work,” he said in a statement.

The company sees this as a particularly valuable tool for small and medium-sized vendors, who might lack the expertise to find help with AWS services. The end result is that everyone should win. Customers get direct access to this community of experts, and the experts can more easily connect with potential customers to build their AWS consulting practice.

Sep 30, 2019

Microsoft’s Windows Virtual Desktop service is now generally available

Microsoft today announced that Windows Virtual Desktop (WVD), the Azure-based system for virtualizing the Windows and Office user experience it announced last September, is now generally available. Using WVD, enterprises can give their employees access to virtualized applications and remote desktops, including the ability to provide multi-session Windows 10 experiences, something that sets Microsoft’s offering apart from those of other vendors that offer virtualized Windows desktops and applications.

In addition to making the service generally available, Microsoft is also rolling it out globally, whereas the preview was U.S.-only and the original plan was to slowly roll it out globally. Scott Manchester, the principal engineering lead for WVD, also told me that more than 20,000 companies signed up for the preview. He also noted that Microsoft Teams is getting enhanced support in WVD with a significantly improved video conferencing experience.

Shortly after announcing the preview of WVD, Microsoft acquired a company called FSLogix, which specialized in provisioning the same kind of virtualized Windows environments that Microsoft offers through WVD. As Microsoft’s corporate VP for Microsoft 365 told me ahead of today’s announcement, the company took a lot of the know-how from FSLogix to ensure that the user experience on WVD is as smooth as possible.

Brad Anderson, CVP of Microsoft 365, noted that just as enterprises are getting more comfortable with moving some of their infrastructure to the cloud (and have others worry about managing it), there is now also growing demand from organizations that want this same experience for their desktop experiences. “They look at the cloud as a way of saying, ‘listen, let the experts manage the infrastructure. They can optimize it; they can fine-tune it; they can make sure that it’s all done right.’ And then I’ll just have a first-party service — in this case Microsoft — that I can leverage to simplify my life and enable me to spin up and down capacity on demand,” Anderson said. He also noted, though, that making sure that these services are always available is maybe even more critical than for other workloads that have moved to the cloud. If your desktop stops working, you can’t get much done, after all.

Anderson also stressed that if a customer wants a multi-session Windows 10 environment in the cloud, WVD is the only way to go because that is the only way to get a license to do so. “We’ve built the operating system, we built the public cloud, so that combination is going to be unique and this gives us the ability to make sure that that Windows 10 experience is the absolute best on top of that public cloud,” he noted.

He also stressed that the FSLogix acquisition enabled his team to work with the Office team to optimize the user experience there. Thanks to this, when you spin up a new virtualized version of Outlook, for example, it’ll just take a second or two to load instead of almost a minute.

A number of companies are also still looking to upgrade their old Windows 7 deployments. Microsoft will stop providing free security patches for them very soon, but on WVD, these users will still be able to get access to virtualized Windows 7 desktops with free extended security updates until January 2023. Anderson does not believe that this will be a major driver for WVD adoption, but he does see “pockets of customers who are working on their transition.”

Enterprises can access Windows 10 Enterprise and Windows 7 Enterprise on WVD at no additional licensing cost (though, of course, the Azure resources they consume will cost them) if they have an eligible Windows 10 Enterprise or Microsoft 365 license.

 

Sep 30, 2019

Confluent adds free tier to Kafka real-time streaming data cloud service

When Confluent launched a cloud service in 2017, it was trying to reduce some of the complexity related to running a Kafka streaming data application. Today, it introduced a free tier to that cloud service. The company hopes to expand its market beyond large technology company customers, and the free tier should make it easier for smaller companies to get started.

The new tier provides up to $50 of service a month for up to three months. Company CEO Jay Kreps says that while $50 might not sound like much, it’s actually hundreds of gigabytes of throughput and makes it easy to get started with the tool.

“We felt like we can make this technology really accessible. We can make it as easy as we can. We want to make it something where you can just get going in seconds, and not have to pay anything to start building an application that uses real-time streams of data,” Kreps said.

Kafka has been available as an open-source product since 2011, so it’s been free to download, install and build applications with, but it still required a ton of compute and engineering resources to pull off. The cloud service was designed to simplify that, and the free tier lets developers get comfortable building a small application without making a large financial investment.

Once they get used to working with Kafka on the free version, users can then buy in whatever increments make sense for them, and only pay for what they use. It can be pennies’ worth of Kafka or hundreds of dollars, depending on a customer’s individual requirements. “After free, you can buy 11 cents’ worth of Kafka or you can buy $10 worth, all the way up to these massive users like Lyft that use Confluent Cloud at huge scale as part of their ridesharing service,” he said.

While a free SaaS trial might feel like a common kind of marketing approach, Kreps says for a service like Kafka, it’s actually much more difficult to pull off. “With something like a distributed system where you get a whole chunk of infrastructure, it’s actually technically an extraordinarily difficult thing to provide zero to elastic scale up capabilities. And a huge amount of engineering goes into making that possible,” Kreps explained.

Kafka processes massive streams of data in real time. It was originally developed inside LinkedIn and open-sourced in 2011. Confluent launched as a commercial entity on top of the open-source project in 2014. In January the company raised $125 million on a $2.5 billion valuation. It has raised more than $205 million, according to Crunchbase data.

Sep 30, 2019

Avoid Vendor Lock-in with Percona Backup for MongoDB

Percona Backup for MongoDB v1 is the first GA version of our new MongoDB backup tool. This has been custom-built to assist those users who don’t want to pay for MongoDB Enterprise and Ops Manager but do want a fully-supported community backup tool that can perform cluster-wide consistent backups in MongoDB.

Please visit our webpage to download the latest version of this software.

In a nutshell, what can it do?

Currently, Percona Backup for MongoDB is designed to give you an easy command-line interface which allows you to perform a consistent backup/restore of clusters and non-sharded replica sets. It uses S3 (or S3-compatible) object storage for the remote store. Percona Backup for MongoDB can improve your cluster backup consistency compared to the filesystem snapshot method, and can save you time and effort if you are implementing MongoDB backups for the first time.

Why did we create Percona Backup for MongoDB?

Many people in the community and within our customer base told us that they wanted a tool that could easily back up and restore clusters. This feature was something they felt “locked” them into MongoDB’s enterprise subscription.

Percona is anti-vendor-lock-in and strives to create and provide tools, software, and services to help our customers achieve the freedom they need to use, manage, and move their data easily. We will continue to develop and add new features to Percona Backup for MongoDB, enabling users to have more freedom of choice when selecting a backup software provider.

Couldn’t I do all this myself?

It is possible to build your own scripts and tools which enable you to perform consistent backups across a cluster; in fact, many in the community have done this. But, not all users and enterprises have the technical skill or knowledge required to build something that they can feel confident will consistently back up their databases. Many users are looking for a fully-supported, community-driven tool to fill in the gaps. This is especially important given the steady evolution of MongoDB’s replication and sharding protocols. Keeping up with new features and code can be challenging for DBAs, who usually also have a range of additional responsibilities to meet.

What other features are coming in the future?

Percona Backup for MongoDB v1 met the original goal laid out by the community: to create a tool that can take a consistent backup of a cluster as easily as mongodump does for a single non-sharded replica set. On top of that, the restore is as simple as “pbm list” + “pbm restore <backup-you-want>”.
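If you want to see that flow end to end, here is a minimal sketch of the commands (the connection URI and the backup name are illustrative only; point the URI at your own cluster and use a name from your own “pbm list” output):

# Take a cluster-wide consistent backup
pbm backup --mongodb-uri "mongodb://pbmuser:secretpwd@localhost:27017/"

# See which backups are stored in the configured S3 bucket
pbm list --mongodb-uri "mongodb://pbmuser:secretpwd@localhost:27017/"

# Restore the backup you want, using the name printed by "pbm list"
pbm restore 2019-09-30T10:12:44Z --mongodb-uri "mongodb://pbmuser:secretpwd@localhost:27017/"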

However, there is still a lot more we plan to do in order to extend and enhance this tool. Our short-term feature roadmap includes:

  • Point in time restores
  • Better User Interface: Additional Status and Logging
  • Distributed transaction handling

Help us make it better!

We would love your help in making this tool even better! Yes, we accept code contributions. We know many of you have already solved tricky issues around backup in MongoDB and would appreciate any contributions you have which would improve Percona Backup for MongoDB. We would also love to hear what features you think we need to include in future versions.

For more details on our commitment to MongoDB and our latest software, please visit our website.

Sep 29, 2019

Why is Dropbox reinventing itself?

According to Dropbox CEO Drew Houston, 80% of the product’s users rely on it, at least partially, for work.

It makes sense, then, that the company is refocusing to try and cement its spot in the workplace; to shed its image as “just” a file storage company (in a time when just about every big company has its own cloud storage offering) and evolve into something more immutably core to daily operations.

Earlier this week, Dropbox announced that the “new Dropbox” would be rolling out to all users. It takes the simple, shared folders that Dropbox is known for and turns them into what the company calls “Spaces” — little mini collaboration hubs for your team, complete with comment streams, AI for highlighting files you might need mid-meeting, and integrations into things like Slack, Trello and G Suite. With an overhauled interface that brings much of Dropbox’s functionality out of the OS and into its own dedicated app, it’s by far the biggest user-facing change the product has seen since launching 12 years ago.

Shortly after the announcement, I sat down with Dropbox VP of Product Adam Nash and CTO Quentin Clark. We chatted about why the company is changing things up, why they’re building this on top of the existing Dropbox product, and the things they know they just can’t change.

You can find these interviews below, edited for brevity and clarity.

Greg Kumparak: Can you explain the new focus a bit?

Adam Nash: Sure! I think you know this already, but I run products and growth, so I’m gonna have a bit of a product bias to this whole thing. But Dropbox… one of its differentiating characteristics is really that when we built this utility, this “magic folder”, it kind of went everywhere.

Sep 27, 2019

Upgrading to MySQL 8? Meet the MySQL Shell Upgrade Checker Utility

MySQL Shell is a pretty nice piece of software. It is not just another mysql client; it is also a tool that offers scripting capabilities for JavaScript and Python. And one of the coolest things you can do with it is check whether your MySQL 5.7 server is ready for an upgrade or not. Enter: the Upgrade Checker Utility.

MySQL Shell Upgrade Checker Utility

So what is it? It is a script that will check your MySQL 5.7 instance for compatibility errors and issues with upgrading. It’s important to note the word “check”. It doesn’t fix. It just checks. Fixing is on you, friendly DBA (or we can happily assist with it).

But isn’t there something that already does that? Close, but no. The mysqlcheck program with the --check-upgrade parameter does something similar: it invokes the CHECK TABLE ... FOR UPGRADE command. The Upgrade Checker goes beyond that. There are also instructions to perform the same checks independently, available here: https://dev.mysql.com/doc/refman/8.0/en/upgrade-prerequisites.html
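For reference, that older check looks roughly like this (assuming you can connect as root locally); it only runs the per-table check, while the Upgrade Checker covers the server-wide items shown below:

mysqlcheck -u root -p --all-databases --check-upgrade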

Now, how can I use it? First of all, you’ll have to install the MySQL Shell package:

yum install -y mysql-shell.x86_64
….
….
Transaction test succeeded
Running transaction
  Installing : mysql-shell-8.0.17-1.el7.x86_64                                                                      1/1
  Verifying  : mysql-shell-8.0.17-1.el7.x86_64                                                                      1/1

Installed:
  mysql-shell.x86_64 0:8.0.17-1.el7

Now you are ready to perform the check. It is as simple as executing this one-liner:

mysqlsh root@localhost -e "util.checkForServerUpgrade();"

[root@mysql2 ~]# mysqlsh root@localhost -e "util.checkForServerUpgrade();"
The MySQL server at /var%2Flib%2Fmysql%2Fmysql.sock, version 5.7.27-log - MySQL
Community Server (GPL), will now be checked for compatibility issues for
upgrade to MySQL 8.0.17...

1) Usage of old temporal type
  No issues found

2) Usage of db objects with names conflicting with reserved keywords in 8.0
  No issues found

3) Usage of utf8mb3 charset
  No issues found

4) Table names in the mysql schema conflicting with new tables in 8.0
  No issues found

5) Partitioned tables using engines with non native partitioning
  No issues found

6) Foreign key constraint names longer than 64 characters
  No issues found

7) Usage of obsolete MAXDB sql_mode flag
  No issues found

8) Usage of obsolete sql_mode flags
  No issues found

9) ENUM/SET column definitions containing elements longer than 255 characters
  No issues found

10) Usage of partitioned tables in shared tablespaces
  No issues found

11) Circular directory references in tablespace data file paths
  No issues found

12) Usage of removed functions
  No issues found

13) Usage of removed GROUP BY ASC/DESC syntax
  No issues found

14) Removed system variables for error logging to the system log configuration
  To run this check requires full path to MySQL server configuration file to be specified at 'configPath' key of options dictionary
  More information:
    https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-13.html#mysqld-8-0-13-logging

15) Removed system variables
  To run this check requires full path to MySQL server configuration file to be specified at 'configPath' key of options dictionary
  More information:
    https://dev.mysql.com/doc/refman/8.0/en/added-deprecated-removed.html#optvars-removed

16) System variables with new default values
  To run this check requires full path to MySQL server configuration file to be specified at 'configPath' key of options dictionary
  More information:
    https://mysqlserverteam.com/new-defaults-in-mysql-8-0/

17) Schema inconsistencies resulting from file removal or corruption
  No issues found

18) Issues reported by 'check table x for upgrade' command
  No issues found

19) New default authentication plugin considerations
  Warning: The new default authentication plugin 'caching_sha2_password' offers
    more secure password hashing than previously used 'mysql_native_password'
    (and consequent improved client connection authentication). However, it also
    has compatibility implications that may affect existing MySQL installations.
    If your MySQL installation must serve pre-8.0 clients and you encounter
    compatibility issues after upgrading, the simplest way to address those
    issues is to reconfigure the server to revert to the previous default
    authentication plugin (mysql_native_password). For example, use these lines
    in the server option file:

    [mysqld]
    default_authentication_plugin=mysql_native_password

    However, the setting should be viewed as temporary, not as a long term or
    permanent solution, because it causes new accounts created with the setting
    in effect to forego the improved authentication security.
    If you are using replication please take time to understand how the
    authentication plugin changes may impact you.
  More information:
    https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-compatibility-issues
    https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-replication

Errors:   0
Warnings: 1
Notices:  0

No fatal errors were found that would prevent an upgrade, but some potential issues were detected. Please ensure that the reported issues are not significant before upgrading.

Cool, right?

Some checks could not run because we didn’t pass the path to the configuration file. Let’s provide it:

util.checkForServerUpgrade('root@localhost:3306', {"password":"password", "targetVersion":"8.0.11", "configPath":"/etc/my.cnf"})

Now we have 22 warnings instead of one!

13) System variables with new default values
  Warning: Following system variables that are not defined in your
    configuration file will have new default values. Please review if you rely on
    their current values and if so define them before performing upgrade.
  More information:
    https://mysqlserverteam.com/new-defaults-in-mysql-8-0/

  back_log - default value will change
  character_set_server - default value will change from latin1 to utf8mb4
  collation_server - default value will change from latin1_swedish_ci to
    utf8mb4_0900_ai_ci
  event_scheduler - default value will change from OFF to ON
  explicit_defaults_for_timestamp - default value will change from OFF to ON
  innodb_autoinc_lock_mode - default value will change from 1 (consecutive) to
    2 (interleaved)
  innodb_flush_method - default value will change from NULL to fsync (Unix),
    unbuffered (Windows)
  innodb_flush_neighbors - default value will change from 1 (enable) to 0
    (disable)
  innodb_max_dirty_pages_pct - default value will change from 75 (%) to 90 (%)
  innodb_max_dirty_pages_pct_lwm - default value will change from 0 (%) to 10
    (%)
  innodb_undo_log_truncate - default value will change from OFF to ON
  innodb_undo_tablespaces - default value will change from 0 to 2
  log_error_verbosity - default value will change from 3 (Notes) to 2 (Warning)
  max_allowed_packet - default value will change from 4194304 (4MB) to 67108864
    (64MB)
  max_error_count - default value will change from 64 to 1024
  optimizer_trace_max_mem_size - default value will change from 16KB to 1MB
  performance_schema_consumer_events_transactions_current - default value will
    change from OFF to ON
  performance_schema_consumer_events_transactions_history - default value will
    change from OFF to ON
  slave_rows_search_algorithms - default value will change from 'INDEX_SCAN,
    TABLE_SCAN' to 'INDEX_SCAN, HASH_SCAN'
  table_open_cache - default value will change from 2000 to 4000
  transaction_write_set_extraction - default value will change from OFF to
    XXHASH64

There’s some work to do there.

And now the question:

Does it work with Percona Server for MySQL? Yes, it does!

To avoid messing with repos I just got the Shell from the MySQL downloads:

wget https://dev.mysql.com/get/Downloads/MySQL-Shell/mysql-shell-8.0.17-linux-glibc2.12-x86-64bit.tar.gz
tar -xvzf mysql-shell-8.0.17-linux-glibc2.12-x86-64bit.tar.gz
cd mysql-shell-8.0.17-linux-glibc2.12-x86-64bit/bin/
./mysqlsh

In conclusion, doing the pre-checks required to go from MySQL 5.7 to MySQL 8 is easier than ever with the MySQL Shell Upgrade Checker Utility. Besides reading the release notes, the only necessary step is now a single command that leaves little room for mistakes. Happy upgrade!

Sep 27, 2019

Multiplexing (Mux) in ProxySQL: Use Case

Multiplexing Background

Historically, multiplexing is a technique used in networking to combine multiple analog and digital signals over a shared medium. The goal of multiplexing over the network was to enable signals to be transmitted more efficiently for a given communication channel, thus achieving cost efficiency.

Since the term multiplexing comes from telecommunications, the industry has heavily used devices called multiplexers, aka muxes. The process itself is often called muxing: combining analog or digital signals onto a single line.

The technique was developed in the early 1870s, with its origins in telegraphy, and it became a standard for digital telecommunications in the 20th century.

The following multiplexing methods are currently available:

  • Frequency Division Multiplexing (FDM)
  • Time Division Multiplexing (TDM)
  • Wavelength Division Multiplexing (WDM)

Telecommunications also has the reverse operation, the demultiplexer (demux), which we aren’t going to get into too much; demuxing involves separating a composite signal back into its component signals.

How can we achieve Multiplexing for MySQL database connections?

We would basically need a mux between the database and the application server; that is, a proxy layer that can combine communication channels to the backend database. Again, the goal is to reduce the overhead of opening many connections and to keep the number of open connections minimal, reducing the memory footprint.

Thread pooling is available in the following MySQL distributions:

  • MySQL Enterprise Edition 
  • Percona Server 
  • MariaDB 

The way they work is that the listener accepts connections and hands them over to a group of thread pools, so each client connection is handled by the same thread pool. These implementations aren’t as efficient as ProxySQL’s.

ProxySQL comes to help when we need to mux our communication channels to a database server that could otherwise be flooded with connections.

Main Use Cases

  • Any application with a persistent connection to the database
  • Java applications in particular, with their built-in connection pools

Goals to Achieve 

  • Reduce connections similar to Aurora
  • Reduce huge number of connections from hardware 
  • Reduce context switching
  • Reduce mutex/contention 
  • Reduce CPU cache usage

ProxySQL Technique Used:

Threading Models 

  • One thread per connection
    • Easier to develop
    • Blocking I/O
  • Thread Pooling
    • Non-blocking I/O
    • Scalable architecture

Here’s what we can control using ProxySQL

Pros

  • Reduced number of connections and overhead to the backend database  
  • Control over database connections
  • Collect metrics of all database connections and monitor 
  • Can be enabled and disabled. 

Cons

  • Certain conditions and limitations apply to use
  • Can cause unexpected behavior to application logic. 

How it works:  

Ad-hoc enable/disable of multiplexing

The mysql_query_rules.multiplex field allows you to enable or disable multiplexing based on matching criteria.

The field currently accepts these values:

  • 0: disable multiplex
  • 1: enable multiplex
  • 2: do not disable multiplex for this specific query containing @

Also, ProxySQL has some default behavior that can lead to unexpected results. In fact, ProxySQL deliberately disables multiplexing on a connection in the following cases:

  • When a transaction is active
  • When you issued commands like LOCK TABLE, GET_LOCK() and others
  • When you created a temporary table using CREATE TEMPORARY TABLE
  • When the query uses session or user variables starting with the @ symbol

In the majority of cases, ProxySQL can return the connection to the pool and re-enable multiplexing. But in the last case, when session and user variables starting with @ are used, the connections never return to the pool.

In a recent investigation on one of our client’s systems, we discovered a high number of connections and threads in use, despite the ProxySQL layer. The problem was related to queries containing the @ symbol that the Java connector issues when establishing a new connection. In a matter of seconds, multiplexing was disabled for all the connections. The effect was no multiplexing at all: for example, 2,000 connections from the front ends were reflected in the same number of connections to the back ends. We needed a way to avoid the automatic disabling of multiplexing.

Here is how to find, in the ProxySQL stats, all the executed queries containing the @ symbol:

SELECT DISTINCT digest, digest_text FROM stats_mysql_query_digest WHERE digest_text LIKE '%@%';
*************************** 1. row ***************************
digest_text: select @@version_comment limit ?
*************************** 2. row ***************************
digest_text: SELECT @@session.auto_increment_increment AS auto_increment_increment, @@character_set_client AS character_set_client, @@character_set_connection AS character_set_connection, @@character_set_results AS character_set_results, @@character_set_server AS character_set_server, @@collation_server AS collation_server, @@init_connect AS init_connect, @@interactive_timeout AS interactive_timeout, @@license AS license, @@lower_case_table_names AS lower_case_table_names, @@max_allowed_packet AS max_allowed_packet, @@net_buffer_length AS net_buffer_length, @@net_write_timeout AS net_write_timeout, @@query_cache_size AS query_cache_size, @@query_cache_type AS query_cache_type, @@sql_mode AS sql_mode, @@system_time_zone AS system_time_zone, @@time_zone AS time_zone, @@tx_isolation AS tx_isolation, @@wait_timeout AS wait_timeout
*************************** 3. row ***************************
digest_text: SELECT @@session.auto_increment_increment AS auto_increment_increment, @@character_set_client AS character_set_client, @@character_set_connection AS character_set_connection, @@character_set_results AS character_set_results, @@character_set_server AS character_set_server, @@init_connect AS init_connect, @@interactive_timeout AS interactive_timeout, @@license AS license, @@lower_case_table_names AS lower_case_table_names, @@max_allowed_packet AS max_allowed_packet, @@net_buffer_length AS net_buffer_length, @@net_write_timeout AS net_write_timeout, @@query_cache_size AS query_cache_size, @@query_cache_type AS query_cache_type, @@sql_mode AS sql_mode, @@system_time_zone AS system_time_zone, @@time_zone AS time_zone, @@tx_isolation AS tx_isolation, @@wait_timeout AS wait_timeout

To create rules on ProxySQL:

INSERT INTO mysql_query_rules(active,digest,multiplex,apply) SELECT DISTINCT 1,digest,2,0 FROM stats_mysql_query_digest WHERE digest_text LIKE '%@%';

LOAD MYSQL QUERY RULES TO RUNTIME;

SAVE MYSQL QUERY RULES TO DISK;
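To confirm the rules are active and that multiplexing is actually saving backend connections, you can query the ProxySQL admin interface. A minimal sketch (the admin port 6032 and the admin/admin credentials are ProxySQL defaults; adjust them to your setup):

# Verify the new rules are loaded at runtime
mysql -h 127.0.0.1 -P 6032 -u admin -padmin \
  -e "SELECT rule_id, digest, multiplex, apply FROM runtime_mysql_query_rules;"

# Compare frontend vs. backend connections: ConnUsed + ConnFree per backend
# should stay far below the number of client connections when multiplexing works
mysql -h 127.0.0.1 -P 6032 -u admin -padmin \
  -e "SELECT hostgroup, srv_host, ConnUsed, ConnFree, ConnOK FROM stats.stats_mysql_connection_pool;"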

And the result, captured directly from PMM, revealed huge savings in connections and thread pool usage.

Conclusion

In conclusion, ProxySQL’s multiplexing feature is something every high-traffic site would benefit from. What we always observe is that a lower number of connections to the backend lowers the overhead of both memory utilization and context switching. In order to fully benefit from multiplexing, be aware of the above limitations and investigate your connection types even after implementing it.

References 

https://www.allaboutcircuits.com/technical-articles/an-intro-to-multiplexing-basis-of-telecommunications/

https://www.techopedia.com/definition/8472/multiplexing

Understanding Multiplexing in Telecommunications

https://medium.com/searce/reduce-mysql-memory-utilization-with-proxysql-multiplexing-cbe09da7921c

https://en.wikipedia.org/wiki/Multiplexing

Thanks to the following people for their valuable input:

  • Rene Cannao
  • Marco Tusa
  • Daniel Guzman Burgos
  • Corrado Pandiani
  • Tom De Comman

Multiplexing diagram by The Anome, modified, CC BY-SA 3.0

Sep 27, 2019

Google will soon open a cloud region in Poland

Google today announced its plans to open a new cloud region in Warsaw, Poland to better serve its customers in Central and Eastern Europe.

This move is part of Google’s overall investment in expanding the physical footprint of its data centers. Only a few days ago, after all, the company announced that, in the next two years, it would spend $3.3 billion on its data center presence in Europe alone.

Google Cloud currently operates 20 different regions with 61 availability zones. Warsaw, like most of Google’s regions, will feature three availability zones and launch with all the standard core Google Cloud services, including Compute Engine, App Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner and BigQuery.

To launch the new region in Poland, Google is partnering with Domestic Cloud Provider (a.k.a. Chmury Krajowej, which itself is a joint venture of the Polish Development Fund and PKO Bank Polski). Domestic Cloud Provider (DCP) will become a Google Cloud reseller in the country and build managed services on top of Google’s infrastructure.

“Poland is in a period of rapid growth, is accelerating its digital transformation, and has become an international software engineering hub,” writes Google Cloud CEO Thomas Kurian. “The strategic partnership with DCP and the new Google Cloud region in Warsaw align with our commitment to boost Poland’s digital economy and will make it easier for Polish companies to build highly available, meaningful applications for their customers.”

Sep 26, 2019

MediaRadar’s new product helps event organizers maximize sales

MediaRadar CEO Todd Krizelman describes his company as having “a very specific objective, which is to help media salespeople sell more advertising” by providing them with crucial data. And with today’s launch of MediaRadar Events, Krizelman hopes to do something similar for event organizers.

These customer groups might actually be one and the same, as plenty of companies (including TechCrunch) see both advertising and events as part of their business. In fact, Krizelman said customer demand “basically pushed us into this business.”

He also suggested that after years of seeing traditional ad dollars shifting into digital, “the money is now moving out of digital into events.”

If you’re organizing a trade show, you can use MediaRadar Events to learn about the overall size of the market, and then see who’s been purchasing sponsorships and exhibitor booths at similar events.

The product doesn’t just tell you who to reach out to, but how much these companies have paid for booths and sponsorships in the past, whether there are seasonal patterns in their conference spending and how that spending fits into their overall marketing budget — after all, Krizelman said, “In 2019, very few companies are siloed by media format as a buyer or a seller. Anyone doing that is putting their business at risk.”

He also described collecting the data needed to power MediaRadar Events as “much more complicated than we expected,” which is why it took the team two years to build the product. He said that data comes from three sources — some of it is posted publicly by event organizers, some is shared directly by the event organizers with MediaRadar and, in some cases, members of the MediaRadar team will attend the events themselves.

MediaRadar Events supports a wide range of events, although Krizelman acknowledged that it doesn’t have data for every industry. For example, he suggested that a convention for coin-operated laundromat owners might be “too niche” (though he hastened to add that he meant no offense to the laundromat business).

In a statement, James Ogle — chief financial officer at Access Intelligence (which owns the LeadsCon conference and publications like AdExchanger) — said:

Hosting events and the resulting revenue that comes from them is a big part of our business. However, the event space is getting more and more crowded and also more niche. Relevancy equals value, so we want to make sure our attendees are within the right target market for our exhibitors. MediaRadar provides critical transparency into the marketplace.

Sep 26, 2019

Running Percona XtraDB Cluster on Raspberry PI 3

In a previous post, I showed you how to compile Percona MySQL 5.7 on Raspberry PI 3. Now, I’ll show you how to compile and run the latest version of Percona XtraDB Cluster, 5.7.26.

We will need at least 3 Raspberry Pi 3 boards, and I recommend using an external SSD drive both to compile on and to serve as MySQL’s “datadir”, to avoid the stalls associated with the microSD card, which would cause PXC to run slowly.

In this post, we are going to run many OS commands and configure PXC, so I recommend having at least basic knowledge of PXC and Linux commands.

How to install CentOS

Download the CentOS image from this link: http://mirror.ufro.cl/centos-altarch/7.6.1810/isos/armhfp/CentOS-Userland-7-armv7hl-RaspberryPI-Minimal-1810-sda.raw.xz

I’m using this open source software https://www.balena.io/etcher/ to flash/burn the microSD card because it is very simple and works fine. I recommend using a 16GB card or larger.

If the previous process finished OK, take out the microSD card, put it into the Raspberry Pi, and boot it. Wait until the boot process finishes and log in with this user and password:

user: root
password: centos

By default, the root partition is very small and you need to expand it. Run the next command; this process is fast.

/usr/bin/rootfs-expand

Finally, to finish the installation process, it is necessary to install the many packages needed to compile PXC and percona-xtrabackup:

yum install -y cmake wget automake bzr gcc make telnet gcc-c++ vim libcurl-devel readline-devel ncurses-devel screen zlib-devel bison locate libaio libaio-devel autoconf socat libtool  libgcrypt-devel libev-devel vim-common check-devel openssl-devel boost-devel glib2-devel

Repeat all the previous steps on all 3 db nodes, because some libs are needed to start mysql and to work with percona-xtrabackup.

How to compile PXC + Galera + Xtrabackup

I am using an external SSD disk, mounted on the /mnt directory, because it’s faster than the microSD; we will use this partition to download, compile, and create the datadir.
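If the SSD isn’t prepared yet, something along these lines will format it and mount it on /mnt (this is only a sketch: /dev/sda1 is an assumption, so check the real device name with lsblk first, and remember that mkfs destroys anything already on that partition):

lsblk                                                        # identify the SSD device name
mkfs.xfs /dev/sda1                                           # assumes the SSD is /dev/sda1
mount /dev/sda1 /mnt
echo '/dev/sda1 /mnt xfs defaults,noatime 0 0' >> /etc/fstab # mount it again on every boot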

For the latest PXC version, download and decompress it using the following commands:

cd /mnt
wget https://www.percona.com/downloads/Percona-XtraDB-Cluster-57/Percona-XtraDB-Cluster-5.7.26-31.37/source/tarball/Percona-XtraDB-Cluster-5.7.26-31.37.tar.gz
tar zxf Percona-XtraDB-Cluster-5.7.26-31.37.tar.gz
cd Percona-XtraDB-Cluster-5.7.26-31.37

Raspberry Pi 3s are not x86_64 architecture; they use ARMv7, which forces us to compile PXC from scratch. For me, this process took about 6 hours, so I recommend running the next commands in a screen session.

screen -SDRL compile_pxc
cmake -DINSTALL_LAYOUT=STANDALONE -DCMAKE_INSTALL_PREFIX=/mnt/pxc5726 -DDOWNLOAD_BOOST=ON -DWITH_BOOST=/mnt/sda1/my_boost -DWITH_WSREP=1 .
make
make install

Create a new OS user to be used by the mysql process.

useradd --no-create-home mysql

Now, to continue, it is necessary to create the datadir directory. I recommend using the attached SSD disk and setting the mysql permissions on the directory:

mkdir /mnt/pxc5726/data/
chown mysql.mysql /mnt/pxc5726/data/

Configure my.cnf with the minimal params to start mysql.

vim /etc/my.cnf

[client]
socket=/mnt/pxc5726/data/mysql.sock

[mysqld]
datadir = /mnt/pxc5726/data
binlog-format = row
log_bin = /mnt/pxc5726/data/binlog
innodb_buffer_pool_size = 128M
socket=/mnt/pxc5726/data/mysql.sock
symbolic-links=0

wsrep_provider=/mnt/pxc5726/libgalera_smm.so

wsrep_on=ON
wsrep_cluster_name="pxc-cluster"
wsrep_cluster_address="gcomm://192.168.88.134"

wsrep_node_name="pxc1"
wsrep_node_address=192.168.88.134

wsrep_debug=1

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd

pxc_strict_mode=ENFORCING

[mysqld_safe]
log-error=/mnt/pxc5726/data/mysqld.log
pid-file=/mnt/pxc5726/data/mysqld.pid

So far we’ve only compiled PXC. Before we can start the mysql process, we also need to compile the Galera library, as, without this library, mysql will start as a standalone db-node and will not have any “cluster” capabilities.

To build libgalera_smm.so you need to install scons.

Download using the next command:

cd /mnt
wget http://prdownloads.sourceforge.net/scons/scons-3.1.0.tar.gz
tar zxf scons-3.1.0.tar.gz
cd scons-3.1.0
python setup.py install

Now we will proceed to compile libgalera. The source code for this library is included in the Percona XtraDB Cluster source directory we downloaded:

cd /mnt/Percona-XtraDB-Cluster-5.7.26-31.37/percona-xtradb-cluster-galera
scons -j4 libgalera_smm.so debug=3 psi=1 BOOSTINCLUDE=/mnt/my_boost/boost_1_59_0/boost BOOST_LIBS=/mnt/my_boost/boost_1_59_0/libs

If the compile process finishes ok, it will create a new file like this:

/mnt/Percona-XtraDB-Cluster-5.7.26-31.37/percona-xtradb-cluster-galera/libgalera_smm.so

I recommend copying libgalera_smm.so to the installed PXC directory so it isn’t lost when you later remove the source directory.

cp /mnt/Percona-XtraDB-Cluster-5.7.26-31.37/percona-xtradb-cluster-galera/libgalera_smm.so /mnt/pxc5726

Also, this is useful because, after installing and compiling all the packages, we can create a compressed file and copy it to the rest of the db nodes to avoid having to compile these packages again.

The last step is to download and compile percona-xtrabackup. I used the latest version:

screen -SDRL compile_xtrabackup

cd /mnt
wget https://www.percona.com/downloads/Percona-XtraBackup-2.4/Percona-XtraBackup-2.4.15/source/tarball/percona-xtrabackup-2.4.15.tar.gz
tar zxf percona-xtrabackup-2.4.15.tar.gz
cd percona-xtrabackup-2.4.15
cmake -DBUILD_CONFIG=xtrabackup_release -DWITH_MAN_PAGES=OFF -DCMAKE_INSTALL_PREFIX=/mnt/xtrabackup .
make
make install

If all the previous steps were successful, we will add those binary directories to the PATH variable:

vim /etc/profile.d/percona.sh

export PATH=$PATH:/mnt/xtrabackup/bin:/mnt/pxc5726/bin

Save and exit and run the “export” command manually to set it in the active session:

export PATH=$PATH:/mnt/xtrabackup/bin:/mnt/pxc5726/bin

Well, so far we have PXC, Galera-lib, and percona-xtrabackup compiled for the ARMv7 architecture, which is all that is needed to work with PXC, so now we can start playing.

We will call the first host where we compiled “pxc1”. We will proceed to set the hostname:

$ vim /etc/hostname
pxc1

$ hostname pxc1
$ exit

It is necessary to reconnect via ssh to refresh the hostname, and you will see something like this:

$ ssh ip
[root@pxc1 ~]#

Now we are ready to open the ports needed by PXC: xtrabackup, galera, and mysql.

$ firewall-cmd --permanent --add-port=3306/tcp
$ firewall-cmd --permanent --add-port=4567/tcp
$ firewall-cmd --permanent --add-port=4444/tcp

$ firewall-cmd --reload

$ firewall-cmd --list-ports
22/tcp 3306/tcp 4567/tcp 4444/tcp

Repeat the above procedures to set the hostname and open ports in the other db nodes.

Finally, to start the first node you’ll need to initialize the system databases and then launch the mysql process with the --wsrep-new-cluster option. This will bootstrap the cluster so that it starts with 1 node. Run the following from the command line:

$ mysqld --initialize-insecure --user=mysql --basedir=/mnt/pxc5726 --datadir=/mnt/pxc5726/data
$ mysqld_safe --defaults-file=/etc/my.cnf --user=mysql --wsrep-new-cluster &

Check the mysql error log to see any errors:

$ tailf /mnt/pxc5726/data/mysqld.log

...
2019-09-08T23:23:17.415991Z 0 [Note] /mnt/pxc5726/bin/mysqld: ready for connections.
...

Create a new mysql user; this will be used by xtrabackup to sync other db nodes during the IST or SST process.

$ mysql

CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
FLUSH PRIVILEGES;

Lastly, I recommend checking if the galera library was loaded successfully:

$ mysql

show global status where Variable_name='wsrep_provider_name' 
or Variable_name='wsrep_provider_name' 
or Variable_name='wsrep_provider_vendor' 
or Variable_name='wsrep_local_state' 
or Variable_name='wsrep_local_state_comment' 
or Variable_name='wsrep_cluster_size' 
or Variable_name='wsrep_cluster_status' 
or Variable_name='wsrep_connected' 
or Variable_name='wsrep_ready' 
or Variable_name='wsrep_evs_state' 
or Variable_name like 'wsrep_flow_control%';

You will see something like this:

+----------------------------------+-----------------------------------+
| Variable_name                    | Value                             |
+----------------------------------+-----------------------------------+
| wsrep_flow_control_paused_ns     | 0                                 |
| wsrep_flow_control_paused        | 0.000000                          |
| wsrep_flow_control_sent          | 0                                 |
| wsrep_flow_control_recv          | 0                                 |
| wsrep_flow_control_interval      | [ 100, 100 ]                      |
| wsrep_flow_control_interval_low  | 100                               |
| wsrep_flow_control_interval_high | 100                               |
| wsrep_flow_control_status        | OFF                               |
| wsrep_local_state                | 4                                 |
| wsrep_local_state_comment        | Synced                            |
| wsrep_evs_state                  | OPERATIONAL                       |
| wsrep_cluster_size               | 1                                 | <----
| wsrep_cluster_status             | Primary                           |
| wsrep_connected                  | ON                                |
| wsrep_provider_name              | Galera                            |
| wsrep_provider_vendor            | Codership Oy <info@codership.com> |
| wsrep_ready                      | ON                                |
+----------------------------------+-----------------------------------+

As you can see this is the first node and the cluster size is 1.

How to copy the previously compiled code to the rest of the db nodes

You don’t need to compile again on the rest of the db nodes; just compress the PXC and Xtrabackup directories and copy them to the other nodes, nothing else.

In the first step, we compiled and installed into the following directories:

/mnt/pxc5726
/mnt/xtrabackup

We’re going to compress both directories and copy them to the other servers (pxc2 and pxc3):

$ cd /mnt
$ tar czf pxc5726.tgz pxc5726
$ tar czf xtrabackup.tgz xtrabackup

Let’s copy to each db node:

$ cd /mnt
$ scp pxc5726.tgz xtrabackup.tgz IP_PXC2:/mnt

Now connect to each db node, decompress the archives, configure my.cnf and the other settings, and start mysql.

$ ssh IP_PXC2
$ cd /mnt
$ tar zxf pxc5726.tgz
$ tar zxf xtrabackup.tgz

From here we are going to repeat several steps that we did previously.

Connect to IP_PXC2.

$ ssh IP_PXC2

Add the following directories to the PATH variable:

vim /etc/profile.d/percona.sh

export PATH=$PATH:/mnt/xtrabackup/bin:/mnt/pxc5726/bin

Save and exit and run the “export” command manually to set it in the active session:

export PATH=$PATH:/mnt/xtrabackup/bin:/mnt/pxc5726/bin

Create a new OS user to be used by the mysql process:

useradd --no-create-home mysql

Now, to continue, it’s necessary to create the datadir directory. I recommend using the attached SSD disk and setting the mysql permissions on the directory:

mkdir /mnt/pxc5726/data/
chown mysql.mysql /mnt/pxc5726/data/

Configure my.cnf with the minimal params to start mysql.

vim /etc/my.cnf

[client]
socket=/mnt/pxc5726/data/mysql.sock

[mysqld]
datadir = /mnt/pxc5726/data
binlog-format = row
log_bin = /mnt/pxc5726/data/binlog
innodb_buffer_pool_size = 128M
socket=/mnt/pxc5726/data/mysql.sock
symbolic-links=0

wsrep_provider=/mnt/pxc5726/libgalera_smm.so

wsrep_on=ON
wsrep_cluster_name="pxc-cluster"
wsrep_cluster_address="gcomm://IP_PXC1,IP_PXC2"

wsrep_node_name="pxc2"
wsrep_node_address=IP_PXC2

wsrep_debug=1

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd

pxc_strict_mode=ENFORCING

[mysqld_safe]
log-error=/mnt/pxc5726/data/mysqld.log
pid-file=/mnt/pxc5726/data/mysqld.pid

Now we need to start the second db node. This time you don’t need to add the “--wsrep-new-cluster” param (it’s not needed on the second and subsequent nodes); just start mysql with the next command:

$ mysqld_safe --defaults-file=/etc/my.cnf --user=mysql &

Check the mysql error log to see any error:

$ tailf /mnt/pxc5726/data/mysqld.log

...
2019-09-08T23:53:17.415991Z 0 [Note] /mnt/pxc5726/bin/mysqld: ready for connections.
...

Check if the galera library was loaded successfully:

$ mysql

show global status where Variable_name='wsrep_provider_name' 
or Variable_name='wsrep_provider_name' 
or Variable_name='wsrep_provider_vendor' 
or Variable_name='wsrep_local_state' 
or Variable_name='wsrep_local_state_comment' 
or Variable_name='wsrep_cluster_size' 
or Variable_name='wsrep_cluster_status' 
or Variable_name='wsrep_connected' 
or Variable_name='wsrep_ready' 
or Variable_name='wsrep_evs_state' 
or Variable_name like 'wsrep_flow_control%';

You will see something like this:

+----------------------------------+-----------------------------------+
| Variable_name                    | Value                             |
+----------------------------------+-----------------------------------+
| wsrep_flow_control_paused_ns     | 0                                 |
| wsrep_flow_control_paused        | 0.000000                          |
| wsrep_flow_control_sent          | 0                                 |
| wsrep_flow_control_recv          | 0                                 |
| wsrep_flow_control_interval      | [ 100, 100 ]                      |
| wsrep_flow_control_interval_low  | 100                               |
| wsrep_flow_control_interval_high | 100                               |
| wsrep_flow_control_status        | OFF                               |
| wsrep_local_state                | 4                                 |
| wsrep_local_state_comment        | Synced                            |
| wsrep_evs_state                  | OPERATIONAL                       |
| wsrep_cluster_size               | 2                                 | <---
| wsrep_cluster_status             | Primary                           |
| wsrep_connected                  | ON                                |
| wsrep_provider_name              | Galera                            |
| wsrep_provider_vendor            | Codership Oy <info@codership.com> |
| wsrep_ready                      | ON                                |
+----------------------------------+-----------------------------------+

Excellent, we have the second node up; it’s now part of the cluster and the cluster size is 2.

Repeat the previous steps to configure IP_PXC3 and any additional nodes.
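For reference, the only my.cnf values that change from node to node are the node-specific ones; for the third node they would look something like this (IP_PXC1/2/3 being placeholders for your real addresses):

wsrep_cluster_address="gcomm://IP_PXC1,IP_PXC2,IP_PXC3"
wsrep_node_name="pxc3"
wsrep_node_address=IP_PXC3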

Summary

We are ready to start playing and running a lot of tests. I recommend running sysbench to see how many transactions this environment supports, killing some nodes to test SST and IST, and using pt-online-schema-change; check the blog post “How to Perform Compatible Schema Changes in Percona XtraDB Cluster” to learn how this tool works in PXC.
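As a starting point, a sysbench run could look like the sketch below (assumptions: sysbench 1.0+ is installed, an sbtest database and a MySQL user that sysbench can connect with already exist, and IP_PXC1 is a placeholder for one of the nodes; tune tables, table size, and threads to what the boards can handle):

# Load the test tables
sysbench oltp_read_write --mysql-host=IP_PXC1 --mysql-user=sbuser --mysql-password=sbpass \
  --mysql-db=sbtest --tables=4 --table-size=10000 --threads=4 prepare

# Run a 5-minute read/write workload against the cluster
sysbench oltp_read_write --mysql-host=IP_PXC1 --mysql-user=sbuser --mysql-password=sbpass \
  --mysql-db=sbtest --tables=4 --table-size=10000 --threads=4 --time=300 run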

I hope you enjoyed this guide, and if you want to compile other PXC 5.7.X versions, please go ahead, the steps will be very similar.
