Nov
30
2013
--

Editing makes me neurotic

I’m sure most people think authors are a calm and level-headed breed. Our bio and PR pictures depict us as well groomed and smiling, and we try to be witty and intellectual at dinner parties. Ours is one of the few professions where glasses and gray hair can actually improve our standing and charisma.

It’s all true (not), but you should sit with me when I’m editing or polishing, like I have been this past week. It’s true that I rarely need haircuts as the final draft takes shape – I’m too busy tearing out my hair. My expense sheets tend to fill up with purchases of coffee and whisky. There is a very good reason why authors have had a reputation for being maniacs or drunks, or just plain neurotic.

Editing is a love-hate relationship with my book. I’ve already lived with it for over a year and the honeymoon period is over. As I work my way through the manuscript, page by page, line by line, I suffer radical mood swings from “this scene totally rocks, this is NYT bestseller material”, to “what a pile of ****, I’m a terrible writer.” As the saying goes: Feel the fear and do it anyway, right?

Some of the things rattling around my head include:

Plot arcs: Is this tense and dramatic enough? Does it flow properly? Is it exciting and believable?

Characters: Will the reader fall in love with them, hate the bad guys, laugh with them, cry with them, dream about them?

Structure: Are my paragraphs of varying length? Am I overusing adjectives, or adverbs? Are all my sentences “he did this, he did that”?

Word choice: Should he scowl or frown, chuckle or giggle? Is it a chill wind or an icy wind? And my own pet bugbear: Does she look, see, gaze, study, glance, peer, gawk, goggle… or a hundred other words that just mean “she saw something, dammit”?

You might not think that every word matters but it does. So does sentence length, and knowing when to interject a feeling, knowing when not to state the obvious because the reader will get it, when to foreshadow, when to trick the reader, when to layer in backstory… The irony of writing is that if an author does pay attention to all these things, the reader will never know because they will be swept along by the story without the mechanics of the written word getting in the way. Next time you are yanked out of a book because of how a sentence was worded, stop and think about it for a moment and you’ll probably understand what the writer missed. We’re human, mess up we sometimes do. (See what I did there?)

At the editing stage it is so hard to remain objective. I can agonize over a word choice or the form of a sentence only to come back ten minutes later and put it back the way it was. I’ve been known to kill masterful art because it is too verbose or intrusive at that point. On the flip side, I’ve added in horrible and clumsy explanations because I’ve convinced myself that the reader won’t understand without it. There comes a point at which I no longer trust what I’m doing – I’m fiddling with the manuscript because I’m afraid to let it go, let it fly the nest. This is where the beta readers and professional editor save my bacon by bringing fresh objectivity.

Thankfully, I’m a harsher critic of myself than others are. I’ll beat myself up mercilessly, but if my editor or readers give me constructive criticism I will hang on their every word and happily consider their changes. I’m quite harmless in that regard. I might be neurotic but I’m not an axe-murderer, so sleep well at night.

So if you ever consider me neurotic and overly-sensitive, with a tendency to flip-flop and waffle, then you know you’ve caught me editing and polishing. Just smile, back away, and when you turn the corner, run like hell.

Oh, did I say how much I love writing? Yes, even editing. Love-hate, remember?

 

Nov
28
2013
--

Book Reading Survey

Hello everyone, and Happy Thanksgiving to those who celebrate it!

Just for a bit of fun, I thought I’d post a short survey about book reading preferences. If you read novels or novellas (and if you don’t, why are you here? ;) ), please take a couple of minutes to complete my survey.

I’m not sure if you will be able to see the results on the destination site, but if not I’ll be sure to post them here at a later date.

Take the survey

Thanks for taking part!

Nov
28
2013
--

MySQL Error: Too many connections

We have always received quite a few questions here at Percona Support about how to avoid the dreaded “Too many connections” error, as well as what the recommended value for max_connections is. So in this article I will try to cover the best possible answers to these questions, so others can mitigate similar kinds of issues.

My colleague Aurimas wrote a wonderful post some time back about changing the max_connections value via GDB while the MySQL server is running, to get rid of the “Too many connections” error without restarting MySQL. You can check it here for details.

By default, 151 is the maximum permitted number of simultaneous client connections in MySQL 5.5. If you reach the limit of max_connections you will get the “Too many connections” error when you try to connect to your MySQL server – meaning all available connections are already in use by other clients.
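
As a quick sanity check, you can compare the configured limit with the peak number of connections the server has actually seen, and raise the limit at runtime without a restart (a minimal sketch; the value 300 is purely illustrative):

-- Configured limit vs. the high-water mark of connections actually used
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';

-- Raise the limit on a running server (takes effect immediately,
-- but is lost on restart unless also set in my.cnf)
SET GLOBAL max_connections = 300;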

MySQL permits one extra connection on top of the max_connections limit, which is reserved for a database user having the SUPER privilege, in order to diagnose connection problems. Normally the administrator user has this SUPER privilege. You should avoid granting the SUPER privilege to app users.
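
If you want to check which accounts could still get in through that reserved connection, one quick way is to look at the grant tables directly (a minimal sketch):

-- List accounts holding SUPER; app users should not appear here
SELECT user, host FROM mysql.user WHERE Super_priv = 'Y';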

MySQL uses one thread per client connection, and many active threads are a performance killer. Usually a high number of concurrent connections executing queries in parallel can cause significant slowdown and increase the chances for deadlocks. Prior to MySQL 5.5 it didn’t scale well with many connections, and although MySQL has been getting better and better since then, if you have hundreds of active connections doing real work (this doesn’t count sleeping [idle] connections) the memory usage will grow. Each connection requires per-thread buffers, and implicit in-memory tables require more memory on top of the global buffers. In addition, each connection may use up to tmp_table_size/max_heap_table_size of memory, although these buffers are not allocated immediately when a new connection is made.

Most of the time, an overly high number of connections is the result of either bugs in applications not closing connections properly, or of wrong design, such as establishing a connection to MySQL and then leaving it open while the application is busy doing something else before closing the MySQL handler. In cases where an application doesn’t close connections properly, wait_timeout is an important parameter to tune: it discards unused or idle connections to minimize the number of active connections to your MySQL server, which ultimately helps to avoid the “Too many connections” error. Some systems run along fine with even a high number of connected threads because most of those connections are idle, and in general sleeping threads do not take much memory – 512 KB or less. Threads_running is a valuable metric to monitor, as it doesn’t count sleeping threads: it shows the number of threads actively processing queries, while the Threads_connected status variable shows all connected threads, including idle ones. Peter wrote a nice post on this; you can find it here for further details.
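
To watch this in practice, the status counters and the idle-connection timeout can be checked and adjusted from the client (a minimal sketch; the 60-second timeout is an illustrative value, not a recommendation):

-- Threads_running counts active work; Threads_connected includes idle clients
SHOW GLOBAL STATUS LIKE 'Threads_%';

-- Disconnect clients idle longer than 60 seconds
-- (applies to connections opened after the change)
SET GLOBAL wait_timeout = 60;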

If you are using a connection pool on the application side, max_connections should be bigger than the maximum number of connections on the pool side. Connection pooling is also a good alternative if you are expecting a high number of connections. Now, what should the recommended value for max_connections be?

There is no single right answer to that question: it depends on the amount of RAM available and the memory usage of each connection. Increasing the max_connections value also increases the number of file descriptors that mysqld requires. Note: there is no hard limit on how high max_connections may be set, so you have to choose it wisely, based on your workload, the number of simultaneous connections to the MySQL server, and so on. In general, allowing too high a max_connections value is not recommended, because under locking conditions or slowdowns all of those connections running at once can raise a huge contention issue. When active connections use temporary/memory tables, memory usage can climb even higher. On systems with small RAM, or with a hard limit on connections enforced on the application side, we can use small max_connections values like 100-300. On systems with 16 GB of RAM or more, max_connections=1000 is a good idea, provided the per-connection buffers keep good default values. On some systems we can see up to 8K max connections, but such systems usually go down in case of load spikes.
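
As a rough starting point, those numbers translate into my.cnf settings along these lines (a sketch only – the figure is illustrative and must be sized to your own RAM, per-connection buffers and workload):

[mysqld]
# Illustrative limit for a server with 16 GB of RAM or more;
# keep per-connection buffers at sensible defaults when raising this
max_connections = 1000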

To deal with this, Oracle and the MariaDB team implemented a thread pool, and Percona Server ported the feature from MariaDB. Read here about its implementation in Percona Server. With a properly configured thread pool you may expect throughput NOT to decrease for up to a few thousand concurrent connections, for some types of workload at least.
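
Enabling it in Percona Server is a one-line change in my.cnf (a minimal sketch; the pool sizing options are left at their defaults here):

[mysqld]
# Switch from one-thread-per-connection to the thread pool
thread_handling = pool-of-threads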

NOTE: Beware that in MySQL 5.6 a lot of memory is allocated when you set the max_connections value too high. Check this bug report: http://bugs.mysql.com/bug.php?id=68514

Conclusion:
There is no fixed rule for choosing the appropriate max_connections value, because it depends on your workload. Take into account that each connected thread needs memory and adds expensive context switching. I would recommend choosing a reasonable max_connections value for your workload, and trying to avoid too many connections being opened at the same time, so the application keeps functioning properly.


Nov
27
2013
--

Percona XtraBackup – A workaround to the failed assertion bug

I recently conducted a test backup of my “master-slave” setup in VirtualBox as I was migrating from Percona Server 5.6.12 to version 5.6.13-rel61.0 with Percona XtraBackup v2.2.0 rev. 4885. However, while doing the backup on my slave, I encountered this problem:

[04] Compressing and streaming ./test/checksum.ibd
[01] Compressing and streaming ./mysql/slave_master_info.ibd
Assertion "to_read % cursor->page_size == 0" failed at fil_cur.cc:293
innobackupex: Error: The xtrabackup child process has died at /usr/bin/innobackupex line 2641.

This is related to a bug posted by my colleague George: https://bugs.launchpad.net/percona-xtrabackup/+bug/1177201. Tracing the code at fil_cur.cc line 293 shows that the failure comes from the assertion that the amount of data to read must be an exact multiple of the page size:

xb_a(to_read > 0 && to_read <= 0xFFFFFFFFLL);
xb_a(to_read % cursor->page_size == 0);
npages = (ulint) (to_read >> cursor->page_size_shift);

where xb_a() is defined as:

#define xb_a(expr)                                                      \
        do {                                                            \
                if (!(expr)) {                                          \
                        msg("Assertion \"%s\" failed at %s:%lu\n",      \
                            #expr, __FILE__, (ulong) __LINE__);         \
                        abort();                                        \
                }                                                       \
        } while (0);

at line 29 of common.h. The problem lies in this assertion:

xb_a(to_read % cursor->page_size == 0);

I modified the code to add some verbosity for the values of to_read and cursor->page_size. Here, to_read corresponds to the size of your *.ibd file being read, and cursor->page_size is the page size XtraBackup uses based on the version of MySQL you’re trying to back up. As of MySQL 5.6.4 and Percona Server 5.5, the innodb_page_size option was introduced, which can be set to 16k, 8k or 4k. The code I added was just a simple call to the msg() function:

msg("\n\n<<<<<< to_read value is: \"%lld\", value of page size is: \"%llu\", and modulus is: %d\n\n", (long long) to_read, cursor->page_size, to_read % cursor->page_size);

This shows as:

<<<<<< to_read value is: "98318", value of page size is: "16384", and modulus is: 14
Assertion "to_read % cursor->page_size == 0" failed at fil_cur.cc:296
innobackupex: Error: The xtrabackup child process has died at /usr/bin/innobackupex line 2641.

The to_read value matches the size of my *.ibd file:

-rw-rw---- 1 mysql mysql 98318 Nov  6 12:03 slave_master_info.ibd

Still, I have no clue how those 14 bytes were added to the tablespace or what caused them (my bad: I was not able to back up or copy the file for further investigation). I searched for a possibly relevant bug and could only find bug #67963, posted by Jeremy Cole, which might be relevant to the root cause of those 14 bytes. Over the past few days I made a few attempts to corrupt the tablespaces of other tables in the same way, but I still had no luck (any comments regarding this would be really awesome!). My best clue is that some random garbage consisting of 14 bytes was inserted into the tablespace, and on that belief I couldn’t point to Percona XtraBackup as the culprit. To fix the problem, I came up with running an ALTER TABLE statement:

mysql> alter table mysql.slave_master_info engine=InnoDB;
Query OK, 1 row affected (0.36 sec)
Records: 1  Duplicates: 0  Warnings: 0

Which then shows as…

-rw-rw---- 1 mysql mysql 98304 Nov  6 12:17 slave_master_info.ibd

where…

#> echo "scale=5;98304/16384"|bc
6.00000

…is evenly divisible by 16 KiB, which shows that the tablespace is now aligned to the desired (default) page size. I had a conversation about this with our developers, especially George and Aleks, who said that this could serve as a workaround if such a case is encountered. Basically, the ALTER TABLE statement here (I presume OPTIMIZE TABLE might work as well) defragments the tablespace of the affected *.ibd file, which removes those unwanted bytes: ALTER TABLE creates a copy of the original slave_master_info table, destroys the garbage by deleting the old copy, and renames the new copy to the table’s original name, slave_master_info. If your innodb_page_size value is set to 4 KiB or 8 KiB, XtraBackup will be able to identify this via your my.cnf file. You can verify this by changing the value from 4 KiB to 8 KiB and running:

# xtrabackup_56 --help
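
For completeness, the OPTIMIZE TABLE alternative I mentioned above would look like this (untested on my side; for InnoDB tables, OPTIMIZE is internally mapped to a recreate plus analyze, so the effect on the tablespace should be the same):

mysql> optimize table mysql.slave_master_info;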

As a supplementary pre-check to see whether your *.ibd files are aligned in accordance with your page size, you can use awk:

find . -name "*.ibd" -exec ls -alt {} \; | awk '{print $9 ": " $5 " mod of 16384 is: " $5 % 16384}'

or you can check through INFORMATION_SCHEMA:

SELECT NAME, PAGE_SIZE, (PAGE_SIZE%16384) MOD_RES FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES;

However, there’s one thing I haven’t tried yet! Since XtraBackup has the ability to read from your *.cnf file, I am not sure whether the bug might also occur with prior versions where innodb_page_size was changed in the codebase itself, i.e. through innobase/include/univ.i. If a backup fails that way, it could be that one of your *.ibd files is not within its space bounds, and by running the pre-check above you can get the list of tablespace files that are not aligned in accordance with your InnoDB page size.

If you find other workarounds that alleviate the failed assertion bug, please post them in the comments below. I will investigate this issue further in the future, and will also try it with older versions of MySQL. Thank you!


Nov
26
2013
--

Talking Drupal #025 – Giving Thanks

Talking about the Thanksgiving tradition and what we are thankful for in the Drupal community. 

Hosts

Stephen Cross – www.ParallaxInfoTech.com @stephencross

Jason Pamental – www.hwdesignco.com @jpamental

John Picozzi – www.RubicDesign.com @johnpicozzi 

Nic Laflin – www.nLightened.net @nicxvan


Nov
26
2013
--

Geographical disaster recovery with PRM: Register for Dec. 4 Webinar

Downtime caused by a disaster is probable in your application’s lifetime. Whether caused by a large-scale geographical disaster, a cyber attack, spiked consumer demand, or a relatively less catastrophic event, downtime equates to lost business. Setting up geographical disaster recovery (geo-DR) has always been challenging, but Percona Replication Manager (PRM) with booth provides a solution for an efficient geo-DR deployment.

Join me on Wednesday, December 4th at 10 a.m. PST for a presentation of the geo-DR capabilities of PRM, followed by a step-by-step setup of a full geo-DR solution. The title of the webinar is “MySQL High Availability and Geographical Disaster Recovery with Percona Replication Manager” and you can register here.

Feel free to ask questions in advance here in the comments section. The webinar will be recorded and available for replay here shortly afterward.

I hope to see you on Wednesday, December 4th!


Nov
22
2013
--

New wsrep_provider_options in Galera 3.x and Percona XtraDB Cluster 5.6

Now that Percona XtraDB Cluster 5.6 is out in beta, I wanted to start a series talking about new features in Galera 3 and PXC 5.6.  On the surface, Galera 3 doesn’t reveal a lot of new features yet, but there has been a lot of refactoring of the system in preparation for great new features in the future.

Galera vs MySQL options

wsrep_provider_options is a semicolon-separated list of key => value configurations that set low-level Galera library options.  These tweak the actual cluster communication and replication in the group communication system.  By contrast, other PXC global variables (like ‘wsrep%’) are set like other mysqld options and generally have more to do with MySQL/Galera integration.   This post will cover the Galera options; the MySQL-level changes will have to wait for another post.
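
You can inspect the full set of options your node is currently running with from the MySQL client (the \G terminator just makes the long value readable):

mysql> show global variables like 'wsrep_provider_options'\G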

Here are the differences in the wsrep_provider_options between 5.5 and 5.6:

$ diff 5.5 5.6
38a39
> gmcast.segment = 0;
51,52c52,56
< replicator.causal_read_timeout = PT30S;
< replicator.commit_order = 3 |
---
> repl.causal_read_timeout = PT30S;
> repl.commit_order = 3;
> repl.key_format = FLAT8;
> repl.proto_max = 5;
> socket.checksum = 2

gmcast.segment=0

This is a new setting in 3.x and allows us to distinguish between nodes in different WAN segments.  For example, all nodes in a single datacenter would be configured with the same segment number, but each datacenter would have its own segment.
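
For example, a two-datacenter layout might be configured like this in my.cnf (a sketch; the segment numbers here are arbitrary labels, and any provider options not listed keep their defaults):

# On every node in datacenter A:
wsrep_provider_options="gmcast.segment=0"

# On every node in datacenter B:
wsrep_provider_options="gmcast.segment=1"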

Segments are currently used in two main ways:

  1. Replication traffic between segments is minimized.  Writesets originating in one segment should be relayed through only one node in every other segment.  From those local relays replication is propagated to the rest of the nodes in each segment respectively.
  2. Segments are used in donor selection: donors in the same segment are preferred, but not required.
2013-11-21 17:26:59 3853 [Warning] WSREP: There are no nodes in the same segment that will ever be able to become donors, yet there is a suitable donor outside. Will use that one.

replicator -> repl

The older ‘replicator’ tag is now renamed to ‘repl’ and the causal_read_timeout and commit_order settings have moved there.  No news here really.

repl.key_format = FLAT8

Every writeset in Galera has associated keys.  These keys are effectively a list of primary, unique, and foreign keys associated with all rows modified in the writeset.  In Galera 2 these keys were replicated as literal values, but in Galera 3 they are hashed in either 8 or 16 byte values (FLAT8 vs FLAT16).  This should generally make the key sizes smaller, especially with large CHAR keys.

Because the keys are now hashed, there can be collisions where two distinct literal key values result in the same 8-byte hashed value.  This means practically that the places in Galera that rely on keys may falsely believe that there is a match between two writesets when there really is not.  This should be quite rare.  This false positive could affect:

  • Local certification failures (Deadlocks on commit) that are unnecessary.
  • Parallel apply – things could be done in a stricter order (i.e., less parallelization) than necessary

Neither case affects data consistency.  The tradeoff is more efficiency in keys and key operations generally making writesets smaller and certification faster.

repl.proto_max

Limits the Galera protocol version that can be used in the cluster.  Codership’s documentation states it is for debugging only.

socket.checksum = 2

This modifies the previous network packet checksum algorithm (CRC32) to support CRC32-C which is hardware accelerated on supported gear.  Packet checksums also can now be completely disabled (=0).

In the near future I’ll write some posts about WAN segments in more detail and about the other global and status variables introduced in PXC 5.6.


Nov
21
2013
--

Talking Drupal #24 – SASS and Compass

Show Topics

– What is SASS 

– Benefits 

– How SASS works

– What is Compass?

– Key features

– Getting started with SASS

– SASS and Drupal

Module of the Week

– ThemeKey – https://drupal.org/project/themekey

Links

– SASS – http://sass-lang.com/

– Compass – http://compass-style.org/

– Why SASS – A List Apart – http://alistapart.com/article/why-sass

– Book: Sass and Compass for Designers – http://www.amazon.com/Sass-Compass-Designers-Ben-Frain-ebook/dp/B00CITNQI4/ref=sr_1_1?ie=UTF8&qid=1384973043&sr=8-1&keywords=sass+and+compass+for+designers

– Mixins – http://compass-style.org/index/mixins/

– Drupal Nights – Jason’s giving a talk on ‘Thinking Responsively’ at BioRAFT: https://groups.drupal.org/node/373083

Hosts

– Stephen Cross – www.ParallaxInfoTech.com @stephencross

– Jason Pamental – www.hwdesignco.com @jpamental

– John Picozzi – www.RubicDesign.com @johnpicozzi 

– Nic Laflin – www.nLightened.net @nicxvan

– Tim Dickens – www.ParallaxInfoTech.com @Tregonian 

