Apr 30, 2014

Galera 3.5 for Percona XtraDB Cluster 5.6 is now available

Galera 3.5 improvements are now available for use with Percona XtraDB Cluster 5.6.

Bugs fixed in Galera 3.5 include:

  • A crash due to an off-by-one error in certification index cleanup has been fixed. Bugs fixed #1309227 and #1267507.
  • Fixed a gcache corruption that could lead to a cluster crash. Bugs fixed #1301616 and #1152565.
  • A joining node could crash during IST when the number of writesets was high and the node was under load. Bug fixed #1284803.
  • Due to a bug in certification index cleanup, an attempt to match against an empty key could cause a node shutdown and possibly compromise node consistency. Bug fixed #1274199.
  • A new wsrep_provider_options option, repl.max_ws_size, has been introduced to make the maximum writeset size configurable (see the configuration sketch below). Bug fixed #1270921.
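
As an illustration only, such a provider option is typically set through wsrep_provider_options in my.cnf; the value below is an arbitrary example, not a recommended setting:

    [mysqld]
    # Example only: cap the maximum replicated writeset size at 1 GB.
    wsrep_provider_options="repl.max_ws_size=1073741824"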

Users affected by these issues can now upgrade their Percona XtraDB Cluster 5.6 deployments to use Galera 3.5 without having to upgrade other Percona XtraDB Cluster components. The updated Galera binaries are available in the Percona repositories.

All of Percona’s software is open source and free. All the details about this release can be found in the Galera 3.5 milestone at Launchpad.

Documentation for Percona XtraDB Cluster is available online, along with installation and upgrade instructions. We did our best to eliminate bugs and problems during testing, but this is software, so bugs are expected. If you encounter any, please report them to our bug tracking system.

The post Galera 3.5 for Percona XtraDB Cluster 5.6 is now available appeared first on MySQL Performance Blog.

Apr 30, 2014

Talking Drupal #047 – Backups

Topics

  • Backup horror stories or success stories
  • Personal development backup
  • Server backup
  • Code and files backup
  • Database backup

Modules

  • Backup and Migrate – https://drupal.org/project/backup_migrate
  • Backup and Migrate Files – https://drupal.org/project/backup_migrate_files

Resources

  • Node Squirrel – http://www.nodesquirrel.com/
  • Jason’s upcoming book – typeresponsively.com

Module of the Week

  • Shiny Theme – https://drupal.org/project/shiny

Hosts

  • Stephen Cross – www.ParallaxInfoTech.com @stephencross
  • Jason Pamental – www.hwdesignco.com @jpamental
  • John Picozzi – www.oomphinc.com @johnpicozzi
  • Nic Laflin – www.nLightened.net @nicxvan

Apr 30, 2014

Red Hat Buys Inktank For $175M In Cash To Beef Up Its Cloud Storage Offerings

Red Hat, the open source software provider, is squaring up to Amazon in the storage market. It has just announced that it is buying Inktank, a developer of open-source storage systems, for $175 million in cash. Red Hat says it will combine Inktank’s primary product, Inktank Ceph Enterprise, with its own GlusterFS-based storage offering. Red Hat says the deal will make it into the largest… Read More

Apr 30, 2014

Aviso Aims To Take Guesswork Out Of Earnings Forecasts

Aviso came out of stealth mode today with the goal of helping companies provide more accurate earnings forecasts. K.V. Rao, co-founder and CEO at Aviso, explained that the company has been working on the problem for two years and that they hope to make earnings forecasting, which when wrong can cost companies millions in market cap, less about gut and instinct and more about data-driven decision… Read More

Apr 29, 2014

Microsoft Is Technology’s Comeback Kid

Microsoft. For a generation of technology executives, the name strikes fear into even the most iron-willed business leaders. A lion among gazelles, its very gaze into a market could cause investors and analysts to flee in terror. Yet, its name has become a punchline among today’s technorati, a joke about formerly dominant companies evolving into large, plodding kludges. Missed deadlines, delayed… Read More

Apr 29, 2014

ScaleArc: Real-world application testing with WordPress (benchmark test)

ScaleArc recently hired Percona to perform various tests on its database traffic management product. This post is the outcome of the benchmarks carried out by me and ScaleArc co-founder and chief architect, Uday Sawant.

The goal of this benchmark was to identify ScaleArc’s overhead using a real-world application – the world’s most popular (according to Wikipedia) content management system and blog engine: WordPress.

The tests also sought to identify the benefit of caching for this type of workload. The caching parameters represent more real-life circumstances than we applied in the sysbench performance tests – the goal here was not just to saturate the cache. For this reason, we created an artificial WordPress blog with generated data.

The size of the database was roughly 4 GB. For this particular test, we saw that ScaleArc introduces very little overhead and that caching increased throughput 3.5 times at peak capacity. Response times for queries that hit the cache decreased substantially; for example, a 5-second main page load dropped to less than 1 second when certain queries were served from the cache. It is hard to generalize about response time here, because WordPress issues different kinds of requests with different computational costs and therefore different response times.

Test description

The pre-generated test database contained the following:

  • 100 users
  • 25 categories
  • 100,000 posts (stories)
  • 300,000 comments (3 per post)

One iteration of the load contained the following:

  • Homepage retrieval
  • 10 story (post) page retrievals
  • 3 category page retrievals
  • Log in as a random user
  • That random user posts a new story and comments on an existing post

We think this usage pattern is close to reality: most people just visit blogs, but some write posts and comments. For the test, we used WordPress version 3.8.1 and wrote a simple shell script that could run these iterations using multiple processes. Parts of the pattern are not fully realistic, though. Choosing a random post to comment on gives a uniform comment distribution, whereas in reality some posts attract far more comments than others and some get none at all. The test doesn’t capture that nuance, but it doesn’t change the big picture.
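
The original load generator was a shell script; the Python sketch below reproduces only the read-only part of one iteration under a few assumptions: a hypothetical base URL, default WordPress query-style links (?p=ID, ?cat=ID), and sequential post/category IDs. Logging in and posting, which need an authenticated session, are left out.

    # Rough sketch of the load pattern; not the script used in the test.
    import random
    import multiprocessing
    import urllib.request

    BASE = "http://blog.example.com"     # hypothetical test blog
    POSTS, CATEGORIES = 100000, 25       # sizes of the generated dataset

    def fetch(url):
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()

    def one_iteration():
        fetch(BASE + "/")                                     # homepage
        for _ in range(10):                                   # 10 story pages
            fetch(BASE + "/?p=%d" % random.randint(1, POSTS))
        for _ in range(3):                                    # 3 category pages
            fetch(BASE + "/?cat=%d" % random.randint(1, CATEGORIES))
        # Logging in as a random user and posting a story plus a comment
        # is omitted; it requires an authenticated wp-login.php session.

    def worker(iterations):
        for _ in range(iterations):
            one_iteration()

    if __name__ == "__main__":
        clients = 32                     # number of parallel client processes
        procs = [multiprocessing.Process(target=worker, args=(100,))
                 for _ in range(clients)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()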

We measured 3 scenarios:

  • Direct connection to the database (direct_wp).
  • Connection through ScaleArc without caching.
  • Connection through ScaleArc with caching enabled.

When caching is enabled, queries belonging to comments were cached for 5 minutes, queries belonging to the home page were cached for 15 minutes, and queries belonging to stories (posts) were cached for 30 minutes.

We varied the number of parallel iterations. Each test ran for an hour.

Results for direct database connection

   Threads: 1, Iterations: 180, Time[sec]: 3605
   Threads: 2, Iterations: 356, Time[sec]: 3616
   Threads: 4, Iterations: 780, Time[sec]: 3618
   Threads: 8, Iterations: 1408, Time[sec]: 3614
   Threads: 16, Iterations: 2144, Time[sec]: 3619
   Threads: 32, Iterations: 2432, Time[sec]: 3646
   Threads: 64, Iterations: 2368, Time[sec]: 3635
   Threads: 128, Iterations: 2432, Time[sec]: 3722

The result above is the summary output of the script we used. The data shows we reach peak capacity at 32 concurrent threads.

Results for connecting through ScaleArc

   Threads: 1, Iterations: 171, Time[sec]: 3604
   Threads: 2, Iterations: 342, Time[sec]: 3606
   Threads: 4, Iterations: 740, Time[sec]: 3619
   Threads: 8, Iterations: 1304, Time[sec]: 3609
   Threads: 16, Iterations: 2048, Time[sec]: 3625
   Threads: 32, Iterations: 2336, Time[sec]: 3638
   Threads: 64, Iterations: 2304, Time[sec]: 3678
   Threads: 128, Iterations: 2304, Time[sec]: 3675

The results are almost identical. Because a typical query in this example is quite expensive, the overhead of ScaleArc here is barely measurable.

Results for connecting through ScaleArc with caching enabled

   Threads: 1, Iterations: 437, Time[sec]: 3601
   Threads: 2, Iterations: 886, Time[sec]: 3604
   Threads: 4, Iterations: 1788, Time[sec]: 3605
   Threads: 8, Iterations: 3336, Time[sec]: 3600
   Threads: 16, Iterations: 6880, Time[sec]: 3606
   Threads: 32, Iterations: 8832, Time[sec]: 3600
   Threads: 64, Iterations: 9024, Time[sec]: 3614
   Threads: 128, Iterations: 8576, Time[sec]: 3630

Caching improved response time even for a single thread. At 32 threads, we see more than 3.5x improvement in throughput. Caching is a great help here for the same reason the overhead is barely measurable: the queries are more expensive in general, so more resources are spared when they are not run.
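
As a quick sanity check on that figure, the peak numbers from the summary tables above work out to roughly 3.7x:

    # Peak throughput (iterations per second) at 32 threads, taken from
    # the summary tables above.
    direct = 2432 / 3646    # direct connection, ~0.67 iterations/s
    cached = 8832 / 3600    # ScaleArc with caching, ~2.45 iterations/s
    print(round(cached / direct, 2))    # ~3.68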

Throughput

From the web server’s access log, we created a per-second throughput graph. We are talking about requests per second here. Please note that the variance is relatively high, because the requests are not identical: retrieving the main page is a different request and has a different cost than retrieving a story page.

[Figure: requests per second over time for the three scenarios]

The red and blue dots are plotted practically on top of each other, and the green ones are always above them. The green series shows greater variance because, even with caching enabled, we used realistic TTLs, so cached items did actually expire during the test. When a cache entry had expired, requests took longer and throughput was lower; when the cache was populated, requests completed faster and throughput was higher.
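
For reference, the per-second counts behind such a graph can be pulled from a standard combined-format access log with something like the sketch below; the log path and timestamp layout are assumptions, not details from the original test.

    # Sketch: requests per second from an Apache/Nginx-style access log,
    # assuming the usual "[30/Apr/2014:13:55:36 +0000]" timestamp field.
    import re
    from collections import Counter

    per_second = Counter()
    ts = re.compile(r'\[([^ \]]+)')      # captures 30/Apr/2014:13:55:36
    with open("access.log") as log:      # hypothetical log path
        for line in log:
            m = ts.search(line)
            if m:
                per_second[m.group(1)] += 1

    # Within a single one-hour run the lexical order of these keys matches
    # chronological order, which is enough for plotting.
    for second, requests in sorted(per_second.items()):
        print(second, requests)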

CPU utilization

[Figure: CPU utilization of the web and database servers for direct connection (left), ScaleArc with caching (middle), and ScaleArc without caching (right)]

CPU utilization characteristics are pretty much the same on the left and right sides (direct connection on the left and ScaleArc without caching on the right). In the middle, we can see that the web server’s CPU gets completely utilized sooner with caching: because data comes back faster from the cache, the web server serves more requests, which costs more CPU. On the other hand, the database server’s CPU utilization is significantly lower when caching is used. Its bars reach the top on the left and right sides, while in the middle there are bars both at the top and at the bottom, because the test fully utilizes the database server’s CPU only on cache misses.

Because ScaleArc serves the cache hits itself, those requests never reach the database, so the database is not used at all when requests are served from the cache. In the tests with caching on, the bottleneck became the web server, which is a component that is much easier to scale than the database.

There are two more key points to take away here. First, regardless of whether caching is turned on or off, this workload is not too much for ScaleArc. Second, the client we ran the measurement scripts on was not the bottleneck.

Conclusion

The goal of these benchmarks was to show that ScaleArc has very little overhead and that caching can be beneficial for a real-world application with a “read mostly” workload and relatively expensive reads (expensive meaning that the network round trip is not a significant contributor to the read’s response time). A blog is exactly that type of application: typically, more people visit than comment. The test showed that ScaleArc supports this scenario well, delivering 3.5x throughput at peak capacity. It’s worth mentioning that if this system needed to be scaled, more web servers could be added, as well as more read slaves. Those read slaves can take read queries either via a WordPress plugin that supports this or via ScaleArc’s read/write splitting facility (which treats autocommit SELECTs as reads); in the latter case, the caching benefit is present for the slaves as well.

The post ScaleArc: Real-world application testing with WordPress (benchmark test) appeared first on MySQL Performance Blog.

Apr 29, 2014

Tidemark Adds Finance Playbooks And $32M In New Funding

Tidemark, a business financial forecasting startup, today announced a major upgrade to their product and another $32M in funding. The product update, which CEO Christian Gheorghe described as a major upgrade, allows you to model the old board books, full of Excel spreadsheets. But because you are in a digital format, instead of a static printed book or .pdf, you can drill down into any financial… Read More

Apr 29, 2014

Tag Management Firm Tealium Gets $20M In New Financing

Tealium has received $20 million in new financing from Silver Lake Waterman. The San Diego-based startup will use the capital to grow all its business units, including engineering, sales, customer service, and marketing. Tealium makes a tag management system that captures clean data streams for enterprise marketers doing analytics on websites, mobile sites, and mobile apps. This lets companies… Read More

Apr 28, 2014

Authy Brings Two-Factor Authentication To The Desktop

When you use two-factor authentication, chances are you are getting your second factor from a mobile phone app like Google Authenticator or Authy. This makes sense, given that you want to ensure that nobody who has access to your computer also has access to the application that provides you with your second key for accessing your private accounts. Authy is turning this idea on its head today by… Read More
