Apr
25
2019
--

Creating Custom Sysbench Scripts

sysbench-lua for benchmark tooling

Sysbench has long been established as the de facto standard for benchmarking MySQL performance. Percona relies on it daily, and even Oracle uses it when blogging about new features in MySQL 8. Sysbench ships with several pre-defined benchmarking tests, written in an easy-to-understand scripting language called Lua, including oltp_read_write, oltp_point_select, oltp_insert, and tpcc. There are over ten such scripts to emulate various behaviors found in standard OLTP applications.

But what if your application does not fit the pattern of traditional OLTP? How can you continue to utilize the power of load-testing, benchmarking, and results analysis with sysbench? Just write your own Lua script!

For those who want to jump ahead and see the full source, here you go.

Sysbench API

To start off, each Lua script you create must implement three core sysbench-Lua API functions. These are thread_init, thread_done, and event. You can read the comments in the code below for the meaning of each function and what is happening inside.

-- Called by sysbench once per worker thread to initialize the script
function thread_init()
  -- Create globals to be used elsewhere in the script
  -- drv - initialize the sysbench mysql driver
  drv = sysbench.sql.driver()
  -- con - represents the connection to MySQL
  con = drv:connect()
end
-- Called by sysbench when each worker thread is done executing
function thread_done()
  -- Disconnect/close connection to MySQL
  con:disconnect()
end
-- Called by sysbench once for each event (i.e. each iteration)
function event()
  -- If user requested to disable transactions,
  -- do not execute BEGIN statement
  if not sysbench.opt.skip_trx then
    con:query("BEGIN")
  end
  -- Run our custom statements
  execute_selects()
  execute_inserts()
  -- Like above, if transactions are disabled,
  -- do not execute COMMIT
  if not sysbench.opt.skip_trx then
    con:query("COMMIT")
  end
end

That’s all pretty simple and should function as a good template in your scripts. Now let’s take a look at the rest of the script.

Sanity checks and options

Now let’s get into the core code. At the top you’ll find the following sections:

if sysbench.cmdline.command == nil then
   error("Command is required. Supported commands: run")
end
sysbench.cmdline.options = {
  point_selects = {"Number of point SELECT queries to run", 5},
  skip_trx = {"Do not use BEGIN/COMMIT; Use global auto_commit value", false}
}

The first section is a sanity check to make sure the user actually wants to run this test. Other test scripts, mentioned above, support commands like prepare, run, and cleanup. Our script only supports run as the data we are using is pre-populated by our core application.

The second section allows us, the script writers, to let the user pass some options specific to our test script. In the code above, we can see an option for the number of point SELECT statements that will be run on each thread/iteration (the default is 5), and another option that lets the user disable BEGIN/COMMIT if they so desire (the default is false, meaning transactions are used). If you want more customization in your script, simply add more options. You’ll see how to reference these parameters later on.
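
Below is a minimal sketch of how those declared options surface inside the script (the report_options function is invented for this post, not part of the script): each entry in sysbench.cmdline.options becomes a field on sysbench.opt, and the user can override it on the command line with, for example, --point-selects=20 or --skip-trx=on.

-- Hypothetical helper, not in the original script: shows how declared
-- options are read back via sysbench.opt
function report_options()
  -- 5 unless the user passes e.g. --point-selects=20
  print("point_selects = " .. sysbench.opt.point_selects)
  -- false unless the user passes --skip-trx=on
  print("skip_trx = " .. tostring(sysbench.opt.skip_trx))
end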

The queries

Now it is time to define the custom queries we want to execute in our script.

-- Array of categories to be used in the INSERTs
local page_types = { "actor", "character", "movie" }
-- Array of COUNT(*) queries
local select_counts = {
  "SELECT COUNT(*) FROM imdb.title"
}
-- Array of point SELECT statements; those with %d take one integer parameter
local select_points = {
  "SELECT * FROM imdb.title WHERE id = %d",
  "SELECT * FROM imdb.comments ORDER BY id DESC limit 10",
  "SELECT AVG(rating) avg FROM imdb.movie_ratings WHERE movie_id = %d",
  "SELECT * FROM imdb.users ORDER BY RAND() LIMIT 1"
}
-- Array of SELECT statements that have 1 string parameter
local select_string = {
  "SELECT * FROM imdb.title WHERE title LIKE '%s%%'"
}
-- INSERT statements
local inserts = {
  "INSERT INTO imdb.users (email_address, first_name, last_name) VALUES ('%s', '%s', '%s')",
  "INSERT INTO imdb.page_views (type, viewed_id, user_id) VALUES ('%s', %d, %d)"
}

The above code defines several arrays/lists of different queries. Why is this necessary? Later on in the code, we will have to parse each SQL statement and populate/replace the various parameters with randomly generated values. It would not do us any good to repeat the same SELECT * FROM fooTable WHERE id = 44 every time, now would it? Certainly not. We want to generate random numbers and have our queries select from the entire dataset.

Some queries have no parameters, some take an integer, and some take a string. We will handle these differently below, which is why they are in different arrays above. This method also allows for future expansion. When you want to run additional queries within the script, just add another line to the appropriate array; no need to change any other code.
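
To make the parameter substitution concrete, here is a tiny Lua illustration of what string.format does with one of the point-select templates (the id of 44 is hard-coded purely for the example; the script generates it randomly):

-- Illustration only: filling a %d placeholder with a concrete id
local template = "SELECT * FROM imdb.title WHERE id = %d"
local query = string.format(template, 44)
-- query is now: SELECT * FROM imdb.title WHERE id = 44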

Parse and execute

The function below, execute_selects, will be called from the parent function, event, which we discussed earlier in the post. You can see for-loops for each of the three SELECT categories we created above. The inline comments should help explain what is happening. Note the use of the user-provided option --point-selects in the second loop below, which we created previously in the 'Sanity checks and options' section.

function execute_selects()
  -- Execute each simple, no parameters, SELECT
  for i, o in ipairs(select_counts) do
    con:query(o)
  end
  -- Loop for however many queries the
  -- user wants to execute in this category
  for i = 1, sysbench.opt.point_selects do
    -- select random query from list
    local randQuery = select_points[math.random(#select_points)]
    -- generate random id and execute
    local id = sysbench.rand.pareto(1, 3000000)
    con:query(string.format(randQuery, id))
  end
  -- generate random string and execute
  for i, o in ipairs(select_string) do
    local str = sysbench.rand.string(string.rep("@", sysbench.rand.special(2, 15)))
    con:query(string.format(o, str))
  end
end

Two more things to mention about this code. First, you will notice the use of sysbench.rand.pareto to generate a random number between 1 and 3,000,000. For our dataset, we know that each table referenced in queries using WHERE id = ? has at least that many rows. This is specific to our data; your values will certainly be different. Second, notice the use of sysbench.rand.string and string.rep. The string.rep call generates a string of ‘@’ symbols, between 2 and 15 characters long. That string of ‘@’ symbols is then passed to sysbench.rand.string, which swaps out each ‘@’ for a random alphanumeric character. For example, ‘@@@@@@’ could become ‘Hk9EdC’, which then replaces the ‘%s’ inside the query string (via string.format) and is executed.
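
Here is a rough sketch of that two-step string generation on its own, assuming the helpers behave as described above (the example values in the comments are, of course, random on every call):

-- Build a mask of '@' symbols, 2 to 15 characters long
local mask = string.rep("@", sysbench.rand.special(2, 15))  -- e.g. "@@@@@@"
-- Swap each '@' for a random alphanumeric character
local str = sysbench.rand.string(mask)                      -- e.g. "Hk9EdC"
-- str is what replaces the %s in the LIKE query via string.format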

Handle inserts

Our INSERT statements require values. Again, sysbench calls the function execute_inserts from event on each iteration. Inside execute_inserts, we generate some fake string data using built-in functions described above.

Those strings are then formatted into the SQL and executed.

function create_random_email()
  local username = sysbench.rand.string(string.rep("@",sysbench.rand.uniform(5,10)))
  local domain = sysbench.rand.string(string.rep("@",sysbench.rand.uniform(5,10)))
  return username .. "@" .. domain .. ".com"
end
function execute_inserts()
  -- generate fake email/info
  local email = create_random_email()
  local firstname = sysbench.rand.string("first-" .. string.rep("@", sysbench.rand.special(2, 15)))
  local lastname = sysbench.rand.string("last-" .. string.rep("@", sysbench.rand.special(2, 15)))
  -- INSERT for new imdb.user
  con:query(string.format(inserts[1], email, firstname, lastname))
  -- INSERT for imdb.page_view
  local page = page_types[math.random(#page_types)]
  con:query(string.format(inserts[2], page, sysbench.rand.special(2, 500000), sysbench.rand.special(2, 500000)))
end
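
For illustration, here is roughly what the second INSERT looks like once formatted, using the page_types and inserts arrays defined earlier; the numeric ids below are hard-coded placeholders, whereas the script generates them with sysbench.rand.special:

-- Illustration only: pick a random page type and format the page_views INSERT
local page = page_types[math.random(#page_types)]  -- e.g. "movie"
local sql = string.format(inserts[2], page, 123, 456)
-- sql is now: INSERT INTO imdb.page_views (type, viewed_id, user_id) VALUES ('movie', 123, 456)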

Example run

$ sysbench imdb_workload.lua \
    --mysql-user=imdb --mysql-password=imdb \
    --mysql-db=imdb --report-interval=1 \
    --events=0 --time=0 run
WARNING: Both event and time limits are disabled, running an endless test
sysbench 1.0.17 (using system LuaJIT 2.0.4)
Running the test with following options:
Number of threads: 1
Report intermediate results every 1 second(s)
Initializing random number generator from current time
Initializing worker threads...
Threads started!
[ 1s ] thds: 1 tps: 15.96 qps: 177.54 (r/w/o: 112.71/31.92/32.91) lat (ms,95%): 158.63 err/s: 0.00 reconn/s: 0.00
[ 2s ] thds: 1 tps: 15.01 qps: 169.09 (r/w/o: 109.06/30.02/30.02) lat (ms,95%): 137.35 err/s: 0.00 reconn/s: 0.00
[ 3s ] thds: 1 tps: 26.00 qps: 285.00 (r/w/o: 181.00/52.00/52.00) lat (ms,95%): 108.68 err/s: 0.00 reconn/s: 0.00
[ 4s ] thds: 1 tps: 15.00 qps: 170.00 (r/w/o: 108.00/32.00/30.00) lat (ms,95%): 164.45 err/s: 0.00 reconn/s: 0.00

And there we have it! Custom queries specific to our application and dataset. Most of the sysbench parameters are self-explanatory, but let me point out --report-interval=1, which prints statistics every second. Normally sysbench does not output stats until the end of the run; however, this example execution will run forever (--events=0 --time=0), so we need stats to show continuously. You can adjust the parameters to your liking. For instance, if you only want to run a test for 5 minutes, set --events=0 --time=300.

Conclusion

Sysbench is a very well designed application that allows you to load-test your MySQL instances using pre-defined and custom queries. Using the Lua scripting language, you can create just about any scenario to fit your needs. The above is just one example that we use within Percona’s Training and Education department. It is by no means an exhaustive example of all of the capabilities of sysbench-Lua.


Photo by Lachlan Donald on Unsplash

May
14
2015
--

MySQL QA Episode 2: Build a MySQL server – Git, Bazaar, Compiling & Build tools

Welcome to MySQL QA Episode 2: Build a MySQL Server – Git, Bazaar (bzr), Compiling, and Build Tools

In this episode you’ll learn how to build Percona Server and/or MySQL Server for QA purposes & more in this short 25-minute tutorial.

In HD quality (set your player to 720p!)

To watch the other episodes in this series, see the MySQL QA & Bash Linux Training Series post. If you missed MySQL QA Episode 1, it was titled “Bash/GNU Tools & Linux Upskill & Scripting Fun.” You can watch it here.

If you have any questions or comments, please leave them below.

The post MySQL QA Episode 2: Build a MySQL server – Git, Bazaar, Compiling & Build tools appeared first on MySQL Performance Blog.

Mar
17
2015
--

MySQL QA Episode 1: Bash/GNU Tools & Linux Upskill & Scripting Fun

MySQL QA Episode #1: Bash/GNU Tools & Linux Upskill & Scripting Fun

This episode consists of 13 parts, plus an introduction. See the videos below.

In HD quality (set your player to 720p!)

Introduction

Part 1: echo, ls, cp, rm, vi, cat, df, du, tee, cd, clear, uname, date, time, cat, mkdir

Part 2: find, wc, sort, shuf, tr, mkdir, man, more

Part 3: Redirection, tee, stdout, stderr, /dev/null, cat

Part 4: Vars, ‘ vs “, $0, $$, $!, screen, chmod, chown, export, set, whoami, sleep, kill, sh, grep, sudo, su, pwd

Part 5: grep, regex (regular expressions), tr

Part 6: sed, regex (regular expressions)

Part 7: awk

Part 8: xargs

Part 9: subshells, shells, sh

Part 10: if, for, while, seq, head, grep & grep -q, sleep, tee, read & more

Part 11: Arrays, lynx, grep, egrep, awk, redirection, variable, printf, while, wget, read

Part 12: Production scripting examples

Part 13: Gnuwin32, Gnuwin32 escaping & path name/binary selection gotcha’s, untar, unzip, gzip for Windows

If you enjoyed these videos, leave us a comment below!

The post MySQL QA Episode 1: Bash/GNU Tools & Linux Upskill & Scripting Fun appeared first on MySQL Performance Blog.

Mar
17
2015
--

Free MySQL QA & Bash/Linux Training Series

Free MySQL QA & Bash/Linux Training Series from Percona

Welcome to the MySQL QA Training Series!

If you have not read our introductory blog post on pquery yet, I’d recommend reading that one first to get a bit of background. The community is enthusiastic about pquery, and today I am happy to announce a full training series on pquery and more. Whether you are a Linux or MySQL newbie or a seasoned QA engineer, there is something here for you. From Bash scripting (see episode 1 below) to every aspect of the new pquery framework, it is my hope that you enjoy this series. If you do, please leave us a comment :)

Database quality assurance is not as straightforward as it may seem. It’s not a matter of point-and-click, but rather of many intertwined tools and scripts. Beyond that, due to the complexity of the underlying product, it’s about having an overall plan or vision on how to adequately test the product in every aspect.

Take for example the SELECT statement; it allows specifying about 30 different clauses or modifiers (GROUP BY, WHERE, ORDER, LIMIT, HAVING, …). Then, think further about what one could do inside these clauses, or inside subselects etc. The number of possible combinations (exhaustive testing) of all commands (and all formats and variations thereof) plus all mysqld options (nearly 500 of them) is for all intents and purposes infinite, and thus seemingly impossible to test.

In Episode 13, an approach is proposed which, in our view, adequately solves this seemingly infinite coverage problem through the use of random spread coverage testing and SQL interleaving.

Knowing your Bash/Linux/GNU scripting well is also practically a prerequisite to getting started with mysqld QA. Episode 1 in this series is over 3.5 hours of in-depth (from easy to advanced) training on many often-used Bash commands and topics: from ls to sed, from cp to xargs, and from variables to arrays. Enjoy!

Without further ado, here are the planned upcoming episodes:

MySQL QA Episode 1: Bash/GNU Tools & Linux Upskill & Scripting Fun
MySQL QA Episode 2: Build a MySQL Server: Git, Compiling, Build Tools
MySQL QA Episode 3: Debugging: GDB, Backtraces, Frames, Library Dependencies
MySQL QA Episode 4: QA Framework Setup Time! percona-qa, pquery, reducer & more
MySQL QA Episode 5: Preparing Your QA Run: mtr_to_sql.sh and pquery-run.sh
MySQL QA Episode 6: Analyzing & Filtering: pquery-prep-red.sh, -clean-known.sh & pquery-results.sh
MySQL QA Episode 7: Reducing Testcases for Beginners: single-threaded reducer.sh
MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh
MySQL QA Episode 9: Reducing Testcases for Experts: multi-threaded reducer.sh
MySQL QA Episode 10: Reproducing and Simplifying: How to get it Right
MySQL QA Episode 11: Valgrind Testing: Pro’s, Con’s, Why and How
MySQL QA Episode 12: Multi-node Cluster Testing Using Docker
MySQL QA Episode 13: A Better Approach to all MySQL Regression, Stress & Feature Testing: Random Coverage Testing & SQL Interleaving

A short introduction on each episode:

As episodes are finished, the series titles above will be linked so it’s easy to check this page for updates.

Enjoy!

The post Free MySQL QA & Bash/Linux Training Series appeared first on MySQL Performance Blog.

Aug
01
2013
--

Percona celebrates its 7th anniversary by giving to open source ecosystem

Today we’re celebrating Percona’s 7th anniversary. A lot has changed in these past 7 years – we have grown from a two-person outfit focused exclusively on consulting to a 100-person company with teammates in 22 different countries and 18 different states, now providing Support, Consulting, RemoteDBA, Server Development and Training services.

We also made our mark in open source software development, creating some of the most popular products for the MySQL ecosystem – Percona Toolkit, Percona Xtrabackup, Percona XtraDB Cluster, Percona Server and others. Additionally, we’re into our second year of hosting the Percona Live conference series for the MySQL community. We have grown to serve over 2,000 customers and I’m proud to say we could do it all in bootstrap mode without attracting outside investors and keeping the company owned by its employees.

So how are we celebrating our anniversary? We decided to celebrate by supporting the open source ecosystem, making donations to a number of open source initiatives that have helped us through all these years. We would not be here without you!

As such we’re supporting:

  • MariaDB Foundation for supporting MariaDB, one of the MySQL alternatives that we fully support at Percona.
  • Free Software Foundation as an organization instrumental to the success of the open source movement.
  • Linux Foundation for supporting Linux, by far the most popular platform among our customers.
  • Debian for creating a foundation for some of the most popular Linux distributions out there.
  • Jenkins for the Continuous Integration server we use for our development projects.
  • OpenSSH for software that helps us to access customer systems securely.
  • Drupal for powering our website as well as the websites of many of our customers.

We’re happy to enjoy the growth that’s allowing us to support other projects in our ecosystem. If you have the chance I encourage you do the same. There is a tremendous amount of work going into open source software, which is made free to use, but it is by far not free to create and maintain.

The post Percona celebrates its 7th anniversary by giving to open source ecosystem appeared first on MySQL Performance Blog.

Jun
12
2013
--

Percona MySQL University @Portland next Monday!

We’re less than a week away from Percona MySQL University at Portland, Oregon next Monday, June 17. This is the latest in a series of FREE one-day educational events, and we are pleased to feature 10 technical talks by members of Team Percona as well as local members of the MySQL Community.

The daylong event will be held at Portland State University’s Smith Memorial Student Union, located at 1825 SW Broadway, Suite 327/8/9, Portland, Oregon 97201. Afterward, we’ll have a networking reception at the famed Paddy’s Bar and Grill, sponsored by Tag1 Consulting, with great networking possibilities and free drinks for event attendees.

If you’re in the Portland area and work with MySQL, then this is an event you can’t afford to miss… :)   So register now!

Please also join the Portland MySQL Meetup group for more MySQL-focused events in Portland.

If you love the idea of Percona MySQL University and would like us to bring the event to your city, please let us know!

The post Percona MySQL University @Portland next Monday! appeared first on MySQL Performance Blog.

Jun
06
2013
--

Summertime Percona MySQL training update

Now that June has arrived, it is time to plan what you will do over the summer months. In addition to your summer vacation plans, give thought to MySQL training for you and your team.

Summer is the time to brush up on those critical skills needed to ensure all systems are ready for the holiday shopping season.

In addition to our revised courses, which I talked about in a previous post, we are also running our new Moving to MySQL 5.6 class. This class covers new features in MySQL 5.6, migration planning, and application verification. It was designed with the experienced MySQL DBA in mind, so it is a fast-paced two-day course.

Percona has a packed summer MySQL training schedule. In June we have:

In July we have:

In August we have:

We have a 10% discount code for use when ordering. Register early and save even more, as the 10% discount can be applied to the early registration price. Just use discount code mpb10 when checking out to receive the discount.

The post Summertime Percona MySQL training update appeared first on MySQL Performance Blog.

Apr
12
2013
--

Moving to MySQL 5.6? We can help

If you are looking for a class that is designed to jump-start your knowledge on MySQL 5.6 features, a class that provides hands-on labs, and a class that shows various migration methods – look no further.

We have been hard at work building a new class to ensure you have the knowledge and skills needed to verify your applications, and plan for the migration to MySQL 5.6. The class is called Moving to MySQL 5.6 and is a 2-day workshop.

The Moving to MySQL 5.6 workshop is being offered over the summer in numerous European countries and throughout the United States. Our goal is to provide you with the most up-to-date knowledge on MySQL 5.6, show you how to verify your application, and help you plan for a successful migration.

You may also want to check out our other MySQL training events. All of our workshops have been updated to MySQL 5.6, so you know you will be learning about the latest MySQL features and best practices. Just go to Percona Training for the full list of upcoming workshops. If you use discount code mpb10 you can save 10% when you register.

Last, I would be remiss if I did not remind you that it is not too late to purchase tickets for the Percona Live MySQL Conference and Expo in Santa Clara, California, April 22-25. We have discounts available, so just drop us a line and we can help you out.

The post Moving to MySQL 5.6? We can help appeared first on MySQL Performance Blog.

Jan
02
2013
--

MySQL Training from Percona: January – March 2013

Now that we are in the New Year, it is time to settle back into work and make plans for 2013. As part of your professional development planning, consider Percona MySQL Training.

Percona will be holding the following MySQL Training classes in the first quarter:

  • January
    • Live Virtual Training – DBA Training for MySQL: January 7-10, 2013
    • Chicago, Illinois, USA : January 14-17, 2013
    • London, UK: January 14-17, 2013
  • February
    • Frankfurt, DE: February 4-7, 2013
    • San Francisco, CA, USA: February 4-7, 2013
  • March
    • New York, NY, USA: March 11-14, 2013

To view these training events, and others, go to percona.com/training.

As a thank-you for making 2012 a fantastic year for Percona Training, we are offering 10% off of any public training class that is purchased in January. The class does not need to be taken in January. You will receive the discount when you check out of our online store by entering coupon code MPB10J.

If you have a team that you would like to train, it may be more cost-effective to bring us to you. If you would like to be contacted about our custom training, go to our Contact Me form and send us your request.

The post MySQL Training from Percona: January – March 2013 appeared first on MySQL Performance Blog.
