Nov 15, 2018

Canonical plans to raise its first outside funding as it looks to a future IPO

It’s been 14 years since Mark Shuttleworth first founded and funded Canonical and the Ubuntu project. At the time, it was mostly a Linux distribution. Today, it’s a major enterprise player that offers a variety of products and services. Throughout the years, Shuttleworth self-funded the project and never showed much interest in taking outside money. Now, however, that’s changing.

As Shuttleworth told me, he’s now looking for investors as he looks to get the company on track to an IPO. It’s no secret that the company’s recent re-focusing on the enterprise — and shutting down projects like the Ubuntu phone and the Unity desktop environment — was all about that, after all. Shuttleworth sees raising money as a step in this direction — and as a way of getting the company in shape for going public.

“The first step would be private equity,” he told me. “And really, that’s because having outside investors with outside members of the board essentially starts to get you to have to report and be part of that program. I’ve got a set of things that I think we need to get right. That’s what we’re working towards now. Then there’s a set of things that private investors are looking for and the next set of things is when you’re doing a public offering, there’s a different level of discipline required.”

It’s no secret that Shuttleworth, who sports an impressive beard these days, was previously resistant to this, and he acknowledged as much. “I think that’s a fair characterization,” he said. “I enjoy my independence and I enjoy being able to make long-term calls. I still feel like I’ll have the ability to do that, but I do appreciate keenly the responsibility of taking other people’s money. When it’s your money, it’s slightly different.”

Refocusing Canonical on the enterprise business seems to be paying off already. “The numbers are looking good. The business is looking healthy. It’s not a charity. It’s not philanthropy,” he said. “There are some key metrics that I’m watching, which are the gate for me to take the next step, which would be growth equity.” Those metrics, he told me, are the size of the business and how diversified it is.

Shuttleworth likens this program of getting the company ready to IPO to getting fit. “There’s no point in saying: I haven’t done any exercise in the last 10 years but I’m going to sign up for tomorrow’s marathon,” he said.

The move from being a private company to taking outside investment and going public — especially after all these years of being self-funded — is treacherous, though, and Shuttleworth admitted as much, particularly when it comes to being forced to set short-term goals to satisfy investors that aren’t necessarily in the company’s long-term interest. Shuttleworth thinks he can negotiate those issues, though.

Interestingly, he thinks the real danger is quite a different one. “I think the most dangerous thing in making that shift is the kind of shallowness of the unreasonably big impact that stock price has on people’s mood,” he said. “Today, at Canonical, it’s 600 people. You might have some that are having a really great day and some that are having a shitty day. And they have to figure out what’s real about both of those scenarios. But they can kind of support each other. […] But when you have a stock ticker, everybody thinks they’re having a great day, or everybody thinks they’re having a shitty day in a way that may be completely uncorrelated to how well they’re actually doing.”

Shuttleworth does not believe that IBM’s acquisition of Canonical’s competitor Red Hat will have any immediate effect on Canonical’s business, though. What he does think, however, is that this move is making a lot of people rethink for the first time in years the investment they’ve been making in Red Hat and its enterprise Linux distribution. Canonical’s promise is that Ubuntu, as well as its OpenStack tools and services, are not just competitive but also more cost-effective in the long run, and offer enterprises an added degree of agility. And if more businesses start looking at Canonical and Ubuntu, that can only speed up Shuttleworth’s (and his bankers’) schedule for hitting Canonical’s metrics for raising money and going public.

Jul 11, 2018

AMD EPYC Performance Testing… or Don’t get on the wrong side of SystemD


Ever since AMD released their EPYC CPU for servers I wanted to test it, but I did not have the opportunity until recently, when Packet.net started offering bare metal servers for a reasonable price. So I started a couple of instances to test Percona Server for MySQL on this CPU. In this benchmark, I discovered some interesting discrepancies in performance between AMD and Intel CPUs when running under systemd.

The setup

To test CPU performance, I used a read-only in-memory sysbench OLTP benchmark, as it burns CPU cycles and no IO is performed by Percona Server.
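
For reference, the benchmark command looks roughly like this. It is a minimal sketch, not the exact invocation I used: the table count, table size, run time, and connection settings here are assumptions.

# Prepare the in-memory dataset (sysbench 1.0 syntax; sizes are illustrative)
sysbench oltp_read_only --mysql-user=sbtest --mysql-password=sbtest \
  --mysql-db=sbtest --tables=10 --table-size=1000000 prepare
# Run the read-only OLTP workload at a given concurrency level
sysbench oltp_read_only --mysql-user=sbtest --mysql-password=sbtest \
  --mysql-db=sbtest --tables=10 --table-size=1000000 \
  --threads=48 --time=300 --report-interval=10 run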

For this benchmark I used Packet.net c2.medium.x86 instances powered by AMD EPYC 7401P processors. The OS is exposed to 48 CPU threads.

For the OS, I tried:

  • Ubuntu 16.04 with default kernel 4.4 and upgraded to 4.15
  • Ubuntu 18.04 with kernel 4.15
  • Percona Server started from SystemD and without SystemD (for reasons which will become apparent later; the two start modes are sketched right after this list)
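
For clarity, this is roughly how the two start modes differ. It is a sketch, assuming the default Ubuntu unit name (mysql.service) and binary/config paths for Percona Server; adjust for your installation:

# Start Percona Server as a systemd service (it ends up under system.slice)
sudo systemctl start mysql
# Start Percona Server from a bash shell, bypassing systemd entirely
sudo -u mysql /usr/sbin/mysqld --defaults-file=/etc/mysql/my.cnf &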

To have some points for comparison, I also ran a similar workload on my 2 socket Intel CPU server, with CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz. I recognize this is not the most recent Intel CPU, but it was the best I had at the time, and it also gave 48 CPU threads.

Ubuntu 16

First, let’s review the results for Ubuntu 16, in tabular format:

Threads | Ubuntu 16, kernel 4.4, systemd | Ubuntu 16, kernel 4.4, NO systemd | Ubuntu 16, kernel 4.15
1 | 943.44 | 948.7 | 899.82
2 | 1858.58 | 1792.36 | 1774.45
4 | 3533.2 | 3424.05 | 3555.94
8 | 6762.35 | 6731.57 | 7010.51
12 | 10012.18 | 9950.3 | 10062.82
16 | 13063.39 | 13043.55 | 12893.17
20 | 15347.68 | 15347.56 | 14756.27
24 | 16886.24 | 16864.81 | 16176.76
30 | 18150.2 | 18160.07 | 17860.5
36 | 18923.06 | 18811.64 | 19528.27
42 | 19374.86 | 19463.08 | 21537.79
48 | 20110.81 | 19983.05 | 23388.18
56 | 20548.51 | 20362.31 | 23768.49
64 | 20860.51 | 20729.52 | 23797.14
72 | 21123.71 | 21001.06 | 23645.95
80 | 21370 | 21191.24 | 23546.03
90 | 21622.54 | 21441.73 | 23486.29
100 | 21806.67 | 21670.38 | 23392.72
128 | 22161.42 | 22031.53 | 23315.33
192 | 22388.51 | 22207.26 | 22906.42
256 | 22091.17 | 21943.37 | 22305.06
512 | 19524.41 | 19381.69 | 19181.71

There are a few conclusions we can draw from this data:

  1. The AMD EPYC CPU scales quite well with the number of CPU threads.
  2. The newer kernel helps to boost the throughput.

Ubuntu 18.04

Now, let’s review the results for Ubuntu 18.04

Threads | Ubuntu 18, systemd | Ubuntu 18, NO systemd
1 | 833.14 | 843.68
2 | 1684.21 | 1693.93
4 | 3346.42 | 3359.82
8 | 6592.88 | 6597.48
12 | 9477.92 | 9487.93
16 | 12117.12 | 12149.17
20 | 13934.27 | 13933
24 | 15265.1 | 15152.74
30 | 16846.02 | 16061.16
36 | 18488.88 | 16726.14
42 | 20493.57 | 17360.56
48 | 22217.47 | 17906.4
56 | 22564.4 | 17931.83
64 | 22590.29 | 17902.95
72 | 22472.75 | 17857.73
80 | 22421.99 | 17766.76
90 | 22300.09 | 17773.57
100 | 22275.24 | 17646.7
128 | 22131.86 | 17411.55
192 | 21750.8 | 17134.63
256 | 21177.25 | 16826.53
512 | 18296.61 | 17418.72

This is where the results surprised me: on Ubuntu 18.04, running Percona Server for MySQL as a SystemD service gave up to 24% better throughput than starting it from a bash shell. I do not know exactly what causes this dramatic difference. Systemd places services and user commands in different slices, and somehow that affects performance.
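
One way to at least see the difference is to check which slice and cgroup each mysqld process lands in. This is a quick sketch, assuming the service unit is named mysql.service:

# Show the cgroup tree: services live under system.slice, shell sessions under user.slice
systemd-cgls
# Which slice does the service run in, and is CPU accounting enabled?
systemctl show mysql.service -p Slice -p CPUAccounting
# Which cgroup does a shell-started mysqld belong to?
cat /proc/$(pgrep -o mysqld)/cgroup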

Baseline benchmark

To establish a baseline, I ran the same benchmark on my Intel box running Ubuntu 16, and I tried two kernels: 4.13 and 4.15.

Threads | Ubuntu 16, kernel 4.13, systemd | Ubuntu 16, kernel 4.15, systemd | Ubuntu 16, kernel 4.15, NO systemd
1 | 820.07 | 798.42 | 864.21
2 | 1563.31 | 1609.96 | 1681.91
4 | 2929.63 | 3186.01 | 3338.47
8 | 6075.73 | 6279.49 | 6624.49
12 | 8743.38 | 9256.18 | 9622.6
16 | 10580.14 | 11351.31 | 11984.64
20 | 12790.96 | 12599.78 | 14147.1
24 | 14213.68 | 14659.49 | 15716.61
30 | 15983.78 | 16096.03 | 17530.06
36 | 17574.46 | 18098.36 | 20085.9
42 | 18671.14 | 19808.92 | 21875.84
48 | 19431.05 | 22036.06 | 23986.08
56 | 19737.92 | 22115.34 | 24275.72
64 | 19946.57 | 21457.32 | 24054.09
72 | 20129.7 | 21729.78 | 24167.03
80 | 20214.93 | 21594.51 | 24092.86
90 | 20194.78 | 21195.61 | 23945.93
100 | 20753.44 | 21597.26 | 23802.16
128 | 20235.24 | 20684.34 | 23476.82
192 | 20280.52 | 20431.18 | 23108.36
256 | 20410.55 | 20952.64 | 22775.63
512 | 20953.73 | 22079.18 | 23489.3

Here we see the opposite result with SystemD: Percona Server running from a bash shell shows better throughput than the SystemD service. So for some reason, systemd behaves differently on AMD and Intel CPUs. Please let me know if you have any ideas on how to deal with the impact that systemd has on performance.
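
If you want to experiment yourself, one option is a systemd drop-in that changes the resource-control settings for the service. The directives below are standard systemd options, but the unit name and values are placeholders, and I have not verified that they close the gap; treat this as a starting point only:

# Create a drop-in override for the Percona Server unit
sudo mkdir -p /etc/systemd/system/mysql.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/mysql.service.d/override.conf
[Service]
CPUAccounting=true
CPUAffinity=0-47
EOF
# Reload systemd and restart the service so the override takes effect
sudo systemctl daemon-reload
sudo systemctl restart mysql.service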

Conclusions

There are a few conclusions from these results:

  1. AMD EPYC shows decent performance scalability; the newer kernel helps to improve it.
  2. systemd has different effects on throughput on AMD and Intel CPUs.
  3. On AMD, throughput declines under a highly concurrent workload with 512 threads, while Intel does not show a decline.

The post AMD EPYC Performance Testing… or Don’t get on the wrong side of SystemD appeared first on Percona Database Performance Blog.

Feb 14, 2018

Update on Percona Platform Lifecycle for Ubuntu “Stable” Versions

This blog post highlights changes to the Percona Platform Lifecycle for Ubuntu “Stable” Versions.

We have recently made some changes to our Percona Platform and Software Lifecycle policy in an effort to more strongly align with upstream Linux distributions. As part of this, we’ve set our timeframe for providing supported builds for Ubuntu “Stable” (non-LTS) releases to nine (9) months. This matches the current Ubuntu distribution upstream policy.

In the future, we will continue to adjust as necessary to match the upstream policy specified by Canonical. Along with this, as we did with Debian 9 before, we will only produce 64-bit builds for this platform going forward. It has been our intention for some time to slowly phase out 32-bit builds, as they are rarely downloaded and largely unnecessary today.

If you have any questions or concerns, please feel free to contact Percona Support or post on our Community Forums.

Jul 31, 2017

Platform End of Life (EOL) Announcement for RHEL 5 and Ubuntu 12.04 LTS

Upstream platform vendors have announced the general end of life (EOL) for Red Hat Enterprise Linux 5 (RHEL 5) and its derivatives, as well as Ubuntu 12.04 LTS. With this announcement come some implications for support of Percona software running on these operating systems.

RHEL 5 was EOL as of March 31st, 2017 and Ubuntu 12.04 LTS was end of life as of April 28th, 2017. Pursuant to our end of life policies, we are announcing that these EOLs will go into effect for Percona software on August 1st, 2017. As of this date, we will no longer be producing new packages, binary builds, hotfixes, or bug fixes for Percona software on these platforms.

We generally align our platform end of life dates with those of the upstream platform vendor. The platform end of life dates are published in advance on our website under the page Supported Linux Platforms and Versions.

Per our policies, Percona will continue to provide operational support for your databases on EOLed platforms. However, we will be unable to provide any bug fixes, builds or OS-level assistance if you encounter an issue outside the database itself.

Each platform vendor has a supported migration or upgrade path to their next major release.  Please reach out to us if you need assistance in migrating your database to your vendor’s supported platform – Percona will be happy to assist you.

Apr 5, 2017

Canonical ends development of its phone to focus on cloud and IoT

 Canonical, the company behind the Ubuntu Linux distribution, long had aspirations to become a player in the mobile phone and tablet world, too. You can’t easily buy an Ubuntu-powered phone today (at least not in the U.S.), but over the years, a few have come and gone. As Canonical founder Mark Shuttleworth announced today, however, the company will end its investment in the phone… Read More

Oct 13, 2016

Update the Signing Key for Percona Debian and Ubuntu Packages

In this blog post, we’ll explain how to update the signing key for Percona Debian and Ubuntu packages.

Some users might have already noticed the following warning on Ubuntu 16.04 (Xenial Xerus) when running apt-get update:

W: http://repo.percona.com/apt/dists/xenial/InRelease: Signature by key 430BDF5C56E7C94E848EE60C1C4CBDCDCD2EFD2A uses weak digest algorithm (SHA1)

Percona .deb packages are signed with a key that uses an algorithm now considered weak. Starting with the next release, Debian and Ubuntu packages will be signed with a new key that uses the much stronger SHA-512 algorithm, and all future package releases will be signed with it as well.

You’ll need to do one of the following in order to use the new key:

  • If you installed the Percona repository package as described here, this package is automatically updated to a new package version (percona-release_0.1-4). This package currently contains both the old and new keys. This helps make the transition easier (until all packages are signed with the new key).
  • Install the new Percona repository package as described in the installation guide.
  • Manually download and add the key from either keys.gnupg.net or keyserver.ubuntu.com by running:
    apt-key adv --keyserver keys.gnupg.net --recv-keys 8507EFA5 or
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8507EFA5

It’s important that you add the new key before the next release. Otherwise you’ll see the following warning:

W: GPG error: http://repo.percona.com xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9334A25F8507EFA5
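
Once the new key is in place, a quick way to confirm the transition is to list the Percona keys apt trusts and re-run the index update. The grep pattern below is just illustrative, and the exact apt-key output format varies between releases:

# List the Percona keys currently trusted by apt
apt-key list | grep -B1 -A2 -i percona
# Refresh the package index; the SHA1 and NO_PUBKEY warnings should be gone
sudo apt-get update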

Leave any questions about updating the signing key for Percona Debian and Ubuntu packages in the comments below.

Jul 6, 2016

Canonical-Pivotal partnership makes Ubuntu preferred Linux distro for Cloud Foundry

Pivotal, developers of the Cloud Foundry open source cloud development platform, and Canonical, the company behind the popular Ubuntu Linux distribution, announced a partnership today under which Ubuntu becomes the preferred operating system for Cloud Foundry.
In fact, the two companies have been BFFs since the earliest days of Cloud Foundry, when it was an open source project developed at VMware. Read More

Mar 30, 2016

Docker MySQL Replication 101

In this blog post, we’ll discuss some of the basics regarding Docker MySQL replication. Docker has gained widespread popularity in recent years as a lightweight alternative to virtualization. It is ideal for building virtual development and testing environments. The solution is flexible and seamlessly integrates with popular CI tools.

This post walks through the setup of MySQL replication with Docker using Percona Server 5.6 images. To keep things simple we’ll configure a pair of instances and override only the most important variables for replication. You can add whatever other variables you want to override in the configuration files for each instance.

Note: the configuration described here is suitable for development or testing. We’ve also used the operating system repository packages; for the latest version use the official Docker images. The steps described can be used to set up more slaves if required, as long as each slave has a different server-id.
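
For example, a third instance only needs its own local directories and a configuration file with a unique server-id. The directory name below is illustrative, and the rest of the steps (docker run, CHANGE MASTER) are the same as shown for the first slave:

# Additional slave: separate data/config directories and a unique server-id
mkdir -p /opt/Docker/slavedb2/data /opt/Docker/slavedb2/cnf
cat <<'EOF' > /opt/Docker/slavedb2/cnf/config-file.cnf
[mysqld]
server-id=3
EOF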

First, install Docker and pull the Percona images (this will take some time and is only executed once):

# Docker install for Debian / Ubuntu
apt-get install docker.io
# Docker install for Red Hat / CentOS (requires EPEL repo)
yum install epel-release # If not installed already
yum install docker-io
# Pull docker repos
docker pull percona

Now create locally persisted directories for the:

  1. Instance configuration
  2. Data files
# Create local data directories
mkdir -p /opt/Docker/masterdb/data /opt/Docker/slavedb/data
# Create local my.cnf directories
mkdir -p /opt/Docker/masterdb/cnf /opt/Docker/slavedb/cnf
### Create configuration files for master and slave
vi /opt/Docker/masterdb/cnf/config-file.cnf
# Config Settings:
[mysqld]
server-id=1
binlog_format=ROW
log-bin
vi /opt/Docker/slavedb/cnf/config-file.cnf
# Config Settings:
[mysqld]
server-id=2

Great, now we’re ready to start our instances and configure replication. Launch the master node, configure the replication user and get the initial replication co-ordinates:

# Launch master instance
docker run --name masterdb -v /opt/Docker/masterdb/cnf:/etc/mysql/conf.d -v /opt/Docker/masterdb/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
00a0231fb689d27afad2753e4350192bebc19ab4ff733c07da9c20ca4169759e
# Create replication user
docker exec -ti masterdb 'mysql' -uroot -pmysecretpass -vvv -e"GRANT REPLICATION SLAVE ON *.* TO repl@'%' IDENTIFIED BY 'slavepass'\G"
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
GRANT REPLICATION SLAVE ON *.* TO repl@"%"
--------------
Query OK, 0 rows affected (0.02 sec)
Bye
### Get master status
docker exec -ti masterdb 'mysql' -uroot -pmysecretpass -e"SHOW MASTER STATUS\G"
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
             File: mysqld-bin.000004
         Position: 310
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:

If you look carefully at the “docker run” command for masterdb, you’ll notice we’ve defined two paths to share from local storage:

/opt/Docker/masterdb/data:/var/lib/mysql

  • This maps the local “/opt/Docker/masterdb/data” directory to the masterdb container’s “/var/lib/mysql” path
  • All files within the datadir “/var/lib/mysql” persist locally on the host running docker rather than in the container
/opt/Docker/masterdb/cnf:/etc/mysql/conf.d

  • This maps the local “/opt/Docker/masterdb/cnf” directory to the container’s “/etc/mysql/conf.d” path
  • The configuration files for the masterdb instance persist locally as well
  • Remember these files augment or override the file in “/etc/mysql/my.cnf” within the container (i.e., defaults will be used for all other variables)

We’re done setting up the master, so let’s continue with the slave instance. For this instance, the “docker run” command also includes the “--link masterdb:mysql” option, which links the slave instance to the master instance for replication.

After starting the instance, set the replication co-ordinates captured in the previous step:

docker run --name slavedb -d -v /opt/Docker/slavedb/cnf:/etc/mysql/conf.d -v /opt/Docker/slavedb/data:/var/lib/mysql --link masterdb:mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
eb7141121300c104ccee0b2df018e33d4f7f10bf5d98445ed4a54e1316f41891
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e'change master to master_host="mysql",master_user="repl",master_password="slavepass",master_log_file="mysqld-bin.000004",master_log_pos=310;' -vvv
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
change master to master_host="mysql",master_user="repl",master_password="slavepass",master_log_file="mysqld-bin.000004",master_log_pos=310
--------------
Query OK, 0 rows affected, 2 warnings (0.23 sec)
Bye

Almost ready to go! The last step is to start replication and verify that replication is running:

# Start replication
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e"START SLAVE;" -vvv
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
START SLAVE
--------------
Query OK, 0 rows affected, 1 warning (0.00 sec)
Bye
# Verify replication is running OK
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e"SHOW SLAVE STATUS\G" -vvv
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
SHOW SLAVE STATUS
--------------
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000004
          Read_Master_Log_Pos: 310
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 284
        Relay_Master_Log_File: mysqld-bin.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 310
              Relay_Log_Space: 458
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 230d005a-f1a6-11e5-b546-0242ac110004
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
1 row in set (0.00 sec)
Bye

Finally, we have a pair of dockerized Percona Server 5.6 master-slave servers replicating!

As mentioned before, this is suitable for a development or testing environment. Before going into production with this configuration, think carefully about the tuning of the “my.cnf” variables and the choice of disks used for the data/binlog directories. It is important to remember that newer versions of Docker recommend using “networks” rather than “linking” for communication between containers.
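
For reference, here is roughly what the same pair looks like with a user-defined network instead of “--link”. This is a sketch: the network name is arbitrary, the other options match the commands above, and since containers on the same user-defined network resolve each other by container name, the CHANGE MASTER statement would use master_host="masterdb" instead of "mysql":

# Create a user-defined bridge network for the replication pair
docker network create repl_net
# Start both containers attached to that network (volumes and credentials as above)
docker run --name masterdb --network repl_net -v /opt/Docker/masterdb/cnf:/etc/mysql/conf.d -v /opt/Docker/masterdb/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
docker run --name slavedb --network repl_net -v /opt/Docker/slavedb/cnf:/etc/mysql/conf.d -v /opt/Docker/slavedb/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6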

Aug 16, 2015

IBM Teams With Canonical To Put Ubuntu Linux On Mainframes

You might not think that ‘Linux’ and ‘mainframe’ belong in the same sentence, but IBM has been putting various flavors of Linux on its mainframe computers for 15 years. Today IBM and Canonical announced that the two companies were teaming up to build one running Ubuntu Linux. The new unit is called the LinuxOne. The announcement comes as part of a broader strategy… Read More

Mar 25, 2015

Yelp IT! A talk with 3 Yelp MySQL DBAs on Percona Live & more

Founded in 2004 to help people find great local businesses, Yelp has some 135 million monthly unique visitors. With those traffic volumes Yelp’s 300+ engineers are constantly working to keep things moving smoothly – and when you move that fast you learn many things.

Fortunately for the global MySQL community, three Yelp DBAs will be sharing what they’ve learned at the annual Percona Live MySQL Conference and Expo this April 13-16 in Santa Clara, California.

Say “hello” to Susanne Lehmann, Jenni Snyder and Josh Snyder! I chatted with them over email about their presentations, on how MySQL is used at Yelp, and about the shortage of women in MySQL.

***

Tom: Jenni, you and Josh will be co-presenting “Next generation monitoring: moving beyond Nagios” on April 14.

You mentioned that Yelp’s databases scale dynamically, and so does your monitoring of those databases. And to minimize human intervention, you’ve created a Puppet and Sensu monitoring ensemble… because “if it’s not monitored, it’s not in production.” Talk to me more about Yelp’s philosophy of “opt-out monitoring.” What does that entail? How does that help Yelp?

Jenni: Before we moved to Sensu, our Nagios dashboards were a sea of red, muted, acknowledged, or disabled service checks. In fact, we even had a cluster check to make sure that we never accidentally put a host into use that was muted or marked for downtime. It was possible for a well-meaning operator to acknowledge checks on a host and forget about it, and I certainly perpetrated a couple of instances of disks filling up after acknowledging a 3am “warning” page that I’d rather forget about. With Sensu, hosts and services come out of the downtime/acknowledgement state automatically after a number of days, ensuring that we’re kept honest and stay on top of issues that need to be addressed.

Also, monitoring is deployed with a node, not separate monitoring configuration. Outside of a grace period we employ when a host is first provisioned or rebooted, if a host is up, it’s being monitored and alerting. Also, alerting doesn’t always mean paging. We also use IRC and file tickets directly into our tracking system when we don’t need eyes on a problem right away.


Tom: Susanne, in your presentation, titled “insert cassandra into prod where use_case=?;” you’ll discuss the situations you’ve encountered where MySQL just wasn’t the right tool for the job.

What led up to that discovery and how did you come up with finding the right tools (and what were they) to run alongside and support MySQL?

Susanne: Our main force behind exploring other datastores alongside MySQL was that Yelp is growing outside the US market a lot. Therefore we wanted the data to be nearer to the customer and needed multi-master writes.

Also, we saw use cases where our application data was organized very key-value like and not relational, which made them a better fit for a NoSQL solution.

We decided to use Cassandra as a datastore and I plan to go more into detail why during my talk. Now we offer developers more choices on how to store our application data, but we also believe in the “right tool for the job” philosophy and might add more solutions to the mix in the future.


Tom: Jenni, you’ll also be presenting “Schema changes multiple times a day? OK!” I know that you and your fellow MySQL DBAs are always improving and also finding better ways of supporting new and existing features for Yelp users like me. Delivering on such a scale must entail some unique processes and tools. Does this involve a particular mindset among your fellow DBAs? Also, what are some of those key tools – and processes and how are they used?

Jenni: Yelp prizes the productivity of our developers and our ability to iterate and develop new features quickly. In order to do that, we need to be able to not only create new database tables, but also modify existing ones, many of which are larger than MySQL can alter without causing considerable replication delay. The first step is to foster a culture of automated testing, monitoring, code reviews, and partnership between developers and DBAs to ensure that we can quickly & safely roll out schema changes. In my talk, I’ll be describing tools that we’ve talked about before, like our Gross Query Checker, as well as the way the DBA team works with developers while still getting the rest of our work done. The second, easy part is using a tool like pt-online-schema-change to run schema changes online without causing replication delay or degrading performance :)


Tom: Josh, you’ll also be speaking on “Bootstrapping databases in a single command: elastic provisioning for the win.” What is “elastic provisioning” and how are you using it for Yelp’s tooling?

Josh: When I say that we use elastic provisioning, I mean that we can reliably and consistently build a database server from scratch, with minimal human involvement. The goal is to encompass every aspect of the provisioning task, including configuration, monitoring, and even load balancing, in a single thoroughly automated process. With this process in place, we’ve found ourselves able to quickly allocate and reallocate resources, both in our datacenters and in the cloud. Our tools for implementing the above goals give us greater confidence in our infrastructure, while avoiding single-points of failure and achieving the maximum possible level of performance. We had a lot of fun building this system, and we think that many of the components involved are relevant to others in the field.


Tom: Susanne and Jenni, last year at Percona Live there was a BoF session titled “MySQL and Women (or where are all the women?).” The idea was to discuss why there are “just not enough women working on the technology side of tech.” In a nutshell, the conversation focused on why there are not more women in MySQL and why so relatively few attend MySQL conferences like Percona Live.

The relative scarcity of women in technical roles was also the subject of an article published in the August 2014 issue of Forbes, citing a recent industry report.

Why, in your (respective) views, do you (or don’t) think that there are so few women in MySQL? And how can this trend be reversed?

Susanne: I think there are few women in MySQL and the reasons are manifold. Of course there is the pipeline problem. Then there is the problem, widely discussed right now, that women who are entering STEM jobs are less likely staying in there. These are reasons not specific for MySQL jobs, but rather for STEM in general. What is more specific for database/MySQL jobs is, in my opinion, that often times DBAs need to be on call, they need to stay in the office if things go sideways. Database problems tend often to be problems that can’t wait till the next morning. That makes it more demanding when you have a family for example (which is true for men as well of course, but seems still to be more of a problem for women).

As for how to reverse the trend, I liked this Guardian article because it covers a lot of important points. There is no easy solution.

I like that more industry leaders and technology companies are discussing what they can do to improve diversity these days. In general, it really helps to have a great professional (female) support system. At Yelp, we have AWE, the Awesome Women in Engineering group, in which Jenni and I are both active. We participate in welcoming women to Yelp engineering, speaking at external events and workshops to help other women present their work, mentoring, and a book club.

Jenni: I’m sorry that I missed Percona Live and this BoF last year; I was out on maternity leave. I believe that tech/startup culture is a huge reason that fewer women are entering and staying these days, but a quick web search will lead you to any number of articles debating the subject. I run into quite a few women working with MySQL; its large, open community and generally collaborative and supportive nature are very welcoming. As the article you linked to suggests, MySQL has a broad audience. It’s easy to get started with and pull into any project, and as a result, most software professionals have worked with it at some time or another.

On another note, I’m happy to see that Percona Live has a Code of Conduct. I hope that Percona and/or MySQL will consider adopting a Community Code of Conduct like Python, Puppet, and Ubuntu. Doing so raises the bar for all participants, without hampering collaboration and creativity!

* * *

Thanks very much, Susanne, Jenni and Josh! I look forward to seeing you next month at the conference. And readers, if you’d like to attend Percona Live, use the promo code Yelp15 for 15% off! Just enter that during registration. If you’re already attending, be sure to tweet about your favorite sessions using the hashtag #PerconaLive. And if you need to find a great place to eat while attending Percona Live, click here for excellent Yelp recommendations. :)

The post Yelp IT! A talk with 3 Yelp MySQL DBAs on Percona Live & more appeared first on MySQL Performance Blog.
