Oct
10
2018
--

Google Cloud expands its networking features with Cloud NAT

It’s a busy week for news from Google Cloud, which is hosting its Next event in London. Today, the company used the event to launch a number of new networking features. The marquee launch today is Cloud NAT, a new service that makes it easier for developers to build cloud-based services that don’t have public IP addresses and can only be accessed from applications within a company’s virtual private cloud.

As Google notes, building this kind of setup was already possible, but it wasn’t easy. Obviously, this is a pretty common use case, though, so with Cloud NAT, Google now offers a fully managed service that handles all the network address translation (hence the NAT) and provides access to these private instances behind the Cloud NAT gateway.

Cloud NAT supports Google Compute Engine virtual machines as well as Google Kubernetes Engine containers, and offers both a manual mode where developers can specify their IPs and an automatic mode where IPs are automatically allocated.
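
For illustration, a minimal sketch of what setting this up looks like with recent versions of the gcloud CLI (the router, network, region and NAT names below are placeholders, and the exact flags may have differed at launch):

    # Cloud NAT is configured on a Cloud Router, so create one in the VPC first
    gcloud compute routers create example-router --network=example-vpc --region=us-central1

    # Automatic mode: let Google allocate the external NAT IPs for all subnets
    gcloud compute routers nats create example-nat --router=example-router --region=us-central1 \
        --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges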

Also new in today’s release is Firewall Rules Logging, which is now in beta. Using this feature, admins can audit, verify and analyze the effects of their firewall rules. That means when there are repeated connection attempts that the firewall blocked, you can now analyze those and see whether somebody was up to no good or whether somebody misconfigured the firewall. Because the data is only delayed by about five seconds, the service provides near real-time access to this data — and you can obviously tie this in with other services like Stackdriver Logging, Cloud Pub/Sub and BigQuery to create alerts and further analyze the data.
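
For example, with the beta gcloud CLI support, logging can be switched on per rule (the rule name below is a placeholder):

    # Enable logging for an existing firewall rule; entries land in Stackdriver Logging
    gcloud beta compute firewall-rules update allow-ssh --enable-logging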

Also new today are managed TLS certificates for HTTPS load balancers. The idea here is to take the hassle out of managing TLS certificates (the kind of certificates that ensure that your users’ browsers create a secure connection to your app) when there is a load balancer in play. This feature, too, is now in beta.
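
For example, with the beta gcloud CLI commands, a Google-managed certificate can be created and attached to an existing HTTPS load balancer’s target proxy roughly like this (the certificate, domain and proxy names below are placeholders):

    # Request a Google-managed certificate for a domain
    gcloud beta compute ssl-certificates create www-cert --domains www.example.com

    # Attach it to the load balancer's HTTPS target proxy
    gcloud compute target-https-proxies update example-https-proxy --ssl-certificates www-cert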

Oct
05
2017
--

Google Compute Engine goes a little crazy with up to 96 CPU cores and 624 GB of memory

If you’ve got a resource-hungry app, Google Compute Engine’s latest offering has you covered. It lets you dial up to 96 CPUs and an other-worldly 624 GB of memory. Remember Bill Gates supposedly asking who would ever need more than 640K of memory? He obviously didn’t see this coming. If you think that’s a lot, you aren’t kidding, and believe it or not it’s a big boost… Read More

Sep
21
2017
--

Google Cloud adds support for more powerful Nvidia GPUs

 Google Cloud Platform announced support for some powerful Nvidia GPUs on Google Compute Engine today. For starters, the company is making Nvidia K80 GPUs generally available. At the same time, it’s launching support for Nvidia P100 GPUs in Beta along with a new sustained pricing model. For companies working with machine learning workloads, having access to GPUs in the cloud provides… Read More

Jul
28
2015
--

Google Now Lets Developers Bring Their Own Security Keys To Compute Engine

Starting today, developers who use Google’s Compute Engine infrastructure-as-a-service platform will be able to bring their own security keys to the service. Google argues that using these customer-supplied encryption keys, which are now in public beta, gives its users more control over their data security.
By default, Google encrypts all of the data on its service with an AES-256 bit… Read More
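
For example, with the beta gcloud CLI support, a customer-supplied key can be passed when creating a disk roughly like this (the project, zone, disk name and key below are placeholders; the key must be a base64-encoded 256-bit value):

    # key-file.json maps the disk's URI to the customer-supplied key, e.g.:
    # [{"uri": "https://www.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a/disks/example-disk",
    #   "key": "acXTX3rxrKAFTF0tYVLvydU1riRZTvUNC4g5I11NY+c=",
    #   "key-type": "raw"}]
    gcloud beta compute disks create example-disk --size=100GB --zone=us-central1-a \
        --csek-key-file=key-file.json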

Mar
03
2015
--

Improving Percona XtraDB Cluster SST startup with Google Compute Engine snapshots

As the need for information grows, so does the size of the data we need to keep in our databases. SST is normally unavoidable when spinning up new nodes in a PXC cluster, and when datasets reach the terabyte range it becomes ever more cumbersome, requiring many hours for a new node to synchronize.

More often than not, it is necessary to implement custom “wsrep_sst” scripts or resort to manual synchronization processes. Luckily, cloud providers offer convenient disk snapshot facilities that can be used to quickly transfer data between nodes.

This article deals with the actions needed to perform a snapshot on Google’s Compute Engine (GCE) infrastructure. A similar method can be used on AWS EC2 instances with EBS snapshots, or with any other form of snapshot such as LVM, ZFS or SAN. The steps described can be used to add a new node to a PXC cluster or to avoid SST on an existing node, and they take advantage of the performance benefit of GCE snapshots. A similar procedure can be used to add a regular replication slave, provided the binary log co-ordinates have been captured. This article assumes your “datadir” is on a separate disk from your operating system partition and uses the “ext4” filesystem:

  1. Select a suitable “donor” node; we will use “node1” for this example.
  2. Stop the MySQL service on “node1”, or perform an FTWRL with the MySQL service running on a node which is in “desync/donor” mode
    # Take the snapshot from a stopped instance
    [root@node1 /] service mysql stop & tail -f /var/log/mysql/error.log
     
    # OR alternatively take the snapshot from a 'desynced' node
     
    ### desync from cluster replication
    mysql> set global wsrep_desync=ON; 
     
    ### get FTWRL
    mysql> flush tables with read lock;
  3. While the MySQL service is down on “node1”, or while the FTWRL is held, create a snapshot of the disk in the Google Developers Console or via the GCE API (this assumes that the datadir is located on a separate standalone disk). This part of the process takes around 15 minutes for a 3.5 TB disk.
    gcloud compute disks snapshot node1-datadir-disk --snapshot-names node1-datadir-disk-snapshot-1
  4. As soon as the snapshot has completed, start the MySQL service on “node1” (verifying that the node has successfully rejoined the cluster) or release the FTWRL
    # Depending on the path taken in step 2, either start MySQL on node1
    [root@node1 /] service mysql start & tail -f /var/log/mysql/error.log
     
    # OR alternatively release the FTWRL and "sync" the node
     
    ### release FTWRL
    mysql> unlock tables;
     
    ### if there is high load on the cluster monitor wsrep_local_recv_queue 
    ### until it reaches 0 before running the following command to rejoin 
    ### the cluster replication (otherwise it can be run immediately after
    ### releasing the FTWRL):
    mysql> set global wsrep_desync=OFF;


    ***** IMPORTANT NOTE: In case “node1” is unable to rejoin the cluster or requires an SST, you will need to re-create the snapshot from another node or after the SST completes.

  5. Now connect to the “joiner” node; we will use “node2” for this example.
  6. Unmount the existing disk from “node2” (assuming the MySQL service is not running; otherwise, stop the MySQL service first)

    [root@node2 /] umount /var/lib/mysql
  7. Detach and delete the disk containing the MySQL datadir from the “node2” instance in the Google Developers Console or via the GCE API
    gcloud compute instances detach-disk node2 --disk node2-datadir-disk
    gcloud compute disks delete node2-datadir-disk
  8. Create and attach a new disk to the “node2” instance in the Google Developers Console or via the GCE API, using the snapshot you created in step 3. This part of the process takes around 10 minutes for a 3.5 TB disk.
    gcloud compute disks create node2-datadir-disk --source-snapshot node1-datadir-disk-snapshot-1
    gcloud compute instances attach-disk node2 --disk node2-datadir-disk
  9. [ *** LVM only step *** ]: If you are using LVM, the device will not show up in the listing in the next step until you have activated the Volume Group (“vg_mysql_data” in this example)

    # this command will report the available volume groups
    [root@node2 /] vgscan
      Reading all physical volumes.  This may take a while...
      Found volume group "vg_mysql_data" using metadata type lvm2
     
    # this command will report the available logical volumes, you should see the LV INACTIVE now
    [root@node2 /] lvscan
      INACTIVE            '/dev/vg_mysql_data/lv_mysql' [20.00 TiB] inherit
     
    # this command will activate all logical volumes within the volume group
    [root@node2 /] vgchange -ay vg_mysql_data
     
    # this command will report the available logical volumes, you should see the LV ACTIVE now
    [root@node2 /] lvscan
      ACTIVE            '/dev/vg_mysql_data/lv_mysql' [20.00 TiB]
  10. After the device has been added, it should show up in the “node2” operating system, and you can retrieve the new UUID using the following command. (If you mounted using “/dev/disk/by-name” and the new disk has the same name as the previous one, you do not need to update “/etc/fstab”; this holds true, for example, for VM instances created using the Percona XtraDB click-to-deploy installer.)

    [root@node2 /] ls -l /dev/disk/by-uuid/
    total 0
    lrwxrwxrwx 1 root root 10 Feb 14 15:56 4ad2d22b-500a-4ad2-b929-12f38347659c -> ../../sda1
    lrwxrwxrwx 1 root root 10 Feb 19 03:12 9e48fefc-960c-456f-95c9-9d893bcafc62 -> ../../dm-0   # This is the 'new' disk
  11. You can now proceed to add the new UUID you retrieved in step 10 to “/etc/fstab” (unless you are using “/dev/disk/by-name” with the same disk name) and mount the new disk

    [root@node2 /] vi /etc/fstab
    ...
    UUID=9e48fefc-960c-456f-95c9-9d893bcafc62 /var/lib/mysql ext4 defaults,noatime 0 0
    ...
     
    [root@node2 /] mount -a
  12. Verify that the data is mounted correctly and that the data directory and its contents use the correct UID / GID for the MySQL user on the destination system (this is usually fine, but it is good to do a quick check)

    [root@node2 /] ls -lhtR /var/lib/mysql/
  13. You are now ready to start MySQL and verify that the node has in fact initialised with IST (provided you have sufficient “gcache” available, there shouldn’t be any other issues); a quick way to check this is sketched just after this list

    [root@node2 /] service mysql start & tail -f /var/log/mysql/error.log
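
One quick way to confirm IST was used is to look for an IST message in the error log and check that the node reports itself as synced, for example:

    [root@node2 /] grep -i 'IST received' /var/log/mysql/error.log
    mysql> show status like 'wsrep_local_state_comment';  -- expect 'Synced'
    mysql> show status like 'wsrep_cluster_size';         -- expect the full cluster node count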

The Percona XtraDB Click-to-deploy tool can be used for automated deployments and further details on creating a cluster on Google Compute Engine using this method can be found in Jay Janssen’s post, “Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process.”

 

The post Improving Percona XtraDB Cluster SST startup with Google Compute Engine snapshots appeared first on MySQL Performance Blog.

Nov
21
2014
--

Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process

I’m happy to announce that Google has added Click-to-deploy functionality for Percona XtraDB Cluster (PXC) on Google Compute Engine. This gives you the ability to rapidly spin up a cluster for experimentation, performance testing, or even production use on Google Compute Engine virtual machines.

What is Percona XtraDB Cluster?

Percona XtraDB Cluster is a virtually synchronous cluster of MySQL/InnoDB nodes. Unlike conventional MySQL asynchronous replication, which has a specific network topology of master and slaves, PXC’s nodes have no specific topology. Functionally, this means that there are no masters and slaves, so you can read and write on any node.

Further, any failure in the cluster does not require any re-arranging of the replication topology. Instead, clients just reconnect to another node and continue reading and writing.
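
For example, you can verify cluster membership and whether a node is ready for reads and writes from any MySQL client using the standard Galera status variables:

mysql> show status like 'wsrep_cluster_size';         -- number of nodes currently in the cluster
mysql> show status like 'wsrep_local_state_comment';  -- 'Synced' when the node can serve traffic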

We have a ton of material about Percona XtraDB Cluster in previous posts, in the PXC manual, and in various webinars. If you want a concentrated one-hour overview of Percona XtraDB Cluster, I’d recommend watching this webinar.

How do I use Click-to-deploy?

Simply visit Google Cloud’s solutions page at https://cloud.google.com/solutions/percona to get started. You are given a simple setup wizard that allows you to choose the size and quantity of nodes you want, the disk storage type and volume, etc. Once you ‘Deploy Cluster’, your instances will launch and form a cluster automatically with sane default tunings. After that, it’s all up to you what you want to do.

Seeing it in action

Once your instances launch, you can add an SSH key to access them (you can also install the Google Cloud SDK). This, handily, creates a new account based on the username in the SSH public key text. Once I add the key, I can simply ssh to the public IP of the instance and I have my own account:

jayj@~ [500]$ ssh [public ip]
Linux percona-mysql-niyr 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
jayj@percona-mysql-niyr:~$

Once there I installed sysbench 0.5 from the Percona apt repo:

jayj@percona-mysql-niyr:~$ sudo -i
root@percona-mysql-niyr:~# apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
root@percona-mysql-niyr:~# echo "deb http://repo.percona.com/apt wheezy main" > /etc/apt/sources.list.d/percona.list
root@percona-mysql-niyr:~# apt-get update; apt-get install sysbench -y
...
Setting up sysbench (0.5-3.wheezy) ...
root@percona-mysql-niyr:~# logout
jayj@percona-mysql-niyr:~$

Now we can load some data and run a quick test:

jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --oltp-tables-count=16 --oltp-table-size=1000000 --oltp-auto-inc=off --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test run
sysbench 0.5:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 8
Random number generator seed is 0 and will be ignored
Threads started!
thread prepare4
Creating table 'sbtest5'...
thread prepare2
Creating table 'sbtest3'...
thread prepare6
Creating table 'sbtest7'...
thread prepare0
Creating table 'sbtest1'...
thread prepare7
Creating table 'sbtest8'...
thread prepare1
Creating table 'sbtest2'...
thread prepare5
Creating table 'sbtest6'...
thread prepare3
Creating table 'sbtest4'...
Inserting 100000 records into 'sbtest5'
Inserting 100000 records into 'sbtest3'
Inserting 100000 records into 'sbtest7'
Inserting 100000 records into 'sbtest1'
Inserting 100000 records into 'sbtest8'
Inserting 100000 records into 'sbtest2'
Inserting 100000 records into 'sbtest6'
Inserting 100000 records into 'sbtest4'
Creating table 'sbtest13'...
Creating table 'sbtest15'...
Inserting 100000 records into 'sbtest13'
Inserting 100000 records into 'sbtest15'
Creating table 'sbtest11'...
Creating table 'sbtest12'...
Inserting 100000 records into 'sbtest11'
Creating table 'sbtest16'...
Inserting 100000 records into 'sbtest12'
Creating table 'sbtest10'...
Creating table 'sbtest14'...
Creating table 'sbtest9'...
Inserting 100000 records into 'sbtest16'
Inserting 100000 records into 'sbtest9'
Inserting 100000 records into 'sbtest10'
Inserting 100000 records into 'sbtest14'
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           608
        other:                           32
        total:                           640
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 608    (11.33 per sec.)
    other operations:                    32     (0.60 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)
General statistics:
    total time:                          53.6613s
    total number of events:              10000
    total time taken by event execution: 0.0042s
    response time:
         min:                                  0.00ms
         avg:                                  0.00ms
         max:                                  0.02ms
         approx.  95 percentile:               0.00ms
Threads fairness:
    events (avg/stddev):           1250.0000/3307.19
    execution time (avg/stddev):   0.0005/0.00
jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/update_index.lua --oltp-tables-count=16 --oltp-table-size=1000000 --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test --max-requests=0 --max-time=10 --report-interval=1 --oltp-auto-inc=off --rand-init=on --rand-type=uniform run
sysbench 0.5:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 8
Report intermediate results every 1 second(s)
Initializing random number generator from timer.
Random number generator seed is 0 and will be ignored
Threads started!
[   1s] threads: 8, tps: 0.00, reads: 0.00, writes: 1396.43, response time: 10.85ms (95%), errors: 0.00, reconnects:  0.00
[   2s] threads: 8, tps: 0.00, reads: 0.00, writes: 1314.13, response time: 14.91ms (95%), errors: 0.00, reconnects:  0.00
[   3s] threads: 8, tps: 0.00, reads: 0.00, writes: 1382.87, response time: 12.34ms (95%), errors: 0.00, reconnects:  0.00
[   4s] threads: 8, tps: 0.00, reads: 0.00, writes: 949.09, response time: 12.88ms (95%), errors: 0.00, reconnects:  0.00
[   5s] threads: 8, tps: 0.00, reads: 0.00, writes: 1312.01, response time: 11.27ms (95%), errors: 0.00, reconnects:  0.00
[   6s] threads: 8, tps: 0.00, reads: 0.00, writes: 898.92, response time: 11.64ms (95%), errors: 0.00, reconnects:  0.00
[   7s] threads: 8, tps: 0.00, reads: 0.00, writes: 1541.71, response time: 10.59ms (95%), errors: 0.00, reconnects:  0.00
[   8s] threads: 8, tps: 0.00, reads: 0.00, writes: 1551.35, response time: 11.48ms (95%), errors: 0.00, reconnects:  0.00
[   9s] threads: 8, tps: 0.00, reads: 0.00, writes: 923.07, response time: 10.40ms (95%), errors: 0.00, reconnects:  0.00
[  10s] threads: 8, tps: 0.00, reads: 0.00, writes: 1273.99, response time: 11.01ms (95%), errors: 0.00, reconnects:  0.00
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           12551
        other:                           0
        total:                           12551
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 12551  (1254.65 per sec.)
    other operations:                    0      (0.00 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)
General statistics:
    total time:                          10.0036s
    total number of events:              12551
    total time taken by event execution: 79.9602s
    response time:
         min:                                  1.13ms
         avg:                                  6.37ms
         max:                                389.52ms
         approx.  95 percentile:              11.68ms
Threads fairness:
    events (avg/stddev):           1568.8750/4.81
    execution time (avg/stddev):   9.9950/0.00

So, we see roughly 1,200 writes per second on the update test in our little cluster, not too bad!

What it is not

This wizard is a handy way to get a cluster set up for some experimentation or testing. However, it is not a managed service: after you launch it, you’re responsible for maintenance, tuning, etc. You could use it for production, but you may want some further fine-tuning, operational procedures, etc. All of this is absolutely something Percona can help you with.

The post Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process appeared first on MySQL Performance Blog.

Apr
07
2014
--

Red Hat Teams With Google Compute Engine To Ease Cloud Management

Red Hat announced a deal today in which Red Hat Enterprise Linux customers could move their subscriptions to Google Compute Engine, a deal that could benefit both companies. Red Hat offers what it calls a “bring your own subscription” plan. For instance, Red Hat Linux customers can shift their installations from on-premises to a public cloud provider of their choice as long as it’s part of… Read More
