Google Cloud opens its Seoul region

Google Cloud today announced that its new Seoul region, its first in Korea, is now open for business. The region, which it first talked about last April, will feature three availability zones and support for virtually all of Google Cloud’s standard services, ranging from Compute Engine to BigQuery, Bigtable and Cloud Spanner.

With this, Google Cloud now has a presence in 16 countries and offers 21 regions with a total of 64 zones. The Seoul region (with the memorable name of asia-northeast3) will complement Google’s other regions in the area, including two in Japan, as well as regions in Hong Kong and Taiwan, but the obvious focus here is on serving Korean companies with low-latency access to its cloud services.
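Once the region is enabled in a project, targeting it works like any other zone selection. A minimal sketch with the gcloud CLI (the instance name and machine type are placeholders; zone names follow Google’s usual -a/-b/-c suffix convention):

    gcloud compute instances create demo-instance \
        --zone=asia-northeast3-a \
        --machine-type=n1-standard-1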

“As South Korea’s largest gaming company, we’re partnering with Google Cloud for game development, infrastructure management, and to infuse our operations with business intelligence,” said Chang-Whan Sul, the CTO of Netmarble. “Google Cloud’s region in Seoul reinforces its commitment to the region and we welcome the opportunities this initiative offers our business.”

Over the course of this year, Google Cloud also plans to open more zones and regions in Salt Lake City, Las Vegas and Jakarta, Indonesia.


AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers, which supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes Service.

Like spot instances on the EC2 compute platform, Fargate Spot offers significantly cheaper pricing, for both storage and compute, than regular Fargate. In return, you have to accept that AWS may terminate your tasks when it needs the capacity back. That means Fargate Spot isn’t a fit for every workload, but there are plenty of applications that can easily handle an interruption.
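One common way to make a workload tolerate interruptions is to catch the termination signal and shut down cleanly; ECS sends containers a SIGTERM before stopping a task. A minimal entrypoint sketch, where do_work stands in for the actual workload:

    #!/bin/sh
    # Exit cleanly when Fargate stops the task.
    trap 'echo "termination signal received, cleaning up"; exit 0' TERM
    while true; do
        do_work    # placeholder for the real workload
        sleep 1
    done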

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here: an hour of CPU time on Fargate Spot costs only $0.01245364 (yes, AWS is pretty precise there) compared to $0.04048 on demand, a discount of roughly 70%.

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances, after all. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then keep that capacity on spot as instances come and go, and if there are no spot instances available, it will move it to on-demand instances and back to spot once instances are available again.
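For ECS, that configuration maps onto a capacity provider strategy attached to the cluster. A sketch of an analogous split for Fargate with the AWS CLI (the cluster name and the exact weights are placeholders):

    aws ecs put-cluster-capacity-providers \
        --cluster my-cluster \
        --capacity-providers FARGATE FARGATE_SPOT \
        --default-capacity-provider-strategy \
            capacityProvider=FARGATE,weight=7 \
            capacityProvider=FARGATE_SPOT,weight=3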

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outposts, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.


Google Cloud gets capacity reservations, extends committed use discounts beyond CPUs

Google Cloud made two significant pricing announcements today. Those, you’ll surely be sad to hear, don’t involve the usual price drops for compute and storage. Instead, Google Cloud today announced that it is extending its committed-use discounts, which give you a significant discount when you commit to using a certain number of resources for one or three years, to GPUs, Cloud TPU Pods and local SSDs. In return for locking yourself into a long-term plan, you can get discounts of 55% off on-demand prices.

In addition, Google is launching a capacity reservation system for Compute Engine that allows users to reserve resources in a specific zone for later use to ensure that they have guaranteed access to these resources when needed.

At first glance, capacity reservations may seem like a weird concept in the cloud. The promise of cloud computing, after all, is that you can just spin machines up and down at will — and never really have to think about availability.

So why launch a reservation system? “This is ideal for use cases like disaster recovery or peace of mind, so a customer knows that they have some extra resources, but also for retail events like Black Friday or Cyber Monday,” Google senior product manager Manish Dalwadi told me.

These users want absolute certainty that when they need the resources, they will be available to them. And while many of us think of the large clouds as having a virtually infinite number of virtual machines available at any time, some machine types may occasionally only be available in a different availability zone from the one where the rest of your compute resources run.

Users can create or delete reservations at any time and any existing discounts — including sustained use discounts and committed use discounts — will be applied automatically.
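Creating a reservation is a single command. A minimal sketch with gcloud (the name, zone, machine type and count are placeholders, and the command may sit behind the beta track at launch):

    gcloud compute reservations create black-friday-capacity \
        --zone=us-central1-a \
        --vm-count=50 \
        --machine-type=n1-standard-8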

As for committed-use discounts, it’s worth noting that Google has always taken a pretty flexible approach here. Users don’t have to commit to using a specific machine type for three years, for example. Instead, they commit to using a specific number of CPU cores and a specific amount of memory.
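Concretely, a commitment is expressed in raw resources rather than machine types. A sketch of what creating one looks like with gcloud (the region, name and amounts are placeholders):

    gcloud compute commitments create my-commitment \
        --region=us-central1 \
        --plan=12-month \
        --resources=vcpu=16,memory=64GB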

“What we heard from customers was that other commit models are just too inflexible and their utilization rates were very low, like 70, 60% utilization,” Google product director Paul Nash told me. “So one of our design goals with committed-use discounts was to figure out how we could provide something that gives us the capacity planning signal that we need, provides the same amount of discounts that we want to pass on to customers, but do it in a way that customers actually feel like they are getting a great deal and so that they don’t have to hyper-manage these things in order to get the most out of them.”

Both the extended committed-use discounts and the new capacity reservation system for Compute Engine resources are now live on Google Cloud.


Google Cloud expands its networking features with Cloud NAT

It’s a busy week for news from Google Cloud, which is hosting its Next event in London. Today, the company used the event to launch a number of new networking features. The marquee launch today is Cloud NAT, a new service that makes it easier for developers to build cloud-based services that don’t have public IP addresses and can only be accessed from applications within a company’s virtual private cloud.

As Google notes, building this kind of setup was already possible, but it wasn’t easy. This is a pretty common use case, though, so with Cloud NAT, Google now offers a fully managed service that handles all the network address translation (hence the NAT) and provides access to these private instances behind the Cloud NAT gateway.

Cloud NAT supports Google Compute Engine virtual machines as well as Google Kubernetes Engine containers, and offers both a manual mode where developers can specify their IPs and an automatic mode where IPs are automatically allocated.
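Setting it up follows Google’s usual router-plus-NAT-configuration pattern. A rough sketch of the automatic mode with gcloud (the network, region and resource names are placeholders):

    gcloud compute routers create nat-router \
        --network=my-vpc \
        --region=europe-west2
    gcloud compute routers nats create nat-config \
        --router=nat-router \
        --region=europe-west2 \
        --auto-allocate-nat-external-ips \
        --nat-all-subnet-ip-ranges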

Also new in today’s release is Firewall Rules Logging, which is now in beta. Using this feature, admins can audit, verify and analyze the effects of their firewall rules. That means when there are repeated connection attempts that the firewall blocked, you can now analyze those and see whether somebody was up to no good or whether somebody simply misconfigured the firewall. The data is delayed by only about five seconds, so the service provides near real-time access to it, and you can tie this in with other services like Stackdriver Logging, Cloud Pub/Sub and BigQuery to create alerts and analyze the data further.
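Enabling the feature is a per-rule flag rather than a global switch; something like the following, where the rule name is a placeholder (the command may require the beta component while the feature is in beta):

    gcloud compute firewall-rules update allow-ssh --enable-logging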

Also new today are managed TLS certificates for HTTPS load balancers. The idea here is to take the hassle out of managing TLS certificates (the kind of certificates that ensure that your users’ browsers create a secure connection to your app) when there is a load balancer in play. This feature, too, is now in beta.
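Provisioning a Google-managed certificate and attaching it to a load balancer’s HTTPS proxy looks roughly like this (the domain and resource names are placeholders):

    gcloud beta compute ssl-certificates create www-cert \
        --domains=www.example.com
    gcloud beta compute target-https-proxies update my-https-proxy \
        --ssl-certificates=www-cert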


Google Compute Engine goes a little crazy with up to 96 CPU cores and 624 GB of memory

If you’ve got a resource-hungry app, Google Compute Engine’s latest offering has you covered. It lets you dial up to 96 CPUs and an otherworldly 624 GB of memory. Remember Bill Gates supposedly asking who would ever need more than 640K of memory? He obviously didn’t see this coming. If you think that’s a lot, you aren’t kidding, and believe it or not it’s a big boost… Read More


Google Cloud adds support for more powerful Nvidia GPUs

Google Cloud Platform announced support for some powerful Nvidia GPUs on Google Compute Engine today. For starters, the company is making Nvidia K80 GPUs generally available. At the same time, it’s launching support for Nvidia P100 GPUs in beta along with a new sustained use pricing model. For companies working with machine learning workloads, having access to GPUs in the cloud provides… Read More


Google Now Lets Developers Bring Their Own Security Keys To Compute Engine

Starting today, developers who use Google’s Compute Engine infrastructure-as-a-service platform will be able to bring their own security keys to the service. Google argues that using these customer-supplied encryption keys, which are now in public beta, gives its users more control over their data security.
By default, Google encrypts all of the data on its service with an AES-256 bit… Read More


Improving Percona XtraDB Cluster SST startup with Google Compute Engine snapshots

As the need for information grows, so does the size of the data we need to keep in our databases. SST (State Snapshot Transfer) is unavoidable for spinning up new nodes in a PXC cluster, and when datasets reach the terabyte range this becomes ever more cumbersome, requiring many hours for a new node to synchronize.

More often than not, it is necessary to implement custom “wsrep_sst” scripts or resort to manual synchronization processes. Luckily, cloud providers offer convenient disk snapshot methods that can be used to quickly transfer data between nodes.

This article deals with the actions needed to perform a snapshot on Google’s Compute Engine (GCE) infrastructure. A similar method can be used on AWS EC2 instances using EBS snapshots, or with any other form of snapshots such as LVM, ZFS or SAN. The steps described can be used to add a new node to a PXC cluster or to avoid SST; the procedure also takes advantage of the performance benefits of GCE snapshots. A similar procedure can be used for adding a regular slave, provided the binary log coordinates have been captured. This article assumes your “datadir” is on a separate disk from your operating system partition and uses the “ext4” filesystem:

  1. Select a suitable “donor” node; we will use “node1” for this example.
  2. Stop the MySQL service on “node1”, or perform an FTWRL (FLUSH TABLES WITH READ LOCK) with the MySQL service running on a node which is in “desync/donor” mode
    # Take the snapshot from a stopped instance
    [root@node1 /] service mysql stop & tail -f /var/log/mysql/error.log
    # OR alternatively take the snapshot from a 'desynced' node
    ### desync from cluster replication
    mysql> set global wsrep_desync=ON; 
    ### get FTWRL
    mysql> flush tables with read lock;
  3. While the MySQL service is down on “node1” or the FTWRL is held, create a snapshot of the disk in the Google Developer Console or using the GCE API (this assumes that the datadir is located on a separate standalone disk). This part of the process takes around 15 minutes for a 3.5 TB disk.
    gcloud compute disks snapshot node1-datadir-disk --snapshot-name node1-datadir-disk-snapshot-1
  4. As soon as the snapshot has completed, start the MySQL service on “node1” (verifying the node has successfully joined the cluster) or release the FTWRL
    # Depending on the path taken in step 2, either start MySQL on node1
    [root@node1 /] service mysql start & tail -f /var/log/mysql/error.log
    # OR alternatively release the FTWRL and "sync" the node
    ### release FTWRL
    mysql> unlock tables;
    ### if there is high load on the cluster monitor wsrep_local_recv_queue 
    ### until it reaches 0 before running the following command to rejoin 
    ### the cluster replication (otherwise it can be run immediately after
    ### releasing the FTWRL):
    mysql> set global wsrep_desync=OFF;
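    ### illustrative check of the queue size (wsrep_local_recv_queue
    ### is a standard status variable on any PXC node):
    mysql> show global status like 'wsrep_local_recv_queue';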

    ***** IMPORTANT NOTE: In case “node1” is unable to rejoin the cluster or requires an SST, you will need to re-create the snapshot from another node or after the SST completes.

  5. Now connect to the “joiner” node; we will use “node2” for this example.
  6. Unmount the existing disk from “node2” (assuming the MySQL service is not running; otherwise, stop the MySQL service first)

    [root@node2 /] umount /var/lib/mysql
  7. Detach and delete the disk containing the MySQL datadir from the “node2” instance in the Google Developer Console or using the GCE API
    gcloud compute instances detach-disk node2 --disk node2-datadir-disk
    gcloud compute disks delete node2-datadir-disk
  8. Create and attach a new disk to the “node2” instance in the Google Developer Console or using the GCE API, using the snapshot you created in step 3. This part of the process takes around 10 minutes for a 3.5 TB disk.
    gcloud compute disks create node2-datadir-disk --source-snapshot node1-datadir-disk-snapshot-1
    gcloud compute instances attach-disk node2 --disk node2-datadir-disk
  9. [ *** LVM only step *** ]: If you are using LVM, the device will not show up until you have activated the Volume Group (“vg_mysql_data” in this example)

    # this command will report the available volume groups
    [root@node2 /] vgscan
      Reading all physical volumes.  This may take a while...
      Found volume group "vg_mysql_data" using metadata type lvm2
    # this command will report the available logical volumes, you should see the LV INACTIVE now
    [root@node2 /] lvscan
      INACTIVE            '/dev/vg_mysql_data/lv_mysql' [20.00 TiB] inherit
    # this command will activate all logical volumes within the volume group
    [root@node2 /] vgchange -ay vg_mysql_data
    # this command will report the available logical volumes, you should see the LV ACTIVE now
    [root@node2 /] lvscan
      ACTIVE            '/dev/vg_mysql_data/lv_mysql' [20.00 TiB]
  10. After the device has been added, it should show up in the “node2” operating system. You can retrieve the new UUID using the following command (if you mounted using “/dev/disk/by-name” and the name of the new disk is the same as the previous one, you do not need to update “/etc/fstab”; this holds true, for example, for VM instances created using the Percona XtraDB click-to-deploy installer)

    [root@node2 /] ls -l /dev/disk/by-uuid/
    total 0
    lrwxrwxrwx 1 root root 10 Feb 14 15:56 4ad2d22b-500a-4ad2-b929-12f38347659c -> ../../sda1
    lrwxrwxrwx 1 root root 10 Feb 19 03:12 9e48fefc-960c-456f-95c9-9d893bcafc62 -> ../../dm-0   # This is the 'new' disk
  11. You can now proceed to adding the new UUID you retrieved in step 10 to “/etc/fstab” (unless you are using “/dev/disk/by-name” with the same disk name) and mount the new disk

    [root@node2 /] vi /etc/fstab
    UUID=9e48fefc-960c-456f-95c9-9d893bcafc62 /var/lib/mysql ext4 defaults,noatime 0 0
    [root@node2 /] mount -a
  12. Verify the data is mounted correctly and that the ownership of the data directory and its contents uses the correct UID/GID for the MySQL user on the destination system (although this is usually OK, it is good to do a quick check)

    [root@node2 /] ls -lhtR /var/lib/mysql/
  13. You are now ready to start MySQL and verify that the node has in fact initialised with IST, or Incremental State Transfer (provided you have sufficient “gcache” available, there shouldn’t be any other issues)

    [root@node2 /] service mysql start & tail -f /var/log/mysql/error.log

The Percona XtraDB Click-to-deploy tool can be used for automated deployments and further details on creating a cluster on Google Compute Engine using this method can be found in Jay Janssen’s post, “Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process.”




Google Compute Engine adds Percona XtraDB Cluster to click-to-deploy process

I’m happy to announce that Google has added Click-to-deploy functionality for Percona XtraDB Cluster (PXC) on Google Cloud Compute nodes. This gives you the ability to rapidly spin up a cluster for experimentation, performance testing, or even production use on Google Compute Engine virtual machines.

What is Percona XtraDB Cluster?

Percona XtraDB Cluster is a virtually synchronous cluster of MySQL InnoDB nodes. Unlike conventional MySQL asynchronous replication, which has a specific network topology of master and slaves, PXC’s nodes have no specific topology. Functionally, this means that there are no masters and slaves, so you can read and write on any node.

Further, any failure in the cluster does not require any re-arranging of the replication topology. Instead, clients just reconnect to another node and continue reading and writing.

We have a ton of material about Percona XtraDB Cluster in previous posts, in the PXC manual, and in various webinars. If you want a concentrated one-hour overview of Percona XtraDB Cluster, I’d recommend watching this webinar.

How do I use Click-to-deploy?

Simply visit Google Cloud’s solutions page here: https://cloud.google.com/solutions/percona to get started. You are given a simple setup wizard that allows you to choose the size and quantity of nodes you want, disk storage type and volume, etc. Once you ‘Deploy Cluster’, your instances will launch and form a cluster automatically with sane default tunings. After that, it’s all up to you what you want to do.

Seeing it in action

Once your instances launch, you can add an SSH key to access them (you can also install the Google Cloud SDK). This, handily, creates a new account based on the username in the SSH public key text. Once I add the key, I can simply ssh to the public IP of the instance and I have my own account:

jayj@~ [500]$ ssh [public ip]
Linux percona-mysql-niyr 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
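Incidentally, if you’d rather not manage keys by hand, the Google Cloud SDK mentioned above can generate and propagate them for you; a minimal sketch (the zone is a placeholder):

gcloud compute ssh percona-mysql-niyr --zone=us-central1-a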

Once there, I installed sysbench 0.5 from the Percona apt repo:

jayj@percona-mysql-niyr:~$ sudo -i
root@percona-mysql-niyr:~# apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
root@percona-mysql-niyr:~# echo "deb http://repo.percona.com/apt wheezy main" > /etc/apt/sources.list.d/percona.list
root@percona-mysql-niyr:~# apt-get update; apt-get install sysbench -y
Setting up sysbench (0.5-3.wheezy) ...
root@percona-mysql-niyr:~# logout

Now we can load some data and run a quick test:

jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/parallel_prepare.lua --oltp-tables-count=16 --oltp-table-size=1000000 --oltp-auto-inc=off --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test run
sysbench 0.5:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 8
Random number generator seed is 0 and will be ignored
Threads started!
thread prepare4
Creating table 'sbtest5'...
thread prepare2
Creating table 'sbtest3'...
thread prepare6
Creating table 'sbtest7'...
thread prepare0
Creating table 'sbtest1'...
thread prepare7
Creating table 'sbtest8'...
thread prepare1
Creating table 'sbtest2'...
thread prepare5
Creating table 'sbtest6'...
thread prepare3
Creating table 'sbtest4'...
Inserting 100000 records into 'sbtest5'
Inserting 100000 records into 'sbtest3'
Inserting 100000 records into 'sbtest7'
Inserting 100000 records into 'sbtest1'
Inserting 100000 records into 'sbtest8'
Inserting 100000 records into 'sbtest2'
Inserting 100000 records into 'sbtest6'
Inserting 100000 records into 'sbtest4'
Creating table 'sbtest13'...
Creating table 'sbtest15'...
Inserting 100000 records into 'sbtest13'
Inserting 100000 records into 'sbtest15'
Creating table 'sbtest11'...
Creating table 'sbtest12'...
Inserting 100000 records into 'sbtest11'
Creating table 'sbtest16'...
Inserting 100000 records into 'sbtest12'
Creating table 'sbtest10'...
Creating table 'sbtest14'...
Creating table 'sbtest9'...
Inserting 100000 records into 'sbtest16'
Inserting 100000 records into 'sbtest9'
Inserting 100000 records into 'sbtest10'
Inserting 100000 records into 'sbtest14'
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           608
        other:                           32
        total:                           640
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 608    (11.33 per sec.)
    other operations:                    32     (0.60 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)
General statistics:
    total time:                          53.6613s
    total number of events:              10000
    total time taken by event execution: 0.0042s
    response time:
         min:                                  0.00ms
         avg:                                  0.00ms
         max:                                  0.02ms
         approx.  95 percentile:               0.00ms
Threads fairness:
    events (avg/stddev):           1250.0000/3307.19
    execution time (avg/stddev):   0.0005/0.00
jayj@percona-mysql-niyr:~$ sysbench --test=/usr/share/doc/sysbench/tests/db/update_index.lua --oltp-tables-count=16 --oltp-table-size=1000000 --num-threads=8 --mysql-user=root --mysql-password=[the admin password you set in the wizard] --mysql-db=test --max-requests=0 --max-time=10 --report-interval=1 --oltp-auto-inc=off --rand-init=on --rand-type=uniform run
sysbench 0.5:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 8
Report intermediate results every 1 second(s)
Initializing random number generator from timer.
Random number generator seed is 0 and will be ignored
Threads started!
[   1s] threads: 8, tps: 0.00, reads: 0.00, writes: 1396.43, response time: 10.85ms (95%), errors: 0.00, reconnects:  0.00
[   2s] threads: 8, tps: 0.00, reads: 0.00, writes: 1314.13, response time: 14.91ms (95%), errors: 0.00, reconnects:  0.00
[   3s] threads: 8, tps: 0.00, reads: 0.00, writes: 1382.87, response time: 12.34ms (95%), errors: 0.00, reconnects:  0.00
[   4s] threads: 8, tps: 0.00, reads: 0.00, writes: 949.09, response time: 12.88ms (95%), errors: 0.00, reconnects:  0.00
[   5s] threads: 8, tps: 0.00, reads: 0.00, writes: 1312.01, response time: 11.27ms (95%), errors: 0.00, reconnects:  0.00
[   6s] threads: 8, tps: 0.00, reads: 0.00, writes: 898.92, response time: 11.64ms (95%), errors: 0.00, reconnects:  0.00
[   7s] threads: 8, tps: 0.00, reads: 0.00, writes: 1541.71, response time: 10.59ms (95%), errors: 0.00, reconnects:  0.00
[   8s] threads: 8, tps: 0.00, reads: 0.00, writes: 1551.35, response time: 11.48ms (95%), errors: 0.00, reconnects:  0.00
[   9s] threads: 8, tps: 0.00, reads: 0.00, writes: 923.07, response time: 10.40ms (95%), errors: 0.00, reconnects:  0.00
[  10s] threads: 8, tps: 0.00, reads: 0.00, writes: 1273.99, response time: 11.01ms (95%), errors: 0.00, reconnects:  0.00
OLTP test statistics:
    queries performed:
        read:                            0
        write:                           12551
        other:                           0
        total:                           12551
    transactions:                        0      (0.00 per sec.)
    read/write requests:                 12551  (1254.65 per sec.)
    other operations:                    0      (0.00 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)
General statistics:
    total time:                          10.0036s
    total number of events:              12551
    total time taken by event execution: 79.9602s
    response time:
         min:                                  1.13ms
         avg:                                  6.37ms
         max:                                389.52ms
         approx.  95 percentile:              11.68ms
Threads fairness:
    events (avg/stddev):           1568.8750/4.81
    execution time (avg/stddev):   9.9950/0.00

So, we see roughly 1,250 writes per second on an update test in our little cluster. Not too bad!

What it is not

This wizard is a handy way to get a cluster set up for some experimentation or testing. However, it is not a managed service: after you launch it, you’re responsible for maintaining, tuning, etc. You could use it for production, but you may want some further fine-tuning, operational procedures, etc. All of this is absolutely something Percona can help you with.



Red Hat Teams With Google Compute Engine To Ease Cloud Management

Red Hat announced a deal today in which Red Hat Enterprise Linux customers can move their subscriptions to Google Compute Engine, a deal that could benefit both companies. Red Hat offers what it calls a “bring your own subscription” plan. For instance, Red Hat Enterprise Linux customers can shift their installations from on-premises to a public cloud provider of their choice as long as it’s part of… Read More
