Jun 24, 2021

Deploy a Dedicated Percona Server for MySQL 8.0 in Azure

This quickstart shows you how to use the Azure portal to deploy a dedicated Percona Server for MySQL 8.0 instance, a drop-in replacement for MySQL that provides superior performance, scalability, and instrumentation. It also shows you how to connect to the server.

Prerequisites

An existing Azure subscription. If you don’t have one, create a free Azure account before starting.

Create Percona Server for MySQL 8.0 Azure Instance

  1. Go to the Azure portal to create a MySQL database using the Percona Server for MySQL 8.0 image. Search for and select Percona Server for MySQL 8.0.
  2. On the Percona Server for MySQL 8.0 page, press Create:
  3. Create a virtual machine:
    – Create a new Resource group:
    – Adjust the Instance details options:
    – Select a VM size to support the workload that you want to run (Size):
    – Set up Administrator account access:

    Subscription: All resources in an Azure subscription are billed together.
    Resource group: A resource group is a collection of resources that share the same lifecycle, permissions, and policies.
    Virtual machine name: Virtual machines in Azure have two distinct names: the virtual machine name, used as the Azure resource identifier, and the guest hostname. When you create a VM in the portal, the same name is used for both. The virtual machine name cannot be changed after the VM is created; you can change the hostname after you log into the virtual machine.
    Region: Choose the Azure region that’s right for you and your customers. Not all VM sizes are available in all regions.
    Availability options: Azure offers a range of options for managing availability and resiliency for your applications. Architect your solution to use replicated VMs in Availability Zones or Availability Sets to protect your apps and data from datacenter outages and maintenance events.
    Availability zone: You can optionally specify an availability zone in which to deploy your VM. If you do, your managed disk and public IP (if you have one) are created in the same availability zone as your virtual machine.
    Image: Choose the Percona Server ps_8.0.22-13 – Gen1 image as the base operating system for the VM.
    Size: Select a VM size to support the workload that you want to run. The size determines factors such as processing power, memory, and storage capacity. Azure offers a wide variety of sizes to support many types of uses and charges an hourly price based on the VM’s size and operating system.
    Authentication type: Choose whether the administrator account will use a username/password or SSH keys for authentication.
    Username: The administrator username for the VM.

    – Open the Management tab and, under Boot diagnostics, select the Enable with custom storage account option so that you can use the Serial Console:
    *Note: Serial Console is currently incompatible with a managed boot diagnostics storage account. To use Serial Console, ensure that you are using a custom storage account.

    – Press Review + create to provision Percona Server for MySQL 8.0, review the settings, and save the private SSH key:

  4. Once the instance is created, you can look around and check all its settings:
  5. Select Go to resource and open the Serial Console to check the VM boot log. The VM is up and running, and no issues were reported:
    *Note: Please pay attention to the message from Percona Server for MySQL 8.0. It provides the end-user with a temporary MySQL password and informs them about the need to change it.

  6. Select Deployment details -> INSTANCE_NAME-ip and copy the instance's public IP address for connecting to the Percona Server for MySQL instance:
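
    If you prefer the command line, the public IP can also be retrieved with the Azure CLI; this is a minimal sketch, and the resource group name shown is a placeholder:

    az vm show --show-details --resource-group percona-rg --name ps-80 --query publicIps --output tsv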

Test Local and Remote Connection to the Percona MySQL Server in Azure

Local connection:

  1. Connect to the server using the previously defined username and IP:
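
    A minimal sketch of the SSH connection, assuming the private key saved during provisioning (the key file name and public IP are placeholders):

    ssh -i ~/.ssh/ps80-azure-key.pem azureuser@<PUBLIC_IP>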
  2. Open mysqld.log and find the temporary password that was set during Percona Server for MySQL instance creation:

    [azureuser@ps-80 ~]$ sudo less /var/log/mysqld.log
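
    Alternatively, the relevant line can be filtered out directly; a small sketch relying on the standard MySQL 8.0 log message format:

    [azureuser@ps-80 ~]$ sudo grep 'temporary password' /var/log/mysqld.log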

  3. Connect to the MySQL database and change the root user's password:

    ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyVeRySeCuReP@ZzW0Rd';

  4. Create a test database with a table in it and add one record:

    CREATE DATABASE ps80;
    CREATE TABLE ps80.example (id INT PRIMARY KEY, message VARCHAR(30));
    INSERT INTO ps80.example VALUES (1, 'Hello Percona-Server 8.0');
    SELECT * FROM ps80.example;

Remote connection:

  1. Create a new test MySQL user for remote connections:

    CREATE USER 'ps80'@'%' IDENTIFIED BY 'MyVeRySeCuReP@ZzW0Rd_2';
    GRANT ALL PRIVILEGES ON ps80.* TO 'ps80'@'%';
    FLUSH PRIVILEGES;
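
    Before connecting remotely, TCP port 3306 must be reachable from the client. A hedged sketch of opening it in the VM's network security group with the Azure CLI (the resource group name is a placeholder):

    az vm open-port --resource-group percona-rg --name ps-80 --port 3306 --priority 1010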
  2. Connect from the remote host to the Percona Server for MySQL instance in Azure and check the test database and the record in the created table:

    mysql -u ps80 -h 52.249.221.105 -p
    ...
    SELECT * FROM ps80.example;

Percona Server for MySQL is trusted by thousands of enterprises to provide better performance and concurrency for their most demanding workloads than other MySQL servers. It delivers greater value to MySQL users through optimized performance, greater scalability and availability, enhanced backups, and increased visibility.
Now Percona Server for MySQL is available in Azure.

Apr 16, 2021

Percona Monitoring and Management 2.16 Brings Microsoft Azure Monitoring via a Technical Preview

This week we release Percona Monitoring and Management (PMM) 2.16, which brings some exciting new additions we'd like to highlight!

Amazon RDS PostgreSQL Monitoring

AWS monitoring in PMM now covers Amazon RDS for PostgreSQL and Aurora PostgreSQL instance types. PMM includes them in the Discovery UI, where they can be added to collect node-related metrics as well as PostgreSQL database performance metrics. Before this release, this was available only for MySQL-based instances on Amazon RDS.

Security Threat Tool Scheduling

Security Threat Tool users can now control the Security Check execution time intervals for groups of checks, move checks between groups, and disable individual checks if necessary, allowing for an even more configurable experience.

Microsoft Azure Discovery and Node Metrics Extraction

Percona Monitoring and Management now monitors Azure instances and can collect Azure database metrics as well as the available system metrics. (Please note that only basic metrics are provided by the Azure Portal.)

This means that as of today our Technical Preview has PMM providing the same level of support for Microsoft Azure Database as a Service (DBaaS) as we have for AWS’s DBaaS (RDS/Aurora on MySQL or PostgreSQL). Users are able to easily discover and add Azure databases for monitoring by PMM complete with node-level monitoring. This feature is available only if you explicitly activate it on the PMM Settings page. Deactivating it will not remove added services from monitoring, but will just hide the ability to discover and add new Microsoft Azure Services. Read more about Microsoft Azure monitoring within Percona Monitoring and Management.

Improvements to Integrated Alerting within PMM

The PMM 2.16 release also brings numerous improvements to the Technical Preview of Integrated Alerting within Percona Monitoring and Management. You can read more on the design and implementation details of this work at that link.

Additional PMM 2.16 release highlights include…

Support for pg_stat_monitor v0.8

Technical Preview: Added compatibility with the pg_stat_monitor plugin v0.8.0. This does not yet expose the plugin's new features in PMM, but it ensures Query Analytics metrics are collected to the same degree they were with version 0.6.0 of the plugin.
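
As a rough illustration of the database side, this is a minimal sketch of enabling pg_stat_monitor on a PostgreSQL server so that Query Analytics can collect its metrics; the package installation and restart commands depend on your distribution, and the service name is a placeholder:

# Sketch only; assumes the pg_stat_monitor extension package is already installed.
psql -U postgres -c "ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_monitor';"
sudo systemctl restart postgresql   # the library must be preloaded before the extension is created
psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS pg_stat_monitor;"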

[DBaaS] Resource planning and prediction (Resource calculator)

The Preview of DBaaS in PMM: while creating a DB cluster, a user can see a prediction of the resources the cluster and all of its components will consume, as well as the current total and available resources in the Kubernetes cluster. Users are warned if the DB cluster they are attempting to create may fail because of insufficient available resources in the Kubernetes cluster.

[DBaaS] Percona Server for MongoDB 1.7.0 Operator Support

The Preview of DBaaS in PMM will be using the recently-released Percona Kubernetes Operator for Percona Server for MongoDB 1.7.0 to create MongoDB clusters.

Conclusion

The release of PMM 2.16 includes many impressive enhancements AND brand new features for our user base. We hope as always that you will continue to let us know your thoughts on these new PMM v2 features as well as any ideas you have for improvement!

Download and try Percona Monitoring and Management today! Read the PMM 2.16 full release notes.

 

Apr 07, 2021

Percona Kubernetes Operators and Azure Blob Storage


Percona Kubernetes Operators simplify the deployment and management of MongoDB and MySQL databases on Kubernetes. Both operators can store backups on S3-compatible storage and leverage Percona XtraBackup and Percona Backup for MongoDB to deliver backup and restore functionality. Neither backup tool works with Azure Blob Storage, which is not compatible with the S3 protocol.

This blog post explains how to run Percona Kubernetes Operators along with MinIO Gateway on Azure Kubernetes Service (AKS) and store backups on Azure Blob Storage.

Setup

Prerequisites:

  • Azure account
  • Azure Blob Storage account and container (a bucket, in AWS terms)
  • Cluster deployed with Azure Kubernetes Service (AKS)

Deploy MinIO Gateway

I have prepared the manifests to deploy the MinIO Gateway to Kubernetes; you can find them in the GitHub repo here.

First, create a separate namespace:

kubectl create namespace minio-gw

Create the secret which contains credentials for Azure Blob Storage:

$ cat minio-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
stringData:
  AZURE_ACCOUNT_NAME: Azure_account_name
  AZURE_ACCOUNT_KEY: Azure_account_key

$ kubectl -n minio-gw apply -f minio-secret.yaml

Apply minio-gateway.yaml from the repository. This manifest does two things:

  1. Creates a MinIO Pod backed by a Deployment object
  2. Exposes this Pod on port 9000 as a ClusterIP through a Service object
$ kubectl -n minio-gw apply -f blog-data/operators-azure-blob/minio-gateway.yaml
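
Before moving on, it is worth checking that the gateway Pod is running and that the Service exposes port 9000; a quick sketch:

$ kubectl -n minio-gw get pods
$ kubectl -n minio-gw get svc minio-gateway-svc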

It is also possible to use Helm Charts and deploy the Gateway with the MinIO Operator. You can read more about it here. Running the MinIO Operator might be a good choice, but it is overkill for this blog post.

Deploy PXC

Get the code from GitHub:

git clone -b v1.7.0 https://github.com/percona/percona-xtradb-cluster-operator

Deploy the bundle with Custom Resource Definitions:

cd percona-xtradb-cluster-operator 
kubectl apply -f deploy/bundle.yaml

Create the Secret object for backups. You should use the same Azure Account Name and Key that you used to set up MinIO:

$ cat deploy/backup-s3.yaml
apiVersion: v1
kind: Secret
metadata:
  name: azure-backup
type: Opaque
data:
  AWS_ACCESS_KEY_ID: BASE64_ENCODED_AZURE_ACCOUNT_NAME
  AWS_SECRET_ACCESS_KEY: BASE64_ENCODED_AZURE_ACCOUNT_KEY
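
The values under data must be base64-encoded; a small sketch of producing them from the plain-text account name and key used for MinIO:

$ echo -n 'Azure_account_name' | base64
$ echo -n 'Azure_account_key' | base64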

Add the storage configuration into cr.yaml under spec.backup.storages:

storages:
  azure-minio:
    type: s3
    s3:
      bucket: test-azure-container
      credentialsSecret: azure-backup
      endpointUrl: http://minio-gateway-svc.minio-gw:9000

  • bucket is the container created on Azure Blob Storage (create it beforehand; a sketch using the Azure CLI follows this list).
  • endpointUrl must point to the MinIO Gateway service that was created in the previous section.
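
A hedged sketch of creating that container ahead of time with the Azure CLI; the container name matches the bucket value above, and the account name and key are the placeholders used earlier:

$ az storage container create --name test-azure-container --account-name Azure_account_name --account-key Azure_account_key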

Deploy the database cluster:

$ kubectl apply -f deploy/cr.yaml
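
Once the Custom Resource is applied, the Operator starts building the cluster; a quick way to watch progress:

$ kubectl get pxc
$ kubectl get pods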

Read more about the installation of the Percona XtraDB Cluster Operator in our documentation.

Take Backups and Restore

To take a backup or restore from one, follow the regular approach by creating the corresponding pxc-backup or pxc-restore Custom Resources in Kubernetes. For example, to take a backup I use the following manifest:

$ cat deploy/backup/backup.yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup1
spec:
  pxcCluster: cluster1
  storageName: azure-minio
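
Applying the manifest and then watching the backup object is enough to confirm that the upload completed; a small sketch:

$ kubectl apply -f deploy/backup/backup.yaml
$ kubectl get pxc-backup backup1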

This creates the pxc-backup Custom Resource object, and the Operator uploads the backup to the container in my Storage account.
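
Restoring follows the same pattern with a pxc-restore object; a hedged sketch of the manifest, assuming the backup created above (field names follow the Operator's bundled restore example):

$ cat <<EOF | kubectl apply -f -
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1
spec:
  pxcCluster: cluster1
  backupName: backup1
EOF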

Read more about backup and restore functionality in the Percona Kubernetes Operator for Percona XtraDB Cluster documentation.

Conclusion

Even though Azure Blob Storage is not S3-compatible, the Cloud Native landscape provides production-ready tools for seamless integration. MinIO Gateway works for both Percona Kubernetes Operators for MySQL and MongoDB, enabling S3-like backup and restore functionality.

The Percona team is committed to delivering smooth integration of its software products with all major clouds. Adding support for Azure Blob Storage is on the roadmap of Percona XtraBackup and Percona Backup for MongoDB, as is certification on Azure Kubernetes Service for both operators.

Mar 02, 2021

Microsoft launches Azure Percept, its new hardware and software platform to bring AI to the edge

Microsoft today announced Azure Percept, its new hardware and software platform for bringing more of its Azure AI services to the edge. Percept combines Microsoft’s Azure cloud tools for managing devices and creating AI models with hardware from Microsoft’s device partners. The general idea here is to make it far easier for all kinds of businesses to build and implement AI for things like object detection, anomaly detections, shelf analytics and keyword spotting at the edge by providing them with an end-to-end solution that takes them from building AI models to deploying them on compatible hardware.

To kickstart this, Microsoft also today launches a hardware development kit with an intelligent camera for vision use cases (dubbed Azure Percept Vision). The kit features hardware-enabled AI modules for running models at the edge, but it can also be connected to the cloud. Users will also be able to trial their proofs-of-concept in the real world because the development kit conforms to the widely used 80/20 T-slot framing architecture.

In addition to Percept Vision, Microsoft is also launching Azure Percept Audio for audio-centric use cases.

Azure Percept devices, including Trust Platform Module, Azure Percept Vision and Azure Percept Audio

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” said Roanne Sones, the corporate vice president of Microsoft’s edge and platform group. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Percept customers will have access to Azure’s cognitive service and machine learning models and Percept devices will automatically connect to Azure’s IoT hub.

Microsoft says it is working with silicon and equipment manufacturers to build an ecosystem of “intelligent edge devices that are certified to run on the Azure Percept platform.” Over the course of the next few months, Microsoft plans to certify third-party devices for inclusion in this program, which will ideally allow its customers to take their proofs-of-concept and easily deploy them to any certified devices.

“Anybody who builds a prototype using one of our development kits, if they buy a certified device, they don’t have to do any additional work,” said Christa St. Pierre, a product manager in Microsoft’s Azure edge and platform group.

St. Pierre also noted that all of the components of the platform will have to conform to Microsoft’s responsible AI principles — and go through extensive security testing.



Feb 09, 2021

Is overseeing cloud operations the new career path to CEO?

When Amazon announced last week that founder and CEO Jeff Bezos planned to step back from overseeing operations and shift into an executive chairman role, it also revealed that AWS CEO Andy Jassy, head of the company’s profitable cloud division, would replace him.

As Bessemer partner Byron Deeter pointed out on Twitter, Jassy’s promotion was similar to Satya Nadella’s ascent at Microsoft: in 2014, he moved from executive VP in charge of Azure to the chief exec’s office. Similarly, Arvind Krishna, who was promoted to replace Ginni Rometti as IBM CEO last year, also was formerly head of the company’s cloud business.

Could Nadella’s successful rise serve as a blueprint for Amazon as it makes a similar transition? While there are major differences in the missions of these companies, it’s inevitable that we will compare these two executives based on their former jobs. It’s true that they have an awful lot in common, but there are some stark differences, too.

Replacing a legend

For starters, Jassy is taking over for someone who founded one of the world’s biggest corporations. Nadella replaced Steve Ballmer, who had taken over for the company’s face, Bill Gates. Holger Mueller, an analyst at Constellation Research, says this notable difference could have a huge impact for Jassy with his founder boss still looking over his shoulder.

“There’s a lot of similarity in the two situations, but Satya was a little removed from the founder Gates. Bezos will always hover and be there, whereas Gates (and Ballmer) had retired for good. [ … ] It was clear [they] would not be coming back. [ … ] For Jassy, the owner could [conceivably] come back anytime,” Mueller said.

But Andrew Bartels, an analyst at Forrester Research, says it’s not a coincidence that both leaders were plucked from the cloud divisions of their respective companies, even if it was seven years apart.

“In both cases, these hyperscale business units of Microsoft and Amazon were the fastest-growing and best-performing units of the companies. [ … ] In both cases, cloud infrastructure was seen as a platform on top of which and around which other cloud offerings could be developed,” Bartels said. The companies both believe that the leaders of these two growth engines were best suited to lead the company into the future.

Jan 29, 2021

Subscription-based pricing is dead: Smart SaaS companies are shifting to usage-based models

Software buying has evolved. The days of executives choosing software for their employees based on IT compatibility or KPIs are gone. Employees now tell their boss what to buy. This is why we’re seeing more and more SaaS companies — Datadog, Twilio, AWS, Snowflake and Stripe, to name a few — find success with a usage-based pricing model.


The usage-based model allows a customer to start at a low cost, minimizing friction to getting started, while still preserving the ability to monetize that customer over time because the price is directly tied to the value the customer receives. And because the number of users who can access the software is not limited, customers are able to find new use cases, which leads to more long-term success and higher lifetime value.

While we aren’t going 100% usage-based overnight, looking at some of the megatrends in software —  automation, AI and APIs — the value of a product normally doesn’t scale with more logins. Usage-based pricing will be the key to successful monetization in the future. Here are four top tips to help companies scale to $100+ million ARR with this model.

1. Land-and-expand is real

Usage-based pricing is in all layers of the tech stack. Though it was pioneered in the infrastructure layer (think: AWS and Azure), it’s becoming increasingly popular for API-based products and application software — across infrastructure, middleware and applications.

Chart: usage-based pricing for API-based products and application software across infrastructure, middleware and applications. Image Credits: Kyle Poyar / OpenView

Some fear that investors will hate usage-based pricing because customers aren’t locked into a subscription. But, investors actually see it as a sign that customers are seeing value from a product and there’s no shelf-ware.

In fact, investors are increasingly rewarding usage-based companies in the market. Usage-based companies are trading at a 50% revenue multiple premium over their peers.

Investors especially love how the usage-based pricing model pairs with the land-and-expand business model. And of the IPOs over the last three years, seven of the nine with the best net dollar retention have a usage-based model. Snowflake in particular is off the charts with a 158% net dollar retention.

Oct 20, 2020

Microsoft debuts Azure Space to cater to the space industry, partners with SpaceX for Starlink data center broadband

Microsoft is taking its Azure cloud computing platform to the final frontier — space. It now has a dedicated business unit called Azure Space for that purpose, made up of industry heavyweights and engineers who are focused on space-sector services, including simulation of space missions, gathering and interpreting satellite data to provide insights and providing global satellite networking capabilities through new and expanded partnerships.

One of Microsoft’s new partners for Azure Space is SpaceX, the progenitor and major current player in the so-called “New Space” industry. SpaceX will be providing Microsoft with access to its Starlink low-latency satellite-based broadband network for Microsoft’s new Azure Modular Datacenter (MDC) — essentially an on-demand container-based data center unit that can be deployed in remote locations, either to operate on their own or boost local capabilities.

Image Credits: Microsoft

The MDC is a contained unit, and can operate off-grid using its own satellite network connectivity add-on. It's similar in concept to the company's work on underwater data centers, but keeping it on the ground obviously opens up more opportunities in terms of locating it where people need it, rather than having to be proximate to an ocean or sea.

The other big part of this announcement focuses on space preparedness via simulation. Microsoft revealed the Azure Orbital Emulator today, which provides an emulated computing environment for testing satellite constellation operations, using both software and hardware. It basically aims to provide conditions as close to in-space as possible on the ground, in order to get everything ready for coordinating large, interconnected constellations of automated satellites in low Earth orbit, an increasing need as more defense agencies and private companies pursue this approach versus the legacy method of relying on one, two or just a few large geosynchronous spacecraft.

Image Credits: Microsoft

Microsoft says the goal with the Orbital Emulator is to train AI for use on orbital spacecraft before those spacecraft are actually launched — from the early development phase, right up to working with production hardware on the ground before it takes its trip to space. That’s definitely a big potential competitive advantage, because it should help companies spot even more potential problems early on while they’re still relatively easy to fix (not the case on orbit).

This emulated environment for on-orbit mission prep is already in use by Azure Government customers, the company notes. It’s also looking for more partners across government and industry for space-related services, including communication, national security, satellite services including observation and telemetry and more.

Aug 05, 2020

Amazon inks cloud deal with Airtel in India

Amazon has found a new partner to expand the reach of its cloud services business — AWS — in India, the world’s second largest internet market.

On Wednesday, the e-commerce giant announced it has partnered with Bharti Airtel, the third-largest telecom operator in India with more than 300 million subscribers, to sell a wide range of AWS offerings under the Airtel Cloud brand to small, medium, and large businesses in the country.

The deal could help AWS, which leads the cloud market in India, further expand its dominance in the country. The move follows a similar deal that Reliance Jio, India’s largest telecom operator, which has raised more than $20 billion in recent months from Google, Facebook and a roster of other high-profile investors, struck with Microsoft last year to sell cloud services to small businesses. The two announced a 10-year partnership to “serve millions of customers.”

Airtel, which serves over 2,500 large enterprises and more than a million emerging businesses, itself signed a similar cloud deal with Google in January this year. That partnership is still in place, Airtel said.

“AWS brings over 175 services to the table. We pretty much support any workload on the cloud. We have the largest and the most vibrant community of customers,” said Puneet Chandok, President of AWS in India and South Asia, on a call with reporters Wednesday noon.

The two companies, which signed a similar agreement in 2015, will also collaborate on building new services and help existing customers migrate to Airtel Cloud, they said.

Today’s deal illustrates Airtel’s push to build businesses beyond its telecom venture, said Harmeen Mehta, Global CIO and Head of Cloud and Security Business at Airtel, on the call. Last month, Airtel partnered with Verizon — TechCrunch’s parent company — to sell BlueJeans video conferencing service to business customers in India.

Deals with carriers were very common a decade ago in India as tech giants rushed to amass users in the country. Replicating a similar strategy now illustrates the current phase of cloud adoption in the nation.

Nearly half a billion people in India came online last decade. And slowly, small businesses and merchants are also beginning to use digital tools, storage services, and accept online payments.

India has emerged as one of the leading growth markets for cloud services. The country’s public cloud services market is estimated to reach $7.1 billion by 2024, according to research firm IDC.

Jul 31, 2020

Even as cloud infrastructure growth slows, revenue rises over $30B for quarter

The cloud market is coming into its own during the pandemic as the novel coronavirus forced many companies to accelerate plans to move to the cloud, even while the market was beginning to mature on its own.

This week, the big three cloud infrastructure vendors — Amazon, Microsoft and Google — all reported their earnings, and while the numbers showed that growth was beginning to slow down, revenue continued to increase at an impressive rate, surpassing $30 billion for a quarter for the first time, according to Synergy Research Group numbers.

May 06, 2020

Microsoft to open first data center in New Zealand as cloud usage grows

In spite of being in the midst of a pandemic sowing economic uncertainty, one area that continues to thrive is cloud computing. Perhaps that explains why Microsoft, which saw Azure grow 59% in its most recent earnings report, announced plans to open a new data center in New Zealand once it receives approval from the Overseas Investment Office.

“This significant investment in New Zealand’s digital infrastructure is a testament to the remarkable spirit of New Zealand’s innovation and reflects how we’re pushing the boundaries of what is possible as a nation,” Vanessa Sorenson, general manager at Microsoft New Zealand said in a statement.

The company sees this project against the backdrop of accelerating digital transformation that we are seeing as the pandemic forces companies to move to the cloud more quickly with employees often spread out and unable to work in offices around the world.

As CEO Satya Nadella noted on Twitter, this should help companies in New Zealand that are in the midst of this transformation. “Now more than ever, we’re seeing the power of digital transformation, and today we’re announcing a new datacenter region in New Zealand to help every organization in the country build their own digital capability,” Nadella tweeted.

The company wants to do more than simply build a data center. It will make this part of a broader investment across the country, including skills training and reducing the environmental footprint of the data center.

Once New Zealand comes on board, the company will boast 60 regions covering 140 countries around the world. The new data center won’t just be about Azure, either. It will help fuel usage of Office 365 and the Dynamics 365 back-office products, as well.

Powered by WordPress | Theme: Aeros 2.0 by TheBuckmaker.com