Oct
07
2022
--

Deploy Percona Server for MongoDB Replica Set With Ansible


In this blog post, we will discuss deploying a Percona Server for MongoDB (PSMDB) replica set with Ansible*. Ansible is a popular automation tool that can configure systems, deploy software, and orchestrate more advanced tasks like continuous deployments or zero-downtime rolling updates. Its main goals are simplicity and ease of use, and by default it uses the SSH protocol to connect to servers. To learn more about it, please go through the documentation.

With Ansible, you can easily configure and deploy a PSMDB replica set (RS). Follow the Ansible installation instructions for your operating system.

Prerequisites

  • An OS user must be created on each database server, usually "percona".
  • The OS user (in our case, "percona") needs password-less sudo on the DB servers.
  • Ansible installed on the control node.
  • SSH access from the control node to all the DB servers, since Ansible uses the SSH protocol to connect to them.

Below is the inventory file, in which we specify the hosts' information.

Inventory:

inventory

# Ansible inventory file
[rs1]
ip-172-31-93-193.ec2.internal mongodb_primary=True
ip-172-31-85-203.ec2.internal
ip-172-31-80-251.ec2.internal
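
Before running any playbook, it's worth confirming that Ansible can reach every host in this inventory over SSH and that password-less sudo works for the OS user. A minimal check from the control node (a sketch, assuming the "percona" OS user from the prerequisites):

# Ad-hoc ping of all hosts in the inventory over SSH
ansible -i inventory all -u percona -m ping
# Verify password-less sudo by becoming root on each DB server
ansible -i inventory all -u percona -b -m command -a "whoami"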

 

The global variable file all is defined under group_vars:

group_vars/all
---
##################
## General configuration
##################
packages:
  - percona-server-mongodb

#repo version to install
repo_version: psmdb-42
mongo_root_password: your_secrets
rs_port: 27017
mongod_log_path: /var/log/mongo
mongod_path: /var/lib/mongo
#mongo_extra_args: Define argument like --tls --tlsCertificateKeyFile /path/client.pem --tlsCAFile /path/caToValidateServerCertificates.pem

# use_tls: true/false
use_tls: false

# only relevant if use_tls: false
keyfile_path: /var/lib/mongo
keyFile_location: /var/lib/mongo/keyfile

# openssl rand -base64 741 if you want to generate a new one
keyfile_content: |
  8pYcxvCqoe89kcp33KuTtKVf5MoHGEFjTnudrq5BosvWRoIxLowmdjrmUpVfAivh
  CHjqM6w0zVBytAxH1lW+7teMYe6eDn2S/O/1YlRRiW57bWU3zjliW3VdguJar5i9

keyfile_encryption: false
encryption_key_content: |
  vvMTZ3dnSbG7wc6DkPpt+rp3Cc+jF8lJsYlq6QE1yEM=
# path to the keyfile to encrypt
encryption_keyfile: /opt/mongodb.key
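
These variables are consumed by templates/mongod-replicaset.conf.j2, the template that the playbook below copies to each replica set member as /etc/mongod.conf. The actual template lives in the GitHub repository; what follows is only a simplified sketch of what it could look like (the bindIp value is an assumption):

# templates/mongod-replicaset.conf.j2 (illustrative sketch only)
storage:
  dbPath: {{ mongod_path }}
systemLog:
  destination: file
  path: {{ mongod_log_path }}/mongod.log
  logAppend: true
net:
  port: {{ rs_port }}
  bindIp: 0.0.0.0   # assumption: members must be reachable on their internal hostnames
replication:
  replSetName: rs1
security:
  authorization: enabled
{% if not use_tls %}
  keyFile: {{ keyFile_location }}
{% endif %}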

 

The playbook main.yml for deploying the PSMDB RS is below. Here, we are sharing the high-level tasks that deploy the replica set; to get the complete main.yml playbook, please check the GitHub link.

main.yml

---
- name: install percona rpms, Deploy PSMDB RS
  hosts: all
  become: yes
  tasks:
    - name: install percona key
      rpm_key:
        key: https://downloads.percona.com/downloads/RPM-GPG-KEY-percona
        state: present
      when: ansible_os_family == "RedHat" 

    - name: install percona repo
      package:
        name: "https://repo.percona.com/yum/percona-release-latest.noarch.rpm"
        state: present
      when: ansible_os_family == "RedHat" 

    - name: install deb files from repo
      become: yes
      block:
        - name: download percona repo
          get_url:
            url: "https://repo.percona.com/apt/percona-release_latest.focal_all.deb"
            dest: /home/percona
          when: ansible_os_family == "Debian"

        - name: install repo
          apt:
            deb: /home/percona/percona-release_latest.focal_all.deb
          when: ansible_os_family == "Debian"

        - name: Update and upgrade apt packages
          apt:
            update_cache: yes
          when: ansible_os_family == "Debian" 

    - name: Enable specific version
      shell: "/usr/bin/percona-release enable {{ repo_version }} && /usr/bin/percona-release enable tools"

    - name: install packages
      package:
        name: "{{ item }}"
        state: present
      with_items: "{{ packages }}"

#Deploy the PSMDB Replica set

    - name: copy mongod.conf to rs member
      become: yes
      template:
        src: templates/mongod-replicaset.conf.j2
        dest: /etc/mongod.conf
        owner: root
        group: root
        mode: 0644

    - name: copy init-rs.js file to initialize the replica set
      template:
        src: templates/init-rs.js.j2
        dest: /tmp/init-rs.js
        mode: 0644
      when: mongodb_primary is defined and mongodb_primary

    - name: bootstrap replica sets
      block:
        - name: set up keyfile if not using ssl
          become: yes
          copy:
            dest: "{{ keyFile_location }}"
            content: "{{ keyfile_content }}"
            owner: mongod
            group: root
            mode: 0600
          when: not use_tls | bool

        - name: start mongod on rs member
          become: yes
          service:
            name: mongod
            state: restarted

        - name: wait for few secs so servers finish starting
          pause:
            seconds: 15

        - name: run the init command for rs 
          shell: mongo {{ mongo_extra_args | default("") }} --port {{ rs_port }} < /tmp/init-rs.js
          when: mongodb_primary is defined and mongodb_primary

        - name: wait a few secs so replica sets finish initializing
          pause:
            seconds: 15

#Add a root user to the MongoDB

    - name: create a root user for the RS
      block:
        - name: prepare the command to create root user
          template:
            src: templates/createRoot.js.j2
            dest: /tmp/createRoot.js
            mode: 0644
          when: mongodb_primary is defined and mongodb_primary

        - name: run the command to create a root user
          shell: mongo admin {{ mongo_extra_args | default("") }} --port {{ rs_port }} < /tmp/createRoot.js
          when: mongodb_primary is defined and mongodb_primary
...
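
The playbook renders two small mongo shell scripts from Jinja2 templates, templates/init-rs.js.j2 and templates/createRoot.js.j2, which are not shown here (they live in the GitHub repository). Below are hedged sketches of what they might contain, based on the group_vars above and the rs.conf() output shown later in this post; the repository versions may differ.

// templates/init-rs.js.j2 (illustrative sketch only)
// Initiates the replica set from the designated primary; the primary gets priority 10,
// matching the rs.conf() output further below.
rs.initiate({
  _id: "rs1",
  members: [
{% for host in groups['rs1'] %}
    { _id: {{ loop.index }}, host: "{{ host }}:{{ rs_port }}", priority: {{ 10 if hostvars[host]['mongodb_primary'] | default(false) else 1 }} },
{% endfor %}
  ]
})

// templates/createRoot.js.j2 (illustrative sketch only)
// Creates the administrative user whose password comes from mongo_root_password in group_vars/all.
db.getSiblingDB("admin").createUser({
  user: "root",
  pwd: "{{ mongo_root_password }}",
  roles: [ { role: "root", db: "admin" } ]
})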

 

Execute the playbook as below:

ansible-playbook -i inventory main.yml

 

If the playbook executes successfully, you'll see output like the following (trimmed here to show only the play recap):

[centos@ip-172-31-17-26 testAnsible]$ ansible-playbook -i inventory main.yml
PLAY [install percona rpms, Deploy PSMDB RS] *****************************************************************************
.
.
.
PLAY RECAP *****************************************************************************

ip-172-31-80-251.ec2.internal : ok=11   changed=6    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0

ip-172-31-85-203.ec2.internal : ok=11   changed=6    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0

ip-172-31-93-193.ec2.internal : ok=17   changed=10   unreachable=0    failed=0    skipped=3    rescued=0    ignored=0

 

Let's connect with the mongo shell and verify that the PSMDB RS has been set up properly.

[centos@ip-172-31-93-193 ~]$ mongo -uroot -p
Percona Server for MongoDB shell version v4.2.21-21
Enter password:
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("41f13673-edf8-43e6-8b0e-040f08640f26") }
Percona Server for MongoDB server version: v4.2.21-21
rs1:PRIMARY> rs.conf().members
[
	{
		"_id" : 1,
		"host" : "ip-172-31-93-193.ec2.internal:27017",
		"arbiterOnly" : false,
		"buildIndexes" : true,
		"hidden" : false,
		"priority" : 10,
		"tags" : {

		},
		"slaveDelay" : NumberLong(0),
		"votes" : 1
	},
	{
		"_id" : 2,
		"host" : "ip-172-31-85-203.ec2.internal:27017",
		"arbiterOnly" : false,
		"buildIndexes" : true,
		"hidden" : false,
		"priority" : 1,
		"tags" : {

		},
		"slaveDelay" : NumberLong(0),
		"votes" : 1
	},
	{
		"_id" : 3,
		"host" : "ip-172-31-80-251.ec2.internal:27017",
		"arbiterOnly" : false,
		"buildIndexes" : true,
		"hidden" : false,
		"priority" : 1,
		"tags" : {

		},
		"slaveDelay" : NumberLong(0),
		"votes" : 1
	}
]
rs1:PRIMARY>

 

From the above, we can see that the PSMDB RS has been deployed successfully: our Ansible automation for deploying a replica set worked as expected.

Conclusion

Ansible is a popular, easy-to-use automation tool that plays a major role in configuring systems, deploying software, and continuous deployment. With Ansible, we deployed a PSMDB RS simply by defining tasks in a playbook.

We also encourage you to try our products for MongoDB like Percona Server for MongoDB, Percona Backup for MongoDB, and Percona Operator for MongoDB.

*Disclaimer: This blog explains how Ansible can be used to deploy a Percona MongoDB replica set. Ansible is not a Percona product and is not supported by Percona. The information in this blog post is intended for educational purposes only. 

Percona Distribution for MongoDB is a freely available MongoDB database alternative, giving you a single solution that combines the best and most important enterprise components from the open source community, designed and tested to work together.

Download Percona Distribution for MongoDB Today!

Jun
01
2021
--

How to Generate an Ansible Inventory with Terraform


Creating and maintaining an inventory file is one of the common tasks we have to deal with when working with Ansible. When dealing with a large number of hosts, it can be complex to handle this task manually.

There are some plugins available to automatically generate inventory files by interacting with most cloud providers’ APIs. In this post, however, I am going to show you how to generate and maintain an inventory file directly from within the Terraform code.

To illustrate this, we will generate an Ansible inventory file to deploy a MongoDB sharded cluster on the Google Cloud Platform. If you want to know more about Ansible provisioning, you might want to check out my talk at Percona Live Online 2021.

The Ansible Inventory

The format of the inventory file is closely related to the actual Ansible code that is used to deploy the software. The structure we need is defined as follows:

[cfg] 
tst-mongo-cfg00 mongodb_primary=True 
tst-mongo-cfg01 
tst-mongo-cfg02 

[shard0] 
tst-mongo-shard00svr0 mongodb_primary=True 
tst-mongo-shard00svr1 
tst-mongo-shard00svr2 

[shard1] 
tst-mongo-shard01svr0 mongodb_primary=True 
tst-mongo-shard01svr1 
tst-mongo-shard01svr2 

[mongos]
tst-mongo-router00

We have to specify groups for each shard's replica set (shardN), as well as for the config servers (cfg) and the mongos routers (mongos).

The mongodb_primary tag is used to designate a primary server by giving it a higher priority in the replica set configuration.

Now that we know the structure we need, let’s start by provisioning our hardware resources with Terraform.

Creating Instances in GCP

Here is an example of a Terraform script that generates the instances for the shards. The Terraform count meta-argument is used to create multiple copies of a given resource, so we compute it dynamically from the number of shards and replicas per shard.

resource "google_compute_instance" "shard" {
  name = "tst-mongo-shard0${floor(count.index / var.shardsvr_replicas )}svr${count.index % var.shardsvr_replicas}"
  machine_type = var.shardsvr_type
  zone  = data.google_compute_zones.available.names[count.index % var.shardsvr_replicas]
  count = var.shard_count * var.shardsvr_replicas
  labels = { 
    ansible-group = floor(count.index / var.shardsvr_replicas ),
    ansible-index = count.index % var.shardsvr_replicas,
  }  
  boot_disk {
    initialize_params {
    image = lookup(var.centos_amis, var.region)
    }
  }    
  network_interface {
    network = google_compute_network.vpc-network.id
    subnetwork = google_compute_subnetwork.vpc-subnet.id
  }
}

The above code shuffles the instances through the available zones within a region for high availability. As you can see, there are some expressions used to generate the name and the labels. I will get to this shortly. We can use similar scripts for creating the instances for the config servers and the mongos nodes.
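
The shard resource above references a couple of objects that are not shown, such as the available-zones data source and the VPC network, and similar resources exist for the config servers and mongos nodes. As a hedged sketch (the names, count, and machine type variable here are assumptions), the zones data source and a mongos instance resource could look roughly like this:

data "google_compute_zones" "available" {
  region = var.region
}

resource "google_compute_instance" "mongos" {
  name         = "tst-mongo-router0${count.index}"
  machine_type = var.shardsvr_type   # assumption: a dedicated mongos machine type variable is equally plausible
  zone         = data.google_compute_zones.available.names[count.index % length(data.google_compute_zones.available.names)]
  count        = 1
  labels = {
    ansible-group = "mongos"
  }
  boot_disk {
    initialize_params {
      image = lookup(var.centos_amis, var.region)
    }
  }
  network_interface {
    network    = google_compute_network.vpc-network.id
    subnetwork = google_compute_subnetwork.vpc-subnet.id
  }
}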

This is an example of the variables file:

variable "centos_amis" {
  description = "CentOS AMIs by region"
  default = {
    northamerica-northeast1 = "centos-7-v20210420"
  }

variable "shardsvr_type" {
	default = "e2-small"
	description = "instance type of the shard server"
  }

variable "region" {
  type    = string
  default = "us-east1"
  }

variable "shard_count" {
  default = "2"
  description = "Number of shards"
  }

variable "shardsvr_replicas" {
  default = "3"
	description = "How many replicas per shard"
  } 
}

Identifying the Resources

We need an easy way to identify our resources in order to work with them effectively. One way is to define labels in the Terraform code that creates the instances. We could also use instance tagging instead.

For the shards, the key part is that we need to generate a group for each shard dynamically, depending on how many shards we are provisioning. We need some extra information:

  • the group names (i.e. shard0, shard1) for each host
  • which member within a group we are working with

We define two labels for the purpose of iterating through the resources:

labels = { 
    ansible-group = floor(count.index / var.shardsvr_replicas ),
    ansible-index = count.index % var.shardsvr_replicas,
  }

For the ansible-group label, the variable shardsvr_replicas defines how many members each shard's replica set has (e.g. 3). So for 2 shards (six instances in total), the expression gives us the following output:

floor(count.index / 3)
0
0
0
1
1
1

These values will be useful for matching each host with the corresponding group.

Now let’s see how to generate the list of servers per group. For the ansible-index, the expression gives us the following output:

count.index % 3
0
1
2
0
1
2
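
Putting the two labels together with the name expression from the instance resource, the six shard instances (shard_count = 2, shardsvr_replicas = 3) map out as follows:

count.index   ansible-group   ansible-index   name
0             0               0               tst-mongo-shard00svr0
1             0               1               tst-mongo-shard00svr1
2             0               2               tst-mongo-shard00svr2
3             1               0               tst-mongo-shard01svr0
4             1               1               tst-mongo-shard01svr1
5             1               2               tst-mongo-shard01svr2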

For mongos and config servers, it is easier:

labels = { 
    ansible-group = "mongos"
  }

labels = { 
    ansible-group = "cfg"
  }

So with the above, we should have every piece of information we need.

Creating an Output File

Terraform provides the local_file resource to generate a file. We will use this to render our inventory file, based on the contents of the inventory.tmpl template.

We have to pass the values we need as arguments to the template, as it is not possible to reference outside variables from within it. In addition to the counters we had defined, we need the actual hostnames and the total number of shards. This is what it looks like:

resource "local_file" "ansible_inventory" {
  content = templatefile("inventory.tmpl",
    {
     ansible_group_shards = google_compute_instance.shard.*.labels.ansible-group,
     ansible_group_index = google_compute_instance.shard.*.labels.ansible-index,
     hostname_shards = google_compute_instance.shard.*.name,
     ansible_group_cfg = google_compute_instance.cfg.*.labels.ansible-group,
     hostname_cfg = google_compute_instance.cfg.*.name,
     ansible_group_mongos = google_compute_instance.mongos.*.labels.ansible-group,
     hostname_mongos = google_compute_instance.mongos.*.name,
     number_of_shards = range(var.shard_count)
    }
  )
  filename = "inventory"
}

The output will be a file called inventory, stored on the machine where we are running Terraform.

Template Generation

On the template, we need to loop through the different groups and print the information we have in the proper format.

To figure out whether to print the mongodb_primary attribute, we test the loop index for the first value and print the empty string otherwise. For the actual shards, we can generate the group name easily using our previously defined variable, and check if a host should be included in the group.

Anything between %{ } is a directive, while ${ } substitutes the arguments we fed to the template. The ~ removes the extra newlines.

[cfg]
%{ for index, group in ansible_group_cfg ~}
${ hostname_cfg[index] } ${ index == 0 ? "mongodb_primary=True" : "" }
%{ endfor ~}

%{ for shard_index in number_of_shards ~}
[shard${shard_index}]
%{ for index, group in ansible_group_shards ~}
${ group == tostring(shard_index) && ansible_group_index[index] == "0" ? join(" ", [ hostname_shards[index], "mongodb_primary=True\n" ]) : "" ~} 
${ group == tostring(shard_index) && ansible_group_index[index] != "0" ? join("", [ hostname_shards[index], "\n" ]) : "" ~} 
%{ endfor ~}
%{ endfor ~}

[mongos]
%{ for index, group in ansible_group_mongos ~}
${hostname_mongos[index]} 
%{ endfor ~}

Final Words

Terraform and Ansible together is a powerful combination for infrastructure provisioning and management. We can automate everything from hardware deployment to software installation.

The ability to generate a file from Terraform is quite handy for the purpose of creating our Ansible inventory. Since it is responsible for the actual provisioning, Terraform has all the information we need, although it can be a bit tricky to get the templating right.
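
As a rough end-to-end flow (deploy-mongodb.yml is just a placeholder name for whatever playbook consumes this inventory):

# Provision the GCP instances; the local_file resource writes ./inventory during the apply
terraform init
terraform apply

# Point Ansible at the generated inventory
ansible-playbook -i inventory deploy-mongodb.yml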

Dec
02
2020
--

Fylamynt raises $6.5M for its cloud workflow automation platform

Fylamynt, a new service that helps businesses automate their cloud workflows, today announced both the official launch of its platform as well as a $6.5 million seed round. The funding round was led by Google’s AI-focused Gradient Ventures fund. Mango Capital and Point72 Ventures also participated.

At first glance, the idea behind Fylamynt may sound familiar. Workflow automation has become a pretty competitive space, after all, and the service helps developers connect their various cloud tools to create repeatable workflows. We're not talking about your standard IFTTT- or Zapier-like integrations between SaaS products, though. The focus of Fylamynt is squarely on building infrastructure workflows. While that may sound familiar, too, with tools like Ansible and Terraform automating a lot of that already, Fylamynt sits on top of those and integrates with them.


“Some time ago, we used to do Bash and scripting — and then [ … ] came Chef and Puppet in 2006, 2007. SaltStack, as well. Then Terraform and Ansible,” Fylamynt co-founder and CEO Pradeep Padala told me. “They have all done an extremely good job of making it easier to simplify infrastructure operations so you don’t have to write low-level code. You can write a slightly higher-level language. We are not replacing that. What we are doing is connecting that code.”

So if you have a Terraform template, an Ansible playbook and maybe a Python script, you can now use Fylamynt to connect those. In the end, Fylamynt becomes the orchestration engine to run all of your infrastructure code — and then allows you to connect all of that to the likes of Datadog, Splunk, PagerDuty, Slack and ServiceNow.


The service currently connects to Terraform, Ansible, Datadog, Jira, Slack, Instance, CloudWatch, CloudFormation and your Kubernetes clusters. The company notes that some of the standard use cases for its service are automated remediation, governance and compliance, as well as cost and performance management.

The company is already working with a number of design partners, including Snowflake.

Fylamynt CEO Padala has quite a bit of experience in the infrastructure space. He co-founded ContainerX, an early container-management platform, which later sold to Cisco. Before starting ContainerX, he was at VMware and DOCOMO Labs. His co-founders, VP of Engineering Xiaoyun Zhu and CTO David Lee, also have deep expertise in building out cloud infrastructure and operating it.

“If you look at any company — any company building a product — let’s say a SaaS product, and they want to run their operations, infrastructure operations very efficiently,” Padala said. “But there are always challenges. You need a lot of people, it takes time. So what is the bottleneck? If you ask that question and dig deeper, you’ll find that there is one bottleneck for automation: that’s code. Someone has to write code to automate. Everything revolves around that.”

Fylamynt aims to take the effort out of that by allowing developers to either write Python and JSON to automate their workflows (think “infrastructure as code” but for workflows) or to use Fylamynt’s visual no-code drag-and-drop tool. As Padala noted, this gives developers a lot of flexibility in how they want to use the service. If you never want to see the Fylamynt UI, you can go about your merry coding ways, but chances are the UI will allow you to get everything done as well.

One area the team is currently focusing on — and will use the new funding for — is building out its analytics capabilities that can help developers debug their workflows. The service already provides log and audit trails, but the plan is to expand its AI capabilities to also recommend the right workflows based on the alerts you are getting.

“The eventual goal is to help people automate any service and connect any code. That’s the holy grail. And AI is an enabler in that,” Padala said.

Gradient Ventures partner Muzzammil “MZ” Zaveri echoed this. “Fylamynt is at the intersection of applied AI and workflow automation,” he said. “We’re excited to support the Fylamynt team in this uniquely positioned product with a deep bench of integrations and a nonprescriptive builder approach. The vision of automating every part of a cloud workflow is just the beginning.”

The team, which now includes about 20 employees, plans to use the new round of funding, which closed in September, to focus on its R&D, build out its product and expand its go-to-market team. On the product side, that specifically means building more connectors.

The company offers both a free plan as well as enterprise pricing and its platform is now generally available.

Oct
28
2016
--

Blog Series: MySQL Configuration Management


MySQL configuration management remains a hot topic, as I've noticed on numerous occasions during my conversations with customers.

I thought it might be a good idea to start a blog series that goes deeper in detail into some of the different options, and what modules potentially might be used for managing your MySQL database infrastructure.

Configuration management has been around since way before the beginning of my professional career. I, myself, originally began working on integrating an infrastructure with my colleagues using Puppet.

Why is configuration management important?
  • Reproducibility: It gives us the ability to provision any environment in an automated way, confident that the new environment will contain the same configuration.
  • Fast restoration: Thanks to reproducibility, you can quickly provision machines in case of disaster. This lets you focus on restoring your actual data instead of worrying about the deployment and configuration of your machines.
  • Integral part of continuous deployment: Continuous deployment is a term everyone loves; deploying changes rapidly and automatically after automated regression testing requires a configuration management solution.
  • Compliance and security: Solutions like Puppet and Chef maintain and enforce configuration parameters across your infrastructure. This can sound bothersome at first, but it's essential for maintaining a well-configured environment.
  • Documented environment: Although reading someone's Puppet code can potentially harm you beyond insanity, it provides you with the real truth about your infrastructure.
  • Efficiency and manageability: Configuration management can automate repetitive tasks (for example, user grants, database creation, configuration variables), as well as security updates, service restarts, etc. This means less work and faster rollouts.
Which players are active in this field?

The most popular open source solutions are Puppet, Chef, Ansible, and CFEngine (among others). In this series, we will go deeper into the first three of them.

Let’s first start by giving you a quick, high-level introduction.

Puppet

Puppet is a language used to describe the desired state of an environment. The Puppet client reads the catalog of the expected state from the server and enforces these changes on the client. The system works based on a client/server principle.

By default, Puppet has four essential components:

  • Puppet Server: A Java virtual machine offering Puppet’s core services.
  • Puppet Agent: A client library that requests configuration catalog info from the puppet-server.
  • Hiera: A key-value lookup database, which can store and modify values for specific hosts.
  • Facter: An application that keeps an inventory of the local node variables.

How can you integrate Puppet in your MySQL infrastructure?

There are Puppet modules that allow you and your team to create users and databases, and to install and configure MySQL; the next post in this series looks at the Puppet Forge MySQL module.

Probably my old "code from hell" module is still somewhere out there.

Chef

Chef also provides a declarative language (like Puppet), based on Ruby, which allows you to write cookbooks for the technologies you want to integrate. Chef is also based on a client/server model: the clients are the Chef nodes, and the server manages the cookbooks, catalogs, and recipes.

In short, Chef consists of:

  • Chef server: Manages the multiple cookbooks and the catalog
  • Chef clients (nodes): The actual system requesting the catalog information from the chef server.
  • Workstations: This is a system that is configured to run Chef command-line tools that synchronize with a Chef-repository or the Chef server. You could also describe this as a Chef development and tooling environment.

How can you integrate Chef in your MySQL infrastructure?

Ansible

Ansible originated with something different in mind. System engineers typically wrote their own management scripts, which can be troublesome and hard to maintain. Why not use something easy, automated, and standardized? Ansible fills this gap and simplifies the management of its targets.

Ansible works by connecting to your nodes (by default over SSH) and pushing out Ansible modules to them. These modules represent the desired state of the node and are used to execute the commands needed to reach that state.

This procedure differs from Puppet and Chef, which are primarily client/server solutions.

There are several pre-made Ansible modules for MySQL.
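
By way of illustration, two modules that have long shipped with Ansible are mysql_db and mysql_user (today part of the community.mysql collection). A minimal sketch of how they might be used in a playbook (the database name, user name, and variable are made up):

# Illustrative sketch; assumes a MySQL Python driver (PyMySQL or MySQLdb) on the target host
- name: create an application database
  mysql_db:
    name: appdb
    state: present

- name: create an application user with privileges on that database
  mysql_user:
    name: appuser
    password: "{{ app_db_password }}"   # hypothetical variable
    priv: "appdb.*:ALL"
    state: present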

Conclusion and Next Steps

Choose your poison (or magical medicine, you pick the wording); every solution has its perks.

Keep in mind that in some situations, running a complicated Puppet or Chef infrastructure could be overkill. In such cases, a solution like Ansible might be a quick and easily integrated answer for you.

The next blog post will go over the Puppet Forge MySQL module, so stay tuned!

Oct
16
2015
--

Red Hat Is Buying IT Automation Startup Ansible, For $150M

Open source giant Red Hat is making another acquisition to build out its enterprise IT portfolio: today the company announced that it would buy Ansible, an IT automation solutions specialist that helps companies build and manage hybrid IT deployments across the cloud and on-premise solutions. The acquisition had been rumored to be in the works for a price of over $100 million but a source…

May
18
2015
--

Ansible Partners With Cisco, CSC, HP And Rackspace To Make Deploying And Managing OpenStack Easier

Enterprise IT automation service Ansible today announced that it's partnering with HP, RackSpace, CSC, Cisco and the open source community to help make deploying and managing OpenStack clouds easier. While the open source cloud computing platform is now extremely powerful, few of its users would argue that it is very easy to stand up an OpenStack cloud.

Jan
27
2015
--

Percona University: Back to school Feb. 12 in Raleigh, N.C.


Percona CEO Peter Zaitsev leads a track at the inaugural Percona University event in Raleigh, N.C. on Jan. 29, 2013.

About two years ago we held our first-ever Percona University event in Raleigh, N.C. It was a great success with high attendance and very positive feedback which led us to organize a number of similar educational events in different locations around the world.

And next month we’ll be back where it all started. On February 12, Percona University comes to Raleigh – and this time the full-day educational event will be much more cool. What have we changed? Take a look at the agenda.

First – this is no longer just a MySQL-focused event. While 10 years ago MySQL was the default, dominating choice for modern companies looking to store and process data effectively – this is no longer the case. As such, the event's theme is "Smart Data." In addition to MySQL, Percona, and MariaDB technologies (which you would expect to be covered), we have talks about Hadoop, MongoDB, Cassandra, Redis, Kafka, and SQLite.

However, the "core" data-store technologies are not the only thing successful data architects should know – one should also be well-versed in modern approaches to infrastructure and general data management. This is why we also have talks about Ansible and OpenStack, DBaaS and PaaS, as well as a number of talks on big-picture topics around architecture and technology management.

Second – this is our first multi-track Percona University event. We had so many great speakers interested in speaking that we could not fit them all into one track, so we now have two tracks with 25 sessions, which makes for quite an educational experience!

Third – while we’re committed to having those events be very affordable, we decided to charge $10 per attendee. The reason for this is to encourage people to register who actually plan on attending – when hosting free events we found out that way too many registered and never showed up, which was causing the venues to rapidly fill past capacity and forcing us to turn away those who could actually be there. It was also causing us to order more food than needed, causing waste. We trust $10 will not prevent you from attending, but if it does cause hardship, just drop me a note and I’ll give you a free pass.

A few other things you need to know:

This is very much a technically focused event. I have encouraged all speakers to make it about technology rather than sales pitches or marketing presentations.

This is low-key educational event. Do not expect it to be very fancy. If you’re looking for the great conference experience consider attending the Percona Live MySQL Conference and Expo this April.

Although it's a full-day event, you can come for just part of the day. We recognize many of you will not be able to take a full day from work and may be able to attend only in the morning or the afternoon. This is totally fine. Most people will register during the morning registration hours; however, there will be someone at the desk to get you your pass throughout the day.

Thinking of Attending? Take a look at the day’s sessions and then register as space is limited. The event will be held at North Carolina State University’s McKimmon Conference & Training Center. I hope to see you there!

