Oct 24, 2017
--

Community Matters

Building on community

Percona is very committed to open source database software. We think of ourselves as unbiased champions of open source database solutions. With that comes a responsibility to the open source database community – whether MySQL®, MongoDB®, ProxySQL or another open source database technology. We’ve recognized that responsibility, and taken action by hiring a Community Manager.

That’s me. Which is great… For me!

My job, in a nutshell, is to help make our community great for you – by building on the good work that’s been done in the past and finding ways to do more.

The common thread tying the community together is the sharing of information, experience, and knowledge. Hundreds of you have taken part in Percona Live or Percona Live Europe — thank you for that! Props if you’ve done both. If you’ve proposed a paper (selected or not), presented a session, given a tutorial, staffed a booth or sponsored the event – kudos!

Maybe you’ve benefited from or run sessions at a Percona University (the next one is in Kiev in November and it’s FREE). Or caught up with Percona staff at one of the many tech conferences we attend during the year.

You might have used our code, added to our code, spotted and logged bugs, given feedback or requested new features. Helped out other users in forums, written to question-and-answer sites like Stack Overflow. Maybe you’ve blogged about using Percona software on your own blog, or looked for help on the Percona Database Performance Blog. You might have recommended our software to your company, or a colleague, or a client or a friend. Or even a stranger. Mentioned us in passing in conversation. Read our e-books, watched our webinars, shared a link or reached out to Percona via social media.

All excellent, valuable and much-appreciated contributions to the community.

Ways you can join in

Have a think about these opportunities to shine, share and make the Percona community best-in-class.

  • Take part in our forum: we really try to keep up, but there are always more questions than we can address. It’s easy to think of the forums as a support queue, but honestly, we are MORE than delighted when we have help from you.
  • Do you have a passion for a particular subject, or maybe an interesting project to share? How about proposing a webinar or blog post? Contact me if you are interested.
  • If you haven’t yet done it, make 2018 the year you attend Percona Live. If you’ve done it before, do it again – network with old friends and make some new ones. Get a new t-shirt. Enjoy the company. The warmth of the welcome and the generosity of the knowledge shared made a big impression on me in Dublin; I’m convinced you’ll find the same.
  • In-depth knowledge or hardcore learning on the job? Don’t forget that the call for papers for Percona Live opens soon, and that speakers get free attendance at the conference. It’s a competitive call, but you’re up for that, right? Right!
  • Don’t want to “do stuff” on the Percona site? Maybe contributing to code or working on the question-and-answer sites is more for you. Or maybe you have a blog already and write about our software and how to use it. If so – thanks again, and please let me have the link!
  • If you haven’t already, don’t forget to subscribe to our newsletters to get early warning of upcoming webinars, and the latest tech and community news.

Have you thought about joining Percona? We’re hiring! Don’t forget, too, that all the contributions you make to online communities – Percona or not – really pay off when you want to demonstrate your knowledge and commitment to future employers or clients. A link is worth a thousand words.

What do you think?

Interested? Ideas or comments? Things you think we should do better? Things that you think are great? Things we used to do that were great and you miss? Things that others do and you wished we did? Things that … well, you get the idea!

Get in touch, or just get stuck in. You might find it rewarding*…

Feel free to email me or message me on Skype.

*I have keys to the swag box…

Oct 19, 2017
--

Percona Blog Poll: How Do You Currently Host Applications and Databases?

Percona’s latest blog poll asks how you currently host applications and databases. Select an option below, or leave a comment to clarify your deployment!

With the increased need for environments that respond more quickly to changing business demands, many enterprises are moving to the cloud and hosted deployments for applications and software in order to offload development and maintenance overhead to a third party. The database is no exception. Businesses are turning to using database as a service (DBaaS) to handle their data needs.

DBaaS provides some obvious benefits:

  • Offload physical infrastructure to another vendor. It is the responsibility of whoever is providing the DBaaS service to maintain the physical environment – including hardware, software and best practices.
  • Scalability. You can add or subtract capacity as needed by just contacting your vendor. Have a big event on the horizon? Order more servers!
  • Expense. Since you no longer have to shell out for operational costs or infrastructure upgrades (all handled by the vendor now), you can reduce capital and operational expenses – or at least reasonably plan what they are going to be.

There are some potential disadvantages to a DBaaS as well:

  • Network performance issues. If your database is located off-premises, then it can be subject to network issues (or outages) that are beyond your control. These can translate into performance problems that impact the customer experience.
  • Loss of visibility. It’s harder (though not impossible) to always know what is happening with your data. Decisions around provisioning, storage and architecture are now in the hands of a third party.
  • Security and compliance. You are no longer totally in control of how secure or compliant your data is when using a DBaaS. This can be crucial if your business requires certain standards to operate in your market (healthcare, for example).

How are you hosting your database? On-premises? In the cloud? Which cloud? Is it co-located? Please answer using the poll below. Choose up to three answers. If you don’t see your solutions, use the comments to explain.


Thanks in advance for your responses – they will help the open source community determine how databases are being hosted.

Oct 3, 2017
--

Webinar October 4, 2017: Databases in the Hosted Cloud

Join Percona’s Chief Evangelist, Colin Charles, as he presents Databases in the Hosted Cloud on Wednesday, October 4, 2017, at 7:00 am PDT / 10:00 am EDT (UTC-7).


Today you can use hosted MySQL/MariaDB/Percona Server for MySQL/PostgreSQL in several “cloud providers” as a database as a service (DBaaS). Learn the differences, the access methods and the level of control you have for the various public databases in the hosted cloud offerings:

  • Amazon RDS including Aurora
  • Google Cloud SQL
  • Rackspace OpenStack DBaaS
  • Oracle Cloud’s MySQL Service

The administration tools and ideologies behind each are completely different, and you are in a “locked-down” environment. Some considerations include:

  • Different backup strategies
  • Planning for multiple data centers for availability
  • Where do you host your application?
  • How do you get the most performance out of the solution?
  • What does this all cost?
  • Monitoring

Growth topics include:

  • How do you move from one DBaaS to another?
  • How do you move from a DBaaS to your own hosted platform?

Register for the webinar here.

Colin Charles, Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team for MariaDB Server in 2009, has worked on MySQL since 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within many open source communities and has spoken on the conference circuit.

 

May 16, 2017
--

Percona Live Open Source Database Conference 2017 Slides and Videos Available

The slides and videos from the Percona Live Open Source Database Conference 2017 are available for viewing and download. The videos and slides cover the keynotes, breakout sessions and MySQL and MongoDB 101 sessions.

To view slides, go to the Percona Live agenda, select the talk you want slides for from the schedule, and click through to the talk’s web page. The slides are available below the talk description. There is also a page with all the slides that is searchable by topic, talk title, speaker, company or keywords.

To view videos, go to the Percona Live 2017 video page. The available videos are searchable by topic, talk title, speaker, company or keywords.

There are a few slides and videos outstanding due to unforeseen circumstances. However, we will upload those as they become available.

Some examples of videos and slide decks from the Percona Live conference:

MongoDB 101: Efficient CRUD Queries in MongoDB
Adamo Tonete, Senior Technical Engineer, Percona
Video: https://www.percona.com/live/17/content/efficient-crud-queries-mongodb
Slides: https://www.percona.com/live/17/sessions/efficient-crud-queries-mongodb

MySQL 101: Choosing a MySQL High Availability Solution
Marcos Albe, Principal Technical Services Engineer, Percona
Video: https://www.percona.com/live/17/content/choosing-mysql-high-availability-solution
Slides: https://www.percona.com/live/17/sessions/choosing-mysql-high-availability-solution

Breakout Session: Using the MySQL Document Store
Mike Zinner, Sr. Software Development Director and Alfredo Kojima, Sr. Software Development Manager, Oracle
Video: https://www.percona.com/live/17/content/using-mysql-document-store
Slides: https://www.percona.com/live/17/sessions/using-mysql-document-store

Keynote: Continuent is Back! But What Does Continuent Do Anyway?
Eero Teerikorpi, Founder and CEO and MC Brown, VP Products, Continuent
Video: https://www.percona.com/live/17/content/continuent-back-what-does-continuent-do-anyway
Slides: https://www.percona.com/live/17/sessions/continuent-back-what-does-continuent-do-anyway

Please let us know if you have any issues. Enjoy the videos!

Percona Live Europe 2017: Dublin, Ireland!

This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.

Download a prospectus here.

We look forward to seeing you there!

Apr 28, 2017
--

From Percona Live 2017: Thank You, Attendees!

From everyone at Percona and Percona Live 2017, we’d like to send a big thank you to all our sponsors, exhibitors, and attendees at this year’s conference.

This year’s conference was an outstanding success! The event brought the open source database community together, with a technical emphasis on the core topics of MySQL, MariaDB, MongoDB, PostgreSQL, AWS, RocksDB, time series, monitoring and other open source database technologies.

We will be posting tutorial and session presentation slides at the Percona Live site, and all of them should be available shortly. 

Highlights This Year:

Thanks to Our Sponsors!

We would like to thank all of our valuable event sponsors, especially our diamond sponsors Continuent and VividCortex – your participation really makes the show happen.

We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.

Download a prospectus here.

Percona Live Europe 2017: Dublin, Ireland!

This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We look forward to seeing you there!

Apr 26, 2017
--

Percona Live 2017: Deploying MongoDB on Public Clouds

Today at Percona Live 2017, the afternoon is jam-packed with open source technology lectures filled with community members eager for the latest on the best strategies – including how you should deploy MongoDB on public clouds.

Dharshan Rangegowda (CEO of ScaleGrid) discussed deploying MongoDB on public clouds. ScaleGrid provides a fully managed Database-as-a-Service (DBaaS) solution used today by thousands of developers, startups and enterprise customers. In this session, Dharshan talked about how public clouds like AWS and Azure have become very popular platforms over the past few years. These public clouds provide a plethora of infrastructure features to help make life easier. He dug into the features and assets that one should be actively leveraging.

On the flip side, there are also a number of potential pitfalls that require attention and might need a workaround. Dharshan reviewed some common architecture patterns you need to have in place to be successful with MongoDB on the public cloud, including high availability, disaster recovery, scaling, performance and others.

After the lecture, Dharshan was kind enough to talk briefly with me about his session. Check it out:

Don’t miss any of tomorrow’s talks! You can find Thursday’s (4/27) session schedule here.

Jan 9, 2017
--

MongoDB PIT Backups: Part 2


This blog post is the second in a series covering MongoDB PIT backups. You can find the first part here.

Sharding Makes Everything Fun(ner)

The first blog post in this series looked at MongoDB backups in a simple single-replica set environment. In this post, we’ll look at the scale-out use case. When sharding, we have exactly the same problem as we do on a single replica set. However, now the problem is multiplied by the number of replica sets in the cluster. Additionally, we have a bonus problem: each replica set has unique data. That means that to get a truly consistent snapshot of the cluster, we need to orchestrate our backups to capture a single consistent point in time. Just so we’re on the same page, that means that every replica set needs to stop its backup at, or near, the same time that the slowest replica set stops. Are you sufficiently confused now? Let me get to a basic concept that I forgot to cover in the first post, and then I’ll give you a simple description of the problem.

Are you Write Concerned?

So far, I’ve neglected to talk about the very important role of “write concern” when taking consistent backups. In MongoDB, the database is not durable by default. By “durable,” I mean “on disk” when the database acknowledges receipt of an operation from your application. There are several likely reasons for this; the biggest one originally was probably throughput, given a lack of concurrency.

However, the side effect is possible data loss due to loss of operations applied only in memory. Changing the write concern to “journaled” (j: true) will change this behavior so that MongoDB journals changes before acknowledging them (you also need to be running with journaling enabled).

TIP: For true durability in a replica set, you should use a write concern of “majority” for operations and set writeConcernMajorityJournalDefault: true in the replica set configuration (new in v3.4). This has the added benefit of greatly decreasing the chance of rollback after an election.
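As a rough illustration (assuming a v3.4+ replica set, the legacy mongo shell, and a hypothetical orders collection), that durable setup might look like this:

// Make "majority" writes wait for the journal, cluster-wide (v3.4+).
cfg = rs.conf()
cfg.writeConcernMajorityJournalDefault = true
rs.reconfig(cfg)

// A write that is acknowledged only after a majority of members journal it.
db.orders.insert(
    { _id: 1, status: "placed" },
    { writeConcern: { w: "majority", j: true, wtimeout: 5000 } }
)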

Wow, you’re inconsistent

At the risk of being repetitive, the crux of this issue is that we need to run a backup on every shard (replica set). This is necessary because every shard has a different piece of the data set. Each piece of that data set is necessary to get an entire picture of the data set for the cluster (and thus, your application). Since we’re using mongodump, we’ll only have a consistent snapshot at the point in time when the backup completes. This means we must end each shard’s backup at a consistent point in time. We cannot expect that the backup will complete in exactly the same amount of time on every shard, which is what we’ll need for a consistent point in time across the cluster. This means that Shard1 might have a backup that is consistent to 12:05 PM, and another shard that is consistent to 12:06 PM. In a high traffic environment (the kind that’s likely to need horizontal scale), this could mean thousands of lost documents. Here’s a diagram:

[Figure: MongoDB PIT Backups]

Here’s the math to illustrate the problem:

  • Shard1’s backup will contain 30,000 documents (100 docs/sec × 60 secs/min × 5 mins)
  • Shard2’s backup will contain 36,000 documents (100 docs/sec × 60 secs/min × 6 mins)

In this example, to get a consistent point in time you’d need to remove all insert, update and delete operations that happened on Shard 2 after the time that Shard 1’s backup completed (6,000 documents). This means examining the timestamp of every operation in the oplog and reversing its operation. That’s a very intensive process, and it will be unique for every mongodump that’s executed. Furthermore, it’s a pretty tricky thing to do. The repeatable and much more efficient method is to have backups that finish in a consistent state, ready to restore when needed.
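To get a feel for why that manual approach is so intensive, here is a minimal sketch (the cutoff timestamp is hypothetical, and this assumes the legacy mongo shell) of just the first step: counting the operations on Shard 2 that landed after Shard 1’s dump finished:

// On a member of Shard 2: count oplog entries newer than the moment
// Shard 1's backup completed (12:05 PM in the example above).
var cutoff = Timestamp(1481893500, 0)  // hypothetical seconds-since-epoch for 12:05 PM
db.getSiblingDB("local").oplog.rs.find({
    ts: { $gt: cutoff },
    op: { $in: ["i", "u", "d"] }  // inserts, updates and deletes
}).count()

Every one of those entries would then need to be examined and reversed by hand.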

Luckily, Percona has you covered!

You’re getting more consistent

Having data is important, but knowing what data you have is even more important. Here’s how you can be sure you know what you have in your MongoDB backups:

David Murphy has released his MongoDB Consistent Backup Tool in the Percona Labs GitHub account, and has written a very informative blog post about it. My goal with these blog posts is to make it even easier to understand the problem and how to solve it. We’ve already had an exhaustive discussion about the problem on both small and large scales. How about the solution?

It’s actually pretty simple. The solution, at a basic level, is to use a simple algorithm to decide when a cluster-wide consistent point-in-time can be reached. In the MongoDB Consistent Backup tool, this is done by the backup host kicking off backups on a “known good member” of each shard (that’s a pretty cool feature by itself) and then tracking the progress of each dump. At the same time the backup is kicked off, the backup host kicks off a separate thread that tails the oplog on each “known good member” until the mongodump on the slowest shard completes. By using this method, we have a very simple way of deciding when we can get a cluster-wide consistent snapshot. In other words, when the slowest member completes their piece of the workload. Here’s the same workload from Figure 4, but with the MongoDB Consistent Backup Tool methodology:

[Figure: MongoDB PIT Backups]

TIP: The amount of time that it takes to perform these backups is often decided by two factors:

  1. How evenly the data is distributed across the shards (how well balanced the cluster is).
  2. How much data each shard contains, balanced or not.

The takeaway here is that you may need to shard so that each shard has a manageable volume of data. This allows you to hit your backup/restore windows more easily.
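Returning to the orchestration idea above, here is a deliberately simplified sketch of the parallel-dump part (hostnames and paths are hypothetical, and this is not the tool’s actual implementation – notably, the oplog-tailing threads are omitted):

#!/bin/bash
# One mongodump per shard's "known good member", all started in parallel.
SHARDS="shard1-sec.example.com:27017 shard2-sec.example.com:27017"
for host in $SHARDS; do
    mongodump --host "$host" --oplog --out "/backups/$(date +%F)/${host%%:*}" &
done
wait  # the cluster-wide consistency point is when the slowest dump completes
echo "All shard dumps finished."

The real tool additionally tails each shard’s oplog until the slowest dump finishes, which is what turns these per-shard snapshots into one cluster-wide consistent point in time.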

…The Proverbial “Monkey Wrench”

There’s always a “gotcha” just when you think you’ve got your mind around any difficult concept. Of course, this is no different.

There is one very critical concept in sharding that we didn’t cover: tracking which data lies on which shard. This is important for routing the workload to the right place and for balancing the data across the shards. In MongoDB, this is handled by the config servers. If you cannot reach (or recover) your config servers, your entire cluster is lost! For obvious reasons, you need to back them up as well. The Percona Labs MongoDB Consistent Backup Tool actually has two modes for backing up config servers: v3.2 and greater, and legacy. The reason is that in v3.2, config servers went from mirrored servers to a standard replica set. In v3.2 mode, we just treat the config servers like another replica set: they have their own mongodump and oplog tail thread, and they get a backup that is consistent to the same point in time as all the other shards in the cluster. If you’re on a version of MongoDB prior to v3.2 and are interested in an explanation of legacy mode, please refer back to David’s blog post.

The Wrap Up

We’ve examined the problems with getting consistent backups in a running MongoDB environment in this and the previous blog posts. Whether you have a single replica set or a sharded cluster, you should have a much better understanding of what the problems are and how Percona has you covered. If you’re still confused, or you’d just like to ask some additional questions, drop a comment in the section below. Or shoot me a tweet @jontobs, and I’ll make sure to get back to you.

Dec 16, 2016
--

MongoDB PIT Backups In Depth

This blog post is an in-depth discussion of MongoDB PIT backups.

Note: INTIMIDATION FREE ZONE!! This post is meant to give the reader most of the knowledge that is needed to understand the material. This includes basic MongoDB knowledge and other core concepts. I have tried to include links for further research where I can. Please use them where you need background and ask questions. We’re here to help!

Intro

In this two-part series, we’re going to fill in some of the gaps you might have to help you get awesome Point-in-Time (PIT) backups in MongoDB. These procedures will work for both MongoDB and Percona Server for MongoDB. This is meant to be a prequel to David Murphy’s MongoDB PIT Backup blog post. In that blog, David shows you how to take and to restore a dump up until a problem operation happened. If you haven’t read that post yet, hold off until you read this one. This foundation will help you better understand the how and why of the necessary steps. Let’s move on to some core concepts in MongoDB – but first, let me tell you what to expect in each part.

Blog 1 (this one): Core concepts – replica set backups, problem statement and solution

Blog 2: Getting Shardy – why backup consistency is tough and how to solve it

Core Concepts

Replica Set (process name: mongod) – MongoDB uses replica sets to distribute data for DR purposes. Replica sets are made up of primaries, secondaries and/or arbiters. These are much like master/slave pairs in MySQL, but with one big caveat. There’s an election protocol included that also handles failover! That’s right, high availability (HA) too! So, in general, there is a “rule of three” when thinking about the minimum number of servers to put in your replica sets. This is necessary to avoid a split-brain scenario.

Oplog (collection name: local.oplog.rs) – The oplog is the log that records changes to the data on the MongoDB primary (secondaries also have an oplog). Much like in MySQL, the non-primary members (secondaries) pull operations from the oplog and apply them to their own collections. Secondaries can pull from any member of the replica set that is ahead of them. The operations in the oplog are idempotent, meaning they always result in the same change to the database no matter how many times they’re performed.

Sharding (process name: mongos) – MongoDB also has built-in horizontal scalability, implemented in a “shared nothing” architecture. A sharded cluster is made up of several replica sets, each containing a unique range of data. The data is distributed amongst the replica sets based on a shard key. There is also a sharding router (mongos) that runs as a routing umbrella over the cluster; in a sharded setup, the application interfaces solely with the sharding router (never with the replica sets themselves). This is the main mechanism for scaling reads or writes in MongoDB. Scaling both takes very thoughtful design and may not always be possible.

Mongodump (utility name: mongodump) – MongoDB has a built-in database dump utility that can interface with mongod or mongos. Mongodump can also use the oplog of the server it is run on to create a consistent point-in-time backup by using a “roll forward” strategy.

Mongorestore (utility name: mongorestore) – MongoDB has a built-in database restore utility. Mongorestore is a rather simple utility that replays the binary dumps created by mongodump. When used with the --oplogReplay option to restore a dump made with mongodump’s --oplog switch, it makes for a very functional backup facility.

Tip: make sure that user permissions are properly defined when using --oplogReplay – besides restore, the anyAction and anyResource grants are needed.
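As a sketch of that grant (the role and user names here are hypothetical, and exact requirements can vary by version), run something like this against the admin database in the mongo shell:

// Create a role carrying the anyAction-on-anyResource privilege needed for
// oplog replay, then grant it alongside the built-in "restore" role.
use admin
db.createRole({
    role: "oplogReplayer",  // hypothetical role name
    privileges: [
        { resource: { anyResource: true }, actions: ["anyAction"] }
    ],
    roles: []
})
db.grantRolesToUser("backupUser", ["restore", "oplogReplayer"])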

OK, So What?

We’re going to first need to understand how backups work in a simple environment (a single replica set). Things are going to get much more interesting when we look at sharded clusters in the next post.

Backing Up

In a single replica set, things are pretty simple. There is no sharding router to deal with. You can get an entire data set by interacting with one server. The only problem that you need to deal with is the changes that are being made to the database while your mongodump is in process. If the concept of missed operations is a little fuzzy to you, just consider this simple use case:

We’re going to run a mongodump, and we have a collection with four documents.

We start mongodump on this collection. We’re also running our application at the same time, because we can’t take down production. Mongodump scans the collection from first to last (like a range scan based on ID). In this case, mongodump has backed up all documents from id:1 through id:4.

At this same moment in time, our application inserts id:3 into the collection.

Is the document with id:3 going to be included in the mongodump? The answer is: most likely not. The problem is that you would expect it to be in the completed dump. However, if you need to restore this backup, you’re going to lose id:3. Now, this is perfectly normal in Disaster Recovery scenarios. Knowing that this is happening is the key part. Your backups will have the consistency of Swiss cheese if you don’t have a way to track changes being made while the backup is running. Unknown data loss is one of the worst situations one can be in. What we need is a process to capture changes while the backup is running.

Here’s where the inclusion of the --oplog flag is very important. The --oplog flag allows mongodump to capture changes that are being made to the database while the backup is running. Since oplog operations are idempotent, there is no chance that replaying them will corrupt the data during a restore. This gives mongodump a snapshot that is consistent to the moment the dump completes, like most “clone” type operations.
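For example (host and output directory are hypothetical), such a dump might be taken like this:

# Dump the whole server, capturing the oplog for the duration of the dump.
mongodump --host rs1-secondary.example.com:27017 --oplog --out /backups/2016-12-16
# Note: --oplog applies to whole-server dumps; it cannot be combined with
# --db or --collection.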

Restoring

When running mongorestore, you can use the --oplogReplay option. Using the oplog recovers the data to the point in time when the dump completed. Back to the use case: we may not capture id:3 on the first pass, but as long as we’ve captured the oplog up until the backup completes, we’ll have id:3 available. When replaying the oplog during mongorestore, we basically re-run the insert operation, completing the dataset. The oplog stamps every entry with a BSON Timestamp, so we know for sure up to what point in time we’ve captured changes.
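Restoring that dump to its consistent end point (again with hypothetical host and path) is then a single command:

# Replay the captured oplog on top of the restored data, recovering to the
# moment the dump finished.
mongorestore --host rs1-primary.example.com:27017 --oplogReplay /backups/2016-12-16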

TIP: If you need to convert the timestamp to something human-readable, here’s something helpful:
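In the legacy mongo shell (where a Timestamp exposes its seconds field as t), a quick sketch:

// Grab the newest oplog entry and print its wall-clock time.
var last = db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1).next()
print(new Date(last.ts.t * 1000))  // ts.t = seconds since the Unix epoch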

The Wrap Up

Now we have a firm understanding of the problem. Once we understand the problem, we can easily design a solution to ensure our backups have the integrity that our environment demands. In the next post, we’re going to step up the complexity by examining backups in conjunction with the most complex feature MongoDB has: sharding. Until then, post your feedback and questions in the comments section below.

Dec 16, 2016
--

Percona Live 2017 Sneak Peek Schedule Up Now! See the Available Sessions!

We are excited to announce that the sneak peek schedule for the Percona Live Open Source Database Conference 2017 is up! The conference takes place April 24th – 27th, 2017, at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

The Percona Live Open Source Database Conference 2017 is the premier event for the rich and diverse MySQL, MongoDB and open source database ecosystems. This conference provides an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience.

Below are some of our top picks for MySQL, MongoDB and open source database sessions:

Tutorials

MySQL 101 Tracks

MongoDB 101 Tracks

Breakout Talks

Register for the Percona Live Open Source Database Conference here.

Early Bird Discounts

Just a reminder to everyone out there: our Early Bird discount rate for the Percona Live Open Source Database Conference 2017 is only available ‘til January 8, 2017, 11:30 pm PST! This rate gets you all the excellent and amazing opportunities that Percona Live offers, at a very reasonable price!

Sponsor Percona Live

Become a conference sponsor! We have sponsorship opportunities available for this annual MySQL, MongoDB and open source database event. Sponsors become a part of a dynamic and growing ecosystem and interact with hundreds of DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solutions vendors, and entrepreneurs who attend the event.

Oct 10, 2016
--

Take Percona’s One-Click 2017 Top Database Concerns Poll

Take Percona’s One-Click 2017 Top Database Concerns Poll.

With 2017 coming quickly around the corner, it’s time to start thinking about what next year is going to bring to the open source database community. We just finished Percona Live Europe, and at the conference we looked at new technologies, new techniques and new ideas. With all this change comes some uncertainty: how is a changing ecosystem going to affect your database environment? Are you up to speed on the latest technologies and how they can impact your world?

Are your biggest concerns for 2017 scalability, performance, security, monitoring, updates and bugs, or staffing issues? What do you think is going to be your biggest issue in the coming year?

 

Please take a few seconds and answer the following poll. It will help the community get an idea of how the new year could impact their critical database environments.

If you’ve faced specific issues, feel free to comment below. We’ll post a follow-up blog with the results!


You can see the results of our last blog poll on security here.
