Apr
17
2018
--

Resy rolls out a new suite of tools for restaurants

Resy launched in the summer of 2014 with a simple premise: If you want a premium reservation at a restaurant on short notice, you should be able to pay for it. Four years and 160 markets later, Resy has changed a lot.

But today, the company is about to change things up even more.

This morning, Resy announced a brand-new suite of tools for restaurants, including a new inventory management system called ResyFly.

As it stands now, restaurants have two options when it comes to inventory management for their reservations. They can choose a slot system, where diners are seated at 6pm, 8pm and 10pm, or they can opt for a flex system, where they take reservations as they’re called in and build the night’s reservations based on what comes in first.

Unfortunately, most restaurants have to choose between these two systems, as there are no inventory management systems that offer the ability to do both, according to Resy.

ResyFly uses Resy’s troves of data to determine the best way for restaurants to eliminate gaps in their inventory throughout a given night, taking into account things like date, time, weather and even the average time spent eating at a given restaurant. The tool gives restaurants the ability to schedule different floor plans, reservation grids and hours of operation for special days like Valentine’s Day.

Alongside ResyFly, the company is also introducing Business Intelligence, a window into important information like KPIs, revenue and ratings. It layers in third-party data from platforms like Foursquare and integrates with POS software providers to offer real-time revenue reporting.

But sometimes you want direct feedback from the customer. To that end, Resy is launching Resy Surveys, which gives a restaurant the opportunity to send a custom survey to customers about their experience. Resy is also integrating with Upserve, giving Resy’s restaurant partners insights into their guests’ preferences and favorite dishes, as well as info on dining companions, frequency of bookings and historical spend.

And while Resy is focused on refining the product, the company is also focused on growth. That’s why Resy has announced the launch of Resy Global Service, which lets Resy distribute inventory to partners like Airbnb. (It’s worth noting that Airbnb led Resy’s $13 million funding round in 2017.)

Finally, Resy is working on a new membership loyalty program called Resy Select, which will launch at the end of the month. Resy Select is an invite-only program that gives restaurants insights into Resy’s hungriest users, and gives those users benefits such as exclusive booking windows, priority waitlist, early access tickets to events and other exclusive experiences like meeting the chef or touring the kitchen.

Resy books more than 1 million reservations on the platform each week. The company no longer charges users for reservations, but rather charges restaurants by feature, instead of cover, with three tiers ranging from $189/month to $899/month. That said, the company is not yet self-serve on the restaurant side, but founder and CEO Ben Leventhal said the team is thinking about introducing it in the future.

“The key challenge and key opportunity is to do everything we can to make the right choices about what we build and the order we build it in,” said Leventhal. “Our goal is to stay focused on restaurants, as a significant amount of the tech we build is built in conjunction with our restaurant partners.”

Apr
17
2018
--

Webinar Wednesday, April 18, 2018: Percona XtraDB Cluster 5.7 Tutorial


Please join Percona’s Architect, Tibi Köröcz, as he presents Percona XtraDB Cluster 5.7 Tutorial on Wednesday, April 18, 2018, at 7:00 am PDT (UTC-7) / 10:00 am EDT (UTC-4).

Never used Percona XtraDB Cluster before? Come join this 45-minute tutorial where we will introduce you to the concepts of a fully functional Percona XtraDB Cluster.

In this tutorial, we will show you how you can install Percona XtraDB Cluster with ProxySQL, and monitor it with Percona Monitoring and Management (PMM).

We will also cover topics like bootstrap, IST, SST, certification, common failure situations and online schema changes.

Register for the webinar now.

Tibor Köröcz, Senior Consultant

Tibi joined Percona in 2015 as a Consultant. Before joining Percona, among many other things, he worked at the world’s largest car hire booking service as a Senior Database Engineer. He enjoys trying and working with the latest technologies and applications that can help or work with MySQL. In his spare time, he likes to spend time with his friends, travel around the world and play ultimate frisbee.

The post Webinar Wednesday, April 18, 2018: Percona XtraDB Cluster 5.7 Tutorial appeared first on Percona Database Performance Blog.

Apr
17
2018
--

Drift raises $60 million to be an Amazon for businesses

When you’re raising venture capital, it helps if you’ve had “exits.” In other words, if your company has been acquired or you’ve taken one public, investors are more inclined to take a bet on anything you do.

Boston-based serial entrepreneur David Cancel has sold not just one, but four companies. And after a few years running product for HubSpot, he’s in the midst of building number five.

That startup, Drift, managed to raise $47 million in its first three years. Now it’s announcing another $60 million led by Sequoia Capital, with participation from existing investors CRV and General Catalyst. The valuation is undisclosed.

So what is Drift? It’s “changing the way businesses buy from businesses,” said Cancel. He wants to eventually build an alternative to Amazon to make it easier for companies to make large orders.

Currently, Drift subscribers can use chatbots to help turn web visits into sales. It has 100,000 clients including Zenefits, MongoDB, Zuora and AdRoll.

Drift “turns those conversations into customers,” Cancel explained. He said that technology is comparable to what is commonly used for customer service. It’s the “same messaging that was used for support, but used in the sales context.”

In the long-run, Cancel says he hopes Drift will expand its offerings to compete with Salesforce.

The company wouldn’t disclose revenue, but says it grew tenfold over the past year. And it’s on track to grow another five times this year. This, of course, means little without hard numbers.

Yet we’re told that the new round means that Drift will have $90 million in the bank. It plans to use some of the funding to make acquisitions in voice and video technology. Drift also plans to expand its teams in both Boston and San Francisco, with new offices for both. The company presently has 130 employees.

 

Apr
16
2018
--

Binlog and Replication Improvements in Percona Server for MySQL


Due to continuous development and improvement, Percona Server for MySQL incorporates a number of improvements related to binary log handling and replication. This results in replication specifics that distinguish it from MySQL Server.

Temporary tables and mixed logging format

Summary of the fix:

As soon as a statement involving temporary tables was encountered under the mixed binlog format, MySQL switched to row-based logging for all statements until the end of the session (or until all temporary tables used in the session were dropped). This is inconvenient for long-lasting connections, including replication-related ones. Percona Server for MySQL fixes the situation by switching between statement-based and row-based logging only when necessary.

Details:

The mixed binary logging format supported by Percona Server for MySQL means that the server runs in statement-based logging by default, but switches to row-based logging when replication would be unpredictable: for example, in the case of a nondeterministic SQL statement that could cause data divergence if reproduced on a slave server. The switch is done when matching any condition from a long list, and one of these conditions is the use of temporary tables.

Temporary tables are never logged using the row-based format; instead, any statement that touches a temporary table is logged in row mode. This way, we intercept all the side effects that temporary tables can produce on non-temporary ones.

There is no need to use the row logging format for any other statements solely because a temporary table is present. MySQL, however, took just such an excessive precaution: once a statement involving a temporary table had appeared and row-based logging was used, MySQL unconditionally logged all subsequent statements in row format.

Percona Server for MySQL has implemented more accurate behavior. Instead of switching to row-based logging until the last temporary table is closed, the usual rules of row vs. statement format apply, and we don’t consider the presence of currently opened temporary tables. This change was introduced with the fix of bug #151 (upstream #72475).
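
As a rough sketch of the difference (the report_summary and customers tables here are hypothetical):

SET SESSION binlog_format = 'MIXED';
CREATE TEMPORARY TABLE tmp_totals (id INT PRIMARY KEY, total DECIMAL(10,2));
-- Any statement touching the temporary table is logged in row format:
INSERT INTO report_summary SELECT id, total FROM tmp_totals;
-- Stock MySQL kept logging everything after this point in row format until
-- the session ended or the last temporary table was dropped; Percona Server
-- for MySQL applies the usual statement-vs-row decision to this statement:
UPDATE customers SET status = 'active' WHERE id = 42;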

Temporary table drops and binlogging on GTID-enabled server

Summary of the fix:

MySQL logs DROP statements for all temporary tables regardless of the logging mode under which these tables were created. This produces binlog writes and errant GTIDs on slaves with row and mixed logging. Percona Server for MySQL fixes this by tracking the binlog format at temporary table create time and using it to decide whether a DROP should be logged or not.

Details:

Even with read_only mode enabled, the server permits some operations, including ones with temporary tables. With the previous fix, temporary table operations are not binlogged in row or mixed mode. But the MySQL server doesn’t track what the logging mode was when a temporary table was created, and therefore unconditionally logs DROP statements for all temporary tables. These DROP statements receive an IF EXISTS addition, which is intended to make them harmless.

Percona Server for MySQL has fixed this with the bug fixes #964, upstream #83003, and upstream #85258. Moreover, with all the binlogging fixes discussed so far, nothing involving temporary tables is logged to the binary log in row or mixed format, so there is no need to consider CREATE/DROP TEMPORARY TABLE unsafe for use in stored functions, triggers and multi-statement transactions in row/mixed format. Therefore, we introduced an additional fix to mark the creation and drop of temporary tables as unsafe inside transactions in statement-based replication only (the fixed bug is #1816; the corresponding upstream bug, #89467, is still open).
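
As a side note, one way to spot errant GTIDs of this kind on a slave is to subtract the master’s executed GTID set from the slave’s; the UUID range below is a placeholder for the master’s actual gtid_executed value:

-- Run on the slave; any non-empty result is an errant transaction.
SELECT GTID_SUBTRACT(
    @@GLOBAL.gtid_executed,
    '3e11fa47-71ca-11e1-9e33-c80aa9429562:1-100'
) AS errant_gtids;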

Safety of statements with a LIMIT clause

Summary of the fix:

MySQL Server considers all UPDATE/DELETE/INSERT ... SELECT statements with the LIMIT clause unsafe, regardless of whether they really produce non-deterministic results. Percona Server for MySQL is more accurate: it recognizes such statements as safe when they include an ORDER BY PK or WHERE condition.

Details:

MySQL Server treats UPDATE/DELETE/INSERT ... SELECT statements with the LIMIT clause as unsafe, considering that they produce an unpredictable number of rows. But some such statements can still produce an absolutely predictable result. One such deterministic case takes place when a statement with the LIMIT clause has an ORDER BY PK or WHERE condition.
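
To make the distinction concrete, a minimal sketch (the sessions table is hypothetical):

-- Deterministic: ORDER BY on the primary key imposes a total order, so the
-- same 1000 rows are deleted on the master and on the slaves.
DELETE FROM sessions ORDER BY id LIMIT 1000;
-- Non-deterministic: without an order, which 1000 rows get deleted depends
-- on the execution plan, so the statement is treated as unsafe.
DELETE FROM sessions LIMIT 1000;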

The patch, which treats updates and deletes with a LIMIT clause as safe when they have an ORDER BY pk_column clause, was initially provided on the upstream bug report and later incorporated into Percona Server for MySQL with additional improvements. Bug fixed #44 (upstream #42415).

Performance improvements

There are also two modifications in Percona Server related to multi-source replication that improve performance on slaves.

The first improvement concerns the relay log position, which was always updated in multi-source replication setups regardless of whether the committed transaction had already been executed. Percona Server for MySQL omits relay log position updates for already logged GTIDs.

These unconditional relay log position updates caused additional fsync operations in the case of relay-log-info-repository=TABLE. With a higher number of channels transmitting such duplicate (already executed) transactions, the situation became proportionally worse. The problem was solved in Percona Server for MySQL 5.7.18-14. Bug fixed #1786 (upstream #85141).
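
For context, the affected setting can be applied with runtime statements along these lines (illustrative only; the repository type can only be changed while the slave is stopped):

STOP SLAVE;
-- Keep the relay log position in the mysql.slave_relay_log_info table;
-- with this setting, every position update can cost a table write plus an
-- fsync, which is why redundant updates were expensive.
SET GLOBAL relay_log_info_repository = 'TABLE';
START SLAVE;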

The second improvement decreases the load on slave nodes configured to update the master status and connection information only on log file rotation. MySQL additionally updated this information in the case of multi-source replication when a slave had to skip the already executed GTID event. This behavior was the cause of substantially higher write loads on slaves and lower replication throughput.

The configuration with master_info_repository=TABLE and sync_master_info=0 makes the slave update the master status and connection information in this table on log file rotation rather than after each sync_master_info event, but it didn’t work in multi-source replication setups. Heartbeats sent to the slave to skip GTID events that it had already executed were treated as relay log rotation events, triggering a mysql.slave_master_info table sync. This inaccuracy could produce a huge (up to five times on some setups) increase in write load on the slave before the problem was fixed in Percona Server for MySQL 5.7.20-19. Bug fixed #1812 (upstream #85158).
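
A sketch of that configuration, again as illustrative runtime statements:

STOP SLAVE;
SET GLOBAL master_info_repository = 'TABLE';
-- 0 = update mysql.slave_master_info only on relay log rotation instead of
-- after each event received from the master.
SET GLOBAL sync_master_info = 0;
START SLAVE;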

Current status of fixes

The three issues related to temporary tables were fixed in Percona Server 5.5 and contributed upstream, and the final fixes of bugs #72475, #83003, and #85258 have landed in MySQL Server 8.0.4.

The post Binlog and Replication Improvements in Percona Server for MySQL appeared first on Percona Database Performance Blog.

Apr
16
2018
--

Utah’s Pluralsight unveils IPO filing

Pluralsight, the Utah-based education technology company, has revealed its IPO filing. 

Given the timing of the unveiling, the company is likely targeting a May public debut.

Its core business is online software development courses, helping people improve their skills in categories like IT, data and security. Businesses small and large pay Pluralsight to help train their employees. It also has offerings for individual subscribers.

In the filing, the company acknowledges that it is a competitive landscape, and names Cornerstone OnDemand, Udacity, Udemy and LinkedIn Learning as others in a comparable market. It also mentions General Assembly, which was recently acquired by Adecco for $413 million.

This is the first glimpse we get at Pluralsight’s financials. For 2017, the company brought in $166.8 million in revenue, up from $131.8 million in 2016 and $108.4 million in 2015.

Losses are growing, however. This is partly due to a sizeable increase in sales and marketing expenditures. For 2017, the company lost $96.5 million. This is up from losses of $20.6 million in 2016 and $26.4 million in 2015.

Pluralsight has been around since 2004. Like many startups outside of the San Francisco Bay Area, the company bootstrapped its business and didn’t raise significant outside funding until 2013. Pluralsight previously raised nearly $200 million in financing.

The largest shareholder is Insight Venture Partners, which owned 46.1 percent of the shares prior to the IPO, an unusually high percentage. Co-founder and CEO Aaron Skonnard owned 13.4 percent and investment group ICONIQ owned 8.1 percent.

Morgan Stanley and J.P. Morgan served as lead underwriters. Wilson Sonsini and Goodwin Procter served as counsel.

Pluralsight plans to list on the Nasdaq under the ticker “PS.”

A provision in the JOBS Act from 2012 helped make it so that companies could file confidentially and then reveal financials and other business information just weeks before making public debuts. This helps companies avoid too much scrutiny in the months leading up to an IPO. There is also a quiet period in this time, meaning that companies are limited in what they can say publicly about their businesses.

Like most tech companies, Pluralsight chose to take advantage of this confidential filing provision. But it also announced that it filed, something that companies don’t usually do. Most choose to stay quiet about IPO plans until they make the filings public, unless reporters break the news first.

It was no surprise to those who have been following Utah’s tech scene that Pluralsight is planning to list on the stock market this year. The venture-backed “unicorn” has been a late-stage company for several years now, with a reported valuation of $1 billion as of 2014. 

After a slow first couple of months, there has been a flurry of tech IPO activity in recent weeks. Dropbox, Spotify and Zuora recently debuted. Pivotal, Smartsheet and Carbon Black are among the companies expected to list in the coming weeks.

 

Apr
16
2018
--

ProxySQL 1.4.7 and Updated proxysql-admin Tool Now in the Percona Repository


ProxySQL 1.4.7, released by ProxySQL, is now available for download in the Percona Repository, along with an updated version of Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and its forks (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

The ProxySQL 1.4.7 source and binary packages available at https://percona.com/downloads/proxysql include ProxySQL Admin, a tool developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 1.4.7 are available as well: https://hub.docker.com/r/percona/proxysql/. You can download the original ProxySQL from https://github.com/sysown/proxysql/releases.

This release fixes the following bugs in ProxySQL Admin:

Usability improvements:

  • Added the proxysql-status tool to dump ProxySQL configuration and statistics.

Bug fixes:

  • PSQLADM-2: The ProxySQL galera checker script didn’t check whether another instance of itself was already running. While running more than one copy of proxysql_galera_checker in the same runtime environment at the same time is still not supported, the introduced fix prevents duplicate script execution in most cases.
  • PSQLADM-40: The ProxySQL scheduler generated a lot of proxysql_galera_checker and proxysql_node_monitor processes in the case of wrong ProxySQL credentials in the proxysql-admin.cnf file.
  • PSQLADM-41: Timeout error handling was improved with clear messages.
  • PSQLADM-42: An inconsistency in the date format between ProxySQL and the scripts was fixed.
  • PSQLADM-43: proxysql_galera_checker didn’t take into account the possibility of special characters being present in mysql-monitor_password.
  • PSQLADM-44: proxysql_galera_checker generated unclear errors in the proxysql.log file if wrong credentials were passed.
  • PSQLADM-46: The proxysql_node_monitor script incorrectly split the hostname and the port number in URLs containing a hyphen character.

ProxySQL is available under the open source GPLv3 license.

The post ProxySQL 1.4.7 and Updated proxysql-admin Tool Now in the Percona Repository appeared first on Percona Database Performance Blog.

Apr
16
2018
--

Webinar Tuesday April 17, 2018: Which Amazon Cloud Technology Should You Choose? RDS? Aurora? Roll Your Own?


Please join Percona’s Senior Technical Operations Engineer, Daniel Kowalewski, as he presents Which Amazon Cloud Technology Should You Choose? RDS? Aurora? Roll Your Own? on Tuesday, April 17, 2018, at 10:00 am PDT (UTC-7) / 1:00 pm EDT (UTC-4).

Are you running on Amazon, or planning to migrate there? In this talk, we are going to cover the different technologies for running databases on Amazon Cloud environments.

We will focus on the operational aspects, benefits and limitations for each of them.

Register for the webinar now.

Daniel Kowalewski, Senior Technical Operations Engineer

Daniel joined Percona in August of 2015. Previously, he earned a B.S. in Computer Science from the University of Colorado in 2006 and was a DBA there until he joined Percona. In addition to MySQL, Daniel also has experience with Oracle and Microsoft SQL Server, but he much prefers to stay in the MySQL world. Daniel lives near Denver, CO with his wife, two-year-old son, and dog. If you can’t reach him, he’s probably in the mountains hiking, camping, or trying to get lost.

The post Webinar Tuesday April 17, 2018: Which Amazon Cloud Technology Should You Choose? RDS? Aurora? Roll Your Own? appeared first on Percona Database Performance Blog.

Apr
16
2018
--

Kolide raises $8M to turn application and device management into a smart database

More devices are coming onto the Internet every single day, and that’s especially true within organizations that have a fleet of devices with access to sensitive data — which means there are even more holes for potential security breaches.

Closing those holes is the goal of Kolide. The aim is to ensure that companies have access to tools that give them a thorough analysis of every bit of data they have, and where they have it. The Kolide Cloud, its initial major rollout for Mac and Linux devices, turns an entire fleet of apps and devices into what’s basically a table that anyone can query to get an up-to-date look at what’s happening within their business. Kolide looks to provide a robust set of tools that help analyze that data, giving companies a better shot at detecting security breaches that come from even mundane miscalculations or employees being careless about the security of that data. The company said today it has raised $8 million in new venture financing in a round led by Matrix Partners.

“It’s not just an independent event,” Kolide CEO Jason Meller said. “The way I think about it, if you look at any organization, there’s a pathway to a massive security incident, and the pathway is rather innocuous. Let’s say I’m a developer that works at one of these organizations and I need to fix a bug, and pull the production database. Now I have a laptop with this data on it, and I didn’t realize my disk wasn’t encrypted. I went from these innocuous activities to something existentially concerning which could have been prevented if you knew which devices weren’t encrypted and had customer data. A lot of organizations are focused on these very rare events, but the reality is the risk that they face is mishandling of customer data or sensitive information and not thinking about the basics.”

Kolide is built on top of osquery, a toolkit that lets organizations view all their devices and operations as if they were a single database. That means companies can query any of these incidents, or any changes in the way employees use data or the way that data is structured. You could run a simple select query for, say, apps and see what is installed where. It allows for a level of granularity that can help drill down into those little innocuous incidents Meller talks about, though all of that still needs a simpler interface for larger companies that are frantically trying to handle edge cases while overlooking the basics.
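
For instance, osquery exposes system state as SQL tables; a minimal sketch of such queries, run in osqueryi on macOS, where the apps and disk_encryption tables exist:

-- List a few installed applications and their versions.
SELECT name, bundle_short_version FROM apps LIMIT 5;
-- Flag unencrypted disks, the scenario Meller describes above.
SELECT name, encrypted FROM disk_encryption WHERE encrypted = 0;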

Like other companies looking to build a business on top of open source technology, Kolide looks to offer ways to calibrate those tools for a company’s niche needs that the open source core doesn’t necessarily cover. The argument here is that by basing the company and tools on open source software, they’ll be able to lean on that community to rapidly adapt to a changing security environment, which will allow them to be more agile and make a better sales pitch to larger companies.

There’s going to be a lot of competition in application monitoring and management, especially as companies adopt more and more devices to handle their operations. That opens up more and more holes for potential breaches, and in the end, Kolide hopes to create a more granular bird’s-eye view of what’s happening rather than just a flagging system that doesn’t actually explain what’s happening. Some startups are attacking device management, like Fleetsmith for Apple devices (which raised $7.7 million), and to be sure, provisioning and management is one part of the equation. But Kolide hopes to provide a strong toolkit that eventually becomes a powerful monitoring system for organizations as they get bigger and bigger.

“We believe data collection is an absolute commodity,” Meller said. “That’s a fundamentally different approach, they believe the actual collection tools are proprietary. We feel this is a solved problem. Our goal isn’t to take info and regurgitate it in a fancy user interface. We believe we should be paid based on the insights and help manage their fleet better. We can tell the whole industry is swinging this way due to the traction OSQuery had. It’s not a new trend, it’s really the end point as a result of companies that have suffered from this black box situation.”

Apr
16
2018
--

Talking Drupal #167 – Update from DrupalCon

In this episode, Stephen and Nic talk with John, who is attending DrupalCon.

Apr
13
2018
--

MongoDB Replica Set Tag Sets


In this blog post, we will look at MongoDB replica set tag sets, which enable you to use customized write concern and read preferences for replica set members.

This blog post will cover most of the questions that come to mind before using tag sets in a production environment.

  • What scenarios are these helpful for?
  • Do these tag sets work with all read preferences modes?
  • What if we’re already using maxStalenessSeconds along with the read preferences? Can we still use a tag set?
  • How can one configure tag sets in a replica set?
  • Do these tags work identically for custom read preferences and write concerns?

Now let’s answer all these questions one by one.

What scenarios are these helpful for?

You can use tags:

  • If replica set members have different configurations and queries need to be redirected to specific secondaries according to their purpose. For example, production queries can be redirected to higher-configuration members for faster execution, while queries used for internal reporting purposes can be redirected to the lower-configuration secondaries. This helps improve per-node resource utilization.
  • When you use custom read preferences but reads are routed to a secondary residing in another data center. To make reads more optimized and cost-effective, you can use tag sets to make sure that specific reads are routed to a specific secondary node within the DC.
  • If you want to use custom write concerns with the tag set for acknowledging that writes have propagated to the secondary nodes per your requirements.

Do these tag sets work with all read preferences modes?

Yes, these tag sets work with all the read preference modes except “primary”. The “primary” read preference mode doesn’t allow you to add any tag sets while querying.

replicaTest:PRIMARY> db.tagTest.find().readPref('primary', [{"specs" : "low","purpose" : "general"}])
Error: error: {
	"ok" : 0,
	"errmsg" : "Only empty tags are allowed with primary read preference",
	"code" : 2,
	"codeName" : "BadValue"
}

What if we’re already using maxStalenessSeconds along with the read preferences? Can a tag set still be used?

Yes, you can use tag sets with a maxStalenessSeconds value. In that case, priority is given to staleness first, then tags, to get the most recent data from the secondary member.
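
For illustration, both can be combined in a connection string; the hosts and values below are placeholders:

mongodb://host1:27017,host2:27017/test?replicaSet=replicaTest&readPreference=secondary&maxStalenessSeconds=120&readPreferenceTags=specs:low,purpose:general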

How can one configure tag sets in a replica set?

You can configure tags by adding a parameter in the replica set configuration. Consider this test case with a five-member replica set:

"members" : [
		{
			"_id" : 0,
			"name" : "host1:27017",
			"stateStr" : "PRIMARY",
		},
		{
			"_id" : 1,
			"name" : "host2:27017",
			"stateStr" : "SECONDARY",
		},
		{
			"_id" : 2,
			"name" : "host3:27017",
			"stateStr" : "SECONDARY",
		},
		{
			"_id" : 3,
			"name" : "host4:27017",
			"stateStr" : "SECONDARY",
		},
		{
			"_id" : 4,
			"name" : "host5:27017",
			"stateStr" : "SECONDARY",
         }
		]

For our test case, the “specs” tag captures each host’s specification, and the “purpose” tag captures the query requirement as per the application, so that queries can be routed to specific members in an optimized manner.

You must associate tags with each member by adding them to the replica set configuration:

cfg=rs.conf()
cfg.members[0].tags={"specs":"high","purpose":"analytics"}
cfg.members[1].tags={"specs":"high"}
cfg.members[2].tags={"specs":"low","purpose":"general"}
cfg.members[3].tags={"specs":"high","purpose":"analytics"}
cfg.members[4].tags={"specs":"low"}
rs.reconfig(cfg)

After adding tags, you can validate these changes by checking replica set configurations like:

rs.conf()
	"members" : [
		{
			"_id" : 0,
			"host" : "host1:27017",
			"tags" : {
				"specs" : "high",
				"purpose" : "analytics"
			},
		},
		{
			"_id" : 1,
			"host" : "host2:27017",
			"tags" : {
				"specs" : "high"
			},
		},
		{
			"_id" : 2,
			"host" : "host3:27017",
			"tags" : {
				"specs" : "low",
				"purpose" : "general"
			},
		},
		{
			"_id" : 3,
			"host" : "host4:27017",
			"tags" : {
				"specs" : "high",
				"purpose" : "analytics"
			},
		},
		{
			"_id" : 4,
			"host" : "host5:27017",
			"tags" : {
				"specs" : "low"
			},
		}
	]

Now, we are done with the tag-set configuration.

Do these tags work identically for custom read preferences and write concerns?

No, custom read preferences and write concerns consider tag sets in different ways.

Read preferences route read operations to the required specific member by following the tag values assigned to it, but write concerns use tag values only to check whether the value is unique. They will not consider tag values while selecting replica members.

Let us see how to use tag sets with write concerns. As per our test case, we have two unique tag values (i.e., “analytics” and “general”) defined as:

cfg=rs.conf()
cfg.settings={ getLastErrorModes: {writeNode:{"purpose": 2}}}
rs.reconfig(cfg)

You can validate these changes by checking the replica set configuration:

rs.conf()
	"settings" : {
			"getLastErrorModes" : {
			"writeNode" : {
				"purpose" : 2
			}
		},
	}

Now let’s try to insert a sample document in the collection named “tagTest” with this write concern:

db.tagTest.insert({name:"tom",tech:"nosql",status:"active"},{writeConcern:{w:"writeNode"}})
WriteResult({ "nInserted" : 1 })

Here, the write concern “writeNode” means the client gets a write acknowledgment from two nodes with unique tag set values. If the value set in the configuration exceeds the count of unique values, then it leads to an error at the time of the write:

cfg.settings={ getLastErrorModes: {writeNode:{"purpose": 4}}}
rs.reconfig(cfg)
db.tagTest.insert({name:"tom",tech:"nosql",status:"active"},{writeConcern:{w:"writeNode"}})
WriteResult({
	"nInserted" : 1,
	"writeConcernError" : {
		"code" : 100,
		"codeName" : "CannotSatisfyWriteConcern",
		"errmsg" : "Not enough nodes match write concern mode "writeNode""
	}
}

You can perform read and write operations with tag sets like this:

db.tagTest.find({name:"tom"}).readPref("secondary",[{"specs":"low","purpose":"general"}])
db.tagTest.insert({name:"john",tech:"rdbms",status:"active"},{writeConcern:{w:"writeNode"}})

I hope this helps you understand how to configure MongoDB replica set tag sets, how read preferences and write concerns handle them, and where you can use them.

The post MongoDB Replica Set Tag Sets appeared first on Percona Database Performance Blog.
