Nov
30
2018
--

PostgreSQL Streaming Physical Replication With Slots

postgres replication using slots

PostgreSQL streaming physical replication with slots simplifies setup and maintenance procedures. Without slots, you usually have to estimate disk usage for the Write Ahead Log (WAL), limit the number of retained segments appropriately, and set up a WAL archive procedure. In this article, you will see how to use replication with slots and understand what problems it can solve.

Introduction

PostgreSQL physical replication is based on WAL. The Write Ahead Log contains all database changes, saved in 16MB segment files. Normally, Postgres keeps only the segments needed since the last checkpoint, so with default settings just 1GB of WAL segment files is retained.

Replication requires all WAL files created after the backup and up until the current time. Previously, it was necessary to keep a huge archive directory (usually mounted by NFS to all slave servers). The slots feature introduced in 9.4 allows Postgres to track the latest segment downloaded by a slave server. Now, PostgreSQL can keep all segments on disk, even without archiving, if a slave is seriously behind its master due to downtime or networking issues. The drawback: disk space can be consumed without limit in the case of a configuration error. Before continuing, if you need a better understanding of physical replication and streaming replication, I recommend you read “Streaming Replication with PostgreSQL“.

Create a sandbox with two PostgreSQL servers

To set up replication, you need at least two PostgreSQL servers. I’m using pgcli (pgc) to set up both servers on the same host. It’s easy to install on Linux, Windows, and OS X, and provides the ability to download and run any version of PostgreSQL on your staging server or even on your laptop.

python -c "$(curl -fsSL https://s3.amazonaws.com/pgcentral/install.py)"
mv bigsql master
cp -r master slave
$ cd master
master$ ./pgc install pg10
master$ ./pgc start pg10
$ cd ../slave
slave$ ./pgc install pg10
slave$ ./pgc start pg10

First of all, allow the replication user to connect:

master$ echo "host replication replicator 127.0.0.1/32 md5" >> ./data/pg10/pg_hba.conf

If you are running master and slave on different servers, please replace 127.0.0.1 with the slave’s address.

Next, source the shell environment file that pgc creates, which sets PATH and all the other variables required for PostgreSQL:

master$ source ./pg10/pg10.env

Allow connections from the remote host, and create a replication user and slot on master:

master$ psql
postgres=# CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'replicator';
CREATE ROLE
postgres=# ALTER SYSTEM SET listen_addresses TO '*';
ALTER SYSTEM
postgres=# SELECT pg_create_physical_replication_slot('slot1');
pg_create_physical_replication_slot
-------------------------------------
(slot1,)

To apply the changed system variables and pg_hba.conf, restart the Postgres server:

master$ ./pgc stop ; ./pgc start
pg10 stopping
pg10 starting on port 5432

Test table

Create a table with lots of padding on the master:

master$ psql
psql (10.6)
Type "help" for help.
postgres=# CREATE TABLE t(id INT, pad CHAR(200));
postgres=# CREATE INDEX t_id ON t (id);
postgres=# INSERT INTO t SELECT generate_series(1,1000000) AS id, md5((random()*1000000)::text) AS pad;

Filling WAL with random data

To see the benefits of slots, we should fill the WAL with some data by running transactions. Repeat the update statement below to generate a huge amount of WAL data:

UPDATE t SET pad = md5((random()*1000000)::text);
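If you would rather not paste the statement over and over, a small anonymous PL/pgSQL block can run it in a loop. This is just a convenience sketch; the repetition count of 4 is arbitrary:

postgres=# DO $$
BEGIN
    -- each iteration rewrites all rows of t, generating a large amount of WAL
    FOR i IN 1..4 LOOP
        UPDATE t SET pad = md5((random()*1000000)::text);
    END LOOP;
END
$$;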

Checking the current WAL size

You can check total size for all WAL segments from the shell or from psql:

master$ du -sh data/pg10/pg_wal
17M data/pg10/pg_wal
master$ source ./pg10/pg10.env
master$ psql
postgres=# \! du -sh data/pg10/pg_wal
17M data/pg10/pg_wal

Check maximum WAL size without slots activated

Before configuring replication, we can fill the WAL with random data and observe that once it reaches about 1.1GB, the data/pg10/pg_wal directory size stops growing, regardless of how many more update queries we run.

postgres=# UPDATE t SET pad = md5((random()*1000000)::text); -- repeat 4 times
postgres=# \! du -sh data/pg10/pg_wal
1.1G data/pg10/pg_wal
postgres=# UPDATE t SET pad = md5((random()*1000000)::text);
postgres=# \! du -sh data/pg10/pg_wal
1.1G data/pg10/pg_wal
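This ceiling comes from the max_wal_size setting, which defaults to 1GB in PostgreSQL 10. You can confirm the value in effect from psql (the output below assumes the default configuration):

postgres=# SHOW max_wal_size;
 max_wal_size
--------------
 1GB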

Backup master from the slave server

Next, let’s take a base backup from the slave server, using slot1:

slave$ source ./pg10/pg10.env
slave$ ./pgc stop pg10
slave$ rm -rf data/pg10/*
# If you are running master and slave on different servers, replace 127.0.0.1 with master's IP address.
slave$ PGPASSWORD=replicator pg_basebackup -S slot1 -h 127.0.0.1 -U replicator -p 5432 -D $PGDATA -Fp -P -Xs -Rv

Unfortunately, pg_basebackup hangs with: initiating base backup, waiting for checkpoint to complete.
We can either wait for the next checkpoint or force one on the master. A checkpoint happens every checkpoint_timeout seconds, which is five minutes by default.

Forcing checkpoint on master:

master$ psql
postgres=# CHECKPOINT;

The backup continues on the slave side:

pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/92000148 on timeline 1
pg_basebackup: starting background WAL receiver
1073986/1073986 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 0/927FDDE8
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: base backup completed

The backup copies settings from the master, including its TCP port value. I’m running both master and slave on the same host, so I should change the port in the slave’s postgresql.conf file:

slave$ vim data/pg10/postgresql.conf
# old value port = 5432
port = 5433

Now we can return to the master and run some queries:

slave$ cd ../master
master$ source pg10/pg10.env
master$ psql
postgres=# UPDATE t SET pad = md5((random()*1000000)::text);
postgres=# UPDATE t SET pad = md5((random()*1000000)::text);

After running these queries, the WAL size is 1.4GB, already bigger than the 1.1GB limit we saw earlier. Repeat the update query three more times and the WAL grows to 2.8GB:

master$ du -sh data/pg10/pg_wal
2.8G data/pg10/pg_wal

Left unchecked, the WAL could keep growing until all disk space is consumed.
How do we find out the reason for this?

postgres=# SELECT redo_lsn, slot_name,restart_lsn,
round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind
FROM pg_control_checkpoint(), pg_replication_slots;
redo_lsn    | slot_name | restart_lsn | gb_behind
------------+-----------+-------------+-----------
1/2A400630  | slot1     |  0/92000000 | 2.38

We have one slot that is 2.38GB behind the master.

Let’s repeat the update and check again. The gap has increased:

postgres=# SELECT redo_lsn, slot_name,restart_lsn,
round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind
FROM pg_control_checkpoint(), pg_replication_slots;
redo_lsn    | slot_name | restart_lsn | gb_behind
------------+-----------+-------------+-----------
1/8D400238  |     slot1 | 0/92000000  | 3.93

Wait, though: we have already used slot1 for backup! Let’s start the slave:

master$ cd ../slave
slave$ ./pgc start pg10

Replication started without any additional change to recovery.conf:

slave$ cat data/pg10/recovery.conf
standby_mode = 'on'
primary_conninfo = 'user=replicator password=replicator passfile=''/home/pguser/.pgpass'' host=127.0.0.1 port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres target_session_attrs=any'
primary_slot_name = 'slot1'

The pg_basebackup -R option instructs the backup to write a recovery.conf file with all the required options, including primary_slot_name.

WAL size, all slots connected

The gap shrank within seconds of the slave starting:

postgres=# SELECT redo_lsn, slot_name,restart_lsn,
round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind
FROM pg_control_checkpoint(), pg_replication_slots;
redo_lsn    | slot_name | restart_lsn | gb_behind
------------+-----------+-------------+-----------
 1/8D400238 |     slot1 |  0/9A000000 | 3.80

And a few minutes later:

postgres=# SELECT redo_lsn, slot_name,restart_lsn,
round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind
FROM pg_control_checkpoint(), pg_replication_slots;
redo_lsn    | slot_name | restart_lsn | gb_behind
------------+-----------+-------------+-----------
 1/9E5DECE0 |     slot1 |  1/9EB17250 | -0.01
postgres=# \! du -sh data/pg10/pg_wal
1.3G data/pg10/pg_wal

Slave server maintenance

Let’s simulate slave server maintenance with ./pgc stop pg10 executed on the slave. We’ll push some data onto the master again (execute the UPDATE query 4 times).

Now, “slot1” is again 2.36GB behind.

Removing unused slots

By now, you might have realized that the problematic slot is simply not in use. In such cases, you can drop it so that the retained segments can be recycled:

master$ psql
postgres=# SELECT pg_drop_replication_slot('slot1');

Finally, the disk space is released:

master$ du -sh data/pg10/pg_wal
1.1G data/pg10/pg_wal
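Going forward, you can spot orphaned slots before they become a problem: in pg_replication_slots, the active column is false for slots with no connected consumer. A minimal check:

master$ psql
postgres=# SELECT slot_name, active, restart_lsn FROM pg_replication_slots WHERE NOT active;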

Important system variables

  • archive_mode – not required for streaming replication with slots.
  • wal_level – replica by default.
  • max_wal_senders – 10 by default; a minimum of three for one slave, plus two for each additional slave.
  • wal_keep_segments – 32 by default; not important here, because PostgreSQL keeps all segments required by a slot.
  • archive_command – not important for streaming replication with slots.
  • listen_addresses – the only option you must change, to allow remote slaves to connect.
  • hot_standby – on by default; important for enabling reads on the slave.
  • max_replication_slots – 10 by default (see https://www.postgresql.org/docs/10/static/runtime-config-replication.html).
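You can review the values currently in effect on the master with a query against the standard pg_settings view (a minimal sketch):

postgres=# SELECT name, setting FROM pg_settings
           WHERE name IN ('wal_level', 'max_wal_senders', 'max_replication_slots',
                          'wal_keep_segments', 'hot_standby', 'listen_addresses');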

Summary

  • Physical replication setup is really easy with slots. By default in PostgreSQL 10, all settings are already prepared for replication setup.
  • Be careful with orphaned slots. PostgreSQL will not remove WAL segments for inactive slots once restart_lsn is initialized.
  • Check the restart_lsn value in pg_replication_slots and compare it with the current redo_lsn.
  • Avoid long downtime for slave servers that have slots configured.
  • Use meaningful names for slots, as that will simplify debugging.


Nov
30
2018
--

DoJ charges Autonomy founder with fraud over $11BN sale to HP

U.K. entrepreneur turned billionaire investor Mike Lynch has been charged with fraud in the U.S. over the 2011 sale of his enterprise software company.

Lynch sold Autonomy, the big data company he founded back in 1996, to computer giant HP for around $11 billion some seven years ago.

But within a year around three-quarters of the value of the business had been written off, with HP accusing Autonomy’s management of accounting misrepresentations and disclosure failures.

Lynch has always rejected the allegations, and after HP sought to sue him in U.K. courts he countersued in 2015.

Meanwhile, the U.K.’s own Serious Fraud Office dropped an investigation into the Autonomy sale in 2015 — finding “insufficient evidence for a realistic prospect of conviction.”

But now the DoJ has filed charges in a San Francisco court, accusing Lynch and other senior Autonomy executives of making false statements that inflated the value of the company.

They face 14 counts of conspiracy and fraud, according to Reuters — a charge that carries a maximum penalty of 20 years in prison.

We’ve reached out to Lynch’s fund, Invoke Capital, for comment on the latest development.

The BBC has obtained a statement from his lawyers, Chris Morvillo of Clifford Chance and Reid Weingarten of Steptoe & Johnson, which describes the indictment as “a travesty of justice.”

The statement also claims Lynch is being made a scapegoat for HP’s failures, framing the allegations as a business dispute over the application of U.K. accounting standards. 

Two years ago we interviewed Lynch onstage at TechCrunch Disrupt London and he mocked the morass of allegations still swirling around the acquisition as “spin and bullshit.”

Following the latest developments, the BBC reports that Lynch has stepped down as a scientific adviser to the U.K. government.

“Dr. Lynch has decided to resign his membership of the CST [Council for Science and Technology] with immediate effect. We appreciate the valuable contribution he has made to the CST in recent years,” a government spokesperson told it.

Nov
30
2018
--

Enterprise AR is an opportunity to ‘do well by doing good,’ says General Catalyst

A founder-investor panel on augmented reality (AR) technology here at TechCrunch Disrupt Berlin suggests growth hopes for the space have regrouped around enterprise use-cases, after the VR consumer hype cycle landed with yet another flop in the proverbial ‘trough of disillusionment’.

Matt Miesnieks, CEO of mobile AR startup 6d.ai, conceded the space has generally been on another downer but argued it’s coming out of its third hype cycle now with fresh b2b opportunities on the horizon.

6d.ai investor General Catalyst’s Niko Bonatsos was also on stage, and both suggested the challenge for AR startups is figuring out how to build for enterprises so the b2b market can carry the mixed reality torch forward.

“From my point of view the fact that Apple, Google, Microsoft, have made such big commitments to the space is very reassuring over the long term,” said Miesnieks. “Similar to the smartphone industry ten years ago we’re just gradually seeing all the different pieces come together. And as those pieces mature we’ll eventually, over the next few years, see it sort of coalesce into an iPhone moment.”

“I’m still really positive,” he continued. “I don’t think anyone should be looking for some sort of big consumer hit product yet but in verticals in enterprise, and in some of the core tech enablers, some of the tool spaces, there’s really big opportunities there.”

Investors shot the arrow over the target where consumer VR/AR is concerned because they’d underestimated how challenging the content piece is, Bonatsos suggested.

“I think what we got wrong is probably the belief that we thought more indie developers would have come into the space and that by now we would probably have, I don’t know, another ten Pokémon-type consumer massive hit applications. This is not happening yet,” he said.

“I thought we’d have a few more games because games always lead the adoption to new technology platforms. But in the enterprise this is very, very exciting.”

“For sure also it’s clear that in order to have the iPhone moment we probably need to have much better hardware capabilities,” he added, suggesting everyone is looking to the likes of Apple to drive that forward in the future. On the plus side he said current sentiment is “much, much much better than what it was a year ago”.


Discussing potential b2b applications for AR tech one idea Miesnieks suggested is for transportation platforms that want to link a rider to the location of an on-demand and/or autonomous vehicle.

Another area of opportunity he sees is working with hardware companies — to add spacial awareness to devices such as smartphones and drones to expand their capabilities.

More generally they mentioned training for technical teams, field sales and collaborative use-cases as areas with strong potential.

“There are interesting applications in pharma, oil & gas where, with the aid of the technology, you can do very detailed stuff that you couldn’t do before because… you can follow everything on your screen and you can use your hands to do whatever it is you need to be doing,” said Bonatsos. “So that’s really, really exciting.

“These are some of the applications that I’ve seen. But it’s early days. I haven’t seen a lot of products in the space. It’s more like there’s one dev shop is working with the chief innovation officer of one specific company that is much more forward thinking and they want to come up with a really early demo.

“Now we’re seeing some early stage tech startups that are trying to attack these problems. The good news is that good dollars is being invested in trying to solve some of these problems — and whoever figures out how to get dollars from the… bigger companies, these are real enterprise businesses to be built. So I’m very excited about that.”

At the same time, the panel delved into some of the complexities and social challenges facing technologists as they try to integrate blended reality into, well, the real deal.

Including raising the spectre of Black Mirror style dystopia once smartphones can recognize and track moving objects in a scene — and 6d.ai’s tech shows that’s coming.

Miesnieks showed a brief video demo of 3D technology running live on a smartphone that’s able to identify cars and people moving through the scene in real time.

“Our team were able to solve this problem probably a year ahead of where the rest of the world is at. And it’s exciting. If we showed this to anyone who really knows 3D they’d literally jump out of the chair. But… it opens up all of these potentially unintended consequences,” he said.

“We’re wrestling with what might this be used for. Sure it’s going to make a Pokémon game more fun. It could also let a blind person walk down the street and have awareness of cars and people and they may not need a cane or something.

“But it could let you like tap and literally have people be removed from your field of view and so you only see the type of people that you want to look at. Which can be dystopian.”

He pointed to issues being faced by the broader technology industry now, around social impacts and areas like privacy, adding: “We’re seeing some of the social impacts of how this stuff can go wrong, even if you assume good intentions.

“These sort of breakthroughs that we’re having are definitely causing us to be aware of the responsibility we have to think a bit more deeply about how this might be used for the things we didn’t expect.”

From the investor point of view Bonatsos said his thesis for enterprise AR has to be similarly sensitive to the world around the tech.

“It’s more about can we find the domain experts, people like Matt, that are going to do well by doing good. Because there are a tonne of different parameters to think about here and have the credibility in the market to make it happen,” he suggested, noting: “It’s much more like traditional enterprise investing.”

“This is a great opportunity to use this new technology to do well by doing good,” Bonatsos continued. “So the responsibility is here from day one to think about privacy, to think about all the fake stuff that we could empower, what do we want to do, what do we want to limit? As well as, as we’re creating this massive, augmented reality, 3D version of the world — like who is going to own it, and share all this wealth? How do we make sure that there’s going to be a whole new ecosystem that everybody can take part of it. It’s very interesting stuff to think about.”

“Even if we do exactly what we think is right, and we assume that we have good intentions, it’s a big grey area in lots of ways and we’re going to make lots of mistakes,” conceded Miesnieks, after discussing some of the steps 6d.ai has taken to try to reduce privacy risks around its technology — such as local processing coupled with anonymizing/obfuscating any data that is taken off the phone.

“When [mistakes] happen — not if, when — all that we’re going to be able to rely on is our values as a company and the trust that we’ve built with the community by saying these are our values and then actually living up to them. So people can trust us to live up to those values. And that whole domain of startups figuring out values, communicating values and looking at this sort of abstract ‘soft’ layer — I think startups as an industry have done a really bad job of that.

“Even big companies. There’d only be a handful that you could say… are pretty clear on their values. But for AR and this emerging tech domain it’s going to be, ultimately, the core that people trust us.”

Bonatsos also pointed to rising political risk as a major headwind for startups in this space — noting how China’s government has decided to regulate the gaming market because of social impacts.

“That’s unbelievable. This is where we’re heading with the technology world right now. Because we’ve truly made it. We’ve become mainstream. We’re the incumbents. Anything we build has huge, huge intended and unintended consequences,” he said.

“Having a government that regulates how many games that can be built or how many games can be released — like that’s incredible. No company had to think of that before as a risk. But when people are spending so many hours and so much money on the tech products they are using every day. This is the [inevitable] next step.”

Nov
29
2018
--

New AWS tool helps customers understand best cloud practices

Since 2015, AWS has had a team of solution architects working with customers to make sure they are using AWS services in a way that meets best practices around a set of defined criteria. Today, the company announced a new Well Architected tool that helps customers do this themselves in an automated way without the help of a human consultant.

As Amazon CTO Werner Vogels said in his keynote address at AWS re:Invent in Las Vegas, it’s hard to scale a human team inside the company to meet the needs of thousands of customers, especially when so many want to be sure they are complying with these best practices. He indicated that they even brought on a network of certified partners to help, but it still has not been enough to meet demand.

In typical AWS fashion, they decided to create a service to help customers measure how well they are doing in terms of operations, security, reliability, cost optimization and performance efficiency. Customers can run this tool against the AWS services they are using and get a full report of how they measure up against these five factors.

“I think of it as a way to make sure that you are using the cloud right, and that you are using it well,” Jeff Barr wrote in a blog post introducing the new service.

Instead of working with a human to analyze your systems, you answer a series of questions and then generate a report based on those answers. When the process is complete, you can generate a PDF report with all the recommendations for your particular situation.


While it’s doubtful that such an approach can be as comprehensive as a conversation between client and consultant, it is a starting point to at least get you on the road to thinking about such things, and as a free service, you have little to lose by at least trying the tool and seeing what it tells you.

more AWS re:Invent 2018 coverage

Nov
29
2018
--

AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code (event triggers) and AWS deals with whatever compute, memory and storage you need to make that work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make it more developer friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all that for you, serving whatever resources you need to run your event and no more. It means you no longer have to worry about coding for all your infrastructure and you only pay for the computing you need at any given moment to make the application work.

The way AWS works is that it tends to release something, then builds more functionality on top of a base service as it sees increasing requirements as customers use it. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate about tools and everyone has their own idea of what tools they bring to the task every day.

For starters, they decided to please the language folks introducing support for new languages. Those developers who use Ruby can now use Ruby Support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced C++ Lambda Runtime. If neither of those match your programming language tastes, AWS opened it up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”

AWS didn’t want to stop with languages though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean that all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.

As Lambda matures, developer requirements grow and these announcements and others are part of trying to meet those needs.

more AWS re:Invent 2018 coverage

Nov
29
2018
--

AWS launches a managed Kafka service

Kafka is an open-source tool for handling incoming streams of data. Like virtually all powerful tools, it’s somewhat hard to set up and manage. Today, Amazon’s AWS is making this all a bit easier for its users with the launch of Amazon Managed Streaming for Kafka. That’s a mouthful, but it’s essentially Kafka as a fully managed, highly available service on AWS. It’s now available on AWS as a public preview.

As AWS CTO Werner Vogels noted in his AWS re:Invent keynote, Kafka users traditionally had to do a lot of heavy lifting to set up a cluster on AWS and to ensure that it could scale and handle failures. “It’s a nightmare having to restart all the cluster and the main nodes,” he said. “This is what I would call the traditional heavy lifting that AWS is really good at solving for you.”

It’s interesting to see AWS launch this service, given that it already offers a very similar tool in Kinesis, a tool that also focuses on ingesting streaming data. There are plenty of applications on the market today that already use Kafka, and AWS is clearly interested in giving those users a pathway to either move to a managed Kafka service or to AWS in general.

As with all things AWS, the pricing is a bit complicated, but a basic Kafka instance will start at $0.21 per hour. You’re not likely to just use one instance, so for a somewhat useful setup with three brokers and a good amount of storage and some other fees, you’ll quickly pay well over $500 per month.

more AWS re:Invent 2018 coverage

Nov
29
2018
--

Percona Server for MySQL 5.6.42-84.2 Is Now Available

Percona Server for MySQL

Percona announces the release of Percona Server 5.6.42-84.2 on November 29, 2018 (downloads are available here and from the Percona Software Repositories).

Based on MySQL 5.6.42, including all the bug fixes in it, Percona Server 5.6.42-84.2 is the current GA release in the Percona Server 5.6 series. All of Percona’s software is open-source and free.

Improvements

  • PS-4790: Improve user statistics accuracy

Bugs Fixed

  • Slave replication could break if upstream bug #74145 (FLUSH LOGS improperly disables the logging if the log file cannot be accessed) occurred in master. Bug fixed PS-1017 (Upstream #83232).
  • The binary log could be corrupted when the disk partition used for temporary files (tmpdir system variable) had little free space. Bug fixed PS-1107 (Upstream #72457).
  • PURGE CHANGED_PAGE_BITMAPS did not work when the innodb_data_home_dir system variable was used. Bug fixed PS-4723.
  • Setting the tokudb_last_lock_timeout variable via the command line could cause the server to stop working when the actual timeout took place. Bug fixed PS-4943.
  • Dropping TokuDB table with non-alphanumeric characters could lead to a crash. Bug fixed PS-4979.

Other bugs fixed

  • PS-4781: sql_yacc.yy uses SQLCOM_SELECT instead of SQLCOM_SHOW_XXXX_STATS
  • PS-4529: MTR: index_merge_rocksdb2 inadvertently tests InnoDB instead of MyRocks
  • PS-4746: Revert our fix for PS-3851 (Percona Ver 5.6.39-83.1 Failing assertion: sym_node->table != NULL)
  • PS-4773: Percona Server sources can’t be compiled without server
  • PS-4785: Setting version_suffix to NULL leads to handle_fatal_signal (sig=11) in Sys_var_version::global_value_ptr
  • PS-4813: Using flush_caches leads to SELinux denial errors
  • PS-4881: Add LLVM/clang 7 to Travis-CI

Find the release notes for Percona Server for MySQL 5.6.42-84.2 in our online documentation. Report bugs in the Jira bug tracker.

 

Nov
29
2018
--

MySQL High Availability: Stale Reads and How to Fix Them

solutions for MySQL Stale Reads

Continuing our series of blog posts about MySQL High Availability, today we will talk about stale reads and how to overcome this issue.

The Problem

A stale read is a read operation that fetches an incorrect value from a source that has not yet synchronized an update to that value (source: Wiktionary).

A practical scenario is when your application applies INSERT or UPDATE data to your master/writer node, and has to read it immediately after. If this particular read is served from another server in the replication/cluster topology, the data is either not there yet (in case of an INSERT) or it still provides the old value (in case of an UPDATE).
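As a concrete illustration (the table name and values here are hypothetical), the problematic sequence looks like this:

-- On the master/writer node:
mysql_master> UPDATE accounts SET balance = balance - 100 WHERE id = 42;

-- Immediately afterwards, a read routed to a lagging slave:
mysql_slave> SELECT balance FROM accounts WHERE id = 42;
-- may still return the old balance if the UPDATE has not been applied yet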

If your application or part of your application is sensitive to stale reads, then this is something to consider when implementing HA/load balancing.

How NOT to fix stale reads

While working with customers, we have seen a few incorrect attempts to fix the issue:

SELECT SLEEP(X)

The most common incorrect approach that we see in Percona support is when customers add a sleep between the write and the read. This may work in some cases, but it’s not 100% reliable for all scenarios, and it can add latency when there is no need.

Let’s review an example where, by the time you query your slave, the data is already applied, yet you have configured your transaction to start with a SELECT SLEEP(1). In this case, you just added 1000ms of latency for no reason.

Another example is when the slave is lagging behind by more than whatever value you configured in the sleep. In that case, you would have to add logic that keeps retrying the sleep until the slave has applied the data, which could potentially take several seconds.

Reference: SELECT SLEEP.

Semisync replication

By default, MySQL replication is asynchronous, and this is exactly what causes the stale read. However, MySQL distributes a plugin that can make the replication semi-synchronous. We have seen customers enabling it hoping the stale reads problem will go away. In fact, that is not the case. The semi-synchronous plugin only ensures that at least one slave has received it (IO Thread has streamed the binlog event to relay log), but the action of applying the event is done asynchronously. In other words, stale reads are still a problem with semi-sync replication.

Reference: Semisync replication.

How to PROPERLY fix stale reads

There are several ways to fix/overcome this situation, and each one has its pros and cons:

1) MASTER_POS_WAIT

This consists of executing SHOW MASTER STATUS right after your write, getting the binlog file and position, connecting to a slave, and executing the MASTER_POS_WAIT function there, passing the binlog file and position as parameters. The function blocks until the slave has applied up to that position. You can optionally pass a timeout to make the function return once it is exceeded.
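A minimal sketch of this flow; the binlog file name, position, and 5-second timeout below are example values:

-- On the master, right after the write:
mysql_master> SHOW MASTER STATUS;
-- note the File (e.g. mysql-bin.000042) and Position (e.g. 1337) columns

-- On the slave, before the read:
mysql_slave> SELECT MASTER_POS_WAIT('mysql-bin.000042', 1337, 5);
-- blocks until the slave has applied that position; returns NULL or -1 on error/timeout,
-- after which it is safe to run the read on this slave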

Pros:

  • Works on all MySQL versions
  • No prerequisites

Cons:

  • Requires an application code rewrite.
  • It’s a blocking operation, and can add significant latency to queries in cases where a slave/node is too far behind.

Reference: MASTER_POS_WAIT.

2) WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS

Requires GTID: this is similar to the previous approach, but instead of a binlog file and position we track the executed GTID set from the master (also available in SHOW MASTER STATUS) and wait for it on the slave with the WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS function.
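A minimal sketch; the GTID set and 5-second timeout below are example values:

-- On the master, right after the write:
mysql_master> SELECT @@GLOBAL.gtid_executed;   -- also shown by SHOW MASTER STATUS

-- On the slave, before the read:
mysql_slave> SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('3E11FA47-71CA-11E1-9E33-C80AA9429562:1-23', 5);
-- blocks until the slave's SQL thread has applied the given GTID set (or the timeout elapses)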

Pros:

  • Built into the server; no additional components are required.

Cons:

  • Requires an application code rewrite.
  • It’s a blocking operation, can add significant latency to queries in cases where a slave/node is too far behind.
  • As it requires GTID, it only works on versions from 5.6 onwards.

Reference: WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS

3) Querying slave_relay_log_info

This consists of enabling relay_log_info_repository=TABLE and sync_relay_log_info=1 on the slave, and then using a similar approach to option 1. After the write, execute SHOW MASTER STATUS, connect to the slave, and query mysql.slave_relay_log_info, comparing the binlog name and position to verify whether the slave has already applied a position at or beyond the one you got from SHOW MASTER STATUS.
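A minimal sketch of this check; the file name and position mentioned in the comments are example values noted from SHOW MASTER STATUS on the master:

-- Slave configuration assumed from the text above:
--   relay_log_info_repository = TABLE
--   sync_relay_log_info       = 1

-- On the slave, a non-blocking check:
mysql_slave> SELECT Master_log_name, Master_log_pos FROM mysql.slave_relay_log_info;
-- If (Master_log_name, Master_log_pos) is at or beyond the position noted on the
-- master (e.g. mysql-bin.000042 / 1337), it is safe to read from this slave;
-- otherwise try another slave or fall back to the master.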

Pros:

  • This is not a blocking operation.
  • In cases where the slave is missing the position you require, you can try to connect to another slave and repeat the process. There is even an option to fail over back to the master if none of the slaves have the said position.

Cons:

  • Requires an application code rewrite.
  • In cases of checking multiple slaves, this can add significant latency.

Reference: slave_relay_log_info.

4) wsrep_sync_wait

Requires Galera/Percona XtraDB Cluster: this consists of setting a global/session variable to enforce consistency. It blocks execution of subsequent queries until the node has applied all write-sets from its applier queue. It can be configured to trigger on multiple types of commands, such as SELECT, INSERT, and so on.
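A minimal per-session sketch, assuming Percona XtraDB Cluster/Galera; the value 1 enforces consistency for read statements:

mysql> SET SESSION wsrep_sync_wait = 1;
mysql> SELECT ... ;                      -- blocks until the node has applied its queued write-sets
mysql> SET SESSION wsrep_sync_wait = 0;  -- restore the default afterwards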

Pros:

  • Easy to implement. Built-in as a SESSION variable.

Cons:

  • Requires an application code rewrite in the event that you want to implement the solution on a per-session basis.
  • It’s a blocking operation, and can add significant latency to queries if a slave/node is too far behind.

Reference: wsrep-sync-wait

5) ProxySQL 2.0 GTID consistent reads

Requires MySQL 5.7 and GTID: MySQL 5.7 returns the GTID generated by a commit as part of the OK packet. ProxySQL, with the help of binlog readers installed on the MySQL servers, can keep track of which GTIDs each slave has already applied. With this information plus the GTID received in the OK packet at the moment of the write, ProxySQL decides whether to route a subsequent read to one of the slaves/read nodes or have the master/write node serve the read.

Pros:

  • Transparent to the application – no code changes are required.
  • Adds minimal latency.

Cons:

  • This is still a new feature of ProxySQL 2.0, which is not yet GA.

Reference: GTID consistent reads.

Conclusions

Undesirable issues can arise from adding HA and distributing the load across multiple servers. Stale reads can cause an impact on applications sensitive to them. We have demonstrated various approaches you can use to overcome them.


Photo by Tim Evans on Unsplash

Nov
29
2018
--

Asana, a work management platform, nabs $50M growth round at a $1.5B valuation

Asana, a service that teams and individuals use to plan and track the progress of work projects, is doubling down on its own project: to shape “the future of work,” in the words of co-founder and CEO Dustin Moskovitz. The startup, whose products are used by millions of free and paying users, today is announcing that it has raised another $50 million in funding — a Series E that catapults Asana into unicorn status with a $1.5 billion valuation — to invest in international and product expansion.

Asana has been on a funding tear: It raised $75 million just 11 months ago at a $900 million post-money valuation, bringing the total this year to $125 million, and $213 million since being founded in 2008.

Led by Generation Investment Management — the London firm co-founded by former US Vice President Al Gore that also led that Series D in January — this latest round also includes existing investors 8VC, Benchmark Capital and Founders Fund as well as new investors Lead Edge Capital and World Innovation Lab.

Asana has lately been focused on international growth — half of its new sales are already coming from outside the US — and expanding its product as it inches toward profitability. These are the areas where its latest investment will go, too.

Specifically, it plans to open an AWS-based data center in Frankfurt in the first half of next year, and it will set down more roots in Asia-Pacific, with offices in Sydney and Tokyo. It is also hiring in both markets. Asana has customers in 195 countries and six languages, and it looks like it’s homing in on these two regions because it’s seeing the most traction there.

On the product side, the company has been gradually adding machine learning, predictive and other AI features and it will continue to do that as part of a “long-term vision for marrying computer and human intelligence to run entire companies.”

“Our role is to help leaders understand where their attention can be most useful and what to be focused on,” Moskovitz, pictured right with co-founder Justin Rosenstein, said to me in an interview earlier this month when describing the company’s AI push.

The funding caps off an active year for Asana.

In addition to raising $75 million in January, it announced 50,000 paying organizations and “millions” of free users in September. It also introduced new products and features, such as a paid tier, Asana for Business, for larger organizations managing multiple projects; Timelines for drilling into sequential tasks and milestones; and its first steps into AI, services that start to anticipate what users need to see first and prioritise, based on previous behaviour, which team the user is on, and so on:

Asana has been close to profitability this year, although it doesn’t look like it has quite reached that point yet. Moskovitz told me that in fact, it has held on to most of its previous funding (that’s before embarking on this next wave of ambitious expansions, though).

“We have so much money in the bank that we have quite a lot of options [and are in a] strong position so choose what makes the most sense strategically,” he said. “We’ve been fortunate with investors. The prime thing is vision match: do they think about the long-term future in the same way we do? Do they have the same values and priorities? Generation nailed that on so many levels as a firm.”

How Asana fits into the mix with Slack, Box and others

Asana’s growth and mission both mirror trends in the wider world of enterprise IT and collaboration within it.

Slack, Microsoft Teams, Workplace from Facebook and other messaging and chat apps have transformed how coworkers communicate with each other, both within single offices and across wider geographies: they have replaced email, phone and other communication channels to some extent.

Meanwhile, the rise of cloud-based services like Dropbox, Box, Google Cloud, AWS and Microsoft’s Azure have transformed how people in organizations manage and ultimately collaborate on files: the rise of mobile and mobile working have increased the need for more flexible file management and access.

The third area that has been less covered is work management: as people continue to multitask on multiple projects – partly spurred by the rise in the other two collaboration categories – they need a platform that helps keep them organised and on top of all that work. This is where Asana sits.

“We think about collaboration as three markets,” Moskovitz said, “file collaboration, messaging, and work management. Each of these has a massive surface area and depth to them. We think it’s important that all companies have tools that they use from each of these big buckets.”

It is not the only one in that big bucket.

Asana alternatives include Airtable, Wrike, Trello and Basecamp. As we have pointed out before, that competitive pressure is another reason Asana is on the path to continue growing and making its service more sticky.

Indeed, just earlier this month Airtable raised $100 million at a $1.1 billion valuation. Airtable has a different approach – its platform can be used for more than project management – but it’s most definitely used to build templates precisely to track projects.

You might even argue that Airtable’s existing offering could present a type of product roadmap for what might be considered next for Asana.

For now, though, Asana is building up big customers for its existing services.

The product initially got its start when Moskovitz and Rosenstein, respectively a co-founder of and an early employee at Facebook, built something to help their coworkers at the social network manage their workloads. Now, it has a range of users that include a number of other tech firms, but also others.

London’s National Gallery, for example, uses Asana to plan and launch exhibitions and business projects; the supermarket chain Tesco uses it for digital campaigns; Sony Music uses it for marketing management and also to track a digitization project for its back music catalog; and Uber has managed some 600 city expansions through Asana to date.

“At Generation Investment Management, we’re grounded in the philosophy that through strategic investments in leading, mission-driven companies we can move towards a more sustainable future,” said Colin le Duc, co-founder and partner, Generation Investment Management, in a statement.

“We see Collaborative Work Management as a distinct and rapidly expanding segment, and Asana has the right product and team to lead the market. Through Dustin and the team, Asana is changing how businesses around the world collaborate, epitomizing what it means to deliver results with a mission-driven ethos.”

Nov
28
2018
--

Talking Drupal #188- Static Site Generation

In episode #188 we talk with Brian Perry about tools for generating static websites.  www.talkingdrupal.com/188

Topics

  • What does “static site generation” mean?
  • Why would someone want to create a static website?
  • Tools
    • Jekyll
    • Sculpin
    • Gatsby JS
  • With Drupal – Tome module
  • Build process
  • Dynamic and server side functionality
  • Recommendations to get started

Modules

Tome Module

Resources

Jekyll

Sculpin 

Gatsby JS

Netlify

Contenta CMS

  

Hosts

Stephen Cross – www.ParallaxInfoTech.com @stephencross

John Picozzi – www.oomphinc.com @johnpicozzi

Nic Laflin – www.nLighteneddevelopment.com @nicxvan

Brian Perry – www.bounteous.com @bricomedy
