Oct 31, 2018

Percona Server for MongoDB 3.6.8-2.0 Is Now Available

Percona Server for MongoDB

Percona announces the release of Percona Server for MongoDB 3.6.8-2.0 on October 31, 2018. Download the latest version from the Percona website or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.6 Community Edition. It supports MongoDB 3.6 protocols and drivers.

Percona Server for MongoDB extends Community Edition functionality by including the Percona Memory Engine storage engine, as well as several enterprise-grade features. It also includes the MongoRocks storage engine, which is now deprecated. Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release introduces data at rest encryption for the WiredTiger storage engine. Data at rest encryption for WiredTiger in Percona Server for MongoDB is compatible with the upstream implementation. In this release of Percona Server for MongoDB, this feature is of BETA quality and should not be used in a production environment.

Note that Percona Server for MongoDB 3.6.8-2.0 is based on MongoDB 3.6.8, which is distributed under the GNU AGPLv3 license. Subsequent releases of Percona Server for MongoDB will change its license to SSPL when we move to the SSPL codebase released by MongoDB. For more information, see Percona Statement on MongoDB Community Server License Change.

This release also contains a fix for bug PSMDB-238.

Known Issues

  • PSMDB-233: When starting Percona Server for MongoDB 3.6 with WiredTiger encryption options but using a different storage engine, the server starts normally and produces no warnings that these options are ignored (see the sketch after this list).
  • PSMDB-239: WiredTiger encryption is not disabled when using the Percona Memory Engine storage engine.
  • PSMDB-245: KeyDB’s WiredTiger logs are not properly rotated without restarting the server.
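
Because the encryption options are silently ignored when another storage engine is selected (PSMDB-233 above), it is worth verifying which engine the server is actually running before relying on the BETA encryption feature. Here is a minimal sketch, assuming a local mongod and the pymongo driver; the connection URI is a placeholder.

```python
# Minimal sketch: confirm the active storage engine before relying on
# WiredTiger data at rest encryption (see PSMDB-233 above).
# Assumes a local mongod and the pymongo driver; the URI is a placeholder.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# serverStatus reports the storage engine the server was actually started with
status = client.admin.command("serverStatus")
engine = status["storageEngine"]["name"]

if engine != "wiredTiger":
    # Any WiredTiger encryption options passed at startup are silently ignored here
    print("Warning: storage engine is %s; WiredTiger encryption is not in effect" % engine)
else:
    print("Storage engine is wiredTiger")
```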

The Percona Server for MongoDB 3.6.8-2.0 release notes are available in the official documentation.

Oct 30, 2018

Percona Server for MySQL 8.0 Delivers Increased Reliability, Performance and Security

Percona Server for MySQL

Percona released a Release Candidate (RC) version of Percona Server for MySQL 8.0, the company’s free, enhanced, drop-in replacement for MySQL Community Edition. Percona Server for MySQL 8.0 includes all the features of MySQL Community Edition 8.0, along with enterprise-class features from Percona that make it ideal for enterprise production environments. The latest release offers increased reliability, performance and security.

Percona Server for MySQL 8.0 General Availability (GA) will be available later this year. You can learn how to install the release candidate software here. Please note that this release candidate is not intended nor recommended for production environments.

MySQL databases remain a pillar of enterprise data centers. But as the amount of data collected continues to soar, and as the types of databases deployed for specific applications continue to expand, organizations require a powerful and cost-effective MySQL solution for their production environments. Percona meets this need with its mature, proven open source alternative to MySQL Community Edition. Percona also backs MySQL 8.0 with the support and consulting services enterprises need to achieve optimal performance and maximize the value they obtain from the software – all with lower cost and complexity.

With more than 4,550,000 downloads, Percona Server for MySQL offers self-tuning algorithms and support for extremely high-performance hardware, delivering excellent performance and reliability for production environments. Percona Server for MySQL is trusted by thousands of enterprises to provide better performance. The following capabilities are unique to Percona Server for MySQL 8.0:

  • Greater scalability and availability, enhanced backups, and increased visibility to improve performance, reliability and usability
  • Parallel doublewrite functionality for greatly improved write performance, resulting in increased speed
  • Additional write-optimized storage engine, MyRocks, which takes advantage of modern hardware and database technology to reduce storage requirements, maintenance costs, and increase ROI in both on-premises and cloud-based applications, delivered with a MySQL-compatible interface.
  • Enhanced encryption functionality – including integration with Hashicorp Vault to simplify the management of encryption keys – for increased security
  • Advanced PAM-based authentication, audit logging, and threadpool scalability – enterprise-grade features available in Percona Server for MySQL without a commercial license (a sketch of enabling the audit log plugin follows this list)
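
As an illustration of how little ceremony these features need, here is a minimal sketch of enabling the audit log plugin from Python. It assumes the mysql-connector-python driver, a privileged account, and that the plugin library ships as audit_log.so, as it does in Percona Server for MySQL; connection details are placeholders.

```python
# Minimal sketch: enable Percona Server's audit log plugin and inspect its settings.
# Assumes mysql-connector-python and a privileged account; host/user/password
# are placeholders, and audit_log.so is the library shipped with Percona Server.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = conn.cursor()

# Load the audit log plugin; this statement errors if the plugin is already installed
cur.execute("INSTALL PLUGIN audit_log SONAME 'audit_log.so'")

# Inspect the plugin's configuration variables
cur.execute("SHOW GLOBAL VARIABLES LIKE 'audit_log%'")
for name, value in cur.fetchall():
    print(name, value)

conn.close()
```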

Percona Server for MySQL 8.0 also contains all the new features introduced in MySQL Community Edition 8.0, including:

  • Greater Reliability – A new transactional data dictionary makes recovery from failure easier, providing users with a higher level of comfort that data will not be lost. The transactional data dictionary is now crash-safe and centralized. In addition, support for atomic Data Definition Language (DDL) statements ensures that all operations associated with a DDL transaction are committed or rejected as a unit.
  • Enhanced Performance – New functions and expressions (along with Percona’s parallel doublewrite buffer) improve overall performance, allowing users to see their data more quickly. Enhanced JSON functionality improves document storage and query capabilities, and the addition of Window functions provides greater flexibility for aggregation queries. Common Table Expressions (CTE) enable improved query syntax for complex queries; a short sketch combining a CTE with a window function follows this list.
  • Increased Security – The ability to collect a typical series of permission grant statements for a user into a defined role and then apply that role to a user in MySQL makes the database environment more secure from outside attack by allowing administrators to better manage access permissions for authorized users. SQL Roles also enable administrators to more easily and efficiently determine a set of privileges for multiple associated users, saving time and reducing errors.
  • Expanded Queries – Percona Server for MySQL 8.0 provides support for spatial data types and indexes, as well as for Spatial Reference System (SRS), the industry-standard method for geospatial lookup.
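
To make the query-language additions concrete, here is a small sketch that combines a common table expression with a window function. It assumes the mysql-connector-python driver and a MySQL 8.0 or Percona Server for MySQL 8.0 instance; the connection details and the orders table are illustrative placeholders.

```python
# Minimal sketch: a CTE plus a window function, two of the MySQL 8.0 additions above.
# Assumes mysql-connector-python; connection details and the orders table are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1", user="app", password="secret", database="shop"
)
cur = conn.cursor()

query = """
WITH monthly AS (
    SELECT customer_id,
           DATE_FORMAT(ordered_at, '%Y-%m') AS month,
           SUM(amount) AS total
    FROM orders
    GROUP BY customer_id, month
)
SELECT customer_id, month, total,
       RANK() OVER (PARTITION BY month ORDER BY total DESC) AS month_rank
FROM monthly
"""
cur.execute(query)
for customer_id, month, total, month_rank in cur.fetchall():
    print(customer_id, month, total, month_rank)

conn.close()
```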

Learn more about the Percona Server for MySQL 8.0 RC release here.

Oct 30, 2018

Percona Server for MongoDB 4.0 Is an Ideal Solution for Even More Production Use Cases

Percona Server for MongoDB

Percona announces the planned release of Percona Server for MongoDB 4.0, the latest version of the company’s free, enhanced, drop-in replacement for MongoDB Community Edition. Percona Server for MongoDB 4.0 includes all the features of MongoDB Community Edition 4.0, along with enterprise-class features from Percona that make it ideal for enterprise production environments.

Percona Server for MongoDB 4.0 will be generally available later this year.

Organizations are increasingly leveraging multiple open source databases to meet their diverse application requirements and improve the customer experience. Percona supports these organizations with a range of enhanced, production-ready open source databases, enterprise-grade support, and consulting services.

With more than 385,000 downloads, Percona Server for MongoDB provides all the cost and agility benefits of free, proven open source software, along with greater security, reliability and flexibility. With Percona Server for MongoDB, an increasing number of organizations can confidently run the document-based NoSQL MongoDB database to support their product catalogs, online shopping carts, Internet of Things (IoT) applications, mobile/social apps and more. Percona also backs Percona Server for MongoDB 4.0 and MongoDB Community Edition with the support and consulting services enterprises need to achieve optimal performance and maximize the value they obtain from the software – all with lower cost and complexity.

Percona Kubernetes Operator for MongoDB enables easier deployment of Percona Server for MongoDB environments – including standalone, replica set, or sharded cluster – inside Kubernetes and OpenShift platforms, without the need to move to an Enterprise Server. Management and backup are provided through the Operator working with Percona Monitoring and Management and hot backup.

Percona Server for MongoDB 4.0 contains all of the features in MongoDB Community Edition 4.0, including important capabilities that greatly expand its use cases:

  • Support for multi-document ACID transactions – ensuring accurate updates to all of the documents involved in a transaction, and moving the complexity of achieving these updates from the application to the database. A short sketch follows this list.
  • Support for SCRAM-SHA-256 authentication – making the production environment more secure and less vulnerable to external attack.
  • New type conversion and string operators – providing greater flexibility when performing aggregations.
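
For the transactions bullet above, here is a minimal sketch of a multi-document transaction. It assumes a MongoDB 4.0 (or Percona Server for MongoDB 4.0) replica set and pymongo 3.7 or later; the database, collection, and document IDs are placeholders.

```python
# Minimal sketch: a multi-document ACID transaction (MongoDB 4.0, pymongo 3.7+).
# Assumes a replica set; database/collection names and IDs are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
accounts = client.bank.accounts  # must already exist: 4.0 cannot create collections in a transaction

with client.start_session() as session:
    with session.start_transaction():
        # Both updates commit together, or neither does
        accounts.update_one({"_id": "alice"}, {"$inc": {"balance": -100}}, session=session)
        accounts.update_one({"_id": "bob"}, {"$inc": {"balance": 100}}, session=session)
```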

Percona Server for MongoDB 4.0 also offers essential enterprise features for free, including:

  • Encrypted WiredTiger storage engine (“data at rest encryption”) with local key management. Integration with key management systems will be available in future releases of Percona Server for MongoDB.
  • SASL Authentication plugin for enabling authentication through OpenLDAP or Active Directory.
  • Open-source auditing for visibility into user and process actions in the database, with the ability to redact sensitive information (such as usernames and IP addresses) from log files.
  • Hot backups to protect against data loss in the event of a crash or disaster – backup activity does not impact performance. A sketch of triggering one follows this list.
  • Percona Memory Engine, a 100 percent open source in-memory storage engine designed for Percona Server for MongoDB, which is ideal for in-memory computing and other applications demanding very low-latency workloads.
  • Integration with Percona Toolkit and Percona Monitoring and Management for query performance analytics and troubleshooting.
  • Enhanced query profiling.
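
As a small illustration of the hot backup feature in the list above, here is a sketch that triggers a backup from Python. It assumes the pymongo driver and uses the createBackup admin command that Percona Server for MongoDB exposes for hot backup; the backup directory is a placeholder, must be writable by the mongod process, and the exact options should be confirmed against the documentation for your version.

```python
# Minimal sketch: trigger a hot backup via Percona Server for MongoDB's
# createBackup admin command. Assumes pymongo; the backup path is a placeholder
# and must exist and be writable by the mongod process.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
result = client.admin.command("createBackup", backupDir="/data/backups/2018-10-30")
print(result)  # {'ok': 1.0} on success
```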

Oct 30, 2018

Percona XtraBackup 8.0-3-rc1 Is Available

Percona XtraBackup 8.0

Percona is glad to announce the release candidate of Percona XtraBackup 8.0-3-rc1 on October 31, 2018. You can download it from our download site and from the apt and yum repositories.

This is a Release Candidate quality release and it is not intended for production. If you want a high quality, Generally Available release, use the current stable version (the most recent stable version at the time of writing is 2.4.12 in the 2.4 series).

This release supports backing up and restoring MySQL 8.0 and Percona Server for MySQL 8.0.

Things to Note

  • innobackupex was previously deprecated and has been removed
  • Due to the new MySQL redo log and data dictionary formats, the Percona XtraBackup 8.0.x versions will only be compatible with MySQL 8.0.x and the upcoming Percona Server for MySQL 8.0.x
  • For experimental migrations from earlier database server versions, you will need to back up and restore using XtraBackup 2.4 and then run mysql_upgrade from MySQL 8.0.x (a rough sketch follows this list)
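
A rough sketch of that migration path, assuming XtraBackup 2.4 and the stock command line tools are on the PATH and that connection settings come from my.cnf; all paths are placeholders, and the restore step requires the target server to be stopped with an empty data directory.

```python
# Rough sketch of the experimental migration path: back up the older server
# with XtraBackup 2.4, prepare and restore the backup, then run mysql_upgrade
# once MySQL 8.0 is started on the restored data directory.
# Paths are placeholders; connection settings are read from my.cnf.
import subprocess

TARGET_DIR = "/backups/pre-8.0"

# 1. Take the backup with XtraBackup 2.4
subprocess.run(["xtrabackup", "--backup", "--target-dir=" + TARGET_DIR], check=True)

# 2. Prepare it so the data files are consistent
subprocess.run(["xtrabackup", "--prepare", "--target-dir=" + TARGET_DIR], check=True)

# 3. Restore into an empty datadir while the server is stopped
subprocess.run(["xtrabackup", "--copy-back", "--target-dir=" + TARGET_DIR,
                "--datadir=/var/lib/mysql"], check=True)

# 4. Start MySQL 8.0 on the restored datadir, then upgrade the system tables
subprocess.run(["mysql_upgrade", "-u", "root", "-p"], check=True)  # prompts for the password
```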

Installation

As this is a release candidate, installation is performed by enabling the testing repository and installing the software via your package manager. For Debian based distributions see apt installation instructions, for RPM based distributions see yum installation instructions. Note that in both cases after installing the current percona-release package, you’ll need to enable the testing repository in order to install Percona XtraBackup 8.0.3-rc1.

Improvements

  • PXB-1655: The --lock-ddl option is supported when backing up MySQL 8.

Bugs Fixed

  • PXB-1678:  Incremental backup prepare run with the --apply-log-only option could roll back uncommitted transactions.
  • PXB-1672:  The MTS slave without GTID could be backed up when the --safe-slave-backup option was applied.
Oct 30, 2018

Release Candidate for Percona Server 8.0.12-2rc1 Is Available

Percona Server for MySQL

Following the alpha release announced earlier, Percona announces the release candidate of Percona Server for MySQL 8.0.12-2rc1 on October 31, 2018. Download the latest version from the Percona website or from the Percona Software Repositories.

This release is based on MySQL 8.0.12 and includes all the bug fixes in it. It is a Release Candidate quality release and it is not intended for production. If you want a high quality, Generally Available release, use the current Stable version (the most recent stable release at the time of writing in the 5.7 series is 5.7.23-23).

Percona provides completely open-source and free software.

Installation

As this is a release candidate, installation is performed by enabling the testing repository and installing the software via your package manager.  For Debian based distributions see apt installation instructions, for RPM based distributions see yum installation instructions.  Note that in both cases after installing the current percona-release package, you’ll need to enable the testing repository in order to install Percona Server for MySQL 8.0.12-2rc1.  For manual installations you can download from the testing repository directly through our website.

New Features

  • #4550: Native Partitioning support for MyRocks storage engine
  • #3911: Native Partitioning support for TokuDB storage engine
  • #4946: Add an option to prevent implicit creation of column family in MyRocks
  • #4839: Better default configuration for MyRocks and TokuDB
  • InnoDB changed page tracking has been rewritten to account for redo logging changes in MySQL 8.0.11.  This fixes fast incremental backups for PS 8.0
  • #4434: The TokuDB ROW_FORMAT clause has been removed; compression may be set by using the session variable tokudb_row_format instead (see the sketch after this list).
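
For the #4434 change, here is a minimal sketch of setting TokuDB compression through the session variable rather than the removed ROW_FORMAT clause. It assumes mysql-connector-python and a server with the TokuDB storage engine enabled; connection details and the table are placeholders.

```python
# Minimal sketch for #4434: choose TokuDB compression via the
# tokudb_row_format session variable before creating the table.
# Assumes mysql-connector-python and a server with TokuDB enabled;
# connection details and the table definition are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1", user="root", password="secret", database="test"
)
cur = conn.cursor()

# Compression for TokuDB tables created in this session
cur.execute("SET SESSION tokudb_row_format = 'tokudb_zlib'")

cur.execute("""
CREATE TABLE events (
    id BIGINT PRIMARY KEY,
    payload TEXT
) ENGINE = TokuDB
""")

conn.close()
```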

Improvements

  • Several packaging changes to bring Percona packages more in line with upstream, including split repositories. As you’ll note from our instructions above we now ship a tool with our release packages to help manage this.

Bugs Fixed

  • #4785: Setting version_suffix to NULL could lead to handle_fatal_signal (sig=11) in Sys_var_version::global_value_ptr
  • #4788: Setting log_slow_verbosity and enabling the slow_query_log could lead to a server crash
  • #4947: Any index comment generated a new column family in MyRocks
  • #1107: Binlog could be corrupted when tmpdir got full
  • #1549: Server side prepared statements lead to a potential off-by-second timestamp on slaves
  • #4937: rocksdb_update_cf_options was useless when specified in my.cnf or on the command line.
  • #4705: The server could crash on snapshot size check in RocksDB
  • #4791: SQL injection on slave due to non-quoting in binlogged ROLLBACK TO SAVEPOINT
  • #4953: rocksdb.truncate_table3 was unstable

Other bugs fixed:

  • #4811: 5.7 Merge and fixup for old DB-937 introduces possible regression
  • #4885: Using ALTER … ROW_FORMAT=TOKUDB_QUICKLZ leads to InnoDB: Assertion failure: ha_innodb.cc:12198:m_form->s->row_type == m_create_info->row_type
  • Numerous testsuite failures/crashes

Upcoming Features

Oct 30, 2018

The hybrid cloud market just got a heck of a lot more compelling

Let’s start with a basic premise that the vast majority of the world’s workloads remain in private data centers. Cloud infrastructure vendors are working hard to shift those workloads, but technology always moves a lot slower than we think. That is the lens through which many cloud companies operate.

The idea that you operate both on prem and in the cloud with multiple vendors is the whole premise behind the hybrid cloud. It’s where companies like Microsoft, IBM, Dell and Oracle are placing their bets. These dyed-in-the-wool enterprise companies see their large customers making a slower slog to the cloud than you would imagine, and they want to provide them with the tools and technologies to manage across both worlds, while helping them shift when they are ready.

Cloud-native computing developed in part to provide a single management fabric across on prem and cloud, freeing IT from having two sets of tools and trying somehow to bridge the gap between the two worlds.

What every cloud vendor wants

Red Hat — you know, that company that was sold to IBM for $34 billion this week — has operated in this world. While most people think of the company as the one responsible for bringing Linux to the enterprise, over the last several years, it has been helping customers manage this transition and build applications that could live partly on prem and partly in the cloud.

As an example, it has built OpenShift, its version of Kubernetes. As CEO Jim Whitehurst told me last year, “Our hottest product is OpenShift. People talk about containers and they forget it’s a feature of Linux,” he said. That is an operating system that Red Hat knows a thing or two about.

With Red Hat in the fold, IBM can contend that it offers a genuinely open approach: customers can build modern applications on top of open source tools and run them on IBM’s cloud or on any of its competitors’ clouds, a real hybrid approach.

Microsoft has a huge advantage here, of course, because it has a massive presence in the enterprise already. Many companies out there could be described as Microsoft shops, and for those companies moving from on prem Microsoft to cloud Microsoft represents a less daunting challenge than starting from scratch.

Oracle brings similar value with its core database products. Companies using Oracle databases — just about everyone — might find it easier to move that valuable data to Oracle’s cloud, although the numbers don’t suggest that’s necessarily happening (and Oracle has stopped breaking out its cloud revenue).

Dell, which spent $67 billion for EMC, making the Red Hat purchase pale by comparison, has been trying to pull together a hybrid solution by combining VMware, Pivotal and Dell/EMC hardware.

Cloud vendors reporting

You could argue that hybrid is a temporary state, that at some point, the vast majority of workloads will eventually be running in the cloud and the hybrid business as we know it today will continually shrink over time. We are certainly seeing cloud infrastructure revenue skyrocketing with no signs of slowing down as more workloads move to the cloud.

In their latest earnings reports, the vendors that break out such numbers, at least the successful ones, reported growth in their cloud businesses. It’s important to note that these companies define cloud revenue in different ways, but you can see the trend is definitely up:

  • AWS reported revenue of $6.7 billion for the quarter, up from $4.58 billion the previous year.
  • Microsoft Intelligent Cloud, which incorporates things like Azure and server products and enterprise services, was at $8.6 billion, up from $6.9 billion.
  • IBM Technology Services and Cloud Platforms, which includes infrastructure services, technical support services and integration software, reported revenue of $8.6 billion, up from $8.5 billion the previous year.
  • Others like Oracle and Google didn’t break out their cloud revenue.

Show me the money

All of this is to say, there is a lot of money on the table here and companies are moving more workloads at an increasingly rapid pace.  You might also have noticed that IBM’s growth is flat compared to the others. Yesterday in a call with analysts and press, IBM CEO Ginni Rometty projected that revenue for the hybrid cloud (however you define that) could reach $1 trillion by 2020. Whether that number is exaggerated or not, there is clearly a significant amount of business here, and IBM might see it as a way out of its revenue problems, especially if they can leverage consulting/services along with it.

There is probably so much business that there is room for more than one winner, but if you asked before Sunday if IBM had a shot in this mix against its formidable competitors, especially those born in the cloud like AWS and Google, most probably wouldn’t have given them much chance.

When Red Hat eventually joins forces with IBM, it at least gives their sales teams a compelling argument, one that could get them into the conversation — and that is probably why they were willing to spend so much money to get it. It puts them back in the game, and after years of struggling, that is something. And in the process, it has stirred up the hybrid cloud market in a way we didn’t see coming last week before this deal.

Oct 30, 2018

Cockroach Labs launches CockroachDB as managed service

Cockroach Labs’ open source SQL database, CockroachDB, has been making inroads since it launched last year, but as any open source technology matures, in order to move deeper into markets it has to move beyond technical early adopters to a more generalized audience. To help achieve that, the company announced a new CockroachDB managed service today.

The service has been designed to be cloud-agnostic, and for starters it’s going to be available on Amazon Web Services and Google Cloud Platform. Cockroach, which launched in 2015, has always positioned itself as a modern cloud alternative to the likes of Oracle or even Amazon’s Aurora database.

As company co-founder and CEO Spencer Kimball told me in an interview in May, those companies involve too much vendor lock-in for his taste. His company launched as an open alternative to all of that. “You can migrate a Cockroach cluster from one cloud to another with no down time,” Kimball told TechCrunch in May.

He believes having that kind of flexibility is a huge advantage over what other vendors are offering, and today’s announcement carries that a step further. Instead of doing all the heavy lifting of setting up and managing a database and the related infrastructure, Cockroach is now offering CockroachDB as a service to handle all of that for you.

Kimball certainly recognizes that by offering his company’s product in this format, it will help grow his market. “We’ve been seeing significant migration activity away from Oracle, AWS Aurora, and Cassandra, and we’re now able to get our customers to market faster with Managed CockroachDB,” Kimball said in a statement.

The database itself offers the advantage of being ultra-resilient, meaning it stays up and running under most circumstances, and that’s a huge value proposition for any database product. It achieves uptime through replication, so if one replica goes down, another can take over.

As an open source tool, it has been making money up until now by offering an enterprise version, which includes backup, support and other premium pieces. With today’s announcement, the company can get a more direct revenue stream from customers subscribing to the database service.

A year ago, the company announced version 1.0 of CockroachDB and $27 million in Series B financing, which was led by Redpoint with participation from Benchmark, GV, Index Ventures and FirstMark. They’ve obviously been putting that money to good use developing this new managed service.

Oct 30, 2018

20+ MongoDB Alternatives You Should Know About

alternatives to MongoDB

As MongoDB® has changed its license from AGPL to SSPL, many are concerned by this change, and by how sudden it has been. Will SSPL be protective enough for MongoDB, or will the next change be to go to an altogether proprietary license? According to our poll, many are going to explore MongoDB alternatives. This blog post provides a brief outline of technologies to consider.

Open Source Data Stores

  • PostgreSQL is the darling of the open source database community. Especially if your concern is the license, PostgreSQL’s permissive license is hard to beat. PostgreSQL has powerful JSON support, and there are many success stories of migrating from MongoDB to PostgreSQL (a small sketch follows this list).
  • Citus: While PostgreSQL is a powerful database, and you can store terabytes of data on a single cluster, at larger scale you will need sharding. If so, consider the Citus PostgreSQL extension, or the DBaaS offering from the same guys.
  • TimescaleDB: If, on the other hand, you are storing time series data in MongoDB, then TimescaleDB might be a good fit.
  • ToroDB: If you would love to use PostgreSQL but need MongoDB wire protocol compatibility, take a look at ToroDB. While it can’t serve as a full drop-in replacement for the MongoDB server just yet, the developer told me that with some work it is possible.
  • CockroachDB While not based on the PostgreSQL codebase, CockroachDB is PostgreSQL wire protocol compatible and it is natively distributed, so you will not need to do manual sharding.
  • MySQL® is another feasible replacement. MySQL 5.7 and MySQL 8 have great support for JSON, and it continues to get better with every maintenance release. You can also consider MySQL Cluster for medium-sized sharded environments, as well as MariaDB and Percona Server for MySQL.
  • MySQL DocStore is a CRUD interface for JSON data stored in MySQL, and while it is not the same as MongoDB’s query language, it is much easier to transition to compared to SQL.
  • Vitess: Would you love to use MySQL but can’t stand manual sharding? Vitess is a powerful sharding engine for MySQL which will allow you to grow to great scale while using proven MySQL as a backend.
  • TiDB is another take on MySQL compatible sharding. This NewSQL engine is MySQL wire protocol compatible but underneath is a distributed database designed from the ground up.
  • CouchDB is a document database which speaks JSON natively.
  • Couchbase is another database engine to consider. While being a document-based database, Couchbase offers the N1QL language, which has an SQL look and feel.
  • ArangoDB is a multi-model database which can be used as a document store.
  • Elastic: While not a perfect choice for every MongoDB workload, for workloads where document data is searched and analyzed, ElasticSearch can be a great alternative.
  • Redis is another contender for some MongoDB workloads. Often used as a cache in front of MongoDB, it can also be used as a JSON store through extensions. While such extensions from RedisLabs are no longer open source, the GoodForm project provides open source alternatives.
  • ClickHouse may be a great contender for moving analytical workloads from MongoDB. Much faster, and with JSON support and Nested Data Structures, it can be a great choice for storing and analyzing document data.
  • Cassandra does not have a document data model, but it has proven to be extremely successful for building scalable distributed clusters. If this is your main use case for MongoDB, then you should consider Cassandra.
  • ScyllaDB is a protocol compatible Cassandra alternative which claims to offer much higher per node performance.
  • HBase is another option worth considering, especially if you already have a Hadoop/HDFS infrastructure.
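
To make the PostgreSQL option at the top of this list a little more concrete, here is a small sketch of storing and querying documents in a jsonb column. It assumes the psycopg2 driver; the connection string and the products table are illustrative placeholders.

```python
# Minimal sketch: document storage in PostgreSQL with a jsonb column.
# Assumes psycopg2; connection details and the products table are placeholders.
import json
import psycopg2

conn = psycopg2.connect("dbname=shop user=app password=secret host=127.0.0.1")
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS products (id serial PRIMARY KEY, doc jsonb)")
cur.execute(
    "INSERT INTO products (doc) VALUES (%s)",
    (json.dumps({"name": "widget", "price": 9.99, "tags": ["new", "sale"]}),),
)

# The @> containment operator can use a GIN index on the jsonb column
cur.execute(
    "SELECT doc ->> 'name' FROM products WHERE doc @> %s",
    (json.dumps({"tags": ["sale"]}),),
)
print(cur.fetchall())

conn.commit()
conn.close()
```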

Public Cloud Document Stores

Most major cloud providers offer some variant of a native document database for you to consider.

  • Microsoft Azure Cosmos DB is an interesting engine that provides multiple NoSQL APIs, including for MongoDB and Cassandra.
  • Amazon DynamoDB supports key value and document based APIs. While not offering MongoDB compatibility, DynamoDB has been around for a long time, and is the most battle tested of the public cloud database offerings.
  • Google Cloud DataStore  – Google Cloud offers a number of data storage options for you to consider, and Cloud DataStore offers a data model and query language that is the most similar to MongoDB.

If you’re not ready for a major migration effort, there is one more solution for you – Percona Server for MongoDB.  Based on MongoDB Community, and enhanced by Percona with Enterprise Features, Percona Server for MongoDB offers 100% compatibility. As we wrote in a previous post, we commit to shipping a supported AGPL version until the situation around SSPL is clearly resolved.

Want help on deciding what is the best option for you, or with migration heavy lifting? Percona Professional Services can help!

Have an idea for another feasible MongoDB alternative? Please comment, and I will consider adding it to the list!


Image by JOSHUA COLEMAN on Unsplash

Oct 30, 2018

PostgreSQL locking, part 3: lightweight locks

LWLocks lightweight locks postgres

PostgreSQL lightweight locks, or LWLocks, control memory access. PostgreSQL uses a multi-process architecture and should allow only consistent reads and writes to shared memory structures. LWLocks have two levels of locking: shared and exclusive. It’s also possible to release all acquired LWLocks to simplify clean up. Other databases often call primitives similar to LWLocks “latches”. Because LWLocks are an implementation detail, application developers shouldn’t pay much attention to this kind of locking.

This is the third and final part of a series on PostgreSQL locking, related to latches protecting internal database structures. Here are the previous parts: Row-level locks and table-level locks.

Instrumentation

Starting with PostgreSQL 9.6, LWLock activity can be investigated through the pg_stat_activity system view. This can be useful under high CPU utilization. There are also system settings that help reduce contention on specific lightweight locks.
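
Here is a minimal sketch of that kind of check, assuming the psycopg2 driver and PostgreSQL 9.6 or later. Note that 9.6 reports these waits under the LWLockNamed and LWLockTranche wait event types, while PostgreSQL 10 and later use a single LWLock type; the connection string is a placeholder.

```python
# Minimal sketch: sample LWLock wait events from pg_stat_activity.
# Assumes psycopg2 and PostgreSQL 9.6+; 9.6 uses LWLockNamed/LWLockTranche,
# later versions a single LWLock wait_event_type. Connection details are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres host=127.0.0.1")
cur = conn.cursor()

cur.execute("""
    SELECT wait_event_type, wait_event, count(*)
    FROM pg_stat_activity
    WHERE wait_event_type IN ('LWLock', 'LWLockNamed', 'LWLockTranche')
    GROUP BY wait_event_type, wait_event
    ORDER BY count(*) DESC
""")
for event_type, event, cnt in cur.fetchall():
    print(event_type, event, cnt)

conn.close()
```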

Before PostgreSQL 9.5, the LWLock implementation used spinlocks and was a bottleneck. This was fixed in 9.5 with an atomic state variable.

Potential heavy contention places

  • WALInsertLock: protects WAL buffers. You can increase the number of WAL buffers (wal_buffers) to get a slight improvement. Incidentally, synchronous_commit=off increases pressure on the lock even more, but it’s not a bad thing. full_page_writes=off reduces contention, but it’s generally not recommended.
  • WALWriteLock: acquired by PostgreSQL processes while WAL records are flushed to disk or during a WAL segment switch. synchronous_commit=off removes the wait for the disk flush, and full_page_writes=off reduces the amount of data to flush.
  • LockMgrLock: appears in top waits during a read-only workload. It latches relations regardless of their size. It’s not a single lock, but at least 16 partitions. Thus it’s important to use multiple tables during benchmarks and avoid the single-table anti-pattern in production.
  • ProcArrayLock: protects the ProcArray structure. Before PostgreSQL 9.0, every transaction acquired this lock exclusively before commit.
  • CLogControlLock: protects the CLogControl structure. If it shows at the top of pg_stat_activity, you should check the location of $PGDATA/pg_clog: it should be on a buffered file system.
  • SInvalidReadLock: protects the sinval array. Readers use a shared lock. SICleanupQueue, and other array-wide updates, require an exclusive lock. It shows at the top of pg_stat_activity when the shared buffer pool is under stress. Using a higher number of shared_buffers helps to reduce contention.
  • BufMappingLocks: protect regions of buffers. PostgreSQL uses 128 regions (16 before 9.5) of buffers to handle the whole buffer cache.

Spinlocks

The lowest level of locking is the spinlock, which is implemented with CPU-specific instructions. PostgreSQL tries to change an atomic variable value in a loop. If the value is changed from zero to one, the process has obtained the spinlock. If it’s not possible to get a spinlock immediately, the process will increase its wait delay exponentially. There is no monitoring of spinlocks, and it’s not possible to release all acquired spinlocks at once. Because of the single state change, a spinlock is also an exclusive lock. To simplify porting PostgreSQL to exotic CPU and OS variants, PostgreSQL can use OS semaphores for its spinlock implementation; of course, this is significantly slower than a port based on native CPU instructions.

Summary

  • Use pg_stat_activity to find which queries or LWLocks are causing lock waits
  • Use fresh branches of PostgreSQL, as developers have been working on performance improvements and trying to reduce locking contention on hot mutexes.

Locks Photo by Warren Sammut on Unsplash

Oct 29, 2018

Assessing IBM’s $34 billion Red Hat acquisition

As you look at the $34 billion IBM-Red Hat deal announced yesterday, if you follow the enterprise closely, it seems like a good move, at least on its face. It could be years before we understand the true value of it for IBM (or lack thereof, depending on how it ultimately goes). The question stands, then: is this a savvy move, a desperate one, or perhaps a bit of both? It turns out it depends on whom you ask.

For starters, there is the sheer amount of money involved, a 63 percent premium on Friday’s closing price of just under $117 a share. IBM spent $190 a share, but as Ray Wang, founder and chief analyst at Constellation Research said, Red Hat didn’t necessarily want to be sold, so IBM had to overpay to get their company.

Wang sees cloud, Linux and security as the big drivers on IBM’s part. “IBM is doubling down on the cloud, but they also are going for a grab in Linux for their largest and most important open source communities and some of the newer tech on Red Hat security,” he told TechCrunch. He acknowledges that it’s a huge premium for the stock, but he believes IBM needs the M&A action to drive down customer acquisition costs and drive up cross sell.

Photo: Ron Miller

IBM is placing a big bet here says Dharmesh Thakker, general partner at Battery Ventures, believing it to be worth 30x its current earnings in the next 12 months. “Needless to say, the hybrid cloud opportunity that we have been working on the last few years, is real and IBM/Cisco/HP/Dell all want a piece of this action going forward as the $300B in datacenter spend gets dislocated by public and hybrid cloud vendors,” Thakker explained in a statement.

He believes this deal could actually trigger a new set of mega mergers between the traditional tech vendors and cloud native, container and DevOps companies over the next few months.

IBM CEO Ginni Rometty was positively giddy at the prospects of a combined IBM-Red Hat in a call with analysts and press this morning, pointing out that only 20 percent of enterprise workloads have been moved to the cloud. She sees a big opportunity, one she projects to be worth $1 trillion by 2020. Keeping in mind you should take market projections with a grain of salt, this is undoubtedly a big market and one that Oracle and Microsoft have also targeted.

She said that Red Hat was a rare company indeed. “Red Hat on its own has been a high value company and has done a great job with strong growth, is highly profitable and generates cash. There are not many companies out there that look like that in this area,” Rometty said.

Slide: IBM

Dan Scholnick, general partner at Trinity Ventures, whose investments have included New Relic and Docker, was not terribly impressed with the deal, believing it smacked of desperation on IBM’s part.

“IBM is a declining business that somehow needs to become relevant in the cloud era. Red Hat is not the answer. Red Hat’s business centers around an operating system, which is a layer of the technology stack that has been completely commoditized by cloud. (If you use AWS, you can get Amazon’s OS for free, so why would you pay Red Hat?) Red Hat has NO story for cloud,” he claimed in a statement.

That might not be an entirely fair assessment. While Red Hat Enterprise Linux is a big part of the company’s revenue, it’s not the only piece. Over the last couple of years it has moved into Kubernetes and containerization and has grown the cloud native side of the business alongside RHEL.

In fact, Forrester analyst Dave Bartoletti sees the cloud native piece as being key here. “The combined company has a leading Kubernetes and container-based cloud-native development platform, and a much broader open source middleware and developer tools portfolio than either company separately. While any acquisition of this size will take time to play out, the combined company will be sure to reshape the open source and cloud platforms market for years to come,” he said.

Photo: IBM

Wang believes the deal could hinge on how long Red Hat CEO Jim Whitehurst, who has led the company for over a decade, stays with the unit. According to IBM, they will maintain the Red Hat brand and operate it as an independent entity inside Big Blue. “If Whitehurst doesn’t stick around for awhile, the deal could go south,” he said. But the company could dangle the CEO job when Rometty decides to leave as incentive to stay.

Regardless, Wall Street was not entirely happy with IBM’s move, with its stock down all day. Needless to say, the 63 percent premium IBM paid for the stock has driven Red Hat higher today.

The deal must pass shareholder muster, but given the premium IBM has offered, it’s hard to believe they would turn it down. In addition, since these companies operate across the world, they are subject to the global regulatory approval process. They won’t officially come together until the second half of next year at the soonest. That’s when we might begin to learn whether this was a brilliant or desperate move by IBM.
