Jan
31
2019
--

Google’s Cloud Firestore NoSQL database hits general availability

Google today announced that Cloud Firestore, its serverless NoSQL document database for mobile, web and IoT apps, is now generally available. In addition, Google is also introducing a few new features and bringing the service to 10 new regions.

With this launch, Google is giving developers the option to run their databases in a single region. During the beta, developers had to use multi-region instances, and, while that obviously has some advantages with regard to resilience, it’s also more expensive and not every app needs to run in multiple regions.

“Some people don’t need the added reliability and durability of a multi-region application,” Google product manager Dan McGrath told me. “So for them, having a more cost-effective regional instance is very attractive, as well as data locality and being able to place a Cloud Firestore database as close as possible to their user base.”

The new regional instance pricing is up to 50 percent cheaper than the current multi-region instance prices. Which option you pick does influence the SLA Google gives you, though. While regional instances are still replicated across multiple zones inside the region, all of the data stays within a limited geographic area. Hence, Google promises 99.999 percent availability for multi-region instances and 99.99 percent availability for regional instances.

And talking about regions, Cloud Firestore is now available in 10 new regions around the world. The service launched with a single location and added two more during the beta. With this, Firestore is now available in 13 locations (including the North America and Europe multi-region offerings). McGrath tells me Google is still planning the next phase of locations, but he stressed that the current set provides pretty good coverage across the globe.

Also new in this release is deeper integration with Stackdriver, the Google Cloud monitoring service, which can now monitor read, write and delete operations in near-real time. McGrath also noted that Google plans to add the ability to query documents across collections and increment database values without needing a transaction.

It’s worth noting that while Cloud Firestore falls under the Google Firebase brand, which typically focuses on mobile developers, Firestore offers all of the usual client-side libraries for Compute Engine or Kubernetes Engine applications, too.

“If you’re looking for a more traditional NoSQL document database, then Cloud Firestore gives you a great solution that has all the benefits of not needing to manage the database at all,” McGrath said. “And then, through the Firebase SDK, you can use it as a more comprehensive back-end as a service that takes care of things like authentication for you.”

One of the advantages of Firestore is that it has extensive offline support, which makes it ideal not only for mobile developers but also for IoT solutions. Maybe it’s no surprise, then, that Google is positioning it as a tool for both Google Cloud and Firebase users.

Jan
31
2019
--

A New PMM Dashboard to Monitor Memory Usage!

Dashboard to Monitor Memory Usage in Linux

While the PMM team works hard on our PMM 2.0 release, we have been working on a few things in the background which we’d like to show off! In particular, we have developed a new dashboard that displays metrics related to memory usage on Linux systems. The dashboard leverages information collected by node_exporter. The graphs take advantage of /proc filesystem files, specifically:

  • meminfo: Provides information about distribution and utilization of memory. This varies by architecture and compile options.
  • vmstat: Provides information about block IO and CPU activity in addition to memory.
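As a rough illustration of the raw data behind these graphs, here is a minimal sketch (our own example, not part of PMM or node_exporter) that parses /proc/meminfo-style output in Python. Note that the exact set of fields varies by kernel version and architecture, as mentioned above:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style lines into a dict of values in bytes."""
    result = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, rest = line.partition(":")
        parts = rest.split()
        if not parts:
            continue
        value = int(parts[0])
        # Most fields are reported in kB; HugePages counters have no unit.
        if len(parts) > 1 and parts[1] == "kB":
            value *= 1024
        result[key.strip()] = value
    return result

# Sample /proc/meminfo excerpt (values are illustrative only)
sample = """MemTotal:       16333852 kB
MemFree:         8542712 kB
MemAvailable:   12517900 kB
HugePages_Total:       0"""

info = parse_meminfo(sample)
print(info["MemTotal"])  # total RAM in bytes
```

In practice you would read the real file with `open("/proc/meminfo").read()`; node_exporter does essentially this parsing for you and exposes the results as Prometheus metrics.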

The information is split into five sections:

  1. Total Memory
  2. VMM (Virtual Memory Manager) Statistics
  3. Memory Statistics
  4. Number and Dynamic of Pages
  5. Pages per Zone

The dashboard will be included as part of the PMM 2.0 release. For you early adopters, you can get it from Grafana Labs’ dashboard repository and install it alongside your existing dashboards – it won’t overwrite anything!

Please note that in Grafana 5.4.2 and later the dashboard can be imported by ID (9692); in older Grafana versions it should be downloaded and imported manually.

Jan
29
2019
--

Percona Server for MySQL 5.6.43-84.3 Is Now Available

Percona is glad to announce the release of Percona Server for MySQL 5.6.43-84.3 on January 29, 2019 (downloads are available here and from the Percona Software Repositories).

This release merges changes of MySQL 5.6.43, including all the bug fixes in it. Percona Server for MySQL 5.6.43-84.3 is now the current GA release in the 5.6 series. All of Percona’s software is open-source and free.

Bugs Fixed

  • A sequence of LOCK TABLES FOR BACKUP and STOP SLAVE SQL_THREAD could block replication and prevent it from being restarted normally. Bug fixed #4758 (upstream #93649).
  • http was replaced with https for the bugs.percona.com link printed in server crash messages. Bug fixed #4855.
  • Wrong query results could be returned from semijoin subqueries with materialization-scan that allowed inner tables of different semijoin nests to interleave. Bug fixed #4907 (upstream bug #92809).
  • The audit logs could be corrupted due to an invalid audit log file size when audit_log_rotations was changed at runtime. Bug fixed #4950.
  • There was a typo in mysqld_safe.sh: trottling was replaced with throttling. Bug fixed #240. Thanks to Michael Coburn for the patch.

Other bugs fixed: #2477, #3535, #3568, #3672, #3673, #4989, #5100, #5118, #5163, #5268, #5270, #5271.

This release also contains fixes for the following CVE issues: CVE-2019-2534, CVE-2019-2529, CVE-2019-2482, CVE-2019-2455, CVE-2019-2503, CVE-2018-0734.

Find the release notes for Percona Server for MySQL 5.6.43-84.3 in our online documentation. Report bugs in the Jira bug tracker.


Jan
29
2019
--

Figma’s design and prototyping tool gets new enterprise collaboration features

Figma, the design and prototyping tool that aims to offer a web-based alternative to similar tools from the likes of Adobe, is launching a few new features today that will make the service easier to use to collaborate across teams in large organizations. Figma Organization, as the company calls this new feature set, is the company’s first enterprise-grade service that features the kind of controls and security tools that large companies expect. To develop and test these tools, the company partnered with companies like Rakuten, Square, Volvo and Uber, and introduced features like unified billing and audit reports for the admins and shared fonts, browsable teams and organization-wide design systems for the designers.

For designers, one of the most important new features here is probably organization-wide design systems. Figma already had tools to create design systems, of course, but this enterprise version now makes it easier for teams to share libraries and fonts with each other to ensure that the same styles are applied to products and services across a company.

Businesses can now also create as many teams as they would like and admins will get more controls over how files are shared and with whom they can be shared. That doesn’t seem like an especially interesting feature, but because many larger organizations work with customers outside of the company, it’s something that will make Figma more interesting to these large companies.

After working with Figma on these new tools, Uber, for example, moved its whole company over to the service, and 90 percent of its product design work now happens on the platform. “We needed a way to get people in the right place at the right time — in the right team with the right assets,” said Jeff Jura, staff product designer who focuses on Uber’s design systems. “Figma does that.”

Other new enterprise features that matter in this context are single sign-on support, activity logs for tracking activities across users, teams, projects and files, and draft ownership to ensure that all the files that have been created in an organization can be recovered after an employee leaves the company.

Figma still offers free and professional tiers (at $12/editor/month). Unsurprisingly, the new Organization tier is a bit more expensive and will cost $45/editor/month.

Jan
29
2019
--

SAP job cuts prove harsh realities of enterprise transformation

As traditional enterprise companies like IBM, Oracle and SAP try to transform into more modern cloud companies, they are finding that making that transition, while absolutely necessary, could require difficult adjustments along the way. Just this morning, SAP announced that it was restructuring in order to save between €750 million and €800 million (between approximately $856 million and $914 million).

While the company tried to put as positive a spin on the announcement as possible, it could involve up to 4,000 job cuts as SAP shifts into more modern technologies. “We are going to move our people and our focus to the areas where the new economy needs SAP the most: artificial intelligence, deep machine learning, IoT, blockchain and quantum computing,” CEO Bill McDermott told a post-earnings press conference.

If that sounds familiar, it should. It is precisely the areas on which IBM has been trying to concentrate its transformation over the last several years. IBM has struggled to make this change and has also framed workforce reduction as moving to modern skill sets. It’s worth pointing out that SAP’s financial picture has been more positive than IBM’s.

CFO Luka Mucic tried to stress this was not about cost-cutting, so much as ensuring the long-term health of the company, but did admit it did involve job cuts. These could include early retirement and other incentives to leave the company voluntarily. “We still expect that there will be a number probably slightly higher than what we saw in the 2015 program, where we had around 3,000 employees leave the company, where at the end of this process will leave SAP,” he said.

The company believes that in spite of these cuts, it will actually have more employees by this time next year than it has now, but they will be shifted to these new technology areas. “This is a growth company move, not a cost-cutting move; every dollar that we gain from a restructuring initiative will be invested back into headcount and more jobs,” McDermott said. SAP kept stressing that cloud revenue will reach $35 billion by 2023.

Holger Mueller, an analyst who watches enterprise companies like SAP for Constellation Research, says the company is doing what it has to do in terms of transformation. “SAP is in the midst of upgrading its product portfolio to the 21st century demands of its customer base,” Mueller told TechCrunch. He added that this is not easy to pull off, and it requires new skill sets to build, operate and sell the new technologies.

McDermott stressed that the company would be offering a generous severance package to any employee leaving the company as a result of today’s announcement.

Today’s announcement comes after the company made two multi-billion-dollar acquisitions to help in this transition in 2018, paying $8 billion for Qualtrics and $2.4 billion for CallidusCloud.

Jan
29
2019
--

Upcoming Webinar Thurs 1/31: Percona Server for MongoDB 4.0 Feature Walkthrough

Please join Vinodh Krishnaswamy as he presents his talk, Percona Server for MongoDB 4.0 Feature Walkthrough, on January 31st, 2019, at 6:00 AM PST (UTC-8) / 9:00 AM EST (UTC-5).

Register Now

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database. Moreover, it is a fully-compatible, drop-in replacement for MongoDB 4.0 Community Edition. It also supports MongoDB 4.0 protocols and drivers.

Percona Server for MongoDB extends the functionality of the MongoDB 4.0 Community Edition by including the Percona Memory Engine storage engine, encrypted WiredTiger storage engine, audit logging, SASL authentication, hot backups, and enhanced query profiling. Additionally, Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release includes all features of MongoDB 4.0 Community Edition. Most notable among these are:

– Multi-Document ACID transactions
– Type conversion through the new aggregation operators
– Enhancements to the Change Streams support
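To give a concrete flavor of the type-conversion feature, MongoDB 4.0 added aggregation operators such as $convert. The sketch below (with a made-up "price" field; the pipeline is only constructed as a Python document here, not run against a server) shows what such a stage might look like when built for a driver such as PyMongo:

```python
# An aggregation stage that converts a string "price" field to a double,
# falling back to 0 when the value is missing or fails to convert.
# The resulting document could be passed to collection.aggregate(pipeline).
convert_stage = {
    "$addFields": {
        "price_num": {
            "$convert": {
                "input": "$price",
                "to": "double",
                "onError": 0,
                "onNull": 0,
            }
        }
    }
}

# Follow the conversion with a filter on the now-numeric field.
pipeline = [convert_stage, {"$match": {"price_num": {"$gt": 10}}}]
print(len(pipeline))
```

Shorthand operators like $toInt and $toDecimal wrap the same $convert machinery without the onError/onNull fallbacks.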

In order to learn more, register for the Percona Server for MongoDB 4.0 Feature Walkthrough.

Jan
29
2019
--

Timescale announces $15M investment and new enterprise version of TimescaleDB

It’s a big day for Timescale, makers of the open-source time-series database, TimescaleDB. The company announced a $15 million investment and a new enterprise version of the product.

The investment is technically an extension of the $12.4 million Series A it raised last January, which it’s referring to as A1. Today’s round is led by Icon Ventures, with existing investors Benchmark, NEA and Two Sigma Ventures also participating. With today’s funding, the startup has raised $31 million.

Timescale makes a time-series database. That means it can ingest large amounts of data and measure how it changes over time. This comes in handy for a variety of use cases, from financial services to smart homes to self-driving cars — or any data-intensive activity you want to measure over time.

While there are a number of time-series database offerings on the market, Timescale co-founder and CEO Ajay Kulkarni says that what makes his company’s approach unique is that it uses SQL, one of the most popular languages in the world. Timescale wanted to take advantage of that penetration and build its product on top of Postgres, the popular open-source SQL database. This gave it an offering that is based on SQL and is highly scalable.

Timescale admittedly came late to the market in 2017, but by offering a unique approach and making it open source, it has been able to gain traction quickly. “Despite entering into what is a very crowded database market, we’ve seen quite a bit of community growth because of this message of SQL and scale for time series,” Kulkarni told TechCrunch.

In just over 22 months, the company has seen more than a million downloads and a range of users, from old guard companies like Charter, Comcast and Hexagon Mining to more modern companies like Nutanix and TransferWise.

With a strong base community in place, the company believes that it’s now time to commercialize its offering, and in addition to an open-source license, it’s introducing a commercial license. “Up until today, our main business model has been through support and deployment assistance. With this new release, we also will have enterprise features that are available with a commercial license,” Kulkarni explained.

The commercial version will offer a more sophisticated automation layer for larger companies with greater scale requirements. It will also provide better lifecycle management, so companies can get rid of older data or move it to cheaper long-term storage to reduce costs. It’s also offering the ability to reorder data in an automated fashion when that’s required, and, finally, it’s making it easier to turn the time series data into a series of data points for analytics purposes. The company also hinted that a managed cloud version is on the road map for later this year.

The new money should help Timescale continue fueling the growth and development of the product, especially as it builds out the commercial offering. Timescale, which was founded in 2015 in NYC, currently has 30 employees. With the new influx of cash, it expects to double that over the next year.

Jan
28
2019
--

Upcoming Webinar Wed 1/30: Percona XtraDB Cluster: Failure Scenarios and their Recovery

Please join Percona’s Senior Technical Manager, Alkin Tezuysal, and Percona XtraDB Cluster Lead, Krunal Bauskar, as they present their talk, Percona XtraDB Cluster: Failure Scenarios and their Recovery, on Wednesday, January 30th, 2019, at 8:00 AM PST (UTC-8) / 11:00 AM EST (UTC-5).

Register Now

Percona XtraDB Cluster (a.k.a. PXC) is an open source, multi-master, high availability MySQL clustering solution. PXC works with your MySQL / Percona Server-created database. Given its multi-master nature, PXC has multiple guards to protect the cluster from entering an inconsistent state. Most of these guards are configurable for your environment. However, if they are not configured properly, they could cause the cluster to stall, fail or error out.

In this session, we’ll discuss failure scenarios, including a MySQL cluster entering a non-primary state due to network partitioning. We’ll also discuss a cluster stall due to flow control, data inconsistency causing the shutdown of a node and common problems during the initial catch up – a.k.a State Snapshot Transfer (SST). Other issues include delays in the purging of a transaction, a blocking DDL causing the entire cluster to stall and a misconfigured cluster.

We will also go over how to solve some of these problems and how to safely recover from these failures.

To learn more, register for Percona XtraDB Cluster: Failure Scenarios and their Recovery.

Jan
28
2019
--

Percona Server for MongoDB Operator 0.2.0 Early Access Release Is Now Available

Percona announces the availability of the Percona Server for MongoDB Operator 0.2.0 early access release.

The Percona Server for MongoDB Operator simplifies the deployment and management of Percona Server for MongoDB in a Kubernetes or OpenShift environment. It extends the Kubernetes API with a new custom resource for deploying, configuring and managing the application through the whole life cycle.

Note: Percona-Lab is one of the open source GitHub repositories for unofficial scripts and tools created by Percona staff. These handy utilities can help you save time and effort.

Percona software builds located in the Percona-Lab repository are not officially released software and aren’t covered by Percona support or services agreements.

You can install the Percona Server for MongoDB Operator on Kubernetes or OpenShift. While the operator does not support all Percona Server for MongoDB features in this early access release, instructions on how to install and configure it are already available, along with the operator source code, in our GitHub repository.

The Percona Server for MongoDB Operator on Percona-Lab is an early access release. Percona doesn’t recommend it for production environments. 

New features

  • Percona Server for MongoDB backups are now supported and can be performed on a schedule or on demand.
  • Percona Server for MongoDB Operator now supports Replica Set Arbiter nodes to reduce disk IO and occupied space if needed.
  • Service per Pod operation mode implemented in this version allows assigning external or internal static IP addresses to the Replica Set nodes.

Improvements

  • CLOUD-76: Several Percona Server for MongoDB clusters can now share one namespace.

Fixed Bugs

  • CLOUD-97: The Replica Set watcher was not stopped automatically after the custom resource deletion.
  • CLOUD-46: When k8s-mongodb-initiator was running on an already-initialized Replica Set, it still attempted to initiate it.
  • CLOUD-45: The operator was temporarily removing MongoDB nodes from the Replica Set during a Pod update without the need.
  • CLOUD-51: It was not possible to set requests without limits in the custom resource configuration.
  • CLOUD-52: It was not possible to set limits without requests in the custom resource configuration.
  • CLOUD-89: The k8s-mongodb-initiator was exiting with exit code 1 instead of 0 if the Replica Set initiation had already happened, e.g., when a custom resource was deleted and recreated without deleting PVC data.
  • CLOUD-96: The operator was crashing after a re-create of the custom resource that already had old PVC data, which caused it to skip Replica Set init.

Percona Server for MongoDB is an enhanced, open source and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB Community Edition. It supports MongoDB protocols and drivers. Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine, as well as several enterprise-grade features. It requires no changes to MongoDB applications or code.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.
Jan
28
2019
--

Monitor and Optimize Slow Queries with PMM and EverSQL – Part 2

EverSQL is a platform that intelligently tunes your SQL queries by providing query optimization recommendations and feedback on missing indexes. This is the second post in our EverSQL series; if you missed our introductory post, take a look at that first and then come back to this article.

We’ll use the Stackoverflow data set again as we did in our first post.

Diving into query optimization

We’ll grab the worst performing query in the list from PMM and optimize it. This query builds a list of the 100 most recent posts that have a comment score greater than two, and involves joining two large tables – posts and comments. The original runtime of that query is above 20 minutes, and it causes high load on the server while running.

Assuming you have EverSQL’s Chrome extension installed, you’ll see a new button on the PMM Query Analytics page, allowing you to send the query and schema structure directly to EverSQL to retrieve indexing and query optimization recommendations.

After implementing EverSQL’s recommendations, the query’s execution duration significantly improved:

Optimization Internals

So what was the actual optimization in this specific case? And why did it work so well? Let’s look at the original query:

SELECT
    p.title
FROM
    so.posts p
    INNER JOIN so.comments c ON p.id = c.postid
WHERE
    c.score > 2
GROUP BY p.id
ORDER BY p.creationdate DESC
LIMIT 100;

The tables’ structure:

CREATE TABLE `posts` (
  `Id` int(11) NOT NULL,
  `CreationDate` datetime NOT NULL,
  ...
  PRIMARY KEY (`Id`),
  KEY `posts_idx_creationdate` (`CreationDate`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `comments` (
  `Id` int(11) NOT NULL,
  `CreationDate` datetime NOT NULL,
  `PostId` int(11) NOT NULL,
  `Score` int(11) DEFAULT NULL,
  ...
  PRIMARY KEY (`Id`),
  KEY `comments_idx_postid` (`PostId`),
  KEY `comments_idx_postid_score` (`PostId`,`Score`),
  KEY `comments_idx_score` (`Score`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

This query will return the post titles of the latest 100 Stack Overflow posts that had at least one popular comment (with a score higher than two). The posts table contains 39,646,923 records, while the comments table contains 64,510,258 records.

This is the execution plan MySQL (v5.7.20) chose:

One of the challenges with this query is that the GROUP BY and ORDER BY clauses contain different fields, which prevent MySQL from using an index for the ORDER BY. As MySQL’s documentation states:

“In some cases, MySQL cannot use indexes to resolve the ORDER BY, although it may still use indexes to find the rows that match the WHERE clause. Examples: … The query has different ORDER BY and GROUP BY expressions.”

Now let’s look into the optimized query:

SELECT
   p.title
FROM
   so.posts p
WHERE
   EXISTS( SELECT
           1
       FROM
           so.comments c
       WHERE
           p.id = c.postid AND c.score > 2)
ORDER BY p.creationdate DESC
LIMIT 100;

Since the comments table is joined in this query only to check for existence of matching records in the posts table, we can use an EXISTS subquery instead. This will allow us to avoid inflating the results (by using JOIN) and then deflating them (by using GROUP BY), which are costly operations.

Now that the GROUP BY is redundant and removed, the database can optionally choose to use an index for the ORDER BY clause.
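The equivalence of the two query shapes is easy to sanity-check on a toy data set. The sketch below is our own illustration (using Python’s built-in SQLite rather than MySQL, with a handful of made-up rows) verifying that the JOIN + GROUP BY form and the EXISTS form return the same posts:

```python
import sqlite3

# Build a tiny in-memory schema mirroring the posts/comments tables above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, creationdate TEXT);
CREATE TABLE comments (id INTEGER PRIMARY KEY, postid INTEGER, score INTEGER);
INSERT INTO posts VALUES (1, 'a', '2019-01-01'), (2, 'b', '2019-01-02'),
                         (3, 'c', '2019-01-03');
INSERT INTO comments VALUES (1, 1, 5), (2, 1, 3), (3, 2, 1), (4, 3, 4);
""")

# Original shape: join, then collapse duplicate posts with GROUP BY.
join_form = conn.execute("""
    SELECT p.title FROM posts p
    INNER JOIN comments c ON p.id = c.postid
    WHERE c.score > 2
    GROUP BY p.id
    ORDER BY p.creationdate DESC
    LIMIT 100
""").fetchall()

# Optimized shape: semi-join via EXISTS, no inflate/deflate step.
exists_form = conn.execute("""
    SELECT p.title FROM posts p
    WHERE EXISTS (SELECT 1 FROM comments c
                  WHERE p.id = c.postid AND c.score > 2)
    ORDER BY p.creationdate DESC
    LIMIT 100
""").fetchall()

print(join_form)  # posts 'c' and 'a' each have a comment with score > 2
assert join_form == exists_form
```

SQLite’s planner differs from MySQL’s, so this only demonstrates result equivalence, not the index-usage improvement itself.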

The new execution plan MySQL chooses is:

As mentioned above, this transformation reduced the query execution duration from ~20 minutes to 370ms.

We hope you enjoyed this post – please let us know your experiences using the integration between PMM Query Analytics and EverSQL!

Co-Author: Tomer Shay

Tomer Shay is the Founder of EverSQL. He loves being where the challenge is. Over the last 12 years, he has had the privilege of writing a lot of code and leading teams of developers, while focusing on databases and performance. He enjoys using technology to bring ideas into reality, help people and see them smile.
