This Week in Data With Colin Charles 51: Debates Emerging on the Relicensing of OSS

Colin Charles

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

There has been a lot of talk around licenses in open source software, and it has hit the database world in the past weeks. Redis Labs relicensed some AGPL software to the Commons Clause (in their case, Apache + Commons Clause; so you can’t really call it Apache any longer). I’ll have more to say on this topic soon, but in the meantime you might enjoy reading Open-source licensing war: Commons Clause. This was the most balanced article I read about this move and the kerfuffle it has caused. We also saw this with Lerna (not database related), and here’s another good read: Open Source Devs Reverse Decision to Block ICE Contractors From Using Software.

Reviewing is under way for Percona Live Europe 2018 talks: the review of the tutorials is complete. We can expect to see a schedule by mid-September, so hang in there—I’ve received a lot of messages asking if talks are going to be approved or not.


  • While not a new release, MySQL Shell 8.0.12 is worth spending some time with, especially since you might enjoy the pluggable password store.
  • SqlKata for C# – SqlKata is an elegant SQL query builder for C#. It gives you a higher degree of freedom when talking to your database engine, letting you write complex queries in an object-oriented manner. Works with MySQL, PostgreSQL, and more.

Link List

Industry Updates

  • Balazs Pocze is now a database SRE at Wikimedia Foundation. He has spoken at several Percona Live events too!

Upcoming Appearances


I look forward to feedback/tips via e-mail or on Twitter @bytebot.


The post This Week in Data With Colin Charles 51: Debates Emerging on the Relicensing of OSS appeared first on Percona Database Performance Blog.


Webinar Wed 9/5: Choosing the Right Open Source Database


Please join Percona’s CEO, Peter Zaitsev, as he presents Choosing the Right Open Source Database on Wednesday, September 5th, 2018 at 11:00 AM PDT (UTC-7) / 2:00 PM EDT (UTC-4).



The world of open-source databases is overwhelming. There are dozens of types of databases – relational DBMS, time-series, graph, document and more – not to mention the dozens of software options within each of those categories. More and more, the strategies of enterprises involve open source software, including open source database software. This allows companies to be more agile, quicker to market, and more cost-effective.

So how do you know which open source database to choose? Open-source database companies worldwide are competing for our attention. In this talk, we will distill the madness. I will give an overview of the types of open-source databases, use-cases, and key players in the ecosystem.

Register for this webinar on choosing the right open source database.

The post Webinar Wed 9/5: Choosing the Right Open Source Database appeared first on Percona Database Performance Blog.


Tuning PostgreSQL Database Parameters to Optimize Performance


Out of the box, the default PostgreSQL configuration is not tuned for any particular workload. The default values are set to ensure that PostgreSQL runs everywhere, consuming the fewest resources it can, and without opening up any vulnerabilities. It is primarily the responsibility of the database administrator or developer to tune PostgreSQL according to their system’s workload. In this blog, we will establish basic guidelines for setting PostgreSQL database parameters to improve database performance according to workload.

Bear in mind that while optimizing PostgreSQL server configuration improves performance, a database developer must also be diligent when writing queries for the application. If queries perform full table scans where an index could be used or perform heavy joins or expensive aggregate operations, then the system can still perform poorly even if the database parameters are tuned. It is important to pay attention to performance when writing database queries.

Nevertheless, database parameters are very important too, so let’s take a look at the eight that have the greatest potential to improve performance.

PostgreSQL’s Tuneable Parameters


shared_buffers

PostgreSQL uses its own buffer and also uses kernel-buffered IO. That means data is stored in memory twice: first in the PostgreSQL buffer and then in the kernel buffer. Unlike some other databases, PostgreSQL does not provide direct IO; this is called double buffering. The PostgreSQL buffer is sized by shared_buffers, which is the most effective tunable parameter on most operating systems. This parameter sets how much dedicated memory PostgreSQL will use for its cache.

The default value of shared_buffers is set very low and you will not get much benefit from it. It is set low because certain machines and operating systems do not support higher values, but on most modern machines you need to increase this value for optimal performance.

The recommended starting value is 25% of your total machine RAM. You should also try some lower and higher values, because in some cases good performance is achieved with a setting above 25%. The right configuration depends on your machine and the working data set. If your working set of data can easily fit into RAM, you might increase the shared_buffers value enough to contain your entire database, so that the whole working set can reside in cache. That said, you obviously do not want to reserve all RAM for PostgreSQL.

In production environments, a large value for shared_buffers has been observed to give really good performance, though you should always benchmark to find the right balance.
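As a rough, back-of-the-envelope sketch (not an official sizing formula), the 25% starting point can be computed like this; the RAM sizes below are hypothetical examples:

```python
def shared_buffers_starting_point_gb(total_ram_gb, fraction=0.25):
    """Return a starting value for shared_buffers, in GB.

    The fraction defaults to the commonly recommended 25% of total RAM;
    benchmark lower and higher values against your own workload.
    """
    return total_ram_gb * fraction

# A hypothetical 64 GB server: start benchmarking around 16 GB.
print(shared_buffers_starting_point_gb(64))  # -> 16.0
```

Treat the result only as a first guess to benchmark from, not a final setting.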

testdb=# SHOW shared_buffers;
(1 row)

Note: Be careful, as some kernels do not allow a bigger value; on Windows in particular there is no benefit from a higher value.


wal_buffers

PostgreSQL writes its WAL (write-ahead log) records into buffers, and these buffers are then flushed to disk. The default size of the buffer, defined by wal_buffers, is 16MB, but if you have a lot of concurrent connections then a higher value can give better performance.


effective_cache_size

The effective_cache_size parameter provides an estimate of the memory available for disk caching. It is just a guideline, not an exact allocation: it does not allocate actual memory, but tells the optimizer how much cache is available in the kernel. If this value is set too low, the query planner may decide not to use some indexes even when they would be helpful. Setting a generous value is therefore generally beneficial.


work_mem

This parameter is used for complex sorting. If you have to do complex sorts, increase the value of work_mem for good results: in-memory sorts are much faster than sorts that spill to disk. However, setting a very high value can cause a memory bottleneck for your deployment environment, because this parameter is applied per sort operation per user. If you have many users trying to execute sort operations, the system may allocate

work_mem * total sort operations

for all users. Setting this parameter high globally can therefore cause very high memory usage, so it is highly recommended to raise it at the session level instead.
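To see why a high global work_mem can exhaust memory, here is a minimal sketch of the worst-case arithmetic above; the session and sort counts are made-up illustrations:

```python
def worst_case_sort_memory_mb(work_mem_mb, total_sort_operations):
    """Worst-case memory if every sort allocates a full work_mem,
    i.e. work_mem * total sort operations."""
    return work_mem_mb * total_sort_operations

# 100 sessions each running 2 concurrent sorts with work_mem = 50MB:
print(worst_case_sort_memory_mb(50, 100 * 2))  # -> 10000 MB, about 10 GB
```

A setting that looks harmless per session can add up to many gigabytes under concurrency.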

testdb=# SET work_mem TO '2MB';
testdb=# EXPLAIN SELECT * FROM bar ORDER BY bar.b;
                                    QUERY PLAN
-----------------------------------------------------------------------------------
Gather Merge  (cost=509181.84..1706542.14 rows=10000116 width=24)
   Workers Planned: 4
   ->  Sort  (cost=508181.79..514431.86 rows=2500029 width=24)
         Sort Key: b
         ->  Parallel Seq Scan on bar  (cost=0.00..88695.29 rows=2500029 width=24)
(5 rows)

The initial query’s sort node has an estimated cost of 514431.86. Cost is an arbitrary unit of computation. For the above query, we have a work_mem of only 2MB. For testing purposes, let’s increase this to 256MB and see if there is any impact on cost.

testdb=# SET work_mem TO '256MB';
testdb=# EXPLAIN SELECT * FROM bar ORDER BY bar.b;
                                    QUERY PLAN
-----------------------------------------------------------------------------------
Gather Merge  (cost=355367.34..1552727.64 rows=10000116 width=24)
   Workers Planned: 4
   ->  Sort  (cost=354367.29..360617.36 rows=2500029 width=24)
         Sort Key: b
         ->  Parallel Seq Scan on bar  (cost=0.00..88695.29 rows=2500029 width=24)
(5 rows)

The query cost is reduced to 360617.36 from 514431.86 — a 30% reduction.
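The 30% figure comes straight from the two sort-node costs reported by EXPLAIN:

```python
# Sort-node costs taken from the two EXPLAIN plans above.
cost_2mb, cost_256mb = 514431.86, 360617.36

reduction_pct = (cost_2mb - cost_256mb) / cost_2mb * 100
print(round(reduction_pct, 1))  # -> 29.9, i.e. roughly a 30% reduction
```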


maintenance_work_mem

maintenance_work_mem is a memory setting used for maintenance tasks. The default value is 64MB. Setting a large value helps tasks like VACUUM, RESTORE, CREATE INDEX, ADD FOREIGN KEY and ALTER TABLE.

postgres=# CHECKPOINT;
postgres=# SET maintenance_work_mem to '10MB';
postgres=# CREATE INDEX foo_idx ON foo (c);
Time: 170091.371 ms (02:50.091)

postgres=# CHECKPOINT;
postgres=# set maintenance_work_mem to '256MB';
postgres=# CREATE INDEX foo_idx ON foo (c);
Time: 111274.903 ms (01:51.275)

The index creation time is 170091.371 ms when maintenance_work_mem is set to only 10MB, but drops to 111274.903 ms (roughly 35% faster) when the maintenance_work_mem setting is increased to 256MB.


synchronous_commit

This parameter enforces that a commit waits for the WAL to be written to disk before returning a success status to the client. It is a trade-off between performance and reliability. If your application is designed such that performance is more important than reliability, turn off synchronous_commit. There will then be a time gap between the success status and a guaranteed write to disk, and in the case of a server crash data might be lost even though the client received a success message on commit. Transactions commit very quickly in this mode because they do not wait for the WAL file to be flushed, but durability is compromised.

checkpoint_timeout, checkpoint_completion_target

PostgreSQL writes changes into the WAL, and the checkpoint process flushes the data into the data files. This happens when a CHECKPOINT occurs; it is an expensive operation that can cause a huge amount of IO through disk read/write activity. Users can issue CHECKPOINT whenever it seems necessary, or automate it with PostgreSQL’s checkpoint_timeout and checkpoint_completion_target parameters.

The checkpoint_timeout parameter sets the time between WAL checkpoints. Setting it low decreases crash recovery time, since more data has already been written to disk, but it hurts performance too, since every checkpoint consumes valuable system resources. checkpoint_completion_target is the fraction of the time between checkpoints over which each checkpoint’s writes are spread. A high frequency of checkpoints can impact performance. For smooth checkpointing, checkpoint_completion_target should be set to a high value; with a low value, the OS accumulates the dirty pages and then goes for one big flush.
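As a small illustration of how the two settings interact (the 5-minute timeout below is just an example value), each checkpoint's writes are spread over roughly checkpoint_timeout * checkpoint_completion_target seconds, so a high target smooths the IO while a low target bunches it into a burst:

```python
def checkpoint_write_window_s(checkpoint_timeout_s, completion_target):
    """Approximate window (in seconds) over which one checkpoint's
    writes are spread before the next checkpoint is due."""
    return checkpoint_timeout_s * completion_target

# checkpoint_timeout = 5min (300 s):
print(checkpoint_write_window_s(300, 0.9))  # -> 270.0 (smooth, spread-out writes)
print(checkpoint_write_window_s(300, 0.1))  # -> 30.0 (one short, bursty flush)
```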


There are more parameters that can be tuned to gain better performance, but they have less impact than the ones highlighted here. In the end, we must always keep in mind that not all parameters are relevant for all application types. Some applications perform better after tuning a parameter and some don’t. Database parameters must be tuned for the specific needs of an application and the OS it runs on.

Related posts

You can read my post about tuning Linux parameters for PostgreSQL database performance.

Plus another recent post on benchmarks:

Tuning PostgreSQL for sysbench-tpcc

The post Tuning PostgreSQL Database Parameters to Optimize Performance appeared first on Percona Database Performance Blog.


Amazon is quietly doubling down on cryptographic security

The growth of cloud services — with on-demand access to IT services over the Internet — has become one of the biggest evolutions in enterprise technology, but with it, so has the threat of security breaches and other cybercriminal activity. Now it appears that one of the leading companies in cloud services is looking for more ways to double down and fight the latter. Amazon’s AWS has been working on a range of new cryptographic and AI-based tools to help manage the security around cloud-based enterprise services, and it currently has over 130 vacancies for engineers with cryptography skills to help build and run it all.

One significant part of the work has been within a division of AWS called the Automated Reasoning Group, which focuses on identifying security issues and developing new tools to fix them for AWS and its customers based on automated reasoning, a branch of artificial intelligence that spans computer science and mathematical logic and is aimed at enabling computers to reason automatically, either fully or nearly so.

In recent times, Amazon has registered two new trademarks, Quivela and SideTrail, both of which have connections to ARG.

Classified in its patent application as “computer software for cryptographic protocol specification and verification,” Quivela also has a Github repository within AWS Labs’ profile that describes it as a “prototype tool for proving the security of cryptographic protocols,” developed by the AWS Automated Reasoning Group. (Part of the ARG’s mission is to share code and ideas with the community.)

SideTrail is not on Github, but Byron Cook, an academic who is the founder and director of the AWS Automated Reasoning Group, has co-authored a research paper called “SideTrail: Verifying the Time Balancing of Cryptosystems.” However, the link to the paper, describing what this is about, is no longer working.

The trademark application for SideTrail includes a long list of potential applications (as trademark applications often do). The general idea is cryptography-based security services. Among them: “Computer software, namely, software for monitoring, identifying, tracking, logging, analyzing, verifying, and profiling the health and security of cryptosystems; network encryption software; computer network security software,” “Providing access to hosted operating systems and computer applications through the Internet,” and a smattering of consulting potential: “Consultation in the field of cloud computing; research and development in the field of security and encryption for cryptosystems; research and development in the field of software; research and development in the field of information technology; computer systems analysis.”

Added to this, in July, a customer of AWS started testing out two other new cryptographic tools developed by the ARG also for improving an organization’s cybersecurity — with the tools originally released the previous August (2017). Tiros and Zelkova, as the two tools are called, are math-based techniques that variously evaluate access control schemes, security configurations and feedback based on different setups to help troubleshoot and prove the effectiveness of security systems across storage (S3) buckets.

Amazon has not trademarked Tiros and Zelkova. A Zelkova trademark, for financial services, appears to be registered as an LLC called “Zelkova Acquisition” in Las Vegas, while there is no active trademark listed for Tiros.

Amazon declined to respond to our questions about the trademarks. A selection of people we contacted associated with the projects did not respond to requests for comment.

More generally, cryptography is a central part of how IT services are secured: Amazon’s Automated Reasoning Group has been around since 2014 working in this area. But Amazon appears to be doing more now both to ramp up the tools it produces and consider how it can be applied across the wider business. A quick look on open vacancies at the company shows that there are currently 132 openings at Amazon for people with cryptography skills.

“Cloud is the new computer, the Earth is the motherboard and data centers are the cards,” Cook said in a lecture he delivered recently describing AWS and the work that the ARG is doing to help AWS grow. “The challenge is that as [AWS] scales it needs to be ever more secure… How does AWS continue to scale quickly and securely?

“AWS has made a big bet on our community,” he continued, as one answer to that question. That’s led to an expansion of the group’s activities in areas like formal verification and beyond, as a way of working with customers and encouraging them to move more data to the cloud.

Amazon is also making some key acquisitions to build up its cloud security footprint, such as Sqrrl and another AI-based security startup, whose founding teams both happen to have worked at the NSA.

Amazon’s AWS division pulled in over $6 billion in revenues last quarter with $1.6 billion in operating income, a healthy margin that underscores the shift that businesses and other organizations are making to cloud-based services.

Security is an essential component of how that business will continue to grow for Amazon and the wider industry: more trust in the infrastructure, and more proof that cloud architectures can work better than using and scaling the legacy systems businesses rely on today, will bolster the business. It is also essential given the rise of breaches and ever more sophisticated cybercrime. Gartner estimates that cloud-based security services will be a $6.9 billion market this year, rising to nearly $9 billion by 2020.

Automated tools that help human security specialists do their jobs better is an area that others like Microsoft are also eyeing up. Last year, it acquired Israeli security firm Hexadite, which offers remediation services to complement and bolster the work done by enterprise security specialists.


OpenStack’s latest release focuses on bare metal clouds and easier upgrades

The OpenStack Foundation today released the 18th version of its namesake open-source cloud infrastructure software. The project has had its ups and downs, but it remains the de facto standard for running and managing large private clouds.

What’s been interesting to watch over the years is how the project’s releases have mirrored what’s been happening in the wider world of enterprise software. The core features of the platform (compute, storage, networking) are very much in place at this point, allowing the project to look forward and to add new features that enterprises are now requesting.

The new release, dubbed Rocky, puts an emphasis on bare metal clouds, for example. While the majority of enterprises still run their workloads in virtual machines, a lot of them are now looking at containers as an alternative with less overhead and the promise of faster development cycles. Many of these enterprises want to run those containers on bare metal clouds and the project is reacting to this with its “Ironic” project that offers all of the management and automation features necessary to run these kinds of deployments.

“There’s a couple of big features that landed in Ironic in the Rocky release cycle that we think really set it up well for OpenStack bare metal clouds to be the foundation for both running VMs and containers,” OpenStack Foundation VP of marketing and community Lauren Sell told me. 

Ironic itself isn’t new, but in today’s update, Ironic gets user-managed BIOS settings (to configure power management, for example) and RAM disk support for high-performance computing workloads. Magnum, OpenStack’s service for using container engines like Docker Swarm, Apache Mesos and Kubernetes, is now also a Kubernetes certified installer, meaning that users can be confident that OpenStack and Kubernetes work together just like a user would expect.

Another trend that’s becoming quite apparent is that many enterprises that build their own private clouds do so because they have very specific hardware needs. Often, that includes GPUs and FPGAs, for example, for machine learning workloads. To make it easier for these businesses to use OpenStack, the project now includes a lifecycle management service for these kinds of accelerators.

“Specialized hardware is getting a lot of traction right now,” OpenStack CTO Mark Collier noted. “And what’s interesting is that FPGAs have been around for a long time but people are finding out that they are really useful for certain types of AI, because they’re really good at doing the relatively simple math that you need to repeat over and over again millions of times. It’s kind of interesting to see this kind of resurgence of certain types of hardware that maybe was seen as going to be disrupted by cloud and now it’s making a roaring comeback.”

With this update, the OpenStack project is also enabling easier upgrades, something that was long a daunting process for enterprises. Because it was so hard, many chose to simply not update to the latest releases and often stayed a few releases behind. Now, the so-called Fast Forward Upgrade feature allows these users to get on new releases faster, even if they are well behind the project’s own cycle. Oath, which owns TechCrunch, runs a massive OpenStack cloud, for example, and the team recently upgraded a 20,000-core deployment from Juno (the 10th OpenStack release) to Ocata (the 15th release).

The fact that Vexxhost, a Canadian cloud provider, is already offering support for the Rocky release in its new Silicon Valley cloud today is yet another sign that updates are getting a bit easier (and the whole public cloud side of OpenStack, too, often gets overlooked, but continues to grow).


InVision deepens integrations with Atlassian

InVision today announced a newly expanded integration and strategic partnership with Atlassian that will let users of Confluence, Trello and Jira see and share InVision prototypes from within those programs.

Atlassian’s product suite is built around making product teams faster and more efficient. These tools streamline and organize communication so developers and designers can focus on getting the job done. Meanwhile, InVision’s collaboration platform has caught on to the idea that design is now a team sport, letting designers, engineers, executives and other stakeholders be involved in the design process right from the get-go.

Specifically, the expanded integration allows designers to share InVision Studio designs and prototypes right within Jira, Trello and Confluence. InVision Studio was unveiled late last year, offering designers an alternative to Sketch and Adobe.

Given the way design and development teams use both product suites, it only makes sense to let these product suites communicate with one another.

As part of the partnership, Atlassian has also made a strategic financial investment in InVision, though the companies declined to share the amount.

Here’s what InVision CEO Clark Valberg had to say about it in a prepared statement:

In today’s digital world creating delightful, highly effective customer experiences has become a central business imperative for every company in the world. InVision and Atlassian represent the essential platforms for organizations looking to unleash the potential of their design and development teams. We’re looking forward to all the opportunities to deepen our relationship on both a product and strategic basis, and build toward a more cohesive digital product operating system that enables every organization to build better products, faster.

InVision has been working to position itself as the Salesforce of the design world. Alongside InVision and InVision Studio, the company has also built out an asset and app store, as well as launched a small fund to invest in design startups. In short, InVision wants the design ecosystem to revolve around it.

Considering that InVision has raised more than $200 million, and serves 4 million users, including 80 percent of the Fortune 500, it would seem that the strategy is paying off.


Is It a Read Intensive or a Write Intensive Workload?


One of the common ways to classify database workloads is as “read intensive” or “write intensive”: in other words, whether the workload is dominated by reads or by writes.

Why should you care? Because recognizing whether the workload is read intensive or write intensive will impact your hardware choices and database configuration, as well as which techniques you can apply for performance optimization and scalability.

This question looks trivial on the surface, but as you go deeper—complexity emerges. There are different “levels” of reads and writes for you to consider. You can also choose to look at event counts or at the time it takes to do operations. These can provide very different responses, especially as the cost difference between a single read and a single write can be an order of magnitude.

Let’s examine the TPC-C Benchmark from this point of view, or more specifically its implementation in Sysbench. The illustrations below are taken from Percona Monitoring and Management (PMM) while running this benchmark.

Analyzing read/write workload by counts

analyzing read write workload by counts
At the highest level, you can think about queries that are sent to the database. In this case we can see about 30K of SELECT queries versus 20K of UPDATE+INSERT queries, making this benchmark slightly more read intensive by this measure.

innodb row operations
Another way to look at the load is through actual operations at the row level: a single query may touch just one row or millions. In this benchmark, looking at the workload from the SQL command standpoint versus the row operation standpoint yields the same result, but that will not always be the case.

io activity
Let’s now look at the operating system level. We can see that the amount of data written to disk is 2x the amount being read from disk. By this measure, the workload is write intensive.

top tables by row read

top tables by rows changed

Yet another way to look at your workload is from the aspect of tables. This view shows which tables are accessed the most for reads and writes, letting us see whether a given table gets more reads or writes. This is helpful, for example, if you are considering moving some tables to a different server and want to clearly understand how your workload will be impacted.

Analyzing Read/Write Workload by Response Time

As I mentioned already, the counts often do not reflect the time to respond, which is typically more representative of the real work being done. To look at timing information from query point of view, we want to look at query analytics.

query analytics providing time analysis
The “Load” column here is a measure of such combined response time, versus the count column, which reflects query counts. Looking at this list, we can see that three of the top five queries are SELECT queries. Looking at the numbers overall, from this perspective we have a read intensive application.

In terms of row-level operations, there is currently no easy way to see whether reads or writes dominate overall, but you can get an idea from the table operations dashboard:

table operations dashboard
This shows the load on a per table basis. It labels reads “Fetch” and breaks down writes in more detail—“Update”, “Delete”, “Inserts”—which is helpful. Not all writes are equal either.

disk io load

If we want a response time based view of reads versus writes at the operating system level, we can check out this disk IO Load graph. In this case it happens to match the IO activity graph, with storage taking more time to serve write requests than read requests.


As you can see, the question about whether a workload is read intensive or write intensive, while simple on the surface, can have many different answers. You might ask me “OK, so what should I use?” Well… it really depends.

Looking at query counts is a great way to understand the application’s demands on the database, which you typically cannot change. However, by changing the database configuration and schema you may drastically alter the impact of these queries, both in terms of the number of rows they crunch and the disk IO they require.

The response time based statistics, gathered from the impact your queries cause on the system or disk IO, provide a better representation of the load these queries currently generate.

Another thing to keep in mind—reads and writes are not created equal. My rule of thumb for InnoDB is that a single row write is about 10x more expensive than a single row read.
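Putting that rule of thumb together with the earlier query counts gives a sketch of a cost-weighted classification (the 10x factor is the InnoDB rule of thumb above, not a universal constant):

```python
def write_share_of_cost(reads, writes, write_cost_factor=10):
    """Estimated fraction of total work spent on writes, weighting
    each write as write_cost_factor single-row reads."""
    write_cost = writes * write_cost_factor
    return write_cost / (reads + write_cost)

# 30K SELECTs vs 20K UPDATE+INSERTs: read intensive by count,
# but clearly write intensive once the cost weighting is applied.
print(round(write_share_of_cost(30_000, 20_000), 2))  # -> 0.87
```

Strictly, the 10x figure applies to row operations rather than whole queries, so treat this purely as an illustration of how cost weighting can flip the classification.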

More resources that you might enjoy

If you found this post useful, you might also like to see some of Percona’s other resources.

For an introduction to PMM, our free and open source management and monitoring software, you might find value in my recorded webinar, MySQL Troubleshooting and Performance Optimization with PMM.

Meanwhile, our white paper Performance at Scale could provide useful insight if you are at the planning or review stage.

The post Is It a Read Intensive or a Write Intensive Workload? appeared first on Percona Database Performance Blog.


Google takes a step back from running the Kubernetes development infrastructure

Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community. These credits will be split over three years and are meant to cover the infrastructure costs of building, testing and distributing the Kubernetes software.

Why does this matter? Until now, Google hosted virtually all the cloud resources that supported the project, like its CI/CD testing infrastructure, container downloads and DNS services on its cloud. But Google is now taking a step back. With the Kubernetes community reaching a state of maturity, Google is transferring all of this to the community.

Between the testing infrastructure and hosting container downloads, the Kubernetes project regularly runs more than 150,000 containers on 5,000 virtual machines, so the cost of running these systems quickly adds up. The Kubernetes container registry has served almost 130 million downloads since the launch of the project.

It’s also worth noting that the CNCF now includes a wide range of members that typically compete with each other. We’re talking Alibaba Cloud, AWS, Microsoft Azure, Google Cloud, IBM Cloud, Oracle, SAP and VMware, for example. All of these profit from the work of the CNCF and the Kubernetes community. Google doesn’t say so outright, but it’s fair to assume that it wanted others to shoulder some of the burdens of running the Kubernetes infrastructure, too. Similarly, some of the members of the community surely didn’t want to be so closely tied to Google’s infrastructure, either.

“By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project operations,” Google Kubernetes Engine product manager William Deniss writes in today’s announcement. He also notes that a number of Google’s own engineers will still be involved in running the Kubernetes infrastructure.

“Google’s significant financial donation to the Kubernetes community will help ensure that the project’s constant pace of innovation and broad adoption continue unabated,” said Dan Kohn, the executive director of the CNCF. “We’re thrilled to see Google Cloud transfer management of the Kubernetes testing and infrastructure projects into contributors’ hands — making the project not just open source, but openly managed, by an open community.”

It’s unclear whether the project plans to take some of the Google-hosted infrastructure and move it to another cloud, but it could definitely do so — and other cloud providers could step up and offer similar credits, too.


Box builds a digital hub to help fight content fragmentation

The interconnectedness of the cloud has allowed us to share content widely with people inside and outside the organization and across different applications, but that ability has created a problem of its own, a kind of digital fragmentation. How do you track how that piece of content is being used across a range of cloud services? It’s a problem Box wants to solve with its latest features, Activity Stream and Recommended Apps.

The company made the announcements at BoxWorks, its annual customer conference being held this week in San Francisco.

Activity Stream provides a way to track your content in real time as it moves through the organization, including who touches it and what applications it’s used in, acting as a kind of digital audit trail. One of the big problems with content in the cloud age is understanding what happened to it after you created it. Did it get used in Salesforce or ServiceNow or Slack? You can now follow the path of your content and see how people have shared it, and this could help remove some of the disconnect people feel in the digital world.

As Jeetu Patel, Box’s Chief Product and Chief Strategy Officer, points out, an average large company could have more than a thousand apps, and there is no good way to connect the dots when it comes to tracking unstructured content and getting a unified view of the digital trail.

“We integrate with over 1400 applications, and as we integrate with those applications, we thought if we could surface those events, it would be insanely useful to our users,” he said. Patel sees this as the beginning of an important construct, the notion of a content hub where you can see the entire transaction record associated with a piece of content.

Activity Stream sidebar inside Box. Photo: Box

But Box didn’t want to stop with just a laundry list of the connections. It also created deep links into the applications being used, so a user can click a link, open the application and view the content in the context of that other application. “It seems like Box was a logical place to get a bird’s eye view of how content is being used,” Patel said, explaining Box’s thinking in creating this feature.

A related feature is a list of Recommended Apps. Based on the Box Graph, which captures what Box knows about the user, the content they use, and how it is interconnected with other cloud apps, Box also displays a list of recommended apps right in its interface. This lets users access those applications in the context of their work, so, for instance, they could share the content in Slack right from the document.

Recommended Apps bar inside Box. Photo: Box

For starters, Recommended Apps integrations include G Suite apps, Slack, Salesforce, DocuSign and Netsuite, but Patel says anyone who is integrated with the web app via the API will start showing up in Activity Stream.

While the features were announced today, Box is still working out the kinks in how they will work, and expects them to be available early next year. If Box can pull this off, it will go a long way toward solving the digital fragmentation problem and making Box the content center for organizations.


Scaling IO-Bound Workloads for MySQL in the Cloud


Is increasing gp2 volume size, or provisioning more IOPS for io1 volumes, a valid method for scaling IO-bound workloads? In this post I’ll focus on one question: how much can we improve performance if we use faster cloud volumes? This post is a continuation of previous cloud research posts:

To recap: in Amazon EC2 we can use gp2 and io1 volumes. gp2 performance scales with size at 3 iops per GB, i.e. a 500GB gp2 volume gets 1,500 iops; a 1000GB volume gets 3,000 iops; and a 3334GB volume gets 10,000 iops (the maximum possible value). For io1 volumes we can “buy” provisioned throughput of up to 30,000 iops.
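The gp2 scaling rule above (3 iops per provisioned GB, capped at 10,000 at the time of this benchmark) can be sketched as a small helper. This is my own illustration, not part of the benchmark tooling:

```python
def gp2_iops(size_gb: int, per_gb: int = 3, cap: int = 10_000) -> int:
    """Baseline IOPS for a gp2 volume: 3 IOPS per GB,
    capped at 10,000 (the maximum when this benchmark was run)."""
    return min(size_gb * per_gb, cap)

print(gp2_iops(500))   # 1500
print(gp2_iops(1000))  # 3000
print(gp2_iops(3334))  # 10000 (3 * 3334 = 10002, capped)
```

This also explains the increments used below: each extra 100GB of gp2 volume buys another 300 iops, until the cap is reached.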

So I wanted to check how both InnoDB and RocksDB storage engines perform on these volumes with different throughput.

Benchmark Scenario

I will use the same data size that I used in Saving With MyRocks in The Cloud: sysbench-tpcc with 50 tables of 100 warehouses (100W) each, for about 500GB of data in InnoDB and 100GB in RocksDB (compressed with LZ4).

Volume settings: gp2 volumes from 500GB (1000GB for InnoDB) to 3400GB in 100GB increments (so each increment adds 300 iops); io1 volumes of 1TB in size, with iops from 1,000 to 30,000 in 1,000-iops increments.
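Enumerated as lists, the sweep above looks like this (a sketch for clarity; the variable names are mine):

```python
# gp2: 500GB to 3400GB in 100GB steps (each step adds 300 iops);
# io1: 1,000 to 30,000 provisioned IOPS in 1,000-iops steps.
gp2_sizes_gb = list(range(500, 3401, 100))
io1_iops = list(range(1000, 30001, 1000))

print(len(gp2_sizes_gb), gp2_sizes_gb[0], gp2_sizes_gb[-1])  # 30 500 3400
print(len(io1_iops), io1_iops[0], io1_iops[-1])              # 30 1000 30000
```

So each engine/volume combination is measured at 30 distinct throughput levels.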

Let’s take a look at the results. I will use a slightly different format than usual, but hopefully it represents the results better. Each plot shows the distribution of throughput as a density: a taller, narrower shape means less variance in the throughput.

Results on GP2 volumes:

InnoDB/MyRocks throughput on gp2

It’s quite interesting to see how the results scale with better IO throughput. InnoDB does not improve its throughput beyond a gp2 size of 2600GB, while MyRocks continues to scale linearly. The problem with MyRocks is that there is a lot of variance in its throughput (see the one-second resolution charts below).

Results on IO1 volumes

InnoDB / MyRocks throughput on IO1

Here MyRocks again shows impressive growth as we add more IO capacity, but it also shows a lot of variance on high-capacity volumes.

Let’s compare how engines perform with one second resolution. GP2 volume, 3400GB:

InnoDB/MyRocks throughput on gp2 3400GB

IO1 volume, 30000 iops:

InnoDB/MyRocks throughput on IO1 30000 IOPS

So for MyRocks there appears to be periodic background activity (likely RocksDB compaction) that prevents it from achieving a stable throughput.

Raw results, if you’d like to review them, can be found here:


If you are looking to improve throughput for IO-bound workloads, both increasing gp2 volume size and provisioning more IOPS for io1 volumes are valid methods, especially for the MyRocks engine.

The post Scaling IO-Bound Workloads for MySQL in the Cloud appeared first on Percona Database Performance Blog.
