Oct 30, 2020

CVE-2020-15180 – Affects Percona XtraDB Cluster


Galera replication technology, a key component of Percona XtraDB Cluster, suffered from a remote code execution vulnerability. Percona has been working with the vendor since early September on this issue and has made releases available to address the problem.

Applicability

To exploit this vulnerability, a malicious party needs access to the WSREP service port (4567/TCP) as well as prior knowledge of the configured Galera cluster name. Successful exploitation leads to remote code execution via the WSREP protocol.
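Until the fixed releases are deployed, limiting network exposure of the WSREP port is a reasonable stopgap. A minimal sketch, assuming iptables and an example peer subnet of 10.0.0.0/24 (substitute your actual cluster addresses):

```shell
# Allow WSREP traffic (4567/TCP) only from known cluster peers; drop the rest.
# The subnet below is an example, not a recommendation for your network.
iptables -A INPUT -p tcp --dport 4567 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 4567 -j DROP
```

This does not remove the vulnerability; it only narrows who can reach the vulnerable service while you upgrade.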

Fixes are available in Percona XtraDB Cluster versions:

>= 8.0.20-11.2

>= 5.7.31-31.45.2

>= 5.6.49-28.42.2

Credits

Percona would like to thank all the Percona staff involved in the resolution of this issue.

More Information

Release notes

Oct 30, 2020

Cloud infrastructure revenue grows 33% this quarter to almost $33B

The cloud infrastructure market kept growing at a brisk pace last quarter, as the pandemic continued to push more companies to the cloud with offices shut down in much of the world. This week the big three — Amazon, Microsoft and Google — all reported their numbers and, as expected, the news was good, with Synergy Research reporting revenue growth of 33% year over year, up to almost $33 billion for the quarter.

Still, John Dinsdale, chief analyst at Synergy, was a bit taken aback that the market continued to grow as much as it did. “While we were fully expecting continued strong growth in the market, the scale of the growth in Q3 was a little surprising,” he said in a statement.

He added, “Total revenues were up by $2.5 billion from the previous quarter causing the year-on-year growth rate to nudge upwards, which is unusual for such a large market. It is quite clear that COVID-19 has provided an added boost to a market that was already developing rapidly.”

Per usual, Amazon led the way with $11.6 billion in revenue, up from $10.8 billion last quarter and up 29% year over year. Amazon continues to exhibit slowing growth in the cloud market, but because its 33% market share lead has held fairly steady for some time, the slowing growth matters less than the eye-popping revenue it continues to generate, almost double that of its closest rival, Microsoft.

Speaking of Microsoft, Azure revenue was up 48% year over year, also slowing some, but good enough for a strong second place with 18% market share. Using Synergy’s total quarterly number of $33 billion, Microsoft came in at $5.9 billion in revenue for the quarter, up from $5.2 billion last quarter.

Finally, Google announced cloud revenue of $3.4 billion, but that number covers all of its cloud revenue, including G Suite and other software. Synergy reported that this was good for 9%, or $2.98 billion, up from $2.7 billion last quarter, putting it in third place.

Alibaba and IBM were tied for fourth with 5%, or around $1.65 billion each.

Synergy Research cloud infrastructure relative market positions. Amazon is the largest circle followed by Microsoft.

Image Credits: Synergy Research

It’s worth noting that Canalys had similar numbers to Synergy, with growth of 33% to $36.5 billion. It reported the same market order with slightly different numbers: Amazon at 32%, Microsoft at 19%, Google at 7%, and Alibaba in fourth place at 6%.

Canalys sees continued growth ahead, especially as hybrid cloud begins to merge with newer technologies like 5G and edge computing. “All three [providers] are collaborating with mobile operators to deploy their cloud stacks at the edge in the operators’ data centers. These are part of holistic initiatives to profit from 5G services among business customers, as well as transform the mobile operators’ IT infrastructure,” Canalys analyst Blake Murray said in a statement.

The growth rate continues to edge downward over time, as expected in a maturing market like cloud infrastructure. But as companies shift workloads to the cloud more rapidly during the pandemic and find new use cases like 5G and edge computing, the market could continue to generate substantial revenue well into the future.

Oct 30, 2020

Percona Monthly Bug Report: October 2020


At Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, to help you manage any issues that arise in any of the databases we support and, in true open-source form, to report back on any issues or bugs you might encounter along the way.

We constantly update our bug reports and monitor other boards to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. This monthly post is a central place to get information on the most noteworthy open and recently resolved bugs. 

In this October 2020 edition of our monthly bug report, we have the following list of bugs:

Percona Server/MySQL Bugs

PS-7300 (MySQL#98869): Session temporary tablespace truncation on connection disconnect causes high CPU usage. This only affects servers with a large buffer pool (tested with buffer pool sizes of ~150GB and above).

Affects Version/s: 8.0  [Tested/Reported version 8.0.19, 8.0.20, 8.0.21]

Fixed Version/s: PS-8.0.22

 

PS-5641 (MySQL#95863): Deadlock on MTS replica while running administrative commands. This is a design flaw in the way MTS locks interact with non-replication locks like MDL or ACL locks, and the lack of deadlock detection infrastructure in replication.  

There are three ways of hitting this issue:

  • SET GLOBAL read_only=[ON/OFF]
  • SET GLOBAL super_read_only=[ON/OFF]
  • FLUSH TABLES WITH READ LOCK

 Affects Version/s: 5.7,8.0  [Tested/Reported version 5.7.26, 8.0.15,8.0.20]

 

MySQL#91977: Dropping a large table causes semaphore waits, during which no other work is possible.

Affects Version/s: 5.7,8.0  [Tested/Reported version 5.7.23, 8.0.12]

Running DROP/ALTER on a large table can trigger this bug. The DROP/ALTER query gets stuck in the ‘checking permissions’ state, and mysqld may later crash due to a long semaphore wait. It’s critical since it can result in unplanned downtime. The issue is also evident with the pt-online-schema-change tool when performing ALTER operations on large tables.

This is a design flaw: because of InnoDB’s semaphore protection, all naturally long DDL operations are at risk. To address this, there is a feature request (MySQL#92044) for a configurable timeout for long semaphore waits before the server dies.
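You can spot this condition early by checking the SEMAPHORES section of SHOW ENGINE INNODB STATUS for long waiters (InnoDB aborts the server once a wait exceeds 600 seconds). A sketch using a hypothetical line of that output:

```shell
# On a live server you would run:
#   mysql -e "SHOW ENGINE INNODB STATUS\G" | grep 'has waited'
# Hypothetical sample line from the SEMAPHORES section:
line='--Thread 140021 has waited at row0purge.cc line 862 for 523.00 seconds the semaphore:'
# Extract the wait time in whole seconds and compare against InnoDB's 600s limit.
wait_secs=$(printf '%s\n' "$line" | grep -oE 'for [0-9]+' | awk '{print $2}')
if [ "$wait_secs" -gt 500 ]; then
  echo "warning: semaphore wait ${wait_secs}s approaching the 600s crash threshold"
fi
```

Alerting on this a little below the 600-second limit gives you a chance to react before mysqld kills itself.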

 

Percona XtraDB Cluster

PXC-3366: Upgrading a PXC 5.7 node with SST traffic encryption enabled to PXC 8.0 results in status/variable queries always returning empty output.

Due to this bug, taking a backup of the upgraded PXC 8.0 node with xtrabackup will also fail with a crash.

Affects Version/s: 8.0   [Tested/Reported 8.0.19]

 

PXC-3456: If the wsrep_node_address value is specified as a hostname, the joiner node fails to join via SST. The workaround is to use an IP address for wsrep_node_address.

Affects Version/s: 8.0   [Tested/Reported 8.0.20]

Fixed Version: 8.0.20-11.3
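For versions without the fix, the workaround looks like this in my.cnf (the hostname and address below are examples; use your node's real IP):

```shell
# Write an example my.cnf fragment to a local file, for illustration only.
cnf=./wsrep.cnf.example
cat > "$cnf" <<'EOF'
[mysqld]
# wsrep_node_address=node1.example.com   # hostname: joiner SST fails (PXC-3456)
wsrep_node_address=192.0.2.10            # workaround: use the node's IP
EOF
grep -c '^wsrep_node_address=' "$cnf"
```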

 

PXC-3396: SST failing because of the wrong binlog name.

Setting the log-bin name explicitly with the data-dir path will result in SST failure. 

The workaround is to use a directory other than the datadir for binlogs; SST then works fine. The issue also does not occur with default binlog settings, which create binlogs in the datadir under the name “binlog.*”.

Affects Version/s: 8.0   [Tested/Reported 8.0.20-11.2]
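A sketch of the workaround (paths are examples): point log_bin at a directory outside the datadir.

```shell
# Write an example my.cnf fragment to a local file, for illustration only.
cnf=./binlog.cnf.example
cat > "$cnf" <<'EOF'
[mysqld]
datadir=/var/lib/mysql
# log_bin=/var/lib/mysql/mysql-bin        # binlog inside the datadir: SST fails
log_bin=/var/log/mysql-binlogs/mysql-bin  # separate directory: SST works
EOF
grep -c '^log_bin=' "$cnf"
```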

 

PXC-3418: Cluster node locks up with ALTER TABLE (even when using pt-online-schema-change) under concurrent RW load. A writer-node deadlock can occur when ALTER TABLE is run a few times on another node while a concurrent read/write load runs on the writer node.

Affects Version/s: 5.7   [Tested/Reported 5.7.31]

 

PXC-3373: [ERROR] WSREP: Certification exception: Unsupported key prefix: ^B: 71 (Protocol error) at galera/src/key_set.cpp:throw_bad_prefix():152

Affects Version/s: 5.7   [Tested/Reported 5.7.30]

IST to a lower-version node fails with this error when a write load is running on the donor node. A rolling upgrade, the most commonly used method, leaves PXC nodes on different versions for a while, and in such cases users can hit this issue easily.

Possible workarounds:

  • Upgrade the PXC nodes so that the same version runs across the cluster.
  • Stop the write load on the donor node while IST is running.

 

Percona XtraBackup

PXB-2237: PXB crashes during a backup when an encrypted table is updated

Affects Version/s:  2.4  [Tested/Reported version 2.4.20]

Databases with encrypted tables are affected by this bug. As a workaround, taking backups during off-peak hours can avoid the crash, or you can block write access to encrypted tables for the duration of the backup.

 

PXB-2178: Restoring datadir from partial backup results in an inconsistent data dictionary

Affects Version/s:  8.0  [Tested/Reported version 8.0.11]

As a result of this bug, after a restore you will see additional databases that were not part of the partial backup. The issue is evident only in XtraBackup 8.0, due to the new data dictionary implementation in MySQL 8.0; it is not reproducible with XtraBackup 2.4.

The workaround for this issue is to use “DROP DATABASE IF EXISTS” to drop the unwanted extra databases.

 

Percona Toolkit

PT-1570: Percona Toolkit tools fail to detect columns with the word GENERATED as part of the comment.

The issue is actually much more severe for other Percona Toolkit tools that use the same TableParser.pm package, like pt-online-schema-change and pt-archiver. With the old package, the tool skips any column whose comment contains the word ‘generated’ during the copy process. This results in data loss.
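The failure mode can be illustrated with a toy filter (this is not the actual TableParser.pm logic, just a sketch of why matching a keyword against the whole column definition goes wrong):

```shell
# A column whose COMMENT merely contains the word "generated"...
col="note varchar(20) COMMENT 'value generated nightly'"
# ...is mistaken for a generated column by a naive keyword match and skipped,
# which is how the pre-3.0.11 copy process loses the column's data.
if printf '%s' "$col" | grep -qi 'generated'; then
  echo "column skipped"
fi
```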

Therefore, it is very important to use Toolkit version 3.0.11 or higher to avoid this serious problem.

Affects Version/s:  3.0.10

Fixed Version/s: 3.0.11

 

PT-1747: pt-online-schema-change was bringing the database into a broken state when applying the “rebuild_constraints” foreign key modification method if any of the child tables was blocked by a metadata lock.

Affects Version/s:  3.x   [Tested/Reported version 3.0.13]

Critical bug since it can cause data inconsistency in the user environment. It potentially affects anyone who rebuilds tables with foreign keys.

 

PT-169: pt-online-schema-change removes both the old and the new table.

This is a critical bug: it destroys the original table when the pt-online-schema-change tool fails. In one reported case, an attempt to alter a data type on a parent table not only failed because a child table referenced it via a foreign key, but also resulted in the loss (drop) of both tables.

Affects Version/s:  3.2.1   [Tested/Reported version 3.2.1]

Fixed version: 3.3.0

 

PT-1853: pt-online-schema-change doesn’t handle self-referencing foreign keys properly

When using pt-osc to change a table that has a self-referencing FK, pt-osc creates the FK pointing to the old table instead of the _new table. Because of this, pt-osc needs to rebuild the FK after swapping the tables (dropping the FK and recreating it pointing to the _new table). This can cause issues because the INPLACE algorithm is supported only when foreign_key_checks is disabled; otherwise, only the COPY algorithm is supported.
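The FK rebuild step therefore looks roughly like the following (table and constraint names are hypothetical, and this is a sketch, not pt-osc's exact statements):

```shell
# Write the example rebuild statements to a file; on a live server you would
# feed this to the mysql client.
sql_file=./fk_rebuild.sql.example
cat > "$sql_file" <<'SQL'
SET foreign_key_checks = 0;  -- INPLACE is only supported with FK checks off
ALTER TABLE t
  DROP FOREIGN KEY fk_self,
  ADD CONSTRAINT fk_self FOREIGN KEY (parent_id) REFERENCES t (id),
  ALGORITHM = INPLACE;
SET foreign_key_checks = 1;
SQL
grep -c 'ALGORITHM = INPLACE' "$sql_file"
```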

Affects Version/s:  3.x  [Tested/Reported version 3.2.0]

Fixed Version/s: 3.2.1

This affects anyone who rebuilds tables with a self-referencing foreign key.

 

PMM  [Percona Monitoring and Management]

PMM-4547: MongoDB dashboard replication lag count incorrect.

With a hidden replica (with or without a delay specified), another replica will show a lag equal to the maximum unsigned integer value (about 136 years).

The problem is that MongoDB sometimes reports (in getDiagnosticData) that the primary is behind the secondary.

Since timestamps are unsigned integers, subtracting a larger number from a smaller one produces an overflow, which is why we see a 100+ year lag.
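A quick sketch of the arithmetic (32-bit unsigned timestamps assumed for illustration; the values are made up):

```shell
# If the primary's timestamp is even slightly behind the secondary's, the
# unsigned subtraction wraps around to a huge value.
primary_ts=1604000000
secondary_ts=1604000005                              # 5 seconds "ahead"
lag=$(( (primary_ts - secondary_ts) & 0xFFFFFFFF ))  # unsigned 32-bit wrap
echo "$lag"                                          # 4294967291 seconds...
echo $(( lag / 31536000 ))                           # ...about 136 years
```

2^32 seconds divided by ~31.5 million seconds per year is where the "about 136 years" figure comes from.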

Affects Version/s:  2.9  [Tested/Reported version 2.0,2.9.0]

Fixed Version: PMM-2.12.0

 

PMM-6733: QAN explain, example, and table tabs not working properly when using --query-source=perfschema.

This is a documentation bug. In the reported case, the Examples, Explain, and Tables tabs in PMM QAN showed errors because the events_statements_history consumer was not enabled in performance_schema (in MySQL 5.6 it is disabled by default). Enabling events_statements_history fixes the issue for new queries.
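The consumer can be enabled at runtime with an UPDATE on performance_schema.setup_consumers; to persist it across restarts, set the consumer flag in my.cnf. A sketch (the runtime statement is shown as a comment, since it needs a live server; the file path is an example):

```shell
# Runtime, on a live server:
#   mysql -e "UPDATE performance_schema.setup_consumers
#             SET ENABLED='YES' WHERE NAME='events_statements_history'"
# Persistent across restarts, via an example my.cnf fragment:
cnf=./perfschema.cnf.example
cat > "$cnf" <<'EOF'
[mysqld]
performance_schema=ON
performance-schema-consumer-events-statements-history=ON
EOF
grep -c 'events-statements-history=ON' "$cnf"
```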

Affects Version/s:  Documentation Bug

Fixed version/s: 2.10.0

 

PMM-5823: pmm-server log download and the version API fail with a timeout

This occurs at irregular intervals and only affects PMM Docker installations where the pmm-server container has no external internet access. The issue is only visible for a while (around 5-10 minutes) after starting pmm-server; after that, it no longer appears.

Affects Version/s:  2.x  [Tested/Reported version 2.2]

Fixed version: 2.12.0

 

Summary

We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, How to Report Bugs, Improvements, New Feature Requests for Percona Products.

For the most up-to-date information, be sure to follow us on Twitter, LinkedIn, and Facebook.

Quick References:

Percona JIRA  

MySQL Bug Report

Report a Bug in a Percona Product

___

About Percona:

As the only provider of distributions for all three of the most popular open source databases—PostgreSQL, MySQL, and MongoDB—Percona provides expertise, software, support, and services no matter the technology.

Whether it’s enabling developers or DBAs to realize value faster with tools, advice, and guidance, or making sure applications can scale and handle peak loads, Percona is here to help.

Percona is committed to being open source and preventing vendor lock-in. Percona contributes all changes to the upstream community for possible inclusion in future product releases.

Oct 29, 2020

Donut launches Watercooler, an easy way to socialize online with coworkers

If you miss hanging out with your coworkers but don’t want to spend a single second more on Zoom, the latest product from Donut might be the answer.

The startup is launching its new Watercooler product today while also announcing that it has raised $12 million in total funding, led by Accel and with participation from Bloomberg Beta, FirstMark, Slack Fund and various angel investors.

Co-founder and CEO Dan Manian told me that this is actually money that the startup raised before the pandemic, across multiple rounds. It just didn’t announce the fundraising until now.

The startup’s vision, Manian said, is “to create human connection between people at work.” Its first product, Intros, connects teammates who didn’t already know each other via Slack, often with the goal of setting up quick coffee meetings (originally in-person and now virtual).

Donut says it has facilitated 4 million connections across 12,000 companies (including The New York Times, Toyota and InVision), with 1 million of those connections made since the beginning of the pandemic.

However, Manian said customers have been asking Donut to facilitate more frequent interactions, especially since most people aren’t going to have these coffee meetings every day. At the same time, people face the dueling issues of isolation and Zoom fatigue, where “the antidote to one thing makes the other pain worse.” And he suggested that one of the hardest things to recreate while so many of us are working remotely is “all the little microinteractions that you have while you’re working.”

That’s where Watercooler comes in — as the name suggests, it’s designed to replicate the feeling of hanging out at the office watercooler, having brief, low-key conversations. Like Intros, it integrates with Slack, creating a new channel where Watercooler will post fun, conversation-starting questions like “What’s your favorite form of potato?” or “What’s one thing you’ve learned in your career that you wish you knew sooner?”

Talking about these topics shouldn’t take much time, but Manian argued that brief conversations are important: “Those things add up to friendship over time, they’re what actually transform you from coworker to friend.” And those friendships are important for employers too, because they help with team cohesion and retention.

I fully endorse the idea of a Slack watercooler — in fact, the TechCrunch editorial team has a very active “watercooler” channel and I’m always happy to waste time there. My big question was: Why do companies need to purchase a product for this?

Donut Watercooler


Manian said that there were “a bunch of our early adopters” who had tried doing this manually, but it was always in the “past tense”: “It got too hard to come up with the questions, or it took real work coming up with them, and whoever was doing it already had a full-time job.”

With Watercooler, on the other hand, the company can choose from pre-selected topics and questions, set the frequency with which those questions are posted and then everything happens automatically.

Manian also noted that different organizations will focus on different types of questions. There are no divisive political questions included, but while some teams will stick to easy questions about things like potatoes and breakfast foods, others will get into more substantive topics like the ways that people prefer to receive feedback.

And yes, Manian thinks companies will still need these tools after the pandemic is over.

“Work has fundamentally changed,” he said. “I don’t think we’ll put remote work back in the bottle. I think it’s here to stay.”

At the same time, he described the past few months as “training wheels” for a hybrid model, where some team members go back to the office while others continue working remotely. In his view, teams will face an even bigger challenge then: To keep their remote members feeling like they’re connected and in-the-loop.

 

Oct 29, 2020

Intel acquires SigOpt, a specialist in modeling optimization, to boost its AI business

Intel has been doubling down on building chips and related architecture for the next generation of computing, and today it announced an acquisition that will bolster its expertise and work specifically in one area of future technology: artificial intelligence.

The semiconductor giant today announced that it has acquired SigOpt, a startup out of San Francisco that has built an optimization platform that can be used to run modeling and simulations (two key applications of AI tech) in a better way. Anthony described SigOpt as a startup built to “optimize everything” when we covered its Series A; Intel will be integrating the tech into its AI business, specifically its AI Analytics Toolkit, a spokesperson tells me.

Terms of the deal were not disclosed, but SigOpt already counted a number of large enterprises among its customers — “SigOpt’s customer base includes Fortune 500 companies across industries, as well as leading research institutions, universities and consortiums using its products.” The product was still in a closed beta, however. Notably, it had raised money from an interesting group of investors that included In-Q-Tel (the firm associated with the CIA that makes strategic investments), Andreessen Horowitz, and Y Combinator, among others. It had raised less than $10 million.

The plan will be to continue providing services to existing users, and to continue building out the company’s platform — co-founders Scott Clark (CEO) and Patrick Hayes (CTO) and their team are joining Intel.

“We will continue to work with SigOpt’s existing customers and will also integrate the technology into our product road map,” a spokesperson confirmed.

While Intel is working hard on streamlining its business around next-generation chips to better compete against the likes of Nvidia (which is itself growing substantially with the acquisition of ARM) and smaller players like GraphCore, in part by divesting more legacy operations, it sees a strong opportunity in providing services to its customers alongside those chips, and these services specifically will help customers with the compute loads they will be running on those chips.

The focus for Intel has been on the next generation of computing to offset declines in its legacy operations. In the last quarter, even as it beat expectations, Intel reported a 3% decline in its revenues, led by a drop in its data center business. It said that it’s projecting the AI silicon market to be bigger than $25 billion by 2024, with AI silicon in the data center to be greater than $10 billion in that period.

In 2019, Intel reported some $3.8 billion in AI-driven revenue, but it hopes that tools like SigOpt’s will help drive more activity in that business, dovetailing with the push for more AI applications in a wider range of businesses.

“In the new intelligence era, AI is driving the compute needs of the future. It is even more important for software to automatically extract the best compute performance while scaling AI models,” said Raja Koduri, Intel’s chief architect and senior vice president of its discrete graphics division. “SigOpt’s AI software platform and data science talent will augment Intel software, architecture, product offerings and teams, and provide us with valuable customer insights. We welcome the SigOpt team and its customers to the Intel family.”

While there could potentially be a number of applications for SigOpt’s tech, this is a signal of how bigger players will continue to consolidate specific services around their bigger business, giving the small startup a much bigger horizon in terms of potential business (even if it is all tied to customers that only use Intel hardware).

“We are excited to join Intel and supercharge our mission to accelerate and amplify the impact of modelers everywhere. By combining our AI optimization software with Intel’s decades-long leadership in AI computing and machine learning performance, we will be able to unlock entirely new AI capabilities for modelers,” said Clark in a statement.

Oct 29, 2020

More chip industry action as Marvell is acquiring Inphi for $10B

It’s been quite a time for chip industry consolidation, and today Marvell joined the acquisition parade when it announced it is acquiring Inphi in a combination of stock and cash valued at approximately $10 billion, according to the company.

Marvell CEO Matt Murphy believes that by adding Inphi, a chip maker that helps connect internal servers in cloud data centers, and then between data centers, using fiber cabling, it will complement Marvell’s copper-based chip portfolio and give it an edge in developing more future-looking use cases where Inphi shines.

“Our acquisition of Inphi will fuel Marvell’s leadership in the cloud and extend our 5G position over the next decade,” Murphy said in a statement.

In the classic buy versus build calculus, this acquisition uses the company’s cash to push it in new directions without having to build all this new technology. “This highly complementary transaction expands Marvell’s addressable market, strengthens customer base and accelerates Marvell’s leadership in hyperscale cloud data centers and 5G wireless infrastructure,” the company said in a statement.

It’s been a busy time for the chip industry as multiple players are combining, hoping for a similar kind of lift that Marvell sees with this deal. In fact, today’s announcement comes in the same week AMD announced it was acquiring Xilinx for $35 billion and follows Nvidia acquiring ARM for $40 billion last month. The three deals combined come to a whopping $85 billion.

There appears to be prevailing wisdom in the industry that by combining forces and using the power of the checkbook, these companies can do more together than they can by themselves.

Certainly Marvell and Inphi are suggesting that. As they highlighted, their combined enterprise value will be more than $40 billion, with hundreds of millions of dollars in market potential. All of this of course depends on how well these combined entities work together, and we won’t know that for some time.

For what it’s worth, the stock market appears unimpressed with the deal, with Marvell’s stock down more than 7% in early trading — but Inphi stock is being bolstered in a big way by the announcement, up almost 23% this morning so far.

The deal, which has been approved by both companies’ boards, is expected to close by the second half of 2021, subject to shareholder and regulatory approval.

Oct 29, 2020

Redpoint and Sequoia are backing a startup to copyedit your shit code

Code is the lifeblood of the modern world, yet the tooling for some programming environments can be remarkably spartan. While developers have long had access to graphical programming environments (IDEs) and performance profilers and debuggers, advanced products to analyze and improve lines of code have been harder to find.

These days, the most typical tool in the kit is a linter, which scans through code pointing out flaws that might cause issues. For instance, there might be too many spaces on a line, or a particular line might have a well-known ambiguity that could cause bugs that are hard to diagnose and would best be avoided.

What if we could expand the power of linters to do a lot more though? What if programmers had an assistant that could analyze their code and actively point out new security issues, erroneous code, style problems and bad logic?

Static code analysis is a whole interesting branch of computer science, and some of those ideas have trickled into the real-world with tools like semgrep, which was developed at Facebook to add more robust code-checking tools to its developer workflow. Semgrep is an open-source project, and it’s being commercialized through r2c, a startup that wants to bring the power of this tool to the developer masses.

The whole project has found enough traction among developers that Satish Dharmaraj at Redpoint and Jim Goetz at Sequoia teamed up to pour $13 million into the company for its Series A round, and also backed the company in an earlier, unannounced seed round.

The company was founded by three MIT grads — CEO Isaac Evans and Drew Dennison were roommates in college, and they joined up with head of product Luke O’Malley. Across their various experiences, they have worked at Palantir, the intelligence community, and Fortune 500 companies, and when Evans and Dennison were EIRs at Redpoint, they explored ideas based on what they had seen in their wide-ranging coding experiences.

The r2c team, which I assume only writes bug-free code. Image by r2c.

“Facebook, Apple, and Amazon are so far ahead when it comes to what they do at the code level to bake security [into their products compared to] other companies, it’s really not even funny,” Evans explained. The big tech companies have massively scaled their coding infrastructure to ensure uniform coding standards, but few others have access to the talent or technology to be on an equal playing field. Through r2c and semgrep, the founders want to close the gap.

With r2c’s technology, developers can scan their codebases on-demand or enforce a regular code check through their continuous integration platform. The company provides its own template rulesets (“rule packs”) to check for issues like security holes, complicated errors and other potential bugs, and developers and companies can add their own custom rulesets to enforce their own standards. Currently, r2c supports eight programming languages, including JavaScript and Python, and a variety of frameworks, and it is actively working on more compatibility.

One unique focus for r2c has been getting developers onboard with the model. The core technology remains open-sourced. Evans said that “if you actually want something that’s going to get broad developer adoption, it has to be predominantly open source so that developers can actually mess with it and hack on it and see whether or not it’s valuable without having to worry about some kind of super restrictive license.”

Beyond its model, the key has been getting developers to actually use the tool. No one likes bugs, and no developer wants to find more bugs that they have to fix. With semgrep and r2c though, developers can get much more immediate and comprehensive feedback — helping them fix tricky errors before they move on and forget the context of what they were engineering.

“I think one of the coolest things for us is that none of the existing tools in the space have ever been adopted by developers, but for us, it’s about 50/50 developer teams who are getting excited about it versus security teams getting excited about it,” Evans said. Developers hate finding more bugs, but they also hate writing them in the first place. Evans notes that the company’s key metric is the number of bugs found that are actually fixed by developers, indicating that they are offering “good, actionable results” through the product. One area that r2c has explored is actively patching obvious bugs, saving developers time.

Breaches, errors and downtime are a bedrock of software, but it doesn’t have to be that way. With more than a dozen employees and a hefty pool of capital, r2c hopes to improve the reliability of all the experiences we enjoy — and save developers time in the process.

Oct 29, 2020

2020 Percona Survey Results Reveal the Latest Open Source Database Trends


At our recent Percona Live ONLINE, we announced the release of the 2020 Open Source Data Management Software Survey results.

This year, we wanted to build upon the data we collected in 2019 and continue to monitor the open source industry’s pulse. The 2020 survey was perhaps reflective of the wider world environment, as companies indicated a desire to consolidate their database infrastructure and software, avoid risk, and manage costs.

We asked a number of new questions, as well as comparing against the 2019 data. One of the headline findings is that database footprints and Database as a Service (DBaaS) use continue to grow, but many companies are facing unexpected cloud bills.

According to the results, the cost of cloud computing was flagged as a problem for many companies, with 22% of organizations facing additional unplanned costs from their cloud providers.

The share of companies using DBaaS increased to 45% of respondents compared to 40% the previous year. More than half of large companies (56%) indicated that they use DBaaS, and, in-line with the trend of companies looking to mitigate their risk, around half of the total large company category used more than one DBaaS service.

The survey also revealed a number of other interesting data points:

Decisions on open source database choice are centralizing


  • Architects are now the primary group in charge of making database deployment decisions, according to 41% of respondents.
  • The next group is developers with 26%. So, although many developers remain responsible for their organizations’ technology choices, this is starting to consolidate based on the need to manage support and management overheads.

Database footprints continue to grow


  • 82% of organizations saw their database footprint grow more than 5% per year.
  • For 12% of organizations, the volume of data they held doubled or more in twelve months.


Upgrading cloud instances is a common occurrence


  • Around 28% of organizations required two to three cloud instance upgrades, while 21% made more than ten.
  • Just 12% of organizations made no changes to their instances.


Cloud spending and planning gave mixed results

  • Around 22% found that their cloud spending was higher than expected, while 60% found it was about right. For 17%, cloud spend was lower than expected.


What keeps you up at night?

  • This year, we dug deeper into the issues that companies face in their environments and their biggest database management concerns. When asked ‘what keeps you up at night?’, the most significant issues flagged were downtime (59%) and performance (51%), along with concern about fixing emergency issues (35%).
  • Performance problems were by far the most common issue, experienced by nearly three-quarters of respondents (74%), with unplanned downtime impacting 45%.

Given the negative impact that performance issues and downtime have on businesses, this reiterates how crucial it is to ensure that your databases are correctly configured and optimized.

Huge thanks to everyone who gave their input and helped contribute to the success of this project. 

Visit our website to view the full 2020 results and see a comparison with 2019 figures.

Oct
29
2020
--

Sinch announces Conversation API to bring together multiple messaging tools

As communicating with customers across the world grows ever more challenging due to multiple channels, tools and networking providers, companies are looking for a way to simplify it all. Sinch, a company that makes communications APIs, announced a new tool this morning called the Conversation API designed to make it easier to interact with customers across the planet using multiple messaging products.

Sinch chief product officer Vikram Khandpur says that business is being conducted in different messaging channels such as SMS, WhatsApp or Viber depending on location, and businesses have to be able to communicate with their customers wherever they happen to be from a technology and geographic standpoint. What’s more, this need has become even more pronounced during a pandemic when online communication has become paramount.

Khandpur says that up until now, Sinch has concentrated on optimizing the SMS experience for actions like customer acquisition, customer engagement, delivery notifications and customer support. Now the company wants to take that next step into richer omni-channel messaging.

The idea is to provide a set of tools to help marketing teams communicate across these multiple channels by walking them through the processes required by each player. “By writing to our API, what we can provide is that we can get our customers on all these platforms if they are not already on these platforms,” he said.

He uses WhatsApp as an example because it has a very defined process for brands to work with it. “On WhatsApp, there is this concept of creating these pre-approved templates, and they need to be reviewed, curated and finally approved by the WhatsApp team. We help them with that process, and then do the same with other channels and platforms, so we take that complexity away from the brand,” Khandpur explained.

He adds, “By giving us that message once, we take care of all of [the different tools] behind the scenes, transcoding when needed. So if you give us a very rich message with images and videos, WhatsApp may want to render it in a certain way, but then Viber renders that in a different way, and we take care of that.”

Sinch Conv API Transcoding

Examples of transcoding across messaging channels. Image Credits: Sinch
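The transcoding idea above can be sketched generically. This is an illustrative model of the concept only; the channel names, capability limits, and field names below are assumptions, not Sinch’s actual API:

```python
# Illustrative sketch: adapting one rich message to each channel's capabilities.
# Channel limits and payload fields here are assumptions, not the Sinch API.

CHANNEL_CAPS = {
    "whatsapp": {"images": True, "max_text": 4096},
    "viber":    {"images": True, "max_text": 1000},
    "sms":      {"images": False, "max_text": 160},
}

def transcode(message, channel):
    """Produce a channel-specific payload from one rich source message."""
    caps = CHANNEL_CAPS[channel]
    out = {"text": message["text"][:caps["max_text"]]}
    if caps["images"] and message.get("image_url"):
        out["image_url"] = message["image_url"]
    elif message.get("image_url"):
        # Channels without media support fall back to appending a plain link.
        out["text"] += " " + message["image_url"]
    return out
```

The point of the abstraction is that the brand supplies the rich message once, and per-channel rendering differences are handled centrally.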

Marketers can use the Conversation API to define parameters like using WhatsApp first in India, but if the targeted customer doesn’t open WhatsApp, then fall back to SMS.
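That fallback behavior can be modeled in a few lines. Again, this is a conceptual sketch, not Sinch’s API; the region codes, channel priorities, and function signature are illustrative assumptions:

```python
# Illustrative sketch: per-region channel priority with fallback.
# Not the Sinch Conversation API; names and priorities are assumptions.

CHANNEL_PRIORITY = {
    "IN": ["whatsapp", "sms"],  # India: try WhatsApp first, then SMS
    "US": ["sms"],
}

def send_with_fallback(region, message, send_fn):
    """Try each channel in priority order; return the channel that succeeded."""
    for channel in CHANNEL_PRIORITY.get(region, ["sms"]):
        if send_fn(channel, message):  # send_fn returns True on delivery
            return channel
    return None
```

In the real service, delivery status (such as whether the recipient opened WhatsApp) would drive the fallback decision rather than a simple boolean.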

The company has made four acquisitions in the last year, including ACL Mobile in India and SAP’s Interconnect Messaging business, to enhance its presence across the world.

Sinch, which competes with Twilio in the communications API space, may be one of the most successful companies you’ve never heard of, generating more than $500 million in revenue last year while processing over 110 billion messages.

The company launched in Sweden in 2008 and has never taken a dime of venture capital, yet has been profitable since its early days. In fact, it’s publicly traded on Nasdaq Stockholm and will be reporting earnings next week.

Oct
29
2020
--

Microsoft now lets you bring your own data types to Excel

Over the course of the last few years, Microsoft started adding the concept of “data types” to Excel; that is, the ability to pull in geography and real-time stock data from the cloud, for example. Thanks to its partnership with Wolfram, Excel now features more than 100 of these data types that can flow into a spreadsheet. But you won’t be limited to only these pre-built data types for long. Soon, Excel will also let you bring in your own data types.

That means you can have a “customer” data type, for example, that brings rich customer data from a third-party service into Excel. The conduit for this is either Power BI, which now allows Excel to pull in any data you previously published there, or Microsoft’s Power Query feature in Excel, which lets you connect to a wide variety of data sources, including common databases like SQL Server, MySQL and PostgreSQL, as well as third-party services like Teradata and Facebook.

Image Credits: Microsoft

“Up to this point, the Excel grid has been flat… it’s two dimensional,” Microsoft’s head of product for Excel, Brian Jones, writes in today’s announcement. “You can lay out numbers, text, and formulas across the flexible grid, and people have built amazing things with those capabilities. Not all data is flat though and forcing data into that 2D structure has its limits. With Data Types we’ve added a 3rd dimension to what you can build with Excel. Any cell can now contain a rich set of structured data… in just a single cell.”

The promise here is that this will make Excel more flexible, and I’m sure a lot of enterprises will adopt these capabilities. These companies aren’t likely to move to Airtable or similar Excel-like tools anytime soon, but they have data analysis needs that are only increasing now that every company gathers more data than it knows what to do with. This is also a feature that none of Excel’s competitors, including Google Sheets, currently offer.

Image Credits: Microsoft
