Jun
19
2018
--

Chunk Change: InnoDB Buffer Pool Resizing


Since MySQL 5.7.5, we have been able to dynamically resize the InnoDB Buffer Pool. This feature also introduced a new variable, innodb_buffer_pool_chunk_size, which defines the chunk size by which the buffer pool is enlarged or reduced. This variable is not dynamic, and if it is incorrectly configured it can lead to undesired situations.

Let’s first see how innodb_buffer_pool_size, innodb_buffer_pool_instances and innodb_buffer_pool_chunk_size interact:

The buffer pool can hold several instances, and each instance is divided into chunks. There are two limits we need to take into account: the number of instances can go from 1 to 64, and the total number of chunks should not exceed 1000.

So, for a server with 3GB of RAM and a 2GB buffer pool split into 8 instances, with chunks at the default value (128MB), we are going to get 2 chunks per instance.

This means that there will be 16 chunks in total.
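As a quick sanity check, here is that arithmetic in a few lines of Python (an illustrative sketch, not anything InnoDB itself runs):

buffer_pool_size = 2 * 1024**3   # 2GB
instances = 8
chunk_size = 128 * 1024**2       # 128MB, the default

chunks_per_instance = buffer_pool_size // instances // chunk_size
total_chunks = chunks_per_instance * instances
print(chunks_per_instance, total_chunks)  # 2 16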

I’m not going to explain the benefits of having multiple instances; instead, I will focus on resizing operations. Why would you want to resize the buffer pool? Well, there are several reasons, such as:

  • on a virtual server you can add more memory dynamically
  • for a physical server, you might want to reduce database memory usage to make way for other processes
  • on systems where the database size is smaller than available RAM
  • if you expect a huge growth and want to increase the buffer pool on demand

Reducing the buffer pool

Let’s start by reducing the buffer pool. The initial configuration:

+-------------------------------+------------+
| Variable_name                 | Value      |
+-------------------------------+------------+
| innodb_buffer_pool_size       | 2147483648 |
| innodb_buffer_pool_instances  | 8          |
| innodb_buffer_pool_chunk_size | 134217728  |
+-------------------------------+------------+
mysql> set global innodb_buffer_pool_size=1073741824;
Query OK, 0 rows affected (0.00 sec)
mysql> show global variables like 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 1073741824 |
+-------------------------+------------+
1 row in set (0.00 sec)

If we try to decrease it to 1.5GB, the buffer pool will not change and a warning will be shown:

mysql> set global innodb_buffer_pool_size=1610612736;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> show warnings;
+---------+------+---------------------------------------------------------------------------------+
| Level   | Code | Message                                                                         |
+---------+------+---------------------------------------------------------------------------------+
| Warning | 1210 | InnoDB: Cannot resize buffer pool to lesser than chunk size of 134217728 bytes. |
+---------+------+---------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> show global variables like 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 2147483648 |
+-------------------------+------------+
1 row in set (0.01 sec)

Increasing the buffer pool

When we try to increase the value from 1GB to 1.5GB, the buffer pool is resized, but the requested innodb_buffer_pool_size is considered incorrect and, in the warning’s words, “truncated” (although, as we will see, it is actually rounded up):

mysql> set global innodb_buffer_pool_size=1610612736;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> show warnings;
+---------+------+-----------------------------------------------------------------+
| Level   | Code | Message                                                         |
+---------+------+-----------------------------------------------------------------+
| Warning | 1292 | Truncated incorrect innodb_buffer_pool_size value: '1610612736' |
+---------+------+-----------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> show global variables like 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 2147483648 |
+-------------------------+------------+
1 row in set (0.01 sec)

And the final size is 2GB. Yes: you intended to set the value to 1.5GB, and you succeeded in setting it to 2GB. Even if you set it just 1 byte higher, to 1073741825, you will end up with a buffer pool of 2GB:

mysql> set global innodb_buffer_pool_size=1073741825;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> show global variables like 'innodb_buffer_pool_%size' ;
+-------------------------------+------------+
| Variable_name                 | Value      |
+-------------------------------+------------+
| innodb_buffer_pool_chunk_size | 134217728  |
| innodb_buffer_pool_size       | 2147483648 |
+-------------------------------+------------+
2 rows in set (0.01 sec)
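What happens here is that the buffer pool size is always adjusted upward to a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances, which is the documented behavior for online resizing. A minimal Python sketch of that rounding, using the values from this server (8 instances, 128MB chunks, so a 1GB resize unit):

def rounded_pool_size(requested, chunk_size=134217728, instances=8):
    # Resizing happens in units of chunk_size * instances (1GB here);
    # any request is rounded up to the next whole unit.
    unit = chunk_size * instances
    return (requested + unit - 1) // unit * unit

print(rounded_pool_size(1610612736))  # 1.5GB requested -> 2147483648 (2GB)
print(rounded_pool_size(1073741825))  # 1GB + 1 byte    -> 2147483648 (2GB)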

Interesting scenarios

Increasing size in the config file

Let’s suppose that one day you decide to tune some variables on your server, and since you have free memory you decide to increase the buffer pool. In this example, we are going to use a server with innodb_buffer_pool_instances = 16 and a 2GB buffer pool, which will be increased to 2.5GB.

So, we set in the configuration file:

innodb_buffer_pool_size = 2684354560

But after a restart, we find:

mysql> show global variables like 'innodb_buffer_pool_%size' ;
+-------------------------------+------------+
| Variable_name                 | Value      |
+-------------------------------+------------+
| innodb_buffer_pool_chunk_size | 134217728  |
| innodb_buffer_pool_size       | 4294967296 |
+-------------------------------+------------+
2 rows in set (0.00 sec)

And the error log says:

2018-05-02T21:52:43.568054Z 0 [Note] InnoDB: Initializing buffer pool, total size = 4G, instances = 16, chunk size = 128M

So, after we have set innodb_buffer_pool_size in the config file to 2.5GB, the database gives us a 4GB buffer pool, because of the number of instances and the chunk size. What the message doesn’t tell us is the number of chunks, which would be useful for understanding why there is such a huge difference.

Let’s take a look at how that’s calculated.

Increasing instances and chunk size

Changing the number of instances or the chunk size requires a restart, and the buffer pool size acts as an upper limit when the chunk size is set. For instance, with this configuration:

innodb_buffer_pool_size = 2147483648
innodb_buffer_pool_instances = 32
innodb_buffer_pool_chunk_size = 134217728

We get this chunk size:

mysql> show global variables like 'innodb_buffer_pool_%size' ;
+-------------------------------+------------+
| Variable_name                 | Value      |
+-------------------------------+------------+
| innodb_buffer_pool_chunk_size | 67108864   |
| innodb_buffer_pool_size       | 2147483648 |
+-------------------------------+------------+
2 rows in set (0.00 sec)

However, we need to understand how this really works. To get innodb_buffer_pool_chunk_size, it performs this calculation: innodb_buffer_pool_size / innodb_buffer_pool_instances, with the result rounded down to a multiple of 1MB.

In our example, the calculation is 2147483648 / 32 = 67108864; since 67108864 % 1048576 = 0, no rounding is needed. The number of chunks will be one chunk per instance.
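When the division does not land exactly on a 1MB boundary, the result is rounded down. A quick illustrative sketch in Python (the 33-instance configuration here is hypothetical, chosen only to produce a non-aligned result):

MB = 1048576
size, instances, configured_chunk = 2147483648, 33, 134217728

# 2GB / 33 is roughly 62.05MB, smaller than the configured 128MB chunk,
# so the chunk size is recalculated and rounded down to a 1MB multiple.
chunk = configured_chunk
if size // instances < chunk:
    chunk = size // instances // MB * MB
print(chunk)  # 65011712, i.e. 62MB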

When does it consider that it needs to use more chunks per instance? When the innodb_buffer_pool_size configured in the file exceeds the size covered so far (chunks per instance × instances × chunk size) by 1MB or more.

That is why, for instance, if you try to set innodb_buffer_pool_size equal to 1GB + 1MB - 1B, you will get a 1GB buffer pool:

innodb_buffer_pool_size = 1074790399
innodb_buffer_pool_instances = 16
innodb_buffer_pool_chunk_size = 67141632
2018-05-07T09:26:43.328313Z 0 [Note] InnoDB: Initializing buffer pool, total size = 1G, instances = 16, chunk size = 64M

But if you set innodb_buffer_pool_size equal to 1GB + 1MB, you will get a 2GB buffer pool:

innodb_buffer_pool_size = 1074790400
innodb_buffer_pool_instances = 16
innodb_buffer_pool_chunk_size = 67141632
2018-05-07T09:25:48.204032Z 0 [Note] InnoDB: Initializing buffer pool, total size = 2G, instances = 16, chunk size = 64M

This is because it considers that two chunks will fit. We can say that this is how the InnoDB Buffer pool size is calculated:

determine_best_chunk_size{
  if innodb_buffer_pool_size / innodb_buffer_pool_instances < innodb_buffer_pool_chunk_size
  then
    innodb_buffer_pool_chunk_size = roundDownMB(innodb_buffer_pool_size / innodb_buffer_pool_instances)
  fi
}
determine_amount_of_chunks{
  innodb_buffer_amount_chunks_per_instance = roundDown(innodb_buffer_pool_size / innodb_buffer_pool_instances / innodb_buffer_pool_chunk_size)
  if innodb_buffer_pool_size - innodb_buffer_amount_chunks_per_instance * innodb_buffer_pool_instances * innodb_buffer_pool_chunk_size >= 1024*1024
  then
    innodb_buffer_amount_chunks_per_instance++
  fi
}
determine_best_chunk_size
determine_amount_of_chunks
innodb_buffer_pool_size = innodb_buffer_pool_instances * innodb_buffer_pool_chunk_size * innodb_buffer_amount_chunks_per_instance
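Translated into Python (my own rendering of the pseudocode above, not InnoDB source code), we can check the logic against the examples in this post; for the last two cases I use the plain 64MB chunk size (67108864) that the log reports:

MB = 1024 * 1024

def effective_pool_size(size, instances, chunk):
    # Shrink the chunk if it does not fit once per instance,
    # rounding down to a multiple of 1MB.
    if size // instances < chunk:
        chunk = size // instances // MB * MB
    # Start with as many whole chunks per instance as fit...
    chunks_per_instance = size // instances // chunk
    # ...and add one more if 1MB or more of the request is left uncovered.
    if size - chunks_per_instance * instances * chunk >= MB:
        chunks_per_instance += 1
    return instances * chunk * chunks_per_instance

print(effective_pool_size(2684354560, 16, 134217728))  # 2.5GB -> 4294967296 (4GB)
print(effective_pool_size(1074790399, 16, 67108864))   # 1GB+1MB-1B -> 1073741824 (1GB)
print(effective_pool_size(1074790400, 16, 67108864))   # 1GB+1MB    -> 2147483648 (2GB)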

What is the best setting?

In order to analyze the best setting, you need to know that there is an upper limit of 1000 chunks. In our example with 16 instances, we can have no more than 62 chunks per instance.

Another thing to consider is what each chunk represents in percentage terms. Continuing with the example, each chunk per instance represents 1.61%, which means that we can increase or decrease the complete buffer pool size in multiples of this percentage.
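In numbers (a small illustrative calculation):

MAX_CHUNKS = 1000                # upper limit on the total number of chunks
instances = 16

chunk_limit_per_instance = MAX_CHUNKS // instances  # 62
resize_step = 100 / chunk_limit_per_instance        # ~1.61% of the pool per chunk
print(chunk_limit_per_instance, round(resize_step, 2))  # 62 1.61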

From a management point of view, I think you might want to aim for a resize granularity of at least 2% to 5% of the buffer pool. I performed some tests to see the impact of having small chunks and found no issues, but this is something that needs to be thoroughly tested.


Jun
19
2018
--

Riskified prevents fraud on your favorite e-commerce site

Meet Riskified, an Israel-based startup that has raised $64 million in total to fight online fraud. The company has built a service that helps you reject transactions from stolen credit cards and approve more transactions from legitimate clients.

If you live in the U.S., chances are you know someone who noticed a fraudulent charge for an expensive purchase with their credit card — it’s still unclear why most restaurants and bars in the U.S. take your card away instead of bringing the card reader to you.

Online purchases, also known as card-not-present transactions, represent the primary source of fraudulent transactions. That’s why e-commerce websites need to optimize their payment system to detect fraudulent transactions and approve all the others.

Riskified uses machine learning to recognize good orders and improve your bottom line. In fact, Riskified is so confident that it guarantees that you’ll never have to pay chargebacks. As long as a transaction is approved by the product, the startup offers chargeback protection. If Riskified made the wrong call, the company reimburses fraudulent chargebacks.

On the other side of the equation, many e-commerce websites leave money on the table through false declines, rejecting transactions from legitimate customers. It’s hard to quantify this, as some customers end up not ordering anything. Riskified should help you on this front too.

The startup targets big customers — Macy’s, Dyson, Prada, Vestiaire Collective and GOAT are all using it. You can integrate Riskified with popular e-commerce payment systems and solutions, such as Shopify, Magento and Stripe. Riskified also has an API for more sophisticated implementations.

Jun
19
2018
--

Google injects Hire with AI to speed up common tasks

Since Google Hire launched last year it has been trying to make it easier for hiring managers to manage the data and tasks associated with the hiring process, while maybe tweaking LinkedIn while they’re at it. Today the company announced some AI-infused enhancements that they say will help save time and energy spent on manual processes.

“By incorporating Google AI, Hire now reduces repetitive, time-consuming tasks, like scheduling interviews into one-click interactions. This means hiring teams can spend less time with logistics and more time connecting with people,” Google’s Berit Hoffmann, Hire product manager wrote in a blog post announcing the new features.

The first piece involves making it easier and faster to schedule interviews with candidates. This is a multi-step activity that involves scheduling appropriate interviewers, choosing a time and date that works for all parties involved in the interview and scheduling a room in which to conduct the interview. Organizing this kind of logistics tends to eat up a lot of time.

“To streamline this process, Hire now uses AI to automatically suggest interviewers and ideal time slots, reducing interview scheduling to a few clicks,” Hoffmann wrote.


Another common hiring chore is finding keywords in a resume. Hire’s AI now finds these words for a recruiter automatically by analyzing terms in a job description or search query and highlighting relevant words, including synonyms and acronyms, in a resume, saving the time spent searching for them manually.


Finally, another standard part of the hiring process is making phone calls, lots of phone calls. To make this easier, the latest version of Google Hire has a new click-to-call function. Simply click the phone number and it dials automatically, registering the call in a call log for easy recall or auditing.

While Microsoft has LinkedIn and Office 365, Google has G Suite and Google Hire. The strategy behind Hire is to allow hiring personnel to work in the G Suite tools they are immersed in every day and incorporate Hire functionality within those tools.

It’s not unlike CRM tools that integrate with Outlook or GMail because that’s where sales people spend a good deal of their time anyway. The idea is to reduce the time spent switching between tools and make the process a more integrated experience.

While none of these features individually will necessarily wow you, they all use Google AI to simplify common tasks and reduce some of the tedium associated with everyday hiring work.

Jun
19
2018
--

Cisco buys July Systems to bring digital experience to the real world

Customer experience management is about getting to know your customer’s preferences in an online context, but pulling that information into the real world often proves a major challenge for organizations. This results in a huge disconnect when a customer walks into a physical store. This morning, Cisco announced it has bought July Systems, a company that purports to solve that problem.

The companies did not share the acquisition price.

July Systems connects to a building’s WiFi system to understand the customer who just walked in the door, how many times they have shopped at this retailer, their loyalty point score and so forth. This gives the vendor the same kind of understanding about that customer offline as they are used to getting online.

It’s an interesting acquisition for Cisco, taking advantage of some of its strengths as a networking company, given the WiFi component, but also moving in the direction of providing more specific customer experience services.

“Enterprises have an opportunity to take advantage of their in-building Wi-Fi for a broad range of indoor location services. In addition to providing seamless connectivity, Wi-Fi can help enterprises glean deep visitor behavior insights, associate these learnings with their enterprise systems, and drive better customer and employee experiences,” Cisco’s Rob Salvagno wrote in a blog post announcing the acquisition.

As is often the case with these kinds of purchases, the two companies are not strangers. In fact, July Systems lists Cisco as a partner prominently on the company website (along with AWS). Customers include an interesting variety from Intercontinental Hotels Group to the New York Yankees baseball team.

Ray Wang, founder and principal analyst at Constellation Research says the acquisition is also about taking advantage of 5G. “July Systems gives Cisco the ability to expand its localization and customer experience management (CXM) capabilities pre-5g and post-5g. The WiFi analytics improve CXM, but more importantly Cisco also gains a robust developer community,” Wang told TechCrunch.

According to reports, the company had over $67 billion in cash as of February. That leaves plenty of money to make investments like this one, and the company hasn’t been shy about using its cash hoard to buy companies as it tries to transform from a pure hardware company to one built on services.

In fact, they have made 211 acquisitions over the years, according to data on Crunchbase. In recent years they have made some eye-popping ones like plucking AppDynamics for $3.7 billion just before it was going to IPO in 2017 or grabbing Jasper for $1.4 billion in 2016, but the company has also made a host of smaller ones like today’s announcement.

July Systems was founded back in 2001 and raised almost $60 million from a variety of investors including Sequoia Capital, Intel Capital, CRV and Motorola Solutions. Salvagno indicated the July Systems group will become incorporated into Cisco’s enterprise networking group. The deal is expected to be finalized in the first quarter of fiscal 2019.

Jun
19
2018
--

Webinar Weds 20/6: Percona XtraDB Cluster 5.7 Tutorial Part 2


Including setting up Percona XtraDB Cluster with ProxySQL and PMM

Please join Percona’s Architect, Tibi Köröcz, as he presents Percona XtraDB Cluster 5.7 Tutorial Part 2 on Wednesday, June 20th, 2018, at 7:00 am PDT (UTC-7) / 10:00 am EDT (UTC-4).


Never used Percona XtraDB Cluster before? This is the webinar for you! In this 45-minute webinar, we will introduce you to a fully functional Percona XtraDB Cluster.

This webinar will show you how to install Percona XtraDB Cluster with ProxySQL, and monitor it with Percona Monitoring and Management (PMM).

We will also cover topics like bootstrap, IST, SST, certification, common-failure situations and online schema changes.

After this webinar, you will have enough knowledge to set up a working Percona XtraDB Cluster with ProxySQL, in order to meet your high availability requirements.

You can see part one of this series here: Percona XtraDB Cluster 5.7 Tutorial Part 1

Register Now!

Tibor Köröcz

Architect


Tibi joined Percona in 2015 as a Consultant. Before joining Percona, among many other things, he worked at the world’s largest car hire booking service as a Senior Database Engineer. He enjoys trying out and working with the latest technologies and applications that can help MySQL or work alongside it. In his spare time he likes to spend time with his friends, travel around the world and play ultimate frisbee.



Jun
19
2018
--

Brex picks up $57M to build an easy credit card for startups

While Henrique Dubugras and Pedro Franceschi were giving up on their augmented reality startup inside Y Combinator and figuring out what to do next, they saw their batch mates struggling to get even the most basic corporate credit cards — and in a lot of cases, having to guarantee those cards themselves.

Brex, their new startup, aims to fix that by offering startups a way to quickly get what’s effectively a credit card that they can use without having to personally guarantee it or wade through complex processes to finally get a charge card. It’s geared initially towards smaller companies, but Dubugras expects those startups to grow up with it over time, and says Brex is already picking up larger clients. The company, coming out of stealth, said it has raised a total of $57 million from investors including the Y Combinator Continuity fund, Peter Thiel, Max Levchin, Yuri Milner, financial services VC Ribbit Capital and former Visa CEO Carl Pascarella. Y Combinator Continuity fund partner Anu Hariharan and Ribbit Capital managing partner Meyer Malka are joining the company’s board of directors.

“We want to be the best corporate credit card for startups,” Dubugras said. “We don’t require a personal guarantee or deposit, and we can give people a credit limit that’s as much as ten times higher. We can get you a virtual credit card in literally 5 minutes, versus traditional banks, in which you’d have to personally guarantee the card and get a low limit and it takes weeks to approve.”

Startup executives go to Brex’s website, sign up, and then put in their bank account info. Brex then uses that banking information to underwrite the card, the idea being that the service can see that the startup has raised millions of dollars and doesn’t carry the kind of wild liability that banks assume such young companies might have. Once the application is done, companies get a virtual credit card, and they can start divvying up virtual cards with custom limits for their employees. The company says it has attracted more than 1,000 customers and is now opening up globally.

The cards are designed to have better spending limits, and also offer company executives more granular ways to assign those limits to employees. The cards have to be paid off by the end of the month, and the rolling balance for those cards depends on the amount of capital each startup has available. The total limit is a percentage of the company’s available cash balance. So rather than having to go through the process of getting approved for a card, the service can look at how much money is in a startup’s bank account and adjust the spending limit for all those cards accordingly.

Another aspect is automating the whole expense and auditing process. Rather than just going through typical applications like Concur and inputting specifics, card users can send a text message of a receipt through Brex associated with each transaction. Users will just get a text message about a charge — like a cup of coffee for a meeting with a potential business partner — and reply to that text with a message of the receipt to log the whole process. Everything is geared toward simplifying the whole process for startups that have an opportunity to be a bit more nimble and aren’t bogged down with complex layers of enterprise software. Each expense is looped in with a vendor, so executives can see the total amount of spending that’s happening at that scale.

The ability to have those dynamic spending limits is just one example of what Dubugras hopes will make Brex competitive. Rather than slotting into existing systems, Brex has an opportunity to recreate the back-end processes that power those cards, which larger institutions might not be able to do as they’ve hit a massive scale and get less and less agile. Dubugras and Franceschi previously worked on and sold Pagar.me, a Brazilian payments processor, where they saw firsthand the complex nature of working with global financial institutions — and some of the holes they could exploit.

“It’s not like we’re two geniuses that came up with a lot of things that no one came up with,” Dubugras said. “Implementing them with third-party processors is hard, but we didn’t have any of [those integrations], so we can rebuild them from scratch. It’s hard for banks to throw money at a problem and build those tools. We’ve rebuilt the way that these things work internally — they’d have to change fundamentally how the system works.”

While there are plenty of startups looking to quickly offer virtual cards, like Revolut’s disposable virtual card service, Brex aims to be what’s effectively a corporate card — just one that’s easier to get and works basically the same as a normal card. Users still have to pay off the balance at the end of the month, but the idea there is that Brex can de-risk itself by doing that while still offering startups a way to get a card with a high limit to start paying for the services or tools they need to get started.

Jun
18
2018
--

Percona Database Performance Blog Commenting Issues


We are experiencing an intermittent commenting problem on the Percona Database Performance Blog.

At Percona, part of our purpose is to engage with the open source database community. A big part of this engagement is the Percona Database Performance Blog, and the participation of the community in reading and commenting. We appreciate your interest and contributions.

Currently, we are experiencing an intermittent problem with comments on the blog. Some users are unable to post comments and may receive an error message similar to the following:

(Screenshot: Percona blog error message)

We are working on correcting the issue, and apologize if it affects you. If you have a comment that you want to make on a specific blog post and this issue affects you, you can also leave comments on Twitter, Facebook or LinkedIn – all blog posts are socialized on those platforms.

Thanks for your patience.


Jun
18
2018
--

Blockchain startups woo enterprises with a private chain audit trail

By placing all the information about services or complex manufacturing and assembly processes on a private, permissioned blockchain, the idea is that a company can create an “immutable” audit trail of data. Currently, this involves a labor-intensive combination of paper and networked systems. But initial trials with private blockchains over the last couple of years have shown there is potential to reduce the time needed to trace a data trail from several days to minutes.

Indications that this is becoming a hot issue amongst startups arrives today in two pieces of news.

Firstly, London-based “Gospel” (yes, that really is their name…) has raised £1.4m in seed funding from investors led by European-focused LocalGlobe.

The blockchain startup says it has been working with an unnamed “aerospace and defence manufacturer” to develop a proof of concept to improve record keeping for its supply chain. What’s the betting it’s British Aerospace? They aren’t saying.

At any rate, Gospel says it has developed a way of securely distributing data across decentralised infrastructures, offering companies the potential to automate records for complex products that usually require significant manual management. The idea is that it shares only the information it needs to, securely, with other partners in its supply chain, potentially leading to improved efficiency and a lower cost of information recall.

Founded in December 2016 by entrepreneur Ian Smith, Gospel uses a private blockchain that requires users to set up a network of “nodes” within their ecosystem. Each party controls their own node, and all the nodes must agree before any transaction can be processed and put on the blockchain. The node network provides consensus and acts as a mechanism of trust.

Smith says: “For manufacturers and other businesses dealing with critical data there is a problem of trust in data systems, particularly when there is a need to share that data outside the organization. With Gospel technology we can provide an immutable record store so that trust can be fully automated between systems of forward-thinking businesses.”

Prior to this seed round, Gospel was backed by a number of angel investors including Gumtree co-founder Michael Pennington and Vivek Kundra, the Chief Information Officer for the US Government during Barack Obama’s administration.

Secondly, Russia-based startup Waves, which has issued its own cryptocurrency, is getting into the space with the launch of Vostok, a universal blockchain solution for scalable digital infrastructure.

The idea is that public institutions and large enterprises can use the platform to enhance security, data storage, transparency and stability of their systems.

Vostok, which is named after the craft that carried Yuri Gagarin into space, claims to be significantly faster and cheaper than existing blockchain solutions, at 10,000 transactions per second (TPS) and only $0.000001 per transaction. This compares to Bitcoin’s transaction processing capacity of 3-6 TPS at a cost of $0.951 per transaction. Vostok also uses a closed operational node set and Proof-of-Stake.

Sasha Ivanov, CEO and Founder of Vostok and Waves Platform, said: “Vostok is a multi-purpose solution, quite simple, but at the same time non-trivial. It will allow any large organisation to gain the benefits of blockchain without having to create new systems from scratch or retrain their staff.”

Jun
18
2018
--

Webinar Tues 19/6: MySQL: Scaling and High Availability – Production Experience from the Last Decade(s)

Please join Percona’s CEO, Peter Zaitsev, as he presents MySQL: Scaling and High Availability – Production Experience Over the Last Decade(s) on Tuesday, June 19th, 2018 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).


Percona is known as the MySQL performance experts. With over 4,000 customers, we’ve studied, mastered and executed many different ways of scaling applications. Percona can help ensure your application is highly available. Come learn from our playbook, and leave this talk knowing your MySQL database will run faster and be better optimized than before.

Register Now

About Peter Zaitsev, CEO

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 140 professionals in 30 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014, 2015 and 2016.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. He has also been tapped as a contributor to Fortune and DZone, and his recent ebook Practical MySQL Performance Optimization Volume 1 is one of percona.com’s most popular downloads. Peter lives in North Carolina with his wife and two children. In his spare time, Peter enjoys travel and spending time outdoors.


Jun
15
2018
--

Kustomer gets $26M to take on Zendesk with an omnichannel approach to customer support

The CRM industry is now estimated to be worth some $4 billion annually, and today a startup has announced a round of funding that it hopes will help it take on one aspect of that lucrative pie, customer support. Kustomer, a startup out of New York that integrates a number of sources to give support staff a complete picture of a customer when he or she contacts the company, has raised $26 million.

The funding, a series B, was led by Redpoint Ventures (notably, an early investor in Zendesk, which Kustomer cites as a key competitor), with existing investors Canaan Partners, Boldstart Ventures, and Social Leverage also participating.

Cisco Investments was also a part of this round as a strategic investor: Cisco (along with Avaya) is one of the world’s biggest PBX equipment vendors, and customer support is one of the biggest users of this equipment, but the segment is also under pressure as more companies move these services to the cloud (and consider alternative options). Potentially, you could see how Cisco might want to partner with Kustomer to provide more services on top of its existing equipment, and potentially as a standalone service — although for now the two have yet to announce any actual partnerships.

Given that Kustomer has been approached already for potential acquisitions, you could see how the Ciscos of the world might be one possible category of buyers.

Kustomer is not discussing valuation but it has raised a total of $38.5 million. Kustomer’s customers include brands in fashion, e-commerce and other sectors that provide customer support on products on a regular basis, such as Ring, Modsy, Glossier, Smug Mug and more.

When we last wrote about Kustomer, when it raised $12.5 million in 2016, the company’s mission was to effectively turn anyone at a company into a customer service rep — the idea being that some issues are better answered by specific people, and a CRM platform for all employees to engage could help them fill that need.

Today, Brad Birnbaum, the co-founder and CEO, says that this concept has evolved. He said that about half of the business model still involves the idea of everyone being on the platform. For example, an internal sales rep can collaborate with someone in a company’s shipping department — “but the only person who can communicate with the customer is the full-fledged agent,” he said. “That is what the customers wanted so that they could better control the messaging.”

The collaboration, meanwhile, has taken an interesting turn: it’s not just related to employees communicating better to develop a more complete picture of a customer and his/her history with the company; but it’s about a company’s systems integrating better to give a more complete view to the reps. Integrations include data from e-commerce platforms like Shopify and Magento; voice and messaging platforms like Twilio, TalkDesk, Twitter and Facebook Messenger; feedback tools like Nicereply; analytics services like Looker, Snowflake, Jira and Redshift; and Slack.

Birnbaum previously founded and sold Assistly to Salesforce, which turned it into Desk.com — (his co-founder in Kustomer, Jeremy Suriel, was Assistly’s chief architect), and between that and Kustomer he also had a go at building out Airtime, Sean Parker’s social startup. Kustomer, he says, is not only competing against Salesforce but perhaps even more specifically Zendesk, in offering a new take on customer support.

Zendesk, he said, had really figured out how to make customer support ticketing work efficiently, “but they don’t understand the customer at all.”

“We are a much more modern solution in how we see the world,” he continued. “No one does omni-channel customer service properly, where you can see a single threaded conversation speaking to all of a customer’s points.”

(In actual fact, Zendesk has now started to respond: in May the company launched a new omnichannel product called The Suite, which bundles Zendesk Support, Guide, Chat, and Talk to give a unified view of a customer to the support agent. One more reason Kustomer needs to keep expanding what it does.)

Going forward, Kustomer will be using the funding to expand its platform with more capabilities, including some of its own automations and insights (rather than those provided by way of integrations). This will also see the company expand into other kinds of services adjacent to taking inbound customer requests, such as reaching out to customers, potentially to sell to them. “We plan to go broadly with engagement as an example,” Birnbaum said. “We already know everything about you so if we see you on a website, we can proactively reach out to you and engage you.”

“It is time for disruption in customer support industry, and Kustomer is leading the way,” said Tomasz Tunguz, partner at Redpoint Ventures, in a statement. “Kustomer has had impressive traction to date, and we are confident the world’s best B2C and B2B companies will be able to utilize the platform in order to develop meaningful relationships, experiences, and lifetime value for their customers. This is an exciting and forward-thinking platform for companies as well as their customers.”
