Feb
28
2019
--

Percona XtraDB Cluster 5.6.43-28.32 Is Now Available

Percona is glad to announce the release of Percona XtraDB Cluster 5.6.43-28.32 on February 28, 2019. Binaries are available from the downloads section or from our software repositories.

This release of Percona XtraDB Cluster includes the support of Ubuntu 18.10 (Cosmic Cuttlefish). Percona XtraDB Cluster 5.6.43-28.32 is now the current release, based on the following:

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2388: In some cases, DROP FUNCTION function_name was not replicated.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Feb
28
2019
--

Percona XtraDB Cluster 5.7.25-31.35 Is Now Available

Percona is glad to announce the release of Percona XtraDB Cluster 5.7.25-31.35 on February 28, 2019. Binaries are available from the downloads section or from our software repositories.

This release of Percona XtraDB Cluster includes the support of Ubuntu 18.10 (Cosmic Cuttlefish). Percona XtraDB Cluster 5.7.25-31.35 is now the current release, based on the following:

All Percona software is open-source and free.

Bugs Fixed

  • PXC-2346: mysqld could crash when executing mysqldump --single-transaction while the binary log is disabled. This problem was also reported in PXC-1711, PXC-2371, and PXC-2419.
  • PXC-2388: In some cases, DROP FUNCTION function_name was not replicated.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Feb
28
2019
--

Percona Server for MongoDB 4.0.6-3 Is Now Available

Percona announces the release of Percona Server for MongoDB 4.0.6-3 on February 28, 2019. Download the latest version from the Percona website or the Percona software repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 4.0 Community Edition. It supports MongoDB 4.0 protocols and drivers.

Percona Server for MongoDB extends the functionality of the MongoDB 4.0 Community Edition by including the Percona Memory Engine storage engine, encrypted WiredTiger storage engine, audit logging, SASL authentication, hot backups, and enhanced query profiling. Percona Server for MongoDB requires no changes to MongoDB applications or code.

Release 4.0.6-3 extends the buildInfo command with the psmdbVersion key to report the version of Percona Server for MongoDB. If this key exists, then Percona Server for MongoDB is installed on the server. This key is not available from MongoDB.
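
To make this concrete, here is a minimal check from the command line (the output value is illustrative):

  mongo --quiet --eval 'db.runCommand({ buildInfo: 1 }).psmdbVersion'
  4.0.6-3

On stock MongoDB Community Edition the same expression prints undefined, since the key is absent.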

This release includes all features of MongoDB Community Edition 4.0. Most notable among these are:

Note that the MMAPv1 storage engine is deprecated in MongoDB Community Edition 4.0.

Improvements

  • PSMDB-216: The database command buildInfo provides the psmdbVersion key to report the version of Percona Server for MongoDB. If this key exists, then Percona Server for MongoDB is installed on the server. This key is not available from MongoDB.

The Percona Server for MongoDB 4.0.6-3 release notes are available in the official documentation.

Feb
28
2019
--

MySQL 8.0 Bug 94394, Fixed!

Last week I came across a bug in MySQL 8.0, “absence of mysql.user leads to auto-apply of --skip-grant-tables” (#94394), which would leave MySQL running in an undesirable state. My colleague Sveta Smirnova blogged about the issue and it also caught the interest of Valeriy Kravchuk in Fun with Bugs #80 – On MySQL Bug Reports I am Subscribed to, Part XVI. Thanks for the extra visibility!

Credit is now due to Oracle for the quick response, as it was fixed in less than one week (including a weekend):

Fixed in 8.0.16.

Previously, if the grant tables were corrupted, the MySQL server
wrote a message to the error log but continued as if the
--skip-grant-tables option had been specified. This resulted in the
server operating in an unexpected state unless --skip-grant-tables
had in fact been specified. Now, the server stops after writing a
message to the error log unless started with --skip-grant-tables.
(Starting the server with that option enables you to connect to
perform diagnostic operations.)
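
As a rough sketch of the diagnostic workflow the release note describes (the exact repair steps depend on what is broken in your grant tables), --skip-grant-tables lets you get back into the server:

  # Start mysqld without privilege checks, for repair work only;
  # --skip-networking is a common companion flag so that the
  # wide-open server only accepts local connections
  mysqld --skip-grant-tables --skip-networking &

  # Connect locally, repair or restore the grant tables, then
  # re-enable privilege checking on the running server
  mysql -e "FLUSH PRIVILEGES;"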

I think that this particular bug reflects some of the nice things about the MySQL community (and Open Source in general); anyone can find and report a bug, or make a feature request, to one of the software vendors (MySQL, Percona, or MariaDB) and try to improve the software. Sometimes bugs hang around for a while, either because they are hard to fix, viewed as lower in priority (despite the reporter’s opinion), or perhaps the bug does not have enough public visibility. Then a member of the community notices the bug and takes an interest and soon there is more interest. If you are lucky the bug gets fixed quickly! You can of course also provide a fix for the bug yourself, which may speed up the process with a little luck.

If you have not yet reported a bug, or want to find out if you are reporting them in the right sort of way, then you can take a look at How to create a useful MySQL bug report…and make sure it’s properly processed by Valeriy from FOSDEM 2019.

You can help to find more!

Feb
27
2019
--

Open-source communities fight over telco market

When you think of MWC Barcelona, chances are you’re thinking about the newest smartphones and other mobile gadgets, but that’s only half the story. Actually, it’s probably far less than half the story because the majority of the business that’s done at MWC is enterprise telco business. Not too long ago, that business was all about selling expensive proprietary hardware. Today, it’s about moving all of that into software — and a lot of that software is open source.

It’s maybe no surprise then that this year, the Linux Foundation (LF) has its own booth at MWC. It’s not massive, but it’s big enough to have its own meeting space. The booth is shared by three LF projects: the Cloud Native Computing Foundation (CNCF), Hyperledger and Linux Foundation Networking, the home of foundational projects like ONAP and the Open Platform for NFV (OPNFV) that power many a modern network. And with the advent of 5G, there’s a lot of new market share to grab here.

To discuss the CNCF’s role at the event, I sat down with Dan Kohn, the executive director of the CNCF.

At MWC, the CNCF launched its testbed for comparing the performance of virtual network functions on OpenStack and what the CNCF calls cloud-native network functions, using Kubernetes (with the help of bare-metal host Packet). The project’s results — at least so far — show that the cloud-native container-based stack can handle far more network functions per second than the competing OpenStack code.

“The message that we are sending is that Kubernetes as a universal platform that runs on top of bare metal or any cloud, most of your virtual network functions can be ported over to cloud-native network functions,” Kohn said. “All of your operating support system, all of your business support system software can also run on Kubernetes on the same cluster.”

OpenStack, in case you are not familiar with it, is another massive open-source project that helps enterprises manage their own data center software infrastructure. One of OpenStack’s biggest markets has long been the telco industry. There has always been a bit of friction between the two foundations, especially now that the OpenStack Foundation has opened up its organizations to projects that aren’t directly related to the core OpenStack projects.

I asked Kohn if he is explicitly positioning the CNCF/Kubernetes stack as an OpenStack competitor. “Yes, our view is that people should be running Kubernetes on bare metal and that there’s no need for a middle layer,” he said — and that’s something the CNCF has never stated quite as explicitly before but that was always playing in the background. He also acknowledged that some of this friction stems from the fact that the CNCF and the OpenStack foundation now compete for projects.

The OpenStack Foundation, unsurprisingly, doesn’t agree. “Pitting Kubernetes against OpenStack is extremely counterproductive and ignores the fact that OpenStack is already powering 5G networks, in many cases in combination with Kubernetes,” OpenStack COO Mark Collier told me. “It also reflects a lack of understanding about what OpenStack actually does, by suggesting that it’s simply a virtual machine orchestrator. That description is several years out of date. Moving away from VMs, which makes sense for many workloads, does not mean moving away from OpenStack, which manages bare metal, networking and authentication in these environments through the Ironic, Neutron and Keystone services.”

Similarly, ex-OpenStack Foundation board member (and Mirantis co-founder) Boris Renski told me that “just because containers can replace VMs, this doesn’t mean that Kubernetes replaces OpenStack. Kubernetes’ fundamental design assumes that something else is there that abstracts away low-level infrastructure, and is meant to be an application-aware container scheduler. OpenStack, on the other hand, is specifically designed to abstract away low-level infrastructure constructs like bare metal, storage, etc.”

This overall theme continued with Kohn and the CNCF taking a swipe at Kata Containers, the first project the OpenStack Foundation took on after it opened itself up to other projects. Kata Containers promises to offer a combination of the flexibility of containers with the additional security of traditional virtual machines.

“We’ve got this FUD out there around Kata and saying: telco’s will need to use Kata, a) because of the noisy neighbor problem and b) because of the security,” said Kohn. “First of all, that’s FUD and second, micro-VMs are a really interesting space.”

He believes it’s an interesting space for situations where you are running third-party code (think AWS Lambda running Firecracker) — but telcos don’t typically run that kind of code. He also argues that Kubernetes handles noisy neighbors just fine because you can constrain how many resources each container gets.

It seems both organizations have a fair argument here. On the one hand, Kubernetes may be able to handle some use cases better and provide higher throughput than OpenStack. On the other hand, OpenStack handles plenty of other use cases, too, and this is a very specific use case. What’s clear, though, is that there’s quite a bit of friction here, which is a shame.

Feb
27
2019
--

Understanding How MySQL Collation and Charset Settings Impact Performance

This blog was originally published in February 2019 and was updated in September 2023.

Web applications rely on databases to run the internet, powering everything from e-commerce platforms to social media networks to streaming services. MySQL is one of the most popular database management systems, playing a pivotal role in the functionality and performance of web applications.

In today’s blog, I’ll take a look at MySQL collation and charset settings to shed light on how they impact the performance of web applications and how to use them to effectively communicate with your users.

Understanding Character Sets and Encoding in MySQL

Character sets and encoding in MySQL play a vital role in how data is stored and retrieved in a database. A character set is a collection of characters, such as letters, numbers, and symbols, each with a unique representation that defines how the data is stored and interpreted.

Character encoding refers to the method used to represent characters as binary data for storage and transmission. It specifies how characters are converted into binary code and vice-versa. 

The choice of character set and encoding impacts not only efficiency but also how the data appears to users.

How Character Sets Affect Data Storage and Retrieval

You can specify the character set for each column when you create a table, indicating the set of characters allowed in that column. This affects the type and range of characters that can be inserted into the column.

When data is inserted into the database, it is converted into the specified character set’s binary representation and stored accordingly. When retrieving data, MySQL converts the binary representation back into characters according to the character set and encoding rules. This conversion ensures that the data appears correctly to users and can be processed and displayed as intended.
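
As a minimal sketch (the table and column names here are hypothetical), a per-column character set is declared at table creation time:

  CREATE TABLE customer_notes (
      id INT PRIMARY KEY,
      -- single-byte data such as ISO country codes
      country_code CHAR(2) CHARACTER SET latin1,
      -- multilingual free text, 1 to 4 bytes per character
      note TEXT CHARACTER SET utf8mb4
  );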

An Example Illustrating Character Set Concepts

If you have a MySQL database for a multilingual website, you might use the UTF-8 character set, which supports characters from various languages, including English, Chinese, Arabic, and others. Using UTF-8 encoding, you can store and retrieve data in these languages seamlessly, ensuring that text displays correctly for users worldwide.

However, if you use a character set that doesn’t support specific characters, such as storing Arabic text in a Chinese character set, there would be issues with the display of the data.

MySQL Collation and its Relationship with Character Sets

Collation refers to a set of rules and conventions that dictate how character data is compared and sorted, playing a crucial role in determining the order in which data is retrieved from the database and how various string operations, such as searching and filtering, are performed. 

Collation is closely intertwined with character sets, defining how characters within a specific character set are ordered. To ensure consistent results, it’s important to select compatible collations for your chosen character sets. Incompatibilities between character sets and collations can lead to unexpected sorting and comparison outcomes, which could lead to issues in database operations and application functionality. 
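
A quick illustration (assuming the connection character set is utf8mb4) shows that the collation, not the character set, decides whether two strings compare as equal:

  -- Case-insensitive collation: the strings match
  SELECT 'percona' = 'PERCONA' COLLATE utf8mb4_general_ci;  -- returns 1
  -- Binary collation: the comparison is case-sensitive
  SELECT 'percona' = 'PERCONA' COLLATE utf8mb4_bin;         -- returns 0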

Choosing the Appropriate Character Set

When deciding on the character set for your data, several important considerations should guide your decision-making process. First and foremost, you should take into account the nature of your data and your target audience. Is your data primarily composed of Latin-based text, or do you expect a diverse audience that requires support for various languages? Additionally, it’s crucial to ensure that the character set you select aligns with the encoding used in your web application, ensuring seamless communication between your database and the application layer.

If your application serves a global audience, it’s often wise to opt for a Unicode character set like UTF-8. Unicode provides comprehensive support for various languages and scripts, making it an excellent choice for multilingual applications. However, if your application predominantly serves a single language or region, choosing a character set optimized for that specific context can lead to more efficient data storage and improved overall performance.

Dealing with the complexities of multilingual data may involve not only choosing the right character set but also ensuring that your database design, application code, and collation settings are configured to handle multilingual content effectively. It’s essential to plan for data input, storage, and retrieval in a manner that accommodates working with diverse character sets and languages while maintaining a seamless user experience.

Impact of Charset and Collation on Indexing Strategies

Charset and collation choices in MySQL can have a significant impact on indexing strategies. These choices influence how data is stored, sorted, and compared within the database, which directly affects how indexes function.

When it comes to indexing, one of the key considerations is the length of indexed values — especially for text columns. Different character sets have different storage requirements for characters, with some requiring more bytes to represent certain characters. For example, UTF-8, a widely used character set, can use one to four bytes per character, while Latin1 typically uses one byte per character. This difference in storage size can lead to variations in the size of index entries, which can impact efficiency.
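
This byte-size difference shows up directly in index definitions. As an illustrative sketch (the table is hypothetical, and this assumes InnoDB's 767-byte index key prefix limit that applies to the older COMPACT row format), the same byte budget buys a much shorter prefix under utf8mb4:

  -- latin1 would allow a prefix of up to 767 characters here;
  -- utf8mb4 fits only 191 characters into the same 767 bytes (191 x 4 = 764)
  ALTER TABLE articles ADD INDEX idx_title (title(191));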

Collation determines how string comparisons are executed, which affects how indexes are used in sorting and searching operations. For example, if your application requires case-insensitive searches, selecting a case-insensitive collation can be more efficient than using a case-sensitive one. However, it’s essential to note that different collations can have different performance implications. Some collations may be faster for sorting operations but slower for searching, while others may be optimized for specific languages or uses.

To illustrate, let’s look at a real-world example. Suppose you’re in the process of building a multilingual system that needs to support a diverse array of languages. In this scenario, opting for a UTF-8 character set is a logical choice, as it provides comprehensive language support, and you could employ a collation that facilitates case-insensitive searches, enhancing user-friendly information retrieval.

Alternatively, if you find yourself developing an application where space constraints are a concern, you may opt for a more space-efficient character set like Latin1. Here, it becomes crucial to select a collation that strikes a balance between sorting and searching performance, ensuring efficient data handling.

Ultimately, the influence of character sets and collations on your indexing strategies should meet the unique requirements and priorities of your application, ensuring optimal database performance and a positive user experience.

Charset and Collation Effects on Query Execution

Choosing the correct charset and collation settings in a MySQL database can significantly impact query execution time and overall database performance. When not appropriately configured, they can become bottlenecks in query execution. For example, if your database uses a character set that doesn’t match the character set of the data sent in a query, MySQL may need to perform character set conversions on the fly, resulting in slower query execution times.
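
A reasonable first diagnostic step (these are standard MySQL statements) is to confirm what the server and session have actually negotiated, so implicit conversions can be spotted:

  SHOW VARIABLES LIKE 'character_set%';
  SHOW VARIABLES LIKE 'collation%';

If character_set_client or character_set_connection disagrees with the character sets of the columns your queries touch, conversions may be happening on every statement.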

In addition, collation settings can affect string comparisons, which are common in queries involving WHERE clauses or JOIN operations. Using an inefficient collation can lead to suboptimal query performance. For instance, if you have a case-insensitive collation but most of your queries are case-sensitive, it slows down the query.

To demonstrate the importance of optimizing charset and collation settings, here’s an example. Say you have a web application with a user database using a UTF-8 character set and a case-insensitive collation, and the application queries user data to validate login credentials. By switching to a UTF-8 character set with a case-sensitive collation, user authentication queries can be significantly faster, as they no longer require case-insensitive comparisons. This optimization improves query execution times and provides a much better user experience.
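
A minimal sketch of that switch (the users table and column are hypothetical; utf8mb4_bin is one case-sensitive collation available in both 5.7 and 8.0):

  ALTER TABLE users
      MODIFY username VARCHAR(64)
      CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;

After this change, WHERE username = '...' comparisons are byte-wise, so the server no longer pays for case-folding rules on every lookup.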

Testing Read-Only CPU Intensive Workloads

Following my post MySQL 8 is not always faster than MySQL 5.7, this time I decided to test a very simple read-only CPU-intensive workload where all data fits in memory. In this workload there are NO IO operations, only memory and CPU operations.

My Testing Setup

Environment specification

  • Release | Ubuntu 18.04 LTS (bionic)
  • Kernel | 4.15.0-20-generic
  • Processors | physical = 2, cores = 28, virtual = 56, hyperthreading = yes
  • Models | 56x Intel(R) Xeon(R) Gold 5120 CPU @ 2.20GHz
  • Memory Total | 376.6G
  • Provider | packet.net x2.xlarge.x86 instance

I will test two workloads, sysbench oltp_read_only and oltp_point_select, varying the number of threads:

sysbench oltp_read_only --mysql-ssl=off --report-interval=1 --time=300 --threads=$i --tables=10 --table-size=10000000 --mysql-user=root run

sysbench oltp_point_select --mysql-ssl=off --report-interval=1 --time=300 --threads=$i --tables=10 --table-size=10000000 --mysql-user=root run
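
For completeness (standard sysbench usage, not shown above), the test tables would be created beforehand with a matching prepare stage:

sysbench oltp_read_only --tables=10 --table-size=10000000 --mysql-user=root prepare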

The results for OLTP read-only (latin1 character set):

threads   MySQL 5.7.25 throughput   MySQL 8.0.15 throughput   ratio (5.7/8.0)
1         1241.18                   1114.4                    1.11
4         4578.18                   4106.69                   1.11
16        15763.64                  14303.54                  1.10
24        21384.57                  19472.89                  1.10
32        25081.17                  22897.04                  1.10
48        32363.27                  29600.26                  1.09
64        39629.09                  35585.88                  1.11
128       38448.23                  34718.42                  1.11
256       36306.44                  32798.12                  1.11

The results for point_select (latin1 character set):

threads   MySQL 5.7.25 throughput   MySQL 8.0.15 throughput   ratio (5.7/8.0)
1         31672.52                  28344.25                  1.12
4         110650.7                  98296.46                  1.13
16        390165.41                 347026.49                 1.12
24        534454.55                 474024.56                 1.13
32        620402.74                 554524.73                 1.12
48        806367.3                  718350.87                 1.12
64        1120586.03                972366.59                 1.15
128       1108638.47                960015.17                 1.15
256       1038166.63                891470.11                 1.16

We can see that in the OLTP read-only workload, MySQL 8.0.15 is slower by 10%, and for the point_select workload MySQL 8.0.15 is slower by 12-16%.

Although the difference is not dramatic, it is consistent enough to show that MySQL 8.0.15 does not perform as well as MySQL 5.7.25 in the variety of workloads I am testing.

However, it appears that the dynamic of the results will change if we use the utf8mb4 character set instead of latin1.

Let’s compare MySQL 5.7.25 latin1 vs. utf8mb4, as utf8mb4 is now the default CHARSET in MySQL 8.0.

But before we do that, let’s also take a look at COLLATION.

For utf8mb4, MySQL 5.7.25 uses the default collation utf8mb4_general_ci. However, I have read that to get proper sorting and comparison for Eastern European languages, you may want to use the utf8mb4_unicode_ci collation. For MySQL 8.0, the default collation is utf8mb4_0900_ai_ci.

So let’s compare latin1 vs. utf8mb4 (with the default collation) for each version. First, MySQL 5.7:

Threads   utf8mb4_general_ci   latin1     ratio (latin1/utf8mb4)
4         2957.99              4578.18    1.55
24        13792.55             21384.57   1.55
64        24516.99             39629.09   1.62
128       23977.07             38448.23   1.60

So here we can see that utf8mb4 in MySQL 5.7 is much slower than latin1 (by 55-62%).

And the same for MySQL 8.0.15:

Threads   utf8mb4_0900_ai_ci (default)   latin1     ratio (latin1/utf8mb4)
4         3968.88                        4106.69    1.03
24        18446.19                       19472.89   1.06
64        32776.35                       35585.88   1.09
128       31301.75                       34718.42   1.11

For MySQL 8.0, the performance hit from utf8mb4 is much lower (up to 11%).

Now let’s compare all collations for utf8mb4.

For MySQL 5.7 (utf8mb4):

Threads   utf8mb4_general_ci (default)   utf8mb4_bin   utf8mb4_unicode_ci   utf8mb4_unicode_520_ci
4         2957.99                        3328.8        2157.61              1942.78
24        13792.55                       15857.29      9989.96              9095.17
64        24516.99                       28125.16      16207.26             14768.64
128       23977.07                       27410.94      15970.6              14560.6

If you plan to use utf8mb4_unicode_ci, you will take an even bigger performance hit (compared to utf8mb4_general_ci).

And for MySQL 8.0.15 (utf8mb4):

Threads   utf8mb4_general_ci   utf8mb4_bin   utf8mb4_unicode_ci   utf8mb4_0900_ai_ci (default)
4         3461.8               3628.01       3363.7               3968.88
24        16327.45             17136.16      15740.83             18446.19
64        28960.62             30390.29      27242.72             32776.35
128       27967.25             29256.89      26489.83             31301.75

So now let’s compare MySQL 8.0 vs MySQL 5.7 in utf8mb4 with default collations:

Threads   MySQL 8.0 utf8mb4_0900_ai_ci   MySQL 5.7 utf8mb4_general_ci   ratio (8.0/5.7)
4         3968.88                        2957.99                        1.34
24        18446.19                       13792.55                       1.34
64        32776.35                       24516.99                       1.34
128       31301.75                       23977.07                       1.31

So there we are: in this case, MySQL 8.0 is actually faster than MySQL 5.7 by up to 34%.

Conclusions After Testing

There are several observations to make:

  • MySQL 5.7 outperforms MySQL 8.0 with the latin1 charset
  • MySQL 8.0 outperforms MySQL 5.7 by a wide margin when using the utf8mb4 charset
  • Be aware that utf8mb4 is now the default charset in MySQL 8.0, while MySQL 5.7 defaults to latin1
  • When running comparisons between MySQL 8.0 and MySQL 5.7, be aware of which charset you are using, as it may affect the comparison a lot

Best Practices for Charset and Collation Optimization

Optimizing charset and collation settings in MySQL involves careful planning and execution to ensure compatibility, efficiency, and long-term maintenance. It’s recommended to choose character sets and collations that align with your application’s needs, so consider the types of data your database will store and process, as well as the languages and character sets used in your application. For multilingual applications, UTF-8 is a popular choice due to its broad character support. When selecting collations, opt for those that match your application’s case sensitivity requirements to prevent unnecessary overhead.

When making charset and collation settings changes to existing databases, it can be a little trickier. Back up your data before making any modifications, conduct a thorough analysis of the database’s current state and the potential impact of changes, and test the changes in a controlled environment to ensure they won’t disrupt operations.
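
As an illustrative sketch of such a change (the table name is hypothetical; run it on a copy first), an existing table can be converted in place:

  -- Re-encodes the stored data and adjusts column types as needed
  ALTER TABLE orders CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

Keep in mind that CONVERT TO rewrites the table, which can be expensive for large tables.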

Also, you should regularly review and update your charset and collation settings as your application evolves and data requirements change. Robust monitoring solutions, like Percona Monitoring and Management, can track query performance, identify bottlenecks, and ensure that charset and collation settings continue to align with your application’s demands.

Looking to upgrade to MySQL 8.0 or stay on 5.7? Percona can help.

Choosing the appropriate character set and collation for your data involves weighing your data’s nature, your target audience, and your application’s needs. As we’ve seen, optimized charset and collation settings play a real role in query execution times, the user experience, and overall database performance.

Percona offers comprehensive support solutions to ensure a smooth transition to MySQL 8.0 or EOL support for 5.7.

Feb
27
2019
--

Box fourth quarter revenue up 20 percent, but stock down 22 percent after hours

By most common-sense measurements, Box had a pretty good earnings report today, reporting revenue up 20 percent year over year to $163.7 million. That doesn’t sound bad, yet Wall Street was not happy, and the stock got whacked, down more than 22 percent after hours as we went to press. It appears investors were unhappy with the company’s guidance.

Part of the problem, says Alan Pelz-Sharpe, principal analyst at Deep Analysis, a firm that watches the content management space, is that the company failed to hit its projections, combined with weaker guidance; a tough combination, but he points out the future does look bright for the company.

“Box did miss its estimates and got dinged pretty hard today; however, the bigger picture is still of solid growth. As Box moves more and more into the enterprise space, the deal cycle takes longer to close, and I think that has played a large part in this shift. The onus is on Box to close those bigger deals over the next couple of quarters, but if it does, then that will be a real warning shot to the legacy enterprise vendors as Box starts taking a chunk out of their addressable market,” Pelz-Sharpe told TechCrunch.

This fits with what company CEO Aaron Levie was saying. “Wall Street did have higher expectations with our revenue guidance for next year, and I think that’s totally fair, but we’re very focused as a company right now on driving reacceleration in our growth rate and the way that we’re going to do that is by really bringing the full suite of Box’s capabilities to more of our customers,” Levie told TechCrunch.

Holger Mueller, an analyst with Constellation Research says failing to hit guidance is always going to hurt a company with Wall Street. “It’s all about hitting the guidance, and Box struggled with this. At the end of the day, investors don’t care for the reasons, but making the number is what matters. But a booming economy and the push to AI will help Box as enterprises need document automation solutions,” Mueller said.

On the positive side, Levie pointed out that the company achieved a positive non-GAAP growth rate for the first time in its 14-year history, with projections for its first full year of non-GAAP profitability in FY20, which it has just kicked off.

The company reported a loss of 14 cents a share for the most recent quarter, but even that was smaller than the loss of 24 cents a share from the previous fiscal year. Revenue, then, seems to be heading generally in the right direction, but Wall Street did not see it that way, flogging the cloud content management company.

Wall Street tends to project future performance, so what a company has done this quarter is not as important to investors, who are apparently not happy with the projections. But Levie pointed out that the opportunity here is huge. “We’re going after a 40-plus billion dollar market, so if you think about the entirety of spend on content management, collaboration, storage infrastructure — as all of that moves to the cloud, we see that as the full market opportunity that we’re going out and serving,” Levie explained.

Pelz-Sharpe also thinks Wall Street could be missing the longer-range picture here. “The move to true enterprise started a couple of years back at Box, but it has taken time to bring on the right partners and infrastructure to deal with these bigger and more complex migrations and implementations,” Pelz-Sharpe explained. Should that happen, Box could begin capturing much larger chunks of that $40 billion addressable cloud content management market, and the numbers could ultimately be much more to investors’ liking. For now, though, they are clearly not happy with what they are seeing.

Feb
27
2019
--

Compass acquires Contactually, a CRM provider to the real estate industry

Compass, the real estate tech platform that is now worth $4.4 billion, has made an acquisition to give its agents a boost when it comes to looking for good leads on properties to sell. It is acquiring Contactually, an AI-based CRM platform designed specifically for the industry, which includes features like linking up a list of homes sold by a brokerage with records of sales in the area and other property indexes to determine which properties might be good targets to tap for future listings.

Contactually had already been powering Compass’s own CRM service that it launched last year, so there is already a degree of integration between the two.

Terms of the deal are not being disclosed. Crunchbase notes that Contactually had raised around $18 million from VCs that included Rally Ventures, Grotech and Point Nine Capital, and it was last valued at around $30 million in 2016, according to PitchBook. From what I understand, the startup had strong penetration in the market, so it’s likely that the price was a bit higher than this previous valuation.

The plan is to bring over all of Contactually’s team of 32 employees, led by Zvi Band, the co-founder and CEO, to integrate the company’s product into Compass’s platform completely. They will report to CTO Joseph Sirosh and head of product Eytan Seidman. It will also mean a bigger operation for Compass in Washington, DC, which is where Contactually had been based.

“The Contactually team has worked for the past 8 years to build a best-in-class CRM that aggregates relationships and automatically documents every touchpoint,” said Band in a statement. “We are proud that our investment into machine learning has resulted in new features like Best Time to Email and other data-driven, follow-up recommendations which help agents be more effective in their day-to-day. After working extensively with the Compass team, it was apparent that joining forces would accelerate our missions of building the future of the industry.”

For the time being, customers who are already using the product — and a large number of real estate brokers and agents in the U.S. already were, at prices that ranged from $59/month to $399/month depending on the level of service — will continue their contracts as before.

I suspect that the longer-term plan, however, will be a little different: you have to wonder whether agents who compete against Compass would be happy to use a service where their data is being processed by it. And for Compass itself, I would suspect that having this tech to itself would give it an edge over the others.

Compass, I understand from sources, is on track to make $2 billion in revenues in 2019 (its 2018 targets were $1 billion on $34 billion in property sales, and it had previously said it would be doubling that this year). Now in 100 cities, it’s come a long way from its founding in 2012 by Ori Allon and Robert Reffkin.

The bigger picture beyond real estate is that, as with many other analog industries, those who are tackling them with tech-first approaches are sweeping up not only existing business, but in many cases helping the whole market to expand. Contactually, as a tool that can help source potential properties for sale that owners hadn’t previously considered putting on the market, could end up serving that very end for Compass.

The focus on using tech to storm into a legacy industry is also coming at an interesting time. As we’ve pointed out before, the housing market is predicted to cool this year, and that will put the squeeze on agents who do not have strong networks of clients and the tools to maximise whatever opportunities there are out there to list and sell properties.

The likes of Opendoor — which appears to be raising money and inching closer to Compass in terms of valuation — is also trying out a different model, which essentially involves becoming a middle part in the chain, buying properties from sellers and selling them on to buyers, to speed up the process and cut out some of the expenses for the end users. That approach underscores the fact that, while the infusion of technology is an inevitable trend, there will be multiple ways of applying that.

This appears to be Compass’s first full acquisition of a tech startup, although it has made partial acqui-hires in the past.

Feb
27
2019
--

Threads emerges from stealth with $10.5M from Sequoia for a new take on enabling work conversations

The rapid rise of Slack has ushered in a new wave of apps, all aiming to solve one challenge: creating a user-friendly platform where coworkers can have productive conversations. Many of these are based around real-time notifications and “instant” messaging, but today a new startup called Threads is coming out of stealth to address the other side of the coin: a platform for asynchronous communication that is less time-sensitive and that creates coherent narratives out of those conversations.

Armed with $10.5 million in funding led by Sequoia, the company is launching a beta of its service today.

Rousseau Kazi, the startup’s CEO, who co-founded Threads with Jon McCord, Mark Rich and Suman Venkataswamy, cut his social teeth working for six years at Facebook (with a resulting number of patents to his name around the mechanics of social networking). He says that the mission of Threads is to make online conversations more inclusive.

“After a certain number of people get involved in an online discussion, conversations just break and messaging becomes chaotic,” he said. (McCord and Rich are also Facebook engineering alums, while Venkataswamy is a BrightRoll alum.)

And if you have ever used Twitter, or even been in a popular channel in Slack, you will understand what he is talking about. When too many people begin to talk, the conversation gets very noisy and it can mean losing the “thread” of what is being discussed, and seeing conversation lurch from one topic to another, often losing track of important information in the process.

There is an argument to be made for whether a platform that was built for real-time information is capable of handling a different kind of cadence. Twitter, as it happens, is trying to figure that out right now. Slack, meanwhile, has itself introduced threaded comments to try to address this too — although the practical application of its own threading feature is not actually very user friendly.

Threads’ answer is to view its purpose as addressing the benefit of “asynchronous” conversation.

To start, those who want to use Threads first register as organizations on the platform. Then, those who are working on a project or in a specific team create a “space” for themselves within that org. You can then start threads within those spaces. And when a problem has been solved or the conversation has come to a conclusion, the last comment gets marked as the conclusion.

The idea is to host conversations around specific topics that can stretch out over hours, days or even longer. Threads doesn’t want to be the place you go for red alerts or urgent requests, but where you go when you have thoughts about a work-related subject and how to tackle it.

These resulting threads, when completed or when in progress, can in turn be looked at as straight conversations, or as annotated narratives.

For now, it’s up to users themselves to annotate what might be important to highlight for readers, although when I asked him, Kazi told me he would like to incorporate more features over time that might use natural language processing to summarize and pull out what might be worth following up on or looking at if you only want to skim a longer conversation. Ditto the ability to search threads: right now it’s all based around keywords, but you can imagine a time when more sophisticated and nuanced searches surface conversations relevant to what you might be looking for.

Indeed, in this initial launch, the focus is all about what you want to say on Threads itself — not lots of bells and whistles, and not trying to compete against the likes of Slack, or Workplace (Facebook’s effort in this space), or Yammer or Teams from Microsoft, or any of the others in the messaging mix.

There are no integrations with other programs to bring data into Threads from other places, but there is a Slack integration in the other direction: you can create an alert there so that you know when someone has updated a Thread.

“We don’t view ourselves as a competitor to Slack,” Kazi said. “Slack is great for transactional conversation but for asynchronous chats, we thought there was a need for this in the market. We wanted something to address that.”

It may not be a stated competitor, but Threads actually has something in common with Slack: the latter launched with the purpose of enabling a certain kind of conversation between co-workers in a way that was easier to consume and engage with than email.

You could argue that Threads has the same intention: email chains, especially those with multiple parties, can also be hard to follow and are in any case often very messy to look at: something that the conversations in Threads also attempt to clear up.

But email is not the only kind of conversation medium that Threads thinks it can replace.

“With in-person meetings there is a constant tension between keeping the room small for efficiency and including more people for transparency,” said Sequoia partner Mike Vernal in a statement. “When we first started chatting with the team about what is now Threads, we saw an opportunity to get rid of this false dichotomy by making decision-making both more efficient and more inclusive. We’re thrilled to be partnering with Threads to make work more inclusive.” Others in the round include Eventbrite CEO Julia Hartz, GV’s Jessica Verrilli, Minted CEO Mariam Naficy, and TaskRabbit CEO Stacy Brown-Philpot.

The startup was actually formed in 2017, and for months now it has been running a closed, private version of the service to test it out with a small number of users. So far, the company sizes have ranged between 5 and 60 employees, Kazi tells me.

“By using Threads as our primary communications platform, we’ve seen incredible progress streamlining our operations,” said one of the testers, Perfect Keto & Equip Foods Founder and CEO, Anthony Gustin. “Internal meetings have reduced by at least 80 percent, we’ve seen an increase in participation in discussion and speed of decision making, and noticed an adherence and reinforcement of company culture that we thought was impossible before. Our employees are feeling more ownership and autonomy, with less work and time that needs to be spent — something we didn’t even know was possible before Threads.”

Kazi said that the intention is ultimately to target companies of any size, although it will be worth watching what features it will have to introduce to help handle the noise, and continue to provide coherent discussions, when and if they do start to tackle that end of the market.

Feb
26
2019
--

Talking Drupal #200

Topics

  • Catching up with Jason
  • Discussion with Dries

Resources

Web typography newsletter!

Type Audits

Guests

Jason Pamental – @jpamental https://rwt.io

Dries Buytaert – @Dries https://dri.es

Hosts

Stephen Cross – www.ParallaxInfoTech.com @stephencross

John Picozzi – www.oomphinc.com @johnpicozzi

Nic Laflin – www.nLighteneddevelopment.com @nicxvan

 
