Feb
19
2019
--

Google acquires cloud migration platform Alooma

Google today announced its intention to acquire Alooma, a company that allows enterprises to combine all of their data sources into services like Google’s BigQuery, Amazon’s Redshift, Snowflake and Azure. The promise of Alooma is that it handles the data pipelines and manages them for its users. In addition to this data integration service, though, Alooma also helps with migrating to the cloud, cleaning up this data and then using it for AI and machine learning use cases.

“Here at Google Cloud, we’re committed to helping enterprise customers easily and securely migrate their data to our platform,” Google VP of engineering Amit Ganesh and Google Cloud Platform director of product management Dominic Preuss write today. “The addition of Alooma, subject to closing conditions, is a natural fit that allows us to offer customers a streamlined, automated migration experience to Google Cloud, and give them access to our full range of database services, from managed open source database offerings to solutions like Cloud Spanner and Cloud Bigtable.”

Before the acquisition, Alooma had raised about $15 million, including an $11.2 million Series A round led by Lightspeed Venture Partners and Sequoia Capital in early 2016. The two companies did not disclose the price of the acquisition, but chances are we are talking about a modest price, given how much Alooma had previously raised.

Neither Google nor Alooma said much about what will happen to the existing products and customers — and whether it will continue to support migrations to Google’s competitors. We’ve reached out to Google and will update this post once we hear more.

Update: Here is Google’s statement about the future of the platform:

For now, it’s business as usual for Alooma and Google Cloud as we await regulatory approvals and complete the deal. After close, the team will be joining us in our Tel Aviv and Sunnyvale offices, and we will be leveraging the Alooma technology and team to provide our Google Cloud customers with a great data migration service in the future.

Regarding supporting competitors, yes, the existing Alooma product will continue to support other cloud providers. We will only be accepting new customers that are migrating data to Google Cloud Platform, but existing customers will continue to have access to other cloud providers.

So going forward, Alooma will not accept any new customers who want to migrate data to any competitors. That’s not necessarily surprising, and it’s good to see that Google will continue to support Alooma’s existing users. Those who use Alooma in combination with AWS, Azure and other non-Google services will likely start looking for other solutions soon, though, as this also means that Google isn’t likely to develop the service for them beyond its current state.

Alooma’s co-founders do stress, though, that “the journey is not over. Alooma has always aimed to provide the simplest and most efficient path toward standardizing enterprise data from every source and transforming it into actionable intelligence,” they write. “Joining Google Cloud will bring us one step closer to delivering a full self-service database migration experience bolstered by the power of their cloud technology, including analytics, security, AI, and machine learning.”

Feb
19
2019
--

Slack off — send videos instead with $11M-funded Loom

If a picture is worth a thousand words, how many emails can you replace with a video? As offices fragment into remote teams, work becomes more visual and social media makes us more comfortable on camera, it’s time for collaboration to go beyond text. That’s the idea behind Loom, a fast-rising startup that equips enterprises with instant video messaging tools. In a click, you can film yourself or narrate a screenshare to get an idea across in a more vivid, personal way. Instead of scheduling a video call, employees can asynchronously discuss projects or give “stand-up” updates without massive disruptions to their workflow.

In the 2.5 years since launch, Loom has signed up 1.1 million users from 18,000 companies. And that was just as a Chrome extension. Today Loom launches its PC and Mac apps that give it a dedicated presence in your digital work space. Whether you’re communicating across the room or across the globe, “Loom is the next best thing to being there,” co-founder Shahed Khan tells me.

Now Loom is ready to spin up bigger sales and product teams thanks to an $11 million Series A led by Kleiner Perkins. The firm’s partner Ilya Fushman, formerly Dropbox’s head of product and corporate development, will join Loom’s board. He’ll shepherd Loom through today’s launch of its $10 per month per user Pro version that offers HD recording, calls-to-action at the end of videos, clip editing, live annotation drawings and analytics to see who actually watched like they’re supposed to.

“We’re ditching the suits and ties and bringing our whole selves to work. We’re emailing and messaging like never before, but though we may be more connected, we’re further apart,” Khan tells me. “We want to make it very easy to bring the humanity back in.”

Loom co-founder Shahed Khan

But back in 2016, Loom was just trying to survive. Khan had worked at Upfront Ventures after a stint as a product designer at website builder Weebly. He and two close friends, Joe Thomas and Vinay Hiremath, started Opentest to let app makers get usability feedback from experts via video. But after six months and going through the NFX accelerator, they were running out of bootstrapped money. That’s when they realized it was the video messaging that could be a business as teams sought to keep in touch with members working from home or remotely.

Together they launched Loom in mid-2016, raising a pre-seed and seed round amounting to $4 million. Part of its secret sauce is that Loom immediately starts uploading bytes of your video while you’re still recording so it’s ready to send the moment you’re finished. That makes sharing your face, voice and screen feel as seamless as firing off a Slack message, but with more emotion and nuance.

“Sales teams use it to close more deals by sending personalized messages to leads. Marketing teams use Loom to walk through internal presentations and social posts. Product teams use Loom to capture bugs, stand ups, etc.,” Khan explains.

Loom has grown to a 16-person team that will expand thanks to the new $11 million Series A from Kleiner, Slack, Cue founder Daniel Gross and actor Jared Leto that brings it to $15 million in funding. They predict the new desktop apps that open Loom to a larger market will see it spread from team to team for both internal collaboration and external discussions from focus groups to customer service.

Loom will have to hope that after becoming popular at a company, managers will pay for the Pro version that shows exactly how long each viewer watched. That could clue them in that they need to be more concise, or that someone is cutting corners on training and cooperation. It’s also a great way to onboard new employees. “Just watch this collection of videos and let us know what you don’t understand.” At $10 per month, though, the same cost as Google’s entire G Suite, Loom could be priced too high.

Next Loom will have to figure out a mobile strategy — something that’s surprisingly absent. Khan imagines users being able to record quick clips from their phones to relay updates from travel and client meetings. Loom also plans to build out voice transcription to add automatic subtitles to videos and even divide clips into thematic sections you can fast-forward between. Loom will have to stay ahead of competitors like Vidyard’s GoVideo and Wistia’s Soapbox that have cropped up since its launch. But Khan says Loom looms largest in the space thanks to customers at Uber, Dropbox, Airbnb, Red Bull and 1,100 employees at HubSpot.

“The overall space of collaboration tools is becoming deeper than just email + docs,” says Fushman, citing Slack, Zoom, Dropbox Paper, Coda, Notion, Intercom, Productboard and Figma. To get things done the fastest, businesses are cobbling together B2B software so they can skip building it in-house and focus on their own product.

No piece of enterprise software has to solve everything. But Loom is dependent on apps like Slack, Google Docs, Convo and Asana. Because it lacks a social or identity layer, you’ll need to send the links to your videos through another service. Loom should really build its own video messaging system into its desktop app. But at least Slack is an investor, and Khan says “they’re trying to be the hub of text-based communication,” and the soon-to-be-public unicorn tells him anything it does in video will focus on real-time interaction.

Still, the biggest threat to Loom is apathy. People already feel overwhelmed with Slack and email, and if recording videos comes off as more of a chore than an efficiency, workers will stick to text. And without the skimmability of an email, you can imagine a big queue of videos piling up that staffers don’t want to watch. But Khan thinks the ubiquity of Instagram Stories is making it seem natural to jump on camera briefly. And the advantage is that you don’t need a bunch of time-wasting pleasantries to ensure no one misinterprets your message as sarcastic or pissed off.

Khan concludes, “We believe instantly sharable video can foster more authentic communication between people at work, and convey complex scenarios and ideas with empathy.”

Feb
19
2019
--

Percona Server for MongoDB 3.4.19-2.17 Is Now Available

Percona Server for MongoDB


Percona announces the release of Percona Server for MongoDB 3.4.19-2.17 on February 19, 2019. Download the latest version from the Percona website or the Percona Software Repositories.

Percona Server for MongoDB 3.4 is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.4 Community Edition. It supports MongoDB 3.4 protocols and drivers.

Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine and MongoRocks storage engines, as well as several enterprise-grade features.

Percona Server for MongoDB requires no changes to MongoDB applications or code. This release is based on MongoDB 3.4.19.

In this release, Percona Server for MongoDB supports the ngram full-text search engine. Thanks to Sunguck Lee (@SunguckLee) for this contribution. To enable the ngram full-text search engine, create an index passing ngram to the default_language parameter:

mongo > db.collection.createIndex({name:"text"}, {default_language: "ngram"})
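As a quick sketch of how the new engine might be exercised end to end (the test database, articles collection and sample document are hypothetical, and a local mongod from this release is assumed):

```shell
# Hypothetical sketch: create an ngram text index, insert a document,
# and run a text search against it from the mongo shell.
mongo test --eval 'db.articles.createIndex({name: "text"}, {default_language: "ngram"})'
mongo test --eval 'db.articles.insert({name: "Percona Server for MongoDB"})'
# ngram tokenization should also match partial words that the default
# full-text engine would miss:
mongo test --eval 'printjson(db.articles.find({$text: {$search: "Percona"}}).toArray())'
```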

New Features

  • PSMDB-250: The ngram full-text search engine has been added to Percona Server for MongoDB. Thanks to Sunguck Lee (@SunguckLee) for this contribution.

Bugs Fixed

  • PSMDB-272: mongos could crash when running the createBackup command.

Other bugs fixed: PSMDB-247

The Percona Server for MongoDB 3.4.19-2.17 release notes are available in the official documentation.

Feb
19
2019
--

GN acquires Altia Systems for $125M to add video to its advanced audio solutions

Some interesting M&A is afoot in the world of hardware and software that’s aiming to improve the quality of audio and video communications over digital networks.

GN Group — the Danish company that broke new ground in mobile when it inked deals first with Apple and then Google to stream audio from their phones directly to smart, connected hearing aids — is expanding from audio to video, and from Europe to Silicon Valley.

Today, the company announced that it would acquire Altia Systems, a startup out of Cupertino that makes a “surround” videoconferencing device and software called the PanaCast (we reviewed it once), designed to replicate the panoramic, immersive experience of vision that we have as humans.

GN is paying $125 million for the startup. For some context, this price represents a decent return: according to PitchBook, Altia was last valued at around $78 million with investors including Intel Capital and others.

Intel’s investment was one of several strategic partnerships that Altia had inked over the years. (Another was with Zoom to provide a new video solution for Uber.)

The Intel partnership, for one, will continue post-acquisition. “Intel invested in Altia Systems to bring an industry leading immersive, Panoramic-4K camera experience to business video collaboration,” said Dave Flanagan, Vice President of Intel Corporation and Senior Managing Director of Intel Capital, in a statement. “Over the past few years, Altia Systems has collaborated with Intel to use AI and to deliver more intelligent conference rooms and business meetings. This helps customers make better decisions, automate workflows and improve business efficiency. We are excited to work with GN to further scale this technology on a global basis.”

We have seen a lot of applications of AI in just about every area of technology, but one of the less talked about, yet very interesting, areas has been how it’s being used to enhance audio on digital networks. Pindrop, as one example, is creating and tracking “audio fingerprints” for security applications, specifically fraud prevention (to authenticate users and to help weed out imposters based not just on the actual voice but on all the other aural cues we may not pick up as humans but that can help build a picture of a caller’s location and so on).

GN, meanwhile, has been building AI-based algorithms to help those who cannot hear as well, or who simply need to hear better, listen to calls on digital networks and make out what’s being said. This not only requires technology to optimise the audio quality, but also algorithms that can tailor that quality to a specific person’s own unique hearing needs.

One of the more obvious applications of services like these is for those who are hard of hearing and use hearing aids (which can be awful or impossible to use with mobile phones); another is in call centers, and this appears to be the area that GN is hoping to address with the Altia acquisition.

GN already offers two products for call centre workers, Jabra and BlueParrot — headsets and speakerphones with their own proprietary software that it claims makes workers more efficient and productive just by making it easier to understand what callers are saying.

Altia will be integrated into that solution to expand it to include videoconferencing around unified communications solutions, creating more natural experiences for those who are not actually in physical rooms together.

“Combining GN Audio’s sound expertise, partner eco-system and global channel access with the video technology from Altia Systems, we will take the experience of conference calls to a completely new level,” said René Svendsen-Tune, President and CEO of GN Audio, in a statement.

What’s notable is that GN is a vertically integrated company, building not just hardware but the software to run on it. The AI engine underpinning some of its software development will now be getting a vast new trove of data fed into it by way of the PanaCast solution: not just in terms of video, but the large amount of audio that will naturally come along with it.

“Combining PanaCast’s immersive, intelligent video with GN Audio’s intelligent audio solutions will enable us to deliver a whole new class of collaboration products for our customers,” said Aurangzeb Khan, President and CEO of Altia Systems, in a statement. “PanaCast’s solutions enable companies to improve meeting participants’ experience, automate workflows, and enhance business efficiency and real estate utilization with data lakes of valid information.”

Given GN’s work with Android and iOS devices, it will be interesting to see how and if these video solutions make their way to those platforms as well, either by way of solutions that work on their phones or perhaps more native integrations down the line.

Regardless of how that develops, what’s clear is that there remains a market not just for basic tools to get work done, but also for technology that improves the quality of those tools, and that’s where GN hopes this deal will resonate.

Feb
19
2019
--

Redis Labs raises a $60M Series E round

Redis Labs, a startup that offers commercial services around the Redis in-memory data store (and which counts Redis creator and lead developer Salvatore Sanfilippo among its employees), today announced that it has raised a $60 million Series E funding round led by private equity firm Francisco Partners.

The firm didn’t participate in any of Redis Labs’ previous rounds, but existing investors Goldman Sachs Private Capital Investing, Bain Capital Ventures, Viola Ventures and Dell Technologies Capital all participated in this round.

In total, Redis Labs has now raised $146 million and the company plans to use the new funding to accelerate its go-to-market strategy and continue to invest in the Redis community and product development.

Current Redis Labs users include the likes of American Express, Staples, Microsoft, Mastercard and Atlassian . In total, the company now has more than 8,500 customers. Because it’s pretty flexible, these customers use the service as a database, cache and message broker, depending on their needs. The company’s flagship product is Redis Enterprise, which extends the open-source Redis platform with additional tools and services for enterprises. The company offers managed cloud services, which give businesses the choice between hosting on public clouds like AWS, GCP and Azure, as well as their private clouds, in addition to traditional software downloads and licenses for self-managed installs.
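Those three roles map to different Redis commands. A minimal sketch with redis-cli (key names are made up for illustration, and a local redis-server on the default port is assumed):

```shell
# Cache: store a session blob with a one-hour expiry
redis-cli SET session:42 '{"user":"alice"}' EX 3600

# Database: persist a record as a hash of fields
redis-cli HSET product:1001 name "widget" price "9.99"

# Message broker: publish to a channel that subscribers listen on
redis-cli PUBLISH orders '{"product":1001,"qty":2}'
```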

Redis Labs CEO Ofer Bengal told me the company isn’t cash-flow positive yet. He also noted that the company didn’t need to raise this round, but that he decided to do so in order to accelerate growth. “In this competitive environment, you have to spend a lot and push hard on product development,” he said.

He stressed that Francisco Partners has a reputation for taking companies forward, and the logical next step for Redis Labs would be an IPO. “We think that we have a very unique opportunity to build a very large company that deserves an IPO,” he said.

Part of this new competitive environment also involves competitors that use other companies’ open-source projects to build their own products without contributing back. Redis Labs was one of the first of a number of open-source companies that decided to offer its newest releases under a new license that still allows developers to modify the code, but that forces competitors that want to essentially resell it to buy a commercial license. Bengal specifically notes AWS in this context. It’s worth noting that this isn’t about the Redis database itself but about the additional modules that Redis Labs built. Redis Enterprise itself is closed-source.

“When we came out with this new license, there were many different views,” he acknowledged. “Some people condemned that. But after the initial noise calmed down — and especially after some other companies came out with a similar concept — the community now understands that the original concept of open source has to be fixed because it isn’t suitable anymore to the modern era where cloud companies use their monopoly power to adopt any successful open source project without contributing anything to it.”

Feb
19
2019
--

How Network Bandwidth Affects MySQL Performance


The network is a major part of a database infrastructure. However, performance benchmarks are often done on a local machine, where the client and the server are co-located – I am guilty of this myself. This is done to simplify the setup and to exclude one more variable (the networking part), but it also means we miss seeing how the network affects performance.

The network is even more important for clustering products like Percona XtraDB Cluster and MySQL Group Replication. Also, we are working on our Percona XtraDB Cluster Operator for Kubernetes and OpenShift, where network performance is critical for overall performance.

In this post, I will look into networking setups. These are simple and trivial, but are a building block towards understanding networking effects for more complex setups.

Setup

I will use two bare-metal servers, connected via a dedicated 10Gb network. I will emulate a 1Gb network by changing the network interface speed with the following command:

ethtool -s eth1 speed 1000 duplex full autoneg off
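To confirm that the link actually renegotiated to the lower speed (interface name eth1 as above), ethtool’s status output can be checked:

```shell
# Verify the forced link speed on the benchmark interface;
# on the emulated link this should report Speed: 1000Mb/s, Duplex: Full
ethtool eth1 | grep -E 'Speed|Duplex'
```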

network test topology

I will run a simple benchmark:

sysbench oltp_read_only --mysql-ssl=on --mysql-host=172.16.0.1 --tables=20 --table-size=10000000 --mysql-user=sbtest --mysql-password=sbtest --threads=$i --time=300 --report-interval=1 --rand-type=pareto run

This is run with the number of threads varied from 1 to 2048. All data fits into memory – innodb_buffer_pool_size is big enough – so the workload is CPU-intensive in memory: there is no IO overhead.
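The sweep over thread counts can be scripted as a simple wrapper around the command above (same host, credentials and table layout; a sysbench prepare step is assumed to have populated the tables first):

```shell
# Run the read-only scenario once per concurrency level
for i in 1 4 16 32 64 96 128 256 512 1024 2048; do
    sysbench oltp_read_only --mysql-ssl=on --mysql-host=172.16.0.1 \
        --tables=20 --table-size=10000000 \
        --mysql-user=sbtest --mysql-password=sbtest \
        --threads=$i --time=300 --report-interval=1 \
        --rand-type=pareto run
done
```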

Operating System: Ubuntu 16.04

Benchmark N1. Network bandwidth

In the first experiment I will compare 1Gb network vs 10Gb network.

1gb vs 10gb network

Threads   1Gb network   10Gb network
1         326.13        394.4
4         1143.36       1544.73
16        2400.19       5647.73
32        2665.61       10256.11
64        2838.47       15762.59
96        2865.22       17626.77
128       2867.46       18525.91
256       2867.47       18529.4
512       2867.27       17901.67
1024      2865.4        16953.76
2048      2761.78       16393.84

Obviously the 1Gb network performance is a bottleneck here, and we can improve our results significantly if we move to the 10Gb network.

To see that the 1Gb network is the bottleneck, we can check the network traffic chart in PMM:

network traffic in PMM

We can see that we achieved 116 MiB/sec (or 928 Mb/sec) in throughput, which is very close to the network bandwidth.
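As a sanity check on the arithmetic, converting bytes to bits is just a factor of eight:

```shell
# 116 MiB/sec measured in PMM, expressed in megabits per second (1 byte = 8 bits)
mib_per_sec=116
echo "$((mib_per_sec * 8)) Mb/sec"   # prints "928 Mb/sec"
```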

But what can we do if our network infrastructure is limited to 1Gb?

Benchmark N2. Protocol compression

There is a feature in the MySQL protocol whereby you can enable compression for the network exchange between client and server. In sysbench, it is switched on with:

--mysql-compression=on
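Outside of sysbench, the same protocol compression can be requested from the standard mysql client, and the session can confirm whether it is active (a sketch, using the host and credentials from the setup above):

```shell
# Connect with protocol compression enabled and check that the
# session actually uses it; a compressed session reports "ON"
mysql --compress -h 172.16.0.1 -u sbtest -psbtest \
      -e "SHOW SESSION STATUS LIKE 'Compression';"
```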

Let’s see how it will affect our results.

1gb network with compression protocol

Threads   1Gb network   1Gb with compression protocol
1         326.13        198.33
4         1143.36       771.59
16        2400.19       2714
32        2665.61       3939.73
64        2838.47       4454.87
96        2865.22       4770.83
128       2867.46       5030.78
256       2867.47       5134.57
512       2867.27       5133.94
1024      2865.4        5129.24
2048      2761.78       5100.46

Here is an interesting result. When we use all available network bandwidth, the protocol compression actually helps to improve the result.

10Gb network with compression protocol

Threads   10Gb       10Gb with compression
1         394.4      216.25
4         1544.73    857.93
16        5647.73    3202.2
32        10256.11   5855.03
64        15762.59   8973.23
96        17626.77   9682.44
128       18525.91   10006.91
256       18529.4    9899.97
512       17901.67   9612.34
1024      16953.76   9270.27
2048      16393.84   9123.84

But this is not the case with the 10Gb network. The CPU resources needed for compression/decompression are a limiting factor, and with compression the throughput actually only reaches half of what we get without compression.

Now let’s talk about protocol encryption, and how using SSL affects our results.

Benchmark N3. Network encryption

1gb network and 1gb with SSL

Threads   1Gb network   1Gb SSL
1         326.13        295.19
4         1143.36       1070
16        2400.19       2351.81
32        2665.61       2630.53
64        2838.47       2822.34
96        2865.22       2837.04
128       2867.46       2837.21
256       2867.47       2837.12
512       2867.27       2836.28
1024      2865.4        1830.11
2048      2761.78       1019.23

10gb network and 10gb with SSL

Threads   10Gb       10Gb SSL
1         394.4      359.8
4         1544.73    1417.93
16        5647.73    5235.1
32        10256.11   9131.34
64        15762.59   8248.6
96        17626.77   7801.6
128       18525.91   7107.31
256       18529.4    4726.5
512       17901.67   3067.55
1024      16953.76   1812.83
2048      16393.84   1013.22

For the 1Gb network, SSL encryption shows some penalty – about 10% for a single thread – but otherwise we hit the bandwidth limit again. We also see some scalability hit with a high number of threads, which is more visible in the 10Gb network case.

With 10Gb, the SSL protocol does not scale after 32 threads. Actually, it appears to be a scalability problem in OpenSSL 1.0, which MySQL currently uses.
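To see which cipher a client connection actually negotiates, and which TLS versions the server allows, the standard mysql client can be used (a sketch with the same host and credentials as the benchmark):

```shell
# Require an encrypted connection and report the negotiated cipher
mysql --ssl-mode=REQUIRED -h 172.16.0.1 -u sbtest -psbtest \
      -e "SHOW SESSION STATUS LIKE 'Ssl_cipher';"

# The TLS versions the server is configured to accept
mysql -h 172.16.0.1 -u sbtest -psbtest \
      -e "SHOW GLOBAL VARIABLES LIKE 'tls_version';"
```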

In our experiments, we saw that OpenSSL 1.1.1 provides much better scalability, but you need to have a special build of MySQL from source code linked to OpenSSL 1.1.1 to achieve this. I don’t show them here, as we do not have production binaries.

Conclusions

  1. Network performance and utilization will affect the general application throughput.
  2. Check if you are hitting network bandwidth limits.
  3. Protocol compression can improve the results if you are limited by network bandwidth, but it can also make things worse if you are not.
  4. SSL encryption has some penalty (~10%) with a low number of threads, but it does not scale for high-concurrency workloads.

Feb
19
2019
--

Senseon raises $6.4M to tackle cybersecurity threats with an AI ‘triangulation’ approach

Darktrace helped pave the way for using artificial intelligence to combat malicious hacking and enterprise security breaches. Now a new U.K. startup founded by an ex-Darktrace executive has raised some funding to take the use of AI in cybersecurity to the next level.

Senseon, which has pioneered a new model that it calls “AI triangulation” — simultaneously applying artificial intelligence algorithms to oversee, monitor and defend an organization’s network appliances, endpoints and “investigator bots” covering multiple microservices — has raised $6.4 million in seed funding.

David Atkinson — the startup’s CEO and founder, who had previously been the commercial director for Darktrace and before that helped pioneer new cybersecurity techniques as an operative at the U.K.’s Ministry of Defence — said that Senseon will use the funding to continue to expand its business both in Europe and the U.S.

The deal was co-led by MMC Ventures and Mark Weatherford, who is chief cybersecurity strategist at vArmour (which itself raised money in recent weeks) and previously Deputy Under Secretary for Cybersecurity, U.S. Department of Homeland Security. Others in the round included Amadeus Capital Partners, Crane Venture Partners and CyLon, a security startup incubator in London.

As Atkinson describes it, triangulation was an analytics concept first introduced by the CIA in the U.S., a method of bringing together multiple vectors of information to unearth inconsistencies in a data set (you can read more on triangulation in this CIA publication). He saw an opportunity to build a platform that took the same kind of approach to enterprise security.

There are a number of companies that are using AI-based techniques to help defend against breaches — in addition to Darktrace, there is Hexadite (a remediation specialist acquired by Microsoft), Amazon is working in the field and many others. In fact I think you’d be hard-pressed to find any IT security company today that doesn’t claim to or actually use AI in its approach.

Atkinson claims, however, that many AI-based solutions — and many other IT security products — take siloed, single-point approaches to defending a network. That is to say, you have network appliance security products, endpoint security, perhaps security for individual microservices and so on.

But while many of these work well, you don’t always get those different services speaking to each other. And that doesn’t reflect the shape that the most sophisticated security breaches are taking today.

As cybersecurity breaches and identified vulnerabilities continue to grow in frequency and scope — with hundreds of millions of individuals’ and organizations’ data potentially exposed in the process, systems disabled, and more — we’re seeing an increasing amount of sophistication on the part of the attackers.

Yes, those malicious actors employ artificial intelligence. But — as described in this 2019 paper on the state of cybersecurity from Symantec — they are also taking advantage of bigger “surface areas” with growing networks of connected objects all up for grabs; and they are tackling new frontiers like infiltrating data in transport and cloud-based systems. (In terms of examples of new frontiers, mobile networks, biometric data, gaming networks, public clouds and new card-skimming techniques are some of the specific areas that Experian calls out.)

Senseon’s antidote has been to build a new platform that “emulates how analysts think,” said Atkinson. Looking at an enterprise’s network appliance, an endpoint and microservices in the cloud, the Senseon platform “has an autonomous conversation” using the source data, before it presents a conclusion, threat, warning or even breach alert to the organization’s security team.

“We have an ability to take observations and compare that to hypothetical scenarios. When we tell you something, it has a rich context,” he said. Single-point alternatives essentially can create “blind spots that hackers manoeuvre around. Relying on single-source intelligence is like tying one hand behind your back.”

After Senseon compiles its data, it sends out alerts to security teams in a remediation service. Interestingly, while the platform’s aim is to identify malicious activity in a network, another consequence of what it’s doing is to help organizations identify “false positives” that are not actually threats, to cut down on time and money that get wasted on investigating those.

“Organisations of all sizes need to get better at keeping pace with emerging threats, but more importantly, identifying the attacks that require intervention,” said Mina Samaan of MMC Ventures in a statement. “Senseon’s technology directly addresses this challenge by using reinforcement learning AI techniques to help over-burdened security teams better understand anomalous behaviour through a single holistic platform.”

Although Senseon is only announcing seed funding today, the company has actually been around since 2017 and already has customers, primarily in the finance and legal industries (it would only give out one customer reference, the law firm of Harbottle & Lewis).

Feb
18
2019
--

Percona Server for MySQL 5.7.25-28 Is Now Available

Percona is glad to announce the release of Percona Server 5.7.25-28 on February 18, 2019. Downloads are available here and from the Percona Software Repositories.

This release is based on MySQL 5.7.25 and includes all the bug fixes in it. Percona Server 5.7.25-28 is now the current GA (Generally Available) release in the 5.7 series.

All software developed by Percona is open-source and free.

In this release, Percona Server introduces the variable binlog_skip_flush_commands. This variable controls whether or not FLUSH commands are written to the binary log. Setting this variable to ON can help avoid problems in replication. For more information, refer to our documentation.
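As a sketch of how the new variable might be used (this assumes it can be set dynamically with sufficient privileges; check the documentation before relying on it):

```shell
# Stop writing FLUSH commands to the binary log, then verify the setting
mysql -u root -e "SET GLOBAL binlog_skip_flush_commands = ON;"
mysql -u root -e "SHOW GLOBAL VARIABLES LIKE 'binlog_skip_flush_commands';"
```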

Note

If you’re currently using Percona Server 5.7, Percona recommends upgrading to this version of 5.7 prior to upgrading to Percona Server 8.0.

Bugs fixed

  • FLUSH commands written to the binary log could cause errors in case of replication. Bug fixed #1827 (upstream #88720).
  • Running LOCK TABLES FOR BACKUP followed by STOP SLAVE SQL_THREAD could block replication, preventing it from being restarted normally. Bug fixed #4758.
  • The ACCESS_DENIED field of the information_schema.user_statistics table was not updated correctly. Bug fixed #3956.
  • MySQL could report that the maximum number of connections was exceeded with too many connections being in the CLOSE_WAIT state. Bug fixed #4716 (upstream #92108).
  • Wrong query results could be received in semi-join subqueries with materialization-scan that allowed inner tables of different semi-join nests to interleave. Bug fixed #4907 (upstream bug #92809).
  • In some cases, the server using the MyRocks storage engine could crash when TTL (Time to Live) was defined on a table. Bug fixed #4911.
  • Running a SELECT statement with the ORDER BY and LIMIT clauses could result in less than optimal performance. Bug fixed #4949 (upstream #92850).
  • There was a typo in mysqld_safe.sh: trottling was replaced with throttling. Bug fixed #240. Thanks to Michael Coburn for the patch.
  • MyRocks could crash while running START TRANSACTION WITH CONSISTENT SNAPSHOT if other transactions were in specific states. Bug fixed #4705.
  • In some cases, mysqld could crash when inserting data into a database the name of which contained special characters (CVE-2018-20324). Bug fixed #5158.
  • MyRocks incorrectly processed transactions in which multiple statements had to be rolled back. Bug fixed #5219.
  • In some cases, the MyRocks storage engine could crash without triggering crash recovery. Bug fixed #5366.
  • When bootstrapped with undo or redo log encryption enabled on very fast storage, the server could fail to start. Bug fixed #4958.

Other bugs fixed: #2455, #4791, #4855, #4996, #5268.

This release also contains fixes for the following CVE issues: CVE-2019-2534, CVE-2019-2529, CVE-2019-2482, CVE-2019-2434.

Find the release notes for Percona Server for MySQL 5.7.25-28 in our online documentation. Report bugs in the Jira bug tracker.

 

Feb
18
2019
--

Percona Server for MongoDB 4.0.5-2 Is Now Available

Percona Server for MongoDB

Percona announces the release of Percona Server for MongoDB 4.0.5-2 on February 18, 2019. Download the latest version from the Percona website or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 4.0 Community Edition. It supports MongoDB 4.0 protocols and drivers.

Percona Server for MongoDB extends Community Edition functionality by including the Percona Memory Engine storage engine, as well as several enterprise-grade features. It also includes the MongoRocks storage engine (now deprecated). Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release includes all features of MongoDB 4.0 Community Edition.

Note that the MMAPv1 storage engine is deprecated in MongoDB 4.0 Community Edition.

In Percona Server for MongoDB 4.0.5-2, data at rest encryption is now generally available (GA). The data at rest encryption feature now covers the temporary files used for external sorting and the rollback files. You can decrypt and examine the contents of the rollback files using the new perconadecrypt command line tool.
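Decrypted rollback files can then be inspected offline. As a sketch (the flag names below are assumptions based on the tool's documented usage pattern; confirm them with perconadecrypt --help, and the paths shown are placeholders):

$ perconadecrypt --cipherMode=AES256-CBC --encryptionKeyFile=/path/to/keyfile \
    --inputPath=rollback/mycoll.0.bson --outputPath=mycoll.decrypted.bson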

In this release, Percona Server for MongoDB supports the ngram full-text search engine. Thanks to Sunguck Lee (@SunguckLee) for this contribution. To enable the ngram full-text search engine, create an index passing ngram to the default_language parameter:

mongo > db.collection.createIndex({name:"text"}, {default_language: "ngram"})
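Once the index is built, it can be queried with the standard $text operator; with ngram tokenization, partial-word matches also work (the collection name and search term below are illustrative):

mongo > db.collection.find({ $text: { $search: "per" } })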

New Features

  • PSMDB-276: The perconadecrypt tool is now available for decrypting encrypted rollback files.
  • PSMDB-250: The ngram full-text search engine has been added to Percona Server for MongoDB. Thanks to Sunguck Lee (@SunguckLee) for this contribution.

Bugs Fixed

  • PSMDB-234: It was possible to use an encryption key file that was not owned by the owner of the mongod process.
  • PSMDB-273: When using data at rest encryption, temporary files for external sorting and rollback files were not encrypted.
  • PSMDB-257: MongoDB could not be started with a group-readable key file owned by root.
  • PSMDB-272: mongos could crash when running the createBackup command.

Other bugs fixed: PSMDB-247.

The Percona Server for MongoDB 4.0.5-2 release notes are available in the official documentation.

Feb
18
2019
--

Deprecation of TLSv1.0 2019-02-28

Ahead of the PCI move to deprecate the use of ‘early TLS’, we previously took steps to disable TLSv1.0.

Unfortunately, at that time we encountered some issues which led us to roll back these changes. This allowed users of operating systems that did not yet support TLSv1.1 or higher to download Percona packages over TLSv1.0.

Since then, we have been tracking our usage statistics for older operating systems that don’t support TLSv1.1 or higher at https://repo.percona.com. We now receive very few legitimate requests for these downloads.

Consequently, we are ending support for TLSv1.0 on all Percona web properties.

While the packages will still be available for download from percona.com, we are unlikely to update them, as the operating systems in question are end-of-life (e.g. RHEL5). Also, in the future you will need to download these packages from a client that supports TLSv1.1 or greater.
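One quick way to check whether a given client will keep working is to attempt a TLSv1.1 handshake with openssl s_client (the -tls1_1 option is available from OpenSSL 1.0.1 onwards; the host shown is repo.percona.com):

$ openssl s_client -connect repo.percona.com:443 -tls1_1

If the handshake completes, the client will continue to work after the change; if the -tls1_1 option is not recognized at all, the installed OpenSSL predates TLSv1.1 and the operating system needs upgrading.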

For example, EL5 will not receive an updated version of OpenSSL that supports TLSv1.1 or greater. PCI has called for the deprecation of ‘early TLS’ standards, so you should upgrade any EL5 installations to EL6 or greater as soon as possible. As noted in this support policy update by Red Hat, EL5 stopped receiving support under Extended Update Support (EUS) in March 2015.

To continue to receive updates for your OS and for any Percona products that you use, you need to update to more recent versions of CentOS, Scientific Linux, or Red Hat Enterprise Linux.


Photo by Kevin Noble on Unsplash
