Jun 30, 2021

Dispense with the chasm? No way!

Jeff Bussgang, a co-founder and general partner at Flybridge Capital, recently wrote an Extra Crunch guest post that argued it is time for a refresh when it comes to the technology adoption life cycle and the chasm. His argument went as follows:

  1. VCs in recent years have drastically underestimated the size of SAMs (serviceable addressable markets) for their startup investments because they were “trained to think only a portion of the SAM is obtainable within any reasonable window of time because of the chasm.”
  2. The chasm is no longer the barrier it once was because businesses have finally understood that software is eating the world.
  3. As a result, the early majority has joined up with the innovators and early adopters to create an expanded early market. Effectively, they have defected from the mainstream market to cross the chasm in the other direction, leaving only the late majority and the laggards on the other side.
  4. That is why we now are seeing multiple instances of very large high-growth markets that appear to have no limit to their upside. There is no chasm to cross until much later in the life cycle, and it isn’t worth much effort to cross it then.

Now, I agree with Jeff that we are seeing remarkable growth in technology adoption at levels that would have astonished investors from prior decades. In particular, I agree with him when he says:

The pandemic helped accelerate a global appreciation that digital innovation was no longer a luxury but a necessity. As such, companies could no longer wait around for new innovations to cross the chasm. Instead, everyone had to embrace change or be exposed to an existential competitive disadvantage.

But this is crossing the chasm! Pragmatic customers are being forced to adopt because they are under duress. It is not that they buy into the vision of software eating the world. It is because their very own lunches are being eaten. The pandemic created a flotilla of chasm-crossings because it unleashed a very real set of existential threats.

The key here is to understand the difference between two buying decision processes, one governed by visionaries and technology enthusiasts (the early adopters and innovators), the other by pragmatists (the early majority). The early group makes their decisions based on their own analyses. They do not look to others for corroborative support. Pragmatists do. Indeed, word-of-mouth endorsements are by far the most impactful input not only about what to buy and when but also from whom.

Jun 30, 2021

How to cut through the promotional haze and select a digital building platform

Everyone from investors to casual LinkedIn observers has more reasons than ever to look at buildings and wonder what’s going on inside. The property industry is known for moving slowly when it comes to adopting new technologies, but novel concepts and products are now entering this market at a dizzying pace.

However, this ever-growing array of smart-building products has made it confusing for professionals who seek to implement digital building platform (DBP) technologies in their spaces, let alone across their entire enterprise. The waters get even murkier when it comes to cloud platforms and their impact on ROI with regard to energy usage and day-to-day operations.

Facility managers, energy professionals and building operators are increasingly hit with daily requests to review the latest platform for managing and operating their buildings. Here are a few tips to help decision-makers cut through the marketing fluff and put DBPs to the test.

The why, how and what

Breaking down technology decisions into bite-sized pieces, starting with fundamental functions, is the most straightforward way to cut through the promotional haze. Ask two simple questions: Who on your team will use this technology and what problem will it solve for them? Answers to these questions will help you maintain your key objectives, making it easier to narrow down the hundreds of options to a handful.

Another way to prioritize problems and solutions when sourcing smart-building technology is to identify your use cases. If you don’t know why you need a technology platform for your smart building, you’ll find it difficult to tell which option is better. Further, once you have chosen one, you’ll be hard put to determine if it has been successful. We find use cases draw the most direct line from why to how and what.

For example, let’s examine the why, how and what questions for a real estate developer planning to construct or modernize a commercial office building:

  • Why will people come? — Our building will be full of amenities and technological touches that will make discerning tenants feel comfortable, safe and part of a warm community of like-minded individuals.
  • How will we do it? — Implement the latest tenant-facing technology offering services and capabilities that are not readily available at home. We will create indoor and outdoor environments that make people feel comfortable and happy.
  • What tools, products and technology will we use?

This last question is often the hardest to answer and is usually left until the last possible moment. For building systems integrators, this is where the real work begins.

Focus on desired outcomes

When various stakeholder groups begin their investigations of the technology, it is crucial to define the outcomes everyone hopes to achieve for each use case. When evaluating specific products, it helps to categorize them at high levels.

Several high-level outcomes, such as digital twin enablement, data normalization and data storage, are expected across multiple categories of systems. However, only an enterprise building management system covers the largest number of expected outcomes. Integration platform as a service, bespoke reports and dashboarding, analytics as a service and energy-optimization platforms each have various enabled and optional outcomes.

The following table breaks down a list of high-level outcomes and aligns them to a category of smart-building platforms available in the market. Expanded definitions of each item are included at the end of this article.

Jun 30, 2021

Slack’s new voice, video tools should fit nicely on Salesforce platform after deal closes

It’s easy to forget, but Salesforce bought Slack at the end of last year for almost $28 billion, a deal that has yet to close. We don’t know exactly when that will happen, but Slack continues to develop its product roadmap, adding new functionality, while it waits to eventually become part of Salesforce.

Just this morning, the company made official some new tools it had been talking about for some time including a new voice tool called Slack Huddles, which is available starting today, along with video messaging and a directory service called Slack Atlas.

These tools enhance the functionality of the platform in ways that should prove useful as it becomes part of Salesforce, whenever that happens. It’s not hard to envision how Huddles or the video tools (or even Slack Atlas, for both internal and external company organizational views) could work once integrated into the Salesforce platform.

Slack CEO Stewart Butterfield says the companies aren’t working together yet because of regulatory limits on communications, but he can readily see how these tools could work in tandem with Salesforce Service Cloud and Sales Cloud, among others, and how the data in Salesforce could start to merge with Slack’s communications capabilities.

“[There’s] this excitement around workflows from the big system of record [in Salesforce] into the communication [in Slack] and having the data show up where the conversations are happening. And I think there’s a lot of potential here for leveraging these indirectly in customer interactions, whether that’s sales, marketing, support or whatever,” he said.

He said that he could also see Salesforce taking advantage of Slack Connect, a capability introduced last year that enables companies to communicate with people outside the company. “We have all this stuff working inside of Slack Connect, and you get all the same benefits that you would get using Huddles to properly start a conversation, solve some problem or use video as a better way of communicating with [customers],” he said.

These announcements seem to fall into two main categories: the future of work and the context of the acquisition. Bret Taylor, Salesforce president and COO, certainly seemed to recognize that when discussing the deal with TechCrunch when it was announced back in December. He sees the two companies directly addressing the changing face of work:

“When we say we really want Slack to be this next generation interface for Customer 360, what we mean is we’re pulling together all these systems. How do you rally your teams around these systems in this digital work-anywhere world that we’re in right now where these teams are distributed and collaboration is more important than ever,” Taylor said.

Brent Leary, founder and principal analyst at CRM Essentials says that there is clearly a future of work angle at play as the two companies come together. “I think moves like [today’s Slack announcements] are in response to where things are trending with respect to the future of work as we all find ourselves spending an increasing amount of time in front of webcams and microphones in our home offices meeting and collaborating with others,” he said.

Huddles is an example of how the company is trying to fix the screen fatigue that comes from too many meetings or from typing out our thoughts. “This kind of ‘audio-first’ capability takes the emphasis off trying to type what we mean in the way we think will get the point across, to just being able to say it without the additional effort to make it look right,” he said.

Leary added, “And not only will it allow people to just speak, but also allows us to get a better understanding of the sentiment and emotion that also comes with speaking to people and not having to guess what the intent/emotion is behind the text in a chat.”

As Karissa Bell pointed out on Engadget, Huddles also works like Discord’s chat feature in a business context, which could have great utility for Salesforce tools when it’s integrated with the Salesforce platform.

While the regulatory machinations grind on, Slack continues to develop its platform and products. It will of course continue to operate as a stand-alone company, even when the mega deal finally closes, but there will certainly be plenty of cross-platform integrations.

Even if executives can’t discuss what those integrations could look like openly, there has to be a lot of excitement at Salesforce and Slack about the possibilities that these new tools bring to the table — and to the future of work in general — whenever the deal crosses the finish line.

Jun 30, 2021

Complex Archival with Percona Toolkit’s pt-archiver

The Problem

I recently worked on a customer engagement where the customer needed to archive a large number of rows from different tables into another server (in this example, for simplicity, I am just archiving the results into a file).

As explained in the blog post “Want to archive tables? Use Percona Toolkit’s pt-archiver”, you can use pt-archiver to purge/archive rows from a table that match any “WHERE” condition. This case was not that easy, though, as the archive/delete condition was complex and involved joining many tables.

The archive conditions involved four tables, related as shown in the delete query below. In this example there are no foreign keys, but the method can also be used with foreign keys by reordering the per-table archive/purge.

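A minimal sketch of such a schema, reconstructed only from the columns referenced in the delete statement below (column types and keys are assumptions), could look like this:

CREATE TABLE table2 (
  id INT PRIMARY KEY
);

CREATE TABLE table4 (
  id   INT PRIMARY KEY,
  cond VARCHAR(32)
);

CREATE TABLE table1 (
  id         INT PRIMARY KEY,
  table2_id  INT,          -- references table2.id
  created_at DATETIME
);

CREATE TABLE table3 (
  id        INT PRIMARY KEY,
  table1_id INT,           -- references table1.id
  table4_id INT            -- references table4.id
);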

And the delete condition is the following:

DELETE table1, table2, table3, table4
FROM table1
INNER JOIN table3 ON table1.id = table3.table1_id
INNER JOIN table2 ON table1.table2_id = table2.id
INNER JOIN table4 ON (table3.table4_id = table4.id AND table4.cond = 'Value1')
WHERE table1.created_at < '2020-01-01 00:00:00';

Whether a row can be archived depends on the existence and state of rows in other tables. Purging/archiving one table at a time is not a possible solution, because once a row has been purged/archived, it is no longer possible to find the other referenced rows that need to be purged/archived together with it.

So, how do we proceed in this case?

The Solution

To tackle the above problem, the best approach is to set up a transient table containing all the sets of row IDs to be purged/archived, i.e.:

mysql> select * from tmp_ids_to_remove ; 
+-----------+-----------+-----------+-----------+
| table1_id | table2_id | table3_id | table4_id |
+-----------+-----------+-----------+-----------+
|         1 |         1 |         1 |         1 |
|         1 |         1 |         2 |         1 |
|         1 |         1 |         3 |         1 |
|         3 |         3 |         5 |         3 |
+-----------+-----------+-----------+-----------+

For the above example, the following rows from each table have to be purged:

  • Table1: ids = {1,3}
  • Table2: ids = {1,3}
  • Table3: ids = {1,2,3,5}
  • Table4: ids = {1,3}
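The transient table itself can be a plain table with one ID column per source table. A minimal definition (name and types are assumptions, with the per-column indexes recommended later in this post) might be:

CREATE TABLE percona.tmp_ids_to_remove (
  table1_id INT NOT NULL,
  table2_id INT NOT NULL,
  table3_id INT NOT NULL,
  table4_id INT NOT NULL,
  KEY idx_table1_id (table1_id),
  KEY idx_table2_id (table2_id),
  KEY idx_table3_id (table3_id),
  KEY idx_table4_id (table4_id)
);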

Then pt-archiver from Percona Toolkit can be used to purge/archive one table at a time, checking that the row to be purged exists in “tmp_ids_to_remove”. The pt-archiver expression would be similar to:

--where 'EXISTS(SELECT tableX_id FROM percona.tmp_ids_to_remove purge_t WHERE id=purge_t.tableX_id)'
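For illustration, a full invocation for table1 might look like the sketch below; the host, credentials, output file, and batch size are placeholders, not values from the original engagement:

pt-archiver \
  --source h=localhost,D=percona,t=table1,u=archiver_user,p=archiver_pass \
  --where 'EXISTS(SELECT table1_id FROM percona.tmp_ids_to_remove purge_t WHERE id=purge_t.table1_id)' \
  --file '/tmp/table1.out' \
  --limit 1000 --commit-each --statistics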

And the query for populating the table should be something similar to INSERT INTO tmp_ids_to_remove ( SELECT <query with the delete condition> ), i.e.:

INSERT INTO percona.tmp_ids_to_remove ( SELECT table1.id, table2.id, table3.id, table4.id
FROM table1
INNER JOIN table3 ON table1.id = table3.table1_id
INNER JOIN table2 ON table1.table2_id = table2.id
INNER JOIN table4 ON (table3.table4_id = table4.id AND table4.cond = 'Value1')
WHERE table1.created_at < '2020-01-01 00:00:00');

Things to consider:

  • Instead of creating one “big” table containing all the rows, multiple smaller tables can be created. For simplicity and easier data view, one big table was used in this example.
  • The above insert might lock a lot of rows, which can impact server performance depending on transaction size and current server load. Either run the query outside business hours or, if that is not possible, use SELECT … INTO OUTFILE and then load the file into the transient table to keep referential integrity; the SELECT part is faster and non-locking.
  • The tmp_ids_to_remove table should have an index on each column, since pt-archiver needs those indexes to quickly check whether a row should be removed.
  • If the volume of rows you need to purge/archive runs to several gigabytes, adjust the “WHERE” condition so that only a few million rows are processed at a time, and work in batches (see the sketch below). Executing one huge transaction (by either populating an overly large tmp_ids_to_remove or purging/archiving all rows at once) will hurt performance.
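As a sketch of that batching approach, the population query could be run repeatedly over narrow created_at ranges; the one-month window below is an arbitrary example:

INSERT INTO percona.tmp_ids_to_remove
SELECT table1.id, table2.id, table3.id, table4.id
FROM table1
INNER JOIN table3 ON table1.id = table3.table1_id
INNER JOIN table2 ON table1.table2_id = table2.id
INNER JOIN table4 ON (table3.table4_id = table4.id AND table4.cond = 'Value1')
WHERE table1.created_at >= '2019-11-01 00:00:00'   -- one month per batch
  AND table1.created_at <  '2019-12-01 00:00:00';  -- still below the 2020-01-01 cutoff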

Note: The above solution aims for data consistency at the cost of performance. If for whatever reason the purge/archive gets stopped halfway through, you will still know which row IDs are meant for purging, since they are kept in the tmp_ids_to_remove table.

On my GitHub repository, you can find an example scenario file and an example script for doing a test archive. The script is a proof of concept (POC) and should be executed only in a test environment.

Instructions for usage are:

  • Download the scripts:
curl https://raw.githubusercontent.com/ctutte/blog_complex_archive/master/setup.sql > setup.sql
curl https://raw.githubusercontent.com/ctutte/blog_complex_archive/master/archiver_script.sh > archiver_script.sh

  • Create the test env:
mysql -u root -p < setup.sql

  • Make the script executable:
chmod a+x archiver_script.sh

  • On archiver_script.sh configure various parameters at the top (USER/PASS/SOURCE_DSN)
  • Finally, execute the script:
./archiver_script.sh

The archived rows are deleted from the database and written to the /tmp/table_name.out file.

Conclusion

Purging/archiving rows under complex conditions, or while trying to keep data consistency, can be hard. The solution above generates an intermediate table and relies on pt-archiver to purge/archive rows in a tidy way; it can be automated to purge/archive millions of rows that would not be possible to process manually.

Note: This example is from a real case scenario but was obfuscated and simplified. It might still seem “unnecessarily complex”, but it was kept that way so that the proposed solution makes sense.

In similar scenarios a much easier/faster solution might be suitable, but at other times, due to business logic or other restrictions, a more complex solution must be implemented.

Jun 30, 2021

MongoDB Multi-Document Transaction Fails When getLastErrorDefaults is Changed

In every new version of MongoDB, there are a lot of changes and newly introduced features. One such change is the introduction of the setDefaultRWConcern command in MongoDB 4.4. This feature caused multi-document transaction writes to fail for one of my customers. In this blog post, we will look into the problem and how to resolve it.

Introduction

If you want to set a default writeConcern for your replica set, to be used when a command does not specify one explicitly, you can do so via the rs.conf().settings.getLastErrorDefaults value. The default is {w: 1, wtimeout: 0}, i.e. it requires an acknowledgment from the PRIMARY member alone, and this has been the behavior in MongoDB for a long time. After upgrading to MongoDB 4.4, the customer faced issues with multi-document transactions: the writes failed, causing application downtime.

Issue

The cluster/replica set-wide writeConcern could be changed via the rs.conf().settings.getLastErrorDefaults value up to MongoDB 4.2. From MongoDB 4.4, however, if you run a multi-document transaction with non-default getLastErrorDefaults values, it will fail with the message below:

 "errmsg" : "writeConcern is not allowed within a multi-statement transaction",

This is because changing the default value of getLastErrorDefaults was deprecated in MongoDB 4.4; the global writeConcern/readConcern should now be changed via the new command, setDefaultRWConcern, introduced in MongoDB 4.4.

Test Case

The default value of getLastErrorDefaults is {w: 1, wtimeout: 0}. Let’s change it to different values – { “w” : “majority”, “wtimeout” : 3600 }:

replset:PRIMARY> cfg = rs.conf()
replset:PRIMARY> cfg.settings.getLastErrorDefaults.w = "majority"
majority
replset:PRIMARY> cfg.settings.getLastErrorDefaults.wtimeout = 3600
3600
replset:PRIMARY> 
replset:PRIMARY> rs.reconfig(cfg)
{
 "ok" : 1,
 "$clusterTime" : {
  "clusterTime" : Timestamp(1624733172, 1),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 },
 "operationTime" : Timestamp(1624733172, 1)
}

replset:PRIMARY> rs.conf().settings.getLastErrorDefaults
{ "w" : "majority", "wtimeout" : 3600 }

When running a transaction against the collection percona.people, the error occurs, complaining that writes cannot be performed with the changed getLastErrorDefaults value:

replset:PRIMARY> session = db.getMongo().startSession()
session { "id" : UUID("3ae288d5-b793-4505-ab5b-5e37d289414a") }
replset:PRIMARY> session.startTransaction()
replset:PRIMARY> session.getDatabase("percona").people.insert([{_id: 1 , name : "George"},{_id: 2, name: "Tom"}])
WriteCommandError({
 "operationTime" : Timestamp(1623235947, 1),
 "ok" : 0,
 "errmsg" : "writeConcern is not allowed within a multi-statement transaction",
 "code" : 72,
 "codeName" : "InvalidOptions",
 "$clusterTime" : {
  "clusterTime" : Timestamp(1623235947, 1),
  "signature" : {
   "hash" : BinData(0,"XPcHTqxG4/LNyaScd/M3ZV6yM3g="),
   "keyId" : NumberLong("6952046137905774595")
  }
 }
})

How to Resolve It

To resolve this issue, revert getLastErrorDefaults to its default value as follows:

replset:PRIMARY> cfg = rs.conf()
replset:PRIMARY> cfg.settings.getLastErrorDefaults.w = 1
1
replset:PRIMARY> cfg.settings.getLastErrorDefaults.wtimeout = 0
0

replset:PRIMARY> rs.reconfig(cfg)
{
 "ok" : 1,
 "$clusterTime" : {
  "clusterTime" : Timestamp(1623236051, 1),
  "signature" : {
   "hash" : BinData(0,"gZwK9B08VTiEUcLq2/1wvxW5RJI="),
   "keyId" : NumberLong("6952046137905774595")
  }
 },
 "operationTime" : Timestamp(1623236051, 1)
}

replset:PRIMARY> rs.conf().settings.getLastErrorDefaults
{ "w" : 1, "wtimeout" : 0 }

Then set the required default writeConcern/readConcern via the setDefaultRWConcern command as follows:

replset:PRIMARY> db.adminCommand({ "setDefaultRWConcern" : 1, "defaultWriteConcern" : { "w" : "majority", "wtimeout": 3600 } })
{
 "defaultWriteConcern" : {
  "w" : "majority",
  "wtimeout" : 3600
 },
 "updateOpTime" : Timestamp(1624786369, 1),
 "updateWallClockTime" : ISODate("2021-06-27T09:32:57.906Z"),
 "localUpdateWallClockTime" : ISODate("2021-06-27T09:32:57.956Z"),
 "ok" : 1,
 "$clusterTime" : {
  "clusterTime" : Timestamp(1624786377, 2),
  "signature" : {
   "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
   "keyId" : NumberLong(0)
  }
 },
 "operationTime" : Timestamp(1624786377, 2)
}

Then write to the collection percona.people within a transaction:

replset:PRIMARY> session2 = db.getMongo().startSession()
session { "id" : UUID("aedf3139-08a4-466d-b1df-d8ced97abd9d") }
replset:PRIMARY> session2.startTransaction()
replset:PRIMARY> session2.getDatabase("percona").people.find()
replset:PRIMARY> session2.getDatabase("percona").people.insert([{_id: 4 , name : "George"},{_id: 5, name: "Tom"}])
BulkWriteResult({
 "writeErrors" : [ ],
 "writeConcernErrors" : [ ],
 "nInserted" : 2,
 "nUpserted" : 0,
 "nMatched" : 0,
 "nModified" : 0,
 "nRemoved" : 0,
 "upserted" : [ ]
})
replset:PRIMARY> session2.getDatabase("percona").people.find()
{ "_id" : 4, "name" : "George" }
{ "_id" : 5, "name" : "Tom" }
replset:PRIMARY> session2.commitTransaction()
replset:PRIMARY>

Check the data with the different sessions to validate the writes:

replset:PRIMARY> use percona
switched to db percona

replset:PRIMARY> db.people.find()
{ "_id" : 4, "name" : "George" }
{ "_id" : 5, "name" : "Tom" }

Note:

Normal (non-transaction) writes are not affected by changing the defaults for getLastErrorDefaults. So if you have an application that doesn’t use transactions, you don’t need to act immediately to remove non-default values in getLastErrorDefaults, though keeping them is not recommended.

How setDefaultRWConcern Works

Here, we will also verify whether the default writeConcern set via setDefaultRWConcern works as expected. For testing, let’s take down one member of the three-node replica set:

replset:PRIMARY> rs.status().members.forEach(function(doc){printjson(doc.name+" - "+doc.stateStr)})
"localhost:37040 - PRIMARY"
"localhost:37041 - SECONDARY"
"localhost:37042 - (not reachable/healthy)"

Then test with an insert command that specifies the writeConcern explicitly. It should fail, as only two members are alive and {w: 3} needs acknowledgment from three members of the replica set:

replset:PRIMARY> rs.conf().settings.getLastErrorDefaults
{ "w" : 1, "wtimeout" : 0 }
replset:PRIMARY> db.people.insert({"_id" : 6, "name" : "F"}, { writeConcern: { w: 3, wtimeout: 50 } })
WriteResult({
	"nInserted" : 1,
	"writeConcernError" : {
		"code" : 64,
		"codeName" : "WriteConcernFailed",
		"errmsg" : "waiting for replication timed out",
		"errInfo" : {
			"wtimeout" : true,
			"writeConcern" : {
				"w" : 3,
				"wtimeout" : 50,
				"provenance" : "clientSupplied"
			}
		}
	}
})

Let’s now test by setting setDefaultRWConcern to { “w” : 3, “wtimeout”: 30 } as follows:

replset:PRIMARY> db.adminCommand({ "setDefaultRWConcern" : 1, "defaultWriteConcern" : { "w" : 3, "wtimeout": 30 } })
{
	"defaultWriteConcern" : {
		"w" : 3,
		"wtimeout" : 30
	},
	"updateOpTime" : Timestamp(1624787659, 1),
	"updateWallClockTime" : ISODate("2021-06-27T09:54:26.332Z"),
	"localUpdateWallClockTime" : ISODate("2021-06-27T09:54:26.332Z"),
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1624787666, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1624787666, 1)
}

Now try the write within a transaction or normally without specifying the writeConcern explicitly:

replset:PRIMARY> db.people.insert({"_id" : 7, "name" : "G"})
WriteResult({
	"nInserted" : 1,
	"writeConcernError" : {
		"code" : 64,
		"codeName" : "WriteConcernFailed",
		"errmsg" : "waiting for replication timed out",
		"errInfo" : {
			"wtimeout" : true,
			"writeConcern" : {
				"w" : 3,
				"wtimeout" : 30,
				"provenance" : "customDefault"
			}
		}
	}
})

With a transaction, the error occurs on commitTransaction() as follows:

replset:PRIMARY> session = db.getMongo().startSession()
session { "id" : UUID("9aa75ea2-e03f-4ee6-abfb-8969334c9d98") }
replset:PRIMARY> 
replset:PRIMARY> session.startTransaction()
replset:PRIMARY> 
replset:PRIMARY> session.getDatabase("percona").people.insert([{_id: 8 , name : "H"},{_id: 9, name: "I"}])
BulkWriteResult({
	"writeErrors" : [ ],
	"writeConcernErrors" : [ ],
	"nInserted" : 2,
	"nUpserted" : 0,
	"nMatched" : 0,
	"nModified" : 0,
	"nRemoved" : 0,
	"upserted" : [ ]
})
replset:PRIMARY> 
replset:PRIMARY> session.commitTransaction()
uncaught exception: Error: command failed: {
	"writeConcernError" : {
		"code" : 64,
		"codeName" : "WriteConcernFailed",
		"errmsg" : "waiting for replication timed out",
		"errInfo" : {
			"wtimeout" : true,
			"writeConcern" : {
				"w" : 3,
				"wtimeout" : 30,
				"provenance" : "customDefault"
			}
		}
	},
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1624787808, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1624787808, 1)
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
doassert@src/mongo/shell/assert.js:18:14
_assertCommandWorked@src/mongo/shell/assert.js:639:17
assert.commandWorked@src/mongo/shell/assert.js:729:16
commitTransaction@src/mongo/shell/session.js:966:17
@(shell):1:1
replset:PRIMARY>

How to Check DefaultRWConcern

You can get the current value of the default read/write concerns via the getDefaultRWConcern command:

replset:PRIMARY> db.adminCommand( { getDefaultRWConcern : 1 } )
{
	"defaultWriteConcern" : {
		"w" : 3,
		"wtimeout" : 30
	},
	"updateOpTime" : Timestamp(1624793719, 1),
	"updateWallClockTime" : ISODate("2021-06-27T11:35:22.552Z"),
	"localUpdateWallClockTime" : ISODate("2021-06-27T11:35:22.552Z"),
	"ok" : 1,
	"$clusterTime" : {
		"clusterTime" : Timestamp(1624793821, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	},
	"operationTime" : Timestamp(1624793821, 1)
}

Conclusion

For versions 4.4.0 through 4.4.6, the workaround for this problem is to keep getLastErrorDefaults at its defaults and, if needed, set the cluster/replica set-wide write/read concerns via the setDefaultRWConcern command instead. MongoDB 4.4 honors any write concern value that you specify in getLastErrorDefaults; however, that is no longer allowed from v5.0 (refer to SERVER-56241). This behavior was reported in JIRA as SERVER-54896 and SERVER-55701, and from version 4.4.7 the value of getLastErrorDefaults is ignored as per the bug fix.

Also, when you upgrade to MongoDB 4.4, keep this in mind and, if needed, change your default read/write concern settings through the setDefaultRWConcern command.

Hope this helps you!

Jun 30, 2021

Slack’s new video and voice tools are nod to changing face of work

Slack started talking about a new set of communications tools to enhance the text-based channels at the end of last year. Today the company released a new audio tool called Slack Huddles and gave more details on a couple of other new tools including the ability to leave a video message and an enhanced employee directory, which you can access from inside Slack. All of these appear to have been designed with the changing nature of work in mind.

Let’s start with Slack Huddles, the audio tool that lets you have a real-time conversation with someone in Slack instead of typing out all of your thoughts. This will be much easier for people who find typing challenging, but the company also believes it will allow more spontaneous discussion, which mimics being in the office, at least to some degree.

“Huddles is a light-weight, audio-first way of communicating right in Slack. [It] recreates the spontaneous and serendipitous interactions that happen outside of scheduled meetings,” Tamar Yehoshua, chief product officer at Slack explained in a press briefing yesterday.

As companies continue to introduce more flexible working models, they will have to adjust how they work. Huddles is one way of thinking about that, says Slack CEO Stewart Butterfield.

“Some things can be synchronous, but only take three minutes. Instead of [scheduling a meeting for] next Tuesday from 11:30 to 12 and [using] the whole half hour because that’s what we scheduled, it’s two or three minutes, right now. And if the conversation fizzles out in the Huddle, you leave it open; maybe someone joins later and says something, which you wouldn’t do on a call,” Butterfield said.

And recognizing that not everyone will be able to hear, the new tool includes real-time transcription.

The company has also been talking about providing some kind of video message capability since last year. The idea is almost like a video voicemail or an Instagram Story where you shoot a short video and post it in Slack. “We’ve been thinking about it and we believe that by giving people a way to expressively and asynchronously share and consume information we can enable people to be more flexible in how they work, and reduce the need for video meetings,” Yehoshua said.

The new feature will enable Slack users to play back video, voice and screen recordings natively in Slack. People can record and upload short clips into a channel or DM, “enabling others to watch and respond on their own schedule,” she explained.  While this feature isn’t ready to release yet, Yehoshua reported it is being piloted and will be available to paid teams some time in the coming months.

The last piece is based on Rimeto, which Slack acquired last year with an eye toward upping its corporate directory capabilities. The Rimeto product has in fact been repurposed as Slack Atlas, a corporate directory that users can access right in Slack, rather than moving to another program to find that information. It’s another way Slack can keep users in Slack to find the information they need, while avoiding context switching. This is currently in limited customer testing, but should be available some time later this year, according to the company.

Slack first announced these tools last year, initially saying they were experimental but quickly shifting them to the product road map. Butterfield appeared in a Clubhouse interview in March with former TechCrunch reporter Josh Constine, who is now a SignalFire investor, ostensibly to talk about the future of work, but he also went into more detail about these tools for the first time.

It’s hard not to wrap this discussion into the future of work, and indeed Slack’s future as part of Salesforce, which bought the communications tool for $27 billion last year. Work is changing and Slack is looking to be a broader part of that solution, whatever the future holds.

Jun 30, 2021

ServiceTitan acquires Aspire to move into landscaping, raises $200M at a $9.5B valuation

With a lot of us spending more time at home these days, home improvement has continued to be a booming market. Now, one of the big players in that space — ServiceTitan, which builds software that today is used by over 100,000 contractors to manage their work — is getting a little bigger.

The company — which also works with contractors on business properties — is acquiring Aspire Software, a software provider specifically for commercial landscapers. Along with that, ServiceTitan is announcing another $200 million in funding, a Series G that values the company at $9.5 billion.

The funding is being led by a new backer, Thoma Bravo, with other unnamed existing investors participating. (That list includes Sequoia, Tiger Global, Dragoneer, T. Rowe Price, Battery Ventures, Bessemer Venture Partners and ICONIQ Capital.)

Los Angeles-based ServiceTitan is not disclosing the financial terms of the deal, but it comes on the heels of the company raising $500 million as recently as March (when it was valued at $8.3 billion) — money that it earmarked at the time for acquisitions.

ServiceTitan also confirmed that this is its biggest acquisition yet, which roughly puts this deal in the hundreds of millions of dollars. Aspire will stay based in Missouri to build out the company further from there.

Aspire itself has some 50,000 users and sees $4 billion in annualized transactions on its platform across areas like landscaping, snow and ice management, and construction. It has never disclosed a valuation, nor how much money it has raised. The St Louis, MO company was previously backed by growth equity firm Mainsail Partners.

The deal underscores not just how much scale and opportunity remains in building technology to serve the home services space, but also what might be a consolidating trend within that, where a smaller number of companies are building technology for contractors and others in the space working across a number of adjacent and related verticals.

ServiceTitan is already bringing in annual recurring revenues of $250 million — a figure it shared in March and hasn’t updated — and as of that month, it had grown 50% over the preceding year. Part of that growth is based on simply more usage of and demand for its software, but part of it also has to do with the company expanding what it covers.

ServiceTitan got its start in residential plumbing, HVAC and electrical — the areas where the two founders, Ara Mahdessian (CEO) and Vahe Kuzoyan (president), went first because they knew them best from their own family businesses — but it expanded into garage doors, chimneys and other areas, as well as commercial property, on its own steam.

In other markets like landscaping or pest control, the expertise is more specialized, however, so it makes sense to make acquisitions in those areas to bring in that software, and teams to manage and build it, to further diversify the company. (ServicePro, a pest control company, was acquired in February.)

ServiceTitan said that its contractor customers have made more than $20 billion in transactions in the last year, but with the wider industry of contracting repair and maintenance services estimated to be worth $1 trillion, there is obviously a lot more potential. Hence expanding the range of areas covered in the industry.

“Both Aspire and ServiceTitan were born out of a desire to improve the lives of contractors who work tirelessly to serve their communities, but who have historically been underserved by technology,” said Mahdessian in a statement. “Mark and his team at Aspire have more than 500 years of combined experience in the commercial landscaping industry. Just like we built ServiceTitan to solve the problems our fathers faced, it’s that first-hand industry knowledge that has enabled Aspire to build the most powerful software in the industry with the highest customer satisfaction.”

Thoma Bravo has been making some prolific moves to take majority positions in a number of older tech companies in recent weeks (see QAD, Proofpoint and Talend for three examples among others). This, however, is a growth investment that is coming as many wonder when and if ServiceTitan might go public.

I’ll hopefully get a chance to ask Mahdessian about that later, but in March he hinted that an IPO might come later this year or by the end of 2022 at the latest, depending on market conditions. This Series G round implies perhaps stretching to the later part of that timeframe.

“As the fastest-growing software solution for the trades with an unrelenting focus on customer success, ServiceTitan is poised to extend its leadership and capture increased market share as the industry exceeds $1 trillion globally,” said Robert (Tre) Sayle, a partner at Thoma Bravo, in a statement. “ServiceTitan’s expansion into landscaping, a more than $100 billion market in the US alone, is an important step on its path to provide all home and commercial tradesmen with the tools they need to grow and manage a successful business. We are excited to partner with ServiceTitan and to leverage our software and operational expertise to accelerate the company’s growth and build upon its strong momentum.”

There are a number of companies playing in the wider home services market that speak to the opportunity ahead. Companies like Thumbtack are digging deeper into home management, providing a bridge to contractors to fill out the work needed (and also providing them with the software to do so), while companies like Jobber and BigChange, which have also raised recently, are looking to build better software to manage individual contractors and fleets of contractors.

ServiceTitan, the biggest of the software players now, is likely going to continue making more deals to grow its own empire, but it added that it will also be using the funding to expand more organically, with investments into customer service, R&D, and to hire more people across the board.

Jun 30, 2021

FloLive, an IoT startup building cloud-based private 5G networks, raises $15.5M led by Intel

As enterprises and carriers gear up for operating and scaling IoT services and monitoring the activity of their devices, machines and more globally, a startup that is building technology to make this easier and cheaper to implement is announcing some funding.

FloLive, which has built a cloud-based solution that stitches together private, local cellular networks to create private global IoT 5G networks for its customers, has raised $15.5 million, funding that it will be using to continue expanding its service: investing in and building out its tech stack, upgrading its network to 5G where it’s being used, and building a global SIM2Cloud offering in partnership with an as-yet unnamed global cloud provider.

Intel Capital, the investment arm of the chip giant, is leading the investment, with Qualcomm Ventures, Dell Technologies Capital, 83North and Saban Ventures also participating. Intel, Qualcomm and Dell are all strategic backers here: the three work with carriers and enterprises to power and manage services and devices, and this will give them potentially a better way of integrating a much more flexible, global technology and network to provision those services more seamlessly across different geographies.

This is an extension to a $21.5 million round that London-based FloLive raised last year, bringing the total for the Series B to $37 million. From what we understand, the startup is also now working on its Series C.

As we move towards more ubiquitous 5G networks and services that use them, the challenge in the market that FloLive is addressing is a critical one to get right.

In a nutshell, enterprises and carriers that are building networks for managing IoT and other connected devices face a scaling issue. Typically, IoT networks to cover services like these are built on national or even more localized footprints, making it a challenge — if not completely impossible — to control or monitor devices in a global network in a centralized way.

“If you look on high level at tier one networks, you see two main things,” Nir Shalom, FloLive’s CEO, said in an interview. “These networks are built for local footprints, and they are mainly built for consumers. What we do is different in that we think about the global, not local, footprint; and our data networks are for IoT, not only people.”

Of course there are some carriers that might look at building their own networks to rival this, but they will often lack the scaled use cases to do so, and may in any case work with providers like FloLive to build these anyway. The bigger picture is that there are 900 larger mobile network operators globally, Shalom said, and the majority of that group is far from being able to do this themselves.

FloLive’s approach to fixing this is not to build completely new infrastructure, but to stitch together networks from different localities and to run them as a single network. It does this by way of its software-defined connectivity built and implemented in the cloud, which stitches together not just 5G networks but whatever cellular technology happens to be in use (eg 4G, 3G or even 2G) in a particular locale.

FloLive’s tech lives in the core network, where it builds a private radio access network that it can integrate with carriers and their capacity in different markets, while then managing the network for customers as a single service.

This is somewhat similar to what you might get with an enterprise virtual private network, except that this is focused specifically on the kinds of use cases that might use connected objects — FloLive cites manufacturing, logistics, healthcare and utilities as four areas — rather than laptops for employees.

The resulting network, however, also becomes a viable alternative for companies that might otherwise use a VPN for connectivity, too, as well as carriers themselves needing to extend their network for a customer. In addition to its IoT focused core network, it also provides business support systems for IoT, device management, and solutions targeted for specific verticals. FloLive supports devices that use SIM or eSIM or “softSIM” technology to connect to networks. That’s one part that likely interested those strategic investors as it allows for significantly easier integration.

“We are truly excited about floLIVE’s unique cloud-native approach to IoT connectivity,” said David Johnson, MD at Intel Capital, in a statement. “Cloud-native architectures bring efficiency, scalability and flexibility which are important for IoT services. In addition, floLIVE’s cloud-based core can provide consistency of features across many independent private and public networks. We look forward to the expansion of floLIVE’s products and services enabled by this investment.”

Updated to note the round is $15.5 million, not $15 million.

Jun 30, 2021

Device42 introduces multi-cloud migration analysis and recommendation tool

In 2020 lots of workloads shifted to the cloud due to the pandemic, but that doesn’t mean that figuring out how to migrate those workloads got any easier. Device42, a startup that helps companies understand their infrastructure, has a new product that is designed to analyze your infrastructure and make recommendations about the most cost-effective way to migrate to the cloud.

Raj Jalan, CEO and co-founder, says that the tool uses machine learning to help discover the best configuration, and supports four cloud vendors including AWS, Microsoft, Google and Oracle plus VMware running on AWS.

“The [new tool] that’s coming out is a multi-cloud migration and recommendation [engine]. Basically, with machine learning what we have done is in addition to our discovery tool […] is we can constantly update based on your existing utilization of your resources, what it is going to cost you to run these resources across each of these multiple clouds,” Jalan explained.

This capability builds on the company’s core competency, which is providing a map of resources wherever they exist along with the dependencies that exist across the infrastructure, something that’s extremely hard for organizations to understand. “Our focus is on hybrid IT discovery and dependency mapping, [whether the] infrastructure is on prem, in colocation facilities or in the cloud,” he said.

That helps Device42 customers see how all of the different pieces of infrastructure including applications work together. “You can’t find a tool that does everything together, and also gives you a very deep discovery where you can go from the physical layer all the way to the logical layer, and see things like, ‘this is where my storage sits on this web server…’,” Jalan said.

It’s important to note that this isn’t about managing resources or making any changes to allocation. It’s about understanding your entire infrastructure wherever it lives and how the different parts fit together, while the newest piece finds the most cost-effective way to migrate it to the cloud from its current location.

The company has been around since 2012 and has around 100 employees. It has raised around $38 million, including a $34 million Series A in 2019. It hasn’t required a ton of outside investment, as Jalan reports they are cash flow positive with “decent growth.”

Jun 30, 2021

Percona Monthly Bug Report: June 2021

Here at Percona, we operate on the premise that full transparency makes a product better. We strive to build the best open-source database products, but also to help you manage any issues that arise in any of the databases that we support. And, in true open-source form, we report back on any issues or bugs you might encounter along the way.

We constantly update our bug reports and monitor other boards to ensure we have the latest information, but we wanted to make it a little easier for you to keep track of the most critical ones. These posts are a central place to get information on the most noteworthy open and recently resolved bugs. 

In this June 2021 edition of our monthly bug report, we have the following list of bugs:

Percona Server for MySQL/MySQL Bugs

MySQL#83263: If multiple columns have ON DELETE constraints referencing the same foreign table, not all constraints are executed, which results in data inconsistency.

Example case: one column has an ON DELETE SET NULL constraint and another column has an ON DELETE CASCADE constraint.

In a row where both referencing columns point to the same foreign table ID, the ON DELETE CASCADE operation is not executed and the row is not removed as expected, but the column with the ON DELETE SET NULL constraint does get the NULL value as expected.
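A minimal sketch of the scenario described above (table and column names are hypothetical, not taken from the bug report) might be:

CREATE TABLE parent (
  id INT PRIMARY KEY
);

CREATE TABLE child (
  id         INT PRIMARY KEY,
  cascade_id INT,
  setnull_id INT,
  FOREIGN KEY (cascade_id) REFERENCES parent (id) ON DELETE CASCADE,
  FOREIGN KEY (setnull_id) REFERENCES parent (id) ON DELETE SET NULL
);

INSERT INTO parent VALUES (1);
INSERT INTO child VALUES (10, 1, 1);  -- both columns point at the same parent row

DELETE FROM parent WHERE id = 1;
-- On affected versions, the child row is reported to survive with
-- setnull_id = NULL instead of being removed by the CASCADE constraint.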

As per the latest update on the bug from one of the community users, the issue is not reproducible anymore with version 5.7.21. I also tested with version 5.7.33 and no longer see this issue.

Affects Version/s: 5.7  [Tested/Reported version 5.7.15]

Fixed Version: 5.7.21 as per user report and my test.

 

MySQL#102586: When doing a multiple-table DELETE that is anticipating a foreign key ON DELETE CASCADE, the statement works on the primary but breaks row-based replication.

Affects Version/s: 8.0, 5.7  [Tested/Reported version 8.0.23, 5.7.33]

 

PS-7485 (MySQL#102094): A MySQL crash can be seen in the following cases after running stored procedures.

  • Execution of a prepared statement that performed a multi-table UPDATE or DELETE was not always done correctly.
  • Prepared SET statements containing subqueries in stored procedures could raise an assertion.

Affects Version/s: 8.0.22  [Tested/Reported version 8.0.22]

Fixed Version/s: 8.0.23

 

Percona XtraDB Cluster

PXC-3645: When using RSU as the wsrep_OSU_method, there is an issue resolving metadata locks, which results in a deadlock.

In earlier PXC 5.7.x, both TOI and RSU DDLs don’t wait for the MDL lock, and if there is any ongoing (or just not yet committed) transaction on the same table, it is immediately aborted and rolled back.

However, the same scenario in PXC 8.0 leads to an unexpected lock: the transaction is neither aborted nor allowed to commit while the DDL is waiting on the MDL lock.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]

Fixed Version/s: 8.0.23

 

PXC-3449: When an ALTER TABLE (TOI) is executed in a user session, it sometimes conflicts (on MDL) with a high-priority transaction, which causes a BF-BF abort and server termination.

Affects Version/s: 8.0  [Tested/Reported version 8.0.21]

Fixed Version/s: 8.0.25

 

PXC-3631: In a PXC cluster, a running ALTER TABLE can get stuck in the “waiting for table metadata lock” state.

The issue here is that, normally, in TOI mode, when an ALTER TABLE comes in, the master node first finishes the INSERTs that are already running; the ALTER then finishes quickly and does not affect the performance of SELECT queries on reader nodes at all.

However, sometimes it hangs for a long time and blocks other queries. With a high number of wsrep_slave_threads, the PXC node also crashed.

Affects Version/s:  5.7 [Tested/Reported version 5.7.33]

 

Percona XtraBackup

PXB-2486: When using xtrabackup --encrypt and --parallel with xbcloud, a broken pipe is not handled correctly. The backup will hang in an infinite loop if xbcloud fails.

Affects Version/s: 2.4  [Tested/Reported version 2.4.22]

Fixed Version/s: 2.4.22

 

PXB-2375:  In some cases, xtrabackup will write the wrong binlog filename, pos, and GTID combination info in xtrabackup_binlog_info. 

If we are using this backup with GTID position details in xtrabackup_binlog_info to create a new replica, then most likely replication will break due to incorrect GTID position.

It looks like the GTID position is not consistent with the binlog file position; they are captured separately and later printed together in the xtrabackup_binlog_info file.

Affects Version/s:  8.0 [Tested/Reported version 8.0.14]

 

Percona Toolkit

PT-1889: pt-show-grants produces incorrect output for users based on MySQL roles, and as a result the grants cannot be applied back properly on a MySQL server.

Affects Version/s:  3.2.1

 

PT-1747: pt-online-schema-change brought the database into a broken state when applying the “rebuild_constraints” foreign key modification method if any of the child tables were blocked by a metadata lock.

Affects Version/s:  3.0.13

Fixed Version: 3.3.2

 

PMM  [Percona Monitoring and Management]

PMM-7846: Adding a MongoDB instance via pmm-admin with the TLS option does not work and fails with the error Connection check failed: timeout (context deadline exceeded).

Affects Version/s: 2.x  [Tested/Reported version 2.13, 2.16]

 

PMM-7941: mongodb_exporter provides a wrong replication status value on the MongoDB ReplicaSet Summary dashboard: all replication servers show the status PRIMARY. This status is provided by the metric mongodb_mongod_replset_my_state.

Affects Version/s: 2.x  [Tested/Reported version  2.16.0]

Fixed Version: 2.18.0

 

PMM-4665: Frequent error messages appear in pmm-agent.log for components, like the TokuDB storage engine, that are not supported by upstream MySQL. As a result, these additional messages increase the overall log file size.

Affects Version/s:  2.x  [Tested/Reported version 2.0.13]

Fixed version: 2.0.19

 

Percona Kubernetes Operator for Percona XtraDB Cluster

New Feature in PXC-operator 1.8.0:

K8SPXC-442: The Operator can now automatically remove old backups from S3 storage if a retention period is set.

 

Summary

We welcome community input and feedback on all our products. If you find a bug or would like to suggest an improvement or a feature, learn how in our post, How to Report Bugs, Improvements, New Feature Requests for Percona Products.

For the most up-to-date information, be sure to follow us on Twitter, LinkedIn, and Facebook.

Quick References:

Percona JIRA  

MySQL Bug Report

Report a Bug in a Percona Product

MySQL 8.0.24 Release notes

___

About Percona:

As the only provider of distributions for all three of the most popular open source databases—PostgreSQL, MySQL, and MongoDB—Percona provides expertise, software, support, and services no matter the technology.

Whether it’s enabling developers or DBAs to realize value faster with tools, advice, and guidance, or making sure applications can scale and handle peak loads, Percona is here to help.

Percona is committed to being open source and preventing vendor lock-in. Percona contributes all changes to the upstream community for possible inclusion in future product releases.
