Mar
26
2019
--

Adobe and Salesforce announce Customer Data Platforms to pull data into single view

Marketing analytics is an increasingly complex business. It’s meant to collect as much information as possible across multiple channels and tools, and to give marketers as complete a picture as possible of their customers and their experience with the brand. Perhaps not coincidentally, Adobe, which is holding its Adobe Summit this week in Las Vegas, and Salesforce both made Customer Data Platform (CDP) announcements this week.

The Customer Data Platform is a complex construct, but it’s basically a marketer’s dream: a central database that pulls customer data from a variety of channels and disparate data sources to give marketers deep insight into their customers, all with the hope of gathering enough data to serve the perfect experience. As always, the ultimate goal is happy repeat customers who build brand loyalty.

It always comes down to experience for marketers these days, and that involves serving up the right kind of experience. You don’t want the first-time visitor to have the same experience as a loyal customer. You don’t want a business customer to have the same experience as the consumer. All of that takes lots and lots of information, and when you want to make those experiences even more personalized in real time, it’s a tough problem to solve.

Part of the problem is that customers are working across multiple channels and marketers are using multiple tools from a variety of vendors. When you combine those two problems, it’s hard to collect all of the data on a given customer.

The process is a bit like boiling the ocean, and to complicate matters even further, it involves storing anonymized and non-anonymized customer data in the same database. Imagine those two elements being hacked. It wouldn’t be pretty, which is just one reason these kinds of platforms are so difficult to build.

Yet the promise of having a central data hub like this is so tantalizing, and the amount of data growing so quickly, that a tool to help pull it all together could have great utility for marketers. Armed with this kind of information, marketers could build what Salesforce’s Bob Stutz called “hyper-targeted messages” in a blog post yesterday.

Stutz used that same blog post to announce Salesforce’s CDP offering, which is not the same as the Customer 360 product announced at Dreamforce last year, although you would be forgiven for confusing the two. “Salesforce Customer 360 helps companies easily connect and resolve customer data across Salesforce and 3rd party applications with a single customer ID. Our Customer Data Platform builds on this unified identity foundation to deliver a ‘single view of the customer’ for marketing professionals,” Stutz wrote.

Adobe, which announced its CDP use case today, sees it in somewhat similar terms, but its approach is different, says Matt Skinner, product marketing manager for the Adobe Audience Manager product. For starters, it’s powered by the Adobe Experience Platform and “brings together known and unknown data to activate real-time customer profiles across channels throughout the customer journey,” Skinner said. In addition, he says it can use AI to help build these experiences and augment marketer ideas.

Both companies have to pull in data from their own systems, as well as external systems, to make this work. That kind of integration problem is one of the reasons Salesforce bought MuleSoft last year for $6.5 billion, but Skinner says that Adobe is taking its own open API approach to the problem. “Adobe’s platform is open and extensible with APIs and an extensive partner ecosystem, so data and applications can really come from anywhere,” he said.

Regardless, both vendors are working hard to make this happen, and it will be interesting to see how each one plays to its strengths to bring this data together. It’s clearly going to be a huge data integration and security challenge, and both companies will have to move carefully to protect the data as they build this kind of system.

Mar
26
2019
--

Adobe announces two new analytics tools to help marketers fill in the customer picture

Today at Adobe Summit in Las Vegas, Adobe announced some enhancements to its Analytics Suite that are supposed to help marketers understand their customers more deeply, including a new tool to track the entire customer journey and one to help see the relationship between advertising and marketing success, which is harder to understand than you might think.

The first is called Journey IQ, and as the name suggests, the idea is to provide a better understanding of the entire customer journey. That in itself isn’t new. It’s a task that marketing analytics vendors have been trying to solve for more than 10 years.

John Bates, director of product marketing for Adobe Analytics, says that understanding the customer journey can help focus marketing efforts in the future, and this tool is designed to help. “It’s really focused on helping find a complete view of a past experience and helping separate those good experiences or moments from the bad,” he explained.

Adobe wants to provide actionable data and analysis to help users understand what happened as their customers engaged with their site, in order to provide better experiences in the future. For marketing vendors, it’s always about the experience: the more data focused on understanding that experience, the more success they believe their customers will have.

This solution involves looking at elements like churn analysis, time-lapse analysis that follows the journey step by step, and look-back and look-forward analytics, all with the goal of giving marketers as much information as possible to turn that visit into positive action in the future. For marketers, that means the customer ends the journey next time by buying (more) stuff.

The second piece, called Advertising Analytics, is a new integration with Adobe Advertising Cloud, which allows marketers to see the connection between their advertising and the success of their marketing campaigns. Given the insight digital advertising is supposed to provide marketers about the ads they are serving, you would think they would be getting that already, but advertising and marketing often operate in technology silos, making it hard to put the data together and see the big picture.

Adobe wants to help marketers see the connections between the ads they are serving customers and the actions the customers take when they come to the company website. It can help give insight and understanding into how effectively your advertising strategy is translating into consumer action.

Taken together, these two analytics tools are designed to help marketers understand how and why the customer came to the site, what actions they took when they got there, and why they did or didn’t take a given action.

In a world where it’s all about building positive customer experiences with the goal of driving more sales and more satisfied customers, understanding these kinds of relationships can be crucial, but keep in mind it’s challenging to understand all of this as it’s happening, even with tools like these.

Mar
26
2019
--

Adobe launches its Commerce Cloud, based on its Magento acquisition

Adobe today announced the launch of its Commerce Cloud, the newest part of the company’s Experience Cloud. Unsurprisingly, the Commerce Cloud builds on the company’s $1.68 billion acquisition of Magento last May. Indeed, at its core, the Adobe Commerce Cloud is essentially a fully managed cloud-based version of the Magento platform that is fully integrated with the rest of Adobe’s tools, including its Analytics Cloud, Marketing Cloud and Advertising Cloud.

With this launch, Adobe is also extending the platform by adding new features like dashboards for keeping an eye on a company’s e-commerce strategy and, for the first time, an integration with the Amazon marketplace, which users will be able to manage directly from within the Commerce Cloud interface.

“For Adobe, that’s really important because it actually closes the last mile in its Experience offering,” said Jason Woosley, Adobe’s VP of commerce product and platform and Magento’s former VP of product and technology. “It’s no mystery that they’ve been looking at commerce offerings in the past. We’re just super glad that they settled on us.”

Woosley also stressed that this new product isn’t just about closing the last mile for Adobe from a commerce perspective but also from a data intelligence perspective. “If you think about behavioral data you get from your interactions with our content, that’s all very critical for understanding how your customers are interacting with your brand,” he said. “But now that we’ve got a commerce offering, we are actually able to put the dollars and cents behind that.”

Adobe notes that this new offering also means that Magento users won’t have to worry about the operational aspects of running the service themselves. To ensure that it can manage this for these customers, the company has tweaked the service to be flexible and scalable on its platform.

Woosley also stressed the importance of the Amazon integration that launches with the Commerce Cloud. “Love it or hate it,” he said of Amazon. “Either you are comfortable participating in those marketplaces or you are not, but at the end of the day, they are capturing more and more of the initial product search.” Commerce Cloud users will be able to pick and choose which parts of their inventory will appear on Amazon and at what prices. Plenty of brands, after all, only want to showcase a selection of their products on Amazon to drive their brand awareness and then drive customers back to their own e-commerce stores.

It’s worth noting that all of the usual Magento extensions will work on the Adobe Commerce Cloud. That’s important given that there are more than 300,000 developers in the Magento ecosystem, plus thousands of partners. With that, the Commerce Cloud can cover quite a few use cases that wouldn’t be important enough for Adobe itself to put its own resources behind but that make the platform attractive for a wider range of potential users.

Mar
26
2019
--

Vlocity nabs $60M Series C investment on $1B valuation

As we wrote last week in How Salesforce paved the way for the SaaS platform approach, the ability to build extensions, applications and even whole companies on top of the Salesforce platform set the stage and the bar for every SaaS company since. Vlocity certainly recognized that. Targeting five verticals, it built industry-specific CRM solutions on the Salesforce platform, and today announced a $60 million Series C round on a fat unicorn $1 billion valuation.

The round was led by Sutter Hill Ventures and Salesforce Ventures. New investors Bessemer Venture Partners and existing strategic investors Accenture and New York Life also participated. The company has now raised $163 million.

Company co-founder and CEO David Schmaier, whose extensive career includes stints with Siebel Systems and Oracle, says he and his co-founders (three of whom helped launch Veeva) wanted to take the idea of Veeva, which is a life sciences-focused company built on top of Salesforce, and extend that idea across five verticals instead of just one. Those five verticals are communications and media; insurance and financial services; health; energy and utilities; and government and nonprofits.

The idea, he said, was to build a company with a market that was 10x the size of life sciences. “What we’re doing now is building five Veevas at once. If you could buy a product already tailored to the needs of your industry, why wouldn’t you do that?” Schmaier said.

The theory seems to be working. He says that the company, which was founded in 2014, has already reached $100 million in revenue and expects to double that by the end of this year. Then of course, there is the unicorn valuation. While perhaps not as rare as it once was, reaching the $1 billion level is still a significant milestone for a startup.

In the Salesforce platform story, co-founder and CTO Parker Harris addressed the need for solutions like the ones from Veeva and Vlocity. “…Harris said they couldn’t build one Salesforce for healthcare and another for insurance and a third one for finance. We knew that wouldn’t scale, and so the platform [eventually] just evolved out of this really close relationship with our customers and the needs they had.” In other words, Salesforce made the platform flexible enough for companies like these to fill in the blanks.

“Vlocity is a perfect example of the incredible innovation occurring in the Salesforce ecosystem and how we are working together to provide customers in all industries the technologies they need to attract and serve customers in smarter ways,” Jujhar Singh, EVP and GM for Salesforce Industries said in a statement.

It’s also telling that of the three strategic investors in this round — New York Life, Accenture and Salesforce Ventures — Salesforce is the biggest investor, according to Schmaier.

The company has 150 customers, including investor New York Life, Verizon (which owns this publication), Cigna and the City of New York. It already has 700 employees in 20 countries. With this additional investment, you can expect those numbers to increase.

“What this Series C round allows us to do is to really put the gas on investing in product development, because verticals are all about going deep,” Schmaier said.

Mar
25
2019
--

Percona Server for MongoDB Operator 0.3.0 Early Access Release Is Now Available

Percona announces the availability of the Percona Server for MongoDB Operator 0.3.0 early access release.

The Percona Server for MongoDB Operator simplifies the deployment and management of Percona Server for MongoDB in a Kubernetes or OpenShift environment. It extends the Kubernetes API with a new custom resource for deploying, configuring and managing the application through the whole life cycle.

You can install the Percona Server for MongoDB Operator on Kubernetes or OpenShift. While the operator does not support all the Percona Server for MongoDB features in this early access release, instructions on how to install and configure it are already available, along with the operator source code, in our GitHub repository.
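
For orientation, here is a minimal install sketch assuming a typical layout for the operator repository; the repository URL and the manifest file names other than deploy/cr.yaml are assumptions and may differ between releases:

# clone the operator source (URL assumed; see the GitHub repository mentioned above)
git clone https://github.com/percona/percona-server-mongodb-operator.git
cd percona-server-mongodb-operator
# register the custom resource definition, RBAC objects and the operator itself
# (these manifest names are assumptions based on a typical deploy/ layout)
kubectl apply -f deploy/crd.yaml
kubectl apply -f deploy/rbac.yaml
kubectl apply -f deploy/operator.yaml
# create a Percona Server for MongoDB cluster from the sample custom resource
kubectl apply -f deploy/cr.yaml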

The Percona Server for MongoDB Operator is an early access release. Percona doesn’t recommend it for production environments.

New Features

Improvements

Fixed Bugs

  • CLOUD-141: Operator failed to rescale cluster after self-healing.
  • CLOUD-151: Dashboard upgrade in Percona Monitoring and Management caused loop due to no write access.
  • CLOUD-152: Percona Server for MongoDB crashed when there was no backup section in the Operator configuration file.
  • CLOUD-91: The Operator was throwing error messages with Arbiters disabled in the deploy/cr.yaml configuration file.

Percona Server for MongoDB is an enhanced, open source and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB Community Edition. It supports MongoDB® protocols and drivers. Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine, as well as several enterprise-grade features. It requires no changes to MongoDB applications or code.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.
Mar
25
2019
--

Scalyr launches PowerQueries for advanced log management

Log management service Scalyr today announced the beta launch of PowerQueries, its new tools for letting its users create advanced search operations as they manage their log files and troubleshoot potential issues. The new service allows users to perform complex actions to group, transform, filter and sort their large data sets, as well as to create table lookups and joins. The company promises these queries will happen just as fast as Scalyr’s standard queries and that getting started with these more advanced queries is pretty straightforward.

Scalyr founder and chairman Steve Newman argues that the company’s competitors may offer similar tools, but that “their query languages are too complex, hard-to-learn and hard-to-use.” He also stressed that Scalyr made a conscious decision not to use any machine learning tools to power this and its other services to help admins and developers prioritize issues and instead decided to focus on its query language and making it easier for its users to manage their logs that way.

“So we thought about how we could leverage our strengths — real-time performance, ease-of-use and scalability — to provide similar but better functionality,” he said in today’s announcement. “As a result, we came up with a set of simple but powerful queries that address advanced use cases while improving the user experience dramatically. Like the rest of our solution, our PowerQueries are fast, easy-to-learn and easy-to-use.”

Current Scalyr customers cover a wide range of verticals. They include the likes of NBCUniversal, Barracuda Networks, Spiceworks, Johns Hopkins University, Giphy, OkCupid and Flexport. Currently, Scalyr has more than 300 paying customers. As Newman stressed, more than 3,500 employees from these customers regularly use the service. He attributes this to the fact that it’s relatively easy to use, thanks to Scalyr’s focus on usability.

The company raised its last funding round — a $20 million Series A round — back in 2017. As Scalyr’s newly minted CEO Christine Heckart told me, though, the company is currently seeing rapid growth and has quickly added headcount in recent months to capitalize on this opportunity. Given this, I wouldn’t be surprised if we saw Scalyr raise another round in the not-so-distant future, especially considering that the log management market itself is also growing rapidly (and has changed quite a bit since Scalyr launched back in 2011) as more companies start their own digital transformation projects, which often allow them to replace some of their legacy IT tools with more modern systems.

Mar
25
2019
--

How to Perform Compatible Schema Changes in Percona XtraDB Cluster (Advanced Alternative)?

If you are using Galera replication, you know that schema changes may be a serious problem. With its current implementation, there is no way even a simple ALTER will be unobtrusive for live production traffic. It is a fact that with the default TOI alter method, Percona XtraDB Cluster (PXC) suspends writes in order to execute the ALTER in the same order on all nodes.

For changes that actually modify the data structure, we have to adapt to the limitations, and either plan for a maintenance window or use pt-online-schema-change, where interruptions should be very short. I suggest you be extra careful here, as normally you cannot kill an ongoing ALTER query in a Galera cluster.
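
As a rough illustration, a pt-online-schema-change run for such a change might look like the sketch below; the column being added is made up for the example, and connection options are omitted:

# dry run first to validate the change plan (the new column is hypothetical)
pt-online-schema-change --alter "ADD COLUMN extra_notes VARCHAR(64)" D=db1,t=sbtest1 --dry-run
# then perform the change in small online steps
pt-online-schema-change --alter "ADD COLUMN extra_notes VARCHAR(64)" D=db1,t=sbtest1 --execute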

For schema compatible changes, that is, ones that cannot break ROW replication when the writer node and applier nodes have different metadata, we can consider using the Rolling Schema Update (RSU) method. An example of 100% replication-safe DDL is OPTIMIZE TABLE (aka noop-ALTER). However, the following are safe to consider too:

  • adding and removing a secondary index,
  • renaming an index,
  • changing the ROW_FORMAT (for example, enabling/disabling table compression),
  • changing the KEY_BLOCK_SIZE (compression property).

However, a lesser known fact is that even using the RSU method or pt-online-schema-change for the above may not save us from some unwanted disruptions.

RSU and Concurrent Queries

Let’s take a closer look at a very simple scenario with noop ALTER. We will set wsrep_OSU_method to RSU to avoid a cluster-wide stall. In fact, this mode turns off replication for the following DDL (and only for DDL), so you have to remember to repeat the same ALTER on every cluster member later.

For simplicity, let’s assume there is only one node used for writes. In the first client session, we change the method accordingly to prepare for DDL:

node1 > set wsrep_OSU_method=RSU;
Query OK, 0 rows affected (0.00 sec)
node1 > select @@wsrep_OSU_method,@@wsrep_on,@@wsrep_desync;
+--------------------+------------+----------------+
| @@wsrep_OSU_method | @@wsrep_on | @@wsrep_desync |
+--------------------+------------+----------------+
| RSU                |          1 |              0 |
+--------------------+------------+----------------+
1 row in set (0.00 sec)

(By the way, as seen above, the desync mode is not enabled yet, as it will be automatically enabled around the DDL query only, and disabled right after it finishes).

In a second client session, we start a long enough SELECT query:

node1 > select count(*) from db1.sbtest1 a join db1.sbtest1 b where a.id<10000;
...

And while it’s ongoing, let’s rebuild the table:

node1 > alter table db1.sbtest1 engine=innodb;
Query OK, 0 rows affected (0.98 sec)
Records: 0 Duplicates: 0 Warnings: 0

Surprisingly, the client in the second session immediately receives a SELECT failure:

ERROR 1213 (40001): WSREP detected deadlock/conflict and aborted the transaction. Try restarting the transaction

So, even a simple SELECT is aborted if it conflicts with the local, concurrent ALTER (RSU)… We can see more details in the error log:

2018-12-04T21:39:17.285108Z 0 [Note] WSREP: Member 0.0 (node1) desyncs itself from group
2018-12-04T21:39:17.285124Z 0 [Note] WSREP: Shifting SYNCED -> DONOR/DESYNCED (TO: 471796)
2018-12-04T21:39:17.305018Z 12 [Note] WSREP: Provider paused at 7bf59bb4-996d-11e8-b3b6-8ed02cd38513:471796 (30)
2018-12-04T21:39:17.324509Z 12 [Note] WSREP: --------- CONFLICT DETECTED --------
2018-12-04T21:39:17.324532Z 12 [Note] WSREP: cluster conflict due to high priority abort for threads:
2018-12-04T21:39:17.324535Z 12 [Note] WSREP: Winning thread:
THD: 12, mode: total order, state: executing, conflict: no conflict, seqno: -1
SQL: alter table db1.sbtest1 engine=innodb
2018-12-04T21:39:17.324537Z 12 [Note] WSREP: Victim thread:
THD: 11, mode: local, state: executing, conflict: no conflict, seqno: -1
SQL: select count(*) from db1.sbtest1 a join db1.sbtest1 b where a.id<10000
2018-12-04T21:39:17.324542Z 12 [Note] WSREP: MDL conflict db=db1 table=sbtest1 ticket=MDL_SHARED_READ solved by abort
2018-12-04T21:39:17.324544Z 12 [Note] WSREP: --------- CONFLICT DETECTED --------
2018-12-04T21:39:17.324545Z 12 [Note] WSREP: cluster conflict due to high priority abort for threads:
2018-12-04T21:39:17.324547Z 12 [Note] WSREP: Winning thread:
THD: 12, mode: total order, state: executing, conflict: no conflict, seqno: -1
SQL: alter table db1.sbtest1 engine=innodb
2018-12-04T21:39:17.324548Z 12 [Note] WSREP: Victim thread:
THD: 11, mode: local, state: executing, conflict: must abort, seqno: -1
SQL: select count(*) from db1.sbtest1 a join db1.sbtest1 b where a.id<10000
2018-12-04T21:39:18.517457Z 12 [Note] WSREP: resuming provider at 30
2018-12-04T21:39:18.517482Z 12 [Note] WSREP: Provider resumed.
2018-12-04T21:39:18.518310Z 0 [Note] WSREP: Member 0.0 (node1) resyncs itself to group
2018-12-04T21:39:18.518342Z 0 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 471796)
2018-12-04T21:39:18.519077Z 0 [Note] WSREP: Member 0.0 (node1) synced with group.
2018-12-04T21:39:18.519099Z 0 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 471796)
2018-12-04T21:39:18.519119Z 2 [Note] WSREP: Synchronized with group, ready for connections
2018-12-04T21:39:18.519126Z 2 [Note] WSREP: Setting wsrep_ready to true

Another example – a simple sysbench test, during which I ran a noop ALTER in RSU mode:

# sysbench /usr/share/sysbench/oltp_read_only.lua --table-size=1000 --tables=8 --mysql-db=db1 --mysql-user=root --threads=8 --time=200 --report-interval=1 --events=0 --db-driver=mysql run
sysbench 1.0.15 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 8
Report intermediate results every 1 second(s)
Initializing random number generator from current time
Initializing worker threads...
Threads started!
[ 1s ] thds: 8 tps: 558.37 qps: 9004.30 (r/w/o: 7880.62/0.00/1123.68) lat (ms,95%): 18.28 err/s: 0.00 reconn/s: 0.00
[ 2s ] thds: 8 tps: 579.01 qps: 9290.22 (r/w/o: 8130.20/0.00/1160.02) lat (ms,95%): 17.01 err/s: 0.00 reconn/s: 0.00
[ 3s ] thds: 8 tps: 597.36 qps: 9528.89 (r/w/o: 8335.17/0.00/1193.72) lat (ms,95%): 15.83 err/s: 0.00 reconn/s: 0.00
FATAL: mysql_stmt_store_result() returned error 1317 (Query execution was interrupted)
FATAL: `thread_run' function failed: /usr/share/sysbench/oltp_common.lua:432: SQL error, errno = 1317, state = '70100': Query execution was interrupted

So, SELECT queries are aborted to resolve the MDL lock request that a DDL in RSU needs immediately. This of course applies to INSERT, UPDATE and DELETE as well. That’s quite an intrusive way to accomplish the goal…

“Manual RSU”

Let’s try a “manual RSU” workaround instead. In fact, we can achieve the same isolated DDL execution as in RSU, by putting a node in desync mode (to avoid flow control) and disabling replication for our session. That way, the ALTER will only be executed in that particular node.

Session 1:

node1 > set wsrep_OSU_method=TOI; set global wsrep_desync=1; set wsrep_on=0;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
node1 > select @@wsrep_OSU_method,@@wsrep_on,@@wsrep_desync;
+--------------------+------------+----------------+
| @@wsrep_OSU_method | @@wsrep_on | @@wsrep_desync |
+--------------------+------------+----------------+
| TOI                |          0 |              1 |
+--------------------+------------+----------------+
1 row in set (0.00 sec)

Session 2:

node1 > select count(*) from db1.sbtest1 a join db1.sbtest1 b where a.id<10000;
+-----------+
| count(*)  |
+-----------+
| 423680000 |
+-----------+
1 row in set (14.07 sec)

Session 1:

node1 > alter table db1.sbtest1 engine=innodb;
Query OK, 0 rows affected (13.52 sec)
Records: 0 Duplicates: 0 Warnings: 0

Session 3:

node1 > select id,command,time,state,info from information_schema.processlist where user="root";
+----+---------+------+---------------------------------+-----------------------------------------------------------------------------------------+
| id | command | time | state                           | info |
+----+---------+------+---------------------------------+-----------------------------------------------------------------------------------------+
| 11 | Query   | 9    | Sending data                    | select count(*) from db1.sbtest1 a join db1.sbtest1 b where a.id<10000 |
| 12 | Query   | 7    | Waiting for table metadata lock | alter table db1.sbtest1 engine=innodb |
| 17 | Query   | 0    | executing                       | select id,command,time,state,info from information_schema.processlist where user="root" |
+----+---------+------+---------------------------------+-----------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)
node1 > select id,command,time,state,info from information_schema.processlist where user="root";
+----+---------+------+----------------+-----------------------------------------------------------------------------------------+
| id | command | time | state          | info |
+----+---------+------+----------------+-----------------------------------------------------------------------------------------+
| 11 | Sleep   | 14   |                | NULL |
| 12 | Query   | 13   | altering table | alter table db1.sbtest1 engine=innodb |
| 17 | Query   | 0    | executing      | select id,command,time,state,info from information_schema.processlist where user="root" |
+----+---------+------+----------------+-----------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)

In this case, there was no interruption: the ALTER waited for its MDL lock request to succeed gracefully, and did its job when it became possible.

Remember, you have to execute the same commands on the rest of the nodes to make them consistent – even for noop-alter, it’s important to make the nodes consistent in terms of table size on disk.

Kill Problem

Another fact is that you cannot cancel or kill a DDL query executed with the RSU or TOI method:

node1 > kill query 12;
ERROR 1095 (HY000): You are not owner of thread 12

This may be an annoying problem when you need to unblock a node urgently. Fortunately, the workaround with wsrep_on=0 also allows you to kill an ALTER without that restriction:

Session 1:

node1 > kill query 22;
Query OK, 0 rows affected (0.00 sec)

Session 2:

node1 > alter table db1.sbtest1 engine=innodb;
ERROR 1317 (70100): Query execution was interrupted

Summary

The RSU method may be more intrusive than you’d expect. For schema-compatible changes, it is worth considering “manual RSU” with

set global wsrep_desync=1; set wsrep_on=0;

When using it, though, please remember that wsrep_on applies to all types of writes, both DDL and DML, so be extra careful to set it back to 1 after the ALTER is done. The procedure will look like this:

SET GLOBAL wsrep_desync=1;
SET wsrep_on=0;
ALTER ...  /* compatible schema change only! */
SET wsrep_on=1;
SET GLOBAL wsrep_desync=0;
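
For completeness, here is a sketch of rolling the same change out to the remaining nodes from the shell; the host names, the example table and the omitted credentials are placeholders, and each mysql -e invocation runs its statements in a single client session, so the session-scoped wsrep_on=0 covers the ALTER:

# repeat the compatible ALTER on every other cluster node, one at a time
for node in node2 node3; do
  mysql -h "$node" -e "SET GLOBAL wsrep_desync=1; SET wsrep_on=0; ALTER TABLE db1.sbtest1 ENGINE=InnoDB; SET wsrep_on=1; SET GLOBAL wsrep_desync=0;"
done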

Incidentally, as in my opinion the current RSU behavior is unnecessarily intrusive, I have filed this change suggestion: https://jira.percona.com/browse/PXC-2293



Mar
25
2019
--

Talking Drupal #204 – A Few Things

In episode #204 we talk about getting the most out of DrupalCon and expanding your toolkit. www.talkdrupal.com/204

Topics

  • Drupal stories
  • Getting the most out of DrupalCon
  • Planning your trip
  • Exhibit floor
  • Social events
  • Expanding your toolkit

Resources

Drupal accepted into Google Summer of Code 2019

#175 – Automated Testing with Oliver

Guest Host

Oliver Davies  @opdavies   www.drupal.org/u/opdavies

Hosts

Stephen Cross – www.ParallaxInfoTech.com @stephencross

John Picozzi – www.oomphinc.com @johnpicozzi

Nic Laflin – www.nLighteneddevelopment.com @nicxvan

Mar
24
2019
--

Alibaba acquires Israeli startup Infinity Augmented Reality

Infinity Augmented Reality, an Israeli startup, has been acquired by Alibaba, the companies announced this weekend. The deal’s terms were not disclosed. Alibaba and InfinityAR have had a strategic partnership since 2016, when Alibaba Group led InfinityAR’s Series C. Since then, the two have collaborated on augmented reality, computer vision and artificial intelligence projects.

Founded in 2013, the startup built an augmented reality glasses platform that enables developers in a wide range of industries (retail, gaming, medical, etc.) to integrate AR into their apps. InfinityAR’s products include software for ODMs and OEMs and an SDK plug-in for 3D engines.

Alibaba’s foray into virtual reality started three years ago, when it invested in Magic Leap and then announced a new research lab in China to develop ways of incorporating virtual reality into its e-commerce platform.

InfinityAR’s research and development team will begin working out of Alibaba’s Israel Machine Laboratory, part of Alibaba DAMO Academy, the R&D initiative into which it is pouring $15 billion with the goal of eventually serving two billion customers and creating 100 million jobs by 2036. DAMO Academy collaborates with universities around the world, and Alibaba’s Israel Machine Laboratory has a partnership with Tel Aviv University focused on video analysis and machine learning.

In a press statement, the laboratory’s head, Lihi Zelnik-Manor, said “Alibaba is delighted to be working with InfinityAR as one team after three years of partnership. The talented team brings unique knowhow in sensor fusion, computer vision and navigation technologies. We look forward to exploring these leading technologies and offering additional benefits to customers, partners and developers.”

Mar
22
2019
--

How Salesforce paved the way for the SaaS platform approach

When we think of enterprise SaaS companies today, just about every startup in the space aspires to be a platform. That means they want people using their stack of services to build entirely new applications, either to enhance the base product or even to build entirely independent companies. But when Salesforce launched Force.com, the company’s Platform as a Service, in 2007, there wasn’t any model to follow.

It turns out that Force.com was actually the culmination of a series of incremental steps after the launch of the first version of Salesforce in February 2000, all of which were designed to make the software more flexible for customers. Company co-founder and CTO Parker Harris says they didn’t have this goal to be a platform early on. “We were a solution first, I would say. We didn’t say ‘let’s build a platform and then build sales-force automation on top of it.’ We wanted a solution that people could actually use,” Harris told TechCrunch.

The march toward becoming a full-fledged platform started with simple customization. That first version of Salesforce was pretty basic, and the company learned over time that customers didn’t always use the same language it did to describe customers and accounts — and that was something that would need to change.

Customizing the product
