Oct 15, 2018

Talking Drupal #180 – Media in 8.6

In episode 180 we talk with Ivan Zugec from Webwash about Media in Drupal 8.6. www.talkingdrupal.com/180

Topics

  • Drupal 8 Media
  • What’s new in Media 8.6
  • Related modules
  • Current limitations 

Resources

Ivan Zugec on Drupal.org 

Ivan’s Website

Webwash

Managing Media Assets using Core Media in Drupal 8 

New Media Management Functionality in Drupal 8.6

Hosts

Stephen Cross – www.ParallaxInfoTech.com @stephencross

John Picozzi – www.oomphinc.com @johnpicozzi

Nic Laflin – www.nLighteneddevelopment.com @nicxvan

Ivan Zugec – www.Webwash.net @ivanzugec

Oct 15, 2018

Celonis brings intelligent process automation software to cloud

Celonis has been helping companies analyze and improve their internal processes using machine learning. Today the company announced it was providing that same solution as a cloud service with a few nifty improvements you won’t find on prem.

The new approach, called Celonis Intelligent Business Cloud, allows customers to analyze a workflow, find inefficiencies and offer improvements very quickly. Companies typically follow a workflow that has developed over time and very rarely think about why it developed the way it did, or how to fix it. If they do, it usually involves bringing in consultants to help. Celonis puts software and machine learning to bear on the problem.

Co-founder and CEO Alexander Rinke says that his company deals with massive volumes of data and moving all of that to the cloud makes sense. “With Intelligent Business Cloud, we will unlock that [on prem data], bring it to the cloud in a very efficient infrastructure and provide much more value on top of it,” he told TechCrunch.

The idea is to speed up the whole ingestion process, allowing a company to see the inefficiencies in their business processes very quickly. Rinke says it starts with ingesting data from sources such as Salesforce or SAP and then creating a visual view of the process flow. There may be hundreds of variants from the main process workflow, but you can see which ones would give you the most value to change, based on the number of times the variation occurs.

Screenshot: Celonis

By packaging the Celonis tools as a cloud service, the company is reducing the complexity of running and managing them. It is also introducing an app store with over 300 pre-packaged options for popular products like Salesforce and ServiceNow, and popular processes like order-to-cash. This should also help get customers up and running much more quickly.

New Celonis App Store. Screenshot: Celonis

The cloud service also includes an Action Engine, which Rinke describes as a big step toward moving Celonis from being purely analytical to operational. “Action Engine focuses on changing and improving processes. It gives workers concrete info on what to do next. For example, in process analysis it would notice on-time delivery isn’t great because order-to-cash is too slow. It helps accelerate changes in system configuration,” he explained.

Celonis Action Engine. Screenshot: Celonis

The new cloud service is available today. Celonis was founded in 2011. It has raised over $77 million. The most recent round was a $50 million Series B on a valuation over $1 billion.

Oct 15, 2018

Truphone, an eSIM mobile carrier that works with Apple, raises another $71M, now valued at $507M

Truphone — a UK startup that provides global mobile voice and data services by way of an eSIM model for phones, tablets and IoT devices — said that it has raised another £18 million ($23.7 million) in funding; plus it said it has secured £36 million ($47 million) more “on a conditional basis” to expand its business after signing “a number of high-value deals.”

It doesn’t specify which deals these are, but Truphone was an early partner of Apple’s to provide eSIM-based connectivity to the iPad — that is, a way to access a mobile carrier without having to swap in a physical SIM card, which has up to now been the standard for GSMA-based networks. Truphone is expanding on this by offering a service for the new iPhone XS and XR models, taking advantage of the dual SIM capability in these devices. Truphone says that strategic partners of the company include Apple (“which chose Truphone as the only carrier to offer global data, voice and text plans on the iPad and iPhone digital eSIM”); Synopsys, which has integrated Truphone’s eSIM technology into its chipset designs; and Workz Group, a SIM manufacturer, which has a license from Truphone for its GSMA-accredited remote SIM provisioning platform and SIM operating system.

The company said that this funding, which was made by way of a rights issue, values Truphone at £386 million ($507 million at today’s rates) post-money. Truphone told TechCrunch that the funding came from Vollin Holdings and Minden Worldwide — two investment firms with ties to Roman Abramovich, the Russian oligarch who also owns the Chelsea football club, among other things — along with unspecified minority shareholders. Collectively, Abramovich-connected entities control more than 80 percent of the company.

We have asked the company for more detail on what the conditions are for the additional £36 million in funding to be released and all it is willing to say is that “it’s KPI-driven and related to the speed of growth in the business.” It’s unclear what the state of the business is at the moment because Truphone has not updated its accounts at Companies House (they are overdue). We have asked about that, too.

For some context, Truphone most recently raised money almost exactly a year ago, when it picked up £255 million also by way of a rights issue, and also from the same two big investors. The large amount that time was partly being raised to retire debt. That deal was done at a valuation of £370 million ($491 million at the time of the deal). Going just on sterling values, this is a slight down-round.

Truphone, however, says that business is strong right now:

“The appetite for our technology has been enormous and we are thrilled that our investors have given us the opportunity to accelerate and scale these groundbreaking products to market,” said Ralph Steffens, CEO, Truphone, in a statement. “We recognised early on that the more integrated the supply chain, the smoother the customer experience. That recognition paid off—not just for our customers, but for our business. Because we have this capability, we can move at a speed and proficiency that has never before been seen in our industry. This investment is particularly important because it is testament not just to our investors’ confidence in our ambitions, but pride in our accomplishments and enthusiasm to see more of what we can do.”

Truphone is one of a handful of providers that is working with Apple to provide plans for the digital eSIM by way of the MyTruphone app. Essentially this will give users an option for international data plans while travelling — Truphone’s network covers 80 countries — without having to swap out the SIMs for their home networks.

The eSIM technology is bigger than the iPhone itself, of course: some believe it could be the future of how we connect on mobile networks. On phones and tablets, it does away with users ordering, and inserting or swapping small, fiddly chips into their devices (that ironically is also one reason that carriers have been resistant to eSIMs traditionally: it makes it much easier for their customers to churn away). And in IoT networks where you might have thousands of connected, unmanned devices, this becomes one way of scaling those networks.

“eSIM technology is the next big thing in telecommunications and the impact will be felt by everyone involved, from consumers to chipset manufacturers and all those in-between,” said Steve Alder, chief business development officer at Truphone. “We’re one of only a handful of network operators that work with the iPhone digital eSIM. Choosing Truphone means that your new iPhone works across the world—just as it was intended.” Of note, Alder was the person who brokered the first iPhone carrier deal in the UK, when he was with O2.

However, one thing to consider when sizing up the eSIM market is that rollout has been slow so far: there are around 10 countries where carriers support eSIM for handsets. Combining that with machine-to-machine deployments, the market is projected to be worth $254 million this year. However, forecasts put the market size at $978 million by 2023, possibly pushed along by hardware companies like Apple making it an increasingly central part of the proposition, initially as a complement to a “home carrier.”

Truphone has not released numbers detailing how many devices are using its eSIM services at the moment — either among enterprises or consumers — but it has said that customers include more than 3,500 multinational enterprises in 196 countries. We have asked for more detail and will update this post as we learn more.

Oct 12, 2018

Anaplan hits the ground running with strong stock market debut up over 42 percent

You might think that Anaplan CEO Frank Calderoni would have had a few sleepless nights this week. His company picked a bad week to go public, as market instability rocked tech stocks. Still, he wasn’t worried, and today the company had by any measure a successful debut, with the stock soaring up over 42 percent. As of 4 pm ET it hit $24.18, up from the IPO price of $17. Not a bad way to launch your company.

Stock Chart: Yahoo Finance

“I feel good because it really shows the quality of the company, the business model that we have and how we’ve been able to build a growing successful business, and I think it provides us with a tremendous amount of opportunity going forward,” Calderoni told TechCrunch.

Calderoni joined the company a couple of years ago, and seemed to emerge from Silicon Valley central casting as a former CFO at Red Hat and Cisco, along with stints at IBM and SanDisk. He said he has often wished that there were a tool like Anaplan around when he was in charge of a several-thousand-person planning operation at Cisco. He indicated that while they were successful, it could have been even more so with a tool like Anaplan.

“The planning phase has not had much change in several decades. I’ve been part of it and I’ve dealt with a lot of the pain. And so having something like Anaplan, I see it really being a disrupter in the planning space because of the breadth of the platform that we have. And then it goes across organizations to sales, supply chain, HR and finance, and as we say, really connects the data, the people and the plan to make for better decision making as a result of all that,” he said.

Calderoni describes Anaplan as a planning and data-analysis tool. In his previous jobs, he says, he spent a ton of time just gathering data and making sure he had the right data, but precious little time on analysis. In his view, Anaplan lets companies concentrate more on the crucial analysis phase.

“Anaplan allows customers to really spend their time on what I call forward planning where they can start to run different scenarios and be much more predictive, and hopefully be able to, as we’ve seen a lot of our customers do, forecast more accurately,” he said.

Anaplan was founded in 2006 and raised almost $300 million along the way. It achieved a lofty valuation of $1.5 billion in its last round, a $60 million raise in 2017. The company has just under 1,000 customers, including Del Monte, VMware, Box and United.

Calderoni says although the company has 40 percent of its business outside the US, there are plenty of markets left to conquer and they hope to use today’s cash infusion in part to continue to expand into a worldwide company.

Oct 12, 2018

IBM files formal JEDI protest a day before bidding process closes

IBM announced yesterday that it has filed a formal protest with the U.S. Government Accountability Office over the structure of the Pentagon’s winner-take-all $10 billion, 10-year JEDI cloud contract. The protest came just a day before the bidding process is scheduled to close. As IBM put it in a blog post, it takes issue with the single-vendor approach. It is certainly not alone.

Just about every vendor short of Amazon, which has remained mostly quiet, has been complaining about this strategy. IBM certainly faces a tough fight going up against Amazon and Microsoft.

IBM doesn’t disguise the fact that it thinks the contract has been written for Amazon to win, and it believes the one-vendor approach simply doesn’t make sense. “No business in the world would build a cloud the way JEDI would and then lock in to it for a decade. JEDI turns its back on the preferences of Congress and the administration, is a bad use of taxpayer dollars and was written with just one company in mind,” IBM wrote in the blog post explaining why it was protesting the deal before a decision was made or the bidding was even closed.

For the record, DOD spokesperson Heather Babb told TechCrunch last month that the bidding is open and no vendor is favored. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said.

Much like Oracle, which filed a protest of its own back in August, IBM is a traditional vendor that was late to the cloud. It began a journey to build a cloud business in 2013 when it purchased Infrastructure as a Service vendor SoftLayer and has been using its checkbook to buy software services to add on top of SoftLayer ever since. IBM has concentrated on building cloud services around AI, security, big data, blockchain and other emerging technologies.

Both IBM and Oracle have a problem with the one-vendor approach, especially one that locks in the government for a 10-year period. It’s worth pointing out that the contract actually is an initial two-year deal with two additional three-year options and a final two-year option. The DOD has left open the possibility that this might not go the entire 10 years.

It’s also worth putting the contract in perspective. While 10 years and $10 billion is nothing to sneeze at, neither is it as market-altering as it might appear, not when some are predicting the cloud will be a $100 billion-a-year market very soon.

IBM uses the blog post as a kind of sales pitch as to why it’s a good choice, while at the same time pointing out the flaws in the single vendor approach and complaining that it’s geared toward a single unnamed vendor that we all know is Amazon.

The bidding process closes today, and unless something changes as a result of these protests, the winner will be selected next April.

Oct 12, 2018

Track PostgreSQL Row Changes Using Public/Private Key Signing

PostgreSQL encryption and authorization

Authorisations and encryption/decryption within a database system establish the basic guidelines for protecting your database by guarding against malicious structural or data changes.

What are authorisations?

Authorisations are the access privileges that control what a user can and cannot do on the database server, for one or more databases. Consider this to be like granting a key that unlocks specific doors, much like a five-star hotel smart card: it gives you access to all the facilities that are meant for you, but doesn’t let you open every door. Privileged staff, on the other hand, have master keys which let them open any door.

Similarly, in the database world, granting permissions secures the system by allowing specific actions by specific users or user groups, while still letting the database administrator perform whatever actions on the database he or she wishes. PostgreSQL provides user management where you can create users, and grant and revoke their privileges.
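
For example, here is a minimal sketch of this in psql (the user and table names are hypothetical):

postgres=# CREATE USER report_user WITH PASSWORD 'secret';
postgres=# GRANT SELECT ON tbl_results TO report_user;                     -- report_user can read
postgres=# REVOKE INSERT, UPDATE, DELETE ON tbl_results FROM report_user;  -- but cannot modify

With these privileges in place, report_user can query tbl_results, but any attempt to change its rows is rejected with a permission error.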

Encryption

Encryption and decryption can protect your data, obfuscate the schema structure and help hide code from prying eyes. Encryption/decryption hides valuable information and ensures that there are no mischievous changes in the code or data that may be considered harmful. In almost all cases, data encryption and decryption happen on the database server. This is more like hiding your stuff somewhere in your room so that nobody can see it, but also making your stuff difficult to access.

PostgreSQL also provides encryption using pgcrypto (a PostgreSQL extension). There are some cases where you don’t want to hide the data, but don’t want people to update it either. You can revoke the privileges to modify the data.
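
As a quick illustration, here is a minimal sketch using pgcrypto’s symmetric functions (the value and passphrase are illustrative):

postgres=# CREATE EXTENSION IF NOT EXISTS pgcrypto;
postgres=# SELECT pgp_sym_encrypt('my secret data', 'passphrase');  -- returns bytea ciphertext
postgres=# SELECT pgp_sym_decrypt(pgp_sym_encrypt('my secret data', 'passphrase'), 'passphrase');
 pgp_sym_decrypt
-----------------
 my secret data
(1 row)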

Data modifications

But what if an admin user modifies the data? How can you identify that the data has changed? If somebody changes the data and you don’t know about it, that is more dangerous than losing your data outright, as you are relying on data which may no longer be valid.

Logs in database systems allow us to track back changes and “potentially” identify what was changed—unless those logs are removed by the administrator.

So consider whether you could leave your stuff openly in your room and, in case of any changes, identify that something was tampered with. In database terms, that translates to data without encryption, but with your very own signature. One option is to add a column to your database table which keeps a checksum for the data, generated on the client side using the user’s own private key. Any change in the data would mean that the checksum doesn’t match anymore, and hence, one can easily identify that the data has changed. The data signing happens on the client side, thereby ensuring that only users with the required private key can insert the data, and anyone with the public key can validate it.

Public/Private Keys

An asymmetric cryptographic system uses pairs of keys: public keys and private keys. A private key is known only to its owner(s) and is used for signing or decrypting data. Public keys are shared with other stakeholders, who may use them to encrypt messages or to validate messages signed by the owner.

Generate Private / Public Key

Private Key

$ openssl genrsa -aes128 -passout pass:password -out key.private.pem
Generating RSA private key, 2048 bit long modulus

Public Key

$ openssl rsa -in key.private.pem -passin pass:password -pubout -out key.public.pem
writing RSA key

Signing Data

Create a sample table tbl_marks and insert a sample row into it. We’ll need an additional column to hold the signature, which will understandably increase the table size.

postgres=# CREATE TABLE tbl_marks (id INTEGER, name TEXT, marks INTEGER, hash TEXT);

Let’s add a row that we’d like to validate.

postgres=# INSERT INTO tbl_marks VALUES(1, 'Alice', 80);

We will select the data and store it into a psql variable using the \gset command (https://www.postgresql.org/docs/current/static/app-psql.html). The complete row will be saved into the “row” psql variable.

postgres=# SELECT row(id,name,marks) FROM tbl_marks WHERE id = 1;
     row   
---------------
(1,Alice,80)
(1 row)
postgres=# \gset
postgres=# SELECT :'row' as row;
     row   
---------------
(1,Alice,80)
(1 row)

Now let’s generate a signature for the data stored in the “row” variable.

postgres=# \set sign_command `echo :'row' | openssl dgst -sha256 -sign key.private.pem | openssl base64 | tr -d '\n' | tr -d '\r'`
Enter pass phrase for key.private.pem:

The signed hash is stored in the “sign_command” psql variable. Let’s now add this to the data row in the tbl_marks table.

postgres=# UPDATE tbl_marks SET hash = :'sign_command' WHERE id = 1;
UPDATE 1

Validating Data

So our data row now contains the data with a valid signature. Let’s try to validate it. We are going to select our data into the “row” psql variable and the signature hash into the “hash” psql variable.

postgres=# SELECT row(id,name,marks), hash FROM tbl_marks;
     row      |                      hash
--------------+-----------------------------------------------
 (1,Alice,80) | U23g3RwaZmbeZpYPmwezP5xvbIs8ILupW7jtrat8ixA ...
(1 row)
postgres=# \gset

Let’s now validate the data using the public key. (The awk step below re-wraps the stored single-line base64 signature into fixed-length lines so that openssl base64 -d can decode it.)

postgres=# \set verify_command `echo :'hash' | awk '{gsub(/.{65}/,"&\n")}1' | openssl base64 -d -out v && echo :'row' | openssl dgst -sha256 -verify key.public.pem -signature v`
postgres=# select :'verify_command' as verify;
  verify    
-------------
Verified OK
(1 row)

Perfect! The data is validated and all this happened on the client side. Imagine somebody doesn’t like that Alice got 80 marks, and they decide to reduce Alice’s marks to 30. Nobody knows if the teacher had given Alice 80 or 30 unless somebody goes and checks the database logs. We’ll give Alice 30 marks now.

postgres=# UPDATE tbl_marks SET marks = 30;
UPDATE 1

The school admin now decides to check that all data is correct before giving out the final results. The school admin has the teacher’s public key and tries to validate the data.

postgres=# SELECT row(id,name,marks), hash FROM tbl_marks;
     row      |                      hash
--------------+--------------------------------------------------
 (1,Alice,30) | yO20vyPRPR+HgW9D2nMSQstRgyGmCxyS9bVVrJ8tC7nh18iYc...
(1 row)
postgres=# \gset

postgres=# \set verify_command `echo :'hash' | awk '{gsub(/.{65}/,"&\n")}1' | openssl base64 -d -out v && echo :'row' | openssl dgst -sha256 -verify key.public.pem -signature v`
postgres=# SELECT :'verify_command' AS verify;
      verify      
----------------------
Verification Failure

As expected, the validation fails. Nobody other than the teacher had the private key to sign that data, and any tampering is easily identifiable.

This might not be the most efficient way of securing a dataset, but it is definitely an option if you want to keep the data unencrypted, yet easily detect any unauthorised changes. All the load for signing and verification is shifted to the client side, thereby reducing the load on the server. It allows only users with private keys to update the data, and anybody with the associated public key to validate it.

The example used psql as the client application for signing, but you can do this from any client which can call the required openssl functions, or use the openssl binaries directly for signing and verification.
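
For instance, a minimal sketch of the same sign-and-verify cycle using only the openssl binaries in a shell, assuming the key pair generated above (the file names are illustrative):

$ echo '(1,Alice,80)' > row.txt
$ openssl dgst -sha256 -sign key.private.pem -out row.sig row.txt
Enter pass phrase for key.private.pem:
$ openssl dgst -sha256 -verify key.public.pem -signature row.sig row.txt
Verified OK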

Oct 11, 2018

How to Fix ProxySQL Configuration When it Won’t Start

With the exception of the three configuration variables described here, ProxySQL will only parse the configuration files the first time it is started, or if the proxysql.db file is missing for some other reason.

If we want to change any of this data, we need to do so via ProxySQL’s admin interface and then save the changes to disk. That’s fine if ProxySQL is running, but what if it won’t start because of these values?
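
For reference, with a running instance such a change would normally look something like this through the admin interface (a sketch assuming the default admin credentials and admin port; mysql-interfaces is one of the variables that only takes effect after a restart):

[root@centos7-pxc57-4 ~]# mysql -u admin -padmin -h 127.0.0.1 -P 6032
mysql> UPDATE global_variables SET variable_value='127.0.0.1:6033' WHERE variable_name='mysql-interfaces';
mysql> SAVE MYSQL VARIABLES TO DISK;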

For example, perhaps we accidentally configured ProxySQL to run on port 3306 and restarted it, but there’s already a production MySQL instance running on this port. ProxySQL won’t start, so we can’t edit the value that way:

2018-10-02 09:18:33 network.cpp:53:listen_on_port(): [ERROR] bind(): Address already in use

We could delete proxysql.db and have ProxySQL reload the configuration files, but that would mean any changes we didn’t mirror into the configuration files would be lost.
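
If you do go that route, keeping a backup copy rather than deleting outright is safer (a sketch; paths assume a default install):

[root@centos7-pxc57-4 ~]# /etc/init.d/proxysql stop
[root@centos7-pxc57-4 ~]# mv /var/lib/proxysql/proxysql.db /var/lib/proxysql/proxysql.db.bak
[root@centos7-pxc57-4 ~]# /etc/init.d/proxysql start   # re-reads the configuration files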

Another option is to edit ProxySQL’s database file using sqlite3:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:3306
sqlite> UPDATE global_variables SET variable_value='127.0.0.1:6033' WHERE variable_name='mysql-interfaces';
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:6033

Or if we have a few edits to make we may prefer to do so with a text editor:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .output /tmp/global_variables
sqlite> .dump global_variables
sqlite> .exit

The above commands will dump the global_variables table into a file in SQL format, which we can then edit:

[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:3306');
[root@centos7-pxc57-4 proxysql]# vim /tmp/global_variables
[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:6033');

Now we need to restore this data. We’ll use the .restore command to empty the table (as we’re restoring from a missing backup), and then read our edited dump back in:

[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .restore global_variables
sqlite> .read /tmp/global_variables
sqlite> .exit

Once we’ve made the change, we should be able to start ProxySQL again:

[root@centos7-pxc57-4 proxysql]# /etc/init.d/proxysql start
Starting ProxySQL: DONE!
[root@centos7-pxc57-4 proxysql]# lsof -i | grep proxysql
proxysql 15171 proxysql 19u IPv4 265881 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 20u IPv4 265882 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 21u IPv4 265883 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 22u IPv4 265884 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 23u IPv4 266635 0t0 TCP *:6032 (LISTEN)

While you are here

You might enjoy my recent post Using ProxySQL to connect to IPV6-only databases over IPV4

You can download ProxySQL from Percona repositories, and you might also want to check out our recorded webinars that feature ProxySQL too.

Oct 11, 2018

New Relic acquires Belgium’s CoScale to expand its monitoring of Kubernetes containers and microservices

New Relic, a provider of analytics and monitoring for a company’s internal and external-facing apps and services to help optimise their performance, is making an acquisition today as it continues to expand a newer area of its business: containers and microservices. The company has announced that it has purchased CoScale, a provider of monitoring for containers and microservices, with a specific focus on Kubernetes.

Terms of the deal — which will include the team and technology — are not being disclosed, as it will not have a material impact on New Relic’s earnings. The larger company is traded on the NYSE (ticker: NEWR), has been on a strong upswing in the last two years, and its current market cap is around $4.6 billion.

Originally founded in Belgium, CoScale had raised $6.4 million and was last valued at $7.9 million, according to PitchBook. Investors included Microsoft (via its ScaleUp accelerator), PMV and the Qbic Fund, two Belgian investors.

“We are thrilled to bring CoScale’s knowledge and deeply technical team into the New Relic fold,” noted Ramon Guiu, senior director of product management at New Relic. “The CoScale team members joining New Relic will focus on incorporating CoScale’s capabilities and experience into continuing innovations for the New Relic platform.”

The deal underscores how New Relic has had to shift in the last couple of years: when the company was founded years ago, application monitoring was a relatively easy task, with the web and a specified number of services the limit of what needed attention. But services, apps and functions have become increasingly complex and now tap data stored across a range of locations and devices, and processing everything generates a lot of computing demand.

New Relic first added container and microservices monitoring to its stack in 2016. That was a somewhat late arrival to the area, but New Relic CEO Lew Cirne believes it came at just the right time, dovetailing New Relic’s changes with wider shifts in the market.

“We think those changes have actually been an opportunity for us to further differentiate and further strengthen our thesis that the New Relic way is really the most logical way to address this,” he told my colleague Ron Miller last month. As Ron wrote, Cirne’s take is that New Relic has always been centered on the code, as opposed to the infrastructure where it’s delivered, and that has helped it make adjustments as the delivery mechanisms have changed.

New Relic already provides monitoring for Kubernetes, Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (EKS), Microsoft Azure Kubernetes Service (AKS) and Red Hat OpenShift, and the idea is that CoScale will help it ramp up across that range, while also adding Docker and OpenShift to the mix, as well as offering new services down the line to serve the DevOps community.

“The visions of New Relic and CoScale are remarkably well aligned, so our team is excited that we get to join New Relic and continue on our journey of helping companies innovate faster by providing them visibility into the performance of their modern architectures,” said CoScale CEO Stijn Polfliet, in a statement. “[Co-founder] Fred [Ryckbosch] and I feel like this is such an exciting space and time to be in this market, and we’re thrilled to be teaming up with the amazing team at New Relic, the leader in monitoring modern applications and infrastructure.”

Oct 11, 2018

Zuora partners with Amazon Pay to expand subscription billing options

Zuora, the SaaS company helping organizations manage payments for subscription businesses, announced today that it had been selected as a Premier Partner in the Amazon Pay Global Partner Program. 

The “Premier Partner” distinction means businesses using Zuora’s billing platform can now easily integrate Amazon’s digital payment system as an option during checkout or recurring payment processes. 

The strategic rationale for Zuora is clear, as the partnership expands the company’s product offering to prospective and existing customers.  The ability to support a wide array of payment methodologies is a key value proposition for subscription businesses that enables them to service a larger customer base and provide a more seamless customer experience.

It also doesn’t hurt to have a deep-pocketed ally like Amazon in a fairly early-stage industry.  With omnipotent tech titans waging war over digital payment dominance, Amazon has reportedly doubled down on efforts to spread Amazon Pay usage, cutting into its own margins and offering incentives to retailers.

As adoption of Amazon Pay spreads, subscription businesses will be compelled to offer the service as an available payment option and Zuora should benefit from supporting early billing integration.

For Amazon Pay, teaming up with Zuora provides direct access to Zuora’s customer base, which caters to tens of millions of subscribers. 

With Zuora minimizing the complexity of adding additional payment options, which can often disrupt an otherwise unobtrusive subscription purchase experience, the partnership with Zuora should help spur Amazon Pay adoption and reduce potential friction.

“By extending the trust and convenience of the Amazon experience to Zuora, merchants around the world can now streamline the subscription checkout experience for their customers,” said Vice President of Amazon Pay, Patrick Gauthier.  “We are excited to be working with Zuora to accelerate the Amazon Pay integration process for their merchants and provide a fast, simple and secure payment solution that helps grow their business.”

The world subscribed

The collaboration with Amazon Pay represents another milestone for Zuora, which completed its IPO in April of this year and is now looking to further differentiate its offering from competing in-house systems or large incumbents in the Enterprise Resource Planning (ERP) space, such as Oracle or SAP.   

Going forward, Zuora hopes to play a central role in ushering a broader shift towards a subscription-based economy. 

Tien Tzuo, founder and CEO of Zuora, told TechCrunch he wants the company to help businesses first realize they should be in the subscription economy and then provide them with the resources necessary to flourish within it.

“Our vision is the world subscribed,” said Tzuo. “We want to be the leading company that has the right technology platform to get companies to be successful in the subscription economy.”

The partnership will launch with publishers The Seattle Times and The Telegraph, both now offering Amazon Pay as a payment method while running on the Zuora platform.

Oct 11, 2018

Percona Live 2019 – Save the Date!

Austin State Capitol

After much speculation following the announcement in Santa Clara earlier this year, we are delighted to announce Percona Live 2019 will be taking place in Austin, Texas.

Save the dates in your diary: May 28-30, 2019!

The conference will take place just after Memorial Day at the Hyatt Regency Austin, on the shores of Lady Bird Lake.

This is also an ideal central location for those who wish to extend their stay and explore what Austin has to offer! Call for papers, ticket sales and sponsorship opportunities will be announced soon, so stay tuned!

In other Percona Live news, we’re less than 4 weeks away from this year’s European conference taking place in Frankfurt, Germany on 5-7 November. The tutorials and breakout sessions have been announced, and you can view the full schedule here. Tickets are still on sale so don’t miss out, book yours here today!
