Talking Drupal #263 – Birds of a Feather

This week we talk about Birds of a Feather, in relation to the world of Drupal Conferences.


  • Stephen – Dryer
  • Nic – Smart Meters
  • John – Bike Path
  • What is a BOF?
  • Potential BOF topics
  • Tips for participating in a BOF
  • Tips for leading a BOF
  • Virtual BOFs
  • BOF experiences


Stephen Cross – @stephencross

John Picozzi – @johnpicozzi

Nic Laflin – @nicxvan


Postgresql_fdw Authentication Changes in PostgreSQL 13


PostgreSQL 13 was released with some cool features, such as index enhancements, partition enhancements, and many others. Along with these enhancements, there are some security-related changes that require some explanation. There are two major ones: one is related to libpq and the other to postgres_fdw. Since postgres_fdw is considered a “reference implementation” for other foreign data wrappers, all other foreign data wrappers tend to follow in its footsteps in development. It is a community-supported foreign data wrapper. This blog will explain the security changes in postgres_fdw.

1 – The superuser can permit non-superusers to establish a password-less connection over postgres_fdw

Previously, only a superuser could establish a password-less connection with PostgreSQL using postgres_fdw; no other password-less authentication method was allowed. It had been observed that in some cases no password is required, so that limitation did not make sense. Therefore, PostgreSQL 13 introduces a new option (password_required) that lets superusers permit non-superusers to use a password-less connection over postgres_fdw.

postgres=# CREATE EXTENSION postgres_fdw;

postgres=# CREATE SERVER postgres_svr FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname 'postgres');

postgres=# CREATE FOREIGN TABLE foo_for(a INT) SERVER postgres_svr OPTIONS(table_name 'foo');

postgres=# create user MAPPING FOR vagrant SERVER postgres_svr;
postgres=# SELECT * FROM foo_for;
(3 rows)

When we perform the same query as a non-superuser, we get this error message:

ERROR:  password is required
DETAIL:  Non-superusers must provide a password in the user mapping

postgres=# CREATE USER nonsup;

postgres=# create user MAPPING FOR nonsup SERVER postgres_svr;

postgres=# grant ALL ON foo_for TO nonsup;

vagrant@vagrant:/work/data$ psql postgres -U nonsup;
psql (13.0)
Type "help" for help.

postgres=> SELECT * FROM foo_for;
2020-09-28 13:00:02.798 UTC [16702] ERROR:  password is required
2020-09-28 13:00:02.798 UTC [16702] DETAIL:  Non-superusers must provide a password in the user mapping.
2020-09-28 13:00:02.798 UTC [16702] STATEMENT:  SELECT * FROM foo_for;
ERROR:  password is required
DETAIL:  Non-superusers must provide a password in the user mapping.

Now perform the same query as the non-superuser after recreating the user mapping with the new parameter password_required set to 'false'.

vagrant@vagrant:/work/data$ psql postgres
psql (13.0)
Type "help" for help.

postgres=# DROP USER MAPPING FOR nonsup SERVER postgres_svr;

postgres=# CREATE USER MAPPING FOR nonsup SERVER postgres_svr OPTIONS(password_required 'false');

vagrant@vagrant:/work/data$ psql postgres -U nonsup;
psql (13.0)
Type "help" for help.

postgres=> SELECT * FROM foo_for;
(3 rows)

2 – Authentication via an SSL certificate

postgres_fdw can now use an SSL certificate for authentication. Two new user mapping options, sslkey and sslcert, were added to support this feature.

Before performing this task, we need to configure SSL for the server and the client. There are many blog posts available (How to Enable SSL authentication for an EDB Postgres Advanced Server and SSL Certificates For PostgreSQL) that cover setting up SSL for PostgreSQL; this blog configures SSL with the minimum requirements.

Step 1: Generate Key in $PGDATA

vagrant@vagrant$  openssl genrsa -des3 -out server.key 1024
Generating RSA private key, 1024 bit long modulus (2 primes)
e is 65537 (0x010001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:

vagrant@vagrant$ openssl rsa -in server.key -out server.key
Enter pass phrase for server.key:
writing RSA key

Step 2:  Change the mode of the server.key

vagrant@vagrant$  chmod og-rwx server.key

Step 3: Generate the certificate

vagrant@vagrant$ openssl req -new -key server.key -days 3650 -out server.crt -x509
Country Name (2 letter code) [AU]:PK
State or Province Name (full name) [Some-State]:ISB
Locality Name (eg, city) []:Islamabad
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Percona
Organizational Unit Name (eg, section) []:Dev
Common Name (e.g. server FQDN or YOUR name) []:localhost
Email Address []

vagrant@vagrant$ cp server.crt root.crt

Now we need to generate the client certificate.

Step 4: Generate a Client key

vagrant@vagrant$ openssl genrsa -des3 -out /tmp/postgresql.key 1024
Generating RSA private key, 1024 bit long modulus (2 primes)
e is 65537 (0x010001)
Enter pass phrase for /tmp/postgresql.key:
Verifying - Enter pass phrase for /tmp/postgresql.key:

vagrant@vagrant$ openssl rsa -in /tmp/postgresql.key -out /tmp/postgresql.key
Enter pass phrase for /tmp/postgresql.key:
writing RSA key

vagrant@vagrant$ openssl req -new -key /tmp/postgresql.key -out /tmp/postgresql.csr
Country Name (2 letter code) [AU]:PK
State or Province Name (full name) [Some-State]:ISB
Locality Name (eg, city) []:Islamabad
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Percona
Organizational Unit Name (eg, section) []:Dev
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:pakistan
An optional company name []:Percona 

Next, sign the certificate request with the server key to produce the client certificate (the file names here are assumed from the connection string used in Step 6):

vagrant@vagrant$ openssl x509 -req -in /tmp/postgresql.csr -days 3650 -CA root.crt -CAkey server.key -CAcreateserial -out /tmp/postgresql.crt

Step 5:  Copy root.crt to the client

vagrant@vagrant$ cp data5555/root.crt /tmp/

Step 6: Test the connection using a certificate

vagrant@vagrant$ psql 'host=localhost port=5555 dbname=postgres user=ibrar sslmode=verify-full sslcert=/tmp/postgresql.crt sslkey=/tmp/postgresql.key sslrootcert=/tmp/root.crt'
psql (13.0)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=> \q

Now we are ready, and we can create a foreign server in PostgreSQL with certificates.

postgres=# CREATE server postgres_ssl_svr foreign data wrapper postgres_fdw options (dbname 'postgres', host 'localhost', port '5555', sslcert '/tmp/postgresql.crt', sslkey '/tmp/postgresql.key', sslrootcert '/tmp/root.crt');

postgres=# create user MAPPING FOR vagrant SERVER postgres_ssl_svr;

postgres=# create foreign table foo_ssl_for(a int) server postgres_ssl_svr options(table_name 'foo');

Now we are ready and set to query a foreign table by postgres_fdw using certificate authentication.

postgres=# select * from foo_ssl_for;

(3 rows)

Note:  Only superusers can modify user mappings options sslcert and sslkey settings.



MySQL 101: Tuning MySQL After Upgrading Memory


In this post, we will discuss what to do when you add more memory to your instance. Adding memory to a server where MySQL is running is a common practice when scaling resources.

First, Some Context

Scaling resources just means adding more resources to your environment, and this can be done in two main ways: vertical scaling and horizontal scaling.

Vertical scaling is increasing hardware capacity for a given instance, thus having a more powerful server, while horizontal scaling is adding more servers, a pretty standard approach for load balancing and sharding.

As traffic grows, working datasets get bigger, and we start to suffer because data that doesn’t fit into memory has to be retrieved from disk. This is a costly operation, even with modern NVMe drives, so at some point we will need one of the scaling solutions we mentioned.

In this case, we will discuss adding more RAM, which is usually the fastest and easiest way to scale hardware vertically; more memory is also probably the main benefit for MySQL.

How to Calculate Memory Utilization

First of all, we need to be clear about which variables allocate memory during MySQL operations; we will cover only the common ones, as there are a bunch of them. We also need to know that some variables allocate memory globally, while others do a per-thread allocation.

For the sake of simplicity, we will cover this topic considering the usage of the standard storage engine: InnoDB.

We have globally allocated variables:

key_buffer_size: a MyISAM setting that should be set to 8-16M; anything above that is just wrong, because we shouldn’t use MyISAM tables unless there is a particular reason. A typical scenario is MyISAM being used only by system tables, which are small (this is valid for versions up to 5.7; in MySQL 8 the system tables were migrated to the InnoDB engine). So the impact of this variable is negligible.

query_cache_size: 0 is default and removed in 8.0, so we won’t consider it.

innodb_buffer_pool_size: the cache where InnoDB places pages to perform operations. The bigger, the better.

Of course, there are others, but their impact is minimal when running with defaults.

Also, there are other variables that are allocated on each thread (or open connection):
read_buffer_size, read_rnd_buffer_size, sort_buffer_size, join_buffer_size, tmp_table_size, and a few others. All of them work very well by default, as the allocation is small and efficient. The main potential issue arises when many connections hold these buffers for some time, adding extra memory pressure. The ideal situation is to control how many connections are being opened (and used) and try to reduce that number to one that is sufficient for the application.

But let’s not lose focus: we have more memory, and we need to know how to tune it properly to make the best use of it.

The most memory-impacting setting we need to focus on is innodb_buffer_pool_size, as this is where almost all the magic happens and it is usually the most significant memory consumer. There is an old rule of thumb that says, “this setting should be set to around 75% of available memory”, and some cloud vendors set this value to total_memory*0.75.

I said “old” because that rule was good when running instances with 8G or 16G of RAM was common, so allocating roughly 6G out of 8G or 13G out of 16G used to be logical.

But what if we run an instance with 100G or even 200G? It’s not uncommon to see this type of hardware nowadays. Should we still use 80G out of 100G, or 160G out of 200G? In other words, should we leave 20G to 40G of memory unallocated for filesystem cache operations? While those filesystem operations are not useless, I don’t see the OS needing more than 4G-8G for this purpose on a dedicated DB server. It is also recommended to use the O_DIRECT flushing method for InnoDB to bypass the filesystem cache.
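
To make the difference concrete, here is a small Python sketch (illustrative numbers only, not a recommendation) comparing the old 75% rule with leaving a fixed headroom for the OS:

```python
# Rough sketch: the old 75% rule vs. leaving a fixed headroom for the OS.
# All numbers are illustrative, not a recommendation.
GB = 1024 ** 3

def bp_75_percent(total_mem):
    """Old rule of thumb: buffer pool = 75% of total memory."""
    return int(total_mem * 0.75)

def bp_fixed_headroom(total_mem, os_headroom=8 * GB):
    """Leave a fixed slice (4G-8G) for the OS and filesystem cache."""
    return total_mem - os_headroom

total = 200 * GB
print(bp_75_percent(total) // GB)      # 150G under the old rule
print(bp_fixed_headroom(total) // GB)  # 192G with an 8G headroom
```

On big boxes the gap between the two approaches is tens of gigabytes, which is exactly the memory the buffer pool could be using.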


Now that we understand the primary variables allocating memory, let’s check a good use case I’m currently working on. Assume this system:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:         385625      307295       40921           4       37408       74865

So roughly 380G of RAM, a nice amount of memory. Now let’s check what is the maximum potential allocation considering max used connections.

A little disclaimer here: while this query is not entirely accurate and can diverge from real results, it gives us a sense of what could potentially be allocated. We could also take advantage of the performance_schema database, but that may require enabling some instruments that are disabled by default:

mysql > show global status like 'max_used_connections';
| Variable_name        | Value |
| Max_used_connections |    67 |
1 row in set (0.00 sec)

So with a maximum of 67 connections used, we can get:

mysql > SELECT ( @@key_buffer_size
-> + @@innodb_buffer_pool_size
-> + 67 * (@@read_buffer_size
-> + @@read_rnd_buffer_size
-> + @@sort_buffer_size
-> + @@join_buffer_size
-> + @@tmp_table_size )) / (1024*1024*1024) AS MAX_MEMORY_GB;
| MAX_MEMORY_GB |
| 316.4434      |
1 row in set (0.00 sec)
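
The same worst-case estimate can be reproduced outside the server. Here is a minimal Python sketch of the formula above; the buffer values are illustrative defaults, so substitute the ones from your own SHOW VARIABLES output:

```python
# Worst-case memory estimate mirroring the SELECT above.
# All buffer values below are illustrative defaults; substitute your own.
GB = 1024 ** 3
MB = 1024 ** 2

def max_memory_gb(buffer_pool, max_used_connections,
                  key_buffer=16 * MB,
                  read_buffer=128 * 1024,
                  read_rnd_buffer=256 * 1024,
                  sort_buffer=256 * 1024,
                  join_buffer=256 * 1024,
                  tmp_table=16 * MB):
    # Global buffers count once; per-thread buffers count once per connection.
    per_thread = (read_buffer + read_rnd_buffer + sort_buffer
                  + join_buffer + tmp_table)
    return (key_buffer + buffer_pool
            + max_used_connections * per_thread) / GB

# 310G buffer pool and 67 max used connections, as in the example above.
print(round(max_memory_gb(310 * GB, 67), 2))
```

Note how the per-thread buffers barely move the total with only 67 connections; the picture changes quickly with hundreds of connections.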

So far, so good, we are within memory ranges, now let’s see how big the innodb_buffer_pool_size is and if it is well sized:

mysql > SELECT (@@innodb_buffer_pool_size) / (1024*1024*1024) AS BUFFER_POOL_SIZE;
| BUFFER_POOL_SIZE |
| 310.0000         |
1 row in set (0.01 sec)

So the buffer pool is 310G, roughly 82% of total memory, and total potential usage is around 84%, which leaves about 60G of memory unused by MySQL. Well, used by the filesystem cache, which, in the end, is not used by InnoDB.

Ok now, let’s get to the point, how to properly configure memory to be used effectively by MySQL. From pt-mysql-summary we know that the buffer pool is fully filled:

Buffer Pool Size | 310.0G
Buffer Pool Fill | 100%

Does this mean we need more memory? Maybe. We know the working dataset doesn’t fit in memory (the very reason why we increased memory size), so let’s check how many disk operations we have using this command:

mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
| Innodb_buffer_pool_read_requests | 99857480858|
| Innodb_buffer_pool_reads         | 598600690  |
| Innodb_buffer_pool_read_requests | 274985     |
| Innodb_buffer_pool_reads         | 1602       |
| Innodb_buffer_pool_read_requests | 267139     |
| Innodb_buffer_pool_reads         | 1562       |
| Innodb_buffer_pool_read_requests | 270779     |
| Innodb_buffer_pool_reads         | 1731       |
| Innodb_buffer_pool_read_requests | 287594     |
| Innodb_buffer_pool_reads         | 1567       |
| Innodb_buffer_pool_read_requests | 282786     |
| Innodb_buffer_pool_reads         | 1754       |

Innodb_buffer_pool_read_requests: page reads satisfied from memory (good)
Innodb_buffer_pool_reads: page reads from disk (bad)

As you may notice, we still get some reads from the disk, and we want to avoid them, so let’s increase the buffer pool size to 340G (90% of total memory) and check again:

mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
| Innodb_buffer_pool_read_requests | 99937722883 |
| Innodb_buffer_pool_reads         | 599056712   |
| Innodb_buffer_pool_read_requests | 293642      |
| Innodb_buffer_pool_reads         | 1           |
| Innodb_buffer_pool_read_requests | 296248      |
| Innodb_buffer_pool_reads         | 0           |
| Innodb_buffer_pool_read_requests | 294409      |
| Innodb_buffer_pool_reads         | 0           |
| Innodb_buffer_pool_read_requests | 296394      |
| Innodb_buffer_pool_reads         | 6           |
| Innodb_buffer_pool_read_requests | 303379      |
| Innodb_buffer_pool_reads         | 0           |

Now we are barely going to disk, and IO pressure was released; this makes us happy –  right?


If you increase the memory of a server running MySQL, you mostly need to focus on innodb_buffer_pool_size, as it is the most critical variable to tune. Allocating 90% to 95% of total available memory on big systems is not bad at all, as the OS requires only a few GB to run correctly, and a few more for swap should be enough to run without problems.

Also, check your maximum connections required (and used), as this is a common cause of memory issues. If you need to run with 1000 connections open, then allocating 90% of memory to the buffer pool may not be possible, and some additional actions may be required (e.g., adding a proxy layer or a connection pool).

From MySQL 8, we have a new variable called innodb_dedicated_server, which auto-calculates the memory allocation. While this variable is really useful as a first approach, it may under-allocate memory in systems with more than 4G of RAM, as it sets buffer pool size = (detected server memory * 0.75); on a 200G server, that leaves only 150G for the buffer pool.
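
For comparison, the auto-sizing rule can be sketched in a few lines of Python (simplified from the behavior documented in the MySQL manual):

```python
# Simplified sketch of innodb_dedicated_server buffer pool sizing
# (per the MySQL docs: 128M below 1G, 50% of RAM up to 4G, 75% above).
GB = 1024 ** 3

def dedicated_server_buffer_pool(server_mem):
    if server_mem < 1 * GB:
        return 128 * 1024 * 1024
    if server_mem <= 4 * GB:
        return int(server_mem * 0.5)
    return int(server_mem * 0.75)

print(dedicated_server_buffer_pool(200 * GB) // GB)  # 150G on a 200G server
```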


Vertical scaling is the easiest and fastest way to improve performance, and it is also cheaper – but not magical. Tuning variables properly requires analysis and understanding of how memory is being used. This post focused on the essential variables to consider when tuning memory allocation, specifically innodb_buffer_pool_size and max_connections. Don’t over-tune when it’s not necessary and be cautious of how these two affect your systems.


Coralogix lands $25M Series B to rethink log analysis and monitoring

Logging and monitoring tends to be an expensive endeavor because of the sheer amount of data involved. Companies are therefore forced to pick and choose what they monitor, limiting what they can see. Coralogix wants to change that by offering a more flexible pricing model, and today the company announced a $25 million Series B and a new real-time analytics solution called Streama.

First the funding. The round was led by Red Dot Capital Partners and O.G. Tech Ventures, with help from existing investors Aleph VC, StageOne Ventures, Janvest Capital Partners and 2B Angels. Today’s round, which comes after the startup’s $10 million Series A last November, brings the total to $41.2 million raised, according to the company.

When we spoke to Coralogix CEO and co-founder Ariel Assaraf last year about the A round, he described his company as more of an intelligent application performance monitoring platform with some security log analytics.

Today, the company announced Streama, which has been in Alpha since July. Assaraf says companies can pick and choose how they monitor and pay only for the features they use. That means if a particular log is only tangentially important, a customer can set it to low priority and save money, and direct the budget toward more important targets.

As the pandemic has taken hold, he says that companies are appreciating the ability to save money on their monitoring costs, and directing those resources elsewhere in the company. “We’re basically building out this full platform that is going to be inside-centric and value-centric instead of volume or machine count-centric in its pricing model,” Assaraf said.

Assaraf differentiates his company from others out there like Splunk, Datadog and Sumo Logic, saying his is a more modern approach to the problem that simplifies the operations. “All these complicated engineering things are being abstracted away in a simple way, so that any user can very quickly create savings and demonstrate that it’s [no longer] an engineering problem, it’s more of a business value question,” he explained.

Since the A round, the company has grown from 25 to 60 people spread between Israel and the U.S., and it plans to grow to 120 people in the next year with the new funding. When it comes to diversity in hiring, he says Israel is fairly homogeneous, so there the focus is on gender parity, something he says he is working to achieve. The U.S. operation is still relatively small, with just 12 employees now, but it will be expanding in the next year, and that’s something he says he will need to think about as he hires.

As part of that hiring spree, he wants to kick his sales and marketing operations into higher gear and start spending more on those areas as the company grows.


Gusto is expanding from payroll into a full suite financial wellness platform

When we caught up with Gusto last year, the small business payroll startup had just raised $200 million and was launching a new office in New York City. Over the past few years though, Gusto has also been accruing new features outside of its original payroll product, features that redefine the borders between payroll and financial wellness, and in the process, are blurring the lines of the classic fintech market map.

Today, the company announced a slew of new offerings that it hopes will give employees better financial and health options through their employers.

The most interesting one here is a tool the company is calling Gusto Wallet. It’s an app and collection of products for employees paid through Gusto that basically acts as a mini bank and financial health monitor. It offers an interest-bearing cash account (called, appropriately enough, Cash Accounts) which can also divert a small slice of each paycheck into a user’s savings, similar to products like Acorns and Digit. Cash stored in the account earns 0.34% interest today, and you can also get a Gusto debit card to spend it.

Gusto’s app gives you access to financial services and wellness tools. Image via Gusto

For employees, what’s interesting here is that these services are offered essentially for free: Gusto makes money on its payroll services from employers as a software subscription fee, and so it offers financial services like these as an inducement to keep employers and employees engaged. Gusto hopes that this can keep debt low for employees, and also offer them more financial stability, particularly as businesses open and close in the wake of COVID-19.

In addition, Gusto Wallet also offers “Cashout,” which can accelerate a payday ahead of time based on the pay history of an employee. Rather than securing a high-cost payday loan, the product is designed to help users smooth out a bit of their income if they need their paycheck a bit ahead of their actual direct deposit. It’s also free of fees.

Gusto CEO Joshua Reeves said that “One of the biggest problems is people are oftentimes living paycheck-to-paycheck — they’re either not saving money, or they’re getting stuck in debt accessing things like overdraft fees, or credit card debt, or payday loans.” The hope with Gusto Wallet is that its easy availability and low costs not only attract users, but leave them in much better financial shape than before.

What’s interesting to me is placing these new features in the wider scope of the fintech landscape. It seems that every week, there is another startup launching a consumer credit card, or a new debt product, or another savings app designed to help consumers with their finances. And then every week, we hear about the credit card startup launching a new savings account, or the savings app launching an insurance product.

The math is simple: It’s very, very hard to acquire a customer in financial services, and it’s so competitive that the cost per acquired customer is extremely high (think hundreds of dollars or more per customer). For most of these startups, once you have a customer using one financial product, much like traditional banks, they want you to use all of their other products as well to maximize customer value and amortize those high CAC costs.

Gusto is an interesting play here precisely because it starts at the payroll layer. Banks and other savings apps often try to get you to send your paycheck to their service, since if your money resides there, you are much more likely to use that service’s features. Gusto intercepts that transaction and owns it itself. Plus, because it ultimately is selling subscriptions to payroll and not financial services, it can offer many of these features outright for free.

Reeves said that “This is a future that just seems inevitable, like all this information right now is sitting in silos. How do we give the employee more of that ownership and access through one location?” By combining payroll, 401(k) planning, savings accounts, debit cards and more in one place, Gusto is hoping to become the key financial health tool for its employee end users.

That’s the financial side. In addition, Gusto announced today that it is now helping small businesses set up health reimbursement accounts. Under a provision passed by Congress a few years ago, small businesses have a unique mechanism (called QSEHRA) to offer health reimbursement to their employees. That program is riven with technicalities and administrivia though. Gusto believes its new offering will help more small businesses create these kinds of programs.

Given Gusto’s small business focus, this year has seen huge changes thanks to the global pandemic. “It’s been an inspiring, challenging, motivating [and] galvanizing time for the company,” Reeves said. “Normally, I would say [we have] three home bases: New York, SF [and] Denver. Now we have 1,400 home bases.” That hasn’t stopped the company’s mission, and if anything, has brought many of its employees closer to the small businesses they ultimately serve.

Gusto team, with CEO Joshua Reeves on the left of the second row. Image via Gusto


VTEX raises $225M at a $1.7B valuation for e-commerce solutions aimed at retailers and brands

Retailers and consumer brands are focused more than ever in their histories on using e-commerce channels to connect with customers: the global health pandemic has disrupted much of their traditional business in places like physical stores, event venues and restaurants, and vending machines, and accelerated the hunt for newer ways to sell goods and services. Today, a startup that’s been helping them build those bridges, specifically to expand into newer markets, is announcing a huge round of funding, underscoring the demand.

VTEX, which builds e-commerce solutions and strategies for retailers like Walmart and huge consumer names like AB InBev, Motorola, Stanley Black & Decker, Sony, Whirlpool, Coca-Cola and Nestlé, has raised $225 million in new funding, valuing the company at $1.7 billion post-money.

The funding is being co-led by two investors, Tiger Global and Lone Pine Capital, with Constellation, Endeavour Catalyst and SoftBank also participating. It’s a mix of investors, with two leads, that offers a “signal” of what might come next for the startup, said Amit Shah, the company’s chief strategy officer and general manager for North America.

“We’ve seen them invest in big rounds right before companies go public,” he said. “Now, that’s not necessarily happening here right now, but it’s a signal.” The company has been profitable and plans to continue to be, Shah said (making it one example of a SoftBank investment that hasn’t gone sour). Revenues this year are up 114% with $8 billion in gross merchandise volume (GMV) processed over platforms it’s built.

Given that VTEX last raised money less than a year ago — a $140 million round led by SoftBank’s Latin American Innovation Fund — the valuation jump for the startup is huge. Shah confirmed to us that it represents a 4x increase on its previous valuation (which would have been $425 million).

The interest back in November from SoftBank’s Latin American fund stemmed from VTEX’s beginnings.

The company got its start building e-commerce storefronts and strategies for businesses hoping to break into Brazil — the B in the world’s biggest emerging “BRIC” markets — and the rest of Latin America. It made its name building Walmart’s e-commerce operation in the region, and it has continued to help run and develop that operation even after Walmart divested the asset; it’s now working with Walmart in other regions outside the US, too, he added.

But since then, while the Latin American arm of the business has continued to thrive, the company has capitalized both on the funding it had picked up, and the current global climate for e-commerce solutions, to expand its business into more markets, specifically North America, EMEA and most recently Asia.

“We are today even more impressed by the quality and energy of the VTEX team than we were when we invested in the previous round,” said Marcello Silva at Constellation. “The best is yet to come. VTEX’s team is stronger than ever, VTEX’s product is stronger than ever, and we are still in the early stages of ecommerce penetration. We could not miss the opportunity to increase our exposure.”

Revenues were growing at a rate of 50% a year before the pandemic, ahead of this year’s 114% growth, Shah said. “Of course, we would prefer Covid-19 not to be here, but it has had a good effect on our business. The arc of e-commerce growth has impacted revenues and created that additional level of investor interest.”

VTEX’s success has hinged not just on catering to companies that have up to now not prioritized their online channels, but in doing so in a way that is more unified.

Consumer packaged goods companies have been in a multi-faceted bind because of the fragmented way in which they have grown. A drinks brand will not only manufacture on a local level (sometimes, as in the case of, say, Coca-Cola, using different ingredient formulations), but it will often have products that are only sold in select markets; and because the audiences are different, it devises marketing and distribution strategies on a local level, too.

On top of all that, products like these have long relied on channels like retailers, restaurants, vending machines and more to get their products into the hands of consumers.

These days, of course, all of that has been disrupted: all the traditional channels they would have used to sell things are now either closed or seeing greatly reduced custom. And as for marketing: the rise of social networks has led to a globalization in messaging, where something can go viral all over the world and marketing therefore knows no regional boundaries.

So, all of this means that brands have to rethink everything about how they sell their products, and that’s where a company like VTEX steps in, building strategies and solutions that can be used across multiple regions. Among typical deals, it’s been working with AB InBev to develop a global commerce platform covering 50 countries, replacing multiple products from other vendors (typical competitors to VTEX include SAP, Shopify and Magento) and giving brands and others a viable route to market that doesn’t cut in the likes of Amazon.

“CPG companies are seeking to standardize and make their businesses and lives a little easier,” Shah said. Typical work that it does includes building marketplaces for retailers, or new e-commerce interfaces so that brands can better supply online and offline retailers, or sell directly to customers — for example, with new ways of ordering products to get delivered by others. Shah said that some 200 marketplaces have now been built by VTEX for its customers.

(Shah himself, it’s worth pointing out, has a pedigree in startups and in e-commerce. He founded an e-commerce analytics company called Jirafe, which was acquired by SAP, where he then became the chief revenue officer of SAP Hybris.)

“We are excited to grow quickly in new and existing markets, and offer even more brands a platform that embraces the future of commerce, which is about being collaborative, leveraging marketplaces, and delivering customer experiences that are second-to-none,” said Mariano Gomide de Faria, VTEX co-founder and co-CEO, in a statement. “This injection of funding will undoubtedly support us in achieving our mission to accelerate digital commerce transformation around the world.”


Papaya Global raises $40M for a payroll and HR platform aimed at global workforces

Workforces are getting more global, and people who work day in, day out for organizations don’t always sit day in, day out in a single office, in a single country, to get a job done. Today, one of the startups building HR to help companies provision services for and manage those global workers better is announcing a funding round to capitalise on a surge in business that it has seen in the last year — spurred in no small part by the global health pandemic, the impact it’s had on travel and the way it has focused the minds of companies to get their cloud services and workforce management in order.

Papaya Global, an Israeli startup that provides cloud-based payroll, as well as hiring, onboarding and compliance services for organizations that employ full-time, part-time, or contract workers outside of their home country, has raised $40 million in a Series B round of funding led by Scale Venture Partners. Workday Ventures — the corporate investment arm of the HR company — Access Industries (via its Israeli vehicle Claltech), and previous investors Insight Partners, Bessemer Venture Partners, New Era Ventures, Group 11 and Dynamic Loop also participated.

The money comes less than a year after its Series A of $45 million and follows 300% year-over-year growth since 2016. It has now raised $95 million and is not disclosing its valuation. But Eynat Guez, the CEO who co-founded the company that year with Ruben Drong and Ofer Herman, said in an interview that it’s 5x the valuation it had in its round last year.

Its customers include fast-growing startups (precisely the kind of customer that not only has global workforces, but is expanding its employee base quickly) like OneTrust, nCino and Hopin, as well as major corporates like Toyota, Microsoft, Wix and General Dynamics.

Guez said Papaya Global was partly born out of the frustrations she herself had with HR solutions — she’s worked in the field for years. Different countries have different employment regulations, varied banking rules, completely different norms in terms of how people get paid, and so on. While there have been some really modern tools built for local workforces — Rippling, Gusto and Zenefits now going head to head with incumbents like ADP — they weren’t built to address these issues.

Other HR people who have dealt with international workers would understand her pain; those who control the purse strings might have been less aware of the fragmentation. All that changed in the last eight months (and for the foreseeable future), a period when companies have had to reassess everything about how they work to make sure that they can get through the current period without collapsing.

“The major impact of COVID-19 for us has been changing attitudes,” said Guez. “People usually think that payroll works by itself, but it’s one of the more complex parts of the organization, covering major areas like labor, accounting, tax. Eight months ago, a lot of clients thought, it just happens. But now they realize they didn’t have control of the data, some don’t even have a handle on who is being paid.”

As people moved into and out of jobs, and out of offices into working from home, as the pandemic kicked off, some operations fell apart as a result, she said. “Payroll continuity is like IT continuity, and so all of a sudden when COVID started its march, we had prospects calling us saying they didn’t have data on, for example, their Italian employees, and the office they were using wasn’t answering the phone.”

Guez herself is walking the walk on the remote working front. Papaya Global itself has offices around the world, and Guez is normally based in Tel Aviv. But our interview was conducted with her in the Maldives. She said she and her family decided to decamp elsewhere before Israel went into a second lockdown, which was very tough to handle in a small flat with small children. Working anywhere, as we have found out, can work.

The company is not the only one that has identified this problem and is building to help organizations handle global workforces. In fact, even as the unemployment, furlough and layoff crunch affects an inordinate number of people and the job market slumps, a rush of these startups, along with other HR companies, has been announcing significant funding rounds this year on the back of surges in business.

Others that have raised money during the pandemic include Deel, which like Papaya Global is also addressing the complexities of running global workforces; Turing, which helps with sourcing and then managing international teams; Factorial with its platform targeting specifically SMBs; Lattice focused on the bigger challenges of people management; and Rippling, the second act from Zenefits’ Parker Conrad.

“Papaya Global’s accelerating growth is a testament to their top-notch executive leadership as well as their ability to streamline international payroll management, a first for many enterprises that have learned to live with highly manual payroll processes,” said Rory O’Driscoll, a partner at Scale Venture Partners, in a statement. “The complexity and cost of managing multi-region workforces cannot be understated. Eynat and her team are uniquely serving their customers’ needs, bringing an advanced SaaS platform into a market long-starved for more effective software solutions.”


Salesforce creates for-profit platform to help governments distribute COVID vaccine when it’s ready

For more than 20 years, Salesforce has been selling cloud business software, but it has also used the same platform to build ways to track other elements besides sales, marketing and service information, including the platform it created earlier this year to help companies develop and organize a safe way to begin returning to work during the pandemic.

Today, the company announced it was putting that same platform to work to help distribute and track a vaccine whenever it becomes available, along with related materials like syringes that will be needed to administer it. The plan is to use Salesforce tools to solve logistical problems around distributing the vaccine, as well as data to understand where it could be needed most and the efficacy of the drug, according to Bill Patterson, EVP and general manager for CRM applications at Salesforce.

“The next wave of the virus phasing, if you will, will be [when] a vaccine is on the horizon, and we begin planning the logistics. Can we plan the orchestration? Can we measure the inventory? Can we track the outcomes of the vaccine once it reaches the public’s hands,” Patterson asked.

Salesforce has put together a new product called for Vaccines to put its platform to work to help answer these questions, which Patterson says ultimately involves logistics and data, two areas that are strengths for Salesforce.

The platform includes the core command center along with additional components for inventory management, appointment management, clinical administration, outcome monitoring and public outreach.

While this all sounds good, what Salesforce lacks of course is expertise in drug distribution or public health administration, but the company believes that by creating a flexible platform with open data, government entities can share that data with other software products outside of the Salesforce family.

“That’s why it’s important to use an open data platform that allows for aggregate data to be quickly summarized and abstracted for public use,” he said. He points to the fact that some states are using Tableau, the company that Salesforce bought last year for a tidy $15.7 billion, to track other types of COVID data.

“Many states today are running all their COVID testing and positive case reporting through the Tableau platform. We want to do the same kind of exchange of data with things like inventory management [for a vaccine],” he said.

While this sounds like a public service kind of activity, Salesforce intends to sell this product to governments to manage vaccines. Patterson says that to run a system like this at what they envision will be enormous scale, it will be a service that governments have to pay for to access.

This isn’t the first time that Salesforce has created a product that falls somewhat outside of the standard kind of business realm, but which takes advantage of the Salesforce platform. Last year it developed a tool to help companies measure how sustainable they are being. While the end goal is positive, just like for Vaccines and the broader platform, it is a tool that they charge for to help companies implement and measure these kinds of initiatives.

The tool set is available starting today. Pricing will vary depending on the requirements and components of each government entity.

The real question here is: should this kind of distribution platform be created by a private company like Salesforce for profit, or would it be better suited to an open-source project, where a community of developers could create the software and distribute it for free?


How Twilio built its own conference platform

Twilio’s annual customer conference was supposed to happen in May, but like everyone else who had live events scheduled for this year, it ran smack-dab into COVID-19 and was forced to cancel. That left the company wondering how to reimagine the event online. It began an RFP process to find a vendor to help, but eventually concluded it could use its own APIs and built a platform on its own.

That’s a pretty bold move, but one of the key issues facing Twilio was how to recreate the in-person experience of the show floor where people could chat with specific API experts. After much internal deliberation, they realized that was what their communication API products were designed to do.

Once they committed to going their own way, they began a long process that involved figuring out must-have features, building consensus in the company, creating a development and testing cycle and finding third-party partnerships to help them when they ran into the limitations of their own products.

All that work culminates this week when Twilio holds its annual Signal Conference online Wednesday and Thursday. We spoke to In-Young Chang, director of experience at Twilio, to learn how this project came together.

Chang said once the decision was made to go virtual, the biggest issue for them (and for anyone putting on a virtual conference) was how to recreate that human connection that is a natural part of the in-person conference experience.

The company’s first step was to put out a request for proposals with event software vendors. She said that the problem was that these platforms hadn’t been designed for the most part to be fully virtual. At best, they had a hybrid approach, where some people attended virtually, but most were there in person.

“We met with a lot of different vendors, vendors that a lot of big tech companies were using, but there were pros to some of them, and then cons to others, and none of them truly fit everything that we needed, which was connecting our customers to product experts [like we do at our in-person conferences],” Chang told TechCrunch.

Even though they had winnowed the proposals down to a manageable few, they weren’t truly satisfied with what the event software vendors were offering, and they came to a realization.

“Either we find a vendor who can do this fully custom in three months’ time, or [we do it ourselves]. This is what we do. This is in our DNA, so we can make this happen. The hard part became how do you prioritize because once we made the conference fully software-based, the possibilities were endless,” she said.

All of this happened pretty quickly. The team interviewed the vendors in May, and by June made the decision to build it themselves. They began the process of designing the event software they would be using, taking advantage of their own communications capabilities, first and foremost.

The first thing they needed to do was meet with various stakeholders inside the company and figure out the must-have features in their custom platform. She said that reeling in people’s ambitions for version 1.0 of the platform was part of the challenge that they faced trying to pull this together.

“We only had three months. It wasn’t going to be totally perfect. There had to be some prioritization and compromises, but with our APIs we [felt that we] could totally make this happen,” Chang said.

They started meeting with different groups across the company to find out their must-haves. They knew that they wanted to recreate this personal contact experience. Other needs included typical conference activities like being able to collect leads and build agendas and the kinds of things you would expect to do at any conference, whether in-person or virtual.

As the team met with the various constituencies across the company, they began to get a sense of what they needed to build and they created a priorities document, which they reviewed with the Signal leadership team. “There were some hard conversations and some debates, but everyone really had goodwill toward each other knowing that we only had a few months,” she said.

Signal Concierge Agent for virtual Twilio Signal Conference

Signal Concierge Agent helps attendees navigate the online conference. Image Credits: Twilio

The team believed it could build a platform that met the company’s needs, but with only 10 developers working on it, they had a huge challenge to get it done in three months.

With one of the major priorities putting customers together with the right Twilio personnel, they decided to put their customer service platform, Twilio Flex, to work on the problem. Flex combines voice, messaging, video and chat in one interface. While the conference wasn’t a pure customer service issue, they believed that they could leverage the platform to direct requests to people with the right expertise and recreate the experience of walking up to the booth and asking questions of a Twilio employee with a particular skill set.

“Twilio Flex has Taskrouter, which allows us to assign agents unique skills-based characteristics, like you’re a video expert, so I’m going to tag you as a video expert. If anyone has a question around video, I know that we can route it directly to you,” Chang explained.

They also built a bot companion, called Signal Concierge, that moves through the online experience with each attendee and helps them find what they need, applying their customer service approach to the conference experience.

“Signal Concierge is your conference companion, so that if you ever have a question about what session you should go to next or [you want to talk to an expert], there’s just one place that you have to go to get an answer to your question, and we’ll be there to help you with it,” she said.

The company couldn’t do everything with Twilio’s tools, so it turned to third parties in those cases. “We continued our partnership with Klik, a conference data and badging platform all available via API. And Perficient, a Twilio SI partner we hired to augment the internal team to more quickly implement the custom Twilio Flex experience in the tight time frame we had. And Plexus, who provided streaming capabilities that we could use in an open-source video player,” she said.

They spent September testing what they built, making sure the Signal Concierge was routing requests correctly and all the moving parts were working. They open the virtual doors on Wednesday morning and get to see how well they pulled it off.

Chang says she is proud of what her team pulled off, but recognizes this is a first pass and future versions will have additional features that they didn’t have time to build.

“This is V1 of the platform. It’s not by any means exactly what we want, but we’re really proud of what we were able to accomplish from scoping the content to actually building the platform within three months’ time,” she said.


Using Security Threat Tool and Alertmanager in Percona Monitoring and Management

With version 2.9.1 of Percona Monitoring and Management (PMM) we delivered some new improvements to its Security Threat Tool (STT).

Aside from an updated user interface, you now have the ability to run STT checks manually at any time, instead of waiting for the normal 24-hour check cycle. This can be useful if, for example, you want to confirm that an alert has cleared after you fixed the underlying issue. Moreover, you can now also temporarily mute (for 24 hours) alerts you may want to work on later.

But how do these actions work?


In a previous article, we briefly explained how the STT back end publishes alerts to Alertmanager so they appear in the STT section of PMM.

Now, before we uncover the details of that, please bear in mind that PMM’s built-in Alertmanager is still under development. We do not recommend you use it directly for your own needs, at least not for now.

With that out of the way, let’s see the details of the interaction with Alertmanager.

To retrieve the current alerts, the interface calls Alertmanager’s API, filtering for non-silenced alerts:

GET /alertmanager/api/v2/alerts?silenced=false[...]

This call returns a list of active alerts, which looks like this:

[
  {
    "annotations": {
      "description": "MongoDB admin password does not meet the complexity requirement",
      "summary": "MongoDB password is weak"
    },
    "endsAt": "2020-09-30T14:39:03.575Z",
    "startsAt": "2020-04-20T12:08:48.946Z",
    "labels": {
      "service_name": "mongodb-inst-rpl-1",
      "severity": "warning",
      ...
    }
  }
]

Active alerts have a startsAt timestamp at the current time or in the past, while the endsAt timestamp is in the future. The other properties contain descriptions and the severity of the issue the alert is about. The labels, in particular, uniquely identify a specific alert and are used by Alertmanager to deduplicate alerts. (There are also other “meta” properties, but they are out of the scope of this article.)
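To make the timestamp logic concrete, here is a small Python sketch (a hypothetical helper, not part of PMM; the field names and timestamp format follow the example response above) that decides whether an alert is currently active:

```python
from datetime import datetime, timezone


def parse_ts(ts: str) -> datetime:
    # Alertmanager returns RFC 3339 timestamps like "2020-04-20T12:08:48.946Z".
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def is_active(alert: dict, now: datetime) -> bool:
    # An alert is active while `now` falls between startsAt and endsAt.
    return parse_ts(alert["startsAt"]) <= now < parse_ts(alert["endsAt"])


alert = {
    "startsAt": "2020-04-20T12:08:48.946Z",
    "endsAt": "2020-09-30T14:39:03.575Z",
    "labels": {"service_name": "mongodb-inst-rpl-1", "severity": "warning"},
}

now = datetime(2020, 6, 1, tzinfo=timezone.utc)
print(is_active(alert, now))  # True: June 2020 lies inside the alert's window
```

Deduplication then works on the labels: two alerts with identical label sets are treated by Alertmanager as the same alert.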

Force Check

Clicking on “Run DB checks” will trigger an API call to the PMM server, which will execute the checks workflow on the PMM back end (you can read more about it here). At the end of that workflow, alerts are sent to Alertmanager through a POST call to the same endpoint used to retrieve active alerts. The call payload has the same structure as shown above.

Note that while you could create alerts manually this way, that’s highly discouraged, since it could negatively impact STT alerts. If you want to define your own rules for Alertmanager, PMM can integrate with an external Alertmanager, independent of STT. You can read more in Percona Monitoring and Management, Meet Prometheus Alertmanager.


Alertmanager has the concept of Silences. To temporarily mute an alert, the front end generates a “silence” payload starting from the metadata of the alert the user wants to mute and calls the silence API on Alertmanager:

POST /alertmanager/api/v2/silences

An example of a silence payload:

{
  "matchers": [
    { "name": "service_name", "value": "mongodb-inst-rpl-1", "isRegex": false },
    { "name": "severity", "value": "warning", "isRegex": false }
  ],
  "startsAt": "2020-09-14T20:24:15Z",
  "endsAt": "2020-09-15T20:24:15Z",
  "createdBy": "someuser",
  "comment": "reason for this silence",
  "id": "a-silence-id"
}
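As a rough sketch of how such a payload could be assembled (a hypothetical Python helper, not PMM code; the matcher shape follows the Alertmanager v2 silence payload shown above), one could turn an alert’s labels into exact-match matchers and add a 24-hour window:

```python
from datetime import datetime, timedelta, timezone


def build_silence(labels: dict, created_by: str, comment: str, hours: int = 24) -> dict:
    # Build an Alertmanager v2 silence payload that matches all of the
    # alert's labels exactly (isRegex: false) for the next `hours` hours.
    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return {
        "matchers": [
            {"name": name, "value": value, "isRegex": False}
            for name, value in labels.items()
        ],
        "startsAt": now.strftime(fmt),
        "endsAt": (now + timedelta(hours=hours)).strftime(fmt),
        "createdBy": created_by,
        "comment": comment,
    }


silence = build_silence(
    {"service_name": "mongodb-inst-rpl-1", "severity": "warning"},
    created_by="someuser",
    comment="muting while we rotate the password",
)
# The resulting dict is what would be POSTed to /alertmanager/api/v2/silences.
```

Because the matchers reproduce the alert’s full label set, the silence applies only to that specific alert and not to other alerts of the same severity.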

As a confirmation of success, this API call returns the ID of the newly created silence:

{ "silenceID": "1fcaae42-ec92-4272-ab6b-410d98534dfc" }



From this quick overview, you can hopefully understand how simple it is for us to deliver security checks. Alertmanager helps us a lot in simplifying the final stage of delivering security checks to you in a reliable way. It allows us to focus more on the checks we deliver and the way you can interact with them.

We’re constantly improving our Security Threat Tool, adding more checks and features to help you protect your organization’s valuable data. While we’ll try to make our checks as comprehensive as possible, we know that you might have very specific needs. That’s why for the future we plan to make STT even more flexible, adding scheduling of checks (since some need to run more/less frequently than others), disabling of checks, and even the ability to let you add your own checks! Keep following the latest releases as we continue to iterate on STT.

For now, let us know in the comments: what other checks or features would you like to see in STT? We love to hear your feedback!

Check out our Percona Monitoring and Management Demo site or download Percona Monitoring and Management today and give it a try!
