Jun
20
2018
--

Stensul raises $7M to make email creation easier for marketers

Email marketing startup Stensul is announcing that it has raised $7 million.

Stensul spun out of founder and CEO Noah Dinkin’s previous company FanBridge. Dinkin explained via email that the startup isn’t competing with the big email service providers — in fact, it integrates with ESPs including Salesforce Marketing Cloud, Oracle Marketing Cloud, Adobe Marketing Cloud and Marketo.

Dinkin said that while ESPs include email creation tools, most companies ignore them. Instead, the marketer has to work with specialists like designers, developers and agencies: “That process often takes weeks, everyone hates it, and it is SUPER expensive.”

Stensul, meanwhile, is focused exclusively on the email creation process. Marketers at large enterprises can build the email themselves, without having to rely on anyone else, in less than 20 minutes.

“They don’t need to know how to code, they don’t need to know how to use Photoshop or have memorized the 100 page pdf of brand guidelines,” Dinkin said. “The platform controls for brand governance and rules, and also guarantees that the output will be technically perfect.”


Javelin Venture Partners led the Series A, with participation from Arthur Ventures, First Round Capital, Uncork Capital, Lowercase Capital and former ExactTarget President Scott McCorkle.

“Stensul has zeroed in on a massive problem space hiding in plain sight,” said Javelin’s Alex Gurevich in the funding announcement. “Email Marketing is used by every large company in the world, and the amount of time and money spent on email creation is far more than most people realize. The quality of top-tier customers that stensul has been able to bring on made it clear to us that they have a solution that really delivers value on day 1.”

Companies that have used Stensul include YouTube, Grubhub, BMW, Lyft and Box. Dinkin said he will continue to invest in product, but the big goal with the new funding is to grow sales and marketing.

Jun
20
2018
--

Beamery closes $28M Series B to stoke support for its ‘talent CRM’

Beamery, a London-based startup that offers self-styled “talent CRM” — aka candidate relationship management — and recruitment marketing software targeted at fast-growing companies, has closed a $28M Series B funding round, led by EQT Ventures.

Also participating in the round are M12, Microsoft’s venture fund, and existing investors Index Ventures, Edenred Capital Partners and Angelpad Fund. Beamery last raised a $5M Series A, in April 2017, led by Index.

Its pitch centers on the notion of helping businesses win a ‘talent war’ by taking a more strategic and proactive approach to future hires, rather than just maintaining a spreadsheet of potential candidates.

Its platform aims to help target enterprises build and manage a pool of people they might want to hire in the future, getting out ahead of the competition in HR terms. This includes tools for customized marketing aimed at nurturing relationships with possible future hires.

Beamery’s customer count has stepped up from around 50 in April 2017 to 100 now — including the likes of Facebook (which is using it globally), Continental, VMware, Zalando, Grab and Balfour Beatty.

It says the new funding will be going towards supporting customer growth, including by ramping up hiring in its offices in London (HQ), Austin and San Francisco.

It also wants to expand into more markets. “We’re focusing on some of the world’s biggest global businesses that need support in multiple timezones and geographies, so really it’s a global approach,” said a spokesman.

“Companies adopting the system are large enterprises doing talent at scale, that are innovative in terms of being proactive about recruiting, candidate experience and employer brand,” he added.

A “significant” portion of the Series B funds will also go towards R&D and product development focused on its HR tech niche.

“Across all sectors, there’s a shift towards proactive recruitment through technology, and Beamery is emerging as the category leader,” added Tom Mendoza, venture lead and investment advisor at EQT, in a supporting statement.

“Beamery has a fantastic product, world-class high-ambition founders, and an outstanding analytics-driven team. They’ve been relentless about building the best talent CRM and marketing platform and gaining a deep understanding of the industry-wide problems.”

Jun
20
2018
--

Nginx lands $43 million Series C to fuel expansion

Nginx, the commercial company behind the open source web server, announced a $43 million Series C investment today led by Goldman Sachs Growth Equity.

NEA, which has been on board as an early investor, is also participating. As part of the deal, David Campbell, managing director at Goldman Sachs’ Merchant Banking Division, will join the Nginx board. Today’s investment brings the total raised to $103 million, according to the company.

The company was not willing to discuss valuation for this round.

Nginx’s open source approach is already well established, running 400 million websites, including some of the biggest in the world. Meanwhile, the commercial side of the business has 1,500 paying customers, who get not just support but additional functionality such as load balancing, an API gateway and analytics.

Nginx CEO Gus Robertson was pleased to get the backing of such prestigious investors. “NEA is one of the largest venture capitalists in Silicon Valley and Goldman Sachs is one of the largest investment banks in the world. And so to have both of those parceled together to lead this round is a great testament to the company and the technology and the team,” he said.

The company already has plans to expand its core commercial product, Nginx Plus, in the coming weeks. “We need to continue to innovate and build products that help our customers alleviate the complexity of delivery of distributed or microservice-based applications. So you’ll see us release a new product in the coming weeks called Controller. Controller is the control plane on top of Nginx Plus,” Robertson explained. (Controller was launched in beta last fall.)

But with $43 million in the bank, the company wants to build out Nginx Plus even more over the next 12-18 months. It will also open new offices globally to add to its international presence, while expanding its partner ecosystem. All of this feeds an ambitious goal: growing the current staff of 220 to 300 by the end of the year.

The open source product was originally created by Igor Sysoev back in 2002. He introduced the commercial company on top of the open source project in 2011. Robertson came on board as CEO a year later. The company has been growing 100 percent year over year since 2013 and expects to continue that trajectory through 2019.

Jun
19
2018
--

Chunk Change: InnoDB Buffer Pool Resizing


Since MySQL 5.7.5, we have been able to resize the InnoDB Buffer Pool dynamically. This feature also introduced a new variable — innodb_buffer_pool_chunk_size — which defines the chunk size by which the buffer pool is enlarged or reduced. This variable is not dynamic, and if it is incorrectly configured it can lead to undesired situations.

Let’s first see how innodb_buffer_pool_size, innodb_buffer_pool_instances and innodb_buffer_pool_chunk_size interact:

The buffer pool can hold several instances and each instance is divided into chunks. There is some information that we need to take into account: the number of instances can go from 1 to 64 and the total amount of chunks should not exceed 1000.

So, for a server with 3GB RAM, a buffer pool of 2GB with 8 instances and chunks at the default value (128MB), we are going to get 2 chunks per instance: each instance holds 2GB / 8 = 256MB, and 256MB / 128MB = 2 chunks fit in each one.

This means that there will be 16 chunks in total.
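That arithmetic can be sketched as follows (a minimal illustration of the layout, not MySQL code; the variable names are mine):

```python
# Chunk layout for the example server: 2GB buffer pool,
# 8 instances, default 128MB chunk size.
pool_size = 2 * 1024**3           # innodb_buffer_pool_size = 2GB
instances = 8                     # innodb_buffer_pool_instances
chunk_size = 128 * 1024**2        # innodb_buffer_pool_chunk_size (default)

per_instance = pool_size // instances             # 256MB per instance
chunks_per_instance = per_instance // chunk_size  # 2 chunks fit per instance
total_chunks = chunks_per_instance * instances    # 16 chunks overall
print(chunks_per_instance, total_chunks)          # 2 16
```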

I’m not going to explain the benefits of having multiple instances, I will focus on resizing operations. Why would you want to resize the buffer pool? Well, there are several reasons, such as:

  • on a virtual server you can add more memory dynamically
  • for a physical server, you might want to reduce database memory usage to make way for other processes
  • on systems where the database size is smaller than available RAM
  • if you expect a huge growth and want to increase the buffer pool on demand

Reducing the buffer pool

Let’s start reducing the buffer pool:

+-------------------------------+------------+
| Variable_name                 | Value      |
+-------------------------------+------------+
| innodb_buffer_pool_size       | 2147483648 |
| innodb_buffer_pool_instances  | 8          |
| innodb_buffer_pool_chunk_size | 134217728  |
+-------------------------------+------------+
mysql> set global innodb_buffer_pool_size=1073741824;
Query OK, 0 rows affected (0.00 sec)
mysql> show global variables like 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 1073741824 |
+-------------------------+------------+
1 row in set (0.00 sec)

If we try to decrease it to 1.5GB, the buffer pool will not change and a warning will be shown:

mysql> set global innodb_buffer_pool_size=1610612736;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> show warnings;
+---------+------+---------------------------------------------------------------------------------+
| Level   | Code | Message                                                                         |
+---------+------+---------------------------------------------------------------------------------+
| Warning | 1210 | InnoDB: Cannot resize buffer pool to lesser than chunk size of 134217728 bytes. |
+---------+------+---------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> show global variables like 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 2147483648 |
+-------------------------+------------+
1 row in set (0.01 sec)

Increasing the buffer pool

When we try to increase the value from 1GB to 1.5GB, the buffer pool is resized but the requested innodb_buffer_pool_size is considered to be incorrect and is truncated:

mysql> set global innodb_buffer_pool_size=1610612736;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> show warnings;
+---------+------+-----------------------------------------------------------------+
| Level   | Code | Message                                                         |
+---------+------+-----------------------------------------------------------------+
| Warning | 1292 | Truncated incorrect innodb_buffer_pool_size value: '1610612736' |
+---------+------+-----------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> show global variables like 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| innodb_buffer_pool_size | 2147483648 |
+-------------------------+------------+
1 row in set (0.01 sec)

And the final size is 2GB. Yes! You intended to set the value to 1.5GB, and you “succeeded” in setting it to 2GB. Even if you set it just 1 byte higher, like 1073741825, you will end up with a buffer pool of 2GB:

mysql> set global innodb_buffer_pool_size=1073741825;
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> show global variables like 'innodb_buffer_pool_%size' ;
+-------------------------------+------------+
| Variable_name                 | Value      |
+-------------------------------+------------+
| innodb_buffer_pool_chunk_size | 134217728  |
| innodb_buffer_pool_size       | 2147483648 |
+-------------------------------+------------+
2 rows in set (0.01 sec)
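The runtime behavior, then, is that a requested size is rounded up to the next multiple of innodb_buffer_pool_instances × innodb_buffer_pool_chunk_size. A minimal sketch of that rule (my own helper, assuming the chunk size stays fixed while the server runs, as it does at runtime):

```python
def runtime_pool_size(requested, instances=8, chunk_size=134217728):
    """Round a requested innodb_buffer_pool_size up to the next multiple
    of instances * chunk_size, mirroring the runtime SET GLOBAL behavior
    shown above."""
    granularity = instances * chunk_size           # 8 * 128MB = 1GB here
    return (requested + granularity - 1) // granularity * granularity

print(runtime_pool_size(1610612736))  # 1.5GB requested -> 2147483648 (2GB)
print(runtime_pool_size(1073741825))  # 1GB + 1 byte    -> 2147483648 (2GB)
```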

Interesting scenarios

Increasing size in the config file

Let’s suppose one day you get up willing to change or tune some variables in your server, and you decide that, since you have free memory, you will increase the buffer pool. In this example, we are going to use a server with innodb_buffer_pool_instances = 16 and a 2GB buffer pool, which will be increased to 2.5GB.

So, we set in the configuration file:

innodb_buffer_pool_size = 2684354560

But then after restart, we found:

mysql> show global variables like 'innodb_buffer_pool_%size' ;
+-------------------------------+------------+
| Variable_name                 | Value      |
+-------------------------------+------------+
| innodb_buffer_pool_chunk_size | 134217728  |
| innodb_buffer_pool_size       | 4294967296 |
+-------------------------------+------------+
2 rows in set (0.00 sec)

And the error log says:

2018-05-02T21:52:43.568054Z 0 [Note] InnoDB: Initializing buffer pool, total size = 4G, instances = 16, chunk size = 128M

So, after we have set innodb_buffer_pool_size in the config file to 2.5GB, the database gives us a 4GB buffer pool, because of the number of instances and the chunk size. What the message doesn’t tell us is the number of chunks, which would be useful for understanding why there is such a huge difference.

Let’s take a look at how that’s calculated.

Increasing instances and chunk size

Changing the number of instances or the chunk size requires a restart, and the buffer pool size is taken as an upper limit when setting the chunk size: if innodb_buffer_pool_size / innodb_buffer_pool_instances is smaller than the configured chunk size, the chunk size is reduced accordingly. For instance, with this configuration:

innodb_buffer_pool_size = 2147483648
innodb_buffer_pool_instances = 32
innodb_buffer_pool_chunk_size = 134217728

We get this chunk size:

mysql> show global variables like 'innodb_buffer_pool_%size' ;
+-------------------------------+------------+
| Variable_name                 | Value      |
+-------------------------------+------------+
| innodb_buffer_pool_chunk_size | 67108864   |
| innodb_buffer_pool_size       | 2147483648 |
+-------------------------------+------------+
2 rows in set (0.00 sec)

However, we need to understand how this really works. To get innodb_buffer_pool_chunk_size, it performs this calculation: innodb_buffer_pool_size / innodb_buffer_pool_instances, with the result rounded down to a multiple of 1MB.

In our example, the calculation is 2147483648 / 32 = 67108864, and since 67108864 % 1048576 = 0, no rounding is needed. The number of chunks will be one chunk per instance.
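That derivation, sketched in Python (the function name is mine, not MySQL’s):

```python
MB = 1048576

def derived_chunk_size(pool_size, instances, configured_chunk):
    """If the configured chunk no longer fits once per instance,
    shrink it to pool_size / instances, rounded down to a 1MB multiple."""
    if pool_size // instances < configured_chunk:
        return (pool_size // instances) // MB * MB
    return configured_chunk

# 2GB pool, 32 instances, 128MB configured chunk -> 64MB effective chunk
print(derived_chunk_size(2147483648, 32, 134217728))  # 67108864
```

With only 8 instances, 2GB / 8 = 256MB exceeds the 128MB chunk, so the configured value is kept unchanged.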

When does it consider that it needs to use more chunks per instance? When the innodb_buffer_pool_size configured in the file exceeds the size covered by the computed chunks (chunks per instance × instances × chunk size) by 1MB or more.

That is why, for instance, if you try to set innodb_buffer_pool_size equal to 1GB + 1MB − 1 byte, you will get a 1GB buffer pool:

innodb_buffer_pool_size = 1074790399
innodb_buffer_pool_instances = 16
innodb_buffer_pool_chunk_size = 67141632
2018-05-07T09:26:43.328313Z 0 [Note] InnoDB: Initializing buffer pool, total size = 1G, instances = 16, chunk size = 64M

But if you set the innodb_buffer_pool_size equals to 1GB + 1MB you will get 2GB of buffer pool:

innodb_buffer_pool_size = 1074790400
innodb_buffer_pool_instances = 16
innodb_buffer_pool_chunk_size = 67141632
2018-05-07T09:25:48.204032Z 0 [Note] InnoDB: Initializing buffer pool, total size = 2G, instances = 16, chunk size = 64M

This is because it considers that two chunks will fit. We can say that this is how the InnoDB Buffer pool size is calculated:

determine_best_chunk_size{
  if innodb_buffer_pool_size / innodb_buffer_pool_instances < innodb_buffer_pool_chunk_size
  then
    innodb_buffer_pool_chunk_size = roundDownMB(innodb_buffer_pool_size / innodb_buffer_pool_instances)
  fi
}
determine_amount_of_chunks{
  innodb_buffer_amount_chunks_per_instance = roundDown(innodb_buffer_pool_size / innodb_buffer_pool_instances / innodb_buffer_pool_chunk_size)
  if innodb_buffer_pool_size - innodb_buffer_amount_chunks_per_instance * innodb_buffer_pool_instances * innodb_buffer_pool_chunk_size >= 1024*1024
  then
    innodb_buffer_amount_chunks_per_instance++
  fi
}
determine_best_chunk_size
determine_amount_of_chunks
innodb_buffer_pool_size = innodb_buffer_pool_instances * innodb_buffer_pool_chunk_size * innodb_buffer_amount_chunks_per_instance
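The pseudocode above translates into the following runnable sketch, checked against the examples in this post. It is my own Python, not the server source; I also round the configured chunk size down to a 1MB multiple first, an assumption that matches the 64M chunk the error log reports for the 67141632 setting:

```python
MB = 1048576

def startup_buffer_pool(pool_size, instances, chunk_size):
    """Compute the effective chunk size and total buffer pool size that
    InnoDB reports at startup, following the pseudocode above."""
    chunk_size = chunk_size // MB * MB        # assumed 1MB rounding of the setting
    # determine_best_chunk_size
    if pool_size // instances < chunk_size:
        chunk_size = (pool_size // instances) // MB * MB
    # determine_amount_of_chunks
    chunks_per_instance = pool_size // instances // chunk_size
    if pool_size - chunks_per_instance * instances * chunk_size >= MB:
        chunks_per_instance += 1
    return chunk_size, instances * chunk_size * chunks_per_instance

# 2.5GB requested with 16 instances and 128MB chunks -> 4GB pool
print(startup_buffer_pool(2684354560, 16, 134217728))  # (134217728, 4294967296)
# 1GB + 1MB requested -> 2GB pool; 1GB + 1MB - 1 byte -> 1GB pool
print(startup_buffer_pool(1074790400, 16, 67141632))   # (67108864, 2147483648)
print(startup_buffer_pool(1074790399, 16, 67141632))   # (67108864, 1073741824)
```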

What is the best setting?

In order to analyze the best setting, you need to know that there is an upper limit of 1000 chunks. In our example with 16 instances, we can have no more than 1000 / 16 = 62 chunks per instance (rounded down).

Another thing to consider is what each chunk represents in percentage terms. Continuing with the example, each chunk per instance represents 1/62, roughly 1.61%, of the maximum buffer pool size, which means that we can increase or decrease the buffer pool size in multiples of this percentage.
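Those limits are simple arithmetic (a quick check, using the figures from the example):

```python
MAX_CHUNKS = 1000    # hard upper limit on total chunks
instances = 16

max_chunks_per_instance = MAX_CHUNKS // instances    # 62
step_percent = 100 / max_chunks_per_instance         # resize step, as a percentage
print(max_chunks_per_instance, round(step_percent, 2))  # 62 1.61
```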

From a management point of view, I think you might want to consider a step of at least 2% to 5% when increasing or decreasing the buffer. I performed some tests to see the impact of having small chunks and found no issues, but this is something that needs to be thoroughly tested.

The post Chunk Change: InnoDB Buffer Pool Resizing appeared first on Percona Database Performance Blog.

Jun
19
2018
--

Riskified prevents fraud on your favorite e-commerce site

Meet Riskified, an Israel-based startup that has raised $64 million in total to fight online fraud. The company has built a service that helps you reject transactions from stolen credit cards and approve more transactions from legitimate clients.

If you live in the U.S., chances are you know someone who noticed a fraudulent charge for an expensive purchase with their credit card — it’s still unclear why most restaurants and bars in the U.S. take your card away instead of bringing the card reader to you.

Online purchases, also known as card-not-present transactions, represent the primary source of fraudulent transactions. That’s why e-commerce websites need to optimize their payment system to detect fraudulent transactions and approve all the others.

Riskified uses machine learning to recognize good orders and improve your bottom line. In fact, Riskified is so confident that it guarantees that you’ll never have to pay chargebacks. As long as a transaction is approved by the product, the startup offers chargeback protection. If Riskified made the wrong call, the company reimburses fraudulent chargebacks.

On the other side of the equation, many e-commerce websites leave money on the table through false declines, rejecting transactions from legitimate customers. It’s hard to quantify this, as some of those customers end up not ordering anything at all. Riskified should help you on this front too.

The startup targets big customers — Macy’s, Dyson, Prada, Vestiaire Collective and GOAT are all using it. You can integrate Riskified with popular e-commerce payment systems and solutions, such as Shopify, Magento and Stripe. Riskified also has an API for more sophisticated implementations.

Jun
19
2018
--

Google injects Hire with AI to speed up common tasks

Since Google Hire launched last year it has been trying to make it easier for hiring managers to manage the data and tasks associated with the hiring process, while maybe tweaking LinkedIn while they’re at it. Today the company announced some AI-infused enhancements that they say will help save time and energy spent on manual processes.

“By incorporating Google AI, Hire now reduces repetitive, time-consuming tasks, like scheduling interviews, into one-click interactions. This means hiring teams can spend less time with logistics and more time connecting with people,” Google’s Berit Hoffmann, Hire product manager, wrote in a blog post announcing the new features.

The first piece involves making it easier and faster to schedule interviews with candidates. This is a multi-step activity that involves choosing appropriate interviewers, picking a time and date that works for all parties involved and booking a room in which to conduct the interview. Organizing these kinds of logistics tends to eat up a lot of time.

“To streamline this process, Hire now uses AI to automatically suggest interviewers and ideal time slots, reducing interview scheduling to a few clicks,” Hoffmann wrote.


Another common hiring chore is finding keywords in a resume. Hire’s AI now finds these words for a recruiter automatically by analyzing terms in a job description or search query and highlighting relevant words, including synonyms and acronyms, in a resume, saving the time spent manually searching for them.


Finally, another standard part of the hiring process is making phone calls, lots of phone calls. To make this easier, the latest version of Google Hire has a new click-to-call function. Simply click the phone number and it dials automatically and registers the call in a call log for easy recall or auditing.

While Microsoft has LinkedIn and Office 365, Google has G Suite and Google Hire. The strategy behind Hire is to allow hiring personnel to work in the G Suite tools they are immersed in every day and incorporate Hire functionality within those tools.

It’s not unlike CRM tools that integrate with Outlook or GMail because that’s where sales people spend a good deal of their time anyway. The idea is to reduce the time spent switching between tools and make the process a more integrated experience.

While none of these features individually will necessarily wow you, they are making use of Google AI to simplify common tasks to reduce some of the tedium associated with every-day hiring tasks.

Jun
19
2018
--

Cisco buys July Systems to bring digital experience to the real world

Customer experience management is about getting to know your customer’s preferences in an online context, but pulling that information into the real world often proves a major challenge for organizations. This results in a huge disconnect when a customer walks into a physical store. This morning, Cisco announced it has bought July Systems, a company that purports to solve that problem.

The companies did not share the acquisition price.

July Systems connects to a building’s WiFi system to understand the customer who just walked in the door, how many times they have shopped at this retailer, their loyalty point score and so forth. This gives the vendor the same kind of understanding about that customer offline as they are used to getting online.

It’s an interesting acquisition for Cisco, taking advantage of some of its strengths as a networking company, given the WiFi component, but also moving in the direction of providing more specific customer experience services.

“Enterprises have an opportunity to take advantage of their in-building Wi-Fi for a broad range of indoor location services. In addition to providing seamless connectivity, Wi-Fi can help enterprises glean deep visitor behavior insights, associate these learnings with their enterprise systems, and drive better customer and employee experiences,” Cisco’s Rob Salvagno wrote in a blog post announcing the acquisition.

As is often the case with these kinds of purchases, the two companies are not strangers. In fact, July Systems lists Cisco as a partner prominently on the company website (along with AWS). Customers include an interesting variety from Intercontinental Hotels Group to the New York Yankees baseball team.

Ray Wang, founder and principal analyst at Constellation Research, says the acquisition is also about taking advantage of 5G. “July Systems gives Cisco the ability to expand its localization and customer experience management (CXM) capabilities pre-5G and post-5G. The WiFi analytics improve CXM, but more importantly Cisco also gains a robust developer community,” Wang told TechCrunch.

According to reports, Cisco had over $67 billion in cash as of February. That leaves plenty of money to make investments like this one, and the company hasn’t been shy about using its cash hoard to buy companies as it tries to transform from a pure hardware company to one built on services.

In fact, they have made 211 acquisitions over the years, according to data on Crunchbase. In recent years they have made some eye-popping ones like plucking AppDynamics for $3.7 billion just before it was going to IPO in 2017 or grabbing Jasper for $1.4 billion in 2016, but the company has also made a host of smaller ones like today’s announcement.

July Systems was founded back in 2001 and raised almost $60 million from a variety of investors including Sequoia Capital, Intel Capital, CRV and Motorola Solutions. Salvagno indicated the July Systems group will become incorporated into Cisco’s enterprise networking group. The deal is expected to be finalized in the first quarter of fiscal 2019.

Jun
19
2018
--

Webinar Weds 20/6: Percona XtraDB Cluster 5.7 Tutorial Part 2


Including setting up Percona XtraDB Cluster with ProxySQL and PMM

Please join Percona’s Architect, Tibi Köröcz, as he presents Percona XtraDB Cluster 5.7 Tutorial Part 2 on Wednesday, June 20th, 2018, at 7:00 am PDT (UTC-7) / 10:00 am EDT (UTC-4).

 

Never used Percona XtraDB Cluster before? This is the webinar for you! In this 45-minute webinar, we will introduce you to a fully functional Percona XtraDB Cluster.

This webinar will show you how to install Percona XtraDB Cluster with ProxySQL, and monitor it with Percona Monitoring and Management (PMM).

We will also cover topics like bootstrap, IST, SST, certification, common-failure situations and online schema changes.

After this webinar, you will have enough knowledge to set up a working Percona XtraDB Cluster with ProxySQL, in order to meet your high availability requirements.

You can see part one of this series here: Percona XtraDB Cluster 5.7 Tutorial Part 1

Register Now!

Tibor Köröcz

Architect


Tibi joined Percona in 2015 as a Consultant. Before joining Percona, among many other things, he worked at the world’s largest car hire booking service as a Senior Database Engineer. He enjoys trying and working with the latest technologies and applications which can help or work with MySQL together. In his spare time he likes to spend time with his friends, travel around the world and play ultimate frisbee.

 

The post Webinar Weds 20/6: Percona XtraDB Cluster 5.7 Tutorial Part 2 appeared first on Percona Database Performance Blog.

Jun
19
2018
--

Brex picks up $57M to build an easy credit card for startups

While Henrique Dubugras and Pedro Franceschi were giving up on their augmented reality startup inside Y Combinator and figuring out what to do next, they saw their batch mates struggling to get even the most basic corporate credit cards — and in a lot of cases, having to guarantee those cards themselves.

Brex, their new startup, aims to fix that by offering startups a way to quickly get what’s effectively a corporate credit card, without having to personally guarantee it or wade through a complex approval process. It’s geared initially toward smaller companies, but Dubugras expects those startups to grow up with it over time — and says Brex is already picking up larger clients. The company, coming out of stealth, said it has raised a total of $57 million from investors including the Y Combinator Continuity fund, Peter Thiel, Max Levchin, Yuri Milner, financial services VC Ribbit Capital and former Visa CEO Carl Pascarella. Y Combinator Continuity fund partner Anu Hariharan and Ribbit Capital managing partner Meyer Malka are joining the company’s board of directors.

“We want to be the best corporate credit card for startups,” Dubugras said. “We don’t require a personal guarantee or deposit, and we can give people a credit limit that’s as much as ten times higher. We can get you a virtual credit card in literally 5 minutes, versus traditional banks, in which you’d have to personally guarantee the card and get a low limit and it takes weeks to approve.”

Startup executives go to Brex’s website, sign up, and then put in their bank account info. Brex then uses that banking information to underwrite the card, with the idea being that the service can see that the startup has raised millions of dollars and doesn’t carry the kind of wild liability that banks assume it might, given how young it is. Once the application is done, companies get a virtual credit card, and they can start divvying up virtual cards with custom limits for their employees. The company says it has attracted more than 1,000 customers and is now opening up globally.

The cards are designed to have better spending limits, and also offer company executives more granular ways to assign those limits to employees. The cards have to be paid off by the end of the month, and the total limit available is a percentage of the company’s cash balance. So rather than putting a startup through a lengthy approval process, the service can look at how much money is in its bank account and adjust the spending limit for all of its cards accordingly.

Another aspect is automating the whole expense and auditing process. Rather than just going through typical applications like Concur and inputting specifics, card users can send a text message of a receipt through Brex associated with each transaction. Users will just get a text message about a charge — like a cup of coffee for a meeting with a potential business partner — and reply to that text with a message of the receipt to log the whole process. Everything is geared toward simplifying the whole process for startups that have an opportunity to be a bit more nimble and aren’t bogged down with complex layers of enterprise software. Each expense is looped in with a vendor, so executives can see the total amount of spending that’s happening at that scale.

The ability to have those dynamic spending limits is just one example of what Dubugras hopes will make Brex competitive. Rather than slotting into existing systems, Brex has an opportunity to recreate the back-end processes that power those cards, which larger institutions might not be able to do as they’ve hit a massive scale and get less and less agile. Dubugras and Franceschi previously worked on and sold Pagar.me, a Brazilian payments processor, where they saw firsthand the complex nature of working with global financial institutions — and some of the holes they could exploit.

“It’s not like we’re two geniuses that came up with a lot of things that no one came up with,” Dubugras said. “Implementing them with third-party processors is hard, but we didn’t have any of [those integrations], so we can rebuild them from scratch. It’s hard for banks to throw money at a problem and build those tools. We’ve rebuilt the way that these things work internally — they’d have to change fundamentally how the system works.”

While there are plenty of startups looking to quickly offer virtual cards, like Revolut’s disposable virtual card service, Brex aims to be what’s effectively a corporate card — just one that’s easier to get and works basically the same as a normal card. Users still have to pay off the balance at the end of the month, but the idea there is that Brex can de-risk itself by doing that while still offering startups a way to get a card with a high limit to start paying for the services or tools they need to get started.

Jun
18
2018
--

Percona Database Performance Blog Commenting Issues


We are experiencing an intermittent commenting problem on the Percona Database Performance Blog.

At Percona, part of our purpose is to engage with the open source database community. A big part of this engagement is the Percona Database Performance Blog, and the participation of the community in reading and commenting. We appreciate your interest and contributions.

Currently, we are experiencing an intermittent problem with comments on the blog. Some users are unable to post comments and may receive an error message similar to the following:

We are working on correcting the issue, and we apologize if it affects you. If you have a comment that you want to make on a specific post and this issue affects you, you can also leave comments on Twitter, Facebook or LinkedIn – all blog posts are socialized on those platforms.

Thanks for your patience.

The post Percona Database Performance Blog Commenting Issues appeared first on Percona Database Performance Blog.
