Feb
27
2021
--

Storm Ventures promotes Pascale Diaine and Frederik Groce to partners

Storm Ventures, a venture firm that focuses on early-stage B2B enterprise startups, announced this week that it has promoted Pascale Diaine and Frederik Groce to partners at the firm.

The two new partners have worked their way up over the last several years. Groce joined Storm in 2016 and has invested in enterprise SaaS startups like Workato, Splashtop, NextRequest and Camino. Diaine joined a year later and has invested in companies like Sendoso, German Bionic, InEvent and Talkdesk.

Groce, who is also a founder of BLCK VC and helped organize the Black Venture Institute to create a network of Black investors, says these promotions show that venture needs to be more diverse, and that Storm recognizes this. “If you think about the way our team works, that’s the way I think venture teams will need to work to be able to be successful in the next 40 years. And so the hope is that over time everyone does this and we’re just early to it,” Groce told me.

Unfortunately, right now that’s not the case, not even close. According to research by Crunchbase, just 12% of venture capitalists are women and two-thirds of firms don’t have any female investors. Meanwhile, only about 4% of venture investors are Black.

Those numbers have an impact on the number of Black and female founders because, as Groce points out, the lack of founders from underrepresented groups is in part a networking problem. “In a business that’s predicated on networks if you don’t have diversity in the network, or the teams that are driving those networks, you just can’t make sure you’re seeing great talent across all ecosystems,” he said.

Diaine, who is French and started her career by founding Orange Fab, the corporate accelerator of the European telco Orange, brought her international business background to Storm, where the firm helped her retool that experience for investing and supported her as she learned the nuances of the investment side of the business.

“I don’t come from the VC world. I come from the innovative corporate world. So they had to train me and spend time getting me up to date. And they did spend so much time making sure I understood everything to make sure I got to this level,” she said.

Both partners bring their own unique perspectives, looking beyond Silicon Valley for investment opportunities. Diaine’s investments include German, Brazilian and Portuguese companies, while Groce’s investments include companies in Chicago, Atlanta and Seattle.

The two partners have also developed an algorithm to help find investments based on a number of online signals, something that has become more important during the pandemic when they couldn’t network in person.

“Frederik and I have been working on [an algorithm to find] what are the signals that you can identify online that will tell you this company’s doing well, this company’s growing. You have to have a nice set of startup search tracking [signals], but what do you track if you can’t just get the revenue in real time, which is impossible. So we’ve developed an algorithm that helps us identify some of these signals and create alerts on which startups we should pay attention to,” Diaine explained.

She says this data-driven approach should augment their in-person efforts, even after the pandemic is over, and increase their overall efficiency in finding companies and tracking the ones in their portfolios.

 

Feb
26
2021
--

Salesforce delivers, Wall Street doubts as stock falls 6.3% post-earnings

Wall Street investors can be fickle beasts. Take Salesforce as an example. The CRM giant announced a $5.82 billion quarter when it reported earnings yesterday. Revenue was up 20% year over year. The company also reported $21.25 billion in total revenue for the just-closed FY2021, up 24% YoY. If that wasn’t enough, it raised its FY2022 guidance (its upcoming fiscal year) to over $25 billion. What’s not to like?

You want higher quarterly revenue, Salesforce gave you higher revenue. You want high growth and solid projected revenue — check and check. In fact, it’s hard to find anything to complain about in the report. The company is performing and growing at a rate that is remarkable for an organization of its size and maturity — and it is expected to continue to perform and grow.

How did Wall Street react to this stellar report? It punished the stock, sending the price down more than 6%, a pretty dismal day considering the company brought home such a promising report card.

2/6/21 Salesforce stock report with stock down 6.31%

Image Credits: Google

So what is going on here? It could be that investors simply don’t believe the growth is sustainable, or that the company overpaid when it bought Slack at the end of last year for over $27 billion. It could be that people are just overreacting to a cooling market this week. But if investors are looking for a high-growth company, Salesforce is delivering exactly that.

While Slack was expensive, it reported revenue over $250 million yesterday, pushing it over a $1 billion run rate, with more than 100 customers paying over $1 million in ARR. Those numbers will eventually get added to Salesforce’s top line.

Canaccord Genuity analyst David Hynes Jr. wrote that he was baffled by investors’ reaction to this report. Like me, he saw a lot of positives. Yet Wall Street decided to focus on the negative, and see “the glass half empty,” as he put it in his note to investors.

“The stock is clearly in the show-me camp, which means it’s likely to take another couple of quarters for investors to buy into the idea that fundamentals are actually quite solid here, and that Slack was opportunistic (and yes, pricey), but not an attempt to mask suddenly deteriorating growth,” Hynes wrote.

During the call with analysts yesterday, Brad Zelnick from Credit Suisse asked how well the company could accelerate out of the pandemic-induced economic malaise, and Gavin Patterson, Salesforce’s president and chief revenue officer, said the company is ready whenever the world moves past the pandemic.

“And let me reassure you, we are building the capability in terms of the sales force. You’d be delighted to hear that we’re investing significantly in terms of our direct sales force to take advantage of that demand. And I’m very confident we’ll be able to meet it. So I think you’re hearing today a message from us all that the business is strong, the pipeline is strong and we’ve got confidence going into the year,” Patterson said.

While Salesforce execs were clearly pumped up yesterday with good reason, there’s still doubt out in investor land that manifested itself in the stock starting down and staying down all day. It will be, as Hynes suggested, up to Salesforce to keep proving them wrong. As long as they keep producing quarters like the one they had this week, they should be just fine, regardless of what the naysayers on Wall Street may be thinking today.

Feb
26
2021
--

EC roundup: BNPL startups, growth marketing tips, solid state battery market map, more

When I needed a new sofa several months ago, I was pleased to find a buy now, pay later (BNPL) option during the checkout process. I had prepared myself to make a major financial outlay, but the service fees were well worth the convenience of deferring the entire payment.

Coincidentally, I was sitting on said sofa this morning and considering that transaction when Alex Wilhelm submitted a column that compared recent earnings for three BNPL providers: Afterpay, Affirm and Klarna.

I asked him why he decided to dig into the sector with such gusto.


Full Extra Crunch articles are only available to members.
Use discount code ECFriday to save 20% off a one- or two-year subscription.


“What struck me about the concept was that we had just seen earnings from Affirm,” he said. “So we had three BNPL players with known earnings, and I had just covered a startup funding round in the space.”

“Toss in some obvious audience interest, and it was an easy choice to write the piece. Now the question is whether I did a good job and people find value in it.”

Thanks very much for reading Extra Crunch this week! Have a great weekend.

Walter Thompson
Senior Editor, TechCrunch
@yourprotagonist

As BNPL startups raise, a look at Klarna, Affirm and Afterpay earnings

Pilot CEO Waseem Daher tears down his company’s $60M Series C pitch deck

Smashing brick work with hammer

Image Credits: Colin Hawkins / Getty Images

I avoid running Extra Crunch stories that focus on best practices; you can find those anywhere. Instead, we look for “here’s what worked for me” articles that give readers actionable insights.

That’s a much better use of your time and ours.

With that ethos in mind, Lucas Matney interviewed Pilot CEO Waseem Daher to deconstruct the pitch deck that helped his company land a $60M Series C round.

“If the Series A was about, ‘Do you have the right ingredients to make this work?’ then the Series B is about, ‘Is this actually working?’” Daher tells TechCrunch.

“And then the Series C is more, ‘Well, show me that the core business is really working and that you have unlocked real drivers to allow the business to continue growing.’”

Can solid state batteries power up for the next generation of EVs?

Image Credits: Bryce Durbin

A global survey of automobile owners found three hurdles to overcome before consumers will widely embrace electric vehicles:

  • 30-minute charging time
  • 300-mile range
  • $36,000 maximum cost

“Theoretically, solid state batteries (SSB) could deliver all three,” but for now, lithium-ion batteries are the go-to for most EVs (along with laptops and phones).

In our latest market map, we’ve plotted the new and established players in the SSB sector and listed many of the investors who are backing them.

Although SSBs are years away from mass production, “we are on the cusp of some pretty incredible discoveries using major improvements in computational science and machine learning algorithms to accelerate that process,” says SSB startup founder Amy Prieto.

 

Dear Sophie: Which immigration options are the fastest?

lone figure at entrance to maze hedge that has an American flag at the center

Image Credits: Bryce Durbin/TechCrunch

Dear Sophie:

Help! Our startup needs to hire 50 engineers in artificial intelligence and related fields ASAP. Which visa and green card options are the quickest to get for top immigrant engineers?

And will Biden’s new immigration bill help us?

— Mesmerized in Menlo Park

 

Why F5 spent $2.2B on 3 companies to focus on cloud native applications

Dark servers data center room with computers and storage systems

Image Credits: Jasmin Merdan / Getty Images

Founded in 1996, F5 has repositioned itself in the networking market several times in its history. In the last two years, however, it spent $2.2 billion to acquire Shape Security, Volterra and NGINX.

“As large organizations age, they often need to pivot to stay relevant, and I wanted to explore one of these transformational shifts,” said enterprise reporter Ron Miller.

“I spoke to the CEO of F5 to find out the strategy behind his company’s pivot and how he leveraged three acquisitions to push his organization in a new direction.”

 

DigitalOcean’s IPO filing shows a two-class cloud market

Cloud online storage technology concept. Big data data information exchange available. Magnifying glass with analytics data

Image Credits: Who_I_am / Getty Images

Cloud hosting company DigitalOcean filed to go public this week, so Ron Miller and Alex Wilhelm unpacked its financials.

“AWS and Microsoft Azure will not be losing too much sleep worrying about DigitalOcean, but it is not trying to compete head-on with them across the full spectrum of cloud infrastructure services,” said John Dinsdale, chief analyst and research director at Synergy Research.

 

Oscar Health’s initial IPO price is so high, it makes me want to swear

I asked Alex Wilhelm to dial back the profanity he used to describe Oscar Health’s proposed valuation, but perhaps I was too conservative.

In March 2018, the insurtech unicorn was valued at around $3.2 billion. Today, with the company aiming to debut at $32 to $34 per share, its fully diluted valuation is closer to $7.7 billion.

“The clear takeaway from the first Oscar Health IPO pricing interval is that public investors have lost their minds,” says Alex.

His advice for companies considering an IPO? “Go public now.”

 

If Coinbase is worth $100 billion, what’s a fair valuation for Stripe?

Last week, Alex wrote about how cryptocurrency trading platform Coinbase was being valued at $77 billion in the private markets.

As of Monday, “it’s now $100 billion, per Axios’ reporting.”

He reviewed Coinbase’s performance from 2019 through the end of Q3 2020 “to decide whether Coinbase at $100 billion makes no sense, a little sense or perfect sense.”

 

Winning enterprise sales teams know how to persuade the Chief Objection Officer

woman hand stop sign on brick wall background

Image Credits: Alla Aramyan / Getty Images

A skilled software sales team devotes a lot of resources to pinpointing potential customers.

Poring through LinkedIn and reviewing past speaker lists at industry conferences are good places to find decision-makers, for example.

Despite this detective work, GGV Capital investor Oren Yunger says sales teams still need to identify the deal-blockers who can spike a deal with a single email.

“I call this person the Chief Objection Officer.”

 

3 strategies for elevating brand authority in 2021

Young woman standing on top of tall green bar graph against white background

Image Credits: Klaus Vedfelt / Getty Images

Every startup wants to raise its profile, but for many early-stage companies, marketing budgets are too small to make a meaningful difference.

“Providing real value through content is an excellent way to build authority in the short and long term,” says Amanda Milligan, marketing director at growth agency Fractl.

 

RIBS: The messaging framework for every company and product

Grilled pork ribs with barbecue sauce on wooden background

Image Credits: luchezar / Getty Images

The most effective marketing uses good storytelling, not persuasion.

According to Caryn Marooney, general partner at Coatue Management, every compelling story is relevant, inevitable, believable and simple.

“Behind most successful companies is a story that checks every one of those boxes,” says Marooney, but “this is a central challenge for every startup.”

 

Ironclad’s Jason Boehmig: The objective of pricing is to become less wrong over time

On a recent episode of Extra Crunch Live, Ironclad founder and CEO Jason Boehmig and Accel partner Steve Loughlin discussed the pitch that brought them together almost four years ago.

Since that $8 million Series A, Loughlin joined Ironclad’s board. “Both agree that the work they put in up front had paid off” when it comes to how well they work together, says Jordan Crook.

“We’ve always been up front about the fact that we consider the board a part of the company,” said Boehmig.


TC Early Stage: The premier how-to event for startup entrepreneurs and investors

From April 1-2, some of the most successful founders and VCs will explain how they build their businesses, raise money and manage their portfolios.

At TC Early Stage, we’ll cover topics like recruiting, sales, legal, PR, marketing and brand building. Each session includes ample time for audience questions and discussion.

Use discount code ECNEWSLETTER to take 20% off the cost of your TC Early Stage ticket!

Feb
26
2021
--

Connection Queuing in pgBouncer: Is it a Magical Remedy?

Connection Queuing in pgBouncer

Yes, this post is about connection queueing, not just pooling, because “connection pooling” – pre-created connections as a pool – is the much-celebrated feature. Almost every discussion of connection pooling/pgBouncer starts with the overhead of establishing a new connection to PostgreSQL… and how pre-created connections in the pool can save the world.

But there is a less-celebrated feature in pgBouncer (not taking anything away from the others) that can address some really big operational challenges: connection queueing. Many new PostgreSQL users don’t know it exists. In this blog post, I am going to discuss some of the problems related to connection management first, and then explore how connection queueing can address those problems.

Problem 1: Spike in Load can Jam and Halt the Server

PostgreSQL has a dedicated backend server process for every user connection, so there will be one backend process running on a CPU core for every active query/session. This means there is a one-to-one mapping between active sessions and running processes on the server, and if we consider parallel execution of SQL statements, there will be even more running processes than active sessions. In many real-world cases, a sudden spike in load can result in hundreds of active queries starting at once while the server is equipped with a small number of CPUs (sometimes just virtual CPUs delivering only a fraction of the performance). As the number of active sessions/processes increases, the overhead of scheduling and context switches takes over. Many times, the host server becomes unresponsive, and even opening a bash shell/terminal can take time. This is quite easy to simulate: just 10 active connections on a two-virtual-CPU server with a SELECT-only workload can cause it.

With two active sessions:

$ time ssh -t postgres@pghost 'ls -l'
real 0m0.474s
user 0m0.017s
sys 0m0.006s

With 10 active sessions on PostgreSQL, just establishing an ssh connection to the server took 15 seconds.

real 0m15.307s
user 0m0.026s
sys 0m0.015s

**These are indicative numbers from a very specific system and do not qualify for a benchmark.

Generally, we could see that as the number of active sessions approaches double the number of CPU cores, the performance penalty starts increasing heavily.

Many times, the problem won’t end there. Session-level resource allocations (work_mem, temporary tables, etc.) can drive up overall server resource consumption. As the host server slows down, each session takes longer to complete while holding its resources, which leads to even more accumulation of active sessions. It is a vicious spiral. There are many real-world cases where the entire show ended in a complete halt of the host server, or in the OOM killer terminating the PostgreSQL process and forcing it into crash recovery.

 

Problem 2: “Too Many Clients Already” Errors

A few smart DBAs prevent this database disaster by setting max_connections to a value smaller than what the database can actually handle, which is the right thing to do from a DB server perspective. Allowing an excessive number of connections to the database can lead to different types of abuse, attacks, and disasters. But the flip side is that a connection-hungry application may be greeted with the following message:

FATAL:  sorry, too many clients already

The same will be logged in the PostgreSQL log.

2021-02-15 13:40:50.017 UTC [12829] FATAL:  sorry, too many clients already

Unfortunately, this could lead to an application crash or misbehavior. From the business point of view, we just shifted the problem from the database to the application.
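
For reference, here is a minimal sketch of how that cap is typically set on the server side (the value 100 is an arbitrary example; max_connections only takes effect after a restart):

postgres=# ALTER SYSTEM SET max_connections = 100;
ALTER SYSTEM

After restarting the server (for example, pg_ctl restart -D $PGDATA), the new limit is in place.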

Problem 3: Big max_connections Value and Overhead

Because of the above-mentioned problem, it is common to see max_connections set to a very high value. The overhead of connections is probably one of the most actively discussed topics these days because Postgres 14 is expected to ship some connection scalability fixes. Andres Freund blogged about the details of analyzing the connection scalability problem and how it is addressed.

Even idle connections may occupy server resources like memory. The overhead is considered very low on a properly configured server; however, the impact can be heavy in reality. Again, a lot depends on the workload. There are at least a few reported cases of up to 50MB of memory consumption per session; at that rate, 500 idle connections can result in up to 25GB of memory usage.

In addition to this, more connections can lead to more lock-management overhead. And don’t forget that the system becomes more vulnerable to sudden spikes as max_connections is increased.

Solution: Connection Queueing

Connection queueing is exactly what it sounds like: incoming connections are queued so that a sudden spike in load can be absorbed. Rather than being rejected outright or passed straight through to jam the server, connections wait in a queue, which streamlines execution. The PostgreSQL server can keep doing what it does best rather than dealing with a jam situation.

Let me demonstrate with an example. For this demonstration, I set max_connections to “2”, assuming that this is the maximum the server can accommodate without causing too many context switches. Now an excessive number of connections cannot come in and overload my database.

postgres=# show max_connections ;
2

A third connection to the database will result in an error as expected.

psql: error: FATAL: sorry, too many clients already

Now let’s use pgBouncer for the connection queue. Many users may not know that queueing is there by default. I used the following pgBouncer configuration for testing:

[databases]
pgbounce = host=172.16.2.16 port=5432 dbname=postgres

[pgbouncer]
listen_port = 6432
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
logfile = pgbouncer.log
pidfile = pgbouncer.pid
admin_users = postgres
application_name_add_host=1
default_pool_size=1
min_pool_size=1
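
With this configuration in place, client applications connect to pgBouncer’s listen port (6432) rather than to PostgreSQL directly. A hypothetical session (the host name pgbouncer_host is a placeholder):

$ psql -h pgbouncer_host -p 6432 -U postgres pgbounce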

Yes, the pooler will establish only one connection to the database. pgBouncer establishes this connection when the first client connects, because min_pool_size is 1. The pgBouncer log says:

2021-02-16 15:19:30.725 UTC [2152] LOG C-0x1fb0c58: pgbounce/postgres@172.16.2.56:54754 login attempt: db=pgbounce user=postgres tls=no
2021-02-16 15:19:30.726 UTC [2152] LOG S-0x1fbba80: pgbounce/postgres@172.16.2.16:5432 new connection to server (from 172.16.2.144:58968)

pgBouncer pool statistics also show the details:

pgbouncer=# show pools;
 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us | pool_mode 
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-----------
 pgbounce  | postgres  |         1 |          0 |         0 |       0 |       1 |         0 |        0 |       0 |          0 | session

But the beauty is that we won’t get any more “FATAL: sorry, too many clients already” errors. All client connections are accepted and put into the connection queue. For example, I have five client connections; please see the value of cl_active:

pgbouncer=# show pools;
 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us | pool_mode 
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-----------
 pgbounce  | postgres  |         5 |          0 |         0 |       0 |       1 |         0 |        0 |       0 |          0 | session

As the client connections become active (each issuing a SQL statement), they are put into the waiting queue.

pgbouncer=# show pools;
 database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us | pool_mode 
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-----------
 pgbounce  | postgres  |         1 |          4 |         1 |       0 |       0 |         0 |        0 |      28 |     438170 | session

Each client connection is executed over the available database connection, one after another. That was the case with a single database connection. If the pool size is increased, multiple client connections can hit the server at the same time and the queue length drops. The following is the case with two connections (pool size two) to the database.

database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait | maxwait_us | pool_mode 
-----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------+------------+-----------
 pgbounce  | postgres  |         2 |          1 |         2 |       0 |       0 |         0 |        0 |       4 |     978081 | session
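
To get there, I only had to raise the pool size in the ini file and reload pgBouncer. A sketch of the change (RELOAD is issued against pgBouncer’s special “pgbouncer” admin database as one of the admin_users; the host name is again a placeholder):

# in pgbouncer.ini
default_pool_size=2

$ psql -h pgbouncer_host -p 6432 -U postgres pgbouncer -c 'RELOAD;'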

Putting Connection Queueing into a Simple Test

This is not extensive benchmarking, just a quick test to see the benefits on a typical host with two virtual CPUs. I created 20 active connections to PostgreSQL with a select-only load using pgbench.

pgbench -c 20 -j 20 -h pghost -U postgres -S -T 100

As the 20 server processes started running, the load average went through the roof.

As you can see in the screenshot, the load average spiked to 17+. And as expected, the server response also became consistently poor.

$ time ssh -t postgres@pghost 'ls -l'
real 0m17.768s
user 0m0.019s
sys 0m0.007s

At this stage, I tried sending the same 20 active connections through the connection queue of pgBouncer, with a pool size of four (default_pool_size=4). pgBouncer runs on the client side.

Since there are only four server-side processes, the load average dropped drastically. The maximum I saw was 1.73:

The server response is also very good.

$ time ssh -t postgres@pghost 'ls -l'
real 0m0.559s
user 0m0.021s
sys 0m0.008s

A load average of 17+ versus 1.73! That seems too good to be true.

There was a bit of skepticism about whether the low load on the server and the better server response came at the cost of database throughput. I was expecting to see not-so-great throughput numbers, so I took the same test to a more consistently performing platform (AWS r5.large with two virtual CPUs). To my surprise, the numbers were even better.

The following are the numbers I got. Queueing was not merely no worse; it was about 5% better.

Direct          With Queueing
20190.301781    21663.454921
20308.115945    21646.195661
20434.780692    21218.44989

Since we are using just four connections on the database side in this case, we also get the opportunity to reduce the max_connections value on the database side. Another check was whether switching to transaction-level pooling could save more, since the database connection returns to the pool after each transaction and can serve another client connection. This could result in better concurrency.
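
Switching pool modes is a one-line change in the pgBouncer configuration, sketched below (pool_mode can also be set per database in the [databases] section):

# in pgbouncer.ini, [pgbouncer] section
pool_mode = transaction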

Queue + max_connections=5    Queue + max_connections=5 + transaction-level pool
21897.685891                 23318.655016
21913.841813                 23486.485856
21933.129685                 23633.39607

As expected, it delivered even better numbers. I would like to encourage readers to do more tests and proper benchmarking.

Summary

A really large number of application/client connections can be multiplexed over a very small number of database connections using connection queueing. This queue helps absorb any spike in connections without overloading the database server. Instead of session processes piling up on the OS scheduler/run queue, the connections wait safely outside while the database server operates at full throttle without contention. Streamlined database traffic also results in better throughput.

Feb
26
2021
--

Atlassian is acquiring Chartio to bring data visualization to the platform

The Atlassian platform is chock full of data about how a company operates and communicates. Last fall, with the addition of Atlassian Smarts, the company launched a machine learning layer that relies on the data in the platform. Today the company announced it was acquiring Chartio to add a new data analysis and visualization component to the Atlassian family of products. The companies did not share a purchase price.

The company plans to incorporate Chartio technology across the platform, starting with Jira. Before being acquired, Chartio had generated its share of data, reporting that 280,000 users had created 10.5 million charts for 540,000 dashboards pulled from over 100,000 data sources.

Atlassian sees Chartio as a way to bring that data visualization component to the platform and really take advantage of the data locked inside its products. “Atlassian products are home to a treasure trove of data, and our goal is to unleash the power of this data so our customers can go beyond out-of-the-box reports and truly customize analytics to meet the needs of their organization,” Zoe Ghani, head of product experience at platform at Atlassian, wrote in a blog post announcing the deal.

Chartio co-founder and CEO Dave Fowler wrote in a blog post on his company’s website that the two companies started discussing a deal late last year, which culminated in today’s announcement. As is often the case in these deals, he argues that his company will be better off as part of a large organization like Atlassian, with its vast resources, than it would have been by remaining stand-alone.

“While we’ve been proudly independent for years, the opportunity to team up our technology with Atlassian’s platform and massive reach was incredibly compelling. Their product-led go to market, customer focus and educational marketing have always been aspirational for us,” Fowler wrote.

As for Chartio customers, unfortunately, according to a notice on the company website, the product is going away next year, though customers will have plenty of time to export their data to another tool. The notice includes a link to instructions on how to do this.

Chartio was founded in 2010, and participated in the Y Combinator Summer 2010 cohort. It raised a modest $8.03 million along the way, according to Pitchbook data.

Feb
26
2021
--

Webinar March 18: Moving Your Database to the Cloud – Top 3 Things to Consider

Moving Your Database to the Cloud

Join Rick Vasquez, Percona Technical Expert, as he discusses the pros and cons of moving your database to the cloud.

Flexibility, performance, and cost management are three things that make cloud database environments an easy choice for many businesses. If you are thinking of moving your database to the cloud, you need to know the issues you might encounter and how you can make the most of your DBaaS cloud configuration.

Our latest webinar looks into a number of key issues and questions, including:

* The real Total Cost of Ownership (TCO) when moving your database to the cloud.

* If performance is a critical factor in your application, how do you achieve the same or better performance in your chosen cloud?

* The “hidden costs” of running your database in the cloud.

* Why do companies choose open source and cloud software? The pros and cons of this decision.

* The case for cloud support (why “fully managed” isn’t always as fully managed as claimed).

* Should you consider a multi-cloud strategy?

* Business continuity planning in the cloud – what if your provider has an outage?

Please join Rick Vasquez, Percona Technical Expert, on Thursday, March 18, 2021, at 1:00 PM EST for his webinar “Moving Your Database to the Cloud – Top 3 Things to Consider”.

Register for Webinar

If you can’t attend, sign up anyway, and we’ll send you the slides and recording afterward.

Feb
25
2021
--

DigitalOcean’s IPO filing shows a two-class cloud market

This morning DigitalOcean, a provider of cloud computing services to SMBs, filed to go public. The company intends to list on the New York Stock Exchange (NYSE) under the ticker symbol “DOCN.”

DigitalOcean’s offering comes amidst a hot streak for tech IPOs, and valuations that are stretched by historical norms. The cloud hosting company was joined by Coinbase in filing its numbers publicly today.

However, unlike the cryptocurrency exchange, DigitalOcean intends to raise capital through its offering. Its S-1 filing lists a $100 million placeholder number, a figure that will update when the company announces an IPO price range target.

This morning let’s explore the company’s financials briefly, and then ask ourselves what its results can tell us about the cloud market as a whole.

DigitalOcean’s financial results

TechCrunch has covered DigitalOcean with some frequency in recent years, including its early-2020 layoffs, its early-2020 $100 million debt raise and the $50 million investment from May of the same year in which prior investors Access Industries and Andreessen Horowitz participated.

From those pieces we knew that the company had reportedly reached $200 million in revenue in 2018 and $250 million in 2019, and that DigitalOcean expected to reach an annualized run rate of $300 million in 2020.

Those numbers held up well. Per its S-1 filing, DigitalOcean generated $203.1 million in 2018 revenue, $254.8 million in 2019 and $318.4 million in 2020. The company closed 2020 out with a self-calculated $357 million in annual run rate.
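
A quick sanity check on the growth those figures imply: ($318.4 million − $254.8 million) / $254.8 million works out to roughly 25% growth in 2020, essentially flat against the roughly 25.5% the company posted in 2019.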

During its recent years of growth, DigitalOcean has managed to lose modestly increasing amounts of money as calculated using generally accepted accounting principles (GAAP), while generating non-GAAP profit (adjusted EBITDA) in rising quantities, a rising disconnect between the two measures.

Feb
25
2021
--

MySQL Monitoring and Reporting Using the MySQL Shell

Monitoring Using the MySQL Shell

MySQL Shell is the advanced MySQL client, which has many excellent features. In this blog, I am going to explain the MySQL Shell commands “\show” and “\watch”. Both commands are very useful for monitoring MySQL processes, and they provide insight into the foreground and background threads as well.

Overview

“\show” and “\watch” are MySQL Shell commands that can be executed from the JavaScript (JS), Python (Py), and SQL interfaces. Both commands provide the same information; the difference is that “\watch” refreshes the results at regular intervals, two seconds by default.

  • \show: Run the specified report using the provided options and arguments.
  • \watch: Run the specified report using the provided options and arguments, and refresh the results at regular intervals.
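
The two-second refresh can be tuned; for instance, to slow the threads report down to five seconds (a sketch; “\watch” accepts an --interval option in seconds, and 5 here is an arbitrary value):

MySQL  localhost:33060+ ssl  percona  JS > \watch threads --interval=5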

Below are the available reports you can use with the “\show” or “\watch” command to retrieve the data.

MySQL  localhost:33060+ ssl  percona  JS > \show
Available reports: query, thread, threads.

MySQL  localhost:33060+ ssl  percona  JS > \watch
Available reports: query, thread, threads.

  • Query
  • Thread
  • Threads

“\show” with “query”

It simply executes the query provided as an argument within the double quotes and prints the result.

MySQL  localhost:33060+ ssl  percona  JS > \show query "select database()"
+------------+
| database() |
+------------+
| percona    |
+------------+
MySQL  localhost:33060+ ssl  percona  JS > \show query --vertical "select database()"
*************************** 1. row ***************************
database(): percona

You can also use the same report with the “\watch” command. Say you want to monitor the processlist every two seconds; you can use a command like:

\watch query "show processlist"

“\show” with “thread”

This report is designed to provide various details about a specific thread. Below are some of the important details you can retrieve.

  • InnoDB details ( --innodb )
  • Lock details ( --locks )
  • Prepared statement details ( --prep-stmts )
  • Client connection details ( --client )
  • Session status ( --status ) and session variable details ( --vars )

Example:

I am going to demonstrate with the scenario below.

At session 1:

My connection id is 121. I have started a transaction and updated the row where “id=3”, but have not yet committed or rolled back the transaction.

mysql> \r
Connection id:    121
Current database: percona

mysql> select * from herc;
+------+--------+
| id   | name   |
+------+--------+
|    1 | jc     |
|    2 | herc7  |
|    3 | sakthi |
+------+--------+
3 rows in set (0.00 sec)

mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> update herc set name='xxx' where id=3;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

At session 2:

My connection id is 123. I have started a transaction and tried to update the same row where “id=3”. The query is still executing because the transaction from session 1 is blocking the row (id = 3).

mysql> \r
Connection id:    123
Current database: percona

mysql> begin;
Query OK, 0 rows affected (0.00 sec)

mysql> update herc set name='hercules' where id=3;

Now let’s use the command “\show thread” for both connection IDs (121, 123) and see what information we can get.

General information (connection id = 123):

MySQL  localhost:33060+ ssl  JS > \show thread --cid=123 --general
GENERAL
Thread ID:                161
Connection ID:            123
Thread type:              FOREGROUND
Program name:             mysql
User:                     root
Host:                     localhost
Database:                 percona
Command:                  Query
Time:                     00:08:49
State:                    updating
Transaction state:        LOCK WAIT
Prepared statements:      0
Bytes received:           282
Bytes sent:               131
Info:                     update herc set name='hercules' where id=3
Previous statement:       NULL

From the general information, you can find basic details about the connection, such as its current command, state, and transaction state.

InnoDB information:

MySQL  localhost:33060+ ssl  JS > \show thread --cid=123 --innodb
INNODB STATUS
State:                    LOCK WAIT
ID:                       28139179
Elapsed:                  00:10:23
Started:                  2021-02-23 17:40:06.000000
Isolation level:          REPEATABLE READ
Access:                   READ WRITE
Locked tables:            1
Locked rows:              1
Modified rows:            0

Using the “--innodb” option, you can find InnoDB details like the transaction state, thread start time, elapsed time, locked tables, locked rows, and modified rows.

Locks information:

For connection id 123:

MySQL  localhost:33060+ ssl  JS > \show thread --cid=123 --locks
LOCKS
Waiting for InnoDB locks
+---------------------+----------+------------------+--------+-----+-------+----------------+---------------------+----------+
| Wait started        | Elapsed  | Locked table     | Type   | CID | Query | Account        | Transaction started | Elapsed  |
+---------------------+----------+------------------+--------+-----+-------+----------------+---------------------+----------+
| 2021-02-23 17:40:06 | 00:12:27 | `percona`.`herc` | RECORD | 121 | NULL  | root@localhost | 2021-02-23 17:39:32 | 00:13:01 |
+---------------------+----------+------------------+--------+-----+-------+----------------+---------------------+----------+

Waiting for metadata locks
N/A

Blocking InnoDB locks
N/A

Blocking metadata locks
N/A

Connection id 123 is from session 2, which is currently waiting for connection id 121 (session 1) to release the lock. Let’s see the “--locks” status for connection id 121.

MySQL  localhost:33060+ ssl  JS > \show thread --cid=121 --locks
LOCKS

Waiting for InnoDB locks
N/A

Waiting for metadata locks
N/A

Blocking InnoDB locks
+---------------------+----------+------------------+--------+-----+--------------------------------------------+
| Wait started        | Elapsed  | Locked table     | Type   | CID | Query                                      |
+---------------------+----------+------------------+--------+-----+--------------------------------------------+
| 2021-02-23 17:40:06 | 00:14:23 | `percona`.`herc` | RECORD | 123 | update herc set name='hercules' where id=3 |
+---------------------+----------+------------------+--------+-----+--------------------------------------------+

Blocking metadata locks
N/A

Here, you can find the details under “Blocking InnoDB locks”: connection id 121 is blocking connection id 123 (session 2).

Like the above examples, you can explore the other options as well.
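
For instance, a hypothetical invocation combining two of the remaining options for the same thread would look like this (output omitted here):

MySQL  localhost:33060+ ssl  JS > \show thread --cid=123 --client --status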

“\show” with “threads”

This report is very helpful for learning the details of your ongoing threads. It provides details about both “FOREGROUND” and “BACKGROUND” threads. There are many columns that are very useful for understanding thread status, and you can select the columns you need with the option “-o”. By executing the command “\show threads --help”, you can find all the available options and their purposes.

  • It supports the WHERE clause for generating the report
  • It supports ORDER BY for generating the report
  • It supports LIMIT for generating the report. 

Below, I am sharing some examples that will help you understand how to use the “threads” report with the MySQL Shell.

  • How to find the running “FOREGROUND” threads details
  • How to find the running “BACKGROUND” threads details
  • How to find the top five threads, which are consuming more memory from a particular user
  • How to find the Query digest details from ongoing threads
  • How to find the top five threads which consumed huge IO operations
  • How to find the top five blocked and blocking threads

I am running sysbench against the server to put the database under load.

sysbench /usr/share/sysbench/oltp_read_write.lua --events=0 --time=30000 --mysql-host=localhost --mysql-user=root --mysql-password=Course@321 --mysql-port=3306 --delete_inserts=10 --index_updates=10 --non_index_updates=10 --report-interval=1 --threads=100 run

How to Find the Running “FOREGROUND” Threads Details

You can use the option “--foreground” to see all the running foreground threads.

MySQL  localhost:33060+ ssl  JS > \show threads --foreground
+-----+-----+-----------------+-----------+---------+---------+----------+------------------------+-----------+-------------------------------------------------------------------+-----------+
| tid | cid | user            | host      | db      | command | time     | state                  | txstate   | info                                                              | nblocking |
+-----+-----+-----------------+-----------+---------+---------+----------+------------------------+-----------+-------------------------------------------------------------------+-----------+
| 27  | 114 | root            | localhost | NULL    | Query   | 00:00:00 | executing              | NULL      | SELECT json_object('cid',t.PRO ... READ_ID = io.thread_id WHERE t | 0         |
| 42  | 5   | event_scheduler | localhost | NULL    | Daemon  | 17:42:20 | Waiting on empty queue | NULL      | NULL                                                              | 0         |
| 46  | 7   | NULL            | NULL      | NULL    | Daemon  | 17:42:20 | Suspending             | NULL      | NULL                                                              | 0         |
| 158 | 120 | root            | localhost | NULL    | Sleep   | 00:32:24 | NULL                   | NULL      | NULL                                                              | 0         |
... (rows trimmed) ...
| 260 | 222 | root            | localhost | sbtest  | Execute | 00:00:00 | updating               | LOCK WAIT | NULL                                                              | 1         |
| 261 | 223 | root            | localhost | sbtest  | Execute | 00:00:00 | updating               | LOCK WAIT | NULL                                                              | 0         |
+-----+-----+-----------------+-----------+---------+---------+----------+------------------------+-----------+-------------------------------------------------------------------+-----------+

How to Find the Running “BACKGROUND” Threads Details

This gives detailed information about the background threads, mostly InnoDB. You can use the flag “--background” to get these details, which are really helpful for debugging performance issues.

MySQL  localhost:33060+ ssl  JS > \show threads --background
+-----+--------------------------------------+---------+-----------+------------+------------+------------+
| tid | name                                 | nio     | ioltncy   | iominltncy | ioavgltncy | iomaxltncy |
+-----+--------------------------------------+---------+-----------+------------+------------+------------+
| 1   | sql/main                             | 92333   | 192.51 ms | 229.63 ns  | 96.68 us   | 1.42 ms    |
| 3   | innodb/io_ibuf_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 4   | innodb/io_log_thread                 | NULL    | NULL      | NULL       | NULL       | NULL       |
| 5   | innodb/io_read_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 6   | innodb/io_read_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 7   | innodb/io_read_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 8   | innodb/io_read_thread                | NULL    | NULL      | NULL       | NULL       | NULL       |
| 9   | innodb/io_write_thread               | 37767   | 45.83 s   | 1.26 us    | 1.21 ms    | 17.81 ms   |
| 10  | innodb/io_write_thread               | 36763   | 44.57 s   | 1.23 us    | 1.21 ms    | 30.11 ms   |
| 11  | innodb/io_write_thread               | 37989   | 45.87 s   | 1.26 us    | 1.21 ms    | 24.03 ms   |
| 12  | innodb/io_write_thread               | 37745   | 45.78 s   | 1.23 us    | 1.21 ms    | 28.93 ms   |
| 13  | innodb/page_flush_coordinator_thread | 456128  | 2.19 min  | 5.27 us    | 419.75 us  | 29.98 ms   |
| 14  | innodb/log_checkpointer_thread       | 818     | 479.84 ms | 2.62 us    | 710.63 us  | 9.26 ms    |
| 15  | innodb/log_flush_notifier_thread     | NULL    | NULL      | NULL       | NULL       | NULL       |
| 16  | innodb/log_flusher_thread            | 1739344 | 41.71 min | 1.46 us    | 1.44 ms    | 30.22 ms   |
| 17  | innodb/log_write_notifier_thread     | NULL    | NULL      | NULL       | NULL       | NULL       |
| 18  | innodb/log_writer_thread             | 5239157 | 10.23 min | 1.14 us    | 117.16 us  | 29.02 ms   |
| 19  | innodb/srv_lock_timeout_thread       | NULL    | NULL      | NULL       | NULL       | NULL       |
| 20  | innodb/srv_error_monitor_thread      | NULL    | NULL      | NULL       | NULL       | NULL       |
| 21  | innodb/srv_monitor_thread            | NULL    | NULL      | NULL       | NULL       | NULL       |
| 22  | innodb/buf_resize_thread             | NULL    | NULL      | NULL       | NULL       | NULL       |
| 23  | innodb/srv_master_thread             | 270     | 4.02 ms   | 6.75 us    | 14.90 us   | 41.74 us   |
| 24  | innodb/dict_stats_thread             | 3088    | 429.12 ms | 3.22 us    | 138.96 us  | 5.93 ms    |
| 25  | innodb/fts_optimize_thread           | NULL    | NULL      | NULL       | NULL       | NULL       |
| 26  | mysqlx/worker                        | NULL    | NULL      | NULL       | NULL       | NULL       |
| 28  | mysqlx/acceptor_network              | NULL    | NULL      | NULL       | NULL       | NULL       |
| 32  | innodb/buf_dump_thread               | 1060    | 7.61 ms   | 2.74 us    | 7.18 us    | 647.18 us  |
| 33  | innodb/clone_gtid_thread             | 4       | 689.86 us | 4.46 us    | 172.46 us  | 667.95 us  |
| 34  | innodb/srv_purge_thread              | 7668    | 58.21 ms  | 3.34 us    | 336.20 us  | 1.64 ms    |
| 35  | innodb/srv_worker_thread             | 30      | 278.22 us | 5.57 us    | 9.27 us    | 29.69 us   |
| 36  | innodb/srv_purge_thread              | NULL    | NULL      | NULL       | NULL       | NULL       |
| 37  | innodb/srv_worker_thread             | NULL    | NULL      | NULL       | NULL       | NULL       |
| 38  | innodb/srv_worker_thread             | 24      | 886.23 us | 5.24 us    | 36.93 us   | 644.75 us  |
| 39  | innodb/srv_worker_thread             | NULL    | NULL      | NULL       | NULL       | NULL       |
| 40  | innodb/srv_worker_thread             | 22      | 223.92 us | 5.84 us    | 10.18 us   | 18.34 us   |
| 41  | innodb/srv_worker_thread             | NULL    | NULL      | NULL       | NULL       | NULL       |
| 43  | sql/signal_handler                   | NULL    | NULL      | NULL       | NULL       | NULL       |
| 44  | mysqlx/acceptor_network              | NULL    | NULL      | NULL       | NULL       | NULL       |
+-----+--------------------------------------+---------+-----------+------------+------------+------------+

How to Find the Top Five Threads, Which are Consuming More Memory From a Particular User

In the example below, I am finding the top five threads consuming the most memory for the user “root”.

MySQL  localhost:33060+ ssl  JS > \show threads --foreground -o tid,user,memory,started --order-by=memory --desc --where "user = 'root'" --limit=5
+-----+------+----------+---------------------+
| tid | user | memory   | started             |
+-----+------+----------+---------------------+
| 247 | root | 9.47 MiB | 2021-02-23 18:30:29 |
| 166 | root | 9.42 MiB | 2021-02-23 18:30:29 |
| 248 | root | 9.41 MiB | 2021-02-23 18:30:29 |
| 186 | root | 9.39 MiB | 2021-02-23 18:30:29 |
| 171 | root | 9.38 MiB | 2021-02-23 18:30:29 |
+-----+------+----------+---------------------+

How to Find the Query Digest Details From Ongoing Threads

You can use the options “digest” and “digesttxt” to find the digest output of the running threads.

MySQL  localhost:33060+ ssl  JS > \show threads -o tid,cid,info,digest,digesttxt --where "digesttxt like 'UPDATE%'" --vertical
*************************** 1. row ***************************
      tid: 161
      cid: 123
     info: update herc set name='hercules' where id=3
   digest: 7832494e46eee2b28a46dc1fdae2e1b18d1e5c00d42f56b5424e5716d069fd39
digesttxt: UPDATE `herc` SET NAME = ? WHERE `id` = ?

How to Find the Top Five Threads Which Consumed Huge IO Operations

MySQL  localhost:33060+ ssl  JS > \show threads -o tid,cid,nio --order-by=nio --desc --limit=5
+-----+-----+-------+
| tid | cid | nio   |
+-----+-----+-------+
| 27  | 114 | 36982 |
| 238 | 200 | 2857  |
| 215 | 177 | 2733  |
| 207 | 169 | 2729  |
| 232 | 194 | 2724  |
+-----+-----+-------+

nio – the total number of I/O events for the thread.

How to Find the Top Five Blocked and Blocking Threads

  • nblocked – the number of other threads blocked by the thread
  • nblocking – the number of other threads blocking the thread
  • ntxrlckd – the approximate number of rows locked by the current InnoDB transaction

Blocking threads:

MySQL  localhost:33060+ ssl  JS > \show threads -o tid,cid,nblocked,nblocking,ntxrlckd,txstate --order-by=nblocking --desc --limit 5
+-----+-----+----------+-----------+----------+-----------+
| tid | cid | nblocked | nblocking | ntxrlckd | txstate   |
+-----+-----+----------+-----------+----------+-----------+
| 230 | 192 | 0        | 7         | 5        | LOCK WAIT |
| 165 | 127 | 0        | 6         | 2        | LOCK WAIT |
| 215 | 177 | 0        | 5         | 9        | LOCK WAIT |
| 221 | 183 | 0        | 4         | NULL     | NULL      |
| 233 | 195 | 1        | 4         | NULL     | NULL      |
+-----+-----+----------+-----------+----------+-----------+

Blocked threads:

MySQL  localhost:33060+ ssl  JS > \show threads -o tid,cid,nblocked,nblocking,ntxrlckd,txstate --order-by=nblocked --desc --limit 5
+-----+-----+----------+-----------+----------+-----------+
| tid | cid | nblocked | nblocking | ntxrlckd | txstate   |
+-----+-----+----------+-----------+----------+-----------+
| 203 | 165 | 15       | 0         | 8        | LOCK WAIT |
| 181 | 143 | 10       | 1         | 5        | LOCK WAIT |
| 223 | 185 | 9        | 0         | 8        | LOCK WAIT |
| 209 | 171 | 9        | 1         | 5        | LOCK WAIT |
| 178 | 140 | 6        | 0         | 7        | LOCK WAIT |
+-----+-----+----------+-----------+----------+-----------+

As you can see, there are many options to explore, and you can generate reports based on your requirements. I hope this blog post helps you understand the “\show” and “\watch” commands in the MySQL Shell!

Feb
25
2021
--

Why F5 spent $2.2B on 3 companies to focus on cloud native applications

It’s essential for older companies to recognize changes in the marketplace or face the brutal reality of being left in the dust. F5 is an old-school company that launched back in the ’90s, yet it has been able to transform a number of times in its history to avoid major disruption. Over the last two years, the company has continued that process of redefining itself, this time using a trio of acquisitions — NGINX, Shape Security and Volterra — totaling $2.2 billion to push in a new direction.

While F5 has been associated with applications management for some time, it recognized that the way companies developed and managed applications was changing in a big way with the shift to Kubernetes, microservices and containerization. At the same time, applications have been increasingly moving to the edge, closer to the user. The company understood that it needed to up its game in these areas if it was going to keep up with customers.

Taken separately, it would be easy to miss that there was a game plan behind the three acquisitions, but together they show a company with a clear opinion of where it wants to go next. We spoke to F5 president and CEO François Locoh-Donou to learn why he bought these companies and to figure out the method behind his company’s acquisition-spree madness.

Looking back, looking forward

F5, which was founded in 1996, has found itself at a number of crossroads in its long history, times where it needed to reassess its position in the market. A few years ago it found itself at one such juncture. The company had successfully navigated the shift from physical appliance to virtual, and from data center to cloud. But it also saw the shift to cloud native on the horizon and it knew it had to be there to survive and thrive long term.

“We moved from just keeping applications performing to actually keeping them performing and secure. Over the years, we have become an application delivery and security company. And that’s really how F5 grew over the last 15 years,” said Locoh-Donou.

Today the company has over 18,000 customers centered in enterprise verticals like financial services, healthcare, government, technology and telecom. He says that the focus of the company has always been on applications and how to deliver and secure them, but as they looked ahead, they wanted to be able to do that in a modern context, and that’s where the acquisitions came into play.

As F5 saw it, applications were becoming central to their customers’ success and their IT departments were expending too many resources connecting applications to the cloud and keeping them secure. So part of the goal for these three acquisitions was to bring a level of automation to this whole process of managing modern applications.

“Our view is you fast forward five or 10 years, we are going to move to a world where applications will become adaptive, which essentially means that we are going to bring automation to the security and delivery and performance of applications, so that a lot of that stuff gets done in a more native and automated way,” Locoh-Donou said.

As part of this shift, the company saw customers increasingly using microservices architecture in their applications. This means instead of delivering a large monolithic application, developers were delivering them in smaller pieces inside containers, making it easier to manage, deploy and update.

At the same time, it saw companies needing a new way to secure these applications as they shifted from data center to cloud to the edge. And finally, that shift to the edge would require a new way to manage applications.

Feb
25
2021
--

DataJoy raises $6M seed to help SaaS companies track key business metrics

Every business needs to track fundamental financial information, but the data typically lives in a variety of silos, making it a constant challenge to understand a company’s overall financial health. DataJoy, an early-stage startup, wants to solve that issue. The company announced a $6 million seed round today led by Foundation Capital with help from Quarry VC, Partech Partners, IGSB, Bow Capital and SVB.

Like many startup founders, CEO Jon Lee experienced firsthand the frustration of trying to gather this financial data, and he decided to start a company to deal with it once and for all. “The reason why I started this company was that I was really frustrated at Copper, my last company, because it was really hard just to find the answers to simple business questions in my data,” he told me.

These include basic questions like how the business is doing this quarter, if there are any surprises that could throw the company off track and where are the best places to invest in the business to accelerate more quickly.

The company has decided to concentrate its efforts for starters on SaaS companies and their requirements. “We basically focus on taking the work out of revenue intelligence, and just give you the insights that successful companies in the SaaS vertical depend on to be the largest and fastest growing in the market,” Lee explained.

The idea is to build a product with a way to connect to key business systems, pull the data and answer a very specific set of business questions, while using machine learning to provide more proactive advice.

While the company is still in the process of building the product and is pre-revenue, it has begun developing the pieces to ultimately help companies answer these questions. Eventually it will have a set of connectors to various key systems like Salesforce for CRM, HubSpot and Marketo for marketing, NetSuite for ERP, Gainsight for customer experience and Amplitude for product intelligence.

Lee says the set of connectors will be as specific as the questions themselves, based on the company’s research into potential customers and what they currently use to track this information. Ashu Garg, general partner at lead investor Foundation Capital, says he was attracted to the founding team’s experience, but also to the fact that they were solving a problem he sees all the time while sitting on the boards of various SaaS startups.

“I spend my life in the board meetings. It’s what I do, and every CEO, every board is looking for straight answers for what should be obvious questions, but they require this intersection of data,” Garg said. He says that, to an extent, it’s only possible now due to the evolution of technology that can pull this all together in a way that simplifies the process.

The company currently has 11 employees, with plans to double that by the middle of this year. As a longtime entrepreneur, Lee says he has found that building a diverse workforce is essential to building a successful company. “People have found diversity usually [results in a company that is] more productive, more creative and works faster,” Lee said. He said that’s why it’s important to focus on diversity from a company’s earliest days, while being proactive to make that happen; for example, ensuring you have a diverse set of candidates to choose from when you are reviewing resumes.

For now, the company is 100% remote. In fact, Lee and his co-founder, Chief Product Officer Ken Wong, who previously ran AI and machine learning at Tableau, have yet to meet in person, but they are hoping that changes soon. The company will eventually have a presence in Vancouver and San Mateo whenever offices start to open.
