4 key areas SaaS startups must address to scale infrastructure for the enterprise

Startups and SMBs are usually the first to adopt many SaaS products. But as these customers grow in size and complexity — and as you rope in larger organizations — scaling your infrastructure for the enterprise becomes critical for success.

Below are four tips on how to advance your company’s infrastructure to support and grow with your largest customers.

Address your customers’ security and reliability needs

If you’re building SaaS, odds are you’re holding very important customer data. Regardless of what you build, that makes you a threat vector for attacks on your customers. While security is important for all customers, the stakes certainly get higher the larger they grow.

Given the stakes, it’s paramount to build infrastructure, products and processes that address your customers’ growing security and reliability needs. That includes the ethical and moral obligation you have to make sure your systems and practices meet and exceed any claim you make about security and reliability to your customers.

Here are security and reliability requirements large customers typically ask for:

Formal SLAs around uptime: If you’re building SaaS, customers expect it to be available all the time. Large customers using your software for mission-critical applications will expect to see formal SLAs in contracts committing to 99.9% uptime or higher. As you build infrastructure and product layers, you need to be confident in your uptime and be able to measure uptime on a per-customer basis so you know whether you’re meeting your contractual obligations.
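To see what a 99.9% commitment actually budgets for, the arithmetic is worth making explicit. Here is a quick illustrative sketch (a hypothetical helper, not from any real SLA tooling; real SLA accounting is defined by the contract and often excludes scheduled maintenance):

```javascript
// Allowed downtime for a given uptime SLA over a period.
// Illustrative helper only.
function allowedDowntimeMinutes(uptimePercent, periodDays) {
  const totalMinutes = periodDays * 24 * 60;
  return totalMinutes * (1 - uptimePercent / 100);
}

console.log(allowedDowntimeMinutes(99.9, 30).toFixed(1));  // "43.2" minutes per 30-day month
console.log(allowedDowntimeMinutes(99.99, 30).toFixed(1)); // "4.3" minutes per 30-day month
```

Each additional nine cuts the monthly downtime budget by a factor of 10, which is why per-customer uptime measurement matters before the contract is signed.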

While it’s hard to prioritize asks from your largest customers, you’ll find that their collective feedback will pull your product roadmap in a specific direction.

Real-time status of your platform: Most larger customers will expect to see your platform’s historical uptime and have real-time visibility into events and incidents as they happen. As you mature and specialize, creating this visibility for customers also drives more collaboration between your customer operations and infrastructure teams. This collaboration is valuable to invest in: it provides insight into how customers are experiencing a particular degradation in your service and allows you to communicate back what you’ve found so far and what your ETA is.

Backups: As your customers grow, be prepared for expectations around backups — not just in terms of how long it takes to recover the whole application, but also around backup periodicity, location of your backups and data retention (e.g., are you holding on to the data too long?). If you’re building your backup strategy, thinking about future flexibility around backup management will help you stay ahead of these asks.


ConverseNow is targeting restaurant drive-thrus with new $15M round

One year after voice-based AI technology company ConverseNow raised a $3.3 million seed round, the company is back with a cash infusion of $15 million in Series A funding in a round led by Craft Ventures.

The Austin-based company’s AI voice ordering assistants George and Becky work inside quick-serve restaurants to take orders via phone, chat, drive-thru and self-service kiosks, freeing up staff to concentrate on food preparation and customer service.

Joining Craft in the Series A round were LiveOak Venture Partners, Tensility Venture Partners, Knoll Ventures, Bala Investments, 2048 Ventures, Bridge Investments, Moneta Ventures and angel investors Federico Castellucci and Ashish Gupta. This new investment brings ConverseNow’s total funding to $18.3 million, Vinay Shukla, co-founder and CEO of ConverseNow, told TechCrunch.

As part of the investment, Bryan Rosenblatt, partner at Craft Ventures, is joining the company’s board of directors, and said in a written statement that “post-pandemic, quick-service restaurants are primed for digital transformation, and we see a unique opportunity for ConverseNow to become a driving force in the space.”

When ConverseNow raised its seed funding in 2020, it was piloting its technology in just a handful of stores. Today, it is live in more than 750 stores, and in the interim its revenue has grown sevenfold and its headcount fivefold.

The restaurant industry was one of the hardest hit during the pandemic, and as restaurants reopen, Shukla said their two main problems will be labor and supply chain, and “that is where our technology intersects.”

The AI assistants step in during peak times, when workers are busy, to help take orders so that customers are not left waiting and calls are not dropped or abandoned, something Shukla said happens often.

The technology can also drive more business: ConverseNow says it has been shown to increase average orders by 23% and revenue by 20%, while freeing up to 12 hours of deployable labor time per store per week.

Company co-founder Rahul Aggarwal said more people prefer to order remotely, which has led to an increase in volume. However, the more workers have to multitask, the less focus they have on any one job.

“If you step into restaurants with ConverseNow, you see them reimagined,” Aggarwal said. “You find workers focusing on the job they like to do, which is preparing food. It is also driving better work balance, while on the customer side, you don’t have to wait in the queue. Operators have more time to churn orders, and service time comes down.”

ConverseNow is one of the startups within the global restaurant management software market that is forecasted to reach $6.94 billion by 2025, according to Grand View Research. Over the past year, startups in the space attracted both investors and acquirers. For example, point-of-sale software company Lightspeed acquired Upserve in December for $430 million. Earlier this year, Sunday raised $24 million for its checkout technology.

The new funding will enable ConverseNow to continue developing its line-busting technology and invest in marketing, sales and product innovation. It will also be working on building a database from every conversation and onboarding new customers quicker, which involves inputting the initial menu.

By leveraging artificial intelligence, the company will be able to course-correct any inconsistencies, like background noise on a call, and better predict what a customer might be saying. It will also correct missing words and translate the order better. In the future, Shukla and Aggarwal also want the platform to be able to tell what is going on around the restaurant — what traffic is like, the weather and any menu promotions to drive upsell.



Homebase raises $71M for a team management platform aimed at SMBs and their hourly workers

Small and medium enterprises have become a big opportunity in the world of B2B technology in the last several years, and today a startup that’s building tools aimed at helping them manage their teams of workers is announcing some funding that underscores the state of that market.

Homebase, which provides a platform that helps SMBs manage various services related to their hourly workforces, has closed $71 million in funding, a Series C that values the company between $500 million and $600 million, according to sources close to the startup.

The round has a number of big names in it that are as much a sign of how large VCs are valuing the SMB market right now as it is of the strategic interest of the individuals who are participating. GGV Capital is leading the round, with past backers Bain Capital Ventures, Baseline Ventures, Bedrock, Cowboy Ventures and Khosla Ventures also participating. Individuals include Focus Brands President Kat Cole; Jocelyn Mangan, a board member at Papa John’s and Chownow and former COO of Snag; former CFO of payroll and benefits company Gusto, Mike Dinsdale; Guild Education founder Rachel Carlson; star athletes Jrue and Lauren Holiday; and alright alright alright actor and famous everyman and future political candidate Matthew McConaughey.

Homebase has raised $108 million to date.

The funding is coming on the heels of strong growth for Homebase (which is not to be confused with the U.K./Irish home improvement chain of the same name, nor the YC-backed Vietnamese proptech startup).

The company now has some 100,000 small businesses, with 1 million employees in total, on its platform. Businesses use Homebase to manage all manner of activities related to workers that are paid hourly, including (most recently) payroll, as well as shift scheduling, timeclocks and timesheets, hiring and onboarding, communication and HR compliance.

John Waldmann, Homebase’s founder and CEO, said the funding will go toward continuing to bring on more customers as well as expanding the list of services offered to them. That could include more features geared to frontline and service workers, as well as features for small businesses that also have some “desk” workers who are still paid hourly.

The common thread, Waldmann said, is not the exact nature of those jobs, but the fact that all of them, partly because of that hourly aspect, have been largely underserved by tech up to now.

“From the beginning, our mission was to help local businesses and their teams,” he said. Part of his inspiration came from people he knew: a childhood friend who owned an independent, expanding restaurant chain and was going through the challenges of managing his teams there, carrying out most of his work on paper; and his sister, who worked in hospitality and faced challenges that didn’t look all that different. She had to call in to see when she was working, writing her hours in a notebook to make sure she got paid accurately.

“There are a lot of tech companies focused on making work easier for folks that sit at computers or desks, but aren’t building tools for these others,” Waldmann said. “In the world of work, the experience just looks different with technology.”

Homebase currently is focused on the North American market. There are some 5 million small businesses in the U.S. alone, so there is a lot of opportunity there. The huge pressure many have experienced over the last 16 months of COVID-19, which led some to shut down altogether, has also pushed them to manage and carry out work more efficiently and in a more organized way: owners need to know where their staff is, and staff need to know what they should be doing, at all times.

What will be interesting is to see what kinds of services Homebase adds to its platform over time: In a way, it’s a sign of how hourly wage workers are becoming a more sophisticated and salient aspect of the workforce, with their own unique demands. Payroll, which is now live in 27 states, also comes with pay advances, opening the door to other kinds of financial services for Homebase, for example.

“Small businesses are the lifeblood of the American economy, with more than 60% of Americans employed by one of our 30 million small businesses. In a post-pandemic world, technology has never been more important to businesses of all sizes, including SMBs,” Jeff Richards, managing partner at GGV Capital and new Homebase board member, said in a statement. “The team at Homebase has worked tirelessly for years to bring technology to SMBs in a way that helps drive increased profitability, better hiring and growth. We’re thrilled to see Homebase playing such an important role in America’s small business recovery and thrilled to be part of the mission going forward.”

It’s interesting to see McConaughey involved in this round, given that he’s most recently made a turn toward politics, with plans to run for governor of Texas in 2022.

“Hardworking people who work in and run restaurants and local businesses are important to all of us,” he said in a statement. “They play an important role in giving our cities a sense of livelihood, identity and community. This is why I’ve invested in Homebase. Homebase brings small business operations into the modern age and helps folks across the country not only continue to work harder, but work smarter.”


Coralogix logs $55M for its new take on production analytics, now valued at $300M-$400M

Data may be the new oil, but it’s only valuable if you make good use of it. Today, a startup that has built a new kind of production analytics platform for developers, security engineers, and data scientists to track and better understand how data is moving around their networks is announcing a round of funding that underscores the demand for their technology.

Coralogix, which provides stateful streaming services to engineering teams, has picked up $55 million in a Series C round of funding.

The round was led by Greenfield Partners, with Red Dot Capital Partners, StageOne Ventures, Eyal Ofer’s O.G. Tech, Janvest Capital Partners, Maor Investments and 2B Angels also participating.

This Series C comes about 10 months after the company’s $25 million Series B, and from what we understand, Coralogix’s valuation is now in the range of $300 million to $400 million. That big jump comes on the back of 250% growth since this time last year: the startup has racked up some 2,000 paying customers, with small teams paying as little as $100 per year and large enterprises paying up to $1.5 million per year.

Previously, Coralogix — founded in Tel Aviv and with an HQ also in San Francisco — had also raised a round of $10 million.

Coralogix got its start as a platform aimed at quality assurance support for R&D and engineering teams. The focus here is on log analytics and metrics for platform engineers, and this still forms a big part of its business today. In recent years, Coralogix’s tools have also been applied to cloud security services, contributing to a company’s threat intelligence by providing a way to observe data for inconsistencies that typically point to a breach or another incident. (It integrates with AlienVault and others for this purpose.)

The third area that is just picking up now and will be developed further — one of the uses of this investment, in fact — will be to develop how Coralogix is used for business intelligence. This is a particularly interesting area because it plays into how Coralogix is built, to provide analytics on data before it is indexed.

“It’s about high-volume, but low-value data,” Ariel Assaraf, Coralogix’s CEO, said in an interview. “Customers don’t want to store the data [or index it] but want to view it live and visualize it. We are starting to see a use case where business information and our analytics come together for sentiment analysis and other areas.”

There are dozens of strong companies providing log analytics and data observability tools these days, underscoring the general growth and importance of DevOps. They include Datadog, Sumo Logic and Splunk.

However, Assaraf believes that what sets his company apart is its approach: Essentially, it has devised a way of observing and analyzing data streams before they get indexed, giving engineers more flexibility to query the data in different ways and glean more insights, faster. The other issue with indexing, he said, is that it impacts latency, which also has a big impact on overall costs for an organization.

For many of Coralogix’s competitors, turning around the nature of the business to focus not first on indexing would be akin to completely rebuilding the business, hard to do at their scale (although this is what Coralogix did when it pivoted as a small company several years ago, which is when Assaraf took on the role of CEO). One company he believes might be more of a direct rival is Confluent.

“I think we will see Confluent getting into observability very soon because they have the streaming capabilities,” he said, “but not the tools we have.” Another potential competitor looming on the horizon is Salesforce, whose potential move into the area underscores the shifting sands of what is powering enterprise IT investment decisions today.

Salesforce already has Heroku, Slack and Tableau, three major tools developers use for tracking and working with data, Assaraf pointed out, and there were strong rumors of it trying to buy DataDog, “so we definitely see where they are going. For sure, they understand the way things are changing. All the budgets when Salesforce first started were in marketing and sales. Now you sell to IT. Salesforce understands that shift to developers, and so that is where they are going.”

It makes for a very interesting landscape and future for companies like Coralogix, one that investors believe the startup will continue to shape as it has up to now.

“The dramatic shift in digital transformation is generating an explosion of data, which until now has forced enterprises to decide between cost and coverage,” said Shay Grinfeld, managing partner at Greenfield Partners. “Coralogix’s real-time streaming analytics pipeline employs proprietary algorithms to break this tradeoff and generate significant cost savings. Coralogix has built a customer roster that comprises some of the largest and most innovative companies in the world. We’re thrilled to partner with Ariel and the Coralogix team on their journey to reinvent the future of data observability.”


MongoDB: Modifying Documents Using Aggregation Pipelines and Update Expressions

Updating documents in MongoDB prior to version 4.2 was quite limited. It was not possible to set a field to a conditional expression, combine fields, or update a field based on the value of another field on the server side. Drawing a parallel to SQL UPDATE statements, for example, it wasn’t possible to do something like the following:

UPDATE t1 SET t1.f1 = t1.f2 WHERE …

It wasn’t possible to use a conditional expression either, something easily achieved with SQL standards:

UPDATE t1 SET t1.f1 = CASE WHEN f2 = 1 THEN 1 WHEN f2 = 2 THEN 5 END WHERE…

If something similar to either example above was required and the deployment was on version 3.4+, using $addFields in an aggregation pipeline was probably the closest alternative. However, it could not touch the current document, because the $out output destination could only be a different collection.

With older versions, the only way around this was to create a cursor with an aggregation pipeline and iterate it on the client side, updating each document inside the loop with the proper $set values. It was a tedious task that resulted in a chunk of custom JavaScript code.
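For illustration, the pattern looked roughly like the sketch below. A plain in-memory array stands in for the cursor and collection so no driver is involved; in the shell, the forEach body would issue one update() per document (collection and field names here are illustrative):

```javascript
// Sketch of the pre-4.2 client-side workaround: iterate an aggregation
// cursor and apply one $set update per document. A plain array stands
// in for the cursor/collection.
const docs = [{ _id: 3, f1: 30, f2: 300, f3: 3000 }];

docs.forEach((doc) => {
  // In the mongo shell this would be:
  // db.colltest2.update({ _id: doc._id }, { $set: { result: doc.f2 + doc.f3 } });
  doc.result = doc.f2 + doc.f3; // one network round-trip per document in reality
});

console.log(docs[0].result); // 3300
```

Each document costs a round-trip, which is exactly the tedium the pipeline-form update in 4.2 removes.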

With MongoDB 4.2 and onwards, it is possible to use an aggregation pipeline to update documents conditionally, allowing the update/creation of a field based on another field. This article presents some very common, basic operations that are easily achieved with SQL databases.

Field Expressions in Update Commands (v4.2+)

Updating a field with the value of some other field:

This is similar to the classic example of an SQL command: UPDATE t1 SET t1.f1 = t1.f2 + t1.f3

replset:PRIMARY> db.getSiblingDB("dbtest").colltest2.update({_id:3},[{$set:{result:{$add: [ "$f2", "$f3" ] } }} ]);
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

replset:PRIMARY> db.getSiblingDB("dbtest").colltest2.find({_id:3});
{ "_id" : 3, "f1" : 30, "f2" : 300, "f3" : 3000, "result" : 3300 }

The key point is the “$” on the front of the field names being referenced (“f2” and “f3” in this example). These are the simplest type of field path expression, as they’re called in the MongoDB documentation. You’ve probably seen them in the aggregation pipeline before, but it was only in v4.2 that you could also use them in a normal update command.
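A related gotcha: field path expressions resolve only in the pipeline (array) form of the update; in the classic form, { $set: { result: "$f2" } } stores the literal string "$f2". The toy resolver below mimics that difference in plain JavaScript (illustrative only, not MongoDB internals):

```javascript
// Toy illustration of field path resolution. In pipeline form, "$name"
// strings resolve to the document's field value; in the classic update
// form they are stored as plain string literals.
function resolveValue(doc, value, pipelineForm) {
  if (pipelineForm && typeof value === "string" && value.startsWith("$")) {
    return doc[value.slice(1)]; // field path expression, e.g. "$f2" -> doc.f2
  }
  return value; // classic form: the value is taken literally
}

const doc = { _id: 3, f1: 30, f2: 300 };
console.log(resolveValue(doc, "$f2", true));  // 300   (pipeline form)
console.log(resolveValue(doc, "$f2", false)); // "$f2" (classic form)
```

So if an update sets a field to a string starting with “$” and the literal string appears in the document, check whether the update was issued in pipeline form.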

Applying “CASE” conditions:

It is now quite easy to set a field’s value conditionally while updating a collection:

replset:PRIMARY> db.getSiblingDB("dbtest").colltest3.find({_id:3});
{ "_id" : 3, "grade" : 8 }

replset:PRIMARY> db.getSiblingDB("dbtest").colltest3.update(
  { _id: 3 },
  [ { $set: { result: {
      $switch: { branches: [
        { case: { $gte: [ "$grade", 7 ] }, then: "PASSED" },
        { case: { $lte: [ "$grade", 5 ] }, then: "NOPE" },
        { case: { $eq: [ "$grade", 6 ] }, then: "UNDER ANALYSIS" }
      ] }
  } } } ] );
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

replset:PRIMARY> db.getSiblingDB("dbtest").colltest3.find({ _id: 3});
{ "_id" : 3, "grade" : 8, "result" : "PASSED" }

Adding new fields for a specific filtered doc:

Let’s say that you want to stamp a document with the updated date = NOW and add a simple comment field:

replset:PRIMARY> db.getSiblingDB("dbtest").colltest.find({_id:3})
{ "_id" : 3, "description" : "Field 3", "rating" : 2, "updt_date" : ISODate("2021-05-06T22:00:00Z") }

replset:PRIMARY> db.getSiblingDB("dbtest").colltest.update(
  { _id: 3 },
  [ { $set: { "comment": "Comment3", mod_date: "$$NOW" } } ] );
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

replset:PRIMARY> db.getSiblingDB("dbtest").colltest.find({_id:3})
{ "_id" : 3, "description" : "Field 3", "rating" : 2, "updt_date" : ISODate("2021-05-06T22:00:00Z"), "comment" : "Comment3", "mod_date" : ISODate("2021-07-05T18:48:44.710Z") }

Reaching several docs with the same expression:

It is now possible to use the updateMany() command to apply the same pipeline to multiple documents:

replset:PRIMARY> db.getSiblingDB("dbtest").colltest3.find({});
{ "_id" : 1, "grade" : 8 }
{ "_id" : 2, "grade" : 5 }
{ "_id" : 3, "grade" : 8, "result" : "PASSED" }

replset:PRIMARY> db.getSiblingDB("dbtest").colltest3.updateMany( {},
    [ { $set: { result: { $switch: { branches: [ { case: { $gte: [ "$grade", 7 ] }, then: "PASSED" }, { case: { $lte: [ "$grade", 5 ] }, then: "NOPE" }, { case: { $eq: [ "$grade", 6 ] }, then: "UNDER ANALYSIS" } ] } } } } ] );
{ "acknowledged" : true, "matchedCount" : 3, "modifiedCount" : 2 }

replset:PRIMARY> db.getSiblingDB("dbtest").colltest3.find({});
{ "_id" : 1, "grade" : 8, "result" : "PASSED" }
{ "_id" : 2, "grade" : 5, "result" : "NOPE" }
{ "_id" : 3, "grade" : 8, "result" : "PASSED" }

Or, if you want to stick to the original db.collection.update() command, use the { multi: true } option. Note that the default is false, which means only the first matching document will be updated.

replset:PRIMARY> db.getSiblingDB("dbtest").colltest4.update({}, [{ $set: {result : { $switch: {branches: [{ case: { $gte: [ "$grade", 7 ] }, then: "PASSED" }, { case: { $lte: [ "$grade", 5 ] }, then: "NOPE" }, { case: { $eq: [ "$grade", 6 ] }, then: "UNDER ANALYSIS" } ] } } } } ] )
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

replset:PRIMARY> db.getSiblingDB("dbtest").colltest4.find({});
{ "_id" : 1, "grade" : 8, "result" : "PASSED" }
{ "_id" : 2, "grade" : 5 }
{ "_id" : 3, "grade" : 8 }

When specifying {multi:true} the expected outcome is finally achieved:

replset:PRIMARY> db.getSiblingDB("dbtest").colltest4.update({}, [{ $set: {result : { $switch: {branches: [{ case: { $gte: [ "$grade", 7 ] }, then: "PASSED" }, { case: { $lte: [ "$grade", 5 ] }, then: "NOPE" }, { case: { $eq: [ "$grade", 6 ] }, then: "UNDER ANALYSIS" } ] } } } } ],{multi:true} )
WriteResult({ "nMatched" : 3, "nUpserted" : 0, "nModified" : 2 })

replset:PRIMARY> db.getSiblingDB("dbtest").colltest4.find({});
{ "_id" : 1, "grade" : 8, "result" : "PASSED" }
{ "_id" : 2, "grade" : 5, "result" : "NOPE" }
{ "_id" : 3, "grade" : 8, "result" : "PASSED" }

Update by $merge Stage in the Aggregation Pipeline

Prior to version 4.2, directing the result of an aggregation pipeline to a new collection was achieved with $out. Starting in version 4.2, it is possible to use $merge, which is far more flexible: $out replaces the entire output collection, while $merge can replace, keep, or combine individual documents, among other options. You may want to refer to the comparison table in the official documentation.


With MongoDB 4.4 and onwards, it is possible to update a collection directly in the aggregation pipeline through the $merge stage. The trick is to set the output collection to the same collection being aggregated. The example below illustrates how to flag the highest grade of the student Rafa in math class:

  • Original document
replset:PRIMARY> db.getSiblingDB("dbtest").students2.find({"name": "Rafa","class":"math"})
{ "_id" : ObjectId("6100081e21f08fe0d19bda41"), "name" : "Rafa", "grades" : [ 4, 5, 6, 9 ], "class" : "math" }

  • The aggregation pipeline
replset:PRIMARY> db.getSiblingDB("dbtest").students2.aggregate( [{ $match : { "name": "Rafa","class":"math" } }, {$project:{maxGrade:{$max:"$grades"}}}, {$merge : { into: { db: "dbtest", coll: "students2" }, on: "_id",  whenMatched: "merge", whenNotMatched:"discard"} } ]);

  • Checking the result
replset:PRIMARY> db.getSiblingDB("dbtest").students2.find({"name": "Rafa","class":"math"})
{ "_id" : ObjectId("6100081e21f08fe0d19bda41"), "name" : "Rafa", "grades" : [ 4, 5, 6, 9 ], "class" : "math", "maxGrade" : 9 }

Note that the maxGrade field was merged into the doc, flagging that the max grade achieved by that student in math was 9.

Watch out: behind the scenes, $merge triggers an update against the same collection. If that update changes the physical location of a document, the pipeline might revisit the same document multiple times or even get into an infinite loop (the Halloween Problem).
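The hazard is easier to see with a toy simulation of the classic salary example in plain JavaScript (illustrative only, nothing MongoDB-specific): a scan gives every salary under a threshold a 10% raise, and each update relocates the row past the scan position.

```javascript
// Toy simulation of the Halloween Problem: give every salary under
// 3000 a 10% raise. Each "update" physically relocates the row to the
// end of the array, where the naive scan encounters it again.
function naiveRaise(rows) {
  let raises = 0;
  for (let i = 0; i < rows.length; i++) {
    if (rows[i].salary < 3000) {
      const row = rows.splice(i, 1)[0]; // the update moves the row...
      i--;
      row.salary *= 1.1;
      raises++;
      rows.push(row); // ...to a new location later in the scan
    }
  }
  return raises;
}

// The intended behavior is a single raise per row; instead the moving
// row is revisited until its salary crosses the threshold.
console.log(naiveRaise([{ salary: 1000 }])); // 12
```

Databases avoid this by snapshotting the set of rows to touch before modifying them, which is why the $merge restrictions above exist.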

The other cool thing is that the $merge stage can work exactly like the SQL INSERT ... SELECT construct (this is possible from MongoDB 4.2 onwards). The example below demonstrates how to fill the collection colltest_reporting with the result of an aggregation run against colltest5.

replset:PRIMARY> db.getSiblingDB("dbtest").colltest5.aggregate( [{ $match : { class: "A" } }, { $group: { _id: "$class",maxGrade: { $max: "$grade" } }},  {$merge : { into: { db: "dbtest", coll: "colltest_reporting" }, on: "_id",  whenMatched: "replace", whenNotMatched: "insert" } } ] );
replset:PRIMARY> db.getSiblingDB("dbtest").colltest_reporting.find()
{ "_id" : "A", "maxGrade" : 8 }


There are plenty of new possibilities here that will make developers’ lives easier (especially developers coming from SQL databases), considering that the aggregation framework provides several operators and many different stages to play with. However, it is important to highlight that a complex pipeline may incur performance degradation (a topic for another blog post). For more information on updates with aggregation pipelines, please refer to the official documentation.


Building and Testing Percona Distribution for MongoDB Operator

Recently I wanted to play with the latest and greatest Percona Distribution for MongoDB Operator, which had a bug fix I was interested in. The bug fix was merged in the main branch of the git repository, but no version of the Operator including the fix had been released yet. I started the Operator by cloning the main branch, but the bug was still reproducible. The reason was simple: the main branch had the last released version of the Operator in bundle.yaml, instead of the main branch build:

        - name: percona-server-mongodb-operator
          image: percona/percona-server-mongodb-operator:1.9.0

instead of 

        - name: percona-server-mongodb-operator
          image: perconalab/percona-server-mongodb-operator:main

Then I decided to dig deeper to see how hard it is to make a small change in the Operator code and test it.

This blog post is a beginner contributor guide where I tried to follow our CONTRIBUTING.md and Building and testing the Operator manual to build and test Percona Distribution for MongoDB Operator.


The requirements section was the first blocker for me, as I’m used to running Ubuntu, but the examples we have are for CentOS and macOS. For all the Ubuntu fans, here are the instructions:

echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update
sudo apt-get install -y google-cloud-sdk docker.io kubectl jq
sudo snap install helm --classic
sudo snap install yq --channel=v3/stable
curl -s -L https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz | sudo tar -C /usr/bin --strip-components 1 --wildcards -zxvpf - '*/oc'

I have also prepared a Pull Request to fix our docs and drafted a cloud-init file to simplify environment provisioning.


Get the code from GitHub main branch:

git clone https://github.com/percona/percona-server-mongodb-operator

Change some code, and it is time to build the Operator image and push it to a registry. Docker Hub is a nice choice for beginners as it does not require any installation or configuration, but to keep things local you might want to run your own registry; see Docker Registry, Harbor, or Trow.


command builds the image and pushes it to the registry that you specify in the IMAGE environment variable, like this:

export IMAGE=bob/my_repository_for_test_images:K8SPSMDB-372-fix-feature-X

Fixing the Issues

For me the execution of the build command failed for multiple reasons:

1. Most probably you need to run it as root to get access to the Docker unix socket, or just add your user to the docker group; otherwise you get:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock

2. Once I ran it as root, I got the following error:

"--squash" is only supported on a Docker daemon with experimental features enabled

It is quite easy to fix by adding the experimental flag to the /etc/docker/daemon.json file:

{
    "experimental": true
}

I have added it into the cloud-init file and will fix it in the same PR in the docs.

3. The third failure came at the last stage, pushing the image:

denied: requested access to the resource is denied

Obviously, you should be authorized to push to the registry.

docker login

fixed it for me just fine.

Finally, the image is built and pushed to the registry:

The push refers to repository [docker.io/bob/my_repository_for_test_images]
0014bf17d462: Pushed
K8SPSMDB-372-fix-feature-X: digest: sha256:458066396fdd6ac358bcd78ed4d8f5279ff0295223f1d7fbec0e6d429c01fb16 size: 949



command executes the tests in the e2e-tests folder one by one; as you can see, there are multiple scenarios:

"$dir/init-deploy/run" || fail "init-deploy"
"$dir/limits/run" || fail "limits"
"$dir/scaling/run" || fail "scaling"
"$dir/monitoring/run" || fail "monitoring"
"$dir/monitoring-2-0/run" || fail "monitoring-2-0"
"$dir/liveness/run" || fail "liveness"
"$dir/one-pod/run" || fail "one-pod"
"$dir/service-per-pod/run" || fail "service-per-pod"
"$dir/arbiter/run" || fail "arbiter"
"$dir/demand-backup/run" || fail "demand-backup"
"$dir/demand-backup-sharded/run" || fail "demand-backup-sharded"
"$dir/scheduled-backup/run" || fail "scheduled-backup"
"$dir/security-context/run" || fail "security-context"
"$dir/storage/run" || fail "storage"
"$dir/self-healing/run" || fail "self-healing"
"$dir/self-healing-chaos/run" || fail "self-healing-chaos"
"$dir/operator-self-healing/run" || fail "operator-self-healing"
"$dir/operator-self-healing-chaos/run" || fail "operator-self-healing-chaos"
"$dir/smart-update/run" || fail "smart-update"
"$dir/version-service/run" || fail "version-service"
"$dir/users/run" || fail "users"
"$dir/rs-shard-migration/run" || fail "rs-shard-migration"
"$dir/data-sharded/run" || fail "data-sharded"
"$dir/upgrade/run" || fail "upgrade"
"$dir/upgrade-sharded/run" || fail "upgrade-sharded"
"$dir/upgrade-consistency/run" || fail "upgrade-consistency"
"$dir/pitr/run" || fail "pitr"
"$dir/pitr-sharded/run" || fail "pitr-sharded"
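The || pattern above is the usual shell fail-fast idiom: each scenario script runs, and a non-zero exit triggers the fail helper. A minimal, self-contained sketch of the idea (true and false stand in for the actual "$dir/<scenario>/run" scripts; the real fail helper may abort immediately instead of collecting names):

```shell
#!/bin/sh
# Sketch of the fail-fast idiom used by the e2e run script.
# Each command simulates one scenario's run script.

failed=""
fail() {
    echo "test $1 failed" >&2
    failed="$failed $1"
}

true  || fail "init-deploy"   # exits 0, fail() is not called
false || fail "limits"        # exits 1, fail() records the name

echo "failed scenarios:$failed"
```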

It is also possible, of course, to run the tests one by one.

You must have kubectl configured and pointing at a working Kubernetes cluster; if something is missing or not working, the tests will tell you.

The only issue I faced was the readability of the test results. The logging of the test execution is pretty verbose, so I would recommend redirecting the output to a file for later debugging:

./e2e-tests/run >> /tmp/e2e-tests.out 2>&1
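If you would rather watch the output live while still keeping a copy, piping through tee is a common alternative — a sketch, with a hypothetical run_tests function standing in for ./e2e-tests/run:

```shell
# Stream output to the terminal while appending it to a log file.
# run_tests is a stand-in for ./e2e-tests/run.
run_tests() { echo "running e2e tests"; }

run_tests 2>&1 | tee -a /tmp/e2e-tests.out
```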

At Percona, we rely on Jenkins to automatically run the tests and verify the results for each Pull Request.


Contribution guides are written by developers for developers, so they often have gaps or unclear instructions that take experience to resolve. Such minor issues might scare off potential contributors, and as a result the project never gets the Pull Request with an awesome implementation of the brightest idea. Percona embraces open source culture and values contributors by providing simple tools to develop and test their ideas.

Writing this blog post resulted in two Pull Requests:

  1. Use a tag for container images in the main branch (link)

  2. Remove some gaps in the docs (link)

There is always room for improvement and a time to find a better way. Please let us know if you face any issues with contributing your ideas to Percona products. You can do that on the Community Forum or JIRA. Read more about contribution guidelines for Percona Distribution for MongoDB Operator in CONTRIBUTING.md.


Atera raises $77M at a $500M valuation to help SMBs manage their remote networks like enterprises do

When it comes to software to help IT manage workers’ devices wherever they happen to be, enterprises have long been spoiled for choice — a situation that has come in especially handy in the last 18 months, when many offices globally have gone remote and people have logged into their systems from home. But the same can’t really be said for small and medium businesses: As with so many other aspects of tech, they’ve long been overlooked when it comes to building modern IT management solutions tailored to their size and needs.

But there are signs of that changing. Today, a startup called Atera that has been building remote, low-cost, predictive IT management solutions specifically for organizations with fewer than 1,000 employees is announcing a funding round of $77 million — a sign of the demand in the market, and of Atera’s own success in addressing it. The investment values Atera at $500 million, the company confirmed.

The Tel Aviv-based startup has amassed some 7,000 customers to date, managing millions of endpoints — computers and other devices connected to them — across some 90 countries, providing real-time diagnostics across the data points generated by those devices to predict problems with hardware, software and network, or with security issues.

Atera’s aim is to use the funding both to continue building out that customer footprint, and to expand its product — specifically adding more functionality to the AI that it currently uses (and for which Atera has been granted patents) to run predictive analytics, one of the technologies that today are part and parcel of solutions targeting larger enterprises but typically are absent from much of the software out there aimed at SMBs.

“We are in essence democratizing capabilities that exist for enterprises but not for the other half of the economy, SMBs,” said Gil Pekelman, Atera’s CEO, in an interview.

The funding is being led by General Atlantic, and it is notable for being only the second time that Atera has ever raised money — the first was earlier this year, a $25 million round from K1 Investment Management, which is also in this latest round. Before this year, Atera, which was founded in 2016, turned profitable in 2017 and then intentionally went out of profit in 2019 as it used cash from its balance sheet to grow. Through all of that, it was bootstrapped. (And it still has cash from that initial round earlier this year.)

As Pekelman — who co-founded the company with Oshri Moyal (CTO) — describes it, Atera’s approach to remote monitoring and management, as the space is typically called, starts first with software clients installed at the endpoints that connect into a network, which give IT managers the ability to monitor a network, regardless of the actual physical range, as if it’s located in a single office. Around that architecture, Atera essentially monitors and collects “data points” covering activity from those devices — currently taking in some 40,000 data points per second.

To be clear, these data points are not related to what a person is working on, or any content at all, but how the devices behave, and the diagnostics that Atera amasses and focuses on cover three main areas: hardware performance, networking and software performance and security. Through this, Atera’s system can predict when something might be about to go wrong with a machine, or why a network connection might not be working as it should, or if there is some suspicious behavior that might need a security-oriented response. It supplements its work in the third area with integrations with third-party security software — Bitdefender and Acronis among them — and by issuing updated security patches for devices on the network.

The whole system is built to be run in a self-service way. You buy Atera’s products online, and there are no salespeople involved — in fact, most of its marketing today is done through Facebook and Google, Pekelman said, which is one area where it will continue to invest. This is one reason why it’s not really targeting larger enterprises (the others are the level of customization that would be needed, as well as more sophisticated service-level agreements). But it is also the reason why Atera is so cheap: it costs $89 per month per IT technician, regardless of the number of endpoints being managed.

“Our constituencies are up to 1,000 employees, which is a world that was in essence quite neglected up to now,” Pekelman said. “The market we are targeting and that we care about are these smaller guys and they just don’t have tools like these today.” Since the model is $89 per month per technician using the software, a company of 500 people with four technicians pays $356 per month to manage its networks, peanuts in the greater scheme of IT services, and one reason why Atera has caught on as more and more employees have gone remote and look like they will stay that way.

The fact that this model is thriving is also one of the reasons investors are interested.

“Atera has developed a compelling all-in-one platform that provides immense value for its customer base, and we are thrilled to be supporting the company in this important moment of its growth trajectory,” said Alex Crisses, MD, global head of New Investment Sourcing and co-head of Emerging Growth at General Atlantic, in a statement. “We are excited to work with a category-defining Israeli company, extending General Atlantic’s presence in the country’s cutting-edge technology sector and marking our fifth investment in the region. We look forward to partnering with Gil, Oshri and the Atera team to help the company realize its vision.”


Business messaging platform Gupshup raises $240 million from Tiger Global, Fidelity and others

Gupshup, a business messaging platform that began its journey in India 15 years ago, surprised many when it raised $100 million in April this year, roughly 10 years after its last financing round, and attained the coveted unicorn status. Now just three months later, the San Francisco-headquartered startup has secured even more capital from high-profile investors.

On Wednesday, Gupshup said it had raised an additional $240 million as part of the same Series F financing round. The new investment was led by Fidelity Management, Tiger Global, Think Investments, Malabar Investments, Harbor Spring Capital, certain accounts managed by Neuberger Berman Investment Advisers and White Oak.

Neeraj Arora, formerly a high-profile executive at WhatsApp who played an instrumental role in helping the messaging platform sell to Facebook, also wrote a significant check to Gupshup in the new tranche of investment, which continues to value the startup at $1.4 billion as in April.

In an interview with TechCrunch earlier this week, Beerud Sheth, co-founder and chief executive of Gupshup, said he extended the financing round after receiving too many inbound requests from investors. The new investors will provide the startup with crucial insight and expertise, he said. The round is now closed, he continued.

The startup operates a conversational messaging platform that is used by over 100,000 businesses and developers today to build messaging and conversational experiences for their users and customers. It is beginning to consider exploring the public markets by next year, said Sheth, though he cautioned that a final decision is yet to be made.

“Conversation is becoming a bigger part of doing business and it has partly been driven by the pandemic,” he said over a phone call. “Second, we have always been the leader in this space, but the product innovation we have focused on in the last two to three years has worked in our favor.”

The new investment, which includes some secondary buyback (some early investors and employees are selling their stakes), will be deployed into broadening the product offerings of Gupshup, he said. The startup is also eyeing some M&A opportunities and may close some deals this year, he added.

Some of the notable customers of Gupshup, which leads the business messaging market. Image Credits: Gupshup

Before Gupshup became so popular with businesses, it existed in a different avatar. For the first six years of its existence, Gupshup was best known for enabling users in India to send group messages to friends. (These cheap texts and other clever techniques enabled tens of millions of Indians to stay in touch with one another on phones a decade ago.)

That model eventually became unfeasible to continue, Sheth told TechCrunch in an earlier interview.

“For that service to work, Gupshup was subsidizing the messages. We were paying the cost to the mobile operators. The idea was that once we scale up, we will put advertisements in those messages. Long story short, we thought as the volume of messages increases, operators will lower their prices, but they didn’t. And also the regulator said we can’t put ads in the messages,” he said earlier this year.

That’s when Gupshup decided to pivot. “We were neither able to subsidize the messages, nor monetize our user base. But we had all of this advanced technology for high-performance messaging. So we switched from a consumer model to an enterprise model. So we started to serve banks, e-commerce firms and airlines that need to send high-level messages and can afford to pay for it,” said Sheth, who also co-founded freelance workplace Elance in 1998.

Over the years, Gupshup has expanded to newer messaging channels, including conversational bots, and it also helps businesses set up and run their WhatsApp channels to engage with customers.

Sheth said scores of major firms worldwide in banking, e-commerce, travel and hospitality and other sectors are among the clients of Gupshup. These firms are using Gupshup to send their customers transaction information and authentication codes, among other use cases. “These are not advertising or promotional messages. These are core service information,” he said.

“We have followed Gupshup’s progress for a long while and believe that they are the most evolved customer communications platform in India and increasingly in other emerging markets, with a leadership position in the most attractive and fastest-growing subsegments of the market,” said Sumeet Nagar, managing director of Malabar Investments, in a statement.

“We believe that Beerud and team have the unique opportunity to expand the addressable market on the back of new offerings and scale the business up significantly, which is a perfect recipe for massive value creation. I have known Beerud for over three decades, and all of us at Malabar are delighted to partner with Gupshup in the next stage of their journey.”


RapidSOS learned that the best product design is sometimes no product design

Sometimes, the best missions are the hardest to fund.

For the founders of RapidSOS, improving the quality of emergency response by adding useful data, like location, to 911 calls was an inspiring objective, and one that garnered widespread support. There was just one problem: How would they create a viable business?

The roughly 5,700 public safety answering points (PSAPs) in America weren’t great contenders. Cash-strapped and highly decentralized, 911 centers already spent their meager budgets on staffing and maintaining decades-old equipment, and they had few resources to improve their systems. Plus, appropriations bills in Congress to modernize centers have languished for more than a decade, a topic we’ll explore more in part four of this EC-1.

People obviously desire better emergency services — after all, they are the ones who will dial 911 and demand help someday. Yet, they never think about emergencies until they actually happen, as RapidSOS learned from the poor adoption of its Haven app, which we discussed in part one. People weren’t ready to pay a monthly subscription for these services in advance.

So, who would pay? Who was annoyed enough with America’s antiquated 911 system to be willing to shell out dollars to fix it?

Ultimately, the company iterated itself into essentially an API layer between the thousands of PSAPs on one side and developers of apps and consumer devices on the other. These developers wanted to include safety features in their products, but didn’t want to engineer hundreds of software integrations across thousands of disparate agencies. RapidSOS’ business model thus became offering free software to 911 call centers while charging tech companies to connect through its platform.

It was a tough road and a classic chicken-and-egg problem. Without call center integrations, tech companies wouldn’t use the API — it was essentially useless in that case. Call centers, for their part, didn’t want to use software that didn’t offer any immediate value, even if it was being given away for free.

This is the story of how RapidSOS just plowed ahead against those headwinds from 2017 onward, ultimately netting itself hundreds of millions in venture funding, thousands of call agency clients, dozens of revenue deals with the likes of Apple, Google and Uber, and partnerships with more software integrators than any startup has any right to secure. Smart product decisions, a carefully calibrated business model and tenacity would eventually lend the company the escape velocity to not just expand across America, but increasingly across the world as well.

In this second part of the EC-1, I’ll analyze RapidSOS’ current product offerings and business strategy, explore the company’s pivot from consumer app to embedded technology and take a look at its nascent but growing international expansion efforts. It offers key lessons on the importance of iterating, how to secure the right customer feedback and determining the best product strategy.

The 411 on a 911 API

It became clear from the earliest stages of RapidSOS’ journey that getting data into the 911 center would be its first key challenge. The entire 911 system — even today in most states — is built for voice and not data.

Karin Marquez, senior director of public safety at RapidSOS, who we met in the introduction, spent decades at a PSAP near Denver, working her way up from call taker to senior supervisor. “When I started, it was a one-man dispatch center. So, I was working alone, I was answering 911 calls, non-emergency calls, dispatching police, fire and EMS,” she said.

RapidSOS senior director of public safety Karin Marquez. Image Credits: RapidSOS

As a 911 call taker, her very first requirement for every call was figuring out where an emergency is taking place — even before characterizing what is happening. “Everything starts with location,” she said. “If I don’t know where you are, I can’t send you help. Everything else we can kind of start to build our house on. Every additional data [point] will help to give us a better understanding of what that emergency is, who may be involved, what kind of vehicle they’re involved in — but if I don’t have an address, I can’t send you help.”


No-code Bubble raises $100M to make technical co-founders obsolete

Among Silicon Valley circles, a fun parlor game is to ask to what extent world GDP levels are held back by a lack of computer science and technical training. How many startups could be built if hundreds of thousands or even millions more people could code and bring their entrepreneurial ideas to fruition? How many bureaucratic processes could be eliminated if developers were more prevalent in every business?

The answer, of course, is on the order of “a lot,” but the barriers to reaching this world remain formidable. Computer science is a challenging field, and despite proactive attempts by legislatures to add more coding skills into school curriculums, the reality is that the demand for software engineering vastly outstrips the supply available in the market.

Coding is not a bubble, and Bubble wants to empower the democratization of software development and the creation of new startups. Through its platform, Bubble enables anyone — coder or not — to begin building modern web applications using a click-and-drag interface that can connect data sources and other software together in one fluid interface.

It’s a bold bet — and it’s just received a bold bet as well. Bubble announced today that Ryan Hinkle of Insight Partners has led a $100 million Series A round into the company. Hinkle, a longtime managing director at the firm, specializes in growth buyout deals as well as growth SaaS companies.

If that round size seems huge, it’s because Bubble has had a long history as a bootstrapped company before reaching its current scale. Co-founders Emmanuel Straschnov and Josh Haas spent seven years bootstrapping and tinkering with the product before securing a $6.5 million seed round in June 2019 led by SignalFire. Interestingly, according to Straschnov, Insight was the first venture firm to reach out to Bubble all the way back in 2014. Seven years on, the two have now signed and closed a deal.

Since the seed round, Bubble has been expanding its functionality. As a no-code tool, any missing feature could potentially block an application from being built. “In our business, it’s a features game,” Straschnov said. “[Our users] are not technical, but they have high standards.” He noted that the company introduced a plugins system that allows the Bubble community to build their own additions to the platform.

Image Credits: Bubble. Its editor offers a clickable interface for designing dynamic web applications. 

As the platform matured, it happened to nail the timing of the COVID-19 pandemic last year, which saw people scrambling for new skills and improving their prospects amid a gloomy job market. Straschnov says that Bubble saw an immediate bump in usage in March and April 2020, and the company has tripled revenue over the past 12 months.

Bubble’s focus for the past eight years has been on helping people turn their ideas into startups. The company’s proposition is that a large number of even venture-backed companies could be built using Bubble without the expense of a large engineering team writing code from scratch.

Unlike other no-code tools, which focus on building internal corporate apps, Straschnov says that the company remains as focused today on these new companies as it has always been. “[We’re] not trying to move upmarket just yet — we are trying to do the same thing that AWS and Stripe did five years ago,” he said. Instead of trying to dominate the enterprise, Bubble wants to grow with its nascent customers as they expand in scale.

The company today charges a range of prices depending on the performance and scale requirements of an application. There’s a free tier, and then professional pricing starts at $25/month all the way to $475/month for its top-listed offering. Enterprise pricing is also available, as is special pricing for students.

On the latter point, Bubble is looking to invest heavily in education using its newly raised capital. While the platform is easy to use, the reality is that any design of a web application can be intimidating for a new user, particularly one who isn’t technical. So the company wants to create more videos and documentation while also heavily investing in partnerships with universities to get more students using the platform.

While the no-code space has seen prodigious investment, Straschnov said that “I don’t look at all the no-code players as competition … the true competition we have is code.” He noted that while the no-code label has been assumed by more and more startups, very few companies are focused on his company’s specific niche, and he believes he offers a compelling value proposition in that category.

The company has doubled headcount since the beginning of the pandemic, growing from around 21 employees to about 45 today. They are lightly concentrated in New York City, but the company operates remotely and has folks in 15 states as well as in France. Straschnov says that the company is looking to aggressively hire technical talent to build out the product using its new funds.
