Nov 18, 2020
--

CloudBolt announces $35M Series B debt/equity investment to help manage hybrid cloud

CloudBolt, a Bethesda, Maryland startup that helps companies manage hybrid cloud environments, announced a $35 million Series B investment today. It was split between $15 million in equity investment and $20 million in debt.

Insight Partners provided the equity side of the equation, while Hercules Capital and Bridge Bank supplied the venture debt. The company has now raised more than $61 million in equity and debt, according to Crunchbase data.

CEO Jeff Kukowski says that his company helps customers with cloud and DevOps management including cost control, compliance and security. “We help [our customers] take advantage of the fact that most organizations are already hybrid cloud, multi cloud and/or multi tool. So you have all of this innovation happening in the world, and we make it easier for them to take advantage of it,” he said.

As he sees it, the move to cloud and DevOps, which was supposed to simplify everything, has actually created new complexity, and the tools his company sells are designed to help companies reduce some of that added complexity. They give companies a way to automate, secure and optimize their workloads, regardless of the tools or approach to infrastructure they are using.

The company closed the funding round at the end of last quarter and put it to work with a couple of acquisitions — Kumolus and SovLabs — to help accelerate and fill in the roadmap. Kumolus, which was founded in 2011 and raised $1.7 million, according to Crunchbase, helps CloudBolt extend its vision from managing on-premises infrastructure to the public cloud.

SovLabs was an early-stage startup working on a very specific problem: creating a framework for extending VMware automation.

CloudBolt currently has 170 employees. While Kukowski didn’t want to get specific about the number of additional employees he might be adding to that in the next 12 months, he says that as he does, he thinks about diversity in three ways.

“One is just pure education. So we as a company regularly meet and educate on issues around inclusion, social justice and diversity. We also recruit with those ideas in mind. And then we also have a standing committee within the company that continues to look at issues not only for discussion, but quite frankly for investment in terms of time and fundraising,” he said.

Kukowski says that going remote because of COVID has allowed the company to hire from anywhere, but he still looks forward to a time when he can meet face-to-face with his employees and customers, and sees that as always being part of his company’s culture.

CloudBolt was founded in 2012 and has around 200 customers. Kukowski says that the company is growing between 40% and 50% year over year, although he wouldn’t share specific revenue numbers.

Nov 17, 2020
--

Wondering How to Run Percona XtraDB Cluster on Kubernetes? Try Our Operator!

Run Percona XtraDB Cluster on Kubernetes

Kubernetes has been a big trend for a while now, particularly well-suited for microservices. Running your main databases on Kubernetes is probably NOT what you are looking for. However, there’s a niche market for it. My colleague Stephen Thorn did a great job explaining this in The Criticality of a Kubernetes Operator for Databases. If you are considering running your database on Kubernetes, have a look at it first. And, if after reading it you start wondering how the Operator works, Stephen also wrote an Introduction to Percona Kubernetes Operator for Percona XtraDB Cluster (PXC), which presents the Kubernetes architecture and how the Percona Operator simplifies the deployment of a full HA PXC cluster in this environment, proxies included!

Now, if you are curious about how it actually works in practice but are afraid the entry barrier is too high, I can help you with that. In fact, this technology is widespread now, with most cloud providers offering a dedicated Kubernetes engine. In this blog post, I’ll walk you through deploying a Percona XtraDB Cluster (PXC) using the Percona Operator for Kubernetes on Google Cloud Platform (GCP).

Creating a Virtual Environment to Run Kubernetes on GCP

Google Cloud Platform includes Google Kubernetes Engine (GKE) among its products. We can take advantage of their trial offer to create our test cluster there: https://cloud.google.com.

After you sign up, you can access all the bells and whistles in their web interface. Note that the Kubernetes Engine API is not enabled by default; you need to enable it by visiting the Kubernetes Engine section in the left menu, under COMPUTE.

For the purpose of deploying our environment, we should install their SDK and work from the command line: see https://cloud.google.com/sdk/docs/install and follow the respective installation instructions for your OS (you will probably want to install the SDK on your personal computer).

With the SDK installed, we can initialize our environment, which requires authenticating to the Google Cloud account:

gcloud init

You will be prompted to choose a cloud project to use: there’s one created by default when the account is activated, named “My First Project”. It receives a unique ID, which you can verify in the Google Cloud interface, but it is usually displayed as the first option presented in the prompt.

Alternatively, you can use gcloud config set to configure your default project and zone, among other settings.
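For example, the defaults used later in this post could be set up front (the project ID shown is a placeholder; use your own):

```shell
# Point gcloud at your project and preferred compute zone
gcloud config set project my-first-project-123456
gcloud config set compute/zone us-central1-b

# Review the active configuration
gcloud config list
```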

For this exercise, we will be creating a 3-node cluster named k8-test-cluster with n1-standard-4 instances in the us-central1-b zone:

gcloud container clusters create --machine-type n1-standard-4 --num-nodes 3 --zone us-central1-b --cluster-version latest k8-test-cluster

If the command above was successful, you should see your newly created cluster in the list returned by:

gcloud container clusters list

Getting Ready to Work with Kubernetes

Besides the Google Cloud SDK that is used to manage the cloud instances, we also need the Kubernetes command-line tool, kubectl, to manage the Kubernetes cluster. One way to install it is through gcloud itself:

gcloud components install kubectl

This method won’t work for everyone though, as the Cloud SDK component manager is disabled for certain kinds of installation, such as through apt or yum in Linux. I find myself in this group, using Ubuntu, but the failed attempt to install kubectl through gcloud suggested another approach that worked for me:

sudo apt install kubectl

Deploying a PXC Cluster Using the Percona Kubernetes Operator

The Percona operators are available on GitHub. The most straightforward way to obtain a copy is by cloning the operator’s repository. The latest version of the PXC operator is 1.6.0, and we can clone it with the following command:

git clone -b v1.6.0 https://github.com/percona/percona-xtradb-cluster-operator

Move inside the created directory:

cd percona-xtradb-cluster-operator

and run the following sequence of commands:

  1. Define the Custom Resource Definitions for PXC:

    kubectl apply -f deploy/crd.yaml
  2. Create a namespace on Kubernetes and associate it to your current context:

    kubectl create namespace pxc
    kubectl config set-context $(kubectl config current-context) --namespace=pxc
  3. Define Role-Based Access Control (RBAC) for PXC:

    kubectl apply -f deploy/rbac.yaml
  4. Start the operator within Kubernetes:

    kubectl apply -f deploy/operator.yaml
  5. Configure PXC users and their credentials:

    kubectl apply -f deploy/secrets.yaml
  6. Finally, deploy the cluster:

    kubectl apply -f deploy/cr.yaml

You can find a more detailed explanation of each of these steps, as well as how to customize your installation, in the Percona Kubernetes Operator for Percona XtraDB Cluster online documentation, which includes a quickstart guide for GKE.

Now, it is a matter of waiting for the deployment to complete, which you can monitor with:

kubectl get pods

A successful deployment will show output for the above command similar to:

NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-haproxy-0                                 2/2     Running   0          4m21s
cluster1-haproxy-1                                 2/2     Running   0          2m47s
cluster1-haproxy-2                                 2/2     Running   0          2m21s
cluster1-pxc-0                                     1/1     Running   0          4m22s
cluster1-pxc-1                                     1/1     Running   0          2m52s
cluster1-pxc-2                                     1/1     Running   0          111s
percona-xtradb-cluster-operator-79d786dcfb-9lthw   1/1     Running   0          4m37s

As you can see above, the operator deploys seven pods with the default settings, distributed across the three GKE n1-standard-4 machines we created earlier:

kubectl get nodes
NAME                                             STATUS   ROLES    AGE    VERSION
gke-k8-test-cluster-default-pool-02c370e1-gvfg   Ready    <none>   152m   v1.17.13-gke.1400
gke-k8-test-cluster-default-pool-02c370e1-lvh7   Ready    <none>   152m   v1.17.13-gke.1400
gke-k8-test-cluster-default-pool-02c370e1-qn3p   Ready    <none>   152m   v1.17.13-gke.1400

Accessing the Cluster

One way to access the cluster is by creating an interactive shell in the Kubernetes cluster:

kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- bash -il

From there, we can access MySQL through the cluster’s HAProxy writer node:

mysql -h cluster1-haproxy -uroot -proot_password

Note that the hostname used above is an alias; the connection is routed to one of the HAProxy servers available in the cluster. It is also possible to connect to a specific node by modifying the host option -h with the node’s name:

mysql -h cluster1-pxc-0 -uroot -proot_password
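As a quick sanity check after connecting (using the same credentials as above), you can confirm that all three PXC nodes have joined the Galera cluster:

```shell
# Run from the percona-client shell; a healthy three-node deployment
# reports wsrep_cluster_size = 3 for the PXC pods created above
mysql -h cluster1-haproxy -uroot -proot_password \
      -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
```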

This is where all the fun and experimentation starts: you can test and break things without worrying too much as you can easily and quickly start again from scratch.

Destroying the Cluster and Deleting the Test Environment

Once you are done playing with your Kubernetes cluster, you can destroy it with:

gcloud container clusters delete --zone=us-central1-b k8-test-cluster

It’s important to note that the command above will not discard the persistent disk volumes that were created and used by the nodes, which you can check with the command:

gcloud compute disks list

A final purging command is required to remove those as well:

gcloud compute disks delete <disk_name_1> <disk_name_2> <disk_name_3> --zone=us-central1-b

If you are feeling overzealous, you can double-check that all has been deleted:

gcloud container clusters list

gcloud compute disks list

Learn More About Percona Kubernetes Operator for Percona XtraDB Cluster

Interested In Hands-On Learning?

Be sure to get in touch with Percona’s Training Department to schedule your PXC Kubernetes training engagement. Our expert trainers will first guide your team through the basics, cover all the configuration noted above (and then some), and then dive deeper into how the operator functions, along with high-availability exercises, disaster recovery scenarios, backups, restores, and much more.

Nov 17, 2020
--

Marketing automation platform Klaviyo scores $200M Series C on $4.15B valuation

Boston-based marketing automation firm Klaviyo wants to change the way marketers interact with data, giving them direct access to their data and their customers. It believes that makes it easier to customize the messages and produce better results. Investors apparently agree, awarding the company a $200 million Series C on a hefty $4.15 billion valuation today.

The round was led by Accel with help from Summit Partners. It comes on the heels of last year’s $150 million Series B, and brings the total raised to $385.5 million, according to the company. Accel’s Ping Li will also be joining the company board under the terms of today’s announcement.

Marketing automation and communication take on a special significance as we find ourselves in the midst of this pandemic and companies need to find ways to communicate meaningfully with customers who can’t come into brick-and-mortar establishments. Company CEO and co-founder Andrew Bialecki says that his company’s unique use of data helps in this regard.

“I think our success is because we are a hybrid customer data and marketing platform. We think about what it takes to create these owned experiences. They’re very contextual and you need all of that customer data, not some of it, all of it, and you need that to be tightly coupled with how you’re building customer experiences,” Bialecki explained.

Andrew Bialecki, CEO and co-founder at Klaviyo. Image Credits: Klaviyo

He believes that by providing a platform of this scope that combines the data, the ability to customize messages and the use of machine learning to keep improving that, it will help them compete with the largest platforms. In fact his goal is to help companies understand that they don’t have to give up their customer data to Amazon, Google and Facebook.

“The flip side of that is growing through Amazon, where you give up all your customer data, or Facebook or Google, where you kind of are delegated to wherever their algorithms decide where you get to show up,” he said. With Klaviyo, the company retains its own data, and Ping Li, who is leading the investment at Accel, says that is where the e-commerce market is going.

“So the question is, is there a tool that allows you to do that as easily as going on Facebook and Google, and I think that’s the vision and the promise that Klaviyo is delivering on,” Li said. He believes that this will allow Klaviyo’s customers to build that kind of fidelity with their own customers by going directly to them, instead of through a third-party intermediary.

The company has seen some significant success with 50,000 customers in 125 countries along with that lofty valuation. The customer number has doubled year over year, even during the economic malaise brought on by the pandemic.

Today, the company has 500 employees with plans to double that in the next year. As he grows his company, Bialecki believes diversity is not just the right thing to do, it’s also smart business. “I think the competitive advantages that tech companies are going to have going forward, especially for the tech companies that are not the leaders today, but [could be] leaders in the coming decades, it’s because they have the most diverse teams and inclusive culture and those are both big focuses for us,” he said.

As they move forward flush with this cash, the company wants to continue to build out the platform, giving customers access to a set of tools that allow them to know their own customers on an increasingly granular level, while delivering more meaningful interactions. “It’s all about accelerating product development and getting into new markets,” Bialecki said. They certainly have plenty of runway to do that now.

Nov 17, 2020
--

Tame Black Friday Gremlins — Optimize Your Database for High Traffic Events

Optimize Your Database for High Traffic Events

It’s that time of year! The Halloween decorations have come down, the leaves have started to change, and the Black Friday/Cyber Monday buying season is upon us!

For consumers, it can be a magical time of year, but for those of us who have worked in e-commerce or retail, it usually brings up…different emotions. It’s much like the Gremlins — cute and cuddly unless you break the RULES:

  1. Don’t expose them to sunlight,
  2. Don’t let them come in contact with water,
  3. NEVER feed them after midnight!

I love this analogy and how it parallels the difficulties that we experience in the database industry — especially this time of year. When things go well, it’s a great feeling. When things go wrong, they can spiral out of control in destructive and lasting ways.

Let’s put these fun examples to work and optimize your database!

Don’t Expose Your Database to “Sunlight”

One sure-fire way to make sure that your persistent data storage cannot do its job (and to effectively kill it) is to let it run out of storage. Before entering the high-traffic holiday selling season, make sure that you have ample storage space to make it all the way to the other side. This may sound basic, but so is not putting a cute, fuzzy pet in the sunlight — it’s much harder than you think!

Here are some great ways to ensure the storage needs for your database are met (most obvious to least obvious):

  1. If you are on a DBaaS such as Amazon RDS, leverage something like Amazon RDS Storage Auto Scaling
  2. In a cloud or elastic infrastructure:
    1. make sure network-attached storage is extensible on the fly, or
    2. properly tune the database mount point to be leveraging logical volume management or software raid to add additional volumes (capacity) on the fly.
  3. In an on-premises or pre-purchased infrastructure, make sure you are overprovisioned — even by end-of-season estimates — by ~25%.
  4. Put your logs somewhere other than the main drive. The database may not be happy about running out of log space, but logs can be deleted easily — data files cannot!
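As a sketch of that last point (the file paths here are placeholders, not Percona recommendations), MySQL lets you point its logs at a separate mount in my.cnf:

```ini
[mysqld]
# Data files stay on the primary volume
datadir   = /var/lib/mysql
# Error log and binary logs live on a separate, easily-pruned volume
log_error = /mnt/dblogs/mysql-error.log
log_bin   = /mnt/dblogs/mysql-bin
```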

Don’t Let Your Database Come in “Contact With Water”

We don’t want to feed or allow simple issues to multiply. Actions we take to get out of a bind in the near term can cause problems that require more attention in the future — just like when you put water on a Gremlin, it will multiply!

What are some of these scenarios?

  1. Not having a documented plan of action can cause confusion and chaos if something doesn’t go quite right. Having a plan documented and distributed will keep things from getting overly complicated when issues occur.
  2. Throwing hardware at a problem. Unless you know how it will actually fix an issue, it could be like throwing gasoline on a fire, throwing your stack into disarray with blocked and unblocked queries. Adding hardware also requires database tuning to be effective.
  3. Understanding (or misunderstanding) how users behave when or if the database slows down:
    1. Do users click to retry five times in five seconds causing even more load?
    2. Is there a way to divert attention to retry later?
    3. Can your application(s) ignore retries within a certain time frame?
  4. Not having just a few sources of truth, with as much availability as possible:
    1. Have at least one failover candidate
    2. Have off-server transaction storage (can you rebuild in a disaster?)
    3. If you have the two above, then delayed replicas are your friend!
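On that last point, a delayed replica is a one-liner in MySQL; this sketch uses the 8.0.23+ syntax, and the one-hour delay is an arbitrary example:

```sql
-- On the replica: apply changes one hour behind the source,
-- leaving a window to stop replication before a bad change replays
STOP REPLICA SQL_THREAD;
CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 3600;
START REPLICA SQL_THREAD;
```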

Never “Feed” Your Database After “Midnight”

What’s the one thing that can ensure that all heck breaks loose on Black Friday? CHANGE is the food here, and typically, BLACK FRIDAY is the midnight.

Have you ever felt like there is just one thing that you missed and want to get off your backlog? It could be a schema change, a data type change, or an application change from an adjacent team. The ‘no feeding’ rule is parallel to CODE FREEZE in production.

Most companies see this freeze start at the beginning of November when the most stable prod is the one that is already out there, not the one that you have to make stable after a new release:

  1. Change Management is your friend; change that needs to happen should still have a way to happen.
  2. Observability is also your friend; know in absolute terms what is happening to your database and stack so you don’t throw a wrench in it (Percona Monitoring and Management can help).
  3. Educate business stakeholders on the release or change process BEFORE the event, not DURING the event.
  4. Don’t be afraid to “turn it off” when absolute chaos is happening. Small downtime is better than an unusable site over a longer period of time.

Conclusion

Black Friday, Cyber Monday, and the Holidays can be the most wonderful time of the year — and now that we’ve covered the rules, some of the “Gremlins” can stay small and fuzzy and your business won’t get wrecked by pesky database issues or outages.

How Percona Can Help

Percona experts optimize your database performance with open source database support, highly-rated training, managed services, and professional services.

Contact Us to Tame Your Database Gremlins!

Nov 17, 2020
--

Dropbox shifts business product focus to remote work with Spaces update

In a September interview at TechCrunch Disrupt, Dropbox co-founder and CEO Drew Houston talked about how the pandemic had forced the company to rethink what work means, and how his company is shifting with the new requirements of a work-from-home world. Today, the company announced broad changes to Dropbox Spaces, the product introduced last year, to make it a collaboration and project management tool designed with these new requirements in mind.

Dropbox president Timothy Young says that the company has always been about making it easy to access files wherever you happen to be and whatever device you happen to be on, whether that was in a consumer or business context. As the company has built out its business products over the last several years, that involved sharing content internally or externally. Today’s announcement is about helping teams plan and execute around the content you create with a strong project focus.

“Now what we’re basically trying to do is really help distributed teams stay organized, collaborate together and keep moving along, but also do so in a really secure way and support IT, administrators and companies with some features around that as well, while staying true to Dropbox principles,” Young said.

This involves updating Spaces to be a full-fledged project management tool designed with a distributed workforce in mind. Spaces connects to other tools like your calendar, people directory, project management software — and of course files. You can create a project, add people and files, then set up a timeline and assign and track tasks. In addition, you can access meetings directly from Spaces and communicate with team members, who can be inside or outside the company.

Houston suggested a product like this could be coming in his September interview when he said:

“Back in March we started thinking about this, and how [the rapid shift to distributed work] just kind of happened. It wasn’t really designed. What if you did design it? How would you design this experience to be really great? And so starting in March we reoriented our whole product road map around distributed work,” he said.

Along these same lines, Young says the company itself plans to continue to be a remote first company even after the pandemic ends, and will continue to build tools to make it easier to collaborate and share information with that personal experience in mind.

Today’s announcement is a step in that direction. Dropbox Spaces has been in private beta and should be available at the beginning of next year.

Nov 17, 2020
--

Cato Networks snags $130M Series E on $1B valuation as cloud wide area networking thrives

Cato Networks has spent the last five years building a cloud-based wide area network that lets individuals connect to network resources regardless of where they are. When the pandemic hit and many businesses shifted to work from home, it was the perfect moment for technology like this. Today, the company was rewarded with a $130 million Series E investment on a $1 billion valuation.

Lightspeed Venture Partners led the round, with participation from new investor Coatue and existing investors Greylock, Aspect Ventures/Acrew Capital, Singtel Innov8 and Shlomo Kramer (who is the co-founder and CEO of the company). The company reports it has now raised $332 million since inception.

Kramer is a serial entrepreneur. He co-founded Check Point Software, which went public in 1996, and Imperva, which went public in 2011 and was later acquired by private equity firm Thoma Bravo in 2018. He helped launch Cato in 2015. “In 2015, we identified that the wide area networks (WANs), which is a tens of billions of dollars market, was still built on the same technology stack […] that connects physical locations, and appliances that protect physical locations and was primarily sold by the telcos and MSPs for many years,” Kramer explained.

The idea with Cato was to take that technology and redesign it for a mobile and cloud world, not one that was built for the previous generation of software that lived in private data centers and was mostly accessed from an office. Today they have a cloud-based network of 60 Points of Presence (PoPs) around the world, giving customers access to networking resources and network security no matter where they happen to be.

The bet they made was a good one because the world has changed, and that became even more pronounced this year when COVID hit and forced many people to work from home. Now suddenly having the ability to sign in from anywhere became more important than ever, and they have been doing well, with 2x growth in ARR this year (although he wouldn’t share specific revenue numbers).

As a company getting Series E funding, Kramer doesn’t shy away from the idea of eventually going public, especially since he’s done it twice before, but neither is he ready to commit any time table. For now, he says the company is growing rapidly, with almost 700 customers — and that’s why it decided to take such a large capital influx right now.

Cato currently has 270 employees, with plans to grow to 400 by the end of next year. He says that Cato is a global company with headquarters in Israel, where diversity involves religion, but he is trying to build a diverse and inclusive culture regardless of the location.

“My feeling is that inclusion needs to happen in the earlier stages of the funnel. I’m personally involved in these efforts, at the educational sector level, and when students are ready to be recruited by startups, we are already competitive, and if you look at our employee base it’s very diverse,” Kramer said.

With the new funds, he plans to keep building the company and the product. “There’s a huge opportunity and we want to move as fast as possible. We are also going to make very big investments on the engineering side to take the solution and go to the next level,” he said.

Nov 16, 2020
--

PingCAP, the open-source developer behind TiDB, closes $270 million Series D

PingCAP, the open-source software developer best known for NewSQL database TiDB, has raised a $270 million Series D. TiDB handles hybrid transactional and analytical processing (HTAP), and is aimed at high-growth companies, including payment and e-commerce services, that need to handle increasingly large amounts of data.

The round’s lead investors were GGV Capital, Access Technology Ventures, Anatole Investment, Jeneration Capital and 5Y Capital (formerly known as Morningside Venture Capital). It also included participation from Coatue, Bertelsmann Asia Investment Fund, FutureX Capital, Kunlun Capital, Trustbridge Partners, and returning investors Matrix Partners China and Yunqi Partners.

The funding brings PingCAP’s total raised so far to $341.6 million. Its last round, a Series C of $50 million, was announced back in September 2018.

PingCAP says TiDB has been adopted by about 1,500 companies across the world. Some examples include Square; Japanese mobile payments company PayPay; e-commerce app Shopee; video-sharing platform Dailymotion; and ticketing platform BookMyShow. TiDB handles online transactional processing (OLTP) and online analytical processing (OLAP) in the same database, which PingCAP says results in faster real-time analytics than other distributed databases.

In June, PingCAP launched TiDB Cloud, which it describes as fully-managed “TiDB as a Service,” on Amazon Web Services and Google Cloud. The company plans to add more platforms, and part of the funding will be used to increase TiDB Cloud’s global user base.

Nov 16, 2020
--

Gretel announces $12M Series A to make it easier to anonymize data

As companies work with data, one of the big obstacles they face is making sure they are not exposing personally identifiable information (PII) or other sensitive data. It usually requires a painstaking manual effort to strip out that data. Gretel, an early stage startup, wants to change that by making it faster and easier to anonymize data sets. Today the company announced a $12 million Series A led by Greylock. The company has now raised $15.5 million.

Gretel co-founder and CEO Alex Watson says that his company was founded to make it simpler to anonymize data and unlock data sets that were previously out of reach because of privacy concerns.

“As a developer, you want to test an idea or build a new feature, and it can take weeks to get access to the data you need. Then essentially it boils down to getting approvals to get started, then snapshotting a database, and manually removing what looks like personal data and hoping that you got everything,” he said.

Watson, who previously worked as a GM at AWS, believed there needed to be a faster and more reliable way to anonymize data, and that’s why he started Gretel. The first product is an open source synthetic machine learning library for developers that strips out personally identifiable information.

“Developers use our open source library, which trains machine learning models on their sensitive data, then as that training is happening we are enforcing something called differential privacy, which basically ensures that the model doesn’t memorize details about secrets for individual people inside of the data,” he said. The result is a new artificial data set that is anonymized and safe to share across a business.
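The core mechanic Watson describes can be illustrated with a toy example. This is not Gretel’s API or its actual training pipeline, just a minimal sketch of the underlying differential-privacy idea: answers to queries get noise calibrated to a privacy budget (epsilon), so no single individual’s presence in the data can be inferred.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon=1.0):
    """Count matching records with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is enough to mask any individual's contribution.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; the model training Watson describes applies the same budgeted-noise principle during training rather than to a single aggregate.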

The company was founded last year, and they have actually used this year to develop the open source product and build an open source community around it. “So our approach and our go-to-market here is we’ve open sourced our underlying libraries, and we will also build a SaaS service that makes it really easy to generate synthetic data and anonymized data at scale,” he said.

As the founders build the company, they are looking at how to build a diverse and inclusive organization, something that they discuss at their regular founders’ meetings, especially as they look to take these investment dollars and begin to hire additional senior people.

“We make a conscious effort to have diverse candidates apply, and to really make sure we reach out to them and have a conversation, and that’s paid off, or is in the process of paying off I would say, with the candidates in our pipeline right now. So we’re excited. It’s tremendously important that we avoid group think that happens so often,” he said.

The company doesn’t have paying customers yet, but the plan is to build off the relationships it has with design partners and begin taking in revenue next year. Sridhar Ramaswamy, the partner at Greylock who is leading the investment, says that his firm is placing a bet on a pre-revenue company because he sees great potential for a service like this.

“We think Gretel will democratize safe and controlled access to data for the whole world the way GitHub democratized source code access and control,” Ramaswamy said.

Nov 16, 2020
--

Undock raises $1.6M to help solve your group scheduling nightmares

Over the past decade, many startups have tried (and many have failed) to rethink the way we schedule our meetings and calls. But we seem to be in a calendrical renaissance, with incumbents like Google and Outlook getting smarter and smarter and newcomers like Calendly growing significantly.

Undock, an Entrepreneurs Roundtable Accelerator-backed startup, is looking to enter the space.

The startup recently closed a $1.6 million seed round with investors that include Lightship Capital, Bessemer Venture Partners, Lerer Hippeau, Alumni Ventures Group, Active Capital, Arlan Hamilton of Backstage Capital, Sarah Imbach of PayPal/LinkedIn and several other angel investors.

For now, Undock is a Chrome extension that allows users to seamlessly see mutual availability across a group, whether or not all users in the group have Undock, all from within their email. Founder and CEO Nash Ahmed wouldn’t go into too much detail about the technology that allows Undock to accomplish this. But, on the surface, users who don’t yet have Undock can temporarily link their calendar to the individual meeting request to automatically find times that work for everyone in the group. Otherwise, they can see the suggested times of the rest of the group and mark the ones that work for them.
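Ahmed hasn't disclosed how Undock's matching works, but conceptually, finding times that work for everyone is an interval-intersection problem. A minimal Python sketch (free time represented as hour ranges on a 24-hour clock, purely illustrative):

```python
def mutual_slots(calendars):
    """Intersect each participant's free slots.

    Each calendar is a list of (start, end) tuples of free time.
    Returns the slots that are free for every participant.
    """
    def intersect(a, b):
        result = []
        for s1, e1 in a:
            for s2, e2 in b:
                start, end = max(s1, s2), min(e1, e2)
                if start < end:  # keep only non-empty overlaps
                    result.append((start, end))
        return result

    common = calendars[0]
    for cal in calendars[1:]:
        common = intersect(common, cal)
    return common

# Three participants' free windows
alice = [(9, 12), (14, 17)]
bob = [(10, 15)]
carol = [(11, 18)]
print(mutual_slots([alice, bob, carol]))  # [(11, 12), (14, 15)]
```

A production scheduler would layer time zones, working hours and meeting-length constraints on top of this core intersection step.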

This is just the beginning of the journey for Undock. The company plans to launch a full-featured calendar in Q1 2021 that will include collaborative editing within calendar events and embedded video conferencing.

According to Ahmed, the most important differentiating features of Undock are that it focuses on mutual availability (not just singular availability) and that it does so right within the email client.

Image Credits: Undock

Scheduling will always be free within Undock, but the full calendar (when it’s released publicly) will have a variety of tiers starting at $10/month per user. Undock will also borrow from the Slack model and charge more for longer data retention.

“The greatest challenge is definitely customer education,” said Ahmed, explaining that early on some users were confused by the product’s simplicity. “We messaged it by saying it’s like autocomplete. And early users would get into their email and then ask what to do next, or if they had to go back to Undock or to the Chrome extension. And we’d have to say ‘no, just keep typing.’ ”

The Undock team, which is Black and female-founded, numbers 18 people; 28% of the team is female, 22% are Black and 11% are LGBTQ, and the diversity of the leadership team is even higher.

Nov
16
2020
--

Computer vision startup Chooch.ai scores $20M Series A

Chooch.ai, a startup that hopes to bring computer vision to a broader range of companies, helping them identify and tag visual elements at high speed, announced a $20 million Series A today.

Vickers Venture Partners led the round with participation from 212, Streamlined Ventures, Alumni Ventures Group, Waterman Ventures and several other unnamed investors. Today’s investment brings the total raised to $25.8 million, according to the company.

“Basically we set out to copy human visual intelligence in machines. That’s really what this whole journey is about,” CEO and co-founder Emrah Gultekin explained. As the company describes it, “Chooch AI can rapidly ingest and process visual data from any spectrum, generating AI models in hours that can detect objects, actions, processes, coordinates, states, and more.”

Chooch is trying to differentiate itself from other AI startups by taking a broader approach that could work in any setting, rather than concentrating on specific vertical applications. Using the pandemic as an example, Gultekin says you could use his company’s software to identify everyone who is not wearing a mask in the building or everyone who is not wearing a hard hat at a construction site.
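Chooch hasn't published its API, but the mask example above is typically downstream logic over a generic detector's output: flag any person detection with no mask detection inside it. As an illustration only (the detection format, labels and threshold are assumptions, not Chooch's interface):

```python
def contained_frac(outer, inner):
    """Fraction of `inner` box area lying inside `outer` box.

    Boxes are (x1, y1, x2, y2) in pixel coordinates.
    """
    x1, y1 = max(outer[0], inner[0]), max(outer[1], inner[1])
    x2, y2 = min(outer[2], inner[2]), min(outer[3], inner[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return inter / area if area else 0.0

def unmasked_people(detections, threshold=0.5):
    """Return person boxes that contain no mask detection."""
    people = [d["box"] for d in detections if d["label"] == "person"]
    masks = [d["box"] for d in detections if d["label"] == "mask"]
    return [p for p in people
            if not any(contained_frac(p, m) >= threshold for m in masks)]

frame = [
    {"label": "person", "box": (0, 0, 100, 200)},
    {"label": "mask", "box": (30, 20, 70, 60)},      # inside first person
    {"label": "person", "box": (150, 0, 250, 200)},  # no mask nearby
]
print(unmasked_people(frame))  # [(150, 0, 250, 200)]
```

The hard-hat example would be the same logic with different labels, which is the point of a label-agnostic approach: the model changes, the downstream rule does not.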

With 22 employees spread across the U.S., India and Turkey, Chooch is building a diverse company just by virtue of its geography, but as it doubles its workforce in the coming year, it wants to continue to build on that.

“We’re immigrants. We’ve been through a lot of different things, and we recognize some of the issues and are very sensitive to them. One of our senior members is a person of color and we are very cognizant of the fact that we need to develop that part of our company,” he said. At a recent company meeting, he said that they were discussing how to build diversity into the policies and values of the company as they move forward.

The company currently has 18 enterprise clients and hopes to use the money to hire engineers and data scientists, build out a worldwide sales team, continue developing the product and expand its go-to-market effort.

Gultekin says that the company’s unusual name comes from a mix of the words choose and search. He says that it is also an old Italian insult. “It means dummy or idiot, which is what artificial intelligence is today. It’s a poor reflection of humanity or human intelligence in humans,” he said. His startup aims to change that.
