Sep 07, 2021

Seqera Labs grabs $5.5M to help sequence COVID-19 variants and other complex data problems

Bringing order and understanding to unstructured information scattered across disparate silos has been one of the more significant breakthroughs of the big data era. Today, a European startup that has built a platform to tackle this challenge specifically in the area of life sciences, one that has notably been used by labs to sequence and so far identify two major COVID-19 variants, is announcing funding to continue building out its tools for a wider set of use cases, and to expand into North America.

Seqera Labs, a Barcelona-based data orchestration and workflow platform tailored to help scientists and engineers order and gain insights from cloud-based genomic data troves, as well as to tackle other life science applications that involve harnessing complex data from multiple locations, has raised $5.5 million in seed funding.

Talis Capital and Speedinvest co-led this round, with participation also from previous backer BoxOne Ventures and a grant from the Chan Zuckerberg Initiative, Mark Zuckerberg and Dr. Priscilla Chan’s effort to back open source software projects for science applications.

Seqera, a portmanteau of “sequence” and “era” (the age of sequencing data, basically), had previously raised less than $1 million. Quietly, it is already generating revenue, with five of the world’s biggest pharmaceutical companies in its customer base, alongside biotech and other life sciences customers.

Seqera was spun out of the Centre for Genomic Regulation (CRG), a biomedical research center based in Barcelona. It was built as the commercial application of Nextflow, the open-source workflow and data orchestration software originally created at the CRG by Seqera’s founders, Evan Floden and Paolo Di Tommaso.

Floden, Seqera’s CEO, told TechCrunch that he and Di Tommaso were motivated to create Seqera in 2018 after seeing Nextflow gain a lot of traction in the life science community, and subsequently getting a lot of repeat requests for further customization and features. Both Nextflow and Seqera have seen a lot of usage: the Nextflow runtime has been downloaded more than 2 million times, the company said, while Seqera’s commercial cloud offering has now processed more than 5 billion tasks.

The COVID-19 pandemic is a classic example of the acute challenge that Seqera (and by association Nextflow) aims to address in the scientific community. With COVID-19 outbreaks happening globally, each time a test for COVID-19 is processed in a lab, live genetic samples of the virus are collected. Taken together, these millions of tests represent a goldmine of information about the coronavirus and how it is mutating, and when and where it is doing so. For a new virus about which so little is understood and that continues to spread, that’s invaluable data.

So the problem is not whether the data exists for better insights (it does); it is that it’s nearly impossible to use legacy tools to view that data as a holistic body. It lives in too many places, there is too much of it, and it is growing and changing every day. That means the traditional approach of porting data to a centralized location to run analytics on it just wouldn’t be efficient, and would cost a fortune to execute.

That is where Seqera comes in. The company’s technology treats each source of data across different clouds as a pipeline that can be merged and analyzed as a single body, without that data ever leaving the boundaries of the infrastructure where it already resides. With the platform customised to focus on genomic troves, scientists can then query that information for more insights. Seqera was central to the discovery of both the Alpha and Delta variants of the virus, and the work is ongoing as COVID-19 continues to hammer the globe.
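
That pattern, running the analysis where the data already sits and merging only the results, can be sketched in a few lines. This is a toy illustration of the idea, not Seqera’s or Nextflow’s actual interface; the site names and sample values are invented:

```python
from dataclasses import dataclass

@dataclass
class SiteSummary:
    site: str
    sample_count: int
    variant_hits: int  # samples matching a hypothetical mutation signature

def summarize_site(site, samples):
    """Runs inside each site's own infrastructure; raw samples never leave."""
    return SiteSummary(site, len(samples), sum(samples))

# Hypothetical per-site raw data (True = signature present) stays local...
local_results = [
    summarize_site("lab-barcelona", [True, False, True, True]),
    summarize_site("lab-london", [False, False, True]),
]

# ...and only the small summaries travel and get merged for a global view.
total = sum(s.sample_count for s in local_results)
hits = sum(s.variant_hits for s in local_results)
print(f"{hits}/{total} samples carry the signature")  # prints 4/7
```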

Seqera is being used in other kinds of medical applications, such as in the realm of so-called “precision medicine.” This is emerging as a very big opportunity in complex fields like oncology: cancer mutates and behaves differently depending on many factors, including genetic differences of the patients themselves, which means that treatments are less effective if they are “one size fits all.”

Increasingly, we are seeing approaches that leverage machine learning and big data analytics to better understand individual cancers and how they develop for different populations, to subsequently create more personalized treatments, and Seqera comes into play as a way to sequence that kind of data.

This also highlights something else notable about the Seqera platform: it is used directly by the people who are analyzing the data — that is, the researchers and scientists themselves, without data specialists necessarily needing to get involved. This was a practical priority for the company, Floden told me, but nonetheless, it’s an interesting detail of how the platform is inadvertently part of that bigger trend of “no-code/low-code” software, designed to make highly technical processes usable by non-technical people.

It’s both the existing opportunity and the prospect of applying Seqera to other kinds of data living in the cloud that make it an interesting company, and, it seems, an interesting investment, too.

“Advancements in machine learning, and the proliferation of volumes and types of data, are leading to increasingly more applications of computer science in life sciences and biology,” said Kirill Tasilov, principal at Talis Capital, in a statement. “While this is incredibly exciting from a humanity perspective, it’s also skyrocketing the cost of experiments to sometimes millions of dollars per project as they become computer-heavy and complex to run. Nextflow is already a ubiquitous solution in this space and Seqera is driving those capabilities at an enterprise level – and in doing so, is bringing the entire life sciences industry into the modern age. We’re thrilled to be a part of Seqera’s journey.”

“With the explosion of biological data from cheap, commercial DNA sequencing, there is a pressing need to analyse increasingly growing and complex quantities of data,” added Arnaud Bakker, principal at Speedinvest. “Seqera’s open and cloud-first framework provides an advanced tooling kit allowing organisations to scale complex deployments of data analysis and enable data-driven life sciences solutions.”

Although medicine and life sciences are perhaps Seqera’s most obvious and timely applications today, the framework originally designed for genetics and biology can be applied to any number of other areas: AI training, image analysis and astronomy are three early use cases, Floden said. Astronomy is perhaps very apt, since it seems that the sky is the limit.

“We think we are in the century of biology,” Floden said. “It’s the center of activity and it’s becoming data-centric, and we are here to build services around that.”

Seqera is not disclosing its valuation with this round.

Aug 19, 2021

Companies betting on data must value people as much as AI

The Pareto principle, also known as the 80-20 rule, asserts that 80% of consequences come from 20% of causes, leaving the remainder far less impactful.

Those working with data may have heard a different rendition of the 80-20 rule: A data scientist spends 80% of their time at work cleaning up messy data as opposed to doing actual analysis or generating insights. Imagine a 30-minute drive expanded to two-and-a-half hours by traffic jams, and you’ll get the picture.

While most data scientists spend more than 20% of their time at work on actual analysis, they still have to waste countless hours turning a trove of messy data into a tidy dataset ready for analysis. This process can include removing duplicate data, making sure all entries are formatted correctly and doing other preparatory work.

On average, this workflow stage takes up about 45% of the total time, a recent Anaconda survey found. An earlier poll by CrowdFlower put the estimate at 60%, and many other surveys cite figures in this range.

None of this is to say data preparation is not important. “Garbage in, garbage out” is a well-known rule in computer science circles, and it applies to data science, too. In the best-case scenario, the script will just return an error, warning that it cannot calculate the average spending per client, because the entry for customer #1527 is formatted as text, not as a numeral. In the worst case, the company will act on insights that have little to do with reality.
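
To make that concrete, here is a minimal pandas sketch of the kind of clean-up involved; the dataset, column names and figures are invented for illustration:

```python
import pandas as pd

# Toy dataset: one spending entry was captured as text ("1,204.50")
df = pd.DataFrame({
    "customer_id": [1525, 1526, 1527],
    "spending": [830.0, 1115.25, "1,204.50"],
})

# A naive df["spending"].mean() raises a TypeError on the mixed-type column.
# Cleaning step: strip thousands separators, then coerce to numeric.
df["spending"] = pd.to_numeric(
    df["spending"].astype(str).str.replace(",", "", regex=False),
    errors="coerce",  # anything still unparseable becomes NaN, not a crash
)

print(df["spending"].mean())  # average spending per client: ~1049.92
```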

The real question to ask here is whether re-formatting the data for customer #1527 is really the best way to use the time of a well-paid expert. The average data scientist is paid between $95,000 and $120,000 per year, according to various estimates. Having the employee on such pay focus on mind-numbing, non-expert tasks is a waste both of their time and the company’s money. Besides, real-world data has a lifespan, and if a dataset for a time-sensitive project takes too long to collect and process, it can be outdated before any analysis is done.

What’s more, companies’ quests for data often waste the time of non-data-focused personnel as well, with employees asked to help fetch or produce data instead of working on their regular responsibilities. More than half of the data companies collect is often never used at all, suggesting that the time of everyone involved in collecting it produced nothing but operational delay and the associated losses.

The data that has been collected, on the other hand, is often only used by a designated data science team that is too overworked to go through everything that is available.

All for data, and data for all

The issues outlined here all play into the fact that, save for data pioneers like Google and Facebook, companies are still wrapping their heads around how to reimagine themselves for the data-driven era. Data is pulled into huge databases, data scientists are left with a lot of cleaning to do, and the others, whose time was spent helping fetch the data, rarely see the benefit.

The truth is, we are still early in the data transformation. The success of the tech giants that put data at the core of their business models lit a spark that is only now starting to catch. And even though the results are mixed so far, this is a sign that companies have yet to master thinking with data.

Data holds much value, and businesses are very much aware of it, as showcased by the appetite for AI experts at non-tech companies. Companies just have to do it right, and one of the key tasks in this respect is to start focusing on people as much as on AI.

Data can enhance the operations of virtually any component within the organizational structure of any business. As tempting as it may be to think of a future where there is a machine learning model for every business process, we do not need to tread that far right now. The goal for any company looking to tap data today comes down to getting it from point A to point B. Point A is the part in the workflow where data is being collected, and point B is the person who needs this data for decision-making.

Importantly, point B does not have to be a data scientist. It could be a manager trying to figure out the optimal workflow design, an engineer looking for flaws in a manufacturing process or a UI designer doing A/B testing on a specific feature. All of these people must have the data they need at hand all the time, ready to be processed for insights.

People can thrive with data just as well as models, especially if the company invests in them and makes sure to equip them with basic analysis skills. In this approach, accessibility must be the name of the game.

Skeptics may claim that big data is nothing but an overused corporate buzzword, but advanced analytics capacities can enhance the bottom line for any company as long as it comes with a clear plan and appropriate expectations. The first step is to focus on making data accessible and easy to use and not on hauling in as much data as possible.

In other words, an all-around data culture is just as important for an enterprise as the data infrastructure.

Jul 15, 2021

CockroachDB, the database that just won’t die

There is an art to engineering, and sometimes engineering can transform art. For Spencer Kimball and Peter Mattis, those two worlds collided when, as college students at Berkeley, they created the wildly successful open-source graphics program GIMP.

That project was so successful that when the two joined Google in 2002, Sergey Brin and Larry Page personally stopped by to tell the new hires how much they liked it and explained how they used the program to create the first Google logo.

Cockroach Labs was started by developers and stays true to its roots to this day.

In terms of good fortune in the corporate hierarchy, when you get this type of recognition in a company such as Google, there’s only one way you can go — up. They went from rising stars to stars at Google, becoming the go-to guys on the Infrastructure Team. They could easily have looked forward to a lifetime of lucrative employment.

But Kimball, Mattis and another Google employee, Ben Darnell, wanted more: a company of their own. To realize their ambitions, they created Cockroach Labs, the business entity behind their ambitious open-source database, CockroachDB. Can some of the smartest former engineers in Google’s arsenal upend the world of databases, in a market dotted with the gravesites of storage dreams past? That’s what we are here to find out.

Berkeley software distribution

Mattis and Kimball were roommates at Berkeley majoring in computer science in the early-to-mid-1990s. In addition to their usual studies, they also became involved with the eXperimental Computing Facility (XCF), an organization of undergraduates who have a keen, almost obsessive interest in CS.

Mar 22, 2021

No-code business intelligence service y42 raises $2.9M seed round

Berlin-based y42 (formerly known as Datos Intelligence), a data warehouse-centric business intelligence service that promises to give businesses access to an enterprise-level data stack that’s as simple to use as a spreadsheet, today announced that it has raised a $2.9 million seed funding round led by La Famiglia VC. Additional investors include the co-founders of Foodspring, Personio and Petlab.

The service, which was founded in 2020, integrates with more than 100 data sources, covering all the standard B2B SaaS tools, from Airtable to Shopify and Zendesk, as well as database services like Google’s BigQuery. Users can then transform and visualize this data, orchestrate their data pipelines and trigger automated workflows based on this data (think sending Slack notifications when revenue drops or emailing customers based on your own custom criteria).
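
y42 configures these triggers through its interface, but the underlying pattern is simple enough to sketch. The threshold, webhook URL and revenue figure below are hypothetical, not y42’s API:

```python
import requests

REVENUE_THRESHOLD = 50_000  # hypothetical daily revenue floor
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # placeholder URL

def check_revenue_and_alert(daily_revenue):
    """Post a Slack alert when revenue drops below the threshold."""
    if daily_revenue < REVENUE_THRESHOLD:
        requests.post(SLACK_WEBHOOK, json={
            "text": f"Revenue alert: {daily_revenue:,.0f} is below "
                    f"the {REVENUE_THRESHOLD:,.0f} threshold."
        })

# In a pipeline, this would run whenever the revenue model refreshes:
check_revenue_and_alert(daily_revenue=42_000)
```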

Like similar startups, y42 extends the idea of the data warehouse, which was traditionally used for analytics, and helps businesses operationalize this data. At its core the service relies on a lot of open-source software; the company, for example, contributes to GitLab’s Meltano platform for building data pipelines.

y42 founder and CEO Hung Dang. Image Credits: y42

“We’re taking the best of breed open-source software. What we really want to accomplish is to create a tool that is so easy to understand and that enables everyone to work with their data effectively,” y42 founder and CEO Hung Dang told me. “We’re extremely UX obsessed and I would describe us as a no-code/low-code BI tool — but with the power of an enterprise-level data stack and the simplicity of Google Sheets.”

Before y42, Vietnam-born Dang co-founded a major events company that operated in more than 10 countries and made millions in revenue (but with very thin margins), all while finishing up his studies with a focus on business analytics. And that in turn led him to also found a second company that focused on B2B data analytics.

Even while building his events company, he noted, he was always very product- and data-driven. “I was implementing data pipelines to collect customer feedback and merge it with operational data — and it was really a big pain at that time,” he said. “I was using tools like Tableau and Alteryx, and it was really hard to glue them together — and they were quite expensive. So out of that frustration, I decided to develop an internal tool that was actually quite usable and in 2016, I decided to turn it into an actual company.”

He then sold this company to a major publicly listed German company. An NDA prevents him from talking about the details of this transaction, but maybe you can draw some conclusions from the fact that he spent time at Eventim before founding y42.

Given his background, it’s maybe no surprise that y42’s focus is on making life easier for data engineers and, at the same time, putting the power of these platforms in the hands of business analysts. Dang noted that y42 typically provides some consulting work when it onboards new clients, but that’s mostly to give them a head start. Given the no-code/low-code nature of the product, most analysts are able to get started pretty quickly — and for more complex queries, customers can opt to drop down from the graphical interface to y42’s low-code level and write queries in the service’s SQL dialect.

The service itself runs on Google Cloud, and the 25-person team manages about 50,000 jobs per day for its clients. The company’s customers include the likes of LifeMD, Petlab and Everdrop.

Until raising this round, Dang self-funded the company and had also raised some money from angel investors. But La Famiglia felt like the right fit for y42, especially due to its focus on connecting startups with more traditional enterprise companies.

“When we first saw the product demo, it struck us how on top of analytical excellence, a lot of product development has gone into the y42 platform,” said Judith Dada, general partner at La Famiglia VC. “More and more work with data today means that data silos within organizations multiply, resulting in chaos or incorrect data. y42 is a powerful single source of truth for data experts and non-data experts alike. As former data scientists and analysts, we wish that we had y42 capabilities back then.”

Dang tells me he could have raised more but decided that he didn’t want to dilute the team’s stake too much at this point. “It’s a small round, but this round forces us to set up the right structure. For the Series A, which we plan to be towards the end of this year, we’re talking about a dimension which is 10x,” he told me.

Mar 16, 2021

A crypto company’s journey to Data 3.0

Data is a gold mine for a company.

If managed well, it provides the clarity and insights that lead to better decision-making at scale, as well as an important tool for holding everyone accountable.

However, most companies are stuck in Data 1.0, which means they are leveraging data as a manual and reactive service. Some have started moving to Data 2.0, which employs simple automation to improve team productivity. The complexity of crypto data has opened up new opportunities in data, namely to move to the new frontier of Data 3.0, where you can scale value creation through systematic intelligence and automation. This is our journey to Data 3.0.

Coinbase is neither a finance company nor a tech company — it’s a crypto company. This distinction has big implications for how we work with data. As a crypto company, we work with three major types of data (instead of the usual one or two types of data), each of which is complex and varied:

  1. Blockchain: decentralized and publicly available.
  2. Product: large and real-time.
  3. Financial: high-precision and subject to many financial/legal/compliance regulations.

Our focus has been on how we can scale value creation by making this varied data work together, eliminating data silos, solving issues before they start and creating opportunities for Coinbase that wouldn’t exist otherwise.

Having worked at tech companies like LinkedIn and eBay, and also those in the finance sector, including Capital One, I’ve observed firsthand the evolution from Data 1.0 to Data 3.0. In Data 1.0, data is seen as a reactive function providing ad-hoc manual services or firefighting in urgent situations.

Nov 12, 2020

Databricks launches SQL Analytics

AI and data analytics company Databricks today announced the launch of SQL Analytics, a new service that makes it easier for data analysts to run their standard SQL queries directly on data lakes. And with that, enterprises can now easily connect their business intelligence tools like Tableau and Microsoft’s Power BI to these data repositories as well.

SQL Analytics will be available in public preview on November 18.

In many ways, SQL Analytics is the product Databricks has long been looking to build, the one that brings its concept of a “lake house” to life. It combines the performance of a data warehouse, where you store data after it has already been transformed and cleaned, with the flexibility of a data lake, where you store all of your data in its raw form. Data in the data lake, an approach that Databricks’ co-founder and CEO Ali Ghodsi has long championed, is typically transformed only when it gets used. That makes data lakes cheaper, but also a bit harder for users to handle.

“We’ve been saying Unified Data Analytics, which means unify the data with the analytics. So data processing and analytics, those two should be merged. But no one picked that up,” Ghodsi told me. But “lake house” caught on as a term.

“Databricks has always offered data science, machine learning. We’ve talked about that for years. And with Spark, we provide the data processing capability. You can do [extract, transform, load]. That has always been possible. SQL Analytics enables you to now do the data warehousing workloads directly, and concretely, the business intelligence and reporting workloads, directly on the data lake.”

The general idea here is that with just one copy of the data, you can enable both traditional data analyst use cases (think BI) and the data science workloads (think AI) Databricks was already known for. Ideally, that makes both use cases cheaper and simpler.
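
As a sketch of that one-copy idea in generic PySpark (not the SQL Analytics service itself; the lake path and schema are hypothetical), the same raw files can serve a BI-style SQL query directly:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-demo").getOrCreate()

# One copy of the data: raw JSON events sitting in the data lake
events = spark.read.json("s3://my-lake/raw/events/")  # hypothetical path

# Expose that same data to SQL, the lingua franca of BI tools
events.createOrReplaceTempView("events")

# A reporting-style aggregation runs directly on the lake, no warehouse copy
daily_revenue = spark.sql("""
    SELECT date, SUM(amount) AS revenue
    FROM events
    GROUP BY date
    ORDER BY date
""")
daily_revenue.show()
```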

The service sits on top of an optimized version of Databricks’ open-source Delta Lake storage layer, which lets it complete queries quickly. In addition, the service provides auto-scaling endpoints to keep query latency consistent, even under high load.

While data analysts can query these data sets directly, using standard SQL, the company also built a set of connectors to BI tools. Its BI partners include Tableau, Qlik, Looker and Thoughtspot, as well as ingest partners like Fivetran, Fishtown Analytics, Talend and Matillion.

“Now more than ever, organizations need a data strategy that enables speed and agility to be adaptable,” said Francois Ajenstat, chief product officer at Tableau. “As organizations are rapidly moving their data to the cloud, we’re seeing growing interest in doing analytics on the data lake. The introduction of SQL Analytics delivers an entirely new experience for customers to tap into insights from massive volumes of data with the performance, reliability and scale they need.”

In a demo, Ghodsi showed me what the new SQL Analytics workspace looks like. It’s essentially a stripped-down version of the standard code-heavy experience familiar to Databricks users. Unsurprisingly, SQL Analytics provides a more graphical experience, one that focuses on visualizations rather than Python code.

While there are already some data analysts on the Databricks platform, this obviously opens up a large new market for the company — something that would surely bolster its plans for an IPO next year.

Aug 28, 2019

ThoughtSpot hauls in $248M Series E on $1.95B valuation

ThoughtSpot was started by a bunch of ex-Googlers looking to bring the power of search to data. Seven years later the company is growing fast, sporting a fat valuation of almost $2 billion and looking ahead to a possible IPO. Today it announced a hefty $248 million Series E round as it continues on its journey.

Investors include Silver Lake Waterman, Silver Lake’s late-stage growth capital fund, along with existing investors Lightspeed Venture Partners, Sapphire Ventures and Geodesic Capital. Today’s funding brings the total raised to $554 million, according to the company.

The company wants to help customers bring speed to data analysis by answering natural language questions about the data, without the user having to understand how to formulate a SQL query. As a person enters a question, ThoughtSpot translates it into SQL, then displays a chart with data related to the question, all almost instantly (at least in the demo).

It doesn’t stop there, though. It also uses artificial intelligence to understand intent and help come up with the exact correct answer. ThoughtSpot CEO Sudheesh Nair says that this artificial intelligence underpinning is key to the product. As he explained, if you are looking for the answer to a specific question, like “What is the profit margin of red shoes in Portland?,” there won’t be multiple answers. There is only one answer, and that’s where artificial intelligence really comes into play.

“The bar on delivering that kind of answer is very high and because of that, understanding intent is critical. We use AI for that. You could ask, ‘How did we do with red shoes in Portland?’ I could ask, ‘What is the profit margin of red shoes in Portland?’ The system needs to know that we both are asking the same question. So there’s a lot of AI that goes behind it to understand the intent,” Nair explained.
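
As a toy illustration of that intent-normalization idea (in no way ThoughtSpot’s actual system; the patterns, table and columns are invented), both phrasings below resolve to the same canonical query:

```python
import re

# Map paraphrases of the same intent onto one canonical SQL query.
INTENT_PATTERNS = [
    (re.compile(r"(?:profit margin of|how did we do with)\s+"
                r"(?P<product>[\w\s]+?)\s+in\s+(?P<city>\w+)", re.I),
     "SELECT AVG(profit_margin) FROM sales "
     "WHERE product = '{product}' AND city = '{city}'"),
]

def to_sql(question):
    """Return the canonical SQL for a recognized question, else None."""
    for pattern, template in INTENT_PATTERNS:
        m = pattern.search(question)
        if m:
            return template.format(product=m.group("product").strip(),
                                   city=m.group("city").strip())
    return None

# Two different phrasings, one intent, one answer:
print(to_sql("What is the profit margin of red shoes in Portland?"))
print(to_sql("How did we do with red shoes in Portland?"))
```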

ThoughtSpot gets answers to queries by connecting to a variety of internal systems, like HR, CRM and ERP, and uses all of this data to answer the question, as best it can. So far, it appears to be working. The company has almost 250 large-company customers, and is on a run rate of close to $100 million.

Nair said the company didn’t necessarily need the money, with $100 million still in the bank, but he saw an opportunity and seized it. He says the money gives him a great deal of flexibility moving forward, including the possibility of acquiring companies to fill in missing pieces or to expand the platform’s capabilities. It also will allow him to accelerate growth. Plus, he saw the capital markets possibly tightening next year and wanted to strike while the opportunity was in front of him.

Nair definitely sees the company going public at some point. “With these kind of resources behind us, it actually opens up an opportunity for us to do any sort of IPO that we want. I do think that a company like this will benefit from going public because Global 2000 kind of customers, where we have most of our business, appreciate the transparency and the stability represented by public companies,” he said.

He added, “And with $350 million in the bank, it’s totally [possible to] IPO, which means that a year and a half from now if we are ready to take the company public, we can actually have all options open, including a direct listing, potentially. I’m not saying we will do that, but I’m saying that with this kind of funding behind us, we have all those options open.”

Mar 06, 2019

Clari platform aims to unify go-to-market operations data

Clari started as a company that wanted to give sales teams more information about their sales process than could be found in the CRM database. Today, the company announced a much broader platform, one that can provide insight across sales, marketing and customer service to give a more unified view of a company’s go-to-market operations, all enhanced by AI.

Company co-founder and CEO Andy Byrne says this involves pulling together a variety of data and giving each department the insight to improve their mission. “We are analyzing large volumes of data found in various revenue systems — sales, marketing, customer success, etc. — and we’re using that data to provide a new platform that’s connecting up all of the different revenue departments,” Byrne told TechCrunch.

For sales, that would mean driving more revenue. For marketing, it would involve more targeted plans to drive more sales. And for customer success, it would be about increasing customer retention and reducing churn.

The company’s original idea when it launched in 2012 was to look at a range of data that touched the sales process, such as email, calendars and the CRM database, to build a broader view of sales than you could get from the basic customer data stored in the CRM alone. The Clari data could tell reps things like which deals were most likely to close and which ones were at risk.

“We were taking all of these signals that had been historically disconnected from each other and we were connecting it all into a new interface for sales teams that’s very different than a CRM,” Byrne said.

Over time, that came to involve using AI and machine learning to make connections in the data that humans might not have seen. The company also found that customers were using the product to look at processes adjacent to sales, and it decided to formalize that and build connectors to relevant parts of the go-to-market system, like marketing automation tools from Marketo or Eloqua and customer tools such as Dialpad, Gong.io and Salesloft.
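
A toy sketch of that kind of signal-based deal scoring might look like the following. This is illustrative only, not Clari’s model; the features, training deals and labels are all invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per deal (invented): [emails_last_30d, meetings_last_30d,
# days_since_last_touch] drawn from email, calendar and CRM signals.
X_train = np.array([
    [14, 4, 2],   # deals that closed
    [9, 3, 5],
    [2, 0, 30],   # deals that slipped
    [1, 1, 45],
])
y_train = np.array([1, 1, 0, 0])  # 1 = closed-won, 0 = lost/slipped

model = LogisticRegression().fit(X_train, y_train)

# Score an open deal: quiet inbox, no meetings, three weeks of silence
open_deal = np.array([[3, 0, 21]])
print(model.predict_proba(open_deal)[0, 1])  # estimated close probability
```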

With Clari’s approach, companies can get a unified view without manually pulling all this data together. The goal is to provide customers with a broad view of the go-to-market operation that isn’t possible looking at siloed systems.

The company has experienced tremendous growth over the last year, leaping from 80 customers to 250. These include Okta and Alteryx, two companies that went public in recent years. Clari is based in the Bay Area and has around 120 employees. It has raised more than $60 million. The most recent round was a $35 million Series C last May led by Tenaya Capital.

Nov 01, 2018

Rockset launches out of stealth with $21.5M investment

Rockset, a startup that came out of stealth today, announced $21.5 million in previous funding and launched its new data platform, which is designed to simplify much of the processing needed to get to querying and application building faster.

As for the funding, it includes $3 million in seed money raised when the company was founded and a more recent $18.5 million Series A, which was led by Sequoia and Greylock.

Jerry Chen, a partner at Greylock, sees a team that understands the needs of modern developers and data scientists, one that was born in the cloud and can handle many of the activities data scientists have traditionally had to do manually. “Rockset can ingest any data from anywhere and let developers and data scientists query it using standard SQL. No pipelines. No glue. Just real time operational apps,” he said.

Company co-founder and CEO Venkat Venkataramani is a former Facebook engineer, a role in which he learned a bit about processing data at scale. He wanted to start a company that would help data scientists get to insights more quickly.

Data typically requires a lot of massaging before data scientists and developers can make use of it, and Rockset has been designed to bypass much of that hard work, which can take days, weeks or even months to complete.

“We’re building out our service with innovative architecture and unique capabilities that allows full-featured fast SQL directly on raw data. And we’re offering this as a service. So developers and data scientists can go from useful data in any shape, any form to useful applications in a matter of minutes. And it would take months today,” Venkataramani explained.

To do this, you simply connect your data set, wherever it lives, to Rockset, and it deals with the data ingestion, the schema building, the data cleaning, everything. It also makes sure you have the right amount of infrastructure for the volume of data you are working with. In other words, it can potentially simplify highly complex data processing tasks so that you can start working with the raw data almost immediately using SQL queries.

To achieve the speed, Venkataramani says they use a number of indexing techniques. “Our indexing technology essentially tries to bring the best of search engines and columnar databases into one. When we index the data, we build more than one type of index behind the scenes so that a wide spectrum of pre-processing can be automatically fast out of the box,” he said. That takes the burden of processing and building data pipelines off of the user.
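
A heavily simplified sketch of that converged-indexing idea (not Rockset’s implementation; the documents are invented) builds both a search-engine-style inverted index and an analytics-style column store from the same records:

```python
from collections import defaultdict

docs = [
    {"id": 1, "city": "Portland", "amount": 120},
    {"id": 2, "city": "Austin", "amount": 80},
    {"id": 3, "city": "Portland", "amount": 95},
]

inverted = defaultdict(set)   # (field, value) -> matching document ids
columns = defaultdict(list)   # field -> column of values

for doc in docs:
    for field, value in doc.items():
        inverted[(field, value)].add(doc["id"])
        columns[field].append(value)

# A selective lookup is served by the inverted index...
print(inverted[("city", "Portland")])  # {1, 3}
# ...while an aggregation scans only the relevant column
print(sum(columns["amount"]) / len(columns["amount"]))  # 98.33...
```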

The company was founded in 2016. Chen and Sequoia partner Mike Vernal joined the Rockset board under the terms of the Series A funding, which closed last August.

Oct 12, 2018

Anaplan hits the ground running with strong stock market debut up over 42 percent

You might think that Anaplan CEO Frank Calderoni would have had a few sleepless nights this week. His company picked a bad week to go public, as market instability rocked tech stocks. Still, he wasn’t worried, and today the company had, by any measure, a successful debut, with the stock soaring over 42 percent. As of 4 pm ET, it hit $24.18, up from the IPO price of $17. Not a bad way to launch your company.

Stock Chart: Yahoo Finance

“I feel good because it really shows the quality of the company, the business model that we have and how we’ve been able to build a growing successful business, and I think it provides us with a tremendous amount of opportunity going forward,” Calderoni told TechCrunch.

Calderoni joined the company a couple of years ago, and seemed to emerge from Silicon Valley central casting: a former CFO at Red Hat and Cisco, with stints at IBM and SanDisk. He said he had often wished that a tool like Anaplan were around when he was in charge of a several-thousand-person planning operation at Cisco. He indicated that while they were successful, they could have been even more so with a tool like Anaplan.

“The planning phase has not had much change in several decades. I’ve been part of it and I’ve dealt with a lot of the pain. And so having something like Anaplan, I see it really being a disrupter in the planning space because of the breadth of the platform that we have. And then it goes across organizations to sales, supply chain, HR and finance, and as we say, really connects the data, the people and the plan to make for better decision making as a result of all that,” he said.

Calderoni describes Anaplan as a planning and data analysis tool. In his previous jobs, he says, he spent a ton of time just gathering data and making sure they had the right data, but precious little time on analysis. In his view, Anaplan lets companies concentrate more on the crucial analysis phase.

“Anaplan allows customers to really spend their time on what I call forward planning where they can start to run different scenarios and be much more predictive, and hopefully be able to, as we’ve seen a lot of our customers do, forecast more accurately,” he said.

Anaplan was founded in 2006 and raised almost $300 million along the way. It achieved a lofty valuation of $1.5 billion in its last round, a $60 million raise in 2017. The company has just under 1,000 customers, including Del Monte, VMware, Box and United.

Calderoni says that although the company has 40 percent of its business outside the US, there are plenty of markets left to conquer, and it hopes to use today’s cash infusion in part to continue expanding into a worldwide company.
