Oct
14
2020
--

Dataloop raises $11M Series A round for its AI data management platform

Dataloop, a Tel Aviv-based startup that specializes in helping businesses manage the entire data life cycle for their AI projects, including helping them annotate their data sets, today announced that it has now raised a total of $16 million. This includes a $5 million seed round that was previously unreported, as well as an $11 million Series A round that recently closed.

The Series A round was led by Amiti Ventures, with participation from F2 Venture Capital, crowdfunding platform OurCrowd, NextLeap Ventures and SeedIL Ventures.

“Many organizations continue to struggle with moving their AI and ML projects into production as a result of data labeling limitations and a lack of real-time validation that can only be achieved with human input into the system,” said Dataloop CEO Eran Shlomo. “With this investment, we are committed, along with our partners, to overcoming these roadblocks and providing next generation data management tools that will transform the AI industry and meet the rising demand for innovation in global markets.”

Image Credits: Dataloop

For the most part, Dataloop specializes in helping businesses manage and annotate their visual data. It’s agnostic to the vertical its customers are in, but we’re talking about anything from robotics and drones to retail and autonomous driving.

The platform itself centers on a “human in the loop” model that complements the automated systems, with the ability for humans to train and correct the model as needed. It combines the hosted annotation platform with a Python SDK and REST API for developers, as well as a serverless Functions-as-a-Service environment that runs on top of a Kubernetes cluster for automating dataflows.
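In code, the human-in-the-loop pattern described here boils down to a simple routing rule: predictions above a confidence threshold are accepted automatically, the rest are queued for human review, and the corrections flow back into the next training set. The sketch below is a toy illustration of that idea only; the names, threshold and data shapes are hypothetical and are not Dataloop's actual SDK.

```python
# Toy sketch of a human-in-the-loop annotation flow: low-confidence
# model predictions are routed to a human review queue, and the
# corrected labels are collected for retraining.
# All names and thresholds are illustrative, not Dataloop's API.

CONFIDENCE_THRESHOLD = 0.8

def route_predictions(predictions):
    """Split model output into auto-accepted labels and a review queue."""
    accepted, review_queue = [], []
    for item in predictions:
        if item["confidence"] >= CONFIDENCE_THRESHOLD:
            accepted.append(item)
        else:
            review_queue.append(item)
    return accepted, review_queue

predictions = [
    {"id": 1, "label": "car", "confidence": 0.95},
    {"id": 2, "label": "drone", "confidence": 0.55},  # needs a human
    {"id": 3, "label": "person", "confidence": 0.88},
]

accepted, review_queue = route_predictions(predictions)

# A human annotator corrects the queued item; the corrected pair
# becomes new training data for the next model iteration.
corrections = [{**item, "label": "bird", "source": "human"}
               for item in review_queue]
training_batch = accepted + corrections
```

The same loop is what the quoted “real-time validation … with human input into the system” refers to: the model handles the easy cases, and human effort is spent only where the model is unsure.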

Image Credits: Dataloop

The company was founded in 2017. It’ll use the new funding to grow its presence in the U.S. and European markets, something that’s pretty standard for Israeli startups, and build out its engineering team as well.

Oct
08
2020
--

Grid AI raises $18.6M Series A to help AI researchers and engineers bring their models to production

Grid AI, a startup founded by the inventor of the popular open-source PyTorch Lightning project, William Falcon, that aims to help machine learning engineers work more efficiently, today announced that it has raised an $18.6 million Series A funding round, which closed earlier this summer. The round was led by Index Ventures, with participation from Bain Capital Ventures and firstminute. 

Falcon co-founded the company with Luis Capelo, who was previously the head of machine learning at Glossier. Unsurprisingly, the idea here is to take PyTorch Lightning, which launched about a year ago, and turn that into the core of Grid’s service. The main idea behind Lightning is to decouple the data science from the engineering.

The team argues that a few years ago, when data scientists tried to get started with deep learning, they didn’t always have the right expertise and it was hard for them to get everything right.

“Now the industry has an unhealthy aversion to deep learning because of this,” Falcon noted. “Lightning and Grid embed all those tricks into the workflow so you no longer need to be a PhD in AI nor [have] the resources of the major AI companies to get these things to work. This makes the opportunity cost of putting a simple model against a sophisticated neural network a few hours’ worth of effort instead of the months it used to take. When you use Lightning and Grid it’s hard to make mistakes. It’s like if you take a bad photo with your phone but we are the phone and make that photo look super professional AND teach you how to get there on your own.”

As Falcon noted, Grid is meant to help data scientists and other ML professionals “scale to match the workloads required for enterprise use cases.” Lightning itself can get them partially there, but Grid is meant to provide all of the services its users need to scale up their models to solve real-world problems.

What exactly that looks like isn’t quite clear yet, though. “Imagine you can find any GitHub repository out there. You get a local copy on your laptop and without making any code changes you spin up 400 GPUs on AWS — all from your laptop using either a web app or command-line interface. That’s the Lightning ‘magic’ applied to training and building models at scale,” Falcon said. “It is what we are already known for and has proven to be such a successful paradigm shift that all the other frameworks like Keras or TensorFlow, and companies have taken notice and have started to modify what they do to try to match what we do.”

The service is now in private beta.

With this new funding, Grid, which currently has 25 employees, plans to expand its team and strengthen its corporate offering via both Grid AI and through the open-source project. Falcon tells me that he aims to build a diverse team, not in the least because he himself is an immigrant, born in Venezuela, and a U.S. military veteran.

“I have first-hand knowledge of the extent that unethical AI can have,” he said. “As a result, we have approached hiring our current 25 employees across many backgrounds and experiences. We might be the first AI company that is not all the same Silicon Valley prototype tech-bro.”

“Lightning’s open-source traction piqued my interest when I first learned about it a year ago,” Index Ventures’ Sarah Cannon told me. “So intrigued in fact I remember rushing into a closet in Helsinki while at a conference to have the privacy needed to hear exactly what Will and Luis had built. I promptly called my colleague Bryan Offutt who met Will and Luis in SF and was impressed by the ‘elegance’ of their code. We swiftly decided to participate in their seed round, days later. We feel very privileged to be part of Grid’s journey. After investing in seed, we spent a significant amount of time with the team, and the more time we spent with them the more conviction we developed. Less than a year later and pre-launch, we knew we wanted to lead their Series A.”

Sep
23
2020
--

WhyLabs brings more transparency to ML ops

WhyLabs, a new machine learning startup that was spun out of the Allen Institute, is coming out of stealth today. Founded by a group of former Amazon machine learning engineers, Alessya Visnjic, Sam Gracie and Andy Dang, together with Madrona Venture Group principal Maria Karaivanova, WhyLabs’ focus is on ML operations after models have been trained — not on building those models from the ground up.

The team also today announced that it has raised a $4 million seed funding round from Madrona Venture Group, Bezos Expeditions, Defy Partners and Ascend VC.

Visnjic, the company’s CEO, used to work on Amazon’s demand forecasting model.

“The team was all research scientists, and I was the only engineer who had kind of tier-one operating experience,” she told me. “So I thought, ‘Okay, how bad could it be? I carried the pager for the retail website before.’ But it was one of the first AI deployments that we’d done at Amazon at scale. The pager duty was extra fun because there were no real tools. So when things would go wrong — like we’d order way too many black socks out of the blue — it was a lot of manual effort to figure out why issues were happening.”

Image Credits: WhyLabs

But while large companies like Amazon have built their own internal tools to help their data scientists and AI practitioners operate their AI systems, most enterprises continue to struggle with this — and a lot of AI projects simply fail and never make it into production. “We believe that one of the big reasons that happens is because of the operating process that remains super manual,” Visnjic said. “So at WhyLabs, we’re building the tools to address that — specifically to monitor and track data quality and alert on issues — you can think of it as Datadog for AI applications.”

The team has broad ambitions, but to get started, it is focusing on observability. The team is building — and open-sourcing — a new tool for continuously logging what’s happening in the AI system, using a low-overhead agent. That platform-agnostic system, dubbed WhyLogs, is meant to help practitioners understand the data that moves through the AI/ML pipeline.

For a lot of businesses, Visnjic noted, the amount of data that flows through these systems is so large that it doesn’t make sense for them to keep “lots of big haystacks with possibly some needles in there for some investigation to come in the future.” So what they do instead is just discard all of this. With its data logging solution, WhyLabs aims to give these companies the tools to investigate their data and find issues right at the start of the pipeline.
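The core trick behind this kind of data logging is to replace the haystack with a summary: instead of retaining every record, keep a compact per-column statistical profile of each batch, which is cheap to store but still enough to spot nulls, outliers and drift later. The snippet below is a toy, stdlib-only sketch of that idea; it is not the actual WhyLogs API.

```python
# Minimal illustration of statistical data logging: rather than storing
# every record (the "haystack"), keep a small per-column profile that is
# enough to investigate data-quality issues later.
# This is a toy sketch of the concept, not the actual WhyLogs API.
from collections import Counter

def profile_batch(rows):
    """Summarize a batch of dict-shaped records column by column."""
    profile = {}
    for row in rows:
        for col, val in row.items():
            stats = profile.setdefault(col, {
                "count": 0, "nulls": 0, "min": None, "max": None,
                "distinct": Counter(),
            })
            stats["count"] += 1
            if val is None:
                stats["nulls"] += 1
                continue
            stats["min"] = val if stats["min"] is None else min(stats["min"], val)
            stats["max"] = val if stats["max"] is None else max(stats["max"], val)
            stats["distinct"][val] += 1
    return profile

batch = [
    {"sku": "sock-black", "qty": 2},
    {"sku": "sock-black", "qty": 40000},   # a suspicious outlier
    {"sku": "sock-blue", "qty": None},     # a missing value
]
p = profile_batch(batch)
```

A monitoring layer can then alert when a profile deviates from history — e.g. the `qty` maximum jumping by orders of magnitude is exactly the “way too many black socks” class of incident described above.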

Image Credits: WhyLabs

According to Karaivanova, the company doesn’t have paying customers yet, but it is working on a number of proofs of concepts. Among those users is Zulily, which is also a design partner for the company. The company is going after mid-size enterprises for the time being, but as Karaivanova noted, to hit the sweet spot for the company, a customer needs to have an established data science team with 10 to 15 ML practitioners. While the team is still figuring out its pricing model, it’ll likely be a volume-based approach, Karaivanova said.

“We love to invest in great founding teams who have built solutions at scale inside cutting-edge companies, who can then bring products to the broader market at the right time. The WhyLabs team are practitioners building for practitioners. They have intimate, first-hand knowledge of the challenges facing AI builders from their years at Amazon and are putting that experience and insight to work for their customers,” said Tim Porter, managing director at Madrona. “We couldn’t be more excited to invest in WhyLabs and partner with them to bring cross-platform model reliability and observability to this exploding category of MLOps.”

May
06
2020
--

Enterprise companies find MLOps critical for reliability and performance

Enterprise startups UIPath and Scale have drawn huge attention in recent years from companies looking to automate workflows, from RPA (robotic process automation) to data labeling.

What’s been overlooked in the wake of such workflow-specific tools has been the base class of products that enterprises are using to build the core of their machine learning (ML) workflows, and the shift in focus toward automating the deployment and governance aspects of the ML workflow.

That’s where MLOps comes in, and its popularity has been fueled by the rise of core ML workflow platforms such as Boston-based DataRobot. The company has raised more than $430 million and reached a $1 billion valuation this past fall serving this very need for enterprise customers. DataRobot’s vision has been simple: enabling a range of users within enterprises, from business and IT users to data scientists, to gather data and build, test and deploy ML models quickly.

Founded in 2012, the company has quietly amassed a customer base that boasts more than a third of the Fortune 50, with triple-digit yearly growth since 2015. DataRobot’s top four industries include finance, retail, healthcare and insurance; its customers have deployed over 1.7 billion models through DataRobot’s platform. The company is not alone, with competitors like H2O.ai, which raised a $72.5 million Series D led by Goldman Sachs last August, offering a similar platform.

Why the excitement? As artificial intelligence pushed into the enterprise, the first step was to go from data to a working ML model, which started with data scientists doing this manually, but today is increasingly automated and has become known as “auto ML.” An auto-ML platform like DataRobot’s can let an enterprise user quickly auto-select features based on their data and auto-generate a number of models to see which ones work best.
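The auto-ML loop sketched in that paragraph — generate several candidate models from the data, score each, and keep the best — can be illustrated in a few lines. The candidates below are deliberately simple stand-ins (a constant-mean baseline, a last-value baseline, and an ordinary least-squares fit); this is a conceptual toy, not anything DataRobot actually runs.

```python
# Toy auto-ML sketch: fit several candidate models on the same data,
# score each on a held-out set, and keep the best performer.
# The candidate models are illustrative stand-ins only.

def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_last(xs, ys):
    last = ys[-1]
    return lambda x: last

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    b = my - a * mx
    return lambda x: a * x + b

def auto_select(candidates, train, holdout):
    """Fit every candidate, score on held-out data, return the winner."""
    xs, ys = train
    hx, hy = holdout
    def mse(model):
        return sum((model(x) - y) ** 2 for x, y in zip(hx, hy)) / len(hx)
    fitted = {name: fit(xs, ys) for name, fit in candidates.items()}
    return min(fitted.items(), key=lambda kv: mse(kv[1]))

candidates = {"mean": fit_mean, "last": fit_last, "linear": fit_linear}
train = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])   # roughly y = 2x
holdout = ([5, 6], [10.1, 11.9])
best_name, best_model = auto_select(candidates, train, holdout)
```

Real auto-ML platforms do the same thing at much larger scale — hundreds of feature transformations and model families instead of three one-liners — but the select-by-holdout-score loop is the essential mechanism.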

As auto ML became more popular, improving the deployment phase of the ML workflow has become critical for reliability and performance — and so enters MLOps. It’s quite similar to the way that DevOps has improved the deployment of source code for applications. Companies such as DataRobot and H2O.ai, along with other startups and the major cloud providers, are intensifying their efforts on providing MLOps solutions for customers.

We sat down with DataRobot’s team to understand how their platform has been helping enterprises build auto-ML workflows, what MLOps is all about and what’s been driving customers to adopt MLOps practices now.

The rise of MLOps

Feb
26
2020
--

Freshworks acquires AnsweriQ

Customer engagement platform Freshworks today announced that it has acquired AnsweriQ, a startup that provides AI tools for self-service solutions and agent-assisted use cases where the ultimate goal is to quickly provide customers with answers and make agents more efficient.

The companies did not disclose the acquisition price. AnsweriQ last raised a funding round in 2017, when it received $5 million in a Series A round from Madrona Venture Group.

Freshworks founder and CEO Girish Mathrubootham tells me that he was introduced to the company through a friend, but that he had also previously come across AnsweriQ as a player in the customer service automation space for large clients in high-volume call centers.

“We really liked the team and the product and their ability to go up-market and win larger deals,” Mathrubootham said. “In terms of using the AI/ML customer service, the technology that they’ve built was perfectly complementary to everything else that we were building.”

He also noted the client base, which doesn’t overlap with Freshworks’, and the talent at AnsweriQ, including the leadership team, made this a no-brainer.

AnsweriQ, which has customers that use Freshworks and competing products, will continue to operate its existing products for the time being. Over time, Freshworks, of course, hopes to convert many of these users into Freshworks users as well. The company also plans to integrate AnsweriQ’s technology into its Freddy AI engine. The exact branding for these new capabilities remains unclear, but Mathrubootham suggested FreshiQ as an option.

As for the AnsweriQ leadership team, CEO Pradeep Rathinam will be joining Freshworks as chief customer officer.

Rathinam told me that the company was at the point where he was looking to raise the next round of funding. “As we were going to raise the next round of funding, our choices were to go out and raise the next round and go down this path, or look for a complementary platform on which we can vet our products and then get faster customer acquisition and really scale this to hundreds or thousands of customers,” he said.

He also noted that as a pure AI player, AnsweriQ had to deal with lots of complex data privacy and residency issues, so a more comprehensive platform like Freshworks made a lot of sense.

Freshworks has always been relatively acquisitive. Last year, the company acquired the customer success service Natero, for example. With the $150 million Series H round it announced last November, the company now also has the cash on hand to acquire even more companies. Freshworks is currently valued at about $3.5 billion and has 2,700 employees in 13 offices. With the acquisition of AnsweriQ, it now also has a foothold in Seattle, which it plans to use to attract local talent to the company.

Jan
28
2020
--

RealityEngines launches its autonomous AI service

RealityEngines.AI, an AI and machine learning startup founded by a number of former Google executives and engineers, is coming out of stealth today and announcing its first set of products.

When the company first announced its $5.25 million seed round last year, CEO Bindu Reddy wasn’t quite ready to disclose RealityEngines’ mission beyond saying that it planned to make machine learning easier for enterprises. With today’s launch, the team is putting this into practice by launching a set of tools that specifically tackle a number of standard enterprise use cases for ML, including user churn predictions, fraud detection, sales lead forecasting, security threat detection and cloud spend optimization. For use cases that don’t fit neatly into these buckets, the service also offers a more general predictive modeling service.

Before co-founding RealityEngines, Reddy was the head of product for Google Apps and general manager for AI verticals at AWS. Her co-founders are Arvind Sundararajan (formerly at Google and Uber) and Siddartha Naidu (who founded BigQuery at Google). Investors in the company include Eric Schmidt, Ram Shriram, Khosla Ventures and Paul Buchheit.

As Reddy noted, the idea behind this first set of products from RealityEngines is to give businesses an easy entry into machine learning, even if they don’t have data scientists on staff.

Besides talent, another issue that businesses often face is that they don’t always have massive amounts of data to train their networks effectively. That has long been a roadblock for many companies that want to see what AI can do for them but that didn’t have the right resources to do so. RealityEngines overcomes this by creating realistic synthetic data that it can then use to augment a company’s existing data. In its tests, this creates models that are up to 15% more accurate than models that were trained without the synthetic data.

“The most prominent use of generative adversarial networks — GANs — has been to create deepfakes,” said Reddy. “Deepfakes have captured the public’s imagination by highlighting how easy it is to spread misinformation with these doctored videos and images. However, GANs can also be applied to productive and good use. They can be used to create synthetic data sets which can then be combined with the original data to produce robust AI models even when a business doesn’t have much training data.”

RealityEngines currently has about 20 employees, most of whom have a deep background in ML/AI, both as researchers and practitioners.

May
14
2019
--

Algorithmia raises $25M Series B for its AI automation platform

Algorithmia, a Seattle-based startup that offers a cloud-agnostic AI automation platform for enterprises, today announced a $25 million Series B funding round led by Norwest Venture Partners. Madrona, Gradient Ventures, Work-Bench, Osage University Partners and Rakuten Ventures also participated in this round.

While the company started out five years ago as a marketplace for algorithms, it now mostly focuses on machine learning and helping enterprises take their models into production.

“It’s actually really hard to productionize machine learning models,” Algorithmia CEO Diego Oppenheimer told me. “It’s hard to help data scientists to not deal with data infrastructure but really being able to build out their machine learning and AI muscle.”

To help them, Algorithmia essentially built out a machine learning DevOps platform that allows data scientists to train their models on the platform and with the framework of their choice, bring them to Algorithmia — a platform that has already been blessed by their IT departments — and take them into production.

“Every Fortune 500 CIO has an AI initiative but they are bogged down by the difficulty of managing and deploying ML models,” said Rama Sekhar, a partner at Norwest Venture Partners, who has now joined the company’s board. “Algorithmia is the clear leader in building the tools to manage the complete machine learning life cycle and helping customers unlock value from their R&D investments.”

With the new funding, the company will double down on this focus by investing in product development to solve these issues, but also by building out its team, with a plan to double its headcount over the next year. A year from now, Oppenheimer told me, he hopes that Algorithmia will be a household name for data scientists and, maybe more importantly, their platform of choice for putting their models into production.

“How does Algorithmia succeed? Algorithmia succeeds when our customers are able to deploy AI and ML applications,” Oppenheimer said. “And although there is a ton of excitement around doing this, the fact is that it’s really difficult for companies to do so.”

The company previously raised a $10.5 million Series A round led by Google’s AI fund. Its customers now include the United Nations, a number of U.S. intelligence agencies and Fortune 500 companies. In total, more than 90,000 engineers and data scientists are now on the platform.

May
02
2019
--

Microsoft launches a drag-and-drop machine learning tool

Microsoft today announced three new services that all aim to simplify the process of machine learning. These range from a new interface for a tool that completely automates the process of creating models, to a new no-code visual interface for building, training and deploying models, all the way to hosted Jupyter-style notebooks for advanced users.

Getting started with machine learning is hard. Even running the most basic of experiments takes a good amount of expertise. All of these new tools greatly simplify this process by hiding away the code or giving those who want to write their own code a pre-configured platform for doing so.

The new interface for Azure’s automated machine learning tool makes creating a model as easy as importing a data set and then telling the service which value to predict. Users don’t need to write a single line of code, while in the backend, this updated version now supports a number of new algorithms and optimizations that should result in more accurate models. While most of this is automated, Microsoft stresses that the service provides “complete transparency into algorithms, so developers and data scientists can manually override and control the process.”

For those who want a bit more control from the get-go, Microsoft also today launched a visual interface for its Azure Machine Learning service into preview that will allow developers to build, train and deploy machine learning models without having to touch any code.

This tool, the Azure Machine Learning visual interface, looks suspiciously like the existing Azure ML Studio, Microsoft’s first stab at building a visual machine learning tool. Indeed, the two services look identical. The company never really pushed this service, though, and almost seemed to have forgotten about it despite the fact that it always seemed like a really useful tool for getting started with machine learning.

Microsoft says that this new version combines the best of Azure ML Studio with the Azure Machine Learning service. In practice, this means that while the interface is almost identical, the Azure Machine Learning visual interface extends what was possible with ML Studio by running on top of the Azure Machine Learning service and adding that service’s security, deployment and lifecycle management capabilities.

The service provides an easy interface for cleaning up your data, training models with the help of different algorithms, evaluating them and, finally, putting them into production.

While these first two services clearly target novices, the new hosted notebooks in Azure Machine Learning are clearly geared toward the more experienced machine learning practitioner. The notebooks come pre-packaged with support for the Azure Machine Learning Python SDK and run in what the company describes as a “secure, enterprise-ready environment.” While using these notebooks isn’t trivial either, this new feature allows developers to quickly get started without the hassle of setting up a new development environment with all the necessary cloud resources.

Apr
10
2019
--

The right way to do AI in security

Artificial intelligence applied to information security can engender images of a benevolent Skynet, sagely analyzing more data than imaginable and making decisions at lightspeed, saving organizations from devastating attacks. In such a world, humans are barely needed to run security programs, their jobs largely automated out of existence, relegating them to a role as the button-pusher on particularly critical changes proposed by the otherwise omnipotent AI.

Such a vision is still in the realm of science fiction. AI in information security is more like an eager, callow puppy attempting to learn new tricks – minus the disappointment written on their faces when they consistently fail. No one’s job is in danger of being replaced by security AI; if anything, a larger staff is required to ensure security AI stays firmly leashed.

Arguably, AI’s highest use case currently is to add futuristic sheen to traditional security tools, rebranding timeworn approaches as trailblazing sorcery that will revolutionize enterprise cybersecurity as we know it. The current hype cycle for AI appears to be the roaring, ferocious crest at the end of a decade that began with bubbly excitement around the promise of “big data” in information security.

But what lies beneath the marketing gloss and quixotic lust for an AI revolution in security? How did AI ascend to supplant the lustrous zest around machine learning (“ML”) that dominated headlines in recent years? Where is there true potential to enrich information security strategy for the better – and where is it simply an entrancing distraction from more useful goals? And, naturally, how will attackers plot to circumvent security AI to continue their nefarious schemes?

How did AI grow out of this stony rubbish?

The year AI debuted as the “It Girl” in information security was 2017. The year prior, MIT completed their study showing “human-in-the-loop” AI out-performed AI and humans individually in attack detection. Likewise, DARPA conducted the Cyber Grand Challenge, a battle testing AI systems’ offensive and defensive capabilities. Until this point, security AI was imprisoned in the contrived halls of academia and government. Yet, the history of two vendors exhibits how enthusiasm surrounding security AI was driven more by growth marketing than user needs.

Apr
03
2019
--

Okta unveils $50M in-house venture capital fund

Identity management software provider Okta, which went public two years ago in what was one of the first pure-cloud subscription-based company IPOs, wants to fund the next generation of identity, security and privacy startups.

At its big customer conference Oktane, where the company has also announced a new level of identity protection at the server level, chief operating officer Frederic Kerrest (pictured above, right, with chief executive officer Todd McKinnon) will unveil a $50 million investment fund meant to back early-stage startups leveraging artificial intelligence, machine learning and blockchain technology.

“We view this as a natural extension of what we are doing today,” Okta senior vice president Monty Gray told TechCrunch. Gray was hired last year to oversee corporate development, i.e. beef up Okta’s M&A strategy.

Gray and Kerrest tell TechCrunch that Okta Ventures will invest capital in existing Okta partners, as well as other companies in the burgeoning identity management ecosystem. The team managing the fund will look to Okta’s former backers, Sequoia, Andreessen Horowitz and Greylock, for support in the deal sourcing process.

Okta Ventures will write checks sized between $250,000 and $2 million to eight to 10 early-stage businesses per year.

“It’s just a way of making sure we are aligning all our work and support with the right companies who have the right vision and values because there’s a lot of noise around identity, ML and AI,” Kerrest said. “It’s about formalizing the support strategy we’ve had for years and making sure people are clear of the fact we are helping these organizations build because it’s helpful to our customers.”

Okta Ventures’ first bet is Trusted Key, a blockchain-based digital identity platform that previously raised $3 million from Founders Co-Op. Okta’s investment in the startup, founded by former Microsoft, Oracle and Symantec executives, represents its expanding interest in the blockchain.

“Blockchain as a backdrop for identity is cutting edge if not bleeding edge,” Gray said.

Okta, founded in 2009, had raised precisely $231 million from Sequoia, Andreessen Horowitz, Greylock, Khosla Ventures, Floodgate and others prior to its exit. The company’s stock has fared well since its IPO, debuting at $17 per share in 2017 and climbing to more than $85 apiece with a market cap of $9.6 billion as of Tuesday closing.
