Aug 13, 2021

ThirdAI raises $6M to democratize AI to any hardware

Houston-based ThirdAI, a company building tools to speed up deep learning technology without the need for specialized hardware like graphics processing units, brought in $6 million in seed funding.

Neotribe Ventures, Cervin Ventures and Firebolt Ventures co-led the investment, which will be used to hire additional employees and invest in computing resources, Anshumali Shrivastava, ThirdAI co-founder and CEO, told TechCrunch.

Shrivastava, who has a mathematics background, was always interested in artificial intelligence and machine learning, especially rethinking how AI could be developed in a more efficient manner. It was when he was at Rice University that he looked into how to make that work for deep learning. He started ThirdAI in April with some Rice graduate students.

ThirdAI’s technology is designed to be “a smarter approach to deep learning,” using algorithmic and software innovations to make general-purpose central processing units (CPUs) faster than graphics processing units for training large neural networks, Shrivastava said. Companies largely abandoned CPUs for this work years ago in favor of graphics processing units, whose parallelism — originally built for rendering high-resolution images and video — churns through training math far more quickly. The downside is that graphics processing units have comparatively little memory, so users often hit a bottleneck while trying to develop AI, he added.

“When we looked at the landscape of deep learning, we saw that much of the technology was from the 1980s, and a majority of the market, some 80%, were using graphics processing units, but were investing in expensive hardware and expensive engineers and then waiting for the magic of AI to happen,” he said.

He and his team looked at how AI was likely to be developed in the future and wanted to create a cost-saving alternative to graphics processing units. Their algorithm, “sub-linear deep learning engine,” instead uses CPUs that don’t require specialized acceleration hardware.
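ThirdAI hasn’t published its engine, but Shrivastava’s earlier academic work on the SLIDE system points at the core trick: use locality-sensitive hashing to evaluate only the handful of neurons likely to activate strongly for a given input, rather than the whole layer. A toy numpy sketch of that idea — the sizes and the SimHash scheme here are illustrative, not ThirdAI’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_neurons, n_bits = 64, 10_000, 12
W = rng.standard_normal((n_neurons, d))    # one weight row per neuron
planes = rng.standard_normal((n_bits, d))  # random hyperplanes for SimHash

def simhash(v):
    # n_bits-bit signature: which side of each hyperplane v falls on.
    # Vectors pointing in similar directions tend to share a signature.
    bits = (planes @ v > 0).astype(np.uint32)
    return int(bits @ (1 << np.arange(n_bits, dtype=np.uint32)))

# Pre-bucket every neuron by the hash of its weight vector (done once).
buckets = {}
for i, w in enumerate(W):
    buckets.setdefault(simhash(w), []).append(i)

def sparse_forward(x):
    # Only evaluate neurons whose weights hash like the input — the ones
    # likely to have large dot products with it.
    active = buckets.get(simhash(x), [])
    return active, (W[active] @ x) if active else np.array([])

x = rng.standard_normal(d)
active, acts = sparse_forward(x)
print(f"evaluated {len(active)} of {n_neurons} neurons")
```

Because neurons are bucketed once up front, each forward pass touches only a tiny, input-dependent slice of the weight matrix — the source of the claimed sub-linear cost on commodity CPUs.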

Swaroop “Kittu” Kolluri, founder and managing partner at Neotribe, said this type of technology is still early. Current methods are laborious, expensive and slow, and for example, if a company is running language models that require more memory, it will run into problems, he added.

“That’s where ThirdAI comes in, where you can have your cake and eat it, too,” Kolluri said. “It is also why we wanted to invest. It is not just the computing, but the memory, and ThirdAI will enable anyone to do it, which is going to be a game changer. As technology around deep learning starts to get more sophisticated, there is no limit to what is possible.”

AI is already at a stage where it can solve some of the hardest problems, like those in healthcare and seismic processing, but Shrivastava notes there is also a question about the climate implications of running AI models.

“Training deep learning models can be more expensive than having five cars in a lifetime,” Shrivastava said. “As we move on to scale AI, we need to think about those.”

 

Dec 08, 2020

AWS expands on SageMaker capabilities with end-to-end features for machine learning

Nearly three years after its launch, Amazon Web Services’ SageMaker platform has gotten a significant upgrade in the form of new features that make it easier for developers to automate and scale each step of building and deploying machine learning models, the company said.

As machine learning moves into the mainstream, business units across organizations will find applications for automation, and AWS is trying to make the development of those bespoke applications easier for its customers.

“One of the best parts of having such a widely adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables,” said AWS vice president of machine learning, Swami Sivasubramanian. “Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug and run custom machine learning models with greater visibility, explainability and automation at scale.”

Already companies like 3M, ADP, AstraZeneca, Avis, Bayer, Capital One, Cerner, Domino’s Pizza, Fidelity Investments, Lenovo, Lyft, T-Mobile and Thomson Reuters are using SageMaker tools in their own operations, according to AWS.

The company’s new products include Amazon SageMaker Data Wrangler, which the company said provides a way to normalize data from disparate sources so it is consistently easy to use. Data Wrangler can also ease the process of grouping disparate data sources into features to highlight certain types of data. The tool contains more than 300 built-in data transformers that help customers normalize, transform and combine features without having to write any code.

Amazon also unveiled the Feature Store, which allows customers to create repositories that make it easier to store, update, retrieve and share machine learning features for training and inference.

Another new tool that Amazon Web Services touted was Pipelines, its workflow management and automation toolkit. The Pipelines tech is designed to provide orchestration and automation features similar to those found in traditional software workflows. Using Pipelines, developers can define each step of an end-to-end machine learning workflow, the company said in a statement. Developers can re-run an end-to-end workflow from SageMaker Studio using the same settings to get the same model every time, or re-run it with new data to update their models.

To address longstanding issues with data bias in artificial intelligence and machine learning models, Amazon launched SageMaker Clarify. First announced today, the tool provides bias detection across the machine learning workflow, the company says, so developers can build with an eye toward better transparency on how models were set up. There are open-source tools that can run these tests, Amazon acknowledged, but they are manual and require a lot of heavy lifting from developers, according to the company.

Other products designed to simplify the machine learning application development process include SageMaker Debugger, which enables developers to train models faster by monitoring system resource utilization and alerting developers to potential bottlenecks; Distributed Training, which makes it possible to train large, complex, deep learning models faster than current approaches by automatically splitting data across multiple GPUs to accelerate training times; and SageMaker Edge Manager, a machine learning model management tool for edge devices, which allows developers to optimize, secure, monitor and manage models deployed on fleets of edge devices.

Last but not least, Amazon unveiled SageMaker JumpStart, which provides developers with a searchable interface to find algorithms and sample notebooks so they can get started on their machine learning journey. The company said it would give developers new to machine learning the option to select several pre-built machine learning solutions and deploy them into SageMaker environments.

Oct 08, 2020

Grid AI raises $18.6M Series A to help AI researchers and engineers bring their models to production

Grid AI, a startup founded by William Falcon, creator of the popular open-source PyTorch Lightning project, aims to help machine learning engineers work more efficiently. Today the company announced that it has raised an $18.6 million Series A funding round, which closed earlier this summer. The round was led by Index Ventures, with participation from Bain Capital Ventures and firstminute.

Falcon co-founded the company with Luis Capelo, who was previously the head of machine learning at Glossier. Unsurprisingly, the idea here is to take PyTorch Lightning, which launched about a year ago, and turn that into the core of Grid’s service. The main idea behind Lightning is to decouple the data science from the engineering.
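That split is easiest to see in code: the model class holds the research logic (what to compute on each batch) while a reusable trainer owns the loop and the bookkeeping. The sketch below illustrates the division of labor in plain Python, without the library itself — the one-weight “model” and both classes are hypothetical stand-ins, not Lightning’s API:

```python
class Model:
    """The 'science': what to compute on each batch."""
    def __init__(self):
        self.w = 0.0  # a single learnable weight, for illustration

    def training_step(self, batch):
        x, y = batch
        pred = self.w * x
        return 2 * (pred - y) * x  # gradient of squared error w.r.t. w

    def apply_gradient(self, grad, lr=0.1):
        self.w -= lr * grad

class Trainer:
    """The 'engineering': a reusable loop no model reimplements."""
    def __init__(self, max_epochs=50):
        self.max_epochs = max_epochs

    def fit(self, model, data):
        for _ in range(self.max_epochs):
            for batch in data:
                model.apply_gradient(model.training_step(batch))

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # points on y = 2x
model = Model()
Trainer().fit(model, data)
print(round(model.w, 2))  # converges toward 2.0
```

In the real library the trainer also handles device placement, checkpointing, distributed training and so on — exactly the engineering Grid intends to sell as a service.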

The team argues that a few years ago, when data scientists tried to get started with deep learning, they didn’t always have the right expertise and it was hard for them to get everything right.

“Now the industry has an unhealthy aversion to deep learning because of this,” Falcon noted. “Lightning and Grid embed all those tricks into the workflow so you no longer need to be a PhD in AI nor [have] the resources of the major AI companies to get these things to work. This makes the opportunity cost of putting a simple model against a sophisticated neural network a few hours’ worth of effort instead of the months it used to take. When you use Lightning and Grid it’s hard to make mistakes. It’s like if you take a bad photo with your phone but we are the phone and make that photo look super professional AND teach you how to get there on your own.”

As Falcon noted, Grid is meant to help data scientists and other ML professionals “scale to match the workloads required for enterprise use cases.” Lightning itself can get them partially there, but Grid is meant to provide all of the services its users need to scale up their models to solve real-world problems.

What exactly that looks like isn’t quite clear yet, though. “Imagine you can find any GitHub repository out there. You get a local copy on your laptop and without making any code changes you spin up 400 GPUs on AWS — all from your laptop using either a web app or command-line interface. That’s the Lightning ‘magic’ applied to training and building models at scale,” Falcon said. “It is what we are already known for and has proven to be such a successful paradigm shift that all the other frameworks like Keras or TensorFlow, and companies, have taken notice and have started to modify what they do to try to match what we do.”

The service is now in private beta.

With this new funding, Grid, which currently has 25 employees, plans to expand its team and strengthen its corporate offering via both Grid AI and the open-source project. Falcon tells me that he aims to build a diverse team, not least because he himself is an immigrant, born in Venezuela, and a U.S. military veteran.

“I have first-hand knowledge of the extent [of impact] that unethical AI can have,” he said. “As a result, we have approached hiring our current 25 employees across many backgrounds and experiences. We might be the first AI company that is not all the same Silicon Valley prototype tech-bro.”

“Lightning’s open-source traction piqued my interest when I first learned about it a year ago,” Index Ventures’ Sarah Cannon told me. “So intrigued in fact I remember rushing into a closet in Helsinki while at a conference to have the privacy needed to hear exactly what Will and Luis had built. I promptly called my colleague Bryan Offutt who met Will and Luis in SF and was impressed by the ‘elegance’ of their code. We swiftly decided to participate in their seed round, days later. We feel very privileged to be part of Grid’s journey. After investing in seed, we spent a significant amount [of time] with the team, and the more time we spent with them the more conviction we developed. Less than a year later and pre-launch, we knew we wanted to lead their Series A.”

Sep 02, 2020

Hypatos gets $11.8M for a deep learning approach to document processing

Process automation startup Hypatos has raised a €10 million (~$11.8 million) seed round of funding from investors including Blackfin Tech, Grazia Equity, UVC Partners and Plug & Play Ventures.

The Germany and Poland-based company was spun out of AI for accounting startup Smacc at the back end of 2018 to apply deep learning tech to power a wider range of back-office automation, with a focus on industries with heavy financial document processing needs, such as the financial and insurance sectors.

Hypatos is applying language processing AI and computer vision tech to speed up financial document processing for business use cases such as invoices, travel and expense management, loan application validation and insurance claims handling — touting a training data set of more than 10 million annotated data entities.

It says the new seed funding will go on R&D to expand its portfolio of AI models so it can automate business processing for more types of documents, as well as on fueling growth in Europe, North America and Asia. Its customer base at this point includes Fortune 500 companies, major accounting firms and more than 300 software companies.

While there are plenty of business process automation plays, Hypatos says its use of deep learning tech supports an “in-depth understanding” of document content — which in turn allows it to offer customers a “soup to nuts” automation menu that covers document classification, information capturing, content validation and data enrichment.

It dubs its approach “cognitive process automation” (CPA) versus more basic applications of business process automation with software robots (RPA), which it argues aren’t so contextually savvy — thereby claiming an edge.

As well as document processing solutions, it has developed machine learning modules for enhancing customers’ existing systems (e.g. ECM, ERP, CRM, RPA); and offers APIs for software providers to draw on its machine learning tech for their own applications.

“All offerings include machine learning pipeline software for continuous model training in the cloud or in on-premise deployments,” it notes in a press release.

“We have deep knowledge of how financial documents are processed and millions of data entities in our training data,” says chief commercial officer Cem Dilmegani, discussing where Hypatos fits in the business process automation landscape. “We get compared to RPA companies like UiPath, enterprise content management (ECM) companies like Kofax Readsoft as well as generalist ML document automation companies like Hyperscience. However, we are quite different.

“We focus on end-to-end automation, we don’t only help companies capture data, we help them process it using our deep domain understanding, enabling higher rates of automation. For example, to automate incoming invoice processing (A/P automation) we apply our document understanding AI to capture all data, classify the document, identify the specific goods and services, validate for internal/external compliance and assign financial accounts, cost centers, cost categories etc. to automate all processing tasks.

“Finally, we offer this technology as components easily accessible via APIs. This allows RPA or ECM users to leverage our technology and increase their level of automation.”

Hypatos claims it’s seeing uplift as a result of the coronavirus pandemic. It notes it’s providing a service to more than a dozen Fortune 500 companies to help with in-shoring efforts, which it says are accelerating as COVID-19 puts pressure on the traditional business process outsourcing model, with offshore workforce productivity in lower-wage regions hit by coronavirus lockdowns.

“We believe that we are in a pivotal moment of machine learning adoption in large organizations,” adds Andreas Unseld, partner at UVC Partners, in a supporting statement. “Hypatos’ technology provides ample opportunity to transform many core business processes. We’re impressed by the Hypatos machine learning technology and see the team in a perfect position to take a leading role in the machine learning revolution to come.”

Apr 21, 2020

AWS and Facebook launch an open-source model server for PyTorch

AWS and Facebook today announced two new open-source projects around PyTorch, the popular open-source machine learning framework. The first of these is TorchServe, a model-serving framework for PyTorch that will make it easier for developers to put their models into production. The other is TorchElastic, a library that makes it easier for developers to build fault-tolerant training jobs on Kubernetes clusters, including AWS’s EC2 spot instances and Elastic Kubernetes Service.

In many ways, the two companies are taking what they have learned from running their own machine learning systems at scale and are putting this into the project. For AWS, that’s mostly SageMaker, the company’s machine learning platform, but as Bratin Saha, AWS VP and GM for Machine Learning Services, told me, the work on PyTorch was mostly motivated by requests from the community. And while there are obviously other model servers like TensorFlow Serving and the Multi Model Server available today, Saha argues that it would be hard to optimize those for PyTorch.

“If we tried to take some other model server, we would not be able to ‘optimize’ it as much, as well as create it within the nuances of how PyTorch developers like to see this,” he said. AWS has lots of experience running its own model servers for SageMaker that can handle multiple frameworks, but the community was asking for a model server tailored to how they work. That also meant adapting the server’s API to what PyTorch developers expect from their framework of choice, for example.

As Saha told me, the server that AWS and Facebook are now launching as open source is similar to what AWS is using internally. “It’s quite close,” he said. “We actually started with what we had internally for one of our model servers and then put it out to the community, worked closely with Facebook, to iterate and get feedback — and then modified it so it’s quite close.”

Bill Jia, Facebook’s VP of AI Infrastructure, also told me he’s very happy about how his team and the community have pushed PyTorch forward in recent years. “If you look at the entire industry community — a large number of researchers and enterprise users are using AWS,” he said. “And then we figured out if we can collaborate with AWS and push PyTorch together, then Facebook and AWS can get a lot of benefits, but more so, all the users can get a lot of benefits from PyTorch. That’s our reason for why we wanted to collaborate with AWS.”

As for TorchElastic, the focus here is on allowing developers to create training systems that can work on large distributed Kubernetes clusters where you might want to use cheaper spot instances. Those are preemptible, though, so your system has to be able to handle that, while traditionally, machine learning training frameworks often expect a system where the number of instances stays the same throughout the process. That, too, is something AWS originally built for SageMaker. There, it’s fully managed by AWS, though, so developers never have to think about it. For developers who want more control over their dynamic training systems or to stay very close to the metal, TorchElastic now allows them to recreate this experience on their own Kubernetes clusters.

AWS has a bit of a reputation when it comes to open source and its engagement with the open-source community. In this case, though, it’s nice to see AWS lead the way to bring some of its own work on building model servers, for example, to the PyTorch community. In the machine learning ecosystem, that’s very much expected, and Saha stressed that AWS has long engaged with the community as one of the main contributors to MXNet and through its contributions to projects like Jupyter, TensorFlow and libraries like NumPy.

Jan 28, 2020

RealityEngines launches its autonomous AI service

RealityEngines.AI, an AI and machine learning startup founded by a number of former Google executives and engineers, is coming out of stealth today and announcing its first set of products.

When the company first announced its $5.25 million seed round last year, CEO Bindu Reddy wasn’t quite ready to disclose RealityEngines’ mission beyond saying that it planned to make machine learning easier for enterprises. With today’s launch, the team is putting this into practice by launching a set of tools that specifically tackle a number of standard enterprise use cases for ML, including user churn predictions, fraud detection, sales lead forecasting, security threat detection and cloud spend optimization. For use cases that don’t fit neatly into these buckets, the service also offers a more general predictive modeling service.

Before co-founding RealityEngines, Reddy was the head of product for Google Apps and general manager for AI verticals at AWS. Her co-founders are Arvind Sundararajan (formerly at Google and Uber) and Siddartha Naidu (who founded BigQuery at Google). Investors in the company include Eric Schmidt, Ram Shriram, Khosla Ventures and Paul Buchheit.

As Reddy noted, the idea behind this first set of products from RealityEngines is to give businesses an easy entry into machine learning, even if they don’t have data scientists on staff.

Besides talent, another issue that businesses often face is that they don’t always have massive amounts of data to train their networks effectively. That has long been a roadblock for many companies that want to see what AI can do for them but that didn’t have the right resources to do so. RealityEngines overcomes this by creating realistic synthetic data that it can then use to augment a company’s existing data. In its tests, this creates models that are up to 15% more accurate than models that were trained without the synthetic data.

“The most prominent use of generative adversarial networks — GANs — has been to create deepfakes,” said Reddy. “Deepfakes have captured the public’s imagination by highlighting how easy it [is] to spread misinformation with these doctored videos and images. However, GANs can also be applied to productive and good use. They can be used to create synthetic data sets which can then be combined with the original data to produce robust AI models even when a business doesn’t have much training data.”
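RealityEngines hasn’t detailed its generator, but the augmentation workflow Reddy describes can be sketched with a stand-in for the GAN. Here a Gaussian fitted to the scarce real samples plays the generator’s role — purely illustrative: a real GAN learns the distribution rather than assuming its shape.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scarce "real" training data: 30 samples from each of two classes.
real_a = rng.normal(loc=0.0, scale=1.0, size=(30, 5))
real_b = rng.normal(loc=2.0, scale=1.0, size=(30, 5))

def synthesize(real, n):
    # Stand-in generator: fit a per-feature Gaussian to the real samples
    # and draw new ones from it.
    mu, sigma = real.mean(axis=0), real.std(axis=0)
    return rng.normal(mu, sigma, size=(n, real.shape[1]))

# Augment each class tenfold with synthetic samples before training.
aug_a = np.vstack([real_a, synthesize(real_a, 300)])
aug_b = np.vstack([real_b, synthesize(real_b, 300)])
print(aug_a.shape)  # (330, 5)
```

A model trained on the augmented sets sees far more variation than the 30 real samples alone provide, which is the mechanism behind the claimed accuracy gains.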

RealityEngines currently has about 20 employees, most of whom have a deep background in ML/AI, both as researchers and practitioners.

Oct 30, 2019

Google launches TensorFlow Enterprise with long-term support and managed services

Google open-sourced its TensorFlow machine learning framework back in 2015 and it quickly became one of the most popular platforms of its kind. Enterprises that wanted to use it, however, had to either work with third parties or do it themselves. To help these companies — and capture some of this lucrative market itself — Google is launching TensorFlow Enterprise, which includes hands-on, enterprise-grade support and optimized managed services on Google Cloud.

One of the most important features of TensorFlow Enterprise is that it will offer long-term support. For some versions of the framework, Google will offer patches for up to three years. For what looks to be an additional fee, Google will also offer engineering assistance from its Google Cloud and TensorFlow teams to companies that are building AI models.

All of this, of course, is deeply integrated with Google’s own cloud services. “Because Google created and open-sourced TensorFlow, Google Cloud is uniquely positioned to offer support and insights directly from the TensorFlow team itself,” the company writes in today’s announcement. “Combined with our deep expertise in AI and machine learning, this makes TensorFlow Enterprise the best way to run TensorFlow.”

Google also includes Deep Learning VMs and Deep Learning Containers to make getting started with TensorFlow easier, and the company has optimized the enterprise version for Nvidia GPUs and Google’s own Cloud TPUs.

Today’s launch is yet another example of Google Cloud’s focus on enterprises, a move the company accelerated when it hired Thomas Kurian to run its cloud business. After years of mostly ignoring the enterprise, the company is now clearly looking at what enterprises are struggling with and how it can adapt its products for them.

Jul 11, 2019

Andrew Ng to talk about how AI will transform business at TC Sessions: Enterprise

When it comes to applying AI to the world around us, Andrew Ng has few if any peers. We are delighted to announce that the renowned founder, investor, AI expert and Stanford professor will join us onstage at the TechCrunch Sessions: Enterprise show on September 5 at the Yerba Buena Center in San Francisco. 

AI promises to transform the $500 billion enterprise world like nothing since the cloud and SaaS. Hundreds of startups are already seizing the AI moment in areas like recruiting, marketing and communications, and customer experience. The oceans of data required to power AI are becoming dramatically more valuable, which in turn is fueling the rise of new data platforms, another big topic of the show.

Last year, Ng launched the $175 million AI Fund, backed by big names like Sequoia, NEA, Greylock and SoftBank. The fund’s goal is to develop new AI businesses in a studio model and spin them out when they are ready for prime time. The first of that fund’s cohort is Landing AI, which also launched last year and aims to “empower companies to jumpstart AI and realize practical value.” It’s a wave businesses will want to catch if Ng is anywhere near right in his conviction that AI will generate $13 trillion in GDP growth globally in the next 20 years. You heard that right. 

At TC Sessions: Enterprise, TechCrunch’s editors will ask Ng to detail how he believes AI will unfold in the enterprise world and bring big productivity gains to business. 

As the former chief scientist at Baidu and the founding lead of Google Brain, Ng led the AI transformation of two of the world’s leading technology companies. Dr. Ng is the co-founder of Coursera, an online learning platform, and founder of deeplearning.ai, an AI education platform. Dr. Ng is also an adjunct professor at Stanford University’s Computer Science Department and holds degrees from Carnegie Mellon University, MIT and the University of California, Berkeley.

Early Bird tickets to see Andrew at TC Sessions: Enterprise are on sale for just $249 when you book here; but hurry, prices go up by $100 soon! Students, grab your discounted tickets for just $75 here.


Feb 05, 2019

Backed by Benchmark, Blue Hexagon just raised $31 million for its deep learning cybersecurity software

Nayeem Islam spent nearly 11 years with chipmaker Qualcomm, where he founded its Silicon Valley-based R&D facility, recruited its entire team and oversaw research on all aspects of security, including applying machine learning on mobile devices and in the network to detect threats early.

Islam was nothing if not prolific, developing a system for on-device machine learning for malware detection, libraries for optimizing deep learning algorithms on mobile devices and systems for parallel compute on mobile devices, among other things.

In fact, because of his work, he also saw a big opportunity in better protecting enterprises from cyberthreats through deep neural networks that are capable of processing every raw byte within a file and that can uncover complex relations within data sets. So two years ago, Islam and Saumitra Das, a former Qualcomm engineer with 330 patents to his name and another 450 pending, struck out on their own to create Blue Hexagon, a now 30-person Sunnyvale, Calif.-based company that is today disclosing it has raised $31 million in funding from Benchmark and Altimeter.
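Blue Hexagon hasn’t published its architecture; the raw-byte premise, though, starts with turning a file’s bytes into a fixed-length representation a network could consume. The byte-histogram-plus-entropy sketch below is a deliberately crude, hypothetical stand-in for learned features, not the company’s method:

```python
import numpy as np

def byte_histogram(data: bytes) -> np.ndarray:
    # 256-bin normalized histogram: a fixed-length summary of a raw byte
    # stream (a deep model would learn richer features than this).
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

def entropy(hist: np.ndarray) -> float:
    # Shannon entropy in bits; near 8 means the bytes look uniformly random.
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum())

# Toy "files": text-like bytes vs. high-entropy (packed/encrypted) bytes.
benign = b"hello world " * 100
rng = np.random.default_rng(2)
packed = rng.integers(0, 256, size=1200, dtype=np.uint8).tobytes()

print(entropy(byte_histogram(benign)), entropy(byte_histogram(packed)))
```

High-entropy byte distributions are a classic hint of packed or encrypted payloads — a hand-crafted version of the kind of signal a deep model over raw bytes can learn on its own.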

The funding comes roughly one year after Benchmark quietly led a $6 million Series A round for the firm.

So what has investors so bullish on the company’s prospects, aside from its credentialed founders? In a word, speed, seemingly. According to Islam, Blue Hexagon has created a real-time, cybersecurity platform that he says can detect known and unknown threats at first encounter, then block them in “sub seconds” so the malware doesn’t have time to spread.

The industry has to move to real-time detection, he says, explaining that four new and unique malware samples are released every second, and arguing that traditional security methods can’t keep pace. He says that sandboxes, for example, meaning restricted environments that quarantine cyberthreats and keep them from breaching sensitive files, are no longer state of the art. The same is true of signatures, which are mathematical techniques used to validate the authenticity and integrity of a message, software or digital document but are being bypassed by rapidly evolving new malware.

Only time will tell if Blue Hexagon is far more capable of identifying and stopping attackers, as Islam insists is the case. It is not the only startup to apply deep learning to cybersecurity, though it’s certainly one of the first. Critics, some of whom are protecting their own corporate interests, also worry that hackers can foil security algorithms by targeting the warning flags they look for.

Still, with its technology, its team and its pitch, Blue Hexagon is starting to persuade not only top investors of its merits, but a growing — and broad — base of customers, says Islam. “Everyone has this issue, from large banks, insurance companies, state and local governments. Nowhere do you find someone who doesn’t need to be protected.”

Blue Hexagon can even help customers that are already under attack, Islam says, even if it isn’t ideal. “Our goal is to catch an attack as early in the kill chain as possible. But if someone is already being attacked, we’ll see that activity and pinpoint it and be able to turn it off.”

Some damage may already be done, of course. It’s another reason to plan ahead, he says. “With automated attacks, you need automated techniques.” Deep learning, he insists, “is one way of leveling the playing field against attackers.”

Apr 25, 2018

Allegro.AI nabs $11M for ‘deep learning as a service’, for businesses to build computer vision products

Artificial intelligence and the application of it across nearly every aspect of our lives is shaping up to be one of the major step changes of our modern society. Today, a startup that wants to help other companies capitalise on AI’s advances is announcing funding and emerging from stealth mode.

Allegro.AI, which has built a deep learning platform that companies can use to build and train computer-vision-based technologies — from self-driving car systems through to security, medical and any other services that require a system to read and parse visual data — is today announcing that it has raised $11 million in funding, as it prepares for a full-scale launch of its commercial services later this year after running pilots and working with early users in a closed beta.

The round may not be huge by today’s startup standards, but the presence of strategic investors speaks to the interest that the startup has sparked and the gap in the market for what it is offering. It includes MizMaa Ventures — a Chinese fund that is focused on investing in Israeli startups, along with participation from Robert Bosch Venture Capital GmbH (RBVC), Samsung Catalyst Fund and Israeli fund Dynamic Loop Capital. Other investors (the $11 million actually covers more than one round) are not being disclosed.

Nir Bar-Lev, the CEO and cofounder (Moses Guttmann, another cofounder, is the company’s CTO; and the third cofounder, Gil Westrich, is the VP of R&D), started Allegro.AI first as Seematics in 2016 after he left Google, where he had worked in various senior roles for over 10 years. It was partly that experience that led him to the idea that with the rise of AI, there would be an opportunity for companies that could build a platform to help other less AI-savvy companies build AI-based products.

“We’re addressing a gap in the industry,” he said in an interview. Although there are a number of services, for example Rekognition from Amazon’s AWS, which allow a developer to ping a database by way of an API to provide analytics and some identification of a video or image, these are relatively basic and couldn’t be used to build and “teach” full-scale navigation systems, for example.

“An ecosystem doesn’t exist for anything deep-learning based.” Every company that wants to build something would have to invest 80-90 percent of its total R&D resources on infrastructure before getting to the many other aspects of building a product, he said, which might also include the hardware and the applications themselves. “We’re providing this so that the companies don’t need to build it.”

Instead, the research scientists that buy into the Allegro.AI platform — it’s not intended for non-technical users (not now, at least) — can concentrate on overseeing projects and considering strategic applications and other aspects of the projects. He says that currently its direct target customers are tech companies and others that rely heavily on tech, “but are not the Googles and Amazons of the world.”

Indeed, companies like Google, AWS, Microsoft, Apple and Facebook have all made major inroads into AI, and in one way or another each has a strong interest in enterprise services and may already be hosting a lot of data in their clouds. But Bar-Lev believes that companies ultimately will be wary to work with them on large-scale AI projects:

“A lot of the data that’s already on their cloud is data from before the AI revolution, before companies realized that the asset today is data,” he said. “If it’s there, it’s there and a lot of it is transactional and relational data.

“But what’s not there is all the signal-based data, all of the data coming from computer vision. That is not on these clouds. We haven’t spoken to a single automotive [company] that is sharing that with these cloud providers. They are not even sharing it with their OEMs. I’ve worked at Google, and I know how companies are afraid of them. These companies are terrified of tech companies like Amazon and so on eating them up, so if they can now stop and control their assets they will do that.”

Customers have the option of working with Allegro either as a cloud or on-premise product, or a combination of the two, and this brings up the third reason that Allegro believes it has a strong opportunity. The quantity of data that is collected for image-based neural networks is massive, and in some regards it’s not practical to rely on cloud systems to process that. Allegro’s emphasis is on building computing at the edge to work with the data more efficiently, which is one of the reasons investors were also interested.

“AI and machine learning will transform the way we interact with all the devices in our lives, by enabling them to process what they’re seeing in real time,” said David Goldschmidt, VP and MD at Samsung Catalyst Fund, in a statement. “By advancing deep learning at the edge, Allegro.AI will help companies in a diverse range of fields—from robotics to mobility—develop devices that are more intelligent, robust, and responsive to their environment. We’re particularly excited about this investment because, like Samsung, Allegro.AI is committed not just to developing this foundational technology, but also to building the open, collaborative ecosystem that is necessary to bring it to consumers in a meaningful way.”

Allegro.AI is not the first company with hopes of providing AI and deep learning as a service to the enterprise world: Element.AI out of Canada is another startup that is being built on the premise that most companies know they will need to consider how to use AI in their businesses, but lack the in-house expertise or budget (or both) to do that. Until the wider field matures and AI know-how becomes something anyone can buy off-the-shelf, it’s going to present an interesting opportunity for the likes of Allegro and others to step in.

 

 

 
