Aug 04, 2021

Enterprise AI 2.0: The acceleration of B2B AI innovation has begun

Two decades after businesses first started deploying AI solutions, one can argue that they’ve made little progress in achieving significant gains in efficiency and profitability relative to the hype that drove initial expectations.

On the surface, recent data supports AI skeptics. Almost 90% of data science projects never make it to production; only 20% of analytics insights through 2022 will achieve business outcomes; and even companies that have developed an enterprisewide AI strategy are seeing failure rates of up to 50%.

But the past 25 years have only been the first phase in the evolution of enterprise AI — or what we might call Enterprise AI 1.0. That’s where many businesses remain today. However, companies on the leading edge of AI innovation have advanced to the next generation, which will define the coming decade of big data, analytics and automation — Enterprise AI 2.0.

The difference between these two generations of enterprise AI is not academic. For executives across the business spectrum — from healthcare and retail to media and finance — the evolution from 1.0 to 2.0 is a chance to learn and adapt from past failures, create concrete expectations for future uses and justify the rising investment in AI that we see across industries.

Two decades from now, when business leaders look back at the 2020s, the companies that achieved Enterprise AI 2.0 first will stand out as the big winners in the economy, having differentiated their services, scooped up market share and positioned themselves for ongoing innovation.

Framing the digital transformations of the future as an evolution from Enterprise AI 1.0 to 2.0 provides a conceptual model for business leaders developing strategies to compete in the age of automation and advanced analytics.

Enterprise AI 1.0 (the status quo)

Starting in the mid-1990s, AI was a sector marked by speculative testing, experimental interest and exploration. These activities occurred almost exclusively in the domain of data scientists. As Gartner wrote in a recent report, these efforts were “alchemy … run by wizards whose talents will not scale in the organization.”

Jul 20, 2021

How we built an AI unicorn in 6 years

Today, Tractable is worth $1 billion. Our AI is used by millions of people across the world to recover faster from road accidents, and it also helps recycle as many cars as Tesla puts on the road.

And yet six years ago, Tractable was just me and Raz (Razvan Ranca, CTO), two college grads coding in a basement. Here’s how we did it, and what we learned along the way.

Build upon a fresh technological breakthrough

In 2013, I was fortunate to get into artificial intelligence (more specifically, deep learning) six months before it blew up internationally. It started when I took a course on Coursera called “Machine learning with neural networks” by Geoffrey Hinton. It was like being love-struck. Back then, to me, AI was science fiction, like “The Terminator.”

But an article in the tech press said the academic field was amid a resurgence. As a result of 100x larger training data sets and 100x higher compute power becoming available by reprogramming GPUs (graphics cards), a huge leap in predictive performance had been attained in image classification a year earlier. This meant computers were starting to be able to understand what’s in an image — like humans do.

The next step was getting this technology into the real world. While at university — Imperial College London — we teamed up with much more skilled people and built a plant recognition app with deep learning. We walked our professor through Hyde Park, watching him take photos of flowers with the app and laugh with joy as the AI recognized the right plant species. This had previously been impossible.

I started spending every spare moment on image classification with deep learning. Still, no one was talking about it in the news — even Imperial’s computer vision lab wasn’t yet on it! I felt like I was in on a revolutionary secret.

Looking back, narrowly focusing on a branch of applied science undergoing a breakthrough paradigm shift that hadn’t yet reached the business world changed everything.

Search for complementary co-founders who will become your best friends

I’d previously been rejected from Entrepreneur First (EF), one of the world’s best incubators, for not knowing anything about tech. Having changed that, I applied again.

The last interview was a hackathon, where I met Raz. He was doing machine learning research at Cambridge, had topped EF’s technical test, and had published papers on reconstructing shredded documents and on poker bots that could detect bluffs. His bare-bones webpage read: “I seek data-driven solutions to currently intractable problems.” Now that had a ring to it (and it’s where we’d get the name Tractable).

That hackathon, we coded all night. The morning after, he and I knew something special was happening between us. We moved in together and would spend years side by side, 24/7, from waking up to Pantera in the morning to coding marathons at night.

But we also wouldn’t have got where we are without Adrien (Cohen, president), who joined as our third co-founder right after our seed round. Adrien had previously co-founded Lazada, an online supermarket in South East Asia like Amazon and Alibaba, which sold to Alibaba for $1.5 billion. Adrien would teach us how to build a business, inspire trust and hire world-class talent.

Find potential customers early so you can work out market fit

Tractable started at EF with a head start — a paying customer. Our first use case was … plastic pipe welds.

It was as glamorous as it sounds. Pipes that carry water and natural gas to your home are made of plastic. They’re connected by welds (melt the two plastic ends, connect them, let them cool down and solidify again as one). Image classification AI could visually check people’s weld setups to ensure good quality. Most of all, it was real-world value for breakthrough AI.
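For readers curious what that kind of model looks like in code, here is a minimal sketch of a weld-photo classifier built by fine-tuning a pretrained convolutional network. It is purely illustrative rather than Tractable's actual system, and the folder layout ("weld_photos/good", "weld_photos/bad") is a hypothetical stand-in.

```python
# Minimal sketch of an image-classification weld checker (hypothetical data layout,
# not Tractable's actual pipeline): fine-tune a pretrained CNN on good/bad weld photos.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects folders weld_photos/good and weld_photos/bad (hypothetical).
train_ds = datasets.ImageFolder("weld_photos", transform=transform)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: good / bad weld setup

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:       # one pass over the data as a demonstration
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```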

And yet in the end, they — our only paying customer — stopped working with us, just as we were raising our first round of funding. That was rough. Luckily, pipe weld inspection was too small a market to interest investors, so we explored other use cases — utilities, geology, dermatology and medical imaging.

Jun 02, 2021

Iterative raises $20M for its MLOps platform

Iterative, an open-source startup that is building an enterprise AI platform to help companies operationalize their models, today announced that it has raised a $20 million Series A round led by 468 Capital and Mesosphere co-founder Florian Leibert. Previous investors True Ventures and Afore Capital also participated in this round, which brings the company’s total funding to $25 million.

The core idea behind Iterative is to provide data scientists and data engineers with a platform that closely resembles a modern GitOps-driven development stack.

After spending time in academia, Iterative co-founder and CEO Dmitry Petrov joined Microsoft as a data scientist on the Bing team in 2013. He noted that the industry has changed quite a bit since then. While early on, the questions were about how to build machine learning models, today the problem is how to build predictable processes around machine learning, especially in large organizations with sizable teams. “How can we make the team productive, not the person? This is a new challenge for the entire industry,” he said.

Big companies (like Microsoft) were able to build their own proprietary tooling and processes to build their AI operations, Petrov noted, but that’s not an option for smaller companies.

Currently, Iterative’s stack consists of a couple of different components that sit on top of tools like GitLab and GitHub. These include DVC, for running experiments and for data and model versioning; CML, the company’s CI/CD platform for machine learning; and the company’s newest product, Studio, its SaaS platform for enabling collaboration between teams. Instead of reinventing the wheel, Iterative essentially gives data scientists who already use GitHub or GitLab to collaborate on their source code a tool like DVC Studio that extends that collaboration to data and metrics, too.
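To make the GitOps analogy concrete, here is a minimal sketch of pulling a versioned artifact through DVC's Python API. The repository URL, file path and tag are hypothetical placeholders; the point is that data and models are addressed by Git revisions, so a teammate can retrieve exactly what a given commit or tag was trained with.

```python
# Minimal sketch of the Git-centric workflow DVC enables (repo URL, path and tag
# are hypothetical): read a versioned model artifact pinned to a Git revision.
import dvc.api

with dvc.api.open(
    "models/classifier.pkl",                        # path tracked by DVC (hypothetical)
    repo="https://github.com/example/ml-project",   # hypothetical repository
    rev="v1.2.0",                                   # Git tag, branch or commit
    mode="rb",
) as f:
    model_bytes = f.read()
```

Git itself stores only small metadata files, while the heavy data and model files live in a DVC-managed remote, which is what lets ordinary Git workflows such as branches, tags and pull requests extend to datasets and models.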

“DVC Studio enables machine learning developers to run hundreds of experiments with full transparency, giving other developers in the organization the ability to collaborate fully in the process,” said Petrov. “The funding today will help us bring more innovative products and services into our ecosystem.”

Petrov stressed that he wants to build an ecosystem of tools, not a monolithic platform. When the company closed this current funding round about three months ago, Iterative had about 30 employees, many of whom were previously active in the open-source community around its projects. Today, that number is already closer to 60.

“Data, ML and AI are becoming an essential part of the industry and IT infrastructure,” said Leibert, general partner at 468 Capital. “Companies with great open-source adoption and bottom-up market strategy, like Iterative, are going to define the standards for AI tools and processes around building ML models.”

Mar 17, 2021

New Relic expands its AIOps services

In recent years, the publicly traded observability service New Relic started adding more machine learning-based tools to its platform for AI-assisted incident response when things don’t quite go as planned. Today, it is expanding this feature set with the launch of a number of new capabilities for what it calls its “New Relic Applied Intelligence Service.”

This expansion includes an anomaly detection service that is available even to free users, the ability to group alerts from multiple tools when the models think a single issue is triggering all of them, and new ML-based root cause analysis to help eliminate some of the guesswork when problems occur. Also new (and in public beta) is New Relic’s ability to detect patterns and outliers in log data that is stored in the company’s data platform.
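New Relic has not published the internals of these models, so the following is only a toy illustration of the general shape of streaming anomaly detection: compare each incoming metric value against a rolling baseline and flag large deviations. The window size and threshold are arbitrary assumptions.

```python
# Toy illustration of real-time anomaly detection on a metric stream
# (not New Relic's actual models): flag points far from a rolling baseline.
from collections import deque
import statistics

class RollingAnomalyDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)   # recent metric values
        self.threshold = threshold            # allowed deviation, in standard deviations

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous relative to recent history."""
        anomalous = False
        if len(self.history) >= 10:            # wait for a minimal baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return anomalous
```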

The main idea here, New Relic’s director of product marketing Michael Olson told me, is to make it easier for companies of all sizes to reap the benefits of AI-enhanced ops.

“It’s been about a year since we introduced our first set of AIops capabilities with New Relic Applied Intelligence to the market,” he said. “During that time, we’ve seen significant growth in adoption of AIops capabilities through New Relic. But one of the things that we’ve heard from organizations that have yet to foray into adopting AIops capabilities as part of their incident response practice is that they often find that things like steep learning curves and long implementation and training times — and sometimes lack of confidence, or knowledge of AI and machine learning — often stand in the way.”

The new platform should be able to detect emerging problems in real time — without the team having to pre-configure alerts. And when it does so, it’ll smartly group all of the alerts from New Relic and other tools together to cut down on the alert noise and let engineers focus on the incident.

“Instead of an alert storm when a problem occurs across multiple tools, engineers get one actionable issue with alerts automatically grouped based on things like time and frequency, based on the context that they can read in the alert messages. And then now with this launch, we’re also able to look at relationship data across your systems to intelligently group and correlate alerts,” Olson explained.
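The correlation logic itself is proprietary; the sketch below only illustrates the simplest ingredient Olson mentions, grouping alerts that fire close together in time into a single issue. The Alert class and the two-minute window are illustrative assumptions, not New Relic's implementation.

```python
# Toy illustration (not New Relic's actual algorithm) of collapsing alerts that
# fire within a short window of each other into one "issue" to cut alert noise.
from dataclasses import dataclass
from typing import List

@dataclass
class Alert:
    source: str       # tool that raised the alert
    message: str
    timestamp: float  # seconds since epoch

def group_alerts(alerts: List[Alert], window: float = 120.0) -> List[List[Alert]]:
    """Group alerts whose timestamps fall within `window` seconds of each other."""
    issues: List[List[Alert]] = []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        if issues and alert.timestamp - issues[-1][-1].timestamp <= window:
            issues[-1].append(alert)   # likely the same incident
        else:
            issues.append([alert])     # start a new issue
    return issues
```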

Maybe the highlight for the ops teams that will use these new features, though, is New Relic’s ability to pinpoint the probable root cause of a problem. As Guy Fighel, the general manager of applied intelligence and vice president of product engineering at New Relic, told me, the idea here is not to replace humans but to augment teams.

“We provide a non-black-box experience for teams to craft the decisions and correlation and logic based on their own knowledge and infuse the system with their own knowledge,” Fighel noted. “So you can get very specific based on your environment and needs. And so because of that and because we see a lot of data coming from different tools — all going into New Relic One as the data platform — our probable root cause is very accurate. Having said that, it is still a probable root cause. So although we are opinionated about it, we will never tell you, ‘hey, go fix that, because we’re 100% sure that’s the case.’ You’re the human, you’re in control.”

The AI system also asks users for feedback, so that the model gets refined with every new incident, too.

Fighel tells me that New Relic’s tools rely on a variety of statistical analysis methods and machine learning models. Some of those are unique to individual users while others are used across the company’s user base. He also stressed that all of the engineers who worked on this project have a background in site reliability engineering — so they are intimately familiar with the problems in this space.

With today’s launch, New Relic is also adding a new integration with PagerDuty and other incident management tools so that the state of a given issue can be synchronized bi-directionally between them.

“We want to meet our customers where they are and really be data source agnostic and enable customers to pull in data from any source, where we can then enrich that data, reduce noise and ultimately help our customers solve problems faster,” said Olson.


Sep 23, 2020

WhyLabs brings more transparency to ML ops

WhyLabs, a new machine learning startup that was spun out of the Allen Institute, is coming out of stealth today. Founded by a group of former Amazon machine learning engineers, Alessya Visnjic, Sam Gracie and Andy Dang, together with Madrona Venture Group principal Maria Karaivanova, WhyLabs’ focus is on ML operations after models have been trained — not on building those models from the ground up.

The team also today announced that it has raised a $4 million seed funding round from Madrona Venture Group, Bezos Expeditions, Defy Partners and Ascend VC.

Visnjic, the company’s CEO, used to work on Amazon’s demand forecasting model.

“The team was all research scientists, and I was the only engineer who had kind of tier-one operating experience,” she told me. “So I thought, ‘Okay, how bad could it be? I carried the pager for the retail website before.’ But it was one of the first AI deployments that we’d done at Amazon at scale. The pager duty was extra fun because there were no real tools. So when things would go wrong — like we’d order way too many black socks out of the blue — it was a lot of manual effort to figure out why issues were happening.”

But while large companies like Amazon have built their own internal tools to help their data scientists and AI practitioners operate their AI systems, most enterprises continue to struggle with this — and a lot of AI projects simply fail and never make it into production. “We believe that one of the big reasons that happens is because of the operating process that remains super manual,” Visnjic said. “So at WhyLabs, we’re building the tools to address that — specifically to monitor and track data quality and alert — you can think of it as Datadog for AI applications.”

The team has broad ambitions, but to get started, it is focusing on observability. The team is building — and open-sourcing — a new tool for continuously logging what’s happening in the AI system, using a low-overhead agent. That platform-agnostic system, dubbed WhyLogs, is meant to help practitioners understand the data that moves through the AI/ML pipeline.

For a lot of businesses, Visnjic noted, the amount of data that flows through these systems is so large that it doesn’t make sense for them to keep “lots of big haystacks with possibly some needles in there for some investigation to come in the future.” So what they do instead is just discard all of this. With its data logging solution, WhyLabs aims to give these companies the tools to investigate their data and find issues right at the start of the pipeline.
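As a rough sketch of what such lightweight data logging amounts to (the concept only, not the actual WhyLogs API), one can keep compact per-column statistics for each batch instead of the raw records, and compare those profiles over time to spot drift or quality issues.

```python
# Sketch of lightweight data logging (conceptual, not the WhyLogs API): keep compact
# per-column statistics for every batch rather than retaining the raw records.
import pandas as pd

def profile_batch(df: pd.DataFrame) -> dict:
    """Summarize each column of a batch into a small statistical profile."""
    profile = {}
    for col in df.columns:
        series = df[col]
        stats = {
            "count": int(series.count()),
            "missing": int(series.isna().sum()),
            "approx_distinct": int(series.nunique()),
        }
        if pd.api.types.is_numeric_dtype(series):
            stats.update({
                "mean": float(series.mean()),
                "min": float(series.min()),
                "max": float(series.max()),
            })
        profile[col] = stats
    return profile

# Usage: log profile_batch(batch_df) for every batch flowing through the pipeline,
# then alert when today's profile drifts far from yesterday's.
```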

According to Karaivanova, the company doesn’t have paying customers yet, but it is working on a number of proofs of concepts. Among those users is Zulily, which is also a design partner for the company. The company is going after mid-size enterprises for the time being, but as Karaivanova noted, to hit the sweet spot for the company, a customer needs to have an established data science team with 10 to 15 ML practitioners. While the team is still figuring out its pricing model, it’ll likely be a volume-based approach, Karaivanova said.

“We love to invest in great founding teams who have built solutions at scale inside cutting-edge companies, who can then bring products to the broader market at the right time. The WhyLabs team are practitioners building for practitioners. They have intimate, first-hand knowledge of the challenges facing AI builders from their years at Amazon and are putting that experience and insight to work for their customers,” said Tim Porter, managing director at Madrona. “We couldn’t be more excited to invest in WhyLabs and partner with them to bring cross-platform model reliability and observability to this exploding category of MLOps.”

May 19, 2020

Microsoft launches Project Bonsai, its new machine teaching service for building autonomous systems

At its Build developer conference, Microsoft today announced that Project Bonsai, its new machine teaching service, is now in public preview.

If that name sounds familiar, it’s probably because you remember that Microsoft acquired Bonsai, a company that focuses on machine teaching, back in 2018. Bonsai combined simulation tools with different machine learning techniques to build a general-purpose deep reinforcement learning platform, with a focus on industrial control systems.

It’s maybe no surprise then that Project Bonsai, too, has a similar focus on helping businesses teach and manage their autonomous machines. “With Project Bonsai, subject-matter experts can add state-of-the-art intelligence to their most dynamic physical systems and processes without needing a background in AI,” the company notes in its press materials.

“The public preview of Project Bonsai builds on top of the Bonsai acquisition and the autonomous systems private preview announcements made at Build and Ignite of last year,” a Microsoft spokesperson told me.

Interestingly, Microsoft notes that Project Bonsai is only the first block of a larger vision to help its customers build these autonomous systems. The company also stresses the advantages of machine teaching over other machine learning approaches, especially the fact that it’s less of a black-box approach than other methods, which makes it easier for developers and engineers to debug systems that don’t work as expected.

In addition to Bonsai, Microsoft also today announced Project Moab, an open-source balancing robot that is meant to help engineers and developers learn the basics of how to build a real-world control system. The idea here is to teach the robot to keep a ball balanced on top of a platform that is held by three arms.

Potential users will be able to either 3D print the robot themselves or buy one when it goes on sale later this year. There is also a simulation, developed by MathWorks, that developers can try out immediately.

“You can very quickly take it into areas where doing it in traditional ways would not be easy, such as balancing an egg instead,” said Mark Hammond, Microsoft General Manager for Autonomous Systems. “The point of the Project Moab system is to provide that playground where engineers tackling various problems can learn how to use the tooling and simulation models. Once they understand the concepts, they can apply it to their novel use case.”
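Project Moab itself is meant to be trained with Bonsai's machine teaching tooling, but the control problem it poses is easy to sketch. Below is a toy one-dimensional version with deliberately simplified physics and hand-tuned gains; it has nothing to do with the MathWorks simulator and only shows the kind of balance-the-ball loop engineers would be experimenting with.

```python
# Toy 1-D version of the Moab balancing task (not the MathWorks simulator):
# a ball rolls on a tiltable plate; a PD controller picks the tilt that drives
# the ball back to the center. Physics is deliberately simplified.
import math

g = 9.81            # gravity, m/s^2
dt = 0.02           # control step, s
kp, kd = 4.0, 1.5   # PD gains, hand-tuned for this toy model

pos, vel = 0.10, 0.0   # ball starts 10 cm off-center

for step in range(200):
    # Controller: tilt the plate toward the center, damped by the ball's velocity.
    tilt = max(-0.3, min(0.3, -(kp * pos + kd * vel)))   # radians, clamped
    # Simplified dynamics: the ball accelerates down the slope of the tilted plate.
    acc = g * math.sin(tilt)
    vel += acc * dt
    pos += vel * dt

print(f"final position: {pos:.4f} m, final velocity: {vel:.4f} m/s")
```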

Nov 07, 2019

How Microsoft is trying to become more innovative

Microsoft Research is a globally distributed playground for people interested in solving fundamental science problems.

These projects often focus on machine learning and artificial intelligence, and since Microsoft is on a mission to infuse all of its products with more AI smarts, it’s no surprise that it’s also seeking ways to integrate Microsoft Research’s innovations into the rest of the company.

Across the board, the company is trying to find ways to become more innovative, especially around its work in AI, and it’s putting processes in place to do so. Microsoft is unusually open about this process, too, and actually made it somewhat of a focus this week at Ignite, a yearly conference that typically focuses more on technical IT management topics.

At Ignite, Microsoft will for the first time present these projects externally at a dedicated keynote. That feels similar to what Google used to do with its ATAP group at its I/O events and is obviously meant to showcase the cutting-edge innovation that happens inside of Microsoft (outside of making Excel smarter).

To manage its AI innovation efforts, Microsoft created the Microsoft AI group led by VP Mitra Azizirad, who’s tasked with establishing thought leadership in this space internally and externally, and helping the company itself innovate faster (Microsoft’s AI for Good projects also fall under this group’s purview). I sat down with Azizirad to get a better idea of what her team is doing and how she approaches getting companies to innovate around AI and bring research projects out of the lab.

“We began to put together a narrative for the company of what it really means to be in an AI-driven world and what we look at from a differentiated perspective,” Azizirad said. “What we’ve done in this area is something that has resonated and landed well. And now we’re including AI, but we’re expanding beyond it to other paradigm shifts like human-machine interaction, future of computing and digital responsibility, as more than just a set of principles and practices but an area of innovation in and of itself.”

Currently, Microsoft is doing a very good job at talking and thinking about horizon one opportunities, as well as horizon three projects that are still years out, she said. “Horizon two, we need to get better at, and that’s what we’re doing.”

It’s worth stressing that Microsoft AI, which launched about two years ago, marks the first time there’s a business, marketing and product management team associated with Microsoft Research, so the team does get a lot of insights into upcoming technologies. Just in the last couple of years, Microsoft has published more than 6,000 research papers on AI, some of which clearly have a future in the company’s products.

Jun 12, 2019

RealityEngines.AI raises $5.25M seed round to make ML easier for enterprises

RealityEngines.AI, a research startup that wants to help enterprises make better use of AI, even when they only have incomplete data, today announced that it has raised a $5.25 million seed funding round. The round was led by former Google CEO and Chairman Eric Schmidt and Google founding board member Ram Shriram. Khosla Ventures, Paul Buchheit, Deepchand Nishar, Elad Gil, Keval Desai, Don Burnette and others also participated in this round.

The fact that the service was able to raise from this rather prominent group of investors clearly shows that its overall thesis resonates. The company, which doesn’t have a product yet, tells me that it specifically wants to help enterprises make better use of the smaller and noisier data sets they have and provide them with state-of-the-art machine learning and AI systems that they can quickly take into production. It also aims to provide its customers with systems that can explain their predictions and are free of various forms of bias, something that’s hard to do when the system is essentially a black box.

As RealityEngines CEO Bindu Reddy, who was previously the head of products for Google Apps, told me, the company plans to use the funding to build out its research and development team. The company, after all, is tackling some of the most fundamental and hardest problems in machine learning right now — and that costs money. Some of these problems, like working with smaller data sets, already have available solutions, such as generative adversarial networks that can augment existing data sets, and RealityEngines expects to innovate on those.
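RealityEngines has not described its approach in detail, so the following is only a schematic of the GAN idea mentioned above: train a generator to produce rows a discriminator cannot distinguish from a small real dataset, then use the generator's output as synthetic augmentation. All shapes and hyperparameters here are arbitrary.

```python
# Schematic GAN training loop for augmenting a small tabular dataset
# (illustrative only; not RealityEngines' actual method).
import torch
import torch.nn as nn

n_features, noise_dim = 8, 16
real_data = torch.randn(500, n_features)   # stand-in for a small real dataset

G = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(), nn.Linear(64, n_features))
D = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Discriminator: learn to tell real rows from generated ones.
    fake = G(torch.randn(64, noise_dim)).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to fool the discriminator.
    fake = G(torch.randn(64, noise_dim))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, G(torch.randn(k, noise_dim)) yields synthetic rows for augmentation.
```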

Reddy is also betting on reinforcement learning as one of the core machine learning techniques for the platform.

Once it has its product in place, the plan is to make it available as a pay-as-you-go managed service that will make machine learning more accessible not just to large enterprises, but also to small and medium businesses, which increasingly need access to these tools to remain competitive.

May 02, 2019

Microsoft launches a drag-and-drop machine learning tool

Microsoft today announced three new services that all aim to simplify the process of machine learning. These range from a new interface for a tool that completely automates the process of creating models, to a new no-code visual interface for building, training and deploying models, all the way to hosted Jupyter-style notebooks for advanced users.

Getting started with machine learning is hard. Even running the most basic of experiments takes a good amount of expertise. All of these new tools greatly simplify this process by hiding away the code or giving those who want to write their own code a pre-configured platform for doing so.

The new interface for Azure’s automated machine learning tool makes creating a model as easy as importing a data set and then telling the service which value to predict. Users don’t need to write a single line of code, while in the backend, this updated version now supports a number of new algorithms and optimizations that should result in more accurate models. While most of this is automated, Microsoft stresses that the service provides “complete transparency into algorithms, so developers and data scientists can manually override and control the process.”
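Azure's service is of course far more sophisticated, but the core of "import a data set, pick the value to predict, get a model" can be sketched as a small model search. The candidate list and scoring below are an illustrative stand-in, not Microsoft's implementation, and assume a clean, numeric feature table.

```python
# Sketch of what automated model selection does at its core (a simplified stand-in,
# not Azure's AutoML): given a table and the column to predict, try several
# algorithms and keep the best cross-validated one.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def auto_train(df: pd.DataFrame, target: str):
    """Return the best-performing fitted model and its mean cross-validation score."""
    X, y = df.drop(columns=[target]), df[target]   # assumes numeric, non-missing features
    candidates = [
        LogisticRegression(max_iter=1000),
        RandomForestClassifier(n_estimators=200),
        GradientBoostingClassifier(),
    ]
    scored = [(cross_val_score(m, X, y, cv=5).mean(), m) for m in candidates]
    best_score, best_model = max(scored, key=lambda t: t[0])
    return best_model.fit(X, y), best_score
```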

For those who want a bit more control from the get-go, Microsoft also today launched a visual interface for its Azure Machine Learning service into preview that will allow developers to build, train and deploy machine learning models without having to touch any code.

This tool, the Azure Machine Learning visual interface, looks suspiciously like the existing Azure ML Studio, Microsoft’s first stab at building a visual machine learning tool. Indeed, the two services look identical. The company never really pushed this service, though, and almost seemed to have forgotten about it despite the fact that it always seemed like a really useful tool for getting started with machine learning.

Microsoft says that this new version combines the best of Azure ML Studio with the Azure Machine Learning service. In practice, this means that while the interface is almost identical, the Azure Machine Learning visual interface extends what was possible with ML Studio by running on top of the Azure Machine Learning service and adding that service’s security, deployment and lifecycle management capabilities.

The service provides an easy interface for cleaning up your data, training models with the help of different algorithms, evaluating them and, finally, putting them into production.

While these first two services clearly target novices, the new hosted notebooks in Azure Machine Learning are clearly geared toward the more experienced machine learning practitioner. The notebooks come pre-packaged with support for the Azure Machine Learning Python SDK and run in what the company describes as a “secure, enterprise-ready environment.” While using these notebooks isn’t trivial either, this new feature allows developers to quickly get started without the hassle of setting up a new development environment with all the necessary cloud resources.

Jul 06, 2017

H2O.ai’s Driverless AI automates machine learning for businesses

Driverless AI is the latest product from H2O.ai aimed at lowering the barrier to making data science work in a corporate context. The tool assists non-technical employees with preparing data, calibrating parameters and determining the optimal algorithms for tackling specific business problems with machine learning.
