Jan 17, 2019

Former Facebook engineer picks up $15M for AI platform Spell

In 2016, Serkan Piantino packed up his desk at Facebook with hopes of moving on to something new. The former director of Engineering for Facebook AI Research had every intention of continuing to work on AI, but he quickly ran into a huge problem.

Unless you’re under the umbrella of a big tech company like Facebook, it can be very difficult and incredibly expensive to get your hands on the hardware necessary to run machine learning experiments.

So he built Spell, which today received $15 million in Series A funding led by Eclipse Ventures and Two Sigma Ventures.

Spell is a collaborative platform that lets anyone run machine learning experiments. The company connects clients with the best, newest hardware hosted by Google, AWS and Microsoft Azure and gives them the software interface they need to run, collaborate and build with AI.

“We spent decades getting to a laptop powerful enough to develop a mobile app or a website, but we’re struggling with things we develop in AI that we haven’t struggled with since the ’70s,” said Piantino. “Before PCs existed, the computers filled the whole room at a university or NASA and people used terminals to log into a single mainframe. It’s why Unix was invented, and that’s kind of what AI needs right now.”

In a meeting with Piantino this week, TechCrunch got a peek at the product. First, Piantino pulled out his MacBook and opened up Terminal. He began to run his own code against MNIST, a database of handwritten digits commonly used to train and benchmark image classification models.

He started the program locally and then moved over to the Spell platform. While the local run was still getting started, Spell’s cloud computing platform had already completed the same test in less than a minute.
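For a sense of what that demo involves: a minimal MNIST training script of the sort Piantino might have run looks roughly like the Keras sketch below (an illustration, not his actual code).

# mnist_train.py -- minimal MNIST digit classifier (illustrative sketch)
import tensorflow as tf

# Load the MNIST handwritten digits and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network is plenty for MNIST
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

Run locally, a script like this grinds away on a laptop CPU; on Spell, the same file is dispatched to a cloud GPU through the company’s CLI (something along the lines of “spell run python mnist_train.py” — the exact syntax is an assumption, not taken from the demo).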

The advantage here is obvious. Engineers who want to work on AI, either on their own or for a company, have a huge task in front of them. They essentially have to build their own computer, complete with the high-powered GPUs necessary to run their tests.

With Spell, the newest chips from Nvidia and Google are available virtually, for anyone to run their tests on.

Individual users can get on for free, specify the type of GPU they need to compute their experiment and simply let it run. Corporate users, on the other hand, are able to view the runs taking place on Spell and compare experiments, allowing users to collaborate on their projects from within the platform.

Enterprise clients can set up their own cluster, and keep all of their programs private on the Spell platform, rather than running tests on the public cluster.

Spell also offers enterprise customers a “spell hyper” command with built-in support for hyperparameter optimization. Folks can track their models and results and deploy them to Kubernetes/Kubeflow in a single click.
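For readers unfamiliar with the term, hyperparameter optimization just means systematically sweeping over training settings, such as learning rate and batch size, and keeping the best-scoring combination. Conceptually it looks like the loop below (a generic Python grid-search sketch, not Spell’s actual “spell hyper” interface; train_and_evaluate is a stand-in for a real training routine).

# Generic grid search over hyperparameters -- the kind of sweep a tool
# like "spell hyper" automates and tracks (not Spell's actual interface)
from itertools import product
import random

def train_and_evaluate(lr, batch_size):
    # Stand-in for a real training run; returns a fake validation score.
    return random.random()

learning_rates = [0.1, 0.01, 0.001]
batch_sizes = [32, 64, 128]

results = {}
for lr, batch in product(learning_rates, batch_sizes):
    # On a platform like Spell, each combination would run as its own
    # tracked experiment, potentially on its own GPU
    results[(lr, batch)] = train_and_evaluate(lr, batch)

best = max(results, key=results.get)
print("best settings:", best, "score:", results[best])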

But perhaps most importantly, Spell allows an organization to instantly transform their model into an API that can be used more broadly throughout the organization, or used directly within an app or website.
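Spell generates that API automatically, but the underlying idea is simple: wrap the trained model in a small web server. Here is a minimal sketch, assuming a saved Keras MNIST model and using Flask (a generic illustration, not the code Spell produces).

# Serve a trained model as an HTTP prediction API (generic sketch)
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
model = tf.keras.models.load_model("mnist_model.h5")  # hypothetical saved model

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"pixels": <28x28 array of floats>}
    pixels = np.asarray(request.get_json()["pixels"], dtype="float32")
    probs = model.predict(pixels.reshape(1, 28, 28))[0]
    return jsonify({"digit": int(probs.argmax()),
                    "confidence": float(probs.max())})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

Any app or internal tool can then POST pixel data to /predict and get a digit back, which is the sense in which a model becomes usable more broadly throughout an organization.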

The implications here are huge. Small companies and startups looking to get into AI now have a much lower barrier to entry, whereas large traditional companies can build out their own proprietary machine learning algorithms for use within the organization without an outrageous upfront investment.

Individual users can get on the platform for free, while enterprise clients pay $99 per month for each host they use over the course of the month. Piantino explains that Spell charges based on concurrent usage: if a customer has 10 runs going at once, the company considers that the “size” of the Spell cluster and bills accordingly, i.e. $990 for that month.

Piantino sees Spell’s model as the key to defensibility. Whereas many cloud platforms try to lock customers into their entire suite of products, Spell works with any language or framework and lets users plug and play on the platforms of their choice, treating the underlying hardware as a commodity. In fact, Spell doesn’t even share with clients which cloud cluster (Microsoft Azure, Google or AWS) they’re on.

So on the one hand, tests run faster thanks to access to newer hardware; on the other, because Spell is platform-agnostic, users can get set up and start working much more quickly.

The company plans to use the funding to further grow the team and the product, and Piantino says he has his eye out for top-tier engineering talent, as well as a designer.

Jan 16, 2019

Nvidia’s T4 GPUs are now available in beta on Google Cloud

Google Cloud today announced that Nvidia’s Turing-based Tesla T4 data center GPUs are now available in beta in its data centers in Brazil, India, the Netherlands, Singapore, Tokyo and the United States. Google first announced a private test of these cards in November, but that was a very limited alpha. All developers can now take the new T4 GPUs for a spin through Google’s Compute Engine service.

The T4, which essentially uses the same processor architecture as Nvidia’s consumer RTX cards, slots in between the existing Nvidia V100 and P4 GPUs on the Google Cloud Platform. While the V100 is optimized for machine learning, though, the T4 (like its P4 predecessor) is more of a general-purpose GPU that also turns out to be great for training models and running inference.

In terms of machine and deep learning performance, the 16GB T4 is significantly slower than the V100, though if you are mostly running inference on the cards, you may actually see a speed boost. Unsurprisingly, using the T4 is also cheaper than the V100, starting at $0.95 per hour compared to $2.48 per hour for the V100, with another discount for using preemptible VMs and Google’s usual sustained use discounts.

Google says that the card’s 16GB of memory should easily handle large machine learning models, as well as allow users to run multiple smaller models at the same time. The standard PCI Express 3.0 card also comes with support for Nvidia’s Tensor Cores to accelerate deep learning, plus Nvidia’s new RTX ray-tracing cores. Performance tops out at 260 TOPS, and developers can connect up to four T4 GPUs to a single virtual machine.
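Once a Compute Engine VM with attached T4s is running (with the NVIDIA driver and CUDA installed), confirming that a framework actually sees the cards takes a few lines. Assuming TensorFlow 1.x, for instance:

# Sanity-check GPU visibility on a Compute Engine VM with T4s attached
# (assumes TensorFlow 1.x plus an installed NVIDIA driver and CUDA)
import tensorflow as tf

print("GPU available:", tf.test.is_gpu_available())

# Pin a large matrix multiplication to the first GPU
with tf.device("/GPU:0"):
    a = tf.random_normal([4096, 4096])
    b = tf.random_normal([4096, 4096])
    c = tf.matmul(a, b)

with tf.Session() as sess:
    sess.run(c)
    print("matmul ran on the GPU")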

It’s worth stressing that this is also the first GPU in the Google Cloud lineup that supports Nvidia’s ray-tracing technology. There isn’t a lot of software on the market yet that actually makes use of this technique, which allows you to render more lifelike images in real time, but if you need a virtual workstation with a powerful next-generation graphics card, that’s now an option.

With today’s beta launch of the T4, Google Cloud now offers quite a variety of Nvidia GPUs, including the K80, P4, P100 and V100, all at different price points and with different performance characteristics.

Jan 16, 2019

HyperScience, the machine learning startup tackling data entry, raises $30 million Series B

HyperScience, the machine learning company that turns human readable data into machine readable data, has today announced the close of a $30 million Series B funding round led by Stripes Group, with participation from existing investors FirstMark Capital and Felicis Ventures, as well as new investors Battery Ventures, Global Founders Capital, TD Ameritrade and QBE.

HyperScience launched out of stealth in 2016 with a suite of enterprise products focused on the healthcare, insurance, finance and government industries. The original products were HSForms (which handled data entry by converting handwritten forms to digital), HSFreeForm (which did the same for handwritten emails and other non-form content) and HSEvaluate (which could parse complex data on a form to help insurance companies approve or deny claims by pulling out all the relevant info).

Now, the company has combined all three of those products into a single product called HyperScience. The product is meant to help companies and organizations reduce their data-entry backlog and better serve their customers, saving money and resources.

The idea is that many of the forms we use in life or in the workplace are in an arbitrary format. My bank statements don’t look the same as your bank statements, and invoices from your company might look different than invoices from my company.

HyperScience is able to take those forms and pipe them into the system quickly and easily, without help from humans.

Instead of charging by seat, HyperScience charges by document, since the whole point of using HyperScience is that fewer humans are actually “using” the product.

The latest round brings HyperScience’s total funding to $50 million, and the company plans to use a good deal of that funding to grow the team.

“We have a product that works and a phenomenally good product market fit,” said CEO Peter Brodsky. “What will determine our success is our ability to build and scale the team.”

Dec 8, 2018

Why you need a supercomputer to build a house

When the hell did building a house become so complicated?

Don’t let the folks on HGTV fool you. The process of building a home nowadays is incredibly painful. Just applying for the necessary permits can be a soul-crushing undertaking that’ll have you running around the city, filling out useless forms, and waiting in motionless lines under fluorescent lights at City Hall wondering whether you should have just moved back in with your parents.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: @Arman.Tabatabai@techcrunch.com.

And to actually get approval for those permits, your future home will have to satisfy a tangle of complex and conflicting federal, state and city building codes, separate sets of fire and energy requirements, and quasi-legal construction standards set by various independent agencies.

It wasn’t always this hard – remember when you’d hear people say “my grandparents built this house with their bare hands?” These proliferating rules have been among the main causes of the rapidly rising cost of housing in America and other developed nations. The good news is that a new generation of startups is identifying and simplifying these thickets of rules, and the future of housing may be determined as much by machine learning as by woodworking.

When directions become deterrents


Cities once solely created the building codes that dictate the requirements for almost every aspect of a building’s design, structuring those guidelines around local terrain, climate and risks. Over time, townships, states, federally recognized organizations and independent groups that sprouted from the insurance industry created their own “model” building codes as well.

The complexity starts here. The federal codes and independent agency standards are optional for states, which have their own codes that are optional for cities, which in turn have codes that are often inconsistent with the state’s and optional for individual townships. Thus, local building codes are ever-changing, constantly swelling mutant books made up of whichever aspects of these different codes local governments choose to mix together. For instance, New York City’s building code is made up of five sections, 76 chapters and 35 appendices, alongside a separate set of 67 updates. (The 2014 edition is available as a book for $155, and it makes a great gift for someone you never want to talk to again.)

In short: what a shit show.

Because of the hyper-localized and overlapping nature of building codes, a home in one location can be subject to a completely different set of requirements than one elsewhere. So it’s really freaking difficult to even understand what you’re allowed to build, the conditions you need to satisfy, and how to best meet those conditions.

There are certain levels of complexity in housing codes that are hard to avoid. The structural integrity of a home depends on everything from wall placement to erosion and wind flow. There are countless types of materials and technologies used in buildings, all of which are constantly evolving.

Thus, the thousand-page codebooks from the various federal, state, city, township and independent agencies, all dictating interconnected, location- and structure-dependent requirements, lead to an incredibly expansive decision tree, one that requires an endless set of simulations to fully understand all the options you have to reach compliance, and their respective cost-effectiveness and efficiency.
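To put rough, purely illustrative numbers on that: a design with just 20 interdependent choices, each offering three code-compliant options, already yields 3^20, or roughly 3.5 billion, possible paths through the rules. No human is exhaustively checking those by hand.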

So homebuilders are often forced to turn to costly consultants or settle on designs that satisfy code but aren’t cost-efficient. And if construction issues cause you to fall short of the outcomes you expected, you could face hefty fines, delays or gigantic cost overruns from redesigns and rebuilds. All these costs flow through the lifecycle of a building, ultimately impacting affordability and access for homeowners and renters.

Startups are helping people crack the code


Strap on your hard hat – there may be hope for your dream home after all.

The friction, inefficiencies, and pure agony caused by our increasingly convoluted building codes have given rise to a growing set of companies that are helping people make sense of the home-building process by incorporating regulations directly into their software.

Using machine learning, their platforms run advanced scenario analysis over interweaving building codes and interdependent structural variables, letting users produce compliant designs and make regulation-informed decisions without ever having to wade into the regulations themselves.

For example, the prefab housing startup Cover is helping people figure out what kind of backyard homes they can design and build on their properties based on local zoning and permitting regulations.

Some startups are trying to provide similar services to developers of larger scale buildings as well. Just this past week, I covered the seed round for a startup called Cove.Tool, which analyzes local building energy codes – based on location and project-level characteristics specified by the developer – and spits out the most cost-effective and energy-efficient resource mix that can be built to hit local energy requirements.

And startups aren’t just simplifying the regulatory pains of the housing process through building codes. Envelope is helping developers make sense of our equally tortuous zoning codes, while Cover and companies like Camino are helping steer home and business-owners through arduous and analog permitting processes.

Look, I’m not saying codes are bad. In fact, I think building codes are good and necessary – no one wants to live in a home that might cave in on itself the next time it snows. But I still can’t help asking myself: why the hell does it take AI to figure out how to build a house? Why do we have building codes that take a supercomputer to untangle?

Ultimately, it would probably help to have more standardized building codes that we actually clean up from time to time. More regional standardization would greatly reduce the number of conditional branches. And if there were one set of accepted overarching codes that still set precise requirements for every component of a building, there would be only a single path of regulations to follow, greatly reducing the knowledge and analysis needed to build a home efficiently.

But housing’s inherent ties to geography make standardization unlikely. Each region has different land conditions, climates, priorities and political motivations that cause governments to want their own set of rules.

Instead, governments seem fine with sidestepping the issues caused by hyper-regional building codes and leaving it to startups to help people wade through the ridiculousness that paves the home-building process, much the way Concur helps employees navigate infuriating corporate expense policies.

For now, we can count on startups that are unlocking value and making housing more accessible, simpler and cheaper just by making the rules easier to understand. And maybe one day my grandkids can tell their friends how their grandpa built his house with his own supercomputer.


Nov 28, 2018

Amazon Textract brings intelligence to OCR

One of the challenges just about every business faces is converting forms into a useful digital format. That has typically meant human data-entry clerks typing the data into a computer. The state of the art has been to read forms automatically with OCR, but AWS CEO Andy Jassy explained that OCR is basically just a dumb text reader: it doesn’t recognize text types. Amazon wanted to change that, and today it announced Amazon Textract, an intelligent OCR tool for moving data from forms into a more usable digital format.

As an example, he showed a form containing tables. Regular OCR didn’t recognize the table and interpreted it as a string of text. Textract is designed to recognize common page elements like tables and pull out the data in a sensible way.

Jassy said that forms also often change, and if you are using a template as a workaround for OCR’s lack of intelligence, the template breaks if you move anything. To fix that, Textract is smart enough to understand common data types like Social Security numbers, dates of birth and addresses, and interprets them correctly no matter where they fall on the page.

“We have taught Textract to recognize this set of characters is a date of birth and this is a Social Security number. If forms change, Textract won’t miss it,” Jassy explained.
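Textract is exposed like any other AWS service, so once it’s enabled on an account, calling it from code should look roughly like the boto3 sketch below (the service was just announced, so take the details as indicative rather than final).

# Rough sketch of analyzing a scanned form with Amazon Textract via boto3
import boto3

textract = boto3.client("textract")

with open("claim_form.png", "rb") as f:  # hypothetical scanned form
    document_bytes = f.read()

# Ask for structured elements (tables and key-value form fields),
# not just raw text
response = textract.analyze_document(
    Document={"Bytes": document_bytes},
    FeatureTypes=["TABLES", "FORMS"],
)

# Results come back as typed "blocks": LINE and WORD blocks carry text,
# while TABLE, CELL and KEY_VALUE_SET blocks carry the page structure
for block in response["Blocks"]:
    print(block["BlockType"], block.get("Text", ""))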


Nov 28, 2018

AWS announces new Inferentia machine learning chip

AWS is not content to cede any part of any market to any company. When it comes to machine learning chips, names like Nvidia or Google come to mind, but today at AWS re:Invent in Las Vegas, the company announced a new dedicated machine learning chip of its own called Inferentia.

“Inferentia will be a very high-throughput, low-latency, sustained-performance, very cost-effective processor,” AWS CEO Andy Jassy explained during the announcement.

Holger Mueller, an analyst with Constellation Research, says that while Amazon is far behind, this is a good step for them as companies try to differentiate their machine learning approaches in the future.

“The speed and cost of running machine learning operations — ideally in deep learning — are a competitive differentiator for enterprises. Speed advantages will make or break success of enterprises (and nations when you think of warfare). That speed can only be achieved with custom hardware, and Inferentia is AWS’s first step to get into this game,” Mueller told TechCrunch. As he pointed out, Google has a two- to three-year head start with its TPU infrastructure.

Inferentia supports the popular data formats INT8 and FP16, along with mixed-precision workloads. What’s more, it supports multiple machine learning frameworks, including TensorFlow and Caffe2, as well as the ONNX model-interchange format.
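The chip itself isn’t out yet and AWS hasn’t published its toolchain, but ONNX support suggests one plausible on-ramp: export an existing model to the ONNX interchange format and hand that to whatever compiler AWS ships. Exporting from PyTorch, for example, is already a one-liner (a generic export sketch, nothing Inferentia-specific):

# Export a PyTorch model to ONNX, one of the formats Inferentia is
# slated to support (generic sketch; AWS's toolchain is not yet public)
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image
torch.onnx.export(model, dummy_input, "resnet18.onnx")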

Of course, being an Amazon product, it also plugs into popular AWS services such as EC2 and SageMaker, as well as the new Elastic Inference service announced today.

While the chip was announced today, AWS CEO Andy Jassy indicated it won’t actually be available until next year.


Nov 13, 2018

Cognigo raises $8.5M for its AI-driven data protection platform

Cognigo, a startup that aims to use AI and machine learning to help enterprises protect their data and stay in compliance with regulations like GDPR, today announced that it has raised an $8.5 million Series A round. The round was led by Israel-based crowdfunding platform OurCrowd, with participation from security company Prosegur and State of Mind Ventures.

The company promises that it can help businesses protect their critical data assets and prevent personally identifiable information from leaking outside of the company’s network. And it says it can do so without the kind of hands-on management that’s often required in setting up these kinds of systems and managing them over time. Indeed, Cognigo says that it can help businesses achieve GDPR compliance in days instead of months.

To do this, the company tells me, it’s using pre-trained language models for data classification. The model has been trained to detect common categories like payslips, patents, NDAs and contracts. Organizations can also provide their own data samples to further train the model and customize it for their own needs. “The only human intervention required is during the system’s configuration process, which would take no longer than a single day’s work,” a company spokesperson told me. “Apart from that, the system is completely human-free.”
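Cognigo hasn’t published its API, but the underlying pattern, a classifier pre-trained on common document categories and then tuned on a customer’s own samples, is a standard one. A deliberately simple scikit-learn sketch of the idea (illustrative only, not Cognigo’s system):

# Toy document-category classifier in the spirit Cognigo describes
# (generic scikit-learn illustration, not Cognigo's actual models)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A few labeled snippets standing in for an organization's own samples
documents = [
    "Employee payslip for March: gross pay, net pay, deductions",
    "This non-disclosure agreement is entered into by and between",
    "Patent application: claims, prior art, embodiments of the invention",
    "This services contract is made effective as of the date below",
]
labels = ["payslip", "nda", "patent", "contract"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(documents, labels)

print(classifier.predict(["mutual non-disclosure between the parties"]))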

The company tells me that it plans to use the new funding to expand its R&D, marketing and sales teams, all with the goal of expanding its market presence and enhancing awareness of its product. “Our vision is to ensure our customers can use their data to make smart business decisions while making sure that the data is continuously protected and in compliance,” the company tells me.

Nov 8, 2018

Google Cloud wants to make it easier for data scientists to share models

Today, Google Cloud announced Kubeflow Pipelines and AI Hub, two tools designed to help data scientists put the models they create to work across their organizations.

Rajen Sheth, director of product management for Google Cloud’s AI and ML products, says that the company recognized that data scientists too often build models that never get used. He says that if machine learning is really a team sport, as Google believes, models must get passed from data scientists to data engineers and developers who can build applications based on them.

To help fix that, Google is announcing Kubeflow Pipelines, an extension of Kubeflow, an open-source framework built on top of Kubernetes and designed specifically for machine learning. Pipelines are essentially containerized building blocks that people in the machine learning ecosystem can string together to build and manage machine learning workflows.

By placing the model in a container, data scientists can simply adjust the underlying model as needed and relaunch it, continuous-delivery style. Sheth says this opens up even more possibilities for model usage inside a company.

“[Kubeflow pipelines] also give users a way to experiment with different pipeline variants to identify which ones produce the best outcomes in a reliable and reproducible environment,” Sheth wrote in a blog post announcing the new machine learning features.
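Concretely, a pipeline is defined in Python with the kfp SDK: each step is a container image, and the steps are wired together by their inputs and outputs. A minimal sketch (the image names and scripts here are hypothetical placeholders, not Google’s examples):

# Minimal two-step Kubeflow pipeline with the kfp SDK
# (image names and commands are hypothetical placeholders)
import kfp.dsl as dsl
import kfp.compiler as compiler

@dsl.pipeline(name="train-and-deploy", description="Toy two-step ML workflow")
def train_pipeline():
    train = dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/trainer:latest",
        command=["python", "train.py"],
        file_outputs={"model": "/output/model_path.txt"},
    )
    # Consuming train's output wires the dependency between the steps
    dsl.ContainerOp(
        name="deploy",
        image="gcr.io/my-project/deployer:latest",
        command=["python", "deploy.py"],
        arguments=["--model", train.outputs["model"]],
    )

# Compile to an archive that can be uploaded to the Kubeflow Pipelines UI
compiler.Compiler().compile(train_pipeline, "train_pipeline.tar.gz")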

The company is also announcing AI Hub, which, as the name implies, is a central place where data scientists can go to find different kinds of ML content, including Kubeflow pipelines, Jupyter notebooks, TensorFlow modules and so forth. This will be a public repository seeded with resources developed by Google Cloud AI, Google Research and other teams across Google, allowing data scientists to take advantage of Google’s own research and development expertise.

But Google wanted the hub to be more than a public library — it also sees it as a place where teams can share information privately inside their organizations, giving it a dual purpose. This should provide another way to extend model usage by making essential building blocks available in a central repository.

AI Hub will be available in Alpha starting today with some initial components from Google, as well as tools for sharing some internal resources, but the plan is to keep expanding the offerings and capabilities over time.

Google believes that the easier it is to share model building blocks across an organization, the more likely they are to be put to work. These tools are a step toward that.

Oct 15, 2018

Celonis brings intelligent process automation software to the cloud

Celonis has been helping companies analyze and improve their internal processes using machine learning. Today the company announced that it is providing that same solution as a cloud service, with a few nifty improvements you won’t find on-prem.

The new approach, called Celonis Intelligent Business Cloud, allows customers to analyze a workflow, find inefficiencies and offer improvements very quickly. Companies typically follow a workflow that has developed over time and very rarely think about why it developed the way it did, or how to fix it. If they do, it usually involves bringing in consultants to help. Celonis puts software and machine learning to bear on the problem.

Co-founder and CEO Alexander Rinke says that his company deals with massive volumes of data and moving all of that to the cloud makes sense. “With Intelligent Business Cloud, we will unlock that [on prem data], bring it to the cloud in a very efficient infrastructure and provide much more value on top of it,” he told TechCrunch.

The idea is to speed up the whole ingestion process, allowing a company to see the inefficiencies in their business processes very quickly. Rinke says it starts with ingesting data from sources such as Salesforce or SAP and then creating a visual view of the process flow. There may be hundreds of variants from the main process workflow, but you can see which ones would give you the most value to change, based on the number of times the variation occurs.
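The core mechanic, stripped of Celonis’s scale and tooling, is easy to picture: order each case’s events in time, join them into a path, and count how often each path occurs. A pandas sketch of that step (a generic illustration with toy data, not Celonis code):

# Reconstruct process variants from an event log and count them
# (toy pandas illustration of the core process-mining step)
import pandas as pd

events = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
    "activity": ["order", "ship", "invoice",
                 "order", "credit check", "ship", "invoice",
                 "order", "ship", "invoice"],
    "timestamp": pd.date_range("2018-10-01", periods=10, freq="H"),
})

# Order each case's events in time, then join them into a variant string
variants = (events.sort_values("timestamp")
                  .groupby("case_id")["activity"]
                  .apply(" -> ".join))

# Frequent variants are the main process; rare ones are the deviations
# worth investigating first
print(variants.value_counts())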


By packaging the Celonis tools as a cloud service, the company is reducing the complexity of running and managing them. It is also introducing an app store with over 300 pre-packaged options for popular products like Salesforce and ServiceNow, and popular processes like order-to-cash. This should help get customers up and running much more quickly.


The cloud service also includes an Action Engine, which Rinke describes as a big step toward moving Celonis from purely analytical to operational. “Action Engine focuses on changing and improving processes. It gives workers concrete info on what to do next. For example, in process analysis it would notice on-time delivery isn’t great because order-to-cash is too slow. It helps accelerate changes in system configuration,” he explained.


The new cloud service is available today. Celonis was founded in 2011. It has raised over $77 million. The most recent round was a $50 million Series B on a valuation over $1 billion.

Oct 9, 2018

After its acquisition, Magento starts integrating Adobe’s personalization and analytics tools

It’s been less than six months since Adobe acquired commerce platform Magento for $1.68 billion and today, at Magento’s annual conference, the company announced the first set of integrations that bring the analytics and personalization features of Adobe’s Experience Cloud to Magento’s Commerce Cloud.

In many ways, the acquisition of Magento helps Adobe close the loop in its marketing story by giving its customers a full spectrum of services, from analytics, marketing and customer acquisition all the way to closing the transaction. It’s no surprise, then, that the Experience Cloud and Commerce Cloud are growing closer together to, in Adobe’s words, “make every experience shoppable.”

“From the time that this company started to today, our focus has been pretty much exactly the same,” Adobe’s SVP of Strategic Marketing Aseem Chandra told me. “This is, how do we deliver better experiences across any channel in which our customers are interacting with a brand? If you think about the way that customers interact today, every experience is valuable and important. […] It’s no longer just about the product, it’s more about the experience that we deliver around that product that really counts.”

So with these new integrations, Magento Commerce Cloud users will get access to Adobe Target, for example, the company’s machine learning-based tool for personalizing shopping experiences. Similarly, they’ll get easy access to predictive analytics from Adobe Analytics to analyze their customers’ data and predict future churn and purchasing behavior, among other things.

These kinds of AI/ML capabilities were something Magento had long been thinking about, Magento’s former CEO and new Adobe SVP of Commerce Mark Lavelle told me, but it took the acquisition by Adobe to really be able to push ahead with them. “Where the world’s going for Magento clients — and really for all of Adobe’s clients — is you can’t do this yourself,” he said. “You need to be associated with a platform that has not just technology and feature functionality, but actually has this living and breathing data environment that learns and delivers intelligence back into the product so that your job is easier. That’s what Amazon and Google and all of the big companies that we’re all increasingly competing against or cooperating with have. They have that type of scale.” He also noted that at least part of this match-up of Adobe and Magento is to give their clients that kind of scale, even if they are small- or medium-sized merchants.

The other new Adobe-powered feature that’s now available is an integration with the Adobe Experience Manager. That’s Adobe’s content management tool that itself integrates many of these AI technologies for building personalized mobile and web content and shopping experiences.

“The goal here is really in unifying that profile, where we have a lot of behavioral information about our consumers,” said Chandra. “And what Magento allows us to do is bring in the transactional information and put those together so we get a much richer view of who the consumers are and how we personalize that experience with the next interaction that they have with a Magento-based commerce site.”

It’s worth noting that Magento is also launching a number of other new features for its Commerce Cloud, including a new drag-and-drop editing tool for site content, support for building Progressive Web Applications, a streamlined payment tool with improved risk-management capabilities, and a new integration with the Amazon Sales Channel so Magento stores can sync their inventory with Amazon’s platform. Magento is also announcing integrations with Google’s Merchant Center and Advertising Channels for Google Smart Shopping Campaigns.
