Aug 12, 2019

Polarity raises $8.1M for its AI software that constantly analyzes employee screens and highlights key info

Reference docs and spreadsheets seemingly make the world go ’round, but what if employees could just close those tabs for good without losing that knowledge?

One startup is taking on that complicated challenge. Predictably, the solution is technically complicated as well: AI software that analyzes everything on your PC screen — all the time — and highlights onscreen text for which you could use a little more context. The team at Polarity wants its tech to help teams lower the knowledge barrier to getting stuff done and let people focus more on procedure and strategy than on memorizing file numbers, IP addresses and jargon.

The Connecticut startup just closed an $8.1 million “AA” round led by TechOperators, with Shasta Ventures, Strategic Cyber Ventures, Gula Tech Adventures and Kaiser Permanente Ventures also participating in the round. The startup closed its $3.5 million Series A in early 2017.

Interestingly, the enterprise-centric startup pitches itself as an AR company, augmenting what’s happening on your laptop screen much like a pair of AR glasses could.

The startup’s computer vision software uses character recognition to analyze what’s on a user’s screen. That can help enterprise teams import something like a company Rolodex so that bios are always collectively a click away, but the real utility comes from team-wide flagging of things like suspicious IP addresses, which lets entire teams learn about new threats and issues at the same time without constantly checking in with co-workers. The current product is focused heavily on analysts and security teams.

[Image: Polarity, before and after, via Polarity]

Using character recognition to analyze a screen for specific keywords is useful in itself, but that’s also largely a solved computer vision problem.

Polarity’s big advance has been getting these processes to occur consistently on-device without crushing a device’s CPU. CEO Paul Battista says that for the average customer, Polarity’s software generally eats up about 3-6% of their computer’s processing power, though it can spike much higher if the system is getting fed a ton of new information at once.

“We spent years building the tech to accomplish [efficiency], readjusting how people think of [optical character recognition] and then doing it in real time,” Battista tells me. “The more data that you have onscreen, the more power you use. So it does use a significant percentage of the CPU.”

Why bother with all of this AI trickery and CPU efficiency when you could pull off this functionality in certain apps with an API? The whole point is that it doesn’t matter whether you’re working in Chrome or Excel or pulling up a scanned document: the software analyzes what’s actually being rendered onscreen, not what the individual app is communicating.
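
To make that concrete, here is a minimal sketch of the general technique (screen capture, OCR, keyword flagging). It is not Polarity's actual implementation; the mss and pytesseract Python packages and the watchlist entries are stand-ins of my own.

```python
# A minimal sketch of screen-wide OCR with keyword flagging -- the general
# technique described above, not Polarity's implementation. Assumes the
# third-party mss (screen capture) and pytesseract (Tesseract OCR) packages.
import time

import mss
import pytesseract
from PIL import Image

# Hypothetical team-shared watchlist: flagged term -> context to surface.
WATCHLIST = {
    "203.0.113.7": "flagged by the SOC: suspected command-and-control server",
    "INV-20419": "Q2 invoice batch; see the finance runbook",
}


def scan_screen_once():
    """Capture the primary display, OCR it, and return any watchlist hits."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])  # monitors[1] is the primary display
        img = Image.frombytes("RGB", shot.size, shot.rgb)
    text = pytesseract.image_to_string(img)
    return {term: ctx for term, ctx in WATCHLIST.items() if term in text}


if __name__ == "__main__":
    while True:
        for term, ctx in scan_screen_once().items():
            print(f"overlay: {term} -> {ctx}")
        time.sleep(5)  # throttling the loop is one simple way to cap CPU usage
```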

When it comes to a piece of software analyzing everything on your screen at all times, there are certainly some privacy concerns, not only from the employee’s perspective but from a company’s security perspective.

Battista says the intent with this product isn’t to be some piece of corporate spyware, and that it won’t be something running in the background — it’s an app that users will launch. “If [companies] wanted to they could collect all of the data on everybody’s screens, but we don’t have any customers doing that. The software is built to have a user interface for users to interact with so if the user didn’t choose to subscribe or turn on a metric, then [the company] wouldn’t be able to force them to collect it in the current product.”

Battista says that teams at seven Fortune 100 companies are currently paying for Polarity, with many more in pilot programs. The team is currently around 20 people and with this latest fundraise, Battista wants to double the size of the team in the next 18 months as they look to scale to larger rollouts at major companies.

Jul 31, 2019

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements, and we’ll consider your startup.

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, followed immediately by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.

May 17, 2019

Under the hood on Zoom’s IPO, with founder and CEO Eric Yuan

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Kate Clark sat down with Eric Yuan, the founder and CEO of video communications startup Zoom, to go behind the curtain on the company’s recent IPO process and its path to the public markets.

Since hitting the trading desks just a few weeks ago, Zoom stock is up over 30%. But Zoom’s path to becoming a Silicon Valley and Wall Street darling was anything but easy. Eric tells Kate how the company’s early focus on profitability, which is now helping drive the stock’s strong performance out of the gate, actually made it difficult to get VC money early on, and how the company’s consistent focus on user experience led to organic growth across different customer bases.

Eric: I experienced the year 2000 dot com crash and the 2008 financial crisis, and it almost wiped out the company. I only got seed money from my friends, and also one or two VCs like AME Cloud Ventures and Qualcomm Ventures.

And all other institutional VCs had no interest to invest in us. I was very paranoid and always thought, “wow, we are not going to survive next week because we cannot raise the capital.” And on the way, I thought we have to look into our own destiny. We wanted to be cash flow positive. We wanted to be profitable.

And so by doing that, people thought I wasn’t as wise, because we’d probably be sacrificing growth, right? And a lot of other companies, they did very well and were not profitable because they focused on growth. And in the future they could be very, very profitable.

Eric and Kate also dive deeper into Zoom’s founding and Eric’s initial decision to leave WebEx to work on a better video communication solution. Eric also offers his take on what the future of video conferencing may look like in the next five to 10 years and gives advice to founders looking to build the next great company.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

Kate Clark: Well thanks for joining us Eric.

Eric Yuan: No problem, no problem.

Kate: Super excited to chat about Zoom’s historic IPO. Before we jump into questions, I’m just going to review some of the key events leading up to the IPO, just to give some context to any of the listeners on the call.

May 2, 2019

Microsoft announces the $3,500 HoloLens 2 Development Edition

As part of its rather odd Thursday afternoon pre-Build news dump, Microsoft today announced the HoloLens 2 Development Edition. The company announced the much-improved HoloLens 2 at MWC Barcelona earlier this year, but it’s not shipping to developers yet. Currently, the best release date we have is “later this year.” The Development Edition will launch alongside the regular HoloLens 2.

The Development Edition, which will retail for $3,500 to own outright or on a $99 per month installment plan, doesn’t feature any special hardware. Instead, it comes with $500 in Azure credits and 3-month trials of Unity Pro and the Unity PiXYZ plugin for bringing engineering renderings into Unity.

To get the Development Edition, potential buyers have to join the Microsoft Mixed Reality Developer Program and those who already pre-ordered the standard edition will be able to change their order later this year.

As far as HoloLens news goes, that’s all a bit underwhelming. Anybody can get free Azure credits, after all (though usually only $200), and free trials of Unity Pro are also readily available (though typically limited to 30 days).

Oddly, the regular HoloLens 2 was also supposed to cost $3,500. It’s unclear if the regular edition will now be somewhat cheaper, cost the same but come without the credits, or really why Microsoft is doing this at all. Turning this into a special “Development Edition” feels more like a marketing gimmick than anything else, as well as an attempt to bring some of the futuristic glamour of the HoloLens visor to today’s announcements.

The folks at Unity are clearly excited, though. “Pairing HoloLens 2 with Unity’s real-time 3D development platform enables businesses to accelerate innovation, create immersive experiences, and engage with industrial customers in more interactive ways,” says Tim McDonough, GM of Industrial at Unity, in today’s announcement. “The addition of Unity Pro and PiXYZ Plugin to HoloLens 2 Development Edition gives businesses the immediate ability to create real-time 2D, 3D, VR, and AR interactive experiences while allowing for the importing and preparation of design data to create real-time experiences.”

Microsoft also today noted that Unreal Engine 4 support for HoloLens 2 will become available by the end of May.

May 2, 2019

Microsoft brings Azure SQL Database to the edge (and Arm)

Microsoft today announced an interesting update to its database lineup with the preview of Azure SQL Database Edge, a new tool that brings the same database engine that powers Azure SQL Database in the cloud to edge computing devices, including, for the first time, Arm-based machines.

Azure SQL Database Edge, Azure corporate vice president Julia White writes in today’s announcement, “brings to the edge the same performant, secure and easy to manage SQL engine that our customers love in Azure SQL Database and SQL Server.”

The new service, which will also run on x64-based devices and edge gateways, promises to bring low-latency analytics to edge devices as it allows users to work with streaming data and time-series data, combined with the built-in machine learning capabilities of Azure SQL Database. Like its larger brethren, Azure SQL Database Edge will also support graph data and comes with the same security and encryption features that can, for example, protect the data at rest and in motion, something that’s especially important for an edge device.

As White rightly notes, this also ensures that developers only have to write an application once and then deploy it to platforms that feature Azure SQL Database, good old SQL Server on premises and this new edge version.
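
To illustrate that portability claim, here is a hedged sketch: one T-SQL time-series query that could run unchanged against the cloud database or the edge preview, assuming the edge engine speaks the same ODBC/TDS protocol as its cloud sibling. The server names, credentials and sensor_readings table are hypothetical.

```python
# A sketch of "write once, deploy anywhere": the same T-SQL runs against
# Azure SQL Database in the cloud or an edge deployment. Server names,
# credentials and the sensor_readings table are hypothetical.
import pyodbc

# Only the connection string changes between the cloud and edge targets.
CLOUD = ("DRIVER={ODBC Driver 17 for SQL Server};"
         "SERVER=myserver.database.windows.net;DATABASE=telemetry;"
         "UID=app;PWD=<secret>")
EDGE = ("DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=edge-gateway.local;DATABASE=telemetry;UID=app;PWD=<secret>")

# Per-minute average temperature over the last hour (classic bucketing idiom).
QUERY = """
SELECT DATEADD(minute, DATEDIFF(minute, 0, reading_time), 0) AS minute_bucket,
       AVG(temperature) AS avg_temp
FROM sensor_readings
WHERE reading_time >= DATEADD(hour, -1, SYSUTCDATETIME())
GROUP BY DATEADD(minute, DATEDIFF(minute, 0, reading_time), 0)
ORDER BY minute_bucket;
"""


def latest_averages(conn_str):
    """Run the aggregation against whichever deployment the string points at."""
    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute(QUERY).fetchall()


# Usage: latest_averages(EDGE) on the gateway, latest_averages(CLOUD) upstream.
```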

SQL Database Edge can run in both connected and fully disconnected fashion, something that’s also important for many use cases where connectivity isn’t always a given, yet where users still need data analytics capabilities to keep their businesses (or drilling platforms, or cruise ships) running.

May 2, 2019

Takeaways from F8 and Facebook’s next phase

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though F8 is touted as a developer-focused conference, Facebook spent much of it discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

That’s not News Feed, that’s not pages, that’s not profiles. That’s Marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”

Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

Apr 2, 2019

Edgybees’s new developer platform brings situational awareness to live video feeds

San Diego-based Edgybees today announced the launch of Argus, its API-based developer platform that makes it easy to add augmented reality features to live video feeds.

The company has long used this capability to run its own drone platform for first responders and enterprise customers, which allows users to tag and track objects and people in emergency situations to create better situational awareness.

I first saw a demo of the service a year ago, when the team walked a group of journalists through a simulated emergency, with live drone footage and an overlay of a street map and the location of ambulances and other emergency personnel. It’s easy to see how these features could be used in other situations as well, but few companies have the expertise to combine video footage, GPS data and other information, including geographic information systems, for their own custom projects.
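
The core technical step behind overlays like these is projecting a GPS coordinate into the pixel space of a live video frame. Below is a deliberately simplified sketch of that projection, assuming a camera pointed straight down with known position, altitude and heading; real systems like Edgybees' must also handle arbitrary camera poses, lens distortion and terrain, and the coordinates in the example are invented.

```python
# Simplified sketch: project a GPS coordinate onto a drone video frame.
# Assumes a nadir-pointing camera with known position, altitude and heading;
# real AR-overlay systems handle full camera poses and lens distortion.
import math

EARTH_RADIUS_M = 6_371_000


def gps_to_pixel(target_lat, target_lon,
                 cam_lat, cam_lon, cam_alt_m, cam_heading_deg,
                 img_w, img_h, fov_deg):
    # Flat-earth approximation: degree offsets -> metres east/north of camera.
    north = math.radians(target_lat - cam_lat) * EARTH_RADIUS_M
    east = (math.radians(target_lon - cam_lon) * EARTH_RADIUS_M
            * math.cos(math.radians(cam_lat)))
    # Rotate world offsets into the camera frame (heading 0 = north-up frame).
    h = math.radians(cam_heading_deg)
    right = east * math.cos(h) - north * math.sin(h)
    up = east * math.sin(h) + north * math.cos(h)
    # Metres of ground covered by the frame width at this altitude
    # (fov_deg is the horizontal field of view; square pixels assumed).
    ground_width = 2 * cam_alt_m * math.tan(math.radians(fov_deg) / 2)
    px_per_m = img_w / ground_width
    return (img_w / 2 + right * px_per_m,   # x grows to the right
            img_h / 2 - up * px_per_m)      # y grows downward

# Example: an ambulance ~50 m north-east of a drone hovering at 120 m.
print(gps_to_pixel(32.0804, 34.7810, 32.0800, 34.7805, 120, 0, 1920, 1080, 80))
```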

Indeed, that’s what inspired the team to open up its platform. As the Edgybees team told me during an interview at the OurCrowd Summit last month, it’s impossible for the company to build a new solution for every vertical that could make use of it. So instead of even trying (though it’ll keep refining its existing products), it’s now opening up its platform.

“The potential for augmented reality beyond the entertainment sector is endless, especially as video becomes an essential medium for organizations relying on drone footage or CCTV,” said Adam Kaplan, CEO and co-founder of Edgybees. “As forward-thinking industries look to make sense of all the data at their fingertips, we’re giving developers a way to tailor our offering and set them up for success.”

In the run-up to today’s launch, the company has already worked with organizations like the PGA to use its software to enhance the live coverage of its golf tournaments.

Mar 24, 2019

Alibaba acquires Israeli startup Infinity Augmented Reality

Infinity Augmented Reality, an Israeli startup, has been acquired by Alibaba, the companies announced this weekend. The deal’s terms were not disclosed. Alibaba and InfinityAR have had a strategic partnership since 2016, when Alibaba Group led InfinityAR’s Series C. Since then, the two have collaborated on augmented reality, computer vision and artificial intelligence projects.

Founded in 2013, the startup’s augmented reality glasses platform enables developers in a wide range of industries (retail, gaming, medical, etc.) to integrate AR into their apps. InfinityAR’s products include software for ODMs and OEMs and an SDK plug-in for 3D engines.

Alibaba’s foray into virtual reality started three years ago, when it invested in Magic Leap and then announced a new research lab in China to develop ways of incorporating virtual reality into its e-commerce platform.

InfinityAR’s research and development team will begin working out of Alibaba’s Israel Machine Laboratory, part of Alibaba DAMO Academy, the R&D initiative into which it is pouring $15 billion with the goal of eventually serving two billion customers and creating 100 million jobs by 2036. DAMO Academy collaborates with universities around the world, and Alibaba’s Israel Machine Laboratory has a partnership with Tel Aviv University focused on video analysis and machine learning.

In a press statement, the laboratory’s head, Lihi Zelnik-Manor, said “Alibaba is delighted to be working with InfinityAR as one team after three years of partnership. The talented team brings unique knowhow in sensor fusion, computer vision and navigation technologies. We look forward to exploring these leading technologies and offering additional benefits to customers, partners and developers.”

Mar 5, 2019

Matterport raises $48M to ramp up its 3D imaging platform

The growth of augmented and virtual reality applications and hardware is ushering in a new age of digital media and imaging technologies, and startups that are putting themselves at the center of that are attracting interest.

TechCrunch has learned and confirmed that Matterport — which started out making cameras but has since diversified into a wider platform to capture, create, search and utilise 3D imagery of interior and enclosed spaces in immersive real estate, design, insurance and other B2C and B2B applications — has raised $48 million. Sources tell us the money came at a pre-money valuation of around $325 million, although the company is not commenting on that.

From what we understand, the funding is coming ahead of a larger growth round from existing and new investors, to tap into what they see as a big opportunity for building and providing (as a service) highly accurate 3D images of enclosed spaces.

The company in December appointed a new CEO, RJ Pittman — who had been the chief product officer at eBay, and before that held executive roles at Apple and Google — to help fill out that bigger strategy.

Matterport had raised just under $63 million prior to this and had been valued at around $207 million, according to PitchBook estimates. This current round is coming from existing backers, which include Lux Capital, DCM, Qualcomm Ventures and more.

Matterport’s roots are in high-end cameras built to capture multiple images to create 3D interior imagery for a variety of applications, from interior design and real estate to gaming. Changing tides in the worlds of industry and hardware have somewhat shifted its course.

On the hardware side, we’ve seen a rise in the functionality of smartphone cameras, as well as a proliferation of specialised 3D cameras at lower price points. So while Matterport still sells its own high-end cameras, it is also starting to work with less expensive devices with spherical lenses — such as the Ricoh Theta, which costs roughly a tenth of Matterport’s Pro2 camera — and smartphones.

Using an AI engine — which it has been building for some time — packaged into a service it calls Matterport Cloud 3.0, it converts 2D panoramic and 360-degree images into 3D images. (Matterport Cloud 3.0 is currently in beta and will be launching fully on the 18th of March, initially supporting the Ricoh Theta V, the Theta Z1, the Insta360 ONE X and the Leica Geosystems BLK360 laser scanner.)
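
Part of the geometry behind that conversion is standard: every pixel of an equirectangular 360-degree image encodes a viewing direction, and an estimated depth along that ray yields a 3D point. Here is a small sketch of the pixel-to-ray step; the depth estimation itself is the learned part that a service like Matterport Cloud 3.0 supplies, and the resolution and depth values in the example are arbitrary.

```python
# Sketch of the geometry underlying 2D-to-3D conversion of 360-degree
# captures: each pixel of an equirectangular panorama maps to a viewing ray,
# and an estimated depth along that ray places a 3D surface point. The depth
# estimate itself is the learned, proprietary part.
import math


def pixel_to_ray(u, v, width, height):
    """Map pixel (u, v) of an equirectangular image to a unit 3D ray."""
    lon = (u / width) * 2 * math.pi - math.pi    # -pi..pi around the camera
    lat = math.pi / 2 - (v / height) * math.pi   # +pi/2 (up) .. -pi/2 (down)
    return (math.cos(lat) * math.sin(lon),  # x: right
            math.sin(lat),                  # y: up
            math.cos(lat) * math.cos(lon))  # z: forward


def ray_to_point(ray, depth_m):
    """Scale a unit ray by an estimated depth to get a 3D point in metres."""
    return tuple(depth_m * c for c in ray)


# Example: the centre pixel of a 5376x2688 panorama looks straight ahead;
# with 3.2 m of estimated depth it becomes a wall point 3.2 m from the camera.
ray = pixel_to_ray(2688, 1344, 5376, 2688)
print(ray_to_point(ray, 3.2))  # -> approximately (0.0, 0.0, 3.2)
```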

Matterport is further using this technology to grow its wider database of images. It has already racked up 1.6 million 3D images and millions of 2D images, and at its current growth rate, the aim is to expand its library to 100 million in the coming years, positioning it as a kind of Getty Images for 3D interior spaces.

These, in turn, will be used in two ways: to feed Matterport’s machine learning to train it to create better and faster 3D images; and to become part of a wider library, accessible to other businesses by way of a set of APIs.

And, from what I understand, the objective will not just be to use images as they are: people would be able to manipulate them to, for example, remove all the furniture in a room and re-stage it completely without needing to physically do that work ahead of listing a house for sale. Another use case is adding immersive interior shots to mapping applications like Google’s Street View.

“We are a data company,” Pittman told me when I met him for coffee last month.

The ability to convert 2D into 3D images using artificial intelligence to help automate the process is a potentially big area that Matterport, and its investors, believe will be in increasing demand. That’s not just because people still think there will one day be a bigger market for virtual reality headsets, which will need more interesting content, but because we as consumers already have come to expect more realistic and immersive experiences today, even when viewing things on regular screens — and because B2B and enterprise services (for example design or insurance applications) have also grown in sophistication and now require these kinds of images.

(That demand is driving the creation of other kinds of 3D imaging startups, too. Threedy.ai launched last week with a seed round from a number of angels and VCs to perform a similar kind of 2D-to-3D mapping technique for objects rather than interior spaces. It is already working with a number of e-commerce sites to bypass some of the costs and inefficiencies of more established, manual methods of 3D rendering.)

While Matterport is doubling down on its cloud services strategy, it also has been making some hires to take the business to its next steps. In addition to Pittman, it has added Dave Lippman, formerly design head at eBay, as its chief design officer; and engineering veteran Lou Marzano as its VP of hardware, R&D and manufacturing, with more hires to come.

Feb 24, 2019

Say hello to Microsoft’s new $3,500 HoloLens with twice the field of view

Microsoft unveiled the latest version of its HoloLens ‘mixed reality’ headset at MWC Barcelona today. The new HoloLens 2 features a significantly larger field of view, higher resolution and a more comfortable fit. Indeed, Microsoft says the device is three times as comfortable to wear (though it’s unclear how Microsoft measured this).

Later this year, HoloLens 2 will be available in the United States, Japan, China, Germany, Canada, United Kingdom, Ireland, France, Australia and New Zealand for $3,500.

One of the knocks against the original HoloLens was its limited field of view. When whatever you wanted to look at was small and straight ahead of you, the effect was striking. But when you moved your head a little bit or looked at a larger object, it suddenly felt like you were looking through a stamp-sized screen. HoloLens 2 features a field of view that’s twice as large as the original.

“Kinect was the first intelligent device to enter our homes,” HoloLens chief Alex Kipman said in today’s keynote, looking back at the device’s history. “It drove us to create Microsoft HoloLens. […] Over the last few years, individual developers, large enterprises, brand new startups have been dreaming up beautiful things, helpful things.”

The HoloLens was always just as much about the software as the hardware, though. For HoloLens, Microsoft developed a special version of Windows, together with a new way of interacting with AR objects through gestures like air tap and bloom. In this new version, the interaction is far more natural and lets you tap objects directly. The device also tracks your gaze more accurately to allow the software to adjust to where you are looking.

“HoloLens 2 adapts to you,” Kipman stressed. “HoloLens 2 evolves the interaction model by significantly advancing how people engage with holograms.”

In its demos, the company clearly emphasized how much faster and more fluid the interaction with HoloLens applications becomes when you can use sliders, for example, by simply grabbing the slider and moving it, or by tapping a button with a finger or two, or with your full hand. Microsoft even built a virtual piano that you can play with ten fingers to show off how well the HoloLens can track movement. The company calls this ‘instinctual interaction.’

Microsoft first unveiled the HoloLens concept at a surprise event on its Redmond campus back in 2015. After a limited, invite-only release that started days after the end of MWC 2016, the device went on sale to everybody in August 2016. Four years is a long time between hardware releases, but the company clearly wanted to seed the market and give developers a chance to build the first set of HoloLens applications on a stable platform.

To support developers, Microsoft is also launching a number of Azure services for HoloLens today. These include spatial anchors and remote rendering to help developers stream high-polygon content to HoloLens.

It’s worth noting that Microsoft never positioned the device as consumer hardware. It may have shown off the occasional game, but its focus was always on business applications, with a few educational applications thrown in, too. That trend continued today. Microsoft showed off the ability to have multiple people collaborate around a single hologram, for example. That’s not new, of course, but it goes to show how Microsoft is positioning this technology.

For these enterprises, Microsoft will also offer the ability to customize the device.

“When you change the way you see the world, you change the world you see,” Microsoft CEO Satya Nadella said, repeating a line from the company’s first HoloLens announcement four years ago. He noted that he believes that connecting the physical world with the virtual world will transform the way we will work.
