Jun 30, 2021
--

How to cut through the promotional haze and select a digital building platform

Everyone from investors to casual LinkedIn observers has more reasons than ever to look at buildings and wonder what’s going on inside. The property industry is known for moving slowly when it comes to adopting new technologies, but novel concepts and products are now entering this market at a dizzying pace.

However, this ever-growing array of smart-building products has made it confusing for professionals who seek to implement digital building platform (DBP) technologies in their spaces, let alone across their entire enterprise. The waters get even murkier when it comes to cloud platforms and their impact on ROI with regard to energy usage and day-to-day operations.

Facility managers, energy professionals and building operators are increasingly hit with daily requests to review the latest platform for managing and operating their buildings. Here are a few tips to help decision-makers clear through the marketing fluff and put DBPs to the test.

The why, how and what

Breaking down technology decisions into bite-sized pieces, starting with fundamental functions, is the most straightforward way to cut through the promotional haze. Ask two simple questions: Who on your team will use this technology and what problem will it solve for them? Answers to these questions will help you maintain your key objectives, making it easier to narrow down the hundreds of options to a handful.

Another way to prioritize problems and solutions when sourcing smart-building technology is to identify your use cases. If you don’t know why you need a technology platform for your smart building, you’ll find it difficult to tell which option is better. Further, once you have chosen one, you’ll be hard put to determine if it has been successful. We find use cases draw the most direct line from why to how and what.

For example, let’s examine the why, how and what questions for a real estate developer planning to construct or modernize a commercial office building:

  • Why will people come? — Our building will be full of amenities and technological touches that will make discerning tenants feel comfortable, safe and part of a warm community of like-minded individuals.
  • How will we do it? — Implement the latest tenant-facing technology offering services and capabilities that are not readily available at home. We will create indoor and outdoor environments that make people feel comfortable and happy.
  • What tools, products and technology will we use?

This last question is often the hardest to answer and is usually left until the last possible moment. For building systems integrators, this is where the real work begins.

Focus on desired outcomes

When various stakeholder groups begin their investigations of the technology, it is crucial to define the outcomes everyone hopes to achieve for each use case. When evaluating specific products, it helps to categorize them at high levels.

Several high-level outcomes, such as digital twin enablement, data normalization and data storage, are expected across multiple categories of systems. However, only an enterprise building management system includes the most expected outcomes. Integration platform as a service, bespoke reports and dashboarding, analytics as a service and energy-optimization platforms have various enabled and optional outcomes.

The following table breaks down a list of high-level outcomes and aligns them to a category of smart-building platforms available in the market. Expanded definitions of each item are included at the end of this article.

May 18, 2021
--

Artificial raises $21M led by Microsoft’s M12 for a lab automation platform aimed at life sciences R&D

Automation is extending into every aspect of how organizations get work done, and today comes news of a startup that is building tools for one industry in particular: life sciences. Artificial, which has built a software platform for laboratories to assist with, or in some cases fully automate, research and development work, has raised $21.5 million.

It plans to use the funding to continue building out its software and its capabilities, to hire more people and for business development, according to Artificial’s CEO and co-founder David Fuller. The company already has a number of customers, including Thermo Fisher and Beam Therapeutics, using its software both directly and, through partnerships, for their own customers. Sold as aLab Suite, Artificial’s technology can both orchestrate and manage robotic machines that labs might be using to handle some work, and help assist scientists when they are carrying out the work themselves.

“The basic premise of what we’re trying to do is accelerate the rate of discovery in labs,” Fuller said in an interview. He believes the process of bringing in more AI into labs to improve how they work is long overdue. “We need to have a digital revolution to change the way that labs have been operating for the last 20 years.”

The Series A is being led by Microsoft’s venture fund M12 — a financial and strategic investor — with Playground Global and AME Cloud Ventures also participating. Playground Global, the VC firm co-founded by ex-Google exec and Android co-creator Andy Rubin (who is no longer with the firm), has been focusing on robotics and life sciences and it led Artificial’s first and only other round. Artificial is not disclosing its valuation with this round.

Fuller hails from a background in robotics, specifically industrial robots and automation. Before founding Artificial in 2019, he was at Kuka, the German robotics maker, for a number of years, culminating in the role of CTO; prior to that, Fuller spent 20 years at National Instruments, the instrumentation, test equipment and industrial software giant. Meanwhile, Artificial’s co-founder, Nikhita Singh, has insight into how to bring the advances of robotics into environments that are quite analogue in culture. She previously worked on human-robot interaction research at the MIT Media Lab, and before that spent years at Palantir and working on robotics at Berkeley.

As Fuller describes it, he saw an interesting gap (and opportunity) in the market to apply automation, which he had seen help advance work in industrial settings, to the world of life sciences, both to help scientists track what they are doing better, and help them carry out some of the more repetitive work that they have to do day in, day out.

This gap is perhaps more in the spotlight today than ever before, given the fact that we are in the middle of a global health pandemic. This has hindered a lot of labs from being able to operate full in-person teams, and increased the reliance on systems that can crunch numbers and carry out work without as many people present. And, of course, the need for that work (whether it’s related directly to Covid-19 or not) has perhaps never appeared as urgent as it does right now.

There have been a lot of advances in robotics — specifically around hardware like robotic arms — to manage some of the precision needed to carry out some work, but up to now no real effort has been made to build platforms that bring all of the work done by that hardware together (or, in the words of automation specialists, “orchestrate” that work and data), nor to link up the data from those robot-led efforts with the work that human scientists still carry out. Artificial estimates that some $10 billion is spent annually on lab informatics and automation software, yet data models to unify that work, and platforms to reach across it all, remain absent. That has, in effect, served as a barrier to labs modernising as much as they could.

A lab, as he describes it, is essentially composed of high-end instrumentation for analytics, alongside robotic systems for liquid handling. “You can really think of a lab, frankly, as a kitchen,” he said, “and the primary operation in that lab is mixing liquids.”

But it is also not unlike a factory, too. As those liquids are mixed, a robotic system typically moves around pipettes, liquids, in and out of plates and mixes. “There’s a key aspect of material flow through the lab, and the material flow part of it is much more like classic robotics,” he said. In other words, there is, as he says, “a combination of bespoke scientific equipment that includes automation, and then classic material flow, which is much more standard robotics,” and is what makes the lab ripe as an applied environment for automation software.

To note: the idea is not to remove humans altogether, but to provide assistance so that they can do their jobs better. He points out that even the automotive industry, which has been automated for 50 years, still has about 6% of all work done by humans. If that is a watermark, it sounds like there is a lot of movement left in labs: Fuller estimates that some 60% of all work in the lab is done by humans. And part of the reason for that is simply because it’s just too complex to replace scientists — who he described as “artists” — altogether (for now at least).

“Our solution augments the human activity and automates the standard activity,” he said. “We view that as a central thesis that differentiates us from classic automation.”
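That routing thesis (automate the standard steps, assist the human ones) can be sketched in a few lines of Python; the step model and names here are illustrative, not Artificial’s actual API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    automated: bool  # True: dispatch to a robot; False: guide a scientist

def run_workflow(steps: List[Step],
                 robot: Callable[[str], str],
                 assist: Callable[[str], str]) -> List[str]:
    """Route each step to the robot executor or to human assistance."""
    log = []
    for step in steps:
        executor = robot if step.automated else assist
        log.append(executor(step.name))
    return log

# Illustrative lab protocol: standard liquid handling is automated,
# while judgment-heavy steps stay with the scientist.
protocol = [
    Step("dispense reagents", automated=True),
    Step("inspect cell morphology", automated=False),
    Step("mix plates", automated=True),
]

log = run_workflow(
    protocol,
    robot=lambda name: f"robot: {name}",
    assist=lambda name: f"guided human: {name}",
)
```

The point of the split is that the same workflow definition drives both sides, so the "artist" steps and the automated ones share one record of what happened.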

There have been a number of other startups emerging that are applying some of the learnings of artificial intelligence and big data analytics for enterprises to the world of science. They include the likes of Turing, which is applying this to helping automate lab work for CPG companies; and Paige, which is focusing on AI to help better understand cancer and other pathology.

The Microsoft connection is one that could well play out in how Artificial’s platform develops going forward, not just in how data is perhaps handled in the cloud, but also on the ground, specifically with augmented reality.

“We see massive technical synergy,” Fuller said. “When you are in a lab you already have to wear glasses… and we think this has the earmarks of a long-term use case.”

Fuller mentioned that one area it’s looking at would involve equipping scientists and other technicians with Microsoft’s HoloLens to help direct them around the labs, and to make sure people are carrying out work consistently by comparing what is happening in the physical world to a “digital twin” of a lab containing data about supplies, where they are located, and what needs to happen next.
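As a rough illustration of that digital-twin comparison, the sketch below diffs observed supply locations against the twin’s record; the data model is hypothetical, not the aLab Suite API:

```python
def twin_discrepancies(twin: dict, observed: dict) -> list:
    """Compare observed supply locations against the digital twin's record.

    Both mappings go from supply name to location; a missing or
    misplaced item is flagged for the technician to resolve.
    """
    issues = []
    for item, expected_loc in twin.items():
        actual = observed.get(item)
        if actual is None:
            issues.append(f"{item}: missing (expected at {expected_loc})")
        elif actual != expected_loc:
            issues.append(f"{item}: at {actual}, expected {expected_loc}")
    return issues

# Invented example state: what the twin expects vs. what was observed.
twin = {"pipette tips": "shelf A", "buffer": "fridge 2"}
observed = {"pipette tips": "bench 3"}
issues = twin_discrepancies(twin, observed)
```

In a headset scenario, each discrepancy would become an overlay prompt directing the technician to the right location.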

It’s this and all of the other areas that have yet to be brought into our very AI-led enterprise future that interested Microsoft.

“Biology labs today are light- to semi-automated—the same state they were in when I started my academic research and biopharmaceutical career over 20 years ago. Most labs operate more like test kitchens rather than factories,” said Dr. Kouki Harasaki, an investor at M12, in a statement. “Artificial’s aLab Suite is especially exciting to us because it is uniquely positioned to automate the masses: it’s accessible, low code, easy to use, highly configurable, and interoperable with common lab hardware and software. Most importantly, it enables Biopharma and SynBio labs to achieve the crowning glory of workflow automation: flexibility at scale.”

Harasaki is joining Peter Barrett, a founder and general partner at Playground Global, on Artificial’s board with this round.

“It’s become even more clear as we continue to battle the pandemic that we need to take a scalable, reproducible approach to running our labs, rather than the artisanal, error-prone methods we employ today,” Barrett said in a statement. “The aLab Suite that Artificial has pioneered will allow us to accelerate the breakthrough treatments of tomorrow and ensure our best and brightest scientists are working on challenging problems, not manual labor.”

May 11, 2021
--

SightCall raises $42M for its AR-based visual assistance platform

Long before COVID-19 precipitated “digital transformation” across the world of work, customer services and support was built to run online and virtually. Yet it too is undergoing an evolution supercharged by technology.

Today, a startup called SightCall, which has built an augmented reality platform to help field service teams, the companies they work for, and their customers carry out technical and mechanical maintenance or repairs more effectively, is announcing $42 million in funding, money that it plans to use to invest in its tech stack with more artificial intelligence tools and expanding its client base.

The core of its service, explained CEO and co-founder Thomas Cottereau, is AR technology (which comes embedded in their apps or the service apps its customers use, with integrations into other standard software used in customer service environments including Microsoft, SAP, Salesforce and ServiceNow). The augmented reality experience overlays additional information, pointers and other tools over the video stream.

This is used by, say, field service engineers coordinating with central offices when servicing equipment; or by manufacturers to provide better assistance to customers in emergencies or situations where something is not working but might be repaired quicker by the customers themselves rather than engineers that have to be called out; or indeed by call centers, aided by AI, to diagnose whatever the problem might be. It’s a big leap ahead for scenarios that previously relied on work orders, hastily drawn diagrams, instruction manuals and voice-based descriptions to progress the work in question.

“We like to say that we break the barriers that exist between a field service organization and its customer,” Cottereau said.

The tech, meanwhile, is unique to SightCall, built over years and designed to be used by way of a basic smartphone, and over even a basic mobile network — essential in cases where reception is bad or the locations are remote. (More on how it works below.)

Originally founded in Paris, France before relocating to San Francisco, SightCall has already built up a sizable business across a pretty wide range of verticals, including insurance, telecoms, transportation, telehealth, manufacturing, utilities and life sciences/medical devices.

SightCall has some 200 big-name enterprise customers on its books, including the likes of Kraft-Heinz, Allianz, GE Healthcare and Lincoln Motor Company, providing services on a B2B basis as well as for teams that are out in the field working for consumer customers, too. After seeing 100% year-over-year growth in annual recurring revenue in 2019 and 2020, SightCall’s CEO says it’s looking like it will hit that rate this year as well, with a goal of $100 million in annual recurring revenue.

The funding is being led by InfraVia, a European private equity firm, with Bpifrance also participating. The valuation of this round is not being disclosed, but I should point out that an investor told me that PitchBook’s estimate of $122 million post-money is not accurate (we’re still digging on this and will update as and when we learn more).

For some further context on this investment, InfraVia invests in a number of industrial businesses, alongside investments in tech companies building services related to them such as recent investments in Jobandtalent, so this is in part a strategic investment. SightCall has raised $67 million to date.

There has been an interesting wave of startups emerging in recent years building out the tech stack used by people working in the front lines and in the field, a shift after years of knowledge workers getting most of the attention from startups building a new generation of apps.

Workiz and Jobber are building platforms for small business tradespeople to book jobs and manage them once they’re on the books; BigChange helps manage bigger fleets; and Hover has built a platform for builders to be able to assess and estimate costs for work by using AI to analyze images captured by their or their would-be customers’ smartphone cameras.

And there is Streem, which I discovered is a close enough competitor to SightCall that it has bought AdWords ads against SightCall searches on Google. Just ahead of the COVID-19 pandemic breaking wide open, General Catalyst-backed Streem was acquired by Frontdoor to help with the latter’s efforts to build out its home services business, another sign of how all of this is leaping ahead.

What’s interesting in part about SightCall, and what sets it apart, is its technology. The company was co-founded in 2007 by Cottereau and Antoine Vervoort (currently SVP of product and engineering), two long-time telecoms industry vets who had both worked on the technical side of building next-generation networks.

SightCall started life as a company called Weemo that built video chat services that could run on WebRTC-based frameworks, which emerged at a time when we were seeing a wider effort to bring more rich media services into mobile web and SMS apps. For consumers and to a large extent businesses, mobile phone apps that work “over the top” (distributed not by your mobile network carrier but the companies that run your phone’s operating system, and thus partly controlled by them) really took the lead and continue to dominate the market for messaging and innovations in messaging.

After a time, Weemo pivoted and renamed itself as SightCall, focusing on packaging the tech that it built into whichever app (native or mobile web) where one of its enterprise customers wanted the tech to live.

The key to how it works comes by way of how SightCall was built, Cottereau explained. The company has spent 10 years building and optimizing a network across data centers close to where its customers are, which interconnects with Tier 1 telecoms carriers and has a lot of redundancy built in to ensure uptime. “We work with companies where this connectivity is mission critical,” he said. “The video solution has to work.”

As he describes it, the hybrid system SightCall has built incorporates its own IP that works both with telecoms hardware and software, resulting in a video service that provides 10 different ways for streaming video and a system that automatically chooses the best in a particular environment, based on where you are, so that even if mobile data or broadband reception don’t work, video streaming will. “Telecoms and software are still very separate worlds,” Cottereau said. “They still don’t speak the same language, and so that is part of our secret sauce, a global roaming mechanism.”
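As a toy illustration of that kind of path selection, the sketch below tries an ordered list of candidate transports and settles on the first whose probe succeeds; the transport names and probes are invented for the example, not SightCall’s actual mechanism:

```python
from typing import Callable, List, Tuple

def pick_transport(candidates: List[Tuple[str, Callable[[], bool]]]) -> str:
    """Try transports in order of preference and return the first whose
    reachability probe succeeds, mirroring the idea of choosing among
    several streaming paths based on the current network environment."""
    for name, probe in candidates:
        try:
            if probe():
                return name
        except Exception:
            continue  # a failing probe just means: try the next path
    raise RuntimeError("no viable video transport")

# Illustrative probes; a real system would measure the live network.
chosen = pick_transport([
    ("webrtc-udp", lambda: False),         # blocked on this network
    ("webrtc-tcp", lambda: False),         # also blocked
    ("relay-over-carrier", lambda: True),  # roaming fallback succeeds
])
```

A production system would score paths on measured latency and loss rather than a boolean probe, but the fallback structure is the same.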

The tech that the startup has built to date not only has given it a firm grounding against others who might be looking to build in this space, but has led to strong traction with customers. The next steps will be to continue building out that technology to tap deeper into the automation that is being adopted across the industries that already use SightCall’s technology.

“SightCall pioneered the market for AR-powered visual assistance, and they’re in the best position to drive the digital transformation of remote service,” said Alban Wyniecki, partner at InfraVia Capital Partners, in a statement. “As a global leader, they can now expand their capabilities, making their interactions more intelligent and also bringing more automation to help humans work at their best.”

“SightCall’s $42M Series B marks the largest funding round yet in this sector, and SightCall emerges as the undisputed leader in capital, R&D resources and partnerships with leading technology companies enabling its solutions to be embedded into complex enterprise IT,” added Antoine Izsak of Bpifrance. “Businesses are looking for solutions like SightCall to enable customer-centricity at a greater scale while augmenting technicians with knowledge and expertise that unlocks efficiencies and drives continuous performance and profit.”

Cottereau said that the company has had a number of acquisition offers over the years — not a surprise when you consider the foundational technology it has built for how to architect video networks across different carriers and data centers that work even in the most unreliable of network environments.

“We want to stay independent, though,” he said. “I see a huge market here, and I want us to continue the story and lead it. Plus, I can see a way where we can stay independent and continue to work with everyone.”

Feb 03, 2021
--

TouchCast raises $55M to grow its mixed reality-based virtual event platform

Events — when they haven’t been cancelled altogether in the last 12 months due to the global health pandemic — have gone virtual and online, and a wave of startups that are helping people create and participate in those experiences are seeing a surge of attention — and funding.

In the latest development, New York video startup TouchCast — which has developed a platform aimed at companies to produce lifelike, virtual conferences and other events without much technical heavy-lifting — has picked up funding of $55 million, money that co-founder and CEO Edo Segal said the startup will use to build out its services and teams after being “overrun by demand” in the wake of COVID-19.

The funding is being led by a strategic investor, Accenture Ventures — the investment arm of the systems integrator and consultancy behemoth — with Alexander Capital Ventures, Saatchi Invest, Ronald Lauder and other unnamed investors also participating. The startup has been largely self-funded up to now, and while Segal isn’t disclosing the valuation, he said it was definitely in the nine figures (that is, somewhere in the hundreds of millions of dollars).

Accenture has been using TouchCast’s technology for its own events, but that is likely just one part of its interest: Accenture also has a lot of corporate customers that tap it to build and implement interactive services, so potentially this could lead to more customers in TouchCast’s pipeline.

(Case in point: My interview with Segal, over Zoom, found me speaking to him in the middle of a vast aircraft hangar, with a 747 from one of the big airlines of the world — I won’t say which — parked behind him. He said he’d just come from a business pitch with the airline in question.)

A lot of what we have seen in virtual events, and in particular conferences, has to date been, effectively, a managed version of a group call on one of the established videoconferencing platforms like Zoom, Google’s Hangout, Microsoft’s Teams, Webex and so on.

You get a screen with participants’ individual video streams presented to you in a grid more reminiscent of the opening credits of the Brady Bunch or Hollywood Squares than an actual stage or venue.

There are some, of course, that are taking a much different route. Witness Apple’s online events in the last year, productions that have elevated what a virtual event can mean, with more detail and information, and less awkwardness, than an actual live event.

The problem is that not every company is Apple: most can neither afford nor execute Hollywood-level presentations.

The essence of what TouchCast has built, as Segal describes it, is a platform that combines computer vision, video streaming technology and natural language processing to let other organizations create experiences that are closer to that of the iPhone giant’s than they are to a game show.

“We have created a platform so that all companies can create events like Apple’s,” Segal said. “We’re taking them on a journey beyond people sitting in their home offices.”

Yet “home office” remains the operative phrase. With TouchCast, people (the organizers and the onstage participants) still use basic videoconferencing solutions like Zoom and Teams — in their homes, even — to produce the action. But behind the scenes, TouchCast is taking those videos, using computer vision to trim out the people and place them into virtual “venues” so that they appear as if they are on stage in an actual conference.
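The compositing step itself is conceptually a per-pixel alpha blend. Here is a minimal NumPy sketch, assuming a segmentation mask has already been computed; this is illustrative, not TouchCast’s pipeline:

```python
import numpy as np

def composite(person: np.ndarray, venue: np.ndarray,
              mask: np.ndarray) -> np.ndarray:
    """Alpha-blend a segmented speaker over a virtual venue frame.

    person, venue: H x W x 3 float arrays (RGB frames).
    mask: H x W array in [0, 1]; 1 where the segmentation model found
    the speaker, 0 for the home-office background to discard.
    """
    alpha = mask[..., None]  # broadcast the mask across RGB channels
    return alpha * person + (1.0 - alpha) * venue

# Toy 2x2 frames: the top row is "speaker", the bottom row is cut away.
person = np.ones((2, 2, 3))   # all-white speaker frame
venue = np.zeros((2, 2, 3))   # all-black venue backdrop
mask = np.array([[1.0, 1.0], [0.0, 0.0]])
frame = composite(person, venue, mask)
```

The hard part in practice is producing a clean `mask` per frame in real time; the blend itself is cheap.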

These venues come from a selection of templates, or the organiser can arrange for a specific venue to be shot and used. And in addition to the actual event, TouchCast then also provides tools for audience members to participate with questions and to chat to each other. As the event is progressing, TouchCast also produces transcriptions and summaries of the key points for those who want them.

Segal said that TouchCast is not planning to make this a consumer-focused product, not even on the B2B2C side, but it’s preparing a feature so that when business conference organisers do want to hold a music segment with a special guest, those can be incorporated, too. (In all honesty, it seems like a small leap to use this for more consumer-focused events, too.)

TouchCast’s growth into a startup serving an audience of hungry and anxious event planners has been an interesting pivot that is a reminder to founders (and investors) that the right opportunities might not be the ones you think they are.

You might recall that the company first came out of stealth back in 2013, with former TechCrunch editor Erick Schonfeld one of the co-founders.

Back then, the company’s concept was to supercharge online video, by making it easier for creators to bring in interactive elements and media widgets into their work, to essentially make videos closer to the kind of interactivity and busy media mix that we find on web pages themselves.

All that might have been too clever by half. Or, it was simply not the right time for that technology. The service never made many waves, and one of my colleagues even assumed it had deadpooled at some point.

Not at all, it turns out. Segal (a serial entrepreneur who also used to work at AOL as VP of emerging platforms — AOL being the company that acquired TechCrunch and eventually became a part of Verizon) notes that the technology that TouchCast is using for its conferencing solution is essentially the same as what it built for its original video product.

After launching an earlier, less feature-rich version of what it has on the market today, it took the company about six months to retool it, adding in more mixed reality customization via the use of Unreal Engine, to make it what it is now, and to meet the demand it started to see from customers, who approached the startup for their own events after attending conferences held by others using TouchCast.

“It took us eight years to get to our overnight success story,” Segal joked.

Figures from Grand View Research cited by TouchCast estimate that virtual events will be a $400 billion business by 2027, and that has made for a pretty large array of companies building out experiences that will make those events worth attending, and putting on.

They include the likes of Hopin and Bizzabo — both of which have recently also raised big rounds — but also more enhanced services from the big, established players in videoconferencing like Zoom, Google, Microsoft, Cisco and more.

It’s no surprise to see Accenture throwing its hat into that ring as a backer of what it has decided is one of the more interesting technology players in that mix.

The reason is that many understand and now accept that — similar to working life in general — it’s very likely that even when we do return to “live” events, the virtual component, and the expectation that it will work well and be compelling enough to watch, is here to stay.

“Digital disruption, distributed workforces, and customer experience are the driving forces behind the need for companies to transform how they do business and move toward the future of work,” said Tom Lounibos, managing director, Accenture Ventures, in a statement. “For organizations to harness the power of virtual experiences to deliver business impact, the pandemic has shown that quality interactions and insights are needed. Our investment in Touchcast demonstrates our commitment to identifying the latest technologies that help address our clients’ critical business needs.”

Aug 12, 2019
--

Polarity raises $8.1M for its AI software that constantly analyzes employee screens and highlights key info

Reference docs and spreadsheets seemingly make the world go ’round, but what if employees could just close those tabs for good without losing that knowledge?

One startup is taking on that complicated challenge. Predictably, the solution is quite complicated, as well, from a tech perspective, involving an AI solution that analyzes everything on your PC screen — all the time — and highlights text onscreen for which you could use a little bit more context. The team at Polarity wants its tech to help teams lower the knowledge barrier to getting stuff done and allow people to focus more on procedure and strategy than memorizing file numbers, IP addresses and jargon.

The Connecticut startup just closed an $8.1 million “AA” round led by TechOperators, with Shasta Ventures, Strategic Cyber Ventures, Gula Tech Adventures and Kaiser Permanente Ventures also participating in the round. The startup closed its $3.5 million Series A in early 2017.

Interestingly, the enterprise-centric startup pitches itself as an AR company, augmenting what’s happening on your laptop screen much like a pair of AR glasses could.

The startup’s computer vision software uses character recognition to analyze what’s on a user’s screen. That can be helpful for enterprise teams importing things like a company Rolodex so that bios are always collectively a click away, but the real utility comes from team-wide flagging of things like suspicious IP addresses, which lets entire teams learn about new threats and issues at the same time without constantly checking in with co-workers. The startup’s current product has a big focus on analysts and security teams.


Using character recognition to analyze a screen for specific keywords is useful in itself, but that’s also largely a solved computer vision problem.

Polarity’s big advance has been getting these processes to occur consistently on-device without crushing a device’s CPU. CEO Paul Battista says that for the average customer, Polarity’s software generally eats up about 3-6% of their computer’s processing power, though it can spike much higher if the system is getting fed a ton of new information at once.

“We spent years building the tech to accomplish [efficiency], readjusting how people think of [optical character recognition] and then doing it in real time,” Battista tells me. “The more data that you have onscreen, the more power you use. So it does use a significant percentage of the CPU.”

Why bother with all of this AI trickery and CPU efficiency when you could pull this functionality off in certain apps with an API? The whole point is that it doesn’t matter whether you’re working in Chrome, or Excel, or pulling up a scanned document: the software analyzes what’s actually being rendered onscreen, not what the individual app is communicating.
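A stripped-down version of that idea matches OCR’d screen text against a team-shared annotation store plus a generic IP pattern; the entities and notes below are invented for the example:

```python
import re

# Team-shared annotations keyed by the exact string seen onscreen;
# in the real product such a store would be shared across the team.
ANNOTATIONS = {
    "203.0.113.7": "flagged: scanned our VPN gateway last week",
    "PROJ-4412": "Q3 infrastructure migration ticket",
}

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def annotate(screen_text: str) -> dict:
    """Given OCR'd screen text, return overlay annotations for any
    recognized entity: known keywords plus any IP address on screen."""
    hits = {}
    for token, note in ANNOTATIONS.items():
        if token in screen_text:
            hits[token] = note
    for ip in IP_PATTERN.findall(screen_text):
        hits.setdefault(ip, "unrecognized IP: consider flagging")
    return hits

overlay = annotate(
    "Traffic from 203.0.113.7 relates to PROJ-4412; also saw 198.51.100.9"
)
```

The OCR step itself (turning pixels into `screen_text`) is the solved part the article mentions; the overlay lookup is where the shared team knowledge comes in.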

When it comes to a piece of software analyzing everything on your screen at all times, there are certainly some privacy concerns, not only from the employee’s perspective but from a company’s security perspective.

Battista says the intent with this product isn’t to be some piece of corporate spyware, and that it won’t be something running in the background — it’s an app that users will launch. “If [companies] wanted to they could collect all of the data on everybody’s screens, but we don’t have any customers doing that. The software is built to have a user interface for users to interact with so if the user didn’t choose to subscribe or turn on a metric, then [the company] wouldn’t be able to force them to collect it in the current product.”

Battista says that teams at seven Fortune 100 companies are currently paying for Polarity, with many more in pilot programs. The team is currently around 20 people and with this latest fundraise, Battista wants to double the size of the team in the next 18 months as they look to scale to larger rollouts at major companies.

Jul
31
2019
--

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer product goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to early-stage hardware startups from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements listed below, and we’ll consider your startup:

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, which is immediately followed by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.

May
17
2019
--

Under the hood on Zoom’s IPO, with founder and CEO Eric Yuan

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Kate Clark sat down with Eric Yuan, the founder and CEO of video communications startup Zoom, to go behind the curtain on the company’s recent IPO process and its path to the public markets.

Since hitting the trading desks just a few weeks ago, Zoom stock is up over 30%. But Zoom’s path to becoming a Silicon Valley and Wall Street darling was anything but easy. Eric tells Kate how the company’s early focus on profitability, which is now helping drive the stock’s strong performance out of the gate, actually made it difficult to get VC money early on, and how its consistent focus on user experience led to organic growth across different customer bases.

Eric: I experienced the year 2000 dot com crash and the 2008 financial crisis, and it almost wiped out the company. I only got seed money from my friends, and also one or two VCs like AME Cloud Ventures and Qualcomm Ventures.

And all other institutional VCs had no interest in investing in us. I was very paranoid and always thought, “wow, we are not going to survive next week because we cannot raise the capital.” And on the way, I thought we have to look to our own destiny. We wanted to be cash flow positive. We wanted to be profitable.

And so by doing that, people thought I wasn’t as wise, because we’d probably be sacrificing growth, right? And a lot of other companies, they did very well and were not profitable because they focused on growth. And in the future they could be very, very profitable.

Eric and Kate also dive deeper into Zoom’s founding and Eric’s initial decision to leave WebEx to work on a better video communication solution. Eric also offers his take on what the future of video conferencing may look like in the next five to 10 years and gives advice to founders looking to build the next great company.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

Kate Clark: Well thanks for joining us Eric.

Eric Yuan: No problem, no problem.

Kate: Super excited to chat about Zoom’s historic IPO. Before we jump into questions, I’m just going to review some of the key events leading up to the IPO, just to give some context to any of the listeners on the call.

May
02
2019
--

Microsoft announces the $3,500 HoloLens 2 Development Edition

As part of its rather odd Thursday afternoon pre-Build news dump, Microsoft today announced the HoloLens 2 Development Edition. The company announced the much-improved HoloLens 2 at MWC Barcelona earlier this year, but it’s not shipping to developers yet. Currently, the best release date we have is “later this year.” The Development Edition will launch alongside the regular HoloLens 2.

The Development Edition, which will retail for $3,500 to own outright or on a $99 per month installment plan, doesn’t feature any special hardware. Instead, it comes with $500 in Azure credits and 3-month trials of Unity Pro and the Unity PiXYZ plugin for bringing engineering renderings into Unity.

To get the Development Edition, potential buyers have to join the Microsoft Mixed Reality Developer Program and those who already pre-ordered the standard edition will be able to change their order later this year.

As far as HoloLens news goes, that’s all a bit underwhelming. Anybody can get free Azure credits, after all (though usually only $200), and free trials of Unity Pro are also readily available (though typically limited to 30 days).

Oddly, the regular HoloLens 2 was also supposed to cost $3,500. It’s unclear if the regular edition will now be somewhat cheaper, will cost the same but come without the credits, or why Microsoft is doing this at all. Turning this into a special “Development Edition” feels more like a marketing gimmick than anything else, as well as an attempt to bring some of the futuristic glamour of the HoloLens visor to today’s announcements.

The folks at Unity are clearly excited, though. “Pairing HoloLens 2 with Unity’s real-time 3D development platform enables businesses to accelerate innovation, create immersive experiences, and engage with industrial customers in more interactive ways,” says Tim McDonough, GM of Industrial at Unity, in today’s announcement. “The addition of Unity Pro and PiXYZ Plugin to HoloLens 2 Development Edition gives businesses the immediate ability to create real-time 2D, 3D, VR, and AR interactive experiences while allowing for the importing and preparation of design data to create real-time experiences.”

Microsoft also today noted that Unreal Engine 4 support for HoloLens 2 will become available by the end of May.

May
02
2019
--

Microsoft brings Azure SQL Database to the edge (and Arm)

Microsoft today announced an interesting update to its database lineup with the preview of Azure SQL Database Edge, a new tool that brings the same database engine that powers Azure SQL Database in the cloud to edge computing devices, including, for the first time, Arm-based machines.

Azure SQL Edge, Azure corporate vice president Julia White writes in today’s announcement, “brings to the edge the same performant, secure and easy to manage SQL engine that our customers love in Azure SQL Database and SQL Server.”

The new service, which will also run on x64-based devices and edge gateways, promises to bring low-latency analytics to edge devices as it allows users to work with streaming data and time-series data, combined with the built-in machine learning capabilities of Azure SQL Database. Like its larger brethren, Azure SQL Database Edge will also support graph data and comes with the same security and encryption features that can, for example, protect the data at rest and in motion, something that’s especially important for an edge device.

As White rightly notes, this also ensures that developers only have to write an application once and then deploy it to platforms that feature Azure SQL Database, good old SQL Server on premises and this new edge version.

SQL Database Edge can run in both connected and fully disconnected fashion, something that’s also important for many use cases where connectivity isn’t always a given, yet where users still need data analytics capabilities to keep their businesses (or drilling platforms, or cruise ships) running.
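To make the “write once, deploy anywhere” idea concrete, here’s a minimal sketch of querying recent time-series readings in Python. The table, columns, and connection string are hypothetical examples, not part of Microsoft’s announcement; because SQL Database Edge runs the same engine as Azure SQL Database and SQL Server, a standard T-SQL query like this should work unchanged against any of them. The pyodbc driver (a common choice for connecting to SQL Server-family databases) is imported lazily so the query builder works on its own:

```python
def recent_readings_query(table: str, minutes: int) -> str:
    """Build a T-SQL query for sensor readings from the last `minutes` minutes.

    DATEADD and SYSUTCDATETIME are standard T-SQL, so the same query text
    runs against Azure SQL Database, SQL Server, or SQL Database Edge.
    """
    return (
        f"SELECT device_id, reading_time, temperature "
        f"FROM {table} "
        f"WHERE reading_time >= DATEADD(minute, -{minutes}, SYSUTCDATETIME()) "
        f"ORDER BY reading_time DESC"
    )

def fetch_recent(conn_str: str, table: str = "SensorReadings", minutes: int = 5):
    """Run the query against any SQL Server-family engine reachable via ODBC."""
    import pyodbc  # third-party ODBC driver, installed separately
    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        cursor.execute(recent_readings_query(table, minutes))
        return cursor.fetchall()
```

Swapping the deployment target from cloud to edge here would mean changing only the connection string, which is the portability point White’s announcement is making.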

May
02
2019
--

Takeaways from F8 and Facebook’s next phase

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though touted as a developer-focused conference, Facebook spent much of F8 discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

That’s not the newsfeed, that’s not pages, that’s not profiles. That’s marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”

(Photo by Justin Sullivan/Getty Images)

Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 
