Mar 16, 2020

To make locks touchless, Proxy’s Bluetooth ID raises $42M

We need to go hands-off in the age of coronavirus. That means touching fewer doors, elevators, and sign-in iPads. But once a building is using phone-based identity for security, there are opportunities to speed up access to Wi-Fi networks and printers, or personalize conference rooms and video call set-ups. Keyless office entry startup Proxy wants to deliver all of this while keeping your phone in your pocket.

“The door is just a starting point,” Proxy co-founder and CEO Denis Mars tells me. “We’re . . . empowering a movement to take back control of our privacy, our sense of self, our humanity, our individuality.”

With the contagion concerns and security risks of people rubbing dirty, cloneable, stealable key cards against their office doors, investors see big potential in Proxy. Today it’s announcing a $42 million Series B led by Scale Venture Partners with participation from former funders Kleiner Perkins and Y Combinator plus new additions Silicon Valley Bank and West Ventures.

The raise brings Proxy to $58.8 million in funding so it can staff up at offices across the world and speed up deployments of its door sensor hardware and access control software. “We’re spread thin,” says Mars. “Part of this funding is to try to grow up as quickly as possible and not grow for growth’s sake. We’re making sure we’re secure, meeting all the privacy requirements.”

How does Proxy work? Employers get their staff to install an app that knows their identity within the company, including when and where they’re allowed entry. Buildings install Proxy’s signal readers, which can either integrate with existing access control software or the startup’s own management dashboard.

Employees can then open doors, elevators, turnstiles, and garages with a Bluetooth low-energy signal without having to even take their phone out. Bosses can also opt to require a facial scan, a fingerprint or a wave of the phone near the sensor. Existing keycards and fobs still work with Proxy’s Pro readers. Proxy costs about $300 to $350 per reader, plus installation and a $30 per month per reader subscription to its management software.

Now the company is expanding access to devices once you’re already in the building thanks to its SDK and APIs. Wi-Fi router makers are starting to pre-provision their hardware to automatically connect the phones of employees or temporarily allow registered guests with Proxy installed — no need for passwords written on whiteboards. Its new Nano sensors can also be hooked up to printers and vending machines to verify access or charge expense accounts. And food delivery companies can add the Proxy SDK so couriers can be granted the momentary ability to open doors when they arrive with lunch.
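
To make that delivery example concrete, here is a minimal sketch, in Python, of how a short-lived door grant could be modeled. Everything here is an assumption for illustration: the class, function names and flow are hypothetical and are not taken from Proxy's actual SDK or APIs.

```python
# Hypothetical sketch of a time-limited access grant, loosely modeled on the
# delivery use case described above. None of these names come from Proxy's
# real SDK; they are illustrative placeholders only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AccessGrant:
    courier_id: str          # identity token presented by the courier's app
    door_id: str             # which reader/door the grant applies to
    expires_at: datetime     # grants are short-lived by design

def grant_momentary_access(courier_id: str, door_id: str,
                           ttl_minutes: int = 15) -> AccessGrant:
    """Issue a grant that lets a registered courier open one door briefly."""
    return AccessGrant(
        courier_id=courier_id,
        door_id=door_id,
        expires_at=datetime.utcnow() + timedelta(minutes=ttl_minutes),
    )

def reader_should_unlock(grant: AccessGrant, presented_id: str,
                         door_id: str) -> bool:
    """Reader-side check: right courier, right door, not expired."""
    return (grant.courier_id == presented_id
            and grant.door_id == door_id
            and datetime.utcnow() < grant.expires_at)
```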

Rather than just indiscriminately beaming your identity out into the world, Proxy uses tokenized credentials so only its sensors know who you are. Users have to approve new networks’ ability to read their tokens, Proxy has SOC 2 security audit certification, and it complies with GDPR. “We feel very strongly about where the biometrics are stored . . . they should stay on your phone,” says Mars.
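
As a hypothetical sketch of what tokenized credentials can look like in practice, the snippet below derives a short-lived, rotating token from a device-held secret, so a reader never sees a stable identifier. Proxy has not published its actual scheme; this is a generic pattern, not theirs.

```python
# Generic rotating-token pattern (not Proxy's scheme): the phone broadcasts a
# short-lived token derived from a device secret, so readers and eavesdroppers
# never see a stable identifier. Only a backend holding the secret can map the
# token back to an identity.
import hashlib, hmac, os, time

DEVICE_SECRET = os.urandom(32)        # provisioned once, kept on the phone
ROTATION_SECONDS = 60                 # token changes every minute

def current_token(secret: bytes = DEVICE_SECRET) -> str:
    window = int(time.time() // ROTATION_SECONDS)
    return hmac.new(secret, str(window).encode(), hashlib.sha256).hexdigest()[:16]

print("token this minute:", current_token())
```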

Yet despite integrating with the technology for two-factor entry unlocks, Mars says “We’re not big fans of facial recognition. You don’t want every random company having your face in their database. The face becomes the password you were supposed to change every 30 days.”

Keeping your data and identity safe as we see an explosion of Internet of Things devices was actually the impetus for starting Proxy. Mars had sold his teleconferencing startup Bitplay to Jive Software, where he met his eventual co-founder Simon Ratner, who’d joined after his video annotation startup Omnisio was acquired by YouTube. Mars was frustrated about every IoT lightbulb and appliance wanting him to download an app, set up a profile, and give it his data.

The duo founded Proxy in 2016 as a universal identity signal. Today it has over 60 customers. While other apps want you to constantly open them, Proxy’s purpose is to work silently in the background and make people more productive. “We believe the most important technologies in the world don’t seek your attention. They work for you, they empower you, and they get out of the way so you can focus your attention on what matters most — living your life.”

Now Proxy could actually help save lives. “The nature of our product is contactless interactions in commercial buildings and workplaces so there’s a bit of an unintended benefit that helps prevent the spread of the virus,” Mars explains. “We have seen an uptick in customers starting to set doors and other experiences in longer-range hands-free mode so that users can walk up to an automated door and not have to touch the handles or badge/reader every time.”

The big challenge facing Proxy is maintaining security and dependability since it’s a mission-critical business. A bug or outage could potentially lock employees out of their workplace (when they eventually return from quarantine). It will have to keep hackers out of employee files. Proxy needs to stay ahead of access control incumbents like ADT and HID as well as smaller direct competitors like $10 million-funded Nexkey and $28 million-funded Openpath.

Luckily, Proxy has found a powerful growth flywheel. First an office in a big building gets set up, then the tenant convinces the real estate manager to equip the lobby’s turnstiles and elevators with Proxy. Other tenants in the building start to use it, so they buy Proxy for their own offices. Then those companies get their offices in other cities on board…starting the flywheel again. That’s why Proxy is doubling down on sales to commercial real estate owners.

The question is when Proxy will start knocking on consumers’ doors. While leveling up into the enterprise access control software business might be tough for home smartlock companies like August, Proxy could go down market if it built more physical lock hardware. Perhaps we’ll start to get smart homes that know who’s home, and stop having to carry pointy metal sticks in our pockets.

Feb 25, 2020

HP offers its investors billions in shareholder returns to avoid a Xerox tie-up

To ward off a hostile takeover bid by Xerox, which is a much smaller company, HP (not to be confused with Hewlett Packard Enterprise, a separate public company) is promising its investors billions and billions of dollars.

All investors have to do to get the goods is reject the Xerox deal.

In a letter to investors, HP called Xerox’s offer a “flawed value exchange” that would lead to an “irresponsible capital structure” that was being sold on “overstated synergies.” Here’s what HP is promising its owners if they do allow it to stay independent:

  • About $16 billion worth of “capital return” between its fiscal 2020 and fiscal 2022 (HP’s Q1 fiscal 2020 wrapped January 31, 2020, for reference). According to the company, the figure “represents approximately 50% of HP’s current market capitalization.” TechCrunch rates that as true, at least before the share-price gains the company posted after this news became known.
  • That capital return would be made up of a few things, including boosting the company’s share repurchase program to $15 billion (up from $5 billion, previously). More specifically, HP intends to “repurchase of at least $8 billion of HP shares over 12 months” after its fiscal 2020 meeting. The company also intends to raise its “target long-term return of capital to 100% of free cash flow generation,” allowing for the share purchases and a rising dividend payout (“HP intends to maintain dividend per share growth at least in line with earnings.”)

If all that read like a foreign language, let’s untangle it a bit. What HP is telling investors is that it intends to use all of the cash it generates to reward their ownership of shares in its business. This will come in the form of buybacks (concentrating future earnings on fewer shares, raising the value of held equity) and dividends (rising payouts to owners as HP itself makes more money), powered in part by cost-cutting (boosting cash generation and profitability).
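
To make the arithmetic concrete, here is a quick back-of-the-envelope check in Python. The capital-return and market-cap figures come from this article (the pre-announcement market cap is implied by the roughly $34 billion, post-6%-gain figure cited below); the profit and share-count numbers in the second half are purely hypothetical, included only to show why buybacks lift earnings per share.

```python
# Rough arithmetic behind the figures above (illustrative only, built from the
# numbers reported in this article plus clearly labeled assumptions).
capital_return = 16e9                     # ~$16B promised over FY2020-FY2022
market_cap_post_gain = 34e9               # ~$34B after the ~6% pop
market_cap_pre = market_cap_post_gain / 1.06
print(f"Capital return as share of pre-announcement market cap: "
      f"{capital_return / market_cap_pre:.0%}")   # ~50%, matching HP's claim

# Why buybacks concentrate earnings: the same profit spread over fewer shares.
net_income = 3.0e9          # hypothetical annual profit, for illustration only
shares_before = 1.45e9      # hypothetical share count
shares_after = shares_before - 8e9 / 22.0   # ~$8B repurchased at an assumed $22/share
print(f"EPS before: {net_income / shares_before:.2f}, "
      f"after: {net_income / shares_after:.2f}")
```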

HP is saying, in effect: Please do not sell us to Xerox; if you do not, we will do all that we can to make you money. 

Shares of HP are up 6% as of the time of writing, raising the value of HP’s consumer-focused spinout to just under $34 billion. We’ll see what investors choose for the company. But now, how did we get here?

The road to today

You may ask yourself, how did we get here (to paraphrase Talking Heads). It all began last fall when Xerox made it known that it wanted to merge with HP, offering in the range of $27 billion to buy the much larger company. As we wrote at the time:

What’s odd about this particular deal is that HP is the company with a much larger market cap of $29 billion, while Xerox is just a tad over $8 billion. The canary is eating the cat here.

HP never liked the idea of the hostile takeover attempt and the gloves quickly came off as the two companies wrangled publicly with one another, culminating with HP’s board unanimously rejecting Xerox’s offer. It called the financial underpinnings of the deal “highly conditional and uncertain.” HP also was unhappy with the aggressive nature of the offer, writing that Xerox was “intent on forcing a potential combination on opportunistic terms and without providing adequate information.”

Just one day later, Xerox responded, saying it would take the bid directly to HP shareholders in an attempt to bypass the board of directors, writing in yet another public letter, “We plan to engage directly with HP shareholders to solicit their support in urging the HP Board to do the right thing and pursue this compelling opportunity.”

In January, the shenanigans continued when Xerox announced it was putting forth a friendly slate of candidates for the HP board to replace the ones that had rejected the earlier Xerox offer. And more recently, in an attempt to convince shareholders to vote in favor of the deal, Xerox sweetened the deal to $34 billion or $24 a share.

Xerox wrote that it had ongoing conversations with large HP shareholders, and this might have gotten HP’s attention — hence its most recent move to make shareholders an offer that would be hard to refuse. The company’s next shareholder meeting takes place in April, when we will finally learn the outcome.

 

Feb 5, 2020

Calling all cosmic startups — pitch at TechCrunch’s space event in LA

Founders — it’s time to shoot for the stars. For the first time ever, TechCrunch is hosting TC Sessions: Space 2020 on June 25th in Los Angeles. But that’s not all, because on June 24th, TechCrunch will host a Pitch Night exclusively for early-stage space startups.

Yep, that’s right. On top of a packed programming day with fireside chats, breakout sessions and Q&As featuring the top experts and game changers in space, TechCrunch will select 10 startups focused on any aspect of space — whether you’re launching rockets, building the next big satellite constellation, translating space-based data into usable insights or even building a colony on the Moon. If your company is all about the new space startup race, and you are early stage, please apply. 

Step 1: Apply to pitch by May 15th. TechCrunch’s editorial team will review all applications and select 10 companies. Founders will be notified by June 7th.  

You’ll pitch your startup at a private event in front of TechCrunch editors, main-stage speakers and industry experts. Our panel of judges will select five finalists to pitch onstage at TC Sessions: Space. 

You will be pitching your startup to the most prestigious, influential and expert industry leaders, and you’ll get video coverage on TechCrunch, too! And the final perk? Each of the 10 startup teams selected for the Pitch Night will be given two free tickets to attend TC Sessions: Space 2020. Shoot your shot — apply here.

Even if you’re not necessarily interested in pitching, grab your ticket for a front-row seat to this event for the early-bird price of $349. If you are interested in bringing a group of five or more from your company, you’ll get an automatic 20% discount. We even have discounts for the government/military, nonprofit/NGOs and students currently attending university. Grab your tickets at these reduced rates before prices increase.

Is your company interested in sponsoring or exhibiting at TC Sessions: Space 2020? Contact our sponsorship sales team by filling out this form.

Feb 4, 2020

Nomagic, a startup out of Poland, picks up $8.6M for its pick-and-place warehouse robots

Factories and warehouses have been two of the biggest markets for robots in the last several years, with machines taking on mundane, if limited, processes to speed up work and free up humans to do other, more complex tasks. Now, a startup out of Poland that is widening the scope of what those robots can do is announcing funding, a sign not just of how robotic technology has been evolving, but of the growing demand for more automation, specifically in the world of logistics and fulfilment.

Nomagic, which has developed a way for a robotic arm to identify an item from an unordered selection, pick it up and then pack it into a box, is today announcing that it has raised $8.6 million in funding, one of the largest-ever seed rounds for a Polish startup. Co-led by Khosla Ventures and Hoxton Ventures, the round also included participation from DN Capital, Capnamic Ventures and Manta Ray, all previous backers of Nomagic.

There are a number of robotic arms on the market today that can be programmed to pick up and deposit items from Point A to Point B. But we are only starting to see a new wave of companies focus on bringing these to fulfilment environments because of the limitations of those arms: they can only work when the items are already “ordered” in a predictable way, such as on an assembly line, which has meant that fulfilment of, for example, online orders is usually carried out by humans.

Nomagic has incorporated a new degree of computer vision, machine learning and other AI-based technologies to elevate the capabilities of those robotic arms. Robots powered by its tech can successfully select an item from an “unstructured” group of objects — that is, not an assembly line, but potentially another box — before picking it up and placing it elsewhere.
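
To ground the idea, here is a deliberately simplified sketch of a bin-picking loop of the kind described above. It is not Nomagic's software: the detector, grasp planner and arm driver here are placeholder stubs standing in for the real, learned components.

```python
# A generic, highly simplified pick-and-place loop. The vision and grasp
# functions are stubs; a real system would use trained models and a calibrated
# robot-arm driver in their place.
from typing import List, Optional

def detect_items(camera_frame) -> List[dict]:
    # Vision step: in a real system, a trained model segments the unordered
    # bin and returns object poses with confidence scores.
    return [{"id": "sku-123", "pose": (0.1, 0.2, 0.05), "score": 0.92}]

def plan_grasp(item: dict) -> Optional[dict]:
    # Grasp planning: choose a gripper pose for a possibly occluded item,
    # or return None when no stable grasp is found.
    x, y, z = item["pose"]
    return {"approach_pose": (x, y, z + 0.10), "grip_pose": (x, y, z)}

def pick_and_place_cycle(camera_frame, arm, drop_pose) -> bool:
    """One cycle: look into the unstructured bin, grasp one item, pack it."""
    for item in sorted(detect_items(camera_frame),
                       key=lambda i: i["score"], reverse=True):
        grasp = plan_grasp(item)
        if grasp is None:
            continue                      # try the next-best candidate
        arm.move_to(grasp["approach_pose"])
        arm.move_to(grasp["grip_pose"])
        arm.close_gripper()
        arm.move_to(drop_pose)
        arm.open_gripper()
        return True
    return False                          # nothing graspable; flag for a human
```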

Kacper Nowicki, the ex-Googler CEO of Nomagic who co-founded the company with Marek Cygan (an academic) and Tristan d’Orgeval (formerly of Climate Corporation), noted that while there has been some work on the problem of unstructured objects and industrial robots — in the US, there are some live implementations taking shape, with one, Covariant, recently exiting stealth mode — it has been mostly a “missing piece” in terms of the innovation that has been done to make logistics and fulfilment more efficient.

That is to say, there has been little in the way of bigger commercial rollouts of the technology, creating an opportunity in what is a huge market: fulfilment services are projected to be a $56 billion market by 2021 (currently the US is the biggest single region, estimated at between $13.5 billion and $15.5 billion).

“If every product were a tablet or phone, you could automate a regular robotic arm to pick and pack,” Nowicki said. “But if you have something else, say something in plastic, or a really huge diversity of products, then that is where the problems come in.”

Nowicki was a longtime Googler who moved from Silicon Valley back to Poland to build the company’s first engineering team in the country. In his years at Google, Nowicki worked in areas including Google Cloud and search, but also saw the AI developments underway at Google’s DeepMind subsidiary, and decided he wanted to tackle a new problem for his next challenge.

His interest underscores what has been something of a fork in artificial intelligence in recent years. While some of the earliest implementations of the principles of AI were indeed on robots, these days a lot of robotic hardware seems clunky and even outmoded, while much more of the focus of AI has shifted to software and “non-physical” systems aimed at replicating and improving upon human thought. Even the word “robot” is now just as likely to be seen in the phrase “robotic process automation”, which in fact has nothing to do with physical robots, but software.

“A lot of AI applications are not that appealing,” Nowicki simply noted (indeed, while Nowicki didn’t spell it out, DeepMind in particular has faced a lot of controversy over its own work in areas like healthcare). “But improvements in existing robotics systems by applying machine learning and computer vision so that they can operate in unstructured environments caught my attention. There has been so little automation actually in physical systems, and I believe it’s a place where we still will see a lot of change.”

Interestingly, while the company is focusing on hardware, it’s not actually building hardware per se, but is working on software that can run on the most popular robotic arms in the market today to make them “smarter”.

“We believe that most of the intellectual property in AI is in the software stack, not the hardware,” said d’Orgeval. “We look at it as a mechatronics problem, but even there, we believe that this is mainly a software problem.”

Having Khosla as a backer is notable given that a very large part of the VC’s prolific investing has been in North America up to now. Nowicki said he had a connection to the firm by way of his time in the Bay Area, where before Google, Vinod Khosla backed a startup of his (which went bust in one of the dot-com downturns).

While there is an opportunity for Nomagic to take its idea global, for now Khosla is interested because of a closer opportunity at home, where Nomagic is already working with third-party logistics and fulfilment providers, as well as retailers like Cdiscount, a French Amazon-style, soup-to-nuts online marketplace.

“The Nomagic team has made significant strides since its founding in 2017,” says Sven Strohband, Managing Director of Khosla Ventures, in a statement. “There’s a massive opportunity within the European market for warehouse robotics and automation, and NoMagic is well-positioned to capture some of that market share.”

Jan 12, 2020

Samsung launches the rugged, enterprise-ready Galaxy XCover Pro

We got a bit of a surprise at the end of CES: some hands-on time with Samsung’s latest rugged phone for the enterprise, the Galaxy XCover Pro. The XCover Pro, which is officially launching today, is a mid-range $499 phone for first-line workers like flight attendants, construction workers or nurses.

It is meant to be very rugged but without the usual bulk that comes with that. With its IP68 rating, Military Standard 810 certification and the promise that it will survive a drop from 1.5 meters (4.9 feet) without a case, it should definitely be able to withstand quite a bit of abuse.

While Samsung is aiming this phone at the enterprise market, the company tells us that it will also sell it to individual customers.

As Samsung stressed during our briefing, the phone is meant for all-day use in the field, with a 4,050 mAh replaceable battery (yes, you read that right, you can replace the battery just like on phones from a few years ago). It’ll feature 4GB of RAM and 64GB of storage space, but you can extend that up to 512GB thanks to the built-in microSD slot. The 6.3-inch FHD+ screen won’t wow you, but it seemed perfectly adequate for most of the use cases. That screen, the company says, should work even in rain or snow and features a glove mode, too.

And while this is obviously not a flagship phone, Samsung still decided to give it a dual rear camera setup, with a standard 25MP sensor and a wide-angle 8MP sensor for those times when you might want to get the full view of a construction site, for example. On the front, there is a small cutout for a 13MP camera, too.

All of this is powered by a 2GHz octa-core Exynos 9611 processor, as one would expect from a Samsung mid-range phone, as well as Android 10.

Traditionally, rugged phones came with large rubber edges (or users decided to put even larger cases around them). The XCover Pro, on the other hand, feels slimmer than most regular phones with a rugged case on them.

By default, the phone features NFC support for contactless payments (the phone has been approved to be part of Visa’s Tap to Phone pilot program) and two programmable buttons so that companies can customize their phones for their specific use cases. One of the first partners here is Microsoft, which lets you map a button to its recently announced walkie talkie feature in Microsoft Teams.

“Microsoft and Samsung have a deep history of bringing together the best hardware and software to help solve our customers’ challenges,” said Microsoft CEO Satya Nadella in today’s announcement. “The powerful combination of Microsoft Teams and the new Galaxy XCover Pro builds on this partnership and will provide frontline workers everywhere with the technology they need to be more collaborative, productive and secure.”

The XCover Pro also offers Pogo pin charging support and compatibility with third-party add-ons for scanners, credit card readers and other peripherals from partners like Infinite Peripherals, KOAMTAC, Scandit and Visa.

No enterprise device is complete without security features, and the XCover Pro obviously supports all of Samsung’s various Knox enterprise security tools, while access to the phone itself is controlled by both a facial recognition system and a fingerprint reader that’s built into the power button.

With the Tab Active Pro, Samsung has long offered a rugged tablet for first-line workers. Not everybody needs a full-sized tablet, though, so the XCover Pro fills what Samsung clearly believes is a gap in the market: a device that offers always-on connectivity in a smaller package, in the form of a phone that doesn’t look unlike a consumer device.

I could actually imagine that there are quite a few consumers who may opt for this device. For a while, the company made phones like the Galaxy S8 Active that traded weight and size for larger batteries and ruggedness. The XCover Pro isn’t officially a replacement for that line, but it may just find its fans among former Galaxy Active users.

Nov 19, 2019

The Cerebras CS-1 computes deep learning AI problems by being bigger, bigger, and bigger than any other chip

Deep learning is all the rage these days in enterprise circles, and it isn’t hard to understand why. Whether it is optimizing ad spend, finding new drugs to cure cancer, or just offering better, more intelligent products to customers, machine learning — and particularly deep learning models — have the potential to massively improve a range of products and applications.

The key word though is ‘potential.’ While we have heard oodles of words sprayed across enterprise conferences the last few years about deep learning, there remain huge roadblocks to making these techniques widely available. Deep learning models are highly networked, with dense graphs of nodes that don’t “fit” well with the traditional ways computers process information. Plus, holding all of the information required for a deep learning model can take petabytes of storage and racks upon racks of processors in order to be usable.

There are lots of approaches underway right now to solve this next-generation compute problem, and Cerebras has to be among the most interesting.

As we talked about in August with the announcement of the company’s “Wafer Scale Engine” — the world’s largest silicon chip according to the company — Cerebras’ theory is that the way forward for deep learning is to essentially just get the entire machine learning model to fit on one massive chip. And so the company aimed to go big — really big.

Today, the company announced the launch of its end-user compute product, the Cerebras CS-1, and also announced its first customer: Argonne National Laboratory.

The CS-1 is a “complete solution” product designed to be added to a data center to handle AI workflows. It includes the Wafer Scale Engine (or WSE, i.e. the actual processing core) plus all the cooling, networking, storage, and other equipment required to operate and integrate the processor into the data center. It’s 26.25 inches tall (15 rack units), and includes 400,000 processing cores, 18 gigabytes of on-chip memory, 9 petabytes per second of on-die memory bandwidth, 12 100-gigabit ethernet connections to move data in and out of the CS-1 system, and sucks just 20 kilowatts of power.

A cross-section look at the CS-1. Photo via Cerebras

Cerebras claims that the CS-1 delivers the performance of more than 1,000 leading GPUs combined — a claim that TechCrunch hasn’t verified, although we are intently waiting for industry-standard benchmarks in the coming months when testers get their hands on these units.

In addition to the hardware itself, Cerebras also announced the release of a comprehensive software platform that allows developers to use popular ML libraries like TensorFlow and PyTorch to integrate their AI workflows with the CS-1 system.
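
Since the platform is described as accepting models written in standard frameworks, here is an ordinary PyTorch training step of the kind that would be handed to it. This is plain PyTorch only; no Cerebras-specific API appears here, and how the stack compiles and places the model on the WSE is not shown in this article.

```python
# Plain PyTorch model and training step. Per the article, Cerebras' software
# layer is meant to take code like this and target the CS-1; that compilation
# step is the vendor's, and nothing Cerebras-specific is used below.
import torch
import torch.nn as nn

class SimpleClassifier(nn.Module):
    def __init__(self, in_dim: int = 784, hidden: int = 256, classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SimpleClassifier()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One ordinary training step on random data; the pitch is that the same
# high-level code should run whatever backend sits underneath.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```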

In designing the system, CEO and co-founder Andrew Feldman said that “We’ve talked to more than 100 customers over the past year and a bit,” in order to determine the needs for a new AI system and the software layer that should go on top of it. “What we’ve learned over the years is that you want to meet the software community where they are rather than asking them to move to you.”

I asked Feldman why the company was rebuilding so much of the hardware to power their system, rather than using already existing components. “If you were to build a Ferrari engine and put it in a Toyota, you cannot make a race car,” Feldman analogized. “Putting fast chips in Dell or [other] servers does not make fast compute. What it does is it moves the bottleneck.” Feldman explained that the CS-1 was meant to take the underlying WSE chip and give it the infrastructure required to allow it to perform to its full capability.

A diagram of the Cerebras CS-1 cooling system. Photo via Cerebras.

That infrastructure includes a high-performance water cooling system to keep this massive chip and platform operating at the right temperatures. I asked Feldman why Cerebras chose water, given that water cooling has traditionally been complicated in the data center. He said, “We looked at other technologies — freon. We looked at immersive solutions, we looked at phase-change solutions. And what we found was that water is extraordinary at moving heat.”

A side view of the CS-1 with its water and air cooling systems visible. Photo via Cerebras.

Why then make such a massive chip, which, as we discussed back in August, has huge engineering requirements to operate compared to smaller chips that have better yield from wafers? Feldman said that “it massively reduces communication time by using locality.”

In computer science, locality means placing data and compute in the right places within, let’s say, a cloud, in a way that minimizes delays and processing friction. By having a chip that can theoretically host an entire ML model on it, there’s no need for data to flow through multiple storage clusters or ethernet cables — everything that the chip needs to work with is available almost immediately.
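
As a rough, order-of-magnitude illustration of that point, the gap between staying on-die and crossing a network adds up quickly. All three numbers below are assumptions for the sake of the example, not CS-1 measurements.

```python
# Order-of-magnitude sketch of why locality matters. The figures are typical
# rounded assumptions, not vendor numbers: on-chip memory accesses cost
# nanoseconds, while fetching data across a datacenter network costs
# microseconds per hop.
on_die_access_ns = 5            # rough on-chip SRAM access time, assumed
network_hop_us = 10             # rough datacenter round trip per hop, assumed
fetches = 1_000_000             # fetches during one training step, assumed

on_die_total_ms = fetches * on_die_access_ns / 1e6
off_die_total_ms = fetches * network_hop_us * 1e3 / 1e6
print(f"on-die: {on_die_total_ms:.0f} ms vs off-die: {off_die_total_ms:.0f} ms")
```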

According to a statement from Cerebras and Argonne National Laboratory, Cerebras is helping to power research in “cancer, traumatic brain injury and many other areas important to society today” at the lab. Feldman said that “It was very satisfying that right away customers were using this for things that are important and not for 17-year-old girls to find each other on Instagram or some shit like that.”

(Of course, one hopes that cancer research pays as well as influencer marketing when it comes to the value of deep learning models).

Cerebras itself has grown rapidly, reaching 181 engineers today according to the company. Feldman says that the company is heads down on customer sales and additional product development.

It has certainly been a busy time for startups in the next-generation artificial intelligence workflow space. Graphcore just announced this weekend that its chips were being installed in Microsoft’s Azure cloud, while I covered the funding of NUVIA, a startup led by the former lead chip designers from Apple who hope to apply their mobile backgrounds to solve the extreme power requirements these AI chips force on data centers.

Expect ever more announcements and activity in this space as deep learning continues to find new adherents in the enterprise.

Nov 15, 2019

Three of Apple and Google’s former star chip designers launch NUVIA with $53M in series A funding

Silicon is apparently the new gold these days, or so VCs hope.

What was once a no-go zone for venture investors, who feared the long development lead times and high technical risk required for new entrants in the semiconductor field, has now turned into one of the hottest investment areas for enterprise and data VCs. Startups like Graphcore have reached unicorn status (after its $200 million series D a year ago), Groq closed $52M from the likes of Chamath Palihapitiya of Social Capital fame, and Cerebras, which I profiled a bit this summer, raised $112 million in investment from Benchmark and others while announcing that it had produced the first trillion-transistor chip.

Today, we have another entrant with another great technical team at the helm, this time with a Santa Clara, CA-based startup called NUVIA. The company announced this morning that it has raised a $53 million series A venture round co-led by Capricorn Investment Group, Dell Technologies Capital (DTC), Mayfield, and WRVI Capital, with participation from Nepenthe LLC.

Despite only getting started earlier this year, the company currently has roughly 60 employees, with 30 more at various stages of accepted offers, and it may even crack 100 employees before the end of the year.

What’s happening here is a combination of trends in the compute industry. There has been an explosion in data and, by extension, in the data centers required to store all of that information, just as we have exponentially expanded our appetite for complex machine learning algorithms to crunch through all of those bits. Unfortunately, the growth in computation power is not keeping pace with our demands as Moore’s Law slows. Companies like Intel are hitting the limits of physics and our current know-how to continue to improve computational densities, opening the ground for new entrants and new approaches to the field.

Finding and building a dream team with a “chip” on their shoulder

There are two halves to the NUVIA story. First is the story of the company’s founders, which include John Bruno, Manu Gulati, and Gerard Williams III, who will be CEO. The three overlapped for a number of years at Apple, where they brought their diverse chip skillsets together to lead a variety of initiatives including Apple’s A-series of chips that power the iPhone and iPad. According to a press statement from the company, the founders have worked on a combined 20 chips across their careers and have received more than 100 patents for their work in silicon.

Gulati joined Apple in 2009 as a microarchitect (or SoC architect) after a career at Broadcom, and a few months later, Williams joined the team as well. Gulati explained to me in an interview that, “So my job was kind of putting the chip together; his job was delivering the most important piece of IP that went into it, which is the CPU.” A few years later in around 2012, Bruno was poached from AMD and brought to Apple as well.

Gulati said that when Bruno joined, it was expected he would be a “silicon person” but his role quickly broadened to think more strategically about what the chipset of the iPhone and iPad should deliver to end users. “He really got into this realm of system-level stuff and competitive analysis and how do we stack up against other people and what’s happening in the industry,” he said. “So three very different technical backgrounds, but all three of us are very, very hands-on and, you know, just engineers at heart.”

Gulati would take an opportunity at Google in 2017 aimed broadly around the company’s mobile hardware, and he eventually pulled over Bruno from Apple to join him. The two eventually left Google earlier this year, departures first reported by The Information in May. For his part, Williams stayed at Apple for nearly a decade before leaving earlier this year in March.

The company is being stealthy about exactly what it is working on, which is typical in the silicon space because it can take years to design, manufacture, and get a product into market. That said, what’s interesting is that while the troika of founders all have a background in mobile chipsets, they are indeed focused on the data center broadly conceived (i.e. cloud computing) and, specifically, reading between the lines, on finding more energy-efficient designs that can combat the rising climate cost of machine learning workflows and computation-intensive processing.

Gulati told me that “for us, energy efficiency is kind of built into the way we think.”

The company’s CMO did tell me that the startup is building “a custom clean sheet designed from the ground up” and isn’t encumbered by legacy designs. In other words, the company isn’t building on top of ARM or other existing chip architectures.

Building an investor syndicate that’s willing to “chip” in

Outside of the founders, the other half of this NUVIA story is the collective of investors sitting around the table, all of whom not only have deep technical backgrounds, but also the deep pockets to handle the technical risk that comes with new silicon startups.

Capricorn specifically invested out of what it calls its Technology Impact Fund, which focuses on funding startups that use technology to make a positive impact on the world. Its portfolio according to a statement includes Tesla, Planet Labs, and Helion Energy.

Meanwhile, DTC is the venture wing of Dell Technologies and its associated companies, and brings a deep background in enterprise and data centers, particularly from the group’s server business like Dell EMC. Scott Darling, who leads DTC, is joining NUVIA’s board, although the company is not disclosing the board composition at this time. Navin Chaddha, an electrical engineer by training who leads Mayfield, has invested in companies like HashiCorp, Akamai, and SolarCity. Finally, WRVI has a long background in enterprise and semiconductor companies.

I chatted a bit with Darling of DTC about what he saw in this particular team and their vision for the data center. In addition to liking each founder individually, Darling felt the team as a whole was just very strong. “What’s most impressive is that if you look at them collectively, they have a skillset and breadth that’s also stunning,” he said.

He confirmed that the company is broadly working on data center products, but said the company is going to lie low on its specific strategy during product development. “No point in being specific, it just engenders immune reactions from other players so we’re just going to be a little quiet for a while,” he said.

He apologized for “sounding incredibly cryptic” but said that the investment thesis from his perspective for the product was that “the data center market is going to be receptive to technology evolutions that have occurred in places outside of the data center that’s going to allow us to deliver great products to the data center.”

Interpolating that statement a bit with the mobile chip backgrounds of the founders at Google and Apple, it seems evident that the extreme energy-to-performance constraints of mobile might find some use in the data center, particularly given the heightened concerns about power consumption and climate change among data center owners.

DTC has been a frequent investor in next-generation silicon, including joining the series A investment of Graphcore back in 2016. I asked Darling whether the firm was investing aggressively in the space or sort of taking a wait-and-see attitude, and he explained that the firm tries to keep a consistent volume of investments at the silicon level. “My philosophy on that is, it’s kind of an inverted pyramid. No, I’m not gonna do a ton of silicon plays. If you look at it, I’ve got five or six. I think of them as the foundations on which a bunch of other stuff gets built on top,” he explained. He noted that each investment in the space is “expensive” given the work required to design and field a product, and so these investments have to be carefully made with the intention of supporting the companies for the long haul.

That explanation was echoed by Gulati when I asked how he and his co-founders came to closing on this investor syndicate. Given the reputations of the three, they would have had easy access to any VC in the Valley. He said about the final investors:

They understood that putting something together like this is not going to be easy and it’s not for everybody … I think everybody understands that there’s an opportunity here. Actually capitalizing upon it and then building a team and executing on it is not something that just anybody could possibly take on. And similarly, it is not something that every investor could just possibly take on in my opinion. They themselves need to have a vision on their side and not just believe our story. And they need to strategically be willing to help and put in the money and be there for the long haul.

It may be a long haul, but Gulati noted that “on a day-to-day basis, it’s really awesome to have mostly friends you work with.” With perhaps 100 employees by the end of the year and tens of millions of dollars already in the bank, they have their war chest and their army ready to go. Now comes the fun (and hard) part as we learn how the chips fall.

Oct 8, 2019

Arm brings custom instructions to its embedded CPUs

At its annual TechCon event in San Jose, Arm today announced Custom Instructions, a new feature of its Armv8-M architecture for embedded CPUs that, as the name implies, enables its customers to write their own custom instructions to accelerate their specific use cases for embedded and IoT applications.

“We already have ways to add acceleration, but not as deep and down to the heart of the CPU. What we’re giving [our customers] here is the flexibility to program your own instructions, to define your own instructions — and have them executed by the CPU,” Thomas Ensergueix, Arm’s senior director for its automotive and IoT business, told me ahead of today’s announcement.

He noted that Arm has always had a continuum of options for acceleration, starting with its memory-mapped architecture for connecting GPUs and today’s neural processing units over a bus. This allows the CPU and the accelerator to run in parallel, but with the bus being the bottleneck. Customers also can opt for a co-processor that’s directly connected to the CPU, but today’s news essentially allows Arm customers to create their own accelerated algorithms that then run directly on the CPU. That means the latency is low, but it’s not running in parallel, as with the memory-mapped solution.


As Arm argues, this setup allows for the lowest-cost (and risk) path for integrating customer workload acceleration, as there are no disruptions to the existing CPU features and it still allows its customers to use the existing standard tools with which they are already familiar.

For now, Custom Instructions will only be available to be implemented in the Arm Cortex-M33 CPUs, starting in the first half of 2020. By default, the feature will also be available for all future Cortex-M processors. There are no additional costs or new licenses to buy for Arm’s customers.

Ensergueix noted that as we’re moving to a world with more and more connected devices, more of Arm’s customers will want to optimize their processors for their often very specific use cases — and often they’ll want to do so because by creating custom instructions, they can get a bit more battery life out of these devices, for example.

Arm has already lined up a number of partners to support Custom Instructions, including IAR Systems, NXP, Silicon Labs and STMicroelectronics.

“Arm’s new Custom Instructions capabilities allow silicon suppliers like NXP to offer their customers a new degree of application-specific instruction optimizations to improve performance, power dissipation and static code size for new and emerging embedded applications,” writes NXP’s Geoff Lees, SVP and GM of Microcontrollers. “Additionally, all these improvements are enabled within the extensive Cortex-M ecosystem, so customers’ existing software investments are maximized.”

In related embedded news, Arm also today announced that it is setting up a governance model for Mbed OS, its open-source operating system for embedded devices that run an Arm Cortex-M chip. Mbed OS has always been open source, but the Mbed OS Partner Governance model will allow Arm’s Mbed silicon partners to have more of a say in how the OS is developed through tools like a monthly Product Working Group meeting. Partners like Analog Devices, Cypress, Nuvoton, NXP, Renesas, Realtek, Samsung and u-blox are already participating in this group.

Sep 12, 2019

The mainframe business is alive and well, as IBM announces new z15

It’s easy to think of mainframes as some kind of technology dinosaur, but the fact is these machines remain a key component of many large organizations’ computing strategies. Today, IBM announced the latest in its line of mainframe computers, the z15.

For starters, as you would probably expect, these are big and powerful machines capable of handling enormous workloads. For example, this baby can process up to 1 trillion web transactions a day and handle 2.4 million Docker containers, while offering unparalleled security to go with that performance. This includes the ability to encrypt data once, and it stays encrypted, even when it leaves the system, a huge advantage for companies with a hybrid strategy.

Speaking of which, you may recall that IBM bought Red Hat last year for $34 billion. That deal closed in July and the companies have been working to incorporate Red Hat technology across the IBM business including the z line of mainframes.

IBM announced last month that it was making OpenShift, Red Hat’s Kubernetes-based cloud-native tools, available on the mainframe running Linux. This should enable developers, who have been working on OpenShift on other systems, to move seamlessly to the mainframe without special training.

IBM sees the mainframe as a bridge for hybrid computing environments, offering a highly secure place for data that when combined with Red Hat’s tools, can enable companies to have a single control plane for applications and data wherever it lives.

While it could be tough to justify the cost of these machines in the age of cloud computing, Ray Wang, founder and principal analyst at Constellation Research, says it could be more cost-effective than the cloud for certain customers. “If you are a new customer, and currently in the cloud and develop on Linux, then in the long run the economics are there to be cheaper than public cloud if you have a lot of IO, and need to get to a high degree of encryption and security,” he said.

He added, “The main point is that if you are worried about being held hostage by public cloud vendors on pricing, in the long run the z is a cost-effective and secure option for owning compute power and working in a multi-cloud, hybrid cloud world.”

Companies like airlines and financial services companies continue to use mainframes, and while they need the power these massive machines provide, they need to do so in a more modern context. The z15 is designed to provide that link to the future, while giving these companies the power they need.

Sep 10, 2019

Q-CTRL raises $15M for software that reduces error and noise in quantum computing hardware

As hardware makers continue to work on ways of making wide-scale quantum computing a reality, a startup out of Australia that is building software to help reduce noise and errors on quantum computing machines has raised a round of funding to fuel its U.S. expansion.

Q-CTRL is designing firmware for computers and other machines (such as quantum sensors) that perform quantum calculations, firmware that identifies the potential for errors and suppresses them, making the machines more resistant and able to stay working for longer (the Q in its name is a reference to qubits, the basic building block of quantum computing).

The startup is today announcing that it has raised $15 million, money that it plans to use to double its team (currently numbering 25) and set up shop on the West Coast, specifically Los Angeles.

This Series A is coming from a list of backers that speaks to the startup’s success to date in courting quantum hardware companies as customers. Led by Square Peg Capital — a prolific Australian VC that has backed homegrown startups like Bugcrowd and Canva, but also those further afield such as Stripe — it also includes new investor Sierra Ventures as well as Sequoia Capital, Main Sequence Ventures and Horizons Ventures.

Q-CTRL’s customers are some of the bigger names in quantum computing and IT, such as Rigetti, Bleximo and Accenture, among others. IBM — which earlier this year unveiled its first commercial quantum computer — singled it out last year for its work in advancing quantum technology.

The problem that Q-CTRL is aiming to address is basic but arguably critical to solving if quantum computing ever hopes to make the leap out of the lab and into wider use in the real world.

Quantum computers and other machines like quantum sensors, which are built on quantum physics architecture, are able to perform computations that go well beyond what can be done by normal computers today, with the applications for such technology including cryptography, biosciences, advanced geological exploration and much more. But quantum computing machines are known to be unstable, in part because of the fragility of the quantum state, which introduces a lot of noise and subsequent errors, which results in crashes.

As Frederic pointed out recently, scientists are confident that this is ultimately a solvable issue. Q-CTRL is one of the hopefuls working on that, by providing a set of tools that runs on quantum machines, visualises noise and decoherence and then deploys controls to “defeat” those errors.
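
To give a flavor of what deploying such controls means, here is a toy, textbook-style example in plain NumPy: instead of letting a qubit idle through a window where slow dephasing noise accumulates, refocusing pi-pulses are scheduled at CPMG-style intervals. This is generic background on the control-engineering idea, not Q-CTRL's products or API.

```python
# Toy illustration of noise-suppressing control: compute the timing of a
# CPMG-style dynamical decoupling sequence for an idle window. Generic
# textbook material; nothing here comes from Q-CTRL's software.
import numpy as np

def cpmg_pulse_times(total_time: float, n_pulses: int) -> np.ndarray:
    """Times of n equally spaced pi-pulses over [0, total_time], each centered
    in its slice (the classic CPMG spacing: T/2n, 3T/2n, 5T/2n, ...)."""
    k = np.arange(1, n_pulses + 1)
    return total_time * (2 * k - 1) / (2 * n_pulses)

T = 100e-6  # a 100-microsecond idle window, chosen arbitrarily for the example
for n in (1, 2, 4):
    print(f"{n} pulse(s):", np.round(cpmg_pulse_times(T, n) * 1e6, 2), "µs")
```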

Q-CTRL currently has four products it offers to the market: Black Opal, Boulder Opal, Open Controls and Devkit — aimed respectively at students/those exploring quantum computing, hardware makers, the research community and end users/algorithm developers.

Q-CTRL was founded in 2017 by Michael Biercuk, a professor of Quantum Physics & Quantum Technology at the University of Sydney and a chief investigator in the Australian Research Council Centre of Excellence for Engineered Quantum Systems, who studied in the U.S., with a PhD in physics from Harvard.

“Being at the vanguard of the birth of a new industry is extraordinary,” he said in a statement. “We’re also thrilled to be assembling one of the most impressive investor syndicates in quantum technology. Finding investors who understand and embrace both the promise and the challenge of building quantum computers is almost magical.”

Why choose Los Angeles for building out a U.S. presence, you might ask? Southern California, it turns out, has shaped up to be a key area for quantum research and development, with several of the universities in the region building out labs dedicated to the area, and companies like Lockheed Martin and Google also contributing to the ecosystem. This means a strong pipeline of talent and conversation in what is still a nascent area.

Given that it is still early days for quantum computing technology, that gives a lot of potential options to a company like Q-CTRL longer-term: The company might continue to build a business as it does today, selling its technology to a plethora of hardware makers and researchers in the field; or it might get snapped up by a specific hardware company to integrate Q-CTRL’s solutions more closely onto its machines (and keep them away from competitors).

Or, it could make like a quantum particle and follow both of those paths at the same time.

“Q-CTRL impressed us with their strategy; by providing infrastructure software to improve quantum computers for R&D teams and end-users, they’re able to be a central player in bringing this technology to reality,” said Tushar Roy, a partner at Square Peg. “Their technology also has applications beyond quantum computing, including in quantum-based sensing, which is a rapidly-growing market. In Q-CTRL we found a rare combination of world-leading technical expertise with an understanding of customers, products and what it takes to build an impactful business.”
