Sep
07
2021
--

Seqera Labs grabs $5.5M to help sequence COVID-19 variants and other complex data problems

Bringing order and understanding to unstructured information scattered across disparate silos has been one of the more significant breakthroughs of the big data era. Today, a European startup that has built a platform to tackle this challenge specifically in the area of life sciences — and whose technology has notably been used by labs to sequence and, so far, identify two major COVID-19 variants — is announcing funding to extend its tools to a wider set of use cases, and to expand into North America.

Seqera Labs, a Barcelona-based data orchestration and workflow platform tailored to help scientists and engineers order and gain insights from cloud-based genomic data troves, as well as to tackle other life science applications that involve harnessing complex data from multiple locations, has raised $5.5 million in seed funding.

Talis Capital and Speedinvest co-led this round, with participation also from previous backer BoxOne Ventures and a grant from the Chan Zuckerberg Initiative, Mark Zuckerberg and Dr. Priscilla Chan’s effort to back open source software projects for science applications.

Seqera — a portmanteau of “sequence” and “era”, the age of sequencing data, basically — had previously raised less than $1 million and, quietly, is already generating revenue, with five of the world’s biggest pharmaceutical companies in its customer base, alongside biotech and other life sciences customers.

Seqera was spun out of the Centre for Genomic Regulation (CRG), a biomedical research center based in Barcelona, where it was built as the commercial application of Nextflow, the open source workflow and data orchestration software originally created at the CRG by Seqera’s founders, Evan Floden and Paolo Di Tommaso.

Floden, Seqera’s CEO, told TechCrunch that he and Di Tommaso were motivated to create Seqera in 2018 after seeing Nextflow gain a lot of traction in the life science community, and subsequently getting a lot of repeat requests for further customization and features. Both Nextflow and Seqera have seen a lot of usage: the Nextflow runtime has been downloaded more than 2 million times, the company said, while Seqera’s commercial cloud offering has now processed more than 5 billion tasks.

The COVID-19 pandemic is a classic example of the acute challenge that Seqera (and, by association, Nextflow) aims to address in the scientific community. With COVID-19 outbreaks happening globally, each time a test for COVID-19 is processed in a lab, live genetic samples of the virus get collected. Taken together, these millions of tests represent a goldmine of information about the coronavirus: how it is mutating, and when and where it is doing so. For a new virus about which so little is understood, and which is still spreading, that’s invaluable data.

So the problem is not whether the data exists for better insights (it does); it is that it’s nearly impossible to use legacy tools to view that data as a holistic body. It’s in too many places, there is too much of it, and it’s growing (and changing) every day, which means the traditional approach of porting data to a centralized location to run analytics on it just wouldn’t be efficient, and would cost a fortune to execute.

That is where Seqera comes in. The company’s technology treats each source of data across different clouds as a discrete pipeline that can be merged and analyzed as a single body, without the data ever leaving the boundaries of the infrastructure where it already lives. With the platform customised to focus on genomic troves, scientists can then query that information for more insights. Seqera was central to the discovery of both the Alpha and Delta variants of the virus, and work is still ongoing as COVID-19 continues to hammer the globe.

Seqera is being used in other kinds of medical applications, such as in the realm of so-called “precision medicine.” This is emerging as a very big opportunity in complex fields like oncology: cancer mutates and behaves differently depending on many factors, including genetic differences of the patients themselves, which means that treatments are less effective if they are “one size fits all.”

Increasingly, we are seeing approaches that leverage machine learning and big data analytics to better understand individual cancers and how they develop for different populations, to subsequently create more personalized treatments, and Seqera comes into play as a way to sequence that kind of data.

This also highlights something else notable about the Seqera platform: it is used directly by the people who are analyzing the data — that is, the researchers and scientists themselves, without data specialists necessarily needing to get involved. This was a practical priority for the company, Floden told me, but nonetheless, it’s an interesting detail of how the platform is inadvertently part of that bigger trend of “no-code/low-code” software, designed to make highly technical processes usable by non-technical people.

It’s both the existing opportunity and the way Seqera might be applied in the future across other kinds of data that live in the cloud that make it an interesting company, and, it seems, an interesting investment, too.

“Advancements in machine learning, and the proliferation of volumes and types of data, are leading to increasingly more applications of computer science in life sciences and biology,” said Kirill Tasilov, principal at Talis Capital, in a statement. “While this is incredibly exciting from a humanity perspective, it’s also skyrocketing the cost of experiments to sometimes millions of dollars per project as they become computer-heavy and complex to run. Nextflow is already a ubiquitous solution in this space and Seqera is driving those capabilities at an enterprise level – and in doing so, is bringing the entire life sciences industry into the modern age. We’re thrilled to be a part of Seqera’s journey.”

“With the explosion of biological data from cheap, commercial DNA sequencing, there is a pressing need to analyse increasingly growing and complex quantities of data,” added Arnaud Bakker, principal at Speedinvest. “Seqera’s open and cloud-first framework provides an advanced tooling kit allowing organisations to scale complex deployments of data analysis and enable data-driven life sciences solutions.”

Although medicine and life sciences are perhaps Seqera’s most obvious and timely applications today, the framework originally designed for genetics and biology can be applied to any number of other areas: AI training, image analysis and astronomy are three early use cases, Floden said. Astronomy is perhaps very apt, since it seems that the sky is the limit.

“We think we are in the century of biology,” Floden said. “It’s the center of activity and it’s becoming data-centric, and we are here to build services around that.”

Seqera is not disclosing its valuation with this round.

May
18
2021
--

Artificial raises $21M led by Microsoft’s M12 for a lab automation platform aimed at life sciences R&D

Automation is extending into every aspect of how organizations get work done, and today comes news of a startup that is building tools for one industry in particular: life sciences. Artificial, which has built a software platform for laboratories to assist with, or in some cases fully automate, research and development work, has raised $21.5 million.

It plans to use the funding to continue building out its software and capabilities, to hire more people, and for business development, according to Artificial’s CEO and co-founder David Fuller. The company already has a number of customers, including Thermo Fisher and Beam Therapeutics, using its software directly and in partnership to serve their own customers. Sold as aLab Suite, Artificial’s technology can both orchestrate and manage the robotic machines that labs might be using to handle some work, and help assist scientists when they are carrying out the work themselves.

“The basic premise of what we’re trying to do is accelerate the rate of discovery in labs,” Fuller said in an interview. He believes the process of bringing in more AI into labs to improve how they work is long overdue. “We need to have a digital revolution to change the way that labs have been operating for the last 20 years.”

The Series A is being led by Microsoft’s venture fund M12 — a financial and strategic investor — with Playground Global and AME Cloud Ventures also participating. Playground Global, the VC firm co-founded by ex-Google exec and Android co-creator Andy Rubin (who is no longer with the firm), has been focusing on robotics and life sciences and it led Artificial’s first and only other round. Artificial is not disclosing its valuation with this round.

Fuller hails from a background in robotics, specifically industrial robots and automation. Before founding Artificial in 2019, he was at Kuka, the German robotics maker, for a number of years, culminating in the role of CTO; prior to that, Fuller spent 20 years at National Instruments, the instrumentation, test equipment and industrial software giant. Meanwhile, Artificial’s co-founder, Nikhita Singh, has insight into how to bring the advances of robotics into environments that are quite analogue in culture. She previously worked on human-robot interaction research at the MIT Media Lab, and before that spent years at Palantir and working on robotics at Berkeley.

As Fuller describes it, he saw an interesting gap (and opportunity) in the market to apply automation, which he had seen help advance work in industrial settings, to the world of life sciences, both to help scientists track what they are doing better, and help them carry out some of the more repetitive work that they have to do day in, day out.

This gap is perhaps more in the spotlight today than ever before, given the fact that we are in the middle of a global health pandemic. This has hindered a lot of labs from being able to operate full in-person teams, and increased the reliance on systems that can crunch numbers and carry out work without as many people present. And, of course, the need for that work (whether it’s related directly to Covid-19 or not) has perhaps never appeared as urgent as it does right now.

There have been a lot of advances in robotics — specifically around hardware like robotic arms — to manage some of the precision needed to carry out lab work, but until now there have been no real efforts at building platforms that bring together all of the work done by that hardware (or, in the words of automation specialists, that “orchestrate” that work and data), nor at linking the data from those robot-led efforts with the work that human scientists still carry out. Artificial estimates that some $10 billion is spent annually on lab informatics and automation software, yet data models to unify that work, and platforms that reach across all of it, remain absent. That has, in effect, served as a barrier to labs modernising as much as they could.

A lab, as Fuller describes it, is essentially composed of high-end instrumentation for analytics, alongside robotic systems for liquid handling. “You can really think of a lab, frankly, as a kitchen,” he said, “and the primary operation in that lab is mixing liquids.”

But it is also not unlike a factory. As those liquids are mixed, a robotic system typically moves pipettes and liquids around, in and out of plates and mixes. “There’s a key aspect of material flow through the lab, and the material flow part of it is much more like classic robotics,” he said. In other words, there is, as he says, “a combination of bespoke scientific equipment that includes automation, and then classic material flow, which is much more standard robotics,” which is what makes the lab ripe as an applied environment for automation software.

To note: the idea is not to remove humans altogether, but to provide assistance so that they can do their jobs better. He points out that even the automotive industry, which has been automated for 50 years, still has about 6% of all work done by humans. If that is a watermark, it sounds like there is a lot of movement left in labs: Fuller estimates that some 60% of all work in the lab is done by humans. And part of the reason for that is simply because it’s just too complex to replace scientists — who he described as “artists” — altogether (for now at least).

“Our solution augments the human activity and automates the standard activity,” he said. “We view that as a central thesis that differentiates us from classic automation.”

There have been a number of other startups emerging that are applying some of the learnings of artificial intelligence and big data analytics for enterprises to the world of science. They include the likes of Turing, which is applying this to helping automate lab work for CPG companies; and Paige, which is focusing on AI to help better understand cancer and other pathology.

The Microsoft connection is one that could well play out in how Artificial’s platform develops going forward, not just in how data is perhaps handled in the cloud, but also on the ground, specifically with augmented reality.

“We see massive technical synergy,” Fuller said. “When you are in a lab you already have to wear glasses… and we think this has the earmarks of a long-term use case.”

Fuller mentioned that one area it’s looking at would involve equipping scientists and other technicians with Microsoft’s HoloLens to help direct them around the labs, and to make sure people are carrying out work consistently by comparing what is happening in the physical world to a “digital twin” of a lab containing data about supplies, where they are located, and what needs to happen next.

It’s this and all of the other areas that have yet to be brought into our very AI-led enterprise future that interested Microsoft.

“Biology labs today are light- to semi-automated—the same state they were in when I started my academic research and biopharmaceutical career over 20 years ago. Most labs operate more like test kitchens rather than factories,” said Dr. Kouki Harasaki, an investor at M12, in a statement. “Artificial’s aLab Suite is especially exciting to us because it is uniquely positioned to automate the masses: it’s accessible, low code, easy to use, highly configurable, and interoperable with common lab hardware and software. Most importantly, it enables Biopharma and SynBio labs to achieve the crowning glory of workflow automation: flexibility at scale.”

Harasaki is joining Peter Barrett, a founder and general partner at Playground Global, on Artificial’s board with this round.

“It’s become even more clear as we continue to battle the pandemic that we need to take a scalable, reproducible approach to running our labs, rather than the artisanal, error-prone methods we employ today,” Barrett said in a statement. “The aLab Suite that Artificial has pioneered will allow us to accelerate the breakthrough treatments of tomorrow and ensure our best and brightest scientists are working on challenging problems, not manual labor.”

Aug
19
2020
--

A pandemic and recession won’t stop Atlassian’s SaaS push

No company is completely insulated from the macroeconomic fallout of COVID-19, but we are seeing some companies fare better than others, especially those providing ways to collaborate online. Count Atlassian in that camp, as it provides a suite of tools focused on working smarter in a digital context.

At a time when many employees are working from home, Atlassian’s product approach sounds like a recipe for a smash hit. But in its latest earnings report, the company detailed slowing growth, not the acceleration we might expect. Looking ahead, it’s predicting more of the same — at least for the short term.

Part of the reason for that — beyond some small-business customers, hit by hard times, moving to its new free tier introduced last March — is the pain associated with moving customers off of older license revenue to more predictable subscription revenue. The company has shown that it is willing to sacrifice short-term growth to accelerate that transition.

We sat down with Atlassian CRO Cameron Deatsch to talk about some of the challenges his company is facing as it navigates through these crazy times. Deatsch pointed out that in spite of the turbulence, and the push to subscriptions, Atlassian is well-positioned with plenty of cash on hand and the ability to make strategic acquisitions when needed, while continuing to expand the recurring-revenue slice of its revenue pie.

The COVID-19 effect

Deatsch told us that Atlassian could not fully escape the pandemic’s impact on business, especially in April and May when many companies felt it. His company saw the biggest impact from smaller businesses, which cut back, moved to a free tier, or in some cases closed their doors. There was no getting away from the market chop that SMBs took during the early stages of COVID, and he said it had an impact on Atlassian’s new customer numbers.

Atlassian Q4FY2020 customer growth graph

Image Credits: Atlassian

Still, the company believes it will recover from the slowdown in new customers, especially as it begins to convert a percentage of its new free-tier users to paid users down the road. For this quarter it only translated into around 3,000 new customers, but Deatsch didn’t seem concerned. “The customer numbers were off, but the overall financials were pretty strong coming out of [fiscal] Q4 if you looked at it. But also the number of people who are trying our products now because of the free tier is way up. We saw a step change when we launched free,” he said.

Mar
03
2020
--

Honeywell says it will soon launch the world’s most powerful quantum computer

“The best-kept secret in quantum computing.” That’s what Cambridge Quantum Computing (CQC) CEO Ilyas Khan called Honeywell’s efforts in building the world’s most powerful quantum computer. In a race where most of the major players are vying for attention, Honeywell has quietly worked on its efforts for the last few years (and under strict NDAs, it seems). But today, the company announced a major breakthrough that it claims will allow it to launch the world’s most powerful quantum computer within the next three months.

Honeywell also announced today that it has made strategic investments in CQC and Zapata Computing, both of which focus on the software side of quantum computing. The company has partnered with JPMorgan Chase as well, to develop quantum algorithms using Honeywell’s quantum computer, and it recently announced a partnership with Microsoft.

Honeywell has long built the kind of complex control systems that power many of the world’s largest industrial sites. It’s that kind of experience that has now allowed it to build an advanced ion trap that is at the core of its efforts.

This ion trap, the company claims in a paper that accompanies today’s announcement, has allowed the team to achieve decoherence times that are significantly longer than those of its competitors.

“It starts really with the heritage that Honeywell had to work from,” Tony Uttley, the president of Honeywell Quantum Solutions, told me. “And we, because of our businesses within aerospace and defense and our business in oil and gas — with solutions that have to do with the integration of complex control systems because of our chemicals and materials businesses — we had all of the underlying pieces for quantum computing, which are just fabulously different from classical computing. You need to have ultra-high vacuum system capabilities. You need to have cryogenic capabilities. You need to have precision control. You need to have lasers and photonic capabilities. You have to have magnetic and vibrational stability capabilities. And for us, we had our own foundry and so we are able to literally design our architecture from the trap up.”

The result of this is a quantum computer that promises to achieve a Quantum Volume of 64. Quantum Volume (QV), it’s worth mentioning, is a metric that takes into account both the number of qubits in a system and how error-prone they are, including their decoherence times. IBM and others have championed this metric as a way to, at least for now, compare the power of various quantum computers.

So far, IBM’s own machines have achieved QV 32, which would make Honeywell’s machine significantly more powerful.
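To make that comparison concrete: QV is defined as 2^n, where n is the size of the largest “square” random circuit (n qubits, depth n) a machine can run successfully. The sketch below is a toy illustration only; the assumed per-gate error rate and the 2/3 pass threshold stand in for IBM’s real heavy-output benchmark, which runs actual random circuits.

```python
def quantum_volume(n_qubits: int, two_qubit_error: float, threshold: float = 2 / 3) -> int:
    """Toy Quantum Volume estimate: QV = 2^n for the largest 'square' circuit
    (n qubits, depth n) whose estimated success probability stays above the
    threshold. This simplification only captures the qubits-vs-errors
    trade-off, not the full heavy-output test.
    """
    best = 0
    for n in range(1, n_qubits + 1):
        # A depth-n circuit on n qubits has on the order of n*n two-qubit gates.
        estimated_success = (1 - two_qubit_error) ** (n * n)
        if estimated_success >= threshold:
            best = n
    return 2 ** best

# With 6 qubits at a 1% gate error rate, the largest passing square circuit
# is 6x6, giving QV 64; each extra usable qubit doubles the score.
print(quantum_volume(6, 0.01))   # 64
print(quantum_volume(10, 0.02))  # 16
```

In this simplified model, a machine with more but noisier qubits can score lower than one with fewer, cleaner qubits, which is exactly the point the metric's backers make.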

Khan, whose company provides software tools for quantum computing and was one of the first to work with Honeywell on this project, also noted that the focus on the ion trap is giving Honeywell a bit of an advantage. “I think that the choice of the ion trap approach by Honeywell is a reflection of a very deliberate focus on the quality of qubit rather than the number of qubits, which I think is fairly sophisticated,” he said. “Until recently, the headline was always growth, the number of qubits running.”

The Honeywell team noted that many of its current customers are also likely to be users of its quantum solutions. These customers, after all, are working on exactly the kind of problems in chemistry or material science that quantum computing, at least in its earliest forms, is uniquely suited for.

Currently, Honeywell has about 100 scientists, engineers and developers dedicated to its quantum project.

Feb
05
2020
--

Calling all cosmic startups — pitch at TechCrunch’s space event in LA

Founders — it’s time to shoot for the stars. For the first time ever, TechCrunch is hosting TC Sessions: Space 2020 on June 25th in Los Angeles. But that’s not all, because on June 24th, TechCrunch will host a Pitch Night exclusively for early-stage space startups.

Yep, that’s right. On top of a packed programming day with fireside chats, breakout sessions and Q&As featuring the top experts and game changers in space, TechCrunch will select 10 startups focused on any aspect of space — whether you’re launching rockets, building the next big satellite constellation, translating space-based data into usable insights or even building a colony on the Moon. If your company is all about the new space startup race, and you are early stage, please apply. 

Step 1: Apply to pitch by May 15th. TechCrunch’s editorial team will review all applications and select 10 companies. Founders will be notified by June 7th.  

You’ll pitch your startup at a private event in front of TechCrunch editors, main-stage speakers and industry experts. Our panel of judges will select five finalists to pitch onstage at TC Sessions: Space. 

You will be pitching your startup to the most prestigious, influential and expert industry leaders, and you’ll get video coverage on TechCrunch, too! And the final perk? Each of the 10 startup teams selected for the Pitch Night will be given two free tickets to attend TC Sessions: Space 2020. Shoot your shot — apply here.

Even if you’re not necessarily interested in pitching, grab your ticket for a front-row seat to this event for the early-bird price of $349. If you are interested in bringing a group of five or more from your company, you’ll get an automatic 20% discount. We even have discounts for the government/military, nonprofit/NGOs and students currently attending university. Grab your tickets at these reduced rates before prices increase.

Is your company interested in sponsoring or exhibiting at TC Sessions: Space 2020? Contact our sponsorship sales team by filling out this form.

Sep
10
2019
--

Q-CTRL raises $15M for software that reduces error and noise in quantum computing hardware

As hardware makers continue to work on ways of making wide-scale quantum computing a reality, a startup out of Australia that is building software to help reduce noise and errors on quantum computing machines has raised a round of funding to fuel its U.S. expansion.

Q-CTRL designs firmware for computers and other machines, such as quantum sensors, that perform quantum calculations: firmware that identifies the potential for errors and makes the machines more resilient, able to keep working for longer. (The Q in its name is a reference to qubits, the basic building blocks of quantum computing.)

The startup is today announcing that it has raised $15 million, money that it plans to use to double its team (currently numbering 25) and set up shop on the West Coast, specifically Los Angeles.

This Series A is coming from a list of backers that speaks to the startup’s success to date in courting quantum hardware companies as customers. Led by Square Peg Capital — a prolific Australian VC that has backed homegrown startups like Bugcrowd and Canva, but also those further afield such as Stripe — it also includes new investor Sierra Ventures as well as Sequoia Capital, Main Sequence Ventures and Horizons Ventures.

Q-CTRL’s customers are some of the bigger names in quantum computing and IT, such as Rigetti, Bleximo and Accenture, among others. IBM — which earlier this year unveiled its first commercial quantum computer — singled it out last year for its work in advancing quantum technology.

The problem that Q-CTRL is aiming to address is basic but arguably critical to solving if quantum computing ever hopes to make the leap out of the lab and into wider use in the real world.

Quantum computers and other machines like quantum sensors, which are built on quantum physics architecture, are able to perform computations that go well beyond what normal computers can do today, with applications including cryptography, biosciences, advanced geological exploration and much more. But quantum computing machines are known to be unstable, in part because of the fragility of the quantum state, which introduces a lot of noise and subsequent errors that result in crashes.

As Frederic pointed out recently, scientists are confident that this is ultimately a solvable issue. Q-CTRL is one of the hopefuls working on that, by providing a set of tools that runs on quantum machines, visualises noise and decoherence and then deploys controls to “defeat” those errors.

Q-CTRL currently has four products it offers to the market: Black Opal, Boulder Opal, Open Controls and Devkit — aimed respectively at students/those exploring quantum computing, hardware makers, the research community and end users/algorithm developers.

Q-CTRL was founded in 2017 by Michael Biercuk, a professor of Quantum Physics & Quantum Technology at the University of Sydney and a chief investigator in the Australian Research Council Centre of Excellence for Engineered Quantum Systems, who studied in the U.S., with a PhD in physics from Harvard.

“Being at the vanguard of the birth of a new industry is extraordinary,” he said in a statement. “We’re also thrilled to be assembling one of the most impressive investor syndicates in quantum technology. Finding investors who understand and embrace both the promise and the challenge of building quantum computers is almost magical.”

Why choose Los Angeles for building out a U.S. presence, you might ask? Southern California, it turns out, has shaped up to be a key area for quantum research and development, with several of the universities in the region building out labs dedicated to the area, and companies like Lockheed Martin and Google also contributing to the ecosystem. This means a strong pipeline of talent and conversation in what is still a nascent area.

Given that it is still early days for quantum computing technology, that gives a lot of potential options to a company like Q-CTRL longer-term: The company might continue to build a business as it does today, selling its technology to a plethora of hardware makers and researchers in the field; or it might get snapped up by a specific hardware company to integrate Q-CTRL’s solutions more closely onto its machines (and keep them away from competitors).

Or, it could make like a quantum particle and follow both of those paths at the same time.

“Q-CTRL impressed us with their strategy; by providing infrastructure software to improve quantum computers for R&D teams and end-users, they’re able to be a central player in bringing this technology to reality,” said Tushar Roy, a partner at Square Peg. “Their technology also has applications beyond quantum computing, including in quantum-based sensing, which is a rapidly-growing market. In Q-CTRL we found a rare combination of world-leading technical expertise with an understanding of customers, products and what it takes to build an impactful business.”

Aug
22
2019
--

NASA’s new HPE-built supercomputer will prepare for landing Artemis astronauts on the Moon

NASA and Hewlett Packard Enterprise (HPE) have teamed up to build a new supercomputer, which will serve NASA’s Ames Research Center in California and develop models and simulations of the landing process for Artemis Moon missions.

The new supercomputer is called “Aitken,” named after American astronomer Robert Grant Aitken, and it can run simulations at up to 3.69 petaFLOPs of theoretical peak performance. Aitken is custom-designed by HPE and NASA to work with the Ames modular data center, a project NASA undertook starting in 2017 to massively reduce the amount of water and energy used in cooling its supercomputing hardware.

Aitken employs second-generation Intel Xeon processors and Mellanox InfiniBand high-speed networking, and has 221 TB of memory on board. It’s the result of four years of collaboration between NASA and HPE, and it will model different methods of entry, descent and landing for Moon-bound Artemis spacecraft, running simulations to determine possible outcomes and help identify the best, safest approach.

This isn’t the only collaboration between HPE and NASA: The enterprise computer maker built for the agency a new kind of supercomputer able to withstand the rigors of space, and sent it up to the ISS in 2017 for preparatory testing ahead of potential use on longer missions, including Mars. The two partners then opened that supercomputer for use in third-party experiments last year.

HPE also announced earlier this year that it was buying supercomputer company Cray for $1.3 billion. Cray is another long-time partner in NASA’s supercomputing efforts, dating back to the space agency’s establishment of a dedicated computational modeling division and of its Central Computing Facility at Ames Research Center.

Aug
19
2019
--

The five technical challenges Cerebras overcame in building the first trillion-transistor chip

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today — and it is a doozy. The “Wafer Scale Engine” is 1.2 trillion transistors (the most ever), 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative).

Cerebras’ Wafer Scale Engine is larger than a typical Mac keyboard (via Cerebras Systems).

It’s made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry’s big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees. You can read more about the chip from Tiernan Ray at Fortune and read the white paper from Cerebras itself.

Superlatives aside, though, the technical challenges that Cerebras had to overcome to reach this milestone are, I think, the more interesting story here. I sat down with founder and CEO Andrew Feldman this afternoon to discuss what his 173 engineers have been quietly building just down the street these past few years, with $112 million in venture capital funding from Benchmark and others.

Going big means nothing but challenges

First, a quick background on how the chips that power your phones and computers get made. Fabs like TSMC take standard-sized silicon wafers and divide them into individual chips by using light to etch the transistors into the chip. Wafers are circles and chips are squares, and so there is some basic geometry involved in subdividing that circle into a clear array of individual chips.
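As a back-of-the-envelope illustration of that geometry, here is the common first-order approximation for how many square dies fit on a round wafer; the sizes below are illustrative and not tied to any particular fab’s process.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_side_mm):
    """Rough count of square dies that fit on a circular wafer.

    Standard first-order estimate: wafer area divided by die area,
    minus an edge-loss term for partial dies straddling the rim.
    """
    die_area = die_side_mm ** 2
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # Dies that cross the wafer's circumference are unusable.
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

# A 300 mm wafer cut into 15 mm x 15 mm dies:
print(dies_per_wafer(300, 15))  # roughly 269 usable dies
```

Cerebras, by contrast, keeps that entire wafer as a single die, which is exactly what makes the yield problem below so acute.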

One big challenge in this lithography process is that errors can creep into the manufacturing process, requiring extensive testing to verify quality and forcing fabs to throw away poorly performing chips. The smaller and more compact the chip, the less likely any individual chip will be inoperative, and the higher the yield for the fab. Higher yield equals higher profits.

Cerebras throws out the idea of etching a bunch of individual chips onto a single wafer in favor of using the whole wafer itself as one gigantic chip. That allows all of those individual cores to connect with one another directly — vastly speeding up the critical feedback loops used in deep learning algorithms — but comes at the cost of huge manufacturing and design challenges to create and manage these chips.

Cerebras’ technical architecture and design was led by co-founder Sean Lie. Feldman and Lie worked together on a previous startup called SeaMicro, which sold to AMD in 2012 for $334 million (via Cerebras Systems).

The first challenge the team ran into, according to Feldman, was handling communication across the “scribe lines.” While Cerebras’ chip encompasses a full wafer, today’s lithography equipment still has to act like there are individual chips being etched into the silicon wafer. So the company had to invent new techniques to allow each of those individual chips to communicate with each other across the whole wafer. Working with TSMC, they not only invented new channels for communication, but also had to write new software to handle a chip with a trillion-plus transistors.

The second challenge was yield. With a chip covering an entire silicon wafer, a single imperfection in the etching of that wafer could render the entire chip inoperative. This has been the stumbling block for decades on whole-wafer technology: due to the laws of physics, it is essentially impossible to repeatedly etch a trillion transistors with perfect accuracy.

Cerebras approached the problem using redundancy by adding extra cores throughout the chip that would be used as backup in the event that an error appeared in that core’s neighborhood on the wafer. “You have to hold only 1%, 1.5% of these guys aside,” Feldman explained to me. Leaving extra cores allows the chip to essentially self-heal, routing around the lithography error and making a whole-wafer silicon chip viable.
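Conceptually, that self-healing amounts to remapping logical cores onto whichever physical cores survived manufacturing. The sketch below is a toy model with illustrative names and numbers — Cerebras’ actual mechanism is topology-aware routing in hardware, not a lookup table.

```python
def build_core_map(physical_cores, defective, spare_fraction=0.015):
    """Toy model of spare-core redundancy: map logical core IDs onto
    working physical cores, skipping any core flagged as defective.

    Returns None when defects exceed the spare budget, i.e. the wafer
    cannot be "healed" by routing around bad cores.
    """
    spares = int(physical_cores * spare_fraction)
    logical_count = physical_cores - spares
    good = [p for p in range(physical_cores) if p not in defective]
    if len(good) < logical_count:
        return None  # too many defects to route around
    return {logical: good[logical] for logical in range(logical_count)}

# 400,000 physical cores, 1.5% held aside as spares, a few bad cores:
core_map = build_core_map(400_000, defective={7, 12_345, 399_999})
print(len(core_map))  # every logical core lands on a working physical core
```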

Entering uncharted territory in chip design

Those first two challenges — communicating across the scribe lines between chips and handling yield — have flummoxed chip designers studying whole-wafer chips for decades. But they were known problems, and Feldman said they turned out to be easier to solve than expected once the team re-approached them with modern tools.

He likens the challenge to climbing Mount Everest. “It’s like the first set of guys failed to climb Mount Everest, they said, ‘Shit, that first part is really hard.’ And then the next set came along and said ‘That shit was nothing. That last hundred yards, that’s a problem.’ ”

And indeed, the toughest challenges, according to Feldman, for Cerebras were the next three, since no other chip designer had gotten past the scribe line communication and yield challenges to actually find what happened next.

The third challenge Cerebras confronted was handling thermal expansion. Chips get extremely hot in operation, but different materials expand at different rates. That means the connectors tethering a chip to its motherboard also need to thermally expand at precisely the same rate, lest cracks develop between the two.

As Feldman explained, “How do you get a connector that can withstand [that]? Nobody had ever done that before, [and so] we had to invent a material. So we have PhDs in material science, [and] we had to invent a material that could absorb some of that difference.”

Once a chip is manufactured, it needs to be tested and packaged for shipment to original equipment manufacturers (OEMs) who add the chips into the products used by end customers (whether data centers or consumer laptops). There is a challenge though: Absolutely nothing on the market is designed to handle a whole-wafer chip.

Cerebras designed its own testing and packaging system to handle its chip (via Cerebras Systems).

“How on earth do you package it? Well, the answer is you invent a lot of shit. That is the truth. Nobody had a printed circuit board this size. Nobody had connectors. Nobody had a cold plate. Nobody had tools. Nobody had tools to align them. Nobody had tools to handle them. Nobody had any software to test,” Feldman explained. “And so we have designed this whole manufacturing flow, because nobody has ever done it.” Cerebras’ technology is much more than just the chip it sells — it also includes all of the associated machinery required to actually manufacture and package those chips.

Finally, all that processing power in one chip requires immense power and cooling. Cerebras’ chip uses 15 kilowatts of power to operate — a prodigious amount for an individual chip, though roughly comparable to what a modern AI cluster draws. All that power also needs to be cooled, and Cerebras had to design a new way to deliver both for such a large chip.

It essentially approached the problem by turning the chip on its side, in what Feldman called “using the Z-dimension.” The idea was that rather than trying to move power and cooling horizontally across the chip as is traditional, power and cooling are delivered vertically at all points across the chip, ensuring even and consistent access to both.

And so, those were the next three challenges — thermal expansion, packaging and power/cooling — that the company worked around the clock to overcome these past few years.

From theory to reality

Cerebras has a demo chip (I saw one, and yes, it is roughly the size of my head), and it has started to deliver prototypes to customers, according to reports. The big challenge, though, as with all new chips, is scaling production to meet customer demand.

For Cerebras, the situation is a bit unusual. Because it places so much computing power on one wafer, customers don’t necessarily need to buy dozens or hundreds of chips and stitch them together to create a compute cluster. Instead, they may only need a handful of Cerebras chips for their deep-learning needs. The company’s next major phase is to reach scale and ensure a steady delivery of its chips, which it packages as a whole system “appliance” that also includes its proprietary cooling technology.

Expect to hear more details of Cerebras technology in the coming months, particularly as the fight over the future of deep learning processing workflows continues to heat up.

Aug
06
2019
--

Quantum computing is coming to TC Sessions: Enterprise on Sept. 5

Here at TechCrunch, we like to think about what’s next, and there are few technologies quite as exotic and futuristic as quantum computing. After what felt like decades of being “almost there,” we now have working quantum computers that are able to run basic algorithms, even if only for a very short time. As those times increase, we’ll slowly but surely get to the point where we can realize the full potential of quantum computing.

For our TechCrunch Sessions: Enterprise event in San Francisco on September 5, we’re bringing together some of the sharpest minds from some of the leading companies in quantum computing to talk about what this technology will mean for enterprises (p.s. early-bird ticket sales end this Friday). This could, after all, be one of those technologies where early movers will gain a massive advantage over their competitors. But how do you prepare yourself for this future today, while many aspects of quantum computing are still in development?

IBM’s quantum computer demonstrated at Disrupt SF 2018

Joining us onstage will be Microsoft’s Krysta Svore, who leads the company’s quantum efforts; IBM’s Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort; and Jim Clarke, the director of quantum hardware at Intel Labs.

That’s pretty much a Who’s Who of the current state of quantum computing, even though all of these companies are at different stages of their quantum journey. IBM already has working quantum computers; Intel has built a quantum processor and is investing heavily in the technology; and Microsoft is trying a very different approach that may lead to a breakthrough in the long run but that currently keeps it from having a working machine. In return, though, Microsoft has invested heavily in building the software tools for creating quantum applications.

During the panel, we’ll discuss the current state of the industry, where quantum computing can already help enterprises today and what they can do to prepare for the future. The implications of this new technology also go well beyond faster computing (for some use cases); there are also the security issues that will arise once quantum computers become widely available and current encryption methodologies become easily breakable.

The early-bird ticket discount ends this Friday, August 9. Be sure to grab your tickets to get the max $100 savings before prices go up. If you’re a startup in the enterprise space, we still have some startup demo tables available! Each demo table comes with four tickets to the show and a high-visibility exhibit space to showcase your company to attendees — learn more here.

Jul
31
2019
--

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer product goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements, and we’ll consider your startup.

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, which is immediately followed by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.
