Oct
29
2020
--

More chip industry action as Marvell is acquiring Inphi for $10B

It’s been quite a time for chip industry consolidation, and today Marvell joined the acquisition parade when it announced it is acquiring Inphi in a combination of stock and cash valued at approximately $10 billion, according to the company.

Marvell CEO Matt Murphy believes that adding Inphi, a chip maker whose products connect servers inside cloud data centers, and link those data centers to one another over fiber cabling, will complement Marvell’s copper-based chip portfolio and give the company an edge in the more future-looking use cases where Inphi shines.

“Our acquisition of Inphi will fuel Marvell’s leadership in the cloud and extend our 5G position over the next decade,” Murphy said in a statement.

In the classic buy versus build calculus, this acquisition uses the company’s cash to push it in new directions without having to build all this new technology. “This highly complementary transaction expands Marvell’s addressable market, strengthens customer base and accelerates Marvell’s leadership in hyperscale cloud data centers and 5G wireless infrastructure,” the company said in a statement.

It’s been a busy time for the chip industry as multiple players are combining, hoping for a similar kind of lift that Marvell sees with this deal. In fact, today’s announcement comes in the same week AMD announced it was acquiring Xilinx for $35 billion and follows Nvidia acquiring ARM for $40 billion last month. The three deals combined come to a whopping $85 billion.

There appears to be prevailing wisdom in the industry that by combining forces and using the power of the checkbook, these companies can do more together than they can by themselves.

Certainly Marvell and Inphi are suggesting that. As they highlighted, their combined enterprise value will be more than $40 billion, with hundreds of millions of dollars in market potential. All of this of course depends on how well these combined entities work together, and we won’t know that for some time.

For what it’s worth, the stock market appears unimpressed with the deal, with Marvell’s stock down more than 7% in early trading — but Inphi stock is being bolstered in a big way by the announcement, up almost 23% this morning so far.

The deal, which has been approved by both companies’ boards, is expected to close by the second half of 2021, subject to shareholder and regulatory approval.

Oct
27
2020
--

AMD grabs Xilinx for $35 billion as chip industry consolidation continues

The chip industry consolidation dance continued this morning as AMD has entered into an agreement to buy Xilinx for $35 billion, giving the company access to a broad set of specialized workloads.

AMD sees this deal as combining two companies that complement each other’s strengths without cannibalizing its own markets. CEO Lisa Su believes the acquisition will help make her company the high performance chip leader.

“By combining our world-class engineering teams and deep domain expertise, we will create an industry leader with the vision, talent and scale to define the future of high performance computing,” Su said in a statement.

In an article earlier this year, TechCrunch’s Darrell Etherington described Xilinx’s new satellite-focused chips as offering a couple of industry firsts:

It’s the first 20nm process that’s rated for use in space, offering power and efficiency benefits, and it’s the first to offer specific support for high performance machine learning through neural network-based inference acceleration.

What’s more, the chips are designed to handle radiation and the rigors of launch, using a thick ceramic packaging.

In a call with analysts this morning, Su pointed to these kinds of specialized workloads as one of Xilinx’s strengths. “Xilinx has also built deep strategic partnerships across a diverse set of growing markets in 5G communications, data center, automotive, industrial, aerospace and defense. Xilinx is establishing themselves as a strategic technology partner to a broad set of industry leaders,” she said.

The success of these kinds of mega-deals tends to hinge on whether the combined companies can work well together. Su pointed out that the two companies have been partnering for a number of years and already have a relationship, and that the two companies’ leaders share a common vision.

“Both AMD and Xilinx share common culture, focused on innovation, execution and collaborating deeply with customers. From a leadership standpoint, Victor and I have a shared vision of where we can take high performance and adaptive computing in the future,” Su said.

In a nod to shareholders of both companies, she said, “This is truly a compelling combination that will create significant value for all stakeholders, including AMD and Xilinx shareholders who will benefit from the future growth and upside potential of the combined company.”

So far stockholders aren’t impressed: AMD stock is down over 4% in pre-market trading, while Xilinx stock is up over 11%. Xilinx has a market cap of over $28 billion compared with AMD’s $96.5 billion, making for a massive combined company.

This deal comes on the heels of last month’s ARM acquisition by Nvidia for $40 billion. With two deals in less than two months totaling $75 billion, the industry appears to be betting on the bigger-is-better theory. Meanwhile, Intel took a hit earlier this month after its earnings report showed weakness in its data center business.

While the deal has been approved by both companies’ boards of directors, it still has to pass muster with shareholders and regulators, and is not expected to close until the end of next year.

When that happens, Su will be chairman of the combined company, while Xilinx president and CEO Victor Peng will join AMD as president, in charge of the Xilinx business and strategic growth initiatives.

It’s worth noting that the Wall Street Journal first reported that a deal between these two companies could be coming together earlier this month.

Sep
24
2020
--

NUVIA raises $240M from Mithril to make climate-ready enterprise chips

Climate change is on everyone’s minds these days, what with the outer Bay Area on fire, orange skies above San Francisco, and a hurricane season that is bearing down on the East Coast with alacrity (and that’s just the United States in the past two weeks).

A major and growing source of carbon emissions is data centers, the cloud infrastructure that powers most of our devices and experiences. That’s led to some novel ideas, such as Microsoft’s underwater data center Project Natick, which just came back to the surface for testing a bit more than a week ago.

Yet, for all the fun experiments, there is a bit more of an obvious solution: just make the chips more energy efficient.

That’s the thesis of NUVIA, which was founded by three ex-Apple chip designers who led the design of the “A” series chip line for the company’s iPhones and iPads for years. Those chips are wicked fast within a very tight energy envelope, and NUVIA’s premise is essentially what happens when you take those sorts of energy constraints (and the experience of its chip design team) and apply them to the data center.

We did a deep profile of the company last year when it announced its $53 million Series A, so definitely read that to understand the founding story and the company’s mission. Now about one year later, it’s coming back to us with news of a whole bunch of more funding.

NUVIA announced today that it has closed on a $240 million Series B round led by Mithril Capital, with a bunch of others involved listed below.

Since we last chatted with the company, we now have a bit more detail of what it’s working on. It has two products under development, a system-on-chip (SoC) unit dubbed “Orion” and a CPU core dubbed “Phoenix.” The company previewed a bit of Phoenix’s performance last month, although as with most chip companies, it is almost certainly too early to make any long-term predictions about how the technology will settle in with existing and future chips coming to the market.

NUVIA’s view is that chips are limited to about 250-300 watts of power given the cooling and power constraints of most data centers. As more cores become common per chip, each core is going to have to make do with less power while maintaining performance. NUVIA’s tech is trying to solve that problem, lowering total cost of ownership for data center operators while also improving overall energy efficiency.
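To make that constraint concrete, here is a minimal back-of-the-envelope sketch in Python. The 275-watt figure (a midpoint of the 250-300W range cited above) and the core counts are assumptions for illustration, not NUVIA’s numbers:

```python
# Illustrative sketch (not NUVIA data): how a fixed ~250-300W socket power
# envelope shrinks the per-core budget as core counts climb.

SOCKET_POWER_W = 275  # assumed midpoint of the 250-300W range mentioned above

for cores in (32, 64, 96, 128):  # hypothetical core counts
    per_core_w = SOCKET_POWER_W / cores
    print(f"{cores:3d} cores -> ~{per_core_w:.1f} W per core")

# Output:
#  32 cores -> ~8.6 W per core
#  64 cores -> ~4.3 W per core
#  96 cores -> ~2.9 W per core
# 128 cores -> ~2.1 W per core
```

The takeaway is simply that doubling the core count roughly halves the power each core can draw, which is why phone-style efficiency becomes attractive in the data center.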

There’s a lot more work to be done, of course, so expect to see more product announcements and previews from the company as it finalizes its technology. With another $240 million in the bank, it certainly has the resources to make progress.

Shortly after we chatted with the company last year, Apple sued company founder and CEO Gerard Williams III for breach of contract, with Apple arguing that its former chip designer was trying to poach employees for his nascent startup. Williams countersued earlier this year, and the two parties are now in the discovery phase of their lawsuit, which remains ongoing.

In addition to lead investor Mithril, the round was done “in partnership with” the founders of semiconductor giant Marvell (Sehat Sutardja and Weili Dai), funds managed by BlackRock, Fidelity and Temasek, plus Atlantic Bridge and Redline Capital, along with Series A investors Capricorn Investment Group, Dell Technologies Capital, Mayfield, Nepenthe LLC and WRVI Capital.

Aug
07
2020
--

Wendell Brooks has resigned as president of Intel Capital

When Wendell Brooks was promoted to president of Intel Capital, the investment arm of the chip giant, in 2015, he knew he had big shoes to fill. He was taking over from Arvind Sodhani, who had run the investment component for 28 years since its inception. Today, the company confirmed reports that Brooks has resigned that role.

“Wendell Brooks has resigned from Intel to pursue other opportunities. We thank Wendell for all his contributions and wish him the best for the future,” a company spokesperson told TechCrunch in a rather bland send-off.

Anthony Lin, who has been leading mergers and acquisitions and international investing, will take over on an interim basis. Interestingly, when Brooks was promoted, he too was in charge of mergers and acquisitions. Whether Lin keeps that role remains to be seen.

When I spoke to Brooks in 2015 as he was about to take over from Sodhani, he certainly sounded ready for the task at hand. “I have huge shoes to fill in maintaining that track record,” he said at the time. “I view it as a huge opportunity to grow the focus of the organization where we can provide strategic value to portfolio companies.”

In that same interview, Brooks described his investment philosophy, saying he preferred to lead, rather than come on as a secondary investor. “I tend to think the lead investor is able to influence the business thesis, the route to market, the direction, the technology of a startup more than a passive investor,” he said. He added that it also tends to get board seats that can provide additional influence.

Comparing his firm to traditional VC firms, he said they were as good or better in terms of the investing record, and as a strategic investor brought some other advantages as well. “Some of the traditional VCs are focused on a company-building value. We can provide strategic guidance and complement some of the company building over other VCs,” he said.

Over the life of the firm, it has invested $12.9 billion in more than 1,500 companies, with 692 of those exiting via IPO or acquisition. Just this year, under Brooks’ leadership, the company has invested $225 million so far, including 11 new investments and 26 investments in companies already in the portfolio.

Jun
23
2020
--

Ampere announces latest chip with a 128-core processor

In the chip game, more is usually better, and to that end, Ampere announced the next chip on its product roadmap today, the Altra Max, a 128-core processor the company says is designed specifically to handle cloud-native, containerized workloads.

What’s more, the company has designed the chip so that it will fit in the same socket as its 80-core product announced last year (and in production now). That means engineers can use the same socket when designing for the new chip, which saves engineering time and eases production, says Jeff Wittich, VP of products at the company.

Wittich says that his company is working with manufacturers today to make sure they can build for all of the requirements for the more powerful chip. “The reason we’re talking about it now, versus waiting until Q4 when we’ve got samples going out the door is because it’s socket compatible, so the same platforms that the Altra 80 core go into, this 128-core product can go into,” he said.

He says that containerized workloads, video encoding, large scale out databases and machine learning inference will all benefit from having these additional cores.

While he wouldn’t comment on any additional funding, the company has raised $40 million, according to Crunchbase data, and Wittich says they have enough funding to go into high-volume production on their existing products later this year.

Like everyone, the company has faced challenges keeping a consistent supply chain throughout the pandemic, but when the pandemic started to hit in Asia at the beginning of this year, the company set a plan in motion to find backup suppliers for the parts it would need should it run into pandemic-related shortages. Wittich says it took a lot of work, planning and coordination, but the team feels confident at this point in being able to deliver its products in spite of the uncertainty.

“Back in January we actually already went through [our list of suppliers], and we diversified our supply chain and made sure that we had options for everything. So we were able to get in front of that before it ever became a problem,” he said.

“We’ve had normal kinds of hiccups here and there that everyone’s had in the supply chain, where things get stuck in shipping and they end up a little bit late, but we’re right on schedule with where we were.”

The company is already planning ahead for its 2022 release, which is in development. “We’ve got a test chip running through five nanometer right now that has the key IP and some of the key features of that product, so that we can start testing those out in silicon pretty soon,” he said.

Finally, the company announced that it’s working with some new partners, including Cloudflare, Packet (which was acquired by Equinix in January), Scaleway and Phoenics Electronics, a division of Avnet. These partnerships provide another way for Ampere to expand its market as it continues to develop.

The company was founded in 2017 by former Intel president Renee James.

Sep
24
2019
--

Alibaba unveils Hanguang 800, an AI inference chip it says significantly increases the speed of machine learning tasks

Alibaba Group introduced its first AI inference chip today, a neural processing unit called Hanguang 800 that it says makes performing machine learning tasks dramatically faster and more energy efficient. The chip, announced today during Alibaba Cloud’s annual Apsara Computing Conference in Hangzhou, is already being used to power features on Alibaba’s e-commerce sites, including product search and personalized recommendations. It will be made available to Alibaba Cloud customers later.

As an example of what the chip can do, Alibaba said it usually takes Taobao an hour to categorize the one billion product images that are uploaded to the e-commerce platform each day by merchants and prepare them for search and personalized recommendations. Using Hanguang 800, Taobao was able to complete the task in only five minutes.
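For a sense of scale, Alibaba’s claim works out to roughly a 12x speedup, or on the order of 3.3 million images processed per second. A quick sketch of the arithmetic, using only the figures quoted above:

```python
# Rough arithmetic behind the Taobao example above, based on Alibaba's stated
# figures: one billion images, one hour before vs. five minutes with Hanguang 800.

images = 1_000_000_000
baseline_s = 60 * 60   # one hour
hanguang_s = 5 * 60    # five minutes

print(f"Speedup: {baseline_s / hanguang_s:.0f}x")                       # 12x
print(f"Baseline throughput: {images / baseline_s:,.0f} images/s")       # ~277,778
print(f"Hanguang 800 throughput: {images / hanguang_s:,.0f} images/s")   # ~3,333,333
```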

Alibaba is already using Hanguang 800 in many of its business operations that need machine learning processing. In addition to product search and recommendations, this includes automatic translation on its e-commerce sites, advertising and intelligent customer services.

Though Alibaba hasn’t revealed when the chip will be available to its cloud customers, the chip may help Chinese companies reduce their dependence on U.S. technology as the trade war makes business partnerships between Chinese and American tech companies more difficult. It also can help Alibaba Cloud grow in markets outside of China. Within China, it is the market leader, but in the Asia-Pacific region, Alibaba Cloud still ranks behind Amazon, Microsoft and Google, according to the Synergy Research Group.

Hanguang 800 was created by T-Head, the unit that leads the development of chips for cloud and edge computing within Alibaba DAMO Academy, the global research and development initiative in which Alibaba is investing more than $15 billion. T-Head developed the chip’s hardware and algorithms designed for business apps, including Alibaba’s retail and logistics apps.

In a statement, Alibaba Group CTO and president of Alibaba Cloud Intelligence Jeff Zhang (pictured above) said, “The launch of Hanguang 800 is an important step in our pursuit of next-generation technologies, boosting computing capabilities that will drive both our current and emerging businesses while improving energy-efficiency.”

He added, “In the near future, we plan to empower our clients by providing access through our cloud business to the advanced computing that is made possible by the chip, anytime and anywhere.”

T-Head’s other launches included the XuanTie 910 earlier this year, an IoT processor based on RISC-V, the open-source hardware instruction set that began as a project at UC Berkeley. XuanTie 910 was created for heavy-duty IoT applications, including edge servers, networking, gateway and autonomous vehicles.

Alibaba DAMO Academy collaborates with universities around the world, including UC Berkeley and Tel Aviv University. Researchers in the program focus on machine learning, network security, visual computing and natural language processing, with the goal of serving two billion customers and creating 100 million jobs by 2035.

Aug
07
2019
--

Google and Twitter are using AMD’s new EPYC Rome processors in their data centers

Google and Twitter are among the companies now using EPYC Rome processors, AMD announced today during a launch event for the 7nm chips. The release of EPYC Rome marks a major step in AMD’s processor war with Intel, which said last month that its own 7nm chips won’t be available until 2021 (though it is expected to release its 10nm Ice Lake chips this year).

Intel is still the biggest data center processor maker by far, however, and also counts Google and Twitter among its customers. But AMD’s latest releases and its strategy of undercutting competitors with lower pricing have quickly transformed it into a formidable rival.

Google has used other AMD chips before, including in its “Millionth Server,” built in 2008, and says it is now the first company to use second-generation EPYC chips in its data centers. Later this year, Google will also make virtual machines that run on the chips available to Google Cloud customers.

In a press statement, Bart Sano, Google vice president of engineering, said “AMD 2nd Gen Epyc processors will help us continue to do what we do best in our datacenters: innovate. Its scalable compute, memory and I/O performance will expand our ability to drive innovation forward in our infrastructure and will give Google Cloud customers the flexibility to choose the best VM for their workloads.”

Twitter plans to begin using EPYC Rome in its data center infrastructure later this year. Its senior director of engineering, Jennifer Fraser, said the chips will reduce the energy consumption of its data centers. “Using the AMD EPYC 7702 processor, we can scale out our compute clusters with more cores in less space using less power, which translates to 25% lower [total cost of ownership] for Twitter.”

In a comparison test between 2-socket Intel Xeon 6242 and AMD EPYC 7702P processors, AMD claimed that its chips were able to reduce total cost of ownership by up to 50% across “numerous workloads.” The flagship of the EPYC Rome line is the 64-core, 128-thread 7742 chip, which has a 2.25GHz base frequency, a 225W default TDP and 256MB of total cache, and starts at $6,950.

Apr
25
2019
--

AWS expands cloud infrastructure offerings with new AMD EPYC-powered T3a instances

Amazon is always looking for ways to increase the options it offers developers in AWS, and to that end, today it announced a bunch of new AMD EPYC-powered T3a instances. These were originally announced at the end of last year at re:Invent, AWS’s annual customer conference.

Today’s announcement is about making these chips generally available. They have been designed for a specific type of burstable workload, where you might not always need a sustained amount of compute power.

“These instances deliver burstable, cost-effective performance and are a great fit for workloads that do not need high sustained compute power but experience temporary spikes in usage. You get a generous and assured baseline amount of processing power and the ability to transparently scale up to full core performance when you need more processing power, for as long as necessary,” AWS’s Jeff Barr wrote in a blog post.

These instances are built on the AWS Nitro System, the custom hardware and hypervisor layer that Amazon has been developing for the last several years. The primary components of this system include the Nitro Card for I/O acceleration, the Nitro Security Chip and the Nitro Hypervisor.

Today’s release comes on top of the announcement last year that the company would be releasing EC2 instances powered by Arm-based AWS Graviton Processors, another option for developers looking for a solution for scale-out workloads.

It also comes on the heels of last month’s announcement that it was releasing EC2 M5 and R5 instances, which use lower-cost AMD chips. These are also built on top of the Nitro System.

The EPYC-powered T3a instances are available starting today in seven sizes, in your choice of spot instances, reserved instances or on-demand, as needed. They are available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), US East (Ohio) and Asia Pacific (Singapore) regions.
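As a rough illustration of how a developer might launch one of these instances, here is a minimal boto3 sketch. The AMI ID is a placeholder, and the region, instance size and credit setting are assumptions chosen for the example, not recommendations from AWS:

```python
# Minimal sketch: launching a burstable t3a instance with boto3.
# The AMI ID is a placeholder; region, size and credit mode are assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # US East (N. Virginia)

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3a.micro",          # one of the seven T3a sizes
    MinCount=1,
    MaxCount=1,
    # Burst from the baseline CPU credit balance, as described above
    CreditSpecification={"CpuCredits": "standard"},
)
print(response["Instances"][0]["InstanceId"])
```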

Nov
28
2018
--

AWS announces new Inferentia machine learning chip

AWS is not content to cede any part of any market to any company. When it comes to machine learning chips, names like Nvidia or Google come to mind, but today at AWS re:Invent in Las Vegas, the company announced a new dedicated machine learning chip of its own called Inferentia.

“Inferentia will be a very high-throughput, low-latency, sustained-performance very cost-effective processor,” AWS CEO Andy Jassy explained during the announcement.

Holger Mueller, an analyst with Constellation Research, says that while Amazon is far behind, this is a good step for them as companies try to differentiate their machine learning approaches in the future.

“The speed and cost of running machine learning operations — ideally in deep learning — are a competitive differentiator for enterprises. Speed advantages will make or break success of enterprises (and nations when you think of warfare). That speed can only be achieved with custom hardware, and Inferentia is AWS’s first step to get in to this game,” Mueller told TechCrunch. As he pointed out, Google has a 2-3 year head start with its TPU infrastructure.

Inferentia supports popular data types such as INT8 and FP16, along with mixed precision. What’s more, it supports multiple machine learning frameworks, including TensorFlow, Caffe2 and ONNX.

Of course, being an Amazon product, it also works with popular AWS products such as EC2, SageMaker and the new Elastic Inference Engine announced today.

While the chip was announced today, AWS CEO Andy Jassy indicated it won’t actually be available until next year.


Sep
10
2018
--

Intel acquires NetSpeed Systems to boost its system-on-a-chip business

Intel today is announcing another acquisition as it continues to pick up talent and IP to bolster its next generation of computing chips beyond legacy PCs. The company has acquired NetSpeed Systems, a startup that makes system-on-chip (SoC) design tools and interconnect fabric intellectual property (IP). The company will be joining Intel’s Silicon Engineering Group, and its co-founder and CEO, Sundari Mitra, herself an Intel vet, will be coming on as a VP at Intel where she will continue to lead her team.

Terms of the deal are not being disclosed, but for some context, during NetSpeed’s last fundraise in 2016 (a $10 million Series C) it had a post-money valuation of $60 million, according to data from PitchBook.

SoC is a central part of how newer connected devices are being made. Moving away from traditional motherboards to create all-in-one chips that include processing, memory, input/output and storage is a cornerstone of building ever-smaller and more efficient devices. This is an area where Intel is already active, but many believe it has some catching up to do against rivals like Nvidia and Qualcomm, so this acquisition is important in that context.

“Intel is designing more products with more specialized features than ever before, which is incredibly exciting for Intel architects and for our customers,” said Jim Keller, senior vice president and general manager of the Silicon Engineering Group at Intel, in a statement. “The challenge is synthesizing a broader set of IP blocks for optimal performance while reining in design time and cost. NetSpeed’s proven network-on-chip technology addresses this challenge, and we’re excited to now have their IP and expertise in-house.”

Intel has made a series of acquisitions to speed up development of newer chips to work in connected objects and smaller devices beyond the PCs that helped the company make its name. Another recent acquisition in the same vein is eASIC, a maker of IoT chipsets that Intel acquired in July. Intel has also been acquiring startups in other areas where it hopes to make a bigger mark, such as deep learning (case in point: its acquisition of Movidius in August).

NetSpeed has been around since 2011 and Intel was one of its investors and customers.

“Intel has been a great customer of NetSpeed’s, and I’m thrilled to once again be joining the company,” said Mitra, in a statement. “Intel is world class at designing and optimizing the performance of custom silicon at scale. As part of Intel’s silicon engineering group, we’re excited to help invent new products that will be a foundation for computing’s future.”

Intel said it will honor NetSpeed’s existing customer contracts, but it also sounds like the company will not be seeking new business as Intel integrates it into its larger operation.
