Jul 13, 2018
--

On MySQL and Intel Optane performance

MySQL 8 versus Percona Server with heavy IO application on Intel Optane

Recently, Dimitri published the results of measuring MySQL 8.0 on an Intel Optane storage device. In this blog post, I wanted to look at this in more detail and explore the performance of MySQL 8, MySQL 5.7, and Percona Server for MySQL using a similar setup. The Intel Optane is a very capable device, so I was puzzled that Dimitri chose MySQL options that are either unsafe or not recommended for production workloads.

Since we have an Intel Optane in our labs, I wanted to run a similar benchmark, but using settings that we would recommend our customers to use, namely:

  • use innodb_checksums
  • use innodb_doublewrite
  • use binary logs with sync_binlog=1
  • leave Performance Schema enabled (the default)

I still used charset=latin1 (even though the default is utf8mb4 in MySQL 8), and I set the total size of the InnoDB log files to 30GB (as in Dimitri’s benchmark). Allocating big InnoDB log files ensures there is no pressure from adaptive flushing. Though I have concerns about how adaptive flushing works in MySQL 8, that is a topic for separate research.
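Put together, the settings above can be sketched as a my.cnf fragment along these lines (a sketch only; variable names are the stock MySQL/Percona Server ones, and any value not stated in the text above is illustrative):

```ini
[mysqld]
# safety-related settings kept at production values
innodb_checksum_algorithm = crc32
innodb_doublewrite        = ON
log_bin                   = binlog
sync_binlog               = 1
performance_schema        = ON

# benchmark-specific settings, following Dimitri's setup
character_set_server      = latin1
innodb_log_file_size      = 15G   # 2 log files x 15GB = 30GB total
innodb_buffer_pool_size   = 32G
```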

So let’s see how MySQL 8.0 performed with these settings, and compare it with MySQL 5.7 and Percona Server for MySQL 5.7.

I used an Intel Optane SSD 905P 960GB device on the server with 2 socket Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz CPUs.

To highlight the performance difference I wanted to show, I used a single case: sysbench with 8 tables of 50M rows each (about 120GB of data) and a 32GB buffer pool. I ran sysbench oltp_read_write in 128 threads.
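The workload itself can be reproduced with sysbench 1.0+ invocations along these lines (a sketch: the connection options and run duration are my assumptions, not taken from the original benchmark):

```shell
# prepare: 8 tables x 50M rows each (~120GB of data)
sysbench oltp_read_write \
  --mysql-socket=/tmp/mysql.sock --mysql-user=sbtest --mysql-password=sbtest \
  --tables=8 --table-size=50000000 \
  prepare

# run: oltp_read_write in 128 threads
sysbench oltp_read_write \
  --mysql-socket=/tmp/mysql.sock --mysql-user=sbtest --mysql-password=sbtest \
  --tables=8 --table-size=50000000 \
  --threads=128 --time=3600 --report-interval=10 \
  run
```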

First, let’s review the results for MySQL 8 vs. MySQL 5.7.

After achieving a steady state, we can see that MySQL 8 does not show ANY performance improvement over MySQL 5.7.

Let’s compare this with Percona Server for MySQL 5.7


Percona Server for MySQL 5.7 shows about 60% performance improvement over both MySQL 5.7 and MySQL 8.

How did we achieve this? All our improvements are described here: https://www.percona.com/doc/percona-server/LATEST/performance/xtradb_performance_improvements_for_io-bound_highly-concurrent_workloads.html. In short:

  1. Parallel doublewrite. In both MySQL 5.7 and MySQL 8, writes are serialized by the doublewrite buffer.
  2. Multi-threaded LRU flusher. We reported the issue and proposed a solution in https://bugs.mysql.com/bug.php?id=70500. However, Oracle has not incorporated the solution upstream.
  3. Single page eviction. This is another problematic area in MySQL’s flushing algorithm. The bug https://bugs.mysql.com/bug.php?id=81376 was reported over two years ago, but unfortunately it is still overlooked.

Summarizing performance findings:

  • For Percona Server for MySQL during this workload, I observed 1.4 GB/sec reads and 815 MB/sec writes
  • For MySQL 5.7 and MySQL 8, the numbers are 824 MB/sec reads and 530 MB/sec writes

My opinion is that Oracle focused on addressing the wrong performance problems in MySQL 8 and did not address the real issues. In this benchmark, using real production settings, MySQL 8 does not show any significant performance benefits over MySQL 5.7 for workloads characterized by heavy IO writes.

With this, I should admit that the Intel Optane is very performant storage. By comparison, on an Intel 3600 SSD under the same workload, Percona Server achieves only 2,000 tps, which is 2.5 times slower than with the Intel Optane.

Drawing some conclusions

So there are a few outcomes I can highlight:

  • Intel Optane is a very capable drive; it is easily the fastest of those we’ve tested so far
  • MySQL 8 is not able to utilize all the power of Intel Optane, unless you use unsafe settings (which to me is the equivalent of driving 200 MPH on a highway without working brakes)
  • Oracle has focused on addressing the wrong IO bottlenecks and has overlooked the real ones
  • To get all the benefits of Intel Optane performance, use a proper server—Percona Server for MySQL—which is able to utilize more IOPS from the device.

The post On MySQL and Intel Optane performance appeared first on Percona Database Performance Blog.

Jul 03, 2018
--

Google Cloud’s COO departs after 7 months

At the end of last November, Google announced that Diane Bryant, who at the time was on a leave of absence from her position as the head of Intel’s data center group, would become Google Cloud’s new COO. This was a major coup for Google, but it wasn’t meant to last. After only seven months on the job, Bryant has left Google Cloud, as Business Insider first reported today.

“We can confirm that Diane Bryant is no longer with Google. We are grateful for the contributions she made while at Google and we wish her the best in her next pursuit,” a Google spokesperson told us when we reached out for comment.

The reasons for Bryant’s departure are currently unclear. It’s no secret that Intel is looking for a new CEO and Bryant would fit the bill. Intel also famously likes to recruit insiders as its leaders, though I would be surprised if the company’s board had already decided on a replacement. Bryant spent more than 25 years at Intel and her hire at Google looked like it would be a good match, especially given that Google’s position behind Amazon and Microsoft in the cloud wars means that it needs all the executive talent it can get.

When Bryant was hired, Google Cloud CEO Diane Greene noted that “Diane’s strategic acumen, technical knowledge and client focus will prove invaluable as we accelerate the scale and reach of Google Cloud.” According to the most recent analyst reports, Google Cloud’s market share has ticked up a bit — and its revenue has increased at the same time — but Google remains a distant third in the competition and it doesn’t look like that’s changing anytime soon.

May 08, 2018
--

Intel Capital pumps $72M into AI, IoT, cloud and silicon startups, $115M invested so far in 2018

Intel Capital, the investment arm of the computer processor giant, is today announcing $72 million in funding for the 12 newest startups to enter its portfolio, bringing the total invested so far this year to $115 million. Announced at the company’s global summit currently underway in southern California, investments in this latest tranche cover artificial intelligence, Internet of Things, cloud services, and silicon. A detailed list is below.

Other notable news from the event included a new deal between the NBA and Intel Capital to work on more collaborations in delivering sports content, an area where Intel has already been working for years; and the news that Intel has now invested $125 million in startups headed by minorities, women and other under-represented groups as part of its Diversity Initiative. The mark was reached 2.5 years ahead of schedule, it said.

The range of categories of the startups that Intel is investing in is a mark of how the company continues to back ideas that it views as central to its future business — and specifically where it hopes its processors will play a central role, such as AI, IoT and cloud. Investing in silicon startups, meanwhile, is a sign of how Intel is also focusing on businesses that are working in an area that’s close to the company’s own DNA.

It hasn’t been a completely smooth road. Intel became a huge presence in the world of IT and the early rise of desktop and laptop computers many years ago with its advances in PC processors, but its fortunes changed with the shift to mobile, which saw the emergence of a new wave of chip companies and designs for smaller and faster devices. Mobile is an area that Intel itself has acknowledged it largely missed out on.

Recent years have brought still other issues, for example the Spectre security flaw (fixes for which are still being rolled out). And some of the business lines where Intel was hoping to make a mark have not panned out as hoped. Just last month, Intel shut down development of its Vaunt smart glasses and, reportedly, the entirety of its new devices group.

The investments that Intel Capital makes, in contrast, are a fresher and more optimistic aspect of the company’s operations: they represent hopes and possibilities that still have everything to play for. And given that, on balance, things like AI and cloud services still have a long way to go before being truly ubiquitous, there remains a lot of opportunity for Intel.

“These innovative companies reflect Intel’s strategic focus as a data leader,” said Wendell Brooks, Intel senior vice president and president of Intel Capital, in a statement. “They’re helping shape the future of artificial intelligence, the future of the cloud and the Internet of Things, and the future of silicon. These are critical areas of technology as the world becomes increasingly connected and smart.”

Intel Capital since 1991 has put $12.3 billion into 1,530 companies covering everything from autonomous driving to virtual reality and e-commerce and says that more than 660 of these startups have gone public or been acquired. Intel has organised its investment announcements thematically before: last October, it announced $60 million in 15 big data startups.

Here’s a rundown of the investments getting announced today. Unless otherwise noted, the startups are based around Silicon Valley:

Avaamo is a deep learning startup that builds conversational interfaces based on neural networks to address problems in enterprises — part of the wave of startups that are focusing on non-consumer conversational AI solutions.

Fictiv has built a “virtual manufacturing platform” to design, develop and deliver physical products, linking companies that want to build products with manufacturers who can help them. This is a problem that has foxed many a startup (notable failures have included Factorli out of Las Vegas), and it will be interesting to see if newer advances will make the challenges here surmountable.

Gamalon, from Cambridge, MA, says it has built a machine learning platform that “teaches computers actual ideas.” Its so-called Idea Learning technology is able to order free-form data like chat transcripts and surveys into something that a computer can read, making the data more actionable.

Reconova out of Xiamen, China is focusing on problems in visual perception in areas like retail, smart home and intelligent security.

Syntiant is an Irvine, CA-based AI semiconductor company that is working on ways of placing neural decision making on chips themselves to speed up processing and reduce battery consumption — a key challenge as computing devices move more information to the cloud and keep getting smaller. Target devices include mobile phones, wearable devices, smart sensors and drones.

Alauda out of China is a container-based cloud services provider focusing on enterprise platform-as-a-service solutions. “Alauda serves organizations undergoing digital transformation across a number of industries, including financial services, manufacturing, aviation, energy and automotive,” Intel said.

CloudGenix is a software-defined wide-area network startup, addressing an important area as more businesses take their networks and data into the cloud and look for cost savings. Intel says its customers use its broadband solutions to run unified communications and data center applications to remote offices, cutting costs by 70 percent and seeing big speed and reliability improvements.

Espressif Systems, also based in China, is a fabless semiconductor company, with its system-on-a-chip focused on IoT solutions.

VenueNext is a “smart venue” platform to deliver various services to visitors’ smartphones, providing analytics and more to the facility providing the services. Hospitals, sports stadiums and others are among its customers.

Lyncean Technologies is nearly 18 years old (founded in 2001) and has been working on something called Compact Light Source (CLS), which Intel describes as a miniature synchrotron X-ray source, which can be used for either extremely detailed large X-rays or very microscopic ones. This has both medical and security applications, making it a very timely business.

Movellus “develops semiconductor technologies that enable digital tools to automatically create and implement functionality previously achievable only with custom analog design.” Its main focus is creating more efficient approaches to designing analog circuits for systems on chips, needed for AI and other applications.

SiFive makes “market-ready processor core IP based on the RISC-V instruction set architecture,” founded by the inventors of RISC-V and led by a team of industry veterans.

Apr 02, 2018
--

Apple, in a very Apple move, is reportedly working on its own Mac chips

Apple is planning to use its own chips for its Mac devices, which could replace the Intel chips currently running on its desktop and laptop hardware, according to a report from Bloomberg.

Apple already designs a lot of custom silicon, including chipsets like the W-series for its Bluetooth headphones, the S-series in its watches, and its A-series iPhone chips, as well as customized GPUs for the new iPhones. In that sense, Apple has in a lot of ways built its own internal fabless chip firm, which makes sense as it looks for its devices to tackle more and more specific use cases and to remove some of its reliance on third parties for equipment. Apple is already in the middle of a very public spat with Qualcomm over royalties, and while the Mac is sort of a tertiary product in its lineup, it still contributes a significant portion of revenue to the company.

Creating an entire suite of custom silicon could do a lot of things for Apple, not least of which is bringing the Mac into a system where the devices can talk to each other more efficiently. Apple already has a lot of tools to shift user activities between all its devices, but making that more seamless means it’s easier to lock users into the Apple ecosystem. If you’ve ever compared connecting headphones with a W1 chip to the iPhone against typical Bluetooth headphones, you’ve probably seen the difference, and that could be even more robust with its own chipset. Bloomberg reports that Apple may implement the chips as soon as 2020.

Intel may be the clear loser here, and the market is reflecting that. Intel’s stock is down nearly 8% after the report came out, as Apple moving from Intel’s processors to its own custom designs would be a clear shift away from an architecture where Intel has long held its ground. Apple is not the only company looking to design its own silicon, either, with Amazon looking into building its own AI chips for Alexa in another move to create lock-in for the Amazon ecosystem. And while the biggest players are looking at their own architecture, there’s an entire suite of startups getting a lot of funding to build custom silicon geared toward AI.

Apple declined to comment.

Mar 15, 2018
--

The red-hot AI hardware space gets even hotter with $56M for a startup called SambaNova Systems

Another massive financing round for an AI chip company is coming in today, this time for SambaNova Systems — a startup founded by a pair of Stanford professors and a longtime chip company executive — to build out the next generation of hardware to supercharge AI-centric operations.

SambaNova joins an already quite large class of startups looking to attack the problem of making AI operations much more efficient and faster by rethinking the actual substrate where the computations happen. The GPU has become increasingly popular among developers for its ability to handle, in very speedy fashion, the kinds of lightweight mathematics necessary for AI operations. Startups like SambaNova look to create a new platform from scratch, all the way down to the hardware, that is optimized exactly for those operations. The hope is that by doing so, it will be able to outclass a GPU in terms of speed, power usage, and even potentially the actual size of the chip. SambaNova today said it has raised a massive $56 million Series A financing round, co-led by GV and Walden International, with participation from Redline Capital and Atlantic Bridge Ventures.

SambaNova is the product of technology from Kunle Olukotun and Chris Ré, two professors at Stanford, and is led by former Oracle SVP of development Rodrigo Liang, who was also a VP at Sun for almost eight years. When looking at the landscape, the team at SambaNova worked their way backwards, first identifying which operations need to happen more efficiently and then figuring out what kind of hardware needs to be in place to make that happen. That boils down to a lot of calculations, stemming from the field of mathematics called linear algebra, done very, very quickly, but it’s something that existing CPUs aren’t exactly tuned to do. And a common criticism from most of the founders in this space is that Nvidia GPUs, while much more powerful than CPUs when it comes to these operations, are still ripe for disruption.

“You’ve got these huge [computational] demands, but you have the slowing down of Moore’s law,” Olukotun said. “The question is, how do you meet these demands while Moore’s law slows. Fundamentally you have to develop computing that’s more efficient. If you look at the current approaches to improve these applications based on multiple big cores or many small, or even FPGA or GPU, we fundamentally don’t think you can get to the efficiencies you need. You need an approach that’s different in the algorithms you use and the underlying hardware that’s also required. You need a combination of the two in order to achieve the performance and flexibility levels you need in order to move forward.”

While a $56 million funding round for a series A might sound massive, it’s becoming a pretty standard number for startups looking to attack this space, which has an opportunity to beat massive chipmakers and create a new generation of hardware that will be omnipresent among any device that is built around artificial intelligence — whether that’s a chip sitting on an autonomous vehicle doing rapid image processing to potentially even a server within a healthcare organization training models for complex medical problems. Graphcore, another chip startup, got $50 million in funding from Sequoia Capital, while Cerebras Systems also received significant funding from Benchmark Capital.

Olukotun and Liang wouldn’t go into the specifics of the architecture, but they are looking to redo the operational hardware to optimize for the AI-centric frameworks that have become increasingly popular in fields like image and speech recognition. At its core, that involves a lot of rethinking of how interaction with memory occurs and what happens with heat dissipation for the hardware, among other complex problems. Apple, Google with its TPU, and reportedly Amazon have taken an intense interest in this space to design their own hardware that’s optimized for products like Siri or Alexa, which makes sense because dropping that latency to as close to zero as possible with as much accuracy as possible in the end improves the user experience. A great user experience leads to more lock-in for those platforms, and while the larger players may end up making their own hardware, GV’s Dave Munichiello — who is joining the company’s board — says this is basically a validation that everyone else is going to need the technology soon enough.

“Large companies see a need for specialized hardware and infrastructure,” he said. “AI and large-scale data analytics are so essential to providing services the largest companies provide that they’re willing to invest in their own infrastructure, and that tells us more investment is coming. What Amazon and Google and Microsoft and Apple are doing today will be what the rest of the Fortune 100 are investing in in 5 years. I think it just creates a really interesting market and an opportunity to sell a unique product. It just means the market is really large, if you believe in your company’s technical differentiation, you welcome competition.”

There is certainly going to be a lot of competition in this area, and not just from those startups. While SambaNova wants to create a true platform, there are a lot of different interpretations of where it should go — such as whether it should be two separate pieces of hardware that handle either inference or machine training. Intel, too, is betting on an array of products, as well as a technology called Field Programmable Gate Arrays (or FPGA), which would allow for a more modular approach in building hardware specified for AI and are designed to be flexible and change over time. Both Munichiello’s and Olukotun’s arguments are that these require developers who have a special expertise of FPGA, which is a sort of niche-within-a-niche that most organizations will probably not have readily available.

Nvidia has been a massive beneficiary of the explosion of AI systems, but that explosion has clearly exposed a ton of interest in investing in a new breed of silicon. There’s certainly an argument for developer lock-in on Nvidia’s platforms like Cuda. But there are a lot of new frameworks, like TensorFlow, that are creating a layer of abstraction that is increasingly popular with developers. That, too, represents an opportunity for both SambaNova and other startups, which can just work to plug into those popular frameworks, Olukotun said. Cerebras Systems CEO Andrew Feldman actually addressed some of this on stage at the Goldman Sachs Technology and Internet Conference last month.

“Nvidia has spent a long time building an ecosystem around their GPUs, and for the most part, with the combination of TensorFlow, Google has killed most of its value,” Feldman said at the conference. “What TensorFlow does is, it says to researchers and AI professionals, you don’t have to get into the guts of the hardware. You can write at the upper layers and you can write in Python, you can use scripts, you don’t have to worry about what’s happening underneath. Then you can compile it very simply and directly to a CPU, TPU, GPU, to many different hardwares, including ours. If in order to do that work, you have to be the type of engineer that can do hand-tuned assembly or can live deep in the guts of hardware, there will be no adoption… We’ll just take in their TensorFlow, we don’t have to worry about anything else.”

(As an aside, I was once told that Cuda and those other lower-level platforms are really used by AI wonks like Yann LeCun building weird AI stuff in the corners of the Internet.)

There are also two big question marks for SambaNova: first, it’s very new, having started just last November, while many of these efforts, at both startups and larger companies, have been years in the making. Munichiello’s answer to this is that the development of those technologies did, indeed, begin a while ago, and that’s not a terrible thing, as SambaNova is getting started right in the current generation of AI needs. The second, among some in the valley, is that most of the industry just might not need hardware that does these operations in a blazing fast manner. The latter, you might argue, could be alleviated by the fact that so many of these companies are getting so much funding, with some already reaching close to billion-dollar valuations.

But, in the end, you can now add SambaNova to the list of AI startups that have raised enormous rounds of funding — one that stretches out to include a myriad of companies around the world like Graphcore and Cerebras Systems, as well as a lot of reported activity out of China with companies like Cambricon Technology and Horizon Robotics. This effort does, indeed, require significant investment not only because it’s hardware at its base, but it has to actually convince customers to deploy that hardware and start tapping the platforms it creates, which supporting existing frameworks hopefully alleviates.

“The challenge you see is that the industry, over the last ten years, has underinvested in semiconductor design,” Liang said. “If you look at the innovations at the startup level all the way through big companies, we really haven’t pushed the envelope on semiconductor design. It was very expensive and the returns were not quite as good. Here we are, suddenly you have a need for semiconductor design, and to do low-power design requires a different skillset. If you look at this transition to intelligent software, it’s one of the biggest transitions we’ve seen in this industry in a long time. You’re not accelerating old software, you want to create that platform that’s flexible enough [to optimize these operations] — and you want to think about all the pieces. It’s not just about machine learning.”

Mar 12, 2018
--

A brief history of the epic battle over the fate of Qualcomm

The massive fight over the fate of Qualcomm, which chipmaker Broadcom seeks to acquire in what would be the largest deal in technology history, took another dramatic turn this afternoon when the Trump administration said it would block the deal.

The move is another chapter in a long story: the culmination of a lot of consolidation activity in the semiconductor space and months of jockeying over whether or not Broadcom would be able to complete a hostile bid for the U.S. chipmaker. Following an investigation by CFIUS, the Broadcom-Qualcomm deal is officially on hold.

Let’s review the past few years and see how we got here.


Feb 07, 2018
--

Intel’s latest chip is designed for computing at the edge

As we develop increasingly sophisticated technologies like self-driving cars and industrial internet of things sensors, it’s going to require that we move computing to the edge. Essentially this means that instead of sending data to the cloud for processing, it needs to be done right on the device itself, because even a little bit of latency is too much. Intel announced a new chip…

Jan 12, 2018
--

Intel is having reboot issues with its Spectre-Meltdown patches

It hasn’t been a fun time to be Intel. Last week the company revealed two chip vulnerabilities that have come to be known as Spectre and Meltdown and have been rocking the entire chip industry ever since. This week the company issued some patches to rectify the problem. Today, word leaked that some companies were having a reboot issue after installing them. A bad week just got worse.

Jan 12, 2018
--

Intel tried desperately to change the subject from Spectre and Meltdown at CES

Intel had a bad week last week. It was so bad that the chipmaker has to be thrilled to have CES, the massive consumer technology show going on this week in Las Vegas, as a way to change the subject and focus on the other work it is doing.
