Aug 20, 2019
--

IBM is moving OpenPower Foundation to The Linux Foundation

IBM makes the Power Series chips, and as part of that has open-sourced some of the underlying technologies to encourage wider use of these chips. The open-source pieces have been part of the OpenPower Foundation. Today, the company announced it was moving the foundation under The Linux Foundation, and while it was at it, announced it was open-sourcing several other important bits.

Ken King, general manager for OpenPower at IBM, says that at this point in his organization’s evolution, they wanted to move it under the auspices of The Linux Foundation. “We are taking the OpenPower Foundation, and we are putting it as an entity or project underneath The Linux Foundation with the mindset that we are now bringing more of an open governance approach and open governance principles to the foundation,” King told TechCrunch.

But IBM didn’t stop there. It also announced that it was open-sourcing some of the technical underpinnings of the Power Series chip to make it easier for developers and engineers to build on top of the technology. Perhaps most importantly, the company is open-sourcing the Power Instruction Set Architecture (ISA). These are “the definitions developers use for ensuring hardware and software work together on Power,” the company explained.

King sees open-sourcing this technology as an important step for a number of reasons around licensing and governance. “The first thing is that we are taking the ability to be able to implement what we’re licensing, the ISA instruction set architecture, for others to be able to implement on top of that instruction set royalty free with patent rights,” he explained.

The company is also putting this under an open governance workgroup at the OpenPower Foundation. This matters to open-source community members because it provides a layer of transparency that might otherwise be lacking. What that means in practice is that any changes will be subject to a majority vote, so long as the changes meet compatibility requirements, King said.

Jim Zemlin, executive director at the Linux Foundation, says that making all of this part of the Linux Foundation open-source community could drive more innovation. “Instead of a very, very long cycle of building an application and working separately with hardware and chip designers, because all of this is open, you’re able to quickly build your application, prototype it with hardware folks, and then work with a service provider or a company like IBM to take it to market. So there’s not tons of layers in between the actual innovation and value captured by industry in that cycle,” Zemlin explained.

In addition, IBM made several other announcements around open-sourcing other Power Chip technologies designed to help developers and engineers customize and control their implementations of Power chip technology. “IBM will also contribute multiple other technologies including a softcore implementation of the Power ISA, as well as reference designs for the architecture-agnostic Open Coherent Accelerator Processor Interface (OpenCAPI) and the Open Memory Interface (OMI). The OpenCAPI and OMI technologies help maximize memory bandwidth between processors and attached devices, critical to overcoming performance bottlenecks for emerging workloads like AI,” the company said in a statement.

The softcore implementation of the Power ISA, in particular, should give developers more control and even enable them to build their own instruction sets, explained Hugh Blemings, executive director of the OpenPower Foundation. “They can now actually try crafting their own instruction sets, and try out new ways of the accelerated data processes and so forth at a lower level than previously possible,” he said.

The company is announcing all of this today at The Linux Foundation Open Source Summit and OpenPower Summit in San Diego.

Aug 19, 2019
--

The five technical challenges Cerebras overcame in building the first trillion-transistor chip

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today — and it is a doozy. The “Wafer Scale Engine” packs 1.2 trillion transistors (the most ever) into 46,225 square millimeters of silicon (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative).

Cerebras’ Wafer Scale Engine is larger than a typical Mac keyboard (via Cerebras Systems).

It’s made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry’s big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees. You can read more about the chip from Tiernan Ray at Fortune and read the white paper from Cerebras itself.

Superlatives aside though, I think the more interesting story here is the technical challenges Cerebras had to overcome to reach this milestone. I sat down with founder and CEO Andrew Feldman this afternoon to discuss what his 173 engineers have been building quietly just down the street here these past few years, with $112 million in venture capital funding from Benchmark and others.

Going big means nothing but challenges

First, a quick background on how the chips that power your phones and computers get made. Fabs like TSMC take standard-sized silicon wafers and divide them into individual chips by using light to etch the transistors into the chip. Wafers are circles and chips are squares, and so there is some basic geometry involved in subdividing that circle into a clear array of individual chips.

One big challenge in this lithography process is that errors can creep into the manufacturing process, requiring extensive testing to verify quality and forcing fabs to throw away poorly performing chips. The smaller and more compact the chip, the less likely any individual chip will be inoperative, and the higher the yield for the fab. Higher yield equals higher profits.
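To put rough numbers on that intuition, here is a back-of-the-envelope sketch (my own illustration, not from the article) using the textbook Poisson defect model, where the chance a die comes out defect-free drops exponentially with its area:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Expected fraction of defect-free dies under a simple Poisson defect model."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

D = 0.001  # fatal defects per mm^2 -- an illustrative number, not a real fab figure

print(poisson_yield(100, D))     # ~0.90: a small ~100 mm^2 mobile-class die
print(poisson_yield(800, D))     # ~0.45: a large GPU-class die
print(poisson_yield(46225, D))   # ~1e-20: a whole-wafer die is essentially never defect-free
```

Under any plausible defect density, the whole-wafer case rounds to zero, which is exactly the yield wall described below and the reason Cerebras needed redundancy rather than a perfect wafer.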

Cerebras throws out the idea of etching a bunch of individual chips onto a single wafer in favor of using the whole wafer itself as one gigantic chip. That allows all of those individual cores to connect with one another directly — vastly speeding up the critical feedback loops used in deep learning algorithms — but comes at the cost of huge manufacturing and design challenges to create and manage these chips.

Cerebras’ technical architecture and design was led by co-founder Sean Lie. Feldman and Lie worked together on a previous startup called SeaMicro, which sold to AMD in 2012 for $334 million (via Cerebras Systems).

The first challenge the team ran into, according to Feldman, was handling communication across the “scribe lines.” While Cerebras’ chip encompasses a full wafer, today’s lithography equipment still has to act like there are individual chips being etched into the silicon wafer. So the company had to invent new techniques to allow each of those individual chips to communicate with each other across the whole wafer. Working with TSMC, they not only invented new channels for communication, but also had to write new software to handle chips with trillion-plus transistors.

The second challenge was yield. With a chip covering an entire silicon wafer, a single imperfection in the etching of that wafer could render the entire chip inoperative. This has been the stumbling block for whole-wafer technology for decades: due to the laws of physics, it is essentially impossible to repeatedly etch a trillion transistors with perfect accuracy.

Cerebras approached the problem using redundancy by adding extra cores throughout the chip that would be used as backup in the event that an error appeared in that core’s neighborhood on the wafer. “You have to hold only 1%, 1.5% of these guys aside,” Feldman explained to me. Leaving extra cores allows the chip to essentially self-heal, routing around the lithography error and making a whole-wafer silicon chip viable.
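To see why such a small spare budget is enough, here is a rough simulation (my own sketch with made-up numbers; Cerebras’ actual defect rates and routing scheme are not public). Even when defects knock out dozens of cores per wafer, a 1.5% reserve absorbs them with enormous margin, turning near-zero yield into near-perfect yield:

```python
import numpy as np

rng = np.random.default_rng(0)

def usable_wafers(n_wafers=1_000, n_cores=400_000, p_core_defect=1e-4, spare_fraction=0.015):
    """Count simulated wafers whose defective cores fit within the spare-core budget."""
    defective = rng.binomial(n_cores, p_core_defect, size=n_wafers)  # defective cores per wafer
    return int((defective <= n_cores * spare_fraction).sum())

print(usable_wafers(spare_fraction=0.0))    # 0 of 1000: with no spares, some core is almost always bad
print(usable_wafers(spare_fraction=0.015))  # 1000 of 1000: ~6,000 spare cores absorb ~40 expected defects
```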

Entering uncharted territory in chip design

Those first two challenges — communicating across the scribe lines between chips and handling yield — have flummoxed chip designers studying whole-wafer chips for decades. But they were known problems, and Feldman said that they were actually easier to solve than expected by re-approaching them using modern tools.

He likens the challenge to climbing Mount Everest. “It’s like the first set of guys failed to climb Mount Everest, they said, ‘Shit, that first part is really hard.’ And then the next set came along and said ‘That shit was nothing. That last hundred yards, that’s a problem.’ ”

And indeed, according to Feldman, the toughest challenges for Cerebras were the next three, since no other chip designer had gotten past the scribe-line communication and yield challenges to discover what came next.

The third challenge Cerebras confronted was handling thermal expansion. Chips get extremely hot in operation, but different materials expand at different rates. That means the connectors tethering a chip to its motherboard also need to thermally expand at precisely the same rate, lest cracks develop between the two.

As Feldman explained, “How do you get a connector that can withstand [that]? Nobody had ever done that before, [and so] we had to invent a material. So we have PhDs in material science, [and] we had to invent a material that could absorb some of that difference.”

Once a chip is manufactured, it needs to be tested and packaged for shipment to original equipment manufacturers (OEMs) who add the chips into the products used by end customers (whether data centers or consumer laptops). There is a challenge though: Absolutely nothing on the market is designed to handle a whole-wafer chip.

Cerebras designed its own testing and packaging system to handle its chip (via Cerebras Systems).

“How on earth do you package it? Well, the answer is you invent a lot of shit. That is the truth. Nobody had a printed circuit board this size. Nobody had connectors. Nobody had a cold plate. Nobody had tools. Nobody had tools to align them. Nobody had tools to handle them. Nobody had any software to test,” Feldman explained. “And so we have designed this whole manufacturing flow, because nobody has ever done it.” Cerebras’ technology is much more than just the chip it sells — it also includes all of the associated machinery required to actually manufacture and package those chips.

Finally, all that processing power in one chip requires immense power and cooling. Cerebras’ chip uses 15 kilowatts of power to operate — a prodigious amount of power for an individual chip, although relatively comparable to a modern-sized AI cluster. All that power also needs to be cooled, and Cerebras had to design a new way to deliver both for such a large chip.

It essentially approached the problem by turning the chip on its side, in what Feldman called “using the Z-dimension.” The idea was that rather than trying to move power and cooling horizontally across the chip as is traditional, power and cooling are delivered vertically at all points across the chip, ensuring even and consistent access to both.

And so, those were the next three challenges — thermal expansion, packaging and power/cooling — that the company has worked around the clock to solve these past few years.

From theory to reality

Cerebras has a demo chip (I saw one, and yes, it is roughly the size of my head), and it has started to deliver prototypes to customers, according to reports. The big challenge, though, as with all new chips, is scaling production to meet customer demand.

For Cerebras, the situation is a bit unusual. Because it places so much computing power on one wafer, customers don’t necessarily need to buy dozens or hundreds of chips and stitch them together to create a compute cluster. Instead, they may only need a handful of Cerebras chips for their deep-learning needs. The company’s next major phase is to reach scale and ensure a steady delivery of its chips, which it packages as a whole system “appliance” that also includes its proprietary cooling technology.

Expect to hear more details of Cerebras technology in the coming months, particularly as the fight over the future of deep learning processing workflows continues to heat up.

Aug 14, 2019
--

Why chipmaker Broadcom is spending big bucks for aging enterprise software companies

Last year Broadcom, a chipmaker, raised eyebrows when it acquired CA Technologies, an enterprise software company with a broad portfolio of products, including a sizable mainframe software tools business. It paid close to $19 billion for the privilege.

Then last week, the company opened up its wallet again and forked over $10.7 billion for Symantec’s enterprise security business. That’s almost $30 billion for two aging enterprise software companies. There has to be some sound strategy behind these purchases, right? Maybe.

Here’s the thing about older software companies. They may not out-innovate the competition anymore, but what they have going for them is a backlog of licensing revenue that appears to have value.

Jul 31, 2019
--

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer product goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields (robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles), we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements listed below, and we’ll consider your startup:

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, which is immediately followed by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.

Jul 23, 2019
--

Arrcus snags $30M Series B as it tries to disrupt networking biz

Arrcus has a bold notion to try and take on the biggest names in networking by building a better networking management system. Today it was rewarded with a $30 million Series B investment led by Lightspeed Venture Partners.

Existing investors General Catalyst and Clear Ventures also participated. The company previously raised a seed and Series A totaling $19 million, bringing the total raised to date to $49 million, according to numbers provided by the company.

Founder and CEO Devesh Garg says the company wanted to create a product that would transform the networking industry, which has traditionally been controlled by a few companies. “The idea basically is to give you the best-in-class [networking] software with the most flexible consumption model at the lowest overall total cost of ownership. So you really as an end customer have the choice to choose best-in-class solutions,” Garg told TechCrunch.

This involves building a networking operating system called ArcOS to run the networking environment. For now, that means working with manufacturers of white-box solutions and offering some combination of hardware and software, depending on what the customer requires. Garg says that players at the top of the market like Cisco, Arista and Juniper tend to keep their technical specifications to themselves, making it impossible to integrate ArcOS with their systems at this time, but he sees room for a company like Arrcus.

“Fundamentally, this is a very large marketplace that’s controlled by two or three incumbents, and when you have lack of competition you get all of the traditional bad behavior that comes along with that, including muted innovation, rigidity in terms of the solutions that are provided and these legacy procurement models, where there’s not much flexibility with artificially high pricing,” he explained.

The company hopes to fundamentally change the current system with its solutions, taking advantage of unbranded hardware that offers a similar experience but can run the Arrcus software. “Think of them as white-box manufacturers of switches and routers. Oftentimes, they come from Taiwan, where they’re unbranded, but it’s effectively the same components that are used in the same systems that are used by the [incumbents],” he said.

The approach seems to be working, as the company has grown to 50 employees since it launched in 2016. Garg says he expects to double that number in the next six to nine months with the new funding. Currently the company has a double-digit number of paying customers and more than 20 in various stages of proof of concept, he said.

Jun 21, 2019
--

Three years after moving off AWS, Dropbox infrastructure continues to evolve

Conventional wisdom would suggest that you close your data centers and move to the cloud, not the other way around, but in 2016 Dropbox undertook the opposite journey. It (mostly) ended its long-time relationship with AWS and built its own data centers.

Of course, that same conventional wisdom would say, it’s going to get prohibitively expensive and more complicated to keep this up. But Dropbox still believes it made the right decision and has found innovative ways to keep costs down.

Akhil Gupta, VP of Engineering at Dropbox, says that when Dropbox decided to build its own data centers, it realized that as a massive file storage service, it needed control over certain aspects of the underlying hardware that was difficult for AWS to provide, especially in 2016 when Dropbox began making the transition.

“Public cloud by design is trying to work with multiple workloads, customers and use cases and it has to optimize for the lowest common denominator. When you have the scale of Dropbox, it was entirely possible to do what we did,” Gupta explained.

Alone again, naturally

One of the key challenges of trying to manage your own data centers, or build a private cloud where you still act like a cloud company in a private context, is that it’s difficult to innovate and scale the way the public cloud companies do, especially AWS. Dropbox looked at the landscape and decided it would be better off doing just that, and Gupta says even with a small team — the original team was just 30 people — it’s been able to keep innovating.

Jun 12, 2019
--

Helium launches $51M-funded ‘LongFi’ IoT alternative to cellular

With 200X the range of Wi-Fi at 1/1000th of the cost of a cellular modem, Helium’s “LongFi” wireless network debuts today. Its transmitters can help track stolen scooters, find missing dogs via IoT collars and collect data from infrastructure sensors. The catch is that Helium’s tiny, extremely low-power, low-data transmission chips rely on connecting to P2P Helium Hotspots people can now buy for $495. Operating those hotspots earns owners a cryptocurrency token Helium promises will be valuable in the future…

The potential of a new wireless standard has allowed Helium to raise $51 million over the past few years from GV, Khosla Ventures and Marc Benioff, including a new $15 million Series C round co-led by Union Square Ventures and Multicoin Capital. That’s in part because one of Helium’s co-founders is Napster inventor Shawn Fanning. Investors are betting that he can change the tech world again, this time with a wireless protocol that like Wi-Fi and Bluetooth before it could unlock unique business opportunities.

Helium already has some big partners lined up, including Lime, which will test it for tracking its lost and stolen scooters and bikes when they’re brought indoors, obscuring other connectivity, or their battery is pulled out, deactivating GPS. “It’s an ultra low-cost version of a LoJack,” Helium CEO Amir Haleem says.

InvisiLeash will partner with it to build more trackable pet collars. Agulus will pull data from irrigation valves and pumps for its agriculture tech business. Nestle will track when it’s time to refill water in its ReadyRefresh coolers at offices, and Stay Alfred will use it to track occupancy status and air quality in buildings. Haleem also imagines the tech being useful for tracking wildfires or radiation.

Haleem met Fanning playing video games in the 2000s. They teamed up with Sproutling baby monitor (sold to Mattel) founder Chris Bruce in 2013 to start work on Helium. They foresaw a version of Tile’s trackers that could function anywhere while replacing expensive cell connections for devices that don’t need high bandwidth. Helium’s 5 kilobit per second connections will compete with SigFox, another lower-power IoT protocol, though Haleem claims its more centralized infrastructure costs are prohibitive. It’s also facing off against Nodle, which piggybacks on devices’ Bluetooth hardware. Lucky for Helium, on-demand rental bikes and scooters that are perfect for its network have reached mainstream popularity just as Helium launches six years after its start.

Helium says it already pre-sold 80% of its Helium Hotspots for its first market in Austin, Texas. People connect them to their Wi-Fi and put them in a window so the devices can pull in data from Helium’s IoT sensors over its open-source LongFi protocol. The hotspots then encrypt and send the data to the company’s cloud, which clients can plug into to track and collect info from their devices. The Helium Hotspots only require as much energy as a 12-watt LED light bulb to run, but that $495 price tag is steep. The lack of a concrete return on investment could deter later adopters from buying the expensive device.
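For illustration only, here is a purely hypothetical sketch of that relay step; the data structure, field names and envelope format below are invented for this example and are not Helium’s actual protocol or API:

```python
import json
from dataclasses import dataclass

@dataclass
class SensorPacket:
    device_id: str   # e.g. a scooter tracker or pet collar (hypothetical identifier)
    payload: bytes   # raw, already-encrypted LongFi payload as received by the hotspot

def build_cloud_envelope(packet: SensorPacket, hotspot_id: str) -> str:
    """Wrap one received sensor payload for forwarding to the cloud over the hotspot's Wi-Fi."""
    return json.dumps({
        "hotspot_id": hotspot_id,
        "device_id": packet.device_id,
        "payload_hex": packet.payload.hex(),
    })

# Example: a hotspot in a window forwarding one packet from a scooter tracker.
print(build_cloud_envelope(SensorPacket("scooter-42", b"\x01\x02\x03"), "hotspot-austin-7"))
```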

Only 150-200 hotspots are necessary to blanket a city in connectivity, Haleem tells me. But the hotspots need to be distributed across the landscape, so a client can’t just fill a warehouse with them, and the upfront price is steep for individuals, which means Helium might need to sign up some retail chains as partners for deployment. As Haleem admits, “The hard part is the education.” Making hotspot buyers understand the potential (and risks) while demonstrating the opportunities for clients will require a ton of outreach and slick marketing.

Without enough Helium Hotspots, the Helium network won’t function. That means this startup will have to simultaneously win at telecom technology, enterprise sales and cryptocurrency for the network to pan out. As if one of those wasn’t hard enough.

May 17, 2019
--

HPE is buying Cray for $1.3 billion

HPE announced it was buying Cray for $1.3 billion, giving it access to the company’s high-performance computing portfolio, and perhaps a foothold into quantum computing in the future.

The purchase price was $35 a share, a $5.19 premium over yesterday’s close of $29.81 a share. Cray was founded in the 1970s and for a time represented the cutting edge of supercomputing in the United States, but times have changed, and as the market has shifted, a deal like this makes sense.

Ray Wang, founder and principal analyst at Constellation Research, says this is about consolidation at the high end of the market. “This is a smart acquisition for HPE. Cray has been losing money for some time but had a great portfolio of IP and patents that is key for the quantum era,” he told TechCrunch.

While HPE’s president and CEO Antonio Neri didn’t see it in those terms, he did see an opportunity in combining the two organizations. “By combining our world-class teams and technology, we will have the opportunity to drive the next generation of high performance computing and play an important part in advancing the way people live and work,” he said in a statement.

Cray CEO and president Peter Ungaro agreed. “We believe that the combination of Cray and HPE creates an industry leader in the fast-growing High-Performance Computing (HPC) and AI markets and creates a number of opportunities that neither company would likely be able to capture on their own,” he wrote in a blog post announcing the deal.

Patrick Moorhead, principal analyst at Moor Insights & Strategy, says HPC is one of the fastest-growing markets and HPE has indicated it wants to stake a claim there. “I’m not surprised by the deal. Its degree of success will be determined by the integration of the two companies. HPE brings increased scale and some unique consumption models and Cray brings expertise and unique connectivity IP,” Moorhead explained.

While it’s not clear how this will work over time, this type of consolidation usually involves some job loss on the operations side of the house as the two companies become one. It is also unclear how this will affect Cray’s customers as it moves to become part of HPE, but HPE has plans to create a high-performance computing product family using its new assets in combination with the new Cray products.

HPE was formed when HP split into two companies, a separation announced in 2014 and completed in 2015. HP Inc. took the printer and PC business, while HPE took the enterprise side.

The deal is subject to the typical regulatory oversight, but if all goes well, it is expected to close in HPE’s fiscal Q1 2020.

May 2, 2019
--

Microsoft announces the $3,500 HoloLens 2 Development Edition

As part of its rather odd Thursday afternoon pre-Build news dump, Microsoft today announced the HoloLens 2 Development Edition. The company announced the much-improved HoloLens 2 at MWC Barcelona earlier this year, but it’s not shipping to developers yet. Currently, the best release date we have is “later this year.” The Development Edition will launch alongside the regular HoloLens 2.

The Development Edition, which will retail for $3,500 to own outright or on a $99 per month installment plan, doesn’t feature any special hardware. Instead, it comes with $500 in Azure credits and 3-month trials of Unity Pro and the Unity PiXYZ plugin for bringing engineering renderings into Unity.

To get the Development Edition, potential buyers have to join the Microsoft Mixed Reality Developer Program and those who already pre-ordered the standard edition will be able to change their order later this year.

As far as HoloLens news goes, that’s all a bit underwhelming. Anybody can get free Azure credits, after all (though usually only $200), and free trials of Unity Pro are also readily available (though typically limited to 30 days).

Oddly, the regular HoloLens 2 was also supposed to cost $3,500. It’s unclear if the regular edition will now be somewhat cheaper, cost the same but come without the credits, or really why Microsoft is doing this at all. Turning this into a special “Development Edition” feels more like a marketing gimmick than anything else, as well as an attempt to bring some of the futuristic glamour of the HoloLens visor to today’s announcements.

The folks at Unity are clearly excited, though. “Pairing HoloLens 2 with Unity’s real-time 3D development platform enables businesses to accelerate innovation, create immersive experiences, and engage with industrial customers in more interactive ways,” says Tim McDonough, GM of Industrial at Unity, in today’s announcement. “The addition of Unity Pro and PiXYZ Plugin to HoloLens 2 Development Edition gives businesses the immediate ability to create real-time 2D, 3D, VR, and AR interactive experiences while allowing for the importing and preparation of design data to create real-time experiences.”

Microsoft also today noted that Unreal Engine 4 support for HoloLens 2 will become available by the end of May.

May 2, 2019
--

Takeaways from F8 and Facebook’s next phase

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though touted as a developer-focused conference, Facebook spent much of F8 discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

That’s not the newsfeed, that’s not pages, that’s not profiles. That’s marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”

Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 
