Nov
02
2020
--

AWS launches its next-gen GPU instances

AWS today announced the launch of its newest GPU-equipped instances. Dubbed P4d, these new instances are launching a decade after AWS launched its first set of Cluster GPU instances. This new generation is powered by Intel Cascade Lake processors and eight of Nvidia’s A100 Tensor Core GPUs. These instances, AWS promises, offer up to 2.5x the deep learning performance of the previous generation — and training a comparable model should be about 60% cheaper with these new instances.

Image Credits: AWS

For now, there is only one size available: the p4d.24xlarge, in AWS slang. Its eight A100 GPUs are connected over Nvidia’s NVLink communication interface, and the instances also support the company’s GPUDirect interface.

With 320 GB of high-bandwidth GPU memory and 400 Gbps networking, this is obviously a very powerful machine. Add to that the 96 CPU cores, 1.1 TB of system memory and 8 TB of SSD storage, and it’s maybe no surprise that the on-demand price is $32.77 per hour (though that price drops to less than $20/hour for one-year reserved instances and to $11.57/hour for three-year reserved instances).
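For those who want to experiment, a p4d.24xlarge is requested like any other EC2 instance type. Here is a minimal, hypothetical boto3 sketch; the AMI ID and key name are placeholders, and your account needs p4d capacity and quota in the region:

```python
# Hypothetical sketch: launching a single p4d.24xlarge On-Demand instance.
# The AMI ID and key name below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: e.g. a Deep Learning AMI
    InstanceType="p4d.24xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",  # placeholder key pair name
)
print(response["Instances"][0]["InstanceId"])
```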

Image Credits: AWS

On the extreme end, you can combine 4,000 or more GPUs into an EC2 UltraCluster, as AWS calls these machines, for high-performance computing workloads at what is essentially supercomputer scale. Given the price, you’re not likely to spin up one of these clusters to train the model for your toy app anytime soon, but AWS has already been working with a number of enterprise customers to test these instances and clusters, including Toyota Research Institute, GE Healthcare and Aon.

“At [Toyota Research Institute], we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, Technical Lead, Infrastructure Engineering at TRI. “The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

Oct
05
2020
--

As it closes in on Arm, Nvidia announces UK supercomputer dedicated to medical research

As Nvidia continues to work through its deal to acquire Arm from SoftBank for $40 billion, the computing giant is making another big move to demonstrate its commitment to investing in U.K. technology. Today the company announced plans to develop Cambridge-1, a new £40 million AI supercomputer that will be used for health research in the country. It is the first supercomputer Nvidia has built specifically for external research access, the company said.

Nvidia said it is already working with GSK, AstraZeneca, London hospitals Guy’s and St Thomas’ NHS Foundation Trust, King’s College London and Oxford Nanopore to use the Cambridge-1. The supercomputer is due to come online by the end of the year and will be the company’s second supercomputer in the country. The first is already in development at the company’s AI Center of Excellence in Cambridge, and the plan is to add more supercomputers over time.

The growing role of AI has underscored an interesting crossroads in medical research. On one hand, leading researchers all acknowledge the role it will be playing in their work. On the other, none of them (nor their institutions) have the resources to meet that demand on their own. That’s driving them all to get involved much more deeply with big tech companies like Google, Microsoft and, in this case, Nvidia, to carry out work.

Alongside the supercomputer news, Nvidia is making a second announcement in the area of healthcare in the U.K.: it has inked a partnership with GSK, which has established an AI hub in London, to build AI-based computational processes that will be used in vaccine and drug discovery. It’s an especially timely piece of news, given that we are in a global pandemic and drug makers and researchers everywhere are on the hunt to understand more about, and build vaccines for, COVID-19.

The news coincides with Nvidia’s industry event, the GPU Technology Conference.

“Tackling the world’s most pressing challenges in healthcare requires massively powerful computing resources to harness the capabilities of AI,” said Jensen Huang, founder and CEO of Nvidia, in his keynote at the event. “The Cambridge-1 supercomputer will serve as a hub of innovation for the U.K., and further the groundbreaking work being done by the nation’s researchers in critical healthcare and drug discovery.”

The company plans to dedicate Cambridge-1 resources to four areas, it said: industry research, in particular joint research on projects that exceed the resources of any single institution; compute time granted to universities; health-focused AI startups; and education for future AI practitioners. It’s already building specific applications, like the drug discovery work it’s doing with GSK, that will run on the machine.

The Cambridge-1 will be built on Nvidia’s DGX SuperPOD system and will deliver 400 petaflops of AI performance and 8 petaflops of Linpack performance. Nvidia said this will rank it as the 29th-fastest supercomputer in the world.

“Number 29” doesn’t sound very groundbreaking, but there are other reasons why the announcement is significant.

For starters, it underscores how the supercomputing market, while still not a mass-market enterprise, is increasingly developing focus around specific areas of research and industries. In this case, it shows how health research has become more complex, and how applications of artificial intelligence have spurred that complexity but, paired with stronger computing power, also provide a better route (some might say one of the only viable routes in the most complex cases) to medical breakthroughs and discoveries.

It’s also notable that the effort is being forged in the U.K. Nvidia’s deal to buy Arm has seen some resistance in the market — with one group leading a campaign to stop the sale and take Arm independent — but this latest announcement underscores that the company is already involved pretty deeply in the U.K. market, bolstering Nvidia’s case to double down even further. (Yes, chip reference designs and building supercomputers are different enterprises, but the argument for Nvidia is one of commitment and presence.)

“AI and machine learning are like a new microscope that will help scientists to see things that they couldn’t see otherwise,” said Dr. Hal Barron, chief scientific officer and president, R&D, GSK, in a statement. “NVIDIA’s investment in computing, combined with the power of deep learning, will enable solutions to some of the life sciences industry’s greatest challenges and help us continue to deliver transformational medicines and vaccines to patients. Together with GSK’s new AI lab in London, I am delighted that these advanced technologies will now be available to help the U.K.’s outstanding scientists.”

“The use of big data, supercomputing and artificial intelligence have the potential to transform research and development; from target identification through clinical research and all the way to the launch of new medicines,” added James Weatherall, PhD, head of Data Science and AI, AstraZeneca, in his statement.

“Recent advances in AI have seen increasingly powerful models being used for complex tasks such as image recognition and natural language understanding,” said Sebastien Ourselin, head, School of Biomedical Engineering & Imaging Sciences at King’s College London. “These models have achieved previously unimaginable performance by using an unprecedented scale of computational power, amassing millions of GPU hours per model. Through this partnership, for the first time, such a scale of computational power will be available to healthcare research – it will be truly transformational for patient health and treatment pathways.”

Dr. Ian Abbs, chief executive officer and chief medical director of Guy’s and St Thomas’ NHS Foundation Trust, said: “If AI is to be deployed at scale for patient care, then accuracy, robustness and safety are of paramount importance. We need to ensure AI researchers have access to the largest and most comprehensive datasets that the NHS has to offer, our clinical expertise, and the required computational infrastructure to make sense of the data. This approach is not only necessary, but also the only ethical way to deliver AI in healthcare – more advanced AI means better care for our patients.”

“Compact AI has enabled real-time sequencing in the palm of your hand, and AI supercomputers are enabling new scientific discoveries in large-scale genomic data sets,” added Gordon Sanghera, CEO, Oxford Nanopore Technologies. “These complementary innovations in data analysis support a wealth of impactful science in the U.K., and critically, support our goal of bringing genomic analysis to anyone, anywhere.”


Jul
07
2020
--

Nvidia’s Ampere GPUs come to Google Cloud

Nvidia today announced that its new Ampere-based data center GPUs, the A100 Tensor Core GPUs, are now available in alpha on Google Cloud. As the name implies, these GPUs were designed for AI workloads, as well as data analytics and high-performance computing solutions.

The A100 promises a significant performance improvement over previous generations. Nvidia says the A100 can boost training and inference performance by over 20x compared to its predecessors (though most benchmarks show improvements closer to 6x or 7x), and it tops out at about 19.5 TFLOPs of single-precision performance and 156 TFLOPs for Tensor Float 32 workloads.

Image Credits: Nvidia

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, Director of Product Management at Google Cloud, in today’s announcement. “With our new A2 VM family, we are proud to be the first major cloud provider to market Nvidia A100 GPUs, just as we were with Nvidia’s T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

Google Cloud users can get access to instances with up to 16 of these A100 GPUs, for a total of 640GB of GPU memory and 1.3TB of system memory.
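Provisioning one of these follows the usual Compute Engine flow. Below is a hedged sketch using the google-cloud-compute Python client; the project, zone and boot image are placeholder values, and note that A2 machine types come with the A100s pre-attached:

```python
# Hypothetical sketch: provisioning an a2-highgpu-1g VM (one A100 GPU).
# Project, zone, and boot image are placeholder values.
from google.cloud import compute_v1

project, zone = "my-project", "us-central1-a"  # placeholders

instance = compute_v1.Instance(
    name="a100-alpha-test",
    machine_type=f"zones/{zone}/machineTypes/a2-highgpu-1g",
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-10",
            ),
        )
    ],
    network_interfaces=[
        compute_v1.NetworkInterface(network="global/networks/default")
    ],
    # GPU VMs cannot live-migrate, so they must terminate on maintenance.
    scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
)

compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
)
```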

May
28
2020
--

Mirantis releases its first major update to Docker Enterprise

In a surprise move, Mirantis acquired Docker’s Enterprise platform business at the end of last year, and while Docker itself is refocusing on developers, Mirantis kept the Docker Enterprise name and product. Today, Mirantis is rolling out its first major update to Docker Enterprise with the release of version 3.1.

For the most part, these updates are in line with what’s been happening in the container ecosystem in recent months. There’s support for Kubernetes 1.17 and improved support for Kubernetes on Windows (something the Kubernetes community has worked on quite a bit in the last year or so). Also new is Nvidia GPU integration in Docker Enterprise through a pre-installed device plugin, as well as support for Istio Ingress for Kubernetes and a new command-line tool for deploying clusters with the Docker Engine.
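To make the GPU integration concrete: once a device plugin exposes GPUs to Kubernetes, containers request them as the extended resource nvidia.com/gpu. Here is an illustrative sketch using the official Kubernetes Python client (the pod name and image are our own examples, not Mirantis defaults):

```python
# Illustrative sketch: scheduling a GPU workload on a cluster whose nodes
# expose GPUs through the Nvidia device plugin.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig for the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cuda-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:10.2-base",  # placeholder CUDA image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # one GPU via the plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```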

In addition to the product updates, Mirantis is also launching three new support options that give customers 24×7 coverage for all support cases, as well as enhanced SLAs for remote managed operations, designated customer success managers, and proactive monitoring and alerting. With this, Mirantis is clearly building on its experience as a managed service provider.

What’s maybe more interesting, though, is how this acquisition is playing out at Mirantis itself. Mirantis, after all, went through its fair share of ups and downs in recent years, from being a high-flying OpenStack platform to layoffs and everything in between.

“Why we do this in the first place and why at some point I absolutely felt that I wanted to do this is because I felt that this would be a more compelling and interesting company to build, despite maybe some of the short-term challenges along the way, and that very much turned out to be true. It’s been fantastic,” Mirantis CEO and co-founder Adrian Ionel told me. “What we’ve seen since the acquisition, first of all, is that the customer base has been dramatically more loyal than people had thought, including ourselves.”

Ionel admitted that he thought some users would defect because this is obviously a major change, at least from the customer’s point of view. “Of course we have done everything possible to have something for them that’s really compelling and we put out the new roadmap right away in December after the acquisition — and people bought into it at very large scale,” he said. With that, Mirantis retained more than 90% of the customer base and the vast majority of all of Docker Enterprise’s largest users.

Ionel, who almost seemed a bit surprised by this, noted that this helped the company turn in two “fantastic” quarters and become profitable in the last quarter, despite COVID-19.

“We wanted to go into this acquisition with a sober assessment of risks because we wanted to make it work, we wanted to make it successful because we were well aware that a lot of acquisitions fail,” he explained. “We didn’t want to go into it with a hyper-optimistic approach in any way — and we didn’t — and maybe that’s one of the reasons why we are positively surprised.”

He argues that the reason for the current success is that enterprises are doubling down on their container journeys and that they actually love the Docker Enterprise platform for its infrastructure independence, developer focus, security features and ease of use. One thing many large customers asked for was better support for multi-cluster management at scale, which today’s update delivers.

“Where we stand today, we have one product development team. We have one product roadmap. We are shipping a very big new release of Docker Enterprise. […] The field has been completely unified and operates as one salesforce, with record results. So things have been extremely busy, but good and exciting.”

May
04
2020
--

Nvidia acquires Cumulus Networks

Nvidia today announced its plans to acquire Cumulus Networks, an open-source-centric company that specializes in helping enterprises optimize their data center networking stack. Cumulus offers both its own Linux distribution for network switches, as well as tools for managing network operations. With Cumulus Express, the company also offers a hardware solution in the form of its own data center switch.

The two companies did not announce the price of the acquisition, but chances are we are talking about a considerable amount, given that Cumulus had raised $134 million since it was founded in 2010.

Mountain View-based Cumulus already had a previous partnership with Mellanox, which Nvidia acquired for $6.9 billion. That acquisition closed only a few days ago. As Mellanox’s Amit Katz notes in today’s announcement, the two companies first met in 2013, and they formed a first official partnership in 2016. Cumulus, it’s worth noting, was also an early player in the OpenStack ecosystem.

Having both Cumulus and Mellanox in its stable will give Nvidia virtually all the tools it needs to help enterprises and cloud providers build out their high-performance computing and AI workloads in their data centers. While you may mostly think about Nvidia because of its graphics cards, the company has a sizable data center group, which delivered close to $1 billion in revenue in the last quarter, up 43% from a year ago. In comparison, Nvidia’s revenue from gaming was just under $1.5 billion.

“With Cumulus, NVIDIA can innovate and optimize across the entire networking stack from chips and systems to software including analytics like Cumulus NetQ, delivering great performance and value to customers,” writes Katz. “This open networking platform is extensible and allows enterprise and cloud-scale data centers full control over their operations.”

Mar
05
2020
--

Nvidia acquires data storage and management platform SwiftStack

Nvidia today announced that it has acquired SwiftStack, a software-centric data storage and management platform that supports public cloud, on-premises and edge deployments.

SwiftStack’s recent launches focused on improving its support for AI, high-performance computing and accelerated computing workloads, which is surely what Nvidia is most interested in here.

“Building AI supercomputers is exciting to the entire SwiftStack team,” says the company’s co-founder and CPO Joe Arnold in today’s announcement. “We couldn’t be more thrilled to work with the talented folks at NVIDIA and look forward to contributing to its world-leading accelerated computing solutions.”

The two companies did not disclose the price of the acquisition, but SwiftStack had previously raised about $23.6 million in Series A and B rounds led by Mayfield Fund and OpenView Venture Partners. Other investors include Storm Ventures and UMC Capital.

SwiftStack, which was founded in 2011, placed an early bet on OpenStack, the massive open-source project that aimed to give enterprises an AWS-like management experience in their own data centers. The company was one of the largest contributors to OpenStack’s Swift object storage platform and offered a number of services around this, though it seems like in recent years, it has downplayed the OpenStack relationship as that platform’s popularity has fizzled in many verticals.

SwiftStack lists the likes of PayPal, Rogers, data center provider DC Blox, Snapfish and Verizon (TechCrunch’s parent company) on its customer page. Nvidia, too, is a customer.

SwiftStack notes that its team will continue to maintain the existing set of open-source tools, like Swift, ProxyFS, 1space and Controller.

“SwiftStack’s technology is already a key part of NVIDIA’s GPU-powered AI infrastructure, and this acquisition will strengthen what we do for you,” says Arnold.

Jan
29
2020
--

Greylock’s Reid Hoffman and Sarah Guo to talk fundraising at Early Stage SF 2020

Early Stage SF is around the corner, on April 28 in San Francisco, and we are more than excited for this brand new event. The intimate gathering of founders, VCs, operators and tech industry experts is all about giving founders the tools they need to find success, no matter the challenge ahead of them.

Struggling to understand the legal aspects of running a company, like negotiating cap tables or hiring international talent? We’ve got breakout sessions for that. Wondering how to go about fundraising, from getting your first yes to identifying the right investors to planning the timeline for your fundraise sprint? We’ve got breakout sessions for that. Growth marketing? PR/Media? Building a tech stack? Recruiting?

We. Got. You.

Hoffman + Guo

Today, we’re very proud to announce one of our few Main Stage sessions that will be open to all attendees. Reid Hoffman and Sarah Guo will join us for a conversation around “How To Raise Your Series A.”

Reid Hoffman is a legendary entrepreneur and investor in Silicon Valley. He was an Executive VP and founding board member at PayPal before going on to co-found LinkedIn in 2003. He led the company to profitability as CEO before joining Greylock in 2009. He serves on the boards of Airbnb, Apollo Fusion, Aurora, Coda, Convoy, Entrepreneur First, Microsoft, Nauto and Xapo, among others. He’s also an accomplished author, with books like “Blitzscaling,” “The Startup of You” and “The Alliance.”

Sarah Guo has a wealth of experience in the tech world. She started her career in high school at a tech firm founded by her parents, called Casa Systems. She then joined Goldman Sachs, where she invested in growth-stage tech startups such as Zynga and Dropbox, and advised both pre-IPO companies (Workday) and publicly traded firms (Zynga, Netflix and Nvidia). She joined Greylock Partners in 2013 and led the firm’s investments in Cleo, Demisto, Sqreen and Utmost. She has a particular focus on B2B applications, as well as infrastructure, cybersecurity, collaboration tools, AI and healthcare.

The format for Hoffman and Guo’s Main Stage chat will be familiar to folks who have followed the investors. It will be an updated, in-person combination of Hoffman’s famously annotated LinkedIn Series B pitch deck that led to Greylock’s investment, and Sarah Guo’s in-depth breakdown of what she looks for in a pitch.

They’ll lay out a number of universally applicable lessons that folks seeking Series A funding can learn from, tackling each from their own unique perspectives. Hoffman has years of experience in consumer-focused companies, with a special expertise in network effects. Guo is one of the top minds when it comes to investment in enterprise software.

We’re absolutely thrilled about this conversation, and to be honest, the entire Early Stage agenda.

How it works

Here’s how it all works:

There will be more than 50 breakout sessions at the event, and attendees will have an opportunity to attend at least seven. The sessions will cover all the core topics confronting early-stage founders (up through Series A) as they build a company, from raising capital to building a team to growth. Each breakout session will be led by notables in the startup world.

Don’t worry about missing a breakout session, because transcripts from each will be available to show attendees. And most of the folks leading the breakout sessions have agreed to hang at the show for at least half the day and participate in CrunchMatch, TechCrunch’s app to connect founders and investors based on shared interests.

Here’s the fine print. Each of the 50+ breakout sessions is limited to around 100 attendees. We expect a lot more attendees, of course, so signups for each session are on a first-come, first-served basis. Buy your ticket today and you can sign up for the breakouts that we’ve announced. Pass holders will also receive 24-hour advance notice before we announce the next batch. (And yes, you can “drop” a breakout session in favor of a new one, in the event there is a schedule conflict.)

Grab yourself a ticket and start registering for sessions right here. Interested sponsors can hit up the team here.


Aug
26
2019
--

Nvidia and VMware team up to make GPU virtualization easier

Nvidia today announced that it has been working with VMware to bring its virtual GPU technology (vGPU) to VMware’s vSphere and VMware Cloud on AWS. The company’s core vGPU technology isn’t new, but it now supports server virtualization to enable enterprises to run their hardware-accelerated AI and data science workloads in environments like VMware’s vSphere, using its new vComputeServer technology.

Traditionally (as far as that’s a thing in AI training), GPU-accelerated workloads have tended to run on bare-metal servers, which are typically managed separately from the rest of a company’s servers.

“With vComputeServer, IT admins can better streamline management of GPU accelerated virtualized servers while retaining existing workflows and lowering overall operational costs,” Nvidia explains in today’s announcement. This also means that businesses will reap the cost benefits of GPU sharing and aggregation, thanks to the improved utilization this technology promises.

Note that vComputeServer works with VMware vSphere, vCenter and vMotion, as well as VMware Cloud. Indeed, the two companies are using the same vComputeServer technology to also bring accelerated GPU services to VMware Cloud on AWS. This allows enterprises to move their containerized applications from their own data centers to the cloud as needed, and then hook into AWS’s other cloud-based technologies.


“From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line,” said Nvidia founder and CEO Jensen Huang. “Together with VMware, we’re designing the most advanced and highest performing GPU-accelerated hybrid cloud infrastructure to foster innovation across the enterprise.”

May
16
2019
--

Unveiling its latest cohort, Alchemist announces $4 million in funding for its enterprise accelerator

The enterprise software and services-focused accelerator Alchemist has raised $4 million in fresh financing from investors BASF and the Qatar Development Bank, just in time for its latest demo day unveiling 20 new companies.

Qatar and BASF join previous investors, including the venture firms Mayfield, Khosla Ventures, Foundation Capital, DFJ and USVP, and corporate investors like Cisco, Siemens and Juniper Networks.

While the roster of successes from Alchemist’s fund isn’t as lengthy as Y Combinator’s, the accelerator program has launched the likes of the quantum computing upstart Rigetti, the soft-launch developer tool LaunchDarkly and drone startup Matternet.

Some (personal) highlights of the latest cohort include:

  • Bayware: Helmed by a former head of software-defined networking from Cisco, the company is pitching a tool that makes creating networks in multi-cloud environments as easy as copying and pasting.
  • MotorCortex.AI: Co-founded by a Stanford engineering professor and a Carnegie Mellon roboticist, the company is using computer vision, machine learning and robotics to create a fruit packer for packaging lines. Starting with avocados, the company is aiming to tackle the entire packaging side of pick and pack in logistics.
  • Resilio: With claims of a 96% effectiveness rate, $35,000 in annual recurring revenue and another $1 million in the pipeline, Resilio is already seeing companies embrace its mobile app, which uses a phone’s camera to track stress levels and delivers in-app prompts on how to lower them, according to Alchemist.
  • Operant Networks: It’s a long-held belief (of mine) that if computing networks are already irrevocably compromised, the best thing that companies and individuals can do is just encrypt the hell out of their data. Apparently Operant agrees with me. The company is claiming 50% time savings with this approach, and has booked $1.9 million in 2019 as proof, according to Alchemist.
  • HPC Hub: HPC Hub wants to democratize access to supercomputers by overlaying a virtualization layer and pre-installed software on underutilized supercomputers to give more companies and researchers easier access to machines… and they’ve booked $92,000 worth of annual recurring revenue.
  • DinoPlusAI: This chip developer is designing a low latency chip for artificial intelligence applications, reducing latency by 12 times over a competing Nvidia chip, according to the company. DinoPlusAI sees applications for its tech in things like real-time AI markets and autonomous driving. Its team is led by a designer from Cadence and Broadcom and the company already has $8 million in letters of intent signed, according to Alchemist.
  • Aero Systems West: Co-founders from the Air Force’s Research Labs and MIT are aiming to take humans out of drone operations and maintenance. The company contends that for every hour of flight time, drones require seven hours of maintenance and checkups. Aero Systems aims to reduce that by using remote analytics, self-inspection, autonomous deployment and automated maintenance to take humans out of the drone business.

Watch a live stream of Alchemist’s demo day pitches, starting at 3PM, here.


Jan
17
2019
--

Former Facebook engineer picks up $15M for AI platform Spell

In 2016, Serkan Piantino packed up his desk at Facebook with hopes of moving on to something new. The former director of Engineering for Facebook AI Research had every intention of continuing to work on AI, but quickly realized a huge issue.

Unless you’re under the umbrella of a big tech company like Facebook, it can be very difficult and incredibly expensive to get your hands on the hardware necessary to run machine learning experiments.

So he built Spell, which today received $15 million in Series A funding led by Eclipse Ventures and Two Sigma Ventures.

Spell is a collaborative platform that lets anyone run machine learning experiments. The company connects clients with the best, newest hardware hosted by Google, AWS and Microsoft Azure and gives them the software interface they need to run, collaborate and build with AI.

“We spent decades getting to a laptop powerful enough to develop a mobile app or a website, but we’re struggling with things we develop in AI that we haven’t struggled with since the ’70s,” said Piantino. “Before PCs existed, the computers filled the whole room at a university or NASA and people used terminals to log into a single mainframe. It’s why Unix was invented, and that’s kind of what AI needs right now.”

In a meeting with Piantino this week, TechCrunch got a peek at the product. First, Piantino pulled out his MacBook and opened up Terminal. He began to run his own code against MNIST, a database of handwritten digits commonly used to train and benchmark image recognition algorithms.

He started the program and then moved over to the Spell platform. While the original program was just getting started, Spell’s cloud computing platform had completed the test in less than a minute.
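For a sense of the workload being compared, a minimal MNIST classifier in Keras looks something like the following. This is an illustrative stand-in, not the code Piantino actually ran:

```python
# Illustrative MNIST training script of the kind one might run locally
# or submit to a remote GPU platform. Not Piantino's actual code.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```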

The advantage here is obvious. Engineers who want to work on AI, either on their own or for a company, have a huge task in front of them. They essentially have to build their own computer, complete with the high-powered GPUs necessary to run their tests.

With Spell, the newest GPUs from Nvidia and Google are available virtually to anyone who wants to run tests.

Individual users can get on for free, specify the type of GPU they need to compute their experiment and simply let it run. Corporate users, on the other hand, are able to view the runs taking place on Spell and compare experiments, allowing users to collaborate on their projects from within the platform.

Enterprise clients can set up their own cluster, and keep all of their programs private on the Spell platform, rather than running tests on the public cluster.

Spell also offers enterprise customers a “spell hyper” command that offers built-in support for hyperparameter optimization. Folks can track their models and results and deploy them to Kubernetes/Kubeflow in a single click.
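The exact “spell hyper” interface isn’t detailed here, but the pattern it automates, sweeping hyperparameter combinations and keeping the best result, can be sketched in plain Python (the train function below is a placeholder, not Spell’s API):

```python
# Illustrative hyperparameter grid sweep; "spell hyper" automates this
# pattern (plus tracking and deployment), but this is not Spell's API.
from itertools import product

learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [32, 128]

def train(lr, batch_size):
    """Placeholder training function returning a validation score."""
    return 1.0 - lr * batch_size / 100  # stand-in for a real metric

results = {
    (lr, bs): train(lr, bs)
    for lr, bs in product(learning_rates, batch_sizes)
}
best = max(results, key=results.get)
print("best config:", best, "score:", results[best])
```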

But perhaps most importantly, Spell allows an organization to instantly transform their model into an API that can be used more broadly throughout the organization, or used directly within an app or website.
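In its generic form, that model-as-API pattern looks like the sketch below: a trained model wrapped in a small HTTP service. This illustrates the idea, not Spell’s actual mechanism:

```python
# Illustrative model-as-API pattern: wrapping a trained model in a tiny
# HTTP service. The general idea only, not Spell's actual mechanism.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    """Placeholder for a real model's inference call."""
    return sum(features)  # stand-in prediction

@app.route("/predict", methods=["POST"])
def serve():
    features = request.get_json()["features"]
    return jsonify({"prediction": predict(features)})

if __name__ == "__main__":
    app.run(port=8080)
```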

The implications here are huge. Small companies and startups looking to get into AI now have a much lower barrier to entry, whereas large traditional companies can build out their own proprietary machine learning algorithms for use within the organization without an outrageous upfront investment.

Individual users can get on the platform for free, whereas enterprise clients can get started at $99/month per host used over the course of a month. Piantino explains that Spell charges based on concurrent usage, so if a customer has 10 concurrent runs going, the company considers that the “size” of the Spell cluster and charges based on that.
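A rough sketch of that billing model, using only the numbers above (the rate and the concurrency rule come from the article; everything else is illustrative):

```python
# Illustrative sketch of the concurrency-based billing described above.
# $99/month per host; peak concurrent runs define the cluster "size".
PRICE_PER_HOST_PER_MONTH = 99  # USD, per the article

def monthly_bill(peak_concurrent_runs: int) -> int:
    """The bill keys off peak concurrency, not total runs submitted."""
    return PRICE_PER_HOST_PER_MONTH * peak_concurrent_runs

print(monthly_bill(10))  # 10 concurrent runs -> $990 for the month
```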

Piantino sees Spell’s model as the key to defensibility. Whereas many cloud platforms try to lock customers in to their entire suite of products, Spell works with any language framework and lets users plug and play on the platforms of their choice by simply commodifying the hardware. In fact, Spell doesn’t even share with clients which cloud cluster (Microsoft Azure, Google or AWS) they’re on.

So, on the one hand the speed of the tests themselves goes up based on access to new hardware, but, because Spell is an agnostic platform, there is also a huge advantage in how quickly one can get set up and start working.

The company plans to use the funding to further grow the team and the product, and Piantino says he has his eye out for top-tier engineering talent, as well as a designer.
