Feb 17, 2021

Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

And a lot of the development involves re-inventing the wheel to make their applications reliably talk to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools that developers need to build event-driven microservices. Among other things, Dapr provides various building blocks for things like service-to-service communications, state management, pub/sub and secrets management.
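
Each of those building blocks is exposed to the application as a local HTTP (or gRPC) endpoint on the Dapr sidecar, so "using Dapr" mostly means calling localhost. Here is a minimal Python sketch against the documented v1.0 HTTP API; component names like `statestore` and `pubsub` are placeholders that would be defined in your Dapr component configuration:

```python
import json
import os
import urllib.request

# Dapr injects DAPR_HTTP_PORT into the app's environment; 3500 is the default.
DAPR_PORT = os.environ.get("DAPR_HTTP_PORT", "3500")

def dapr_url(path: str) -> str:
    """Build a URL against the local Dapr sidecar's v1.0 HTTP API."""
    return f"http://localhost:{DAPR_PORT}/v1.0/{path}"

def _post(url: str, payload) -> None:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def save_state(store: str, key: str, value) -> None:
    # State management building block: POST /v1.0/state/<store>
    _post(dapr_url(f"state/{store}"), [{"key": key, "value": value}])

def publish(pubsub: str, topic: str, event: dict) -> None:
    # Pub/sub building block: POST /v1.0/publish/<pubsub>/<topic>
    _post(dapr_url(f"publish/{pubsub}/{topic}"), event)

def invoke(app_id: str, method: str, payload: dict) -> None:
    # Service-to-service invocation: POST /v1.0/invoke/<app-id>/method/<method>
    _post(dapr_url(f"invoke/{app_id}/method/{method}"), payload)
```

Because the sidecar owns the actual backing store and message broker, swapping Redis for Cosmos DB, or Kafka for a cloud pub/sub service, is a component configuration change rather than a code change.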

Image Credits: Dapr

“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft) and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.

 

Feb 4, 2021

Google Cloud launches Apigee X, the next generation of its API management platform

Google today announced the launch of Apigee X, the next major release of the Apigee API management platform it acquired back in 2016.

“If you look at what’s happening — especially after the pandemic started in March last year — the volume of digital activities has gone up in every kind of industry, all kinds of use cases are coming up. And one of the things we see is the need for a really high-performance, reliable, global digital transformation platform,” Amit Zavery, Google Cloud’s head of platform, told me.

He noted that the number of API calls has gone up 47 percent from last year and that the platform now handles about 2.2 trillion API calls per year.

At the core of the updates are deeper integrations with Google Cloud’s AI, security and networking tools. In practice, this means Apigee users can now deploy their APIs across 24 Google Cloud regions, for example, and use Google’s caching services in more than 100 edge locations.

Image Credits: Google

In addition, Apigee X now integrates with Google’s Cloud Armor firewall and its Cloud Identity Access Management platform. This also means that Apigee users won’t have to use third-party tools for their firewall and identity management needs.

“We do a lot of AI/ML-based anomaly detection and operations management,” Zavery explained. “We can predict any kind of malicious intent or any other things which might happen to those API calls or your traffic by embedding a lot of those insights into our API platform. I think [that] is a big improvement, as well as new features, especially in operations management, security management, vulnerability management and making those a core capability so that as a business, you don’t have to worry about all these things. It comes with the core capabilities and that is really where the front doors of digital front-ends can shine and customers can focus on that.”

The platform now also makes better use of Google’s AI capabilities to help users identify anomalies or predict traffic for peak seasons. The idea is to help customers automate many of these standard operational tasks and, of course, improve security at the same time.

As Zavery stressed, API management is now about more than just managing traffic between applications. Beyond helping customers manage their digital transformation projects, the Apigee team is now thinking about what it calls ‘digital excellence.’ “That’s how we’re thinking of the journey for customers moving from not just ‘hey, I can have a front end,’ but what about all the excellent things you want to do and how we can do that,” Zavery said.

“During these uncertain times, organizations worldwide are doubling-down on their API strategies to operate anywhere, automate processes, and deliver new digital experiences quickly and securely,” said James Fairweather, Chief Innovation Officer at Pitney Bowes. “By powering APIs with new capabilities like reCAPTCHA Enterprise, Cloud Armor (WAF), and Cloud CDN, Apigee X makes it easy for enterprises like us to scale digital initiatives, and deliver innovative experiences to our customers, employees and partners.”

Jan 21, 2021

Cloud infrastructure startup CloudNatix gets $4.5 million seed round led by DNX Ventures

CloudNatix founder and chief executive officer Rohit Seth. Image Credits: CloudNatix

CloudNatix, a startup that provides infrastructure for businesses with multiple cloud and on-premise operations, announced it has raised $4.5 million in seed funding. The round was led by DNX Ventures, an investment firm that focuses on United States and Japanese B2B startups, with participation from Cota Capital. Existing investors Incubate Fund, Vela Partners and 468 Capital also contributed.

The company also added DNX Ventures managing partner Hiro Rio Maeda to its board of directors.

CloudNatix was founded in 2018 by chief executive officer Rohit Seth, who previously held lead engineering roles at Google. The company’s platform helps businesses reduce IT costs by analyzing their infrastructure spending and then using automation to make IT operations across multiple clouds more efficient. The company’s typical customer spends between $500,000 and $50 million on infrastructure each year, and uses at least one cloud service provider in addition to on-premise networks.

Built on open-source software like Kubernetes and Prometheus, CloudNatix works with all major cloud providers and on-premise networks. For DevOps teams, it helps configure and manage infrastructure that runs both legacy and modern cloud-native applications, and enables them to transition more easily from on-premise networks to cloud services.

CloudNatix competes most directly with VMware and Red Hat OpenShift. But both of those services are limited to their base platforms, while CloudNatix’s advantage is that it is agnostic to base platforms and cloud service providers, Seth told TechCrunch.

The company’s seed round will be used to scale its engineering, customer support and sales teams.

 

Jan 13, 2021

Stacklet raises $18M for its cloud governance platform

Stacklet, a startup that is commercializing the Cloud Custodian open-source cloud governance project, today announced that it has raised an $18 million Series A funding round. The round was led by Addition, with participation from Foundation Capital and new individual investor Liam Randall, who is joining the company as VP of business development. Addition and Foundation Capital also invested in Stacklet’s seed round, which the company announced last August. This new round brings the company’s total funding to $22 million.

Stacklet helps enterprises manage their cloud governance posture across different clouds, accounts, policies and regions, with a focus on security, cost optimization and regulatory compliance. The service offers its users a set of pre-defined policy packs that encode best practices for access to cloud resources, though users can also define their own rules. In addition, Stacklet offers a number of analytics functions around policy health and resource auditing, as well as a real-time inventory and change-management logs for a company’s cloud assets.
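
For a sense of what such a policy looks like, here is a Cloud Custodian-style rule written as the Python-dict equivalent of the YAML users normally author. The resource, filter and action names follow Custodian’s documented schema, but treat the specifics as an illustrative sketch rather than one of Stacklet’s actual policy packs:

```python
# Hypothetical Cloud Custodian policy: find unencrypted EBS volumes and tag
# them for follow-up. Custodian would normally read this from a YAML file.
policy = {
    "policies": [
        {
            "name": "ebs-unencrypted",  # policy name chosen for this example
            "resource": "aws.ebs",      # target: EBS volumes
            "filters": [
                # Keep only volumes whose Encrypted attribute is False
                {"type": "value", "key": "Encrypted", "value": False},
            ],
            "actions": [
                # "mark" tags matching resources; other actions (notify,
                # delete, ...) exist in Custodian's schema
                {"type": "mark"},
            ],
        }
    ]
}
```

A governance platform like Stacklet then runs bundles of rules like this one continuously across every account and region, and surfaces the results.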

The company was co-founded by Travis Stanfield (CEO) and Kapil Thangavelu (CTO). Both bring a lot of industry expertise to the table. Stanfield spent time as an engineer at Microsoft and leading DealerTrack Technologies, while Thangavelu worked at Canonical and most recently in Amazon’s AWSOpen team. Thangavelu is also one of the co-creators of the Cloud Custodian project, which was first incubated at Capital One, where the two co-founders met, and is now a sandbox project under the Cloud Native Computing Foundation’s umbrella.

“When I joined Capital One, they had made the executive decision to go all-in on cloud and close their data centers,” Thangavelu told me. “I got to join on the ground floor of that movement and Custodian was born as a side project, looking at some of the governance and security needs that large regulated enterprises have as they move into the cloud.”

As companies have sped up their move to the cloud during the pandemic, the need for products like Stacklet has also increased. The company isn’t naming most of its customers, but it has disclosed that FICO is a design partner. Stacklet isn’t purely focused on the enterprise, though. “Once the cloud infrastructure becomes — for a particular organization — large enough that it’s not knowable in a single person’s head, we can deliver value for you at that time and certainly, whether it’s through the open source or through Stacklet, we will have a story there.” The Cloud Custodian open-source project is already seeing serious use among large enterprises, though, and Stacklet obviously benefits from that as well.

“In just 8 months, Travis and Kapil have gone from an idea to a functioning team with 15 employees, signed early Fortune 2000 design partners and are well on their way to building the Stacklet commercial platform,” Foundation Capital’s Sid Trivedi said. “They’ve done all this while sheltered in place at home during a once-in-a-lifetime global pandemic. This is the type of velocity that investors look for from an early-stage company.”

Looking ahead, the team plans to use the new funding to continue developing the product, which should be generally available later this year, expand both its engineering and its go-to-market teams, and continue to grow the open-source community around Cloud Custodian.

Dec 8, 2020

AWS expands on SageMaker capabilities with end-to-end features for machine learning

Nearly three years after it was first launched, Amazon Web Services’ SageMaker platform has gotten a significant upgrade in the form of new features, making it easier for developers to automate and scale each step of the process to build new automation and machine learning capabilities, the company said.

As machine learning moves into the mainstream, business units across organizations will find applications for automation, and AWS is trying to make the development of those bespoke applications easier for its customers.

“One of the best parts of having such a widely adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables,” said AWS vice president of machine learning, Swami Sivasubramanian. “Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug and run custom machine learning models with greater visibility, explainability and automation at scale.”

Already companies like 3M, ADP, AstraZeneca, Avis, Bayer, Capital One, Cerner, Domino’s Pizza, Fidelity Investments, Lenovo, Lyft, T-Mobile and Thomson Reuters are using SageMaker tools in their own operations, according to AWS.

The company’s new products include Amazon SageMaker Data Wrangler, which the company says provides a way to normalize data from disparate sources so the data is consistently easy to use. Data Wrangler can also ease the process of grouping disparate data sources into features to highlight certain types of data. The Data Wrangler tool contains more than 300 built-in data transformers that can help customers normalize, transform and combine features without having to write any code.
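
Under the hood, most of those transformers are routine feature-engineering steps. As a toy illustration in plain Python (not Data Wrangler’s actual API), here is what two of the standard transforms such a tool packages, normalization and feature combination, amount to:

```python
def standardize(values):
    """Z-score normalization: rescale a feature to mean 0, stddev 1."""
    mean = sum(values) / len(values)
    variance = sum((x - mean) ** 2 for x in values) / len(values)
    std = variance ** 0.5 or 1.0  # guard against a constant column
    return [(x - mean) / std for x in values]

def ratio_feature(numerators, denominators):
    """Combine two raw columns into a derived feature, e.g. spend-per-visit
    from total spend and visit counts (zero-denominator rows map to 0.0)."""
    return [n / d if d else 0.0 for n, d in zip(numerators, denominators)]
```

The value of a tool like Data Wrangler is not that any one of these steps is hard, but that it packages hundreds of them, applies them consistently, and records the resulting pipeline.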

Amazon also unveiled the Feature Store, which allows customers to create repositories that make it easier to store, update, retrieve and share machine learning features for training and inference.

Another new tool that Amazon Web Services touted was Pipelines, its workflow management and automation toolkit. The Pipelines tech is designed to provide orchestration and automation features not dissimilar from traditional programming. Using pipelines, developers can define each step of an end-to-end machine learning workflow, the company said in a statement. Developers can use the tools to re-run an end-to-end workflow from SageMaker Studio using the same settings to get the same model every time, or they can re-run the workflow with new data to update their models.

To address the longstanding issues with data bias in artificial intelligence and machine learning models, Amazon launched SageMaker Clarify. First announced today, this tool, according to Amazon, provides bias detection across the machine learning workflow, so developers can build with an eye toward better transparency on how models were set up. There are open-source tools that can do these tests, Amazon acknowledged, but they are manual and require a lot of heavy lifting from developers.

Other products designed to simplify the machine learning application development process include SageMaker Debugger, which enables developers to train models faster by monitoring system resource utilization and alerting developers to potential bottlenecks; Distributed Training, which makes it possible to train large, complex, deep learning models faster than current approaches by automatically splitting data across multiple GPUs to accelerate training times; and SageMaker Edge Manager, a machine learning model management tool for edge devices, which allows developers to optimize, secure, monitor and manage models deployed on fleets of edge devices.

Last but not least, Amazon unveiled SageMaker JumpStart, which provides developers with a searchable interface to find algorithms and sample notebooks so they can get started on their machine learning journey. The company said it would give developers new to machine learning the option to select several pre-built machine learning solutions and deploy them into SageMaker environments.

Dec 4, 2020

3 ways the pandemic is transforming tech spending

Ever since the pandemic hit the U.S. in full force last March, the B2B tech community keeps asking the same questions: Are businesses spending more on technology? What’s the money getting spent on? Is the sales cycle faster? What trends will likely carry into 2021?

Recently we decided to join forces to answer these questions. We analyzed data from the just-released Q4 2020 Outlook of the Coupa Business Spend Index (BSI), a leading indicator of economic growth, in light of hundreds of conversations we have had with business-tech buyers this year.

A former Battery Ventures portfolio company, Coupa* is a business spend-management company that has cumulatively processed more than $2 trillion in business spending. This perspective gives Coupa unique, real-time insights into tech spending trends across multiple industries.

Broadly speaking, tech spending is continuing despite the economic recession — which helps explain why many tech startups are raising large financing rounds and even tapping the public markets for capital. Here are our three specific takeaways on current tech spending:

Spending is shifting away from remote collaboration to SaaS and cloud computing

Tech spending ranks among the hottest boardroom topics today. Decisions that used to be confined to the CIO’s organization are now operationally and strategically critical to the CEO. Multiple reasons drive this shift, but the pandemic has forced businesses to operate and engage with customers differently, almost overnight. Boards recognize that companies must change their business models and operations if they don’t want to become obsolete. The question on everyone’s mind is no longer “what are our technology investments?” but rather, “how fast can they happen?”

Spending on WFH/remote collaboration tools has largely run its course in the first wave of adaptation forced by the pandemic. Now we’re seeing a second wave of tech spending, in which enterprises adopt technology to make operations easier and simply keep their doors open.

SaaS solutions are replacing unsustainable manual processes. Consider Rhode Island’s decision to shift from in-person citizen surveying to using SurveyMonkey. Many companies are shifting their vendor payments to digital payments, ditching paper checks entirely. Utility provider PG&E is accelerating its digital transformation roadmap from five years to two years.

The second wave of adaptation has also pushed many companies to embrace the cloud, as this chart makes clear:

Similarly, the difficulty of maintaining a traditional data center during a pandemic has pushed many companies to finally shift to cloud infrastructure. As they migrate those workloads to the cloud, the pie is still expanding: Goldman Sachs and Battery Ventures data suggest $600 billion worth of disruption potential will bleed into 2021 and beyond.

In addition to SaaS and cloud adoption, companies across sectors are spending on technologies to reduce their reliance on humans. For instance, Tyson Foods is investing in and accelerating the adoption of automated technology to process poultry, pork and beef.

All companies are digital product companies now

Mention “digital product company” in the past, and we’d all think of Netflix. But now every company has to reimagine itself as offering digital products in a meaningful way.

Dec 1, 2020

AWS updates its edge computing solutions with new hardware and Local Zones

AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outpost service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.

In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.

The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.

As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail to build their hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. None of the existing solutions from other vendors got any traction because of this, Jassy argues (though AWS’s competitors would surely deny it).

The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.

With Outpost, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.

Nov 18, 2020

Abacus.AI raises another $22M and launches new AI modules

AI startup RealityEngines.AI changed its name to Abacus.AI in July. At the same time, it announced a $13 million Series A round. Today, only a few months later, it is not changing its name again, but it is announcing a $22 million Series B round, led by Coatue, with Decibel Ventures and Index Partners participating as well. With this, the company, which was co-founded by former AWS and Google exec Bindu Reddy, has now raised a total of $40.3 million.

Abacus.AI co-founders Bindu Reddy, Arvind Sundararajan and Siddartha Naidu. Image Credits: Abacus.AI

In addition to the new funding, Abacus.AI is also launching a new product today, which it calls Abacus.AI Deconstructed. Originally, the idea behind RealityEngines/Abacus.AI was to provide its users with a platform that would simplify building AI models by using AI to automatically train and optimize them. That hasn’t changed, but as it turns out, a lot of (potential) customers had already invested in their own workflows for building and training deep learning models but were looking for help in putting them into production and managing them throughout their lifecycle.

“One of the big pain points [businesses] had was, ‘look, I have data scientists and I have my models that I’ve built in-house. My data scientists have built them on laptops, but I don’t know how to push them to production. I don’t know how to maintain and keep models in production.’ I think pretty much every startup now is thinking of that problem,” Reddy said.

Image Credits: Abacus.AI

Since Abacus.AI had already built those tools anyway, the company decided to now also break its service down into three parts that users can adapt without relying on the full platform. That means you can now bring your model to the service and have the company host and monitor the model for you, for example. The service will manage the model in production and, for example, monitor for model drift.
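
Abacus.AI hasn’t published how its drift monitoring works, but a common statistic for the job is the Population Stability Index (PSI), which compares the distribution a model was trained on against what it sees in production. A self-contained sketch of the general technique, not the company’s implementation:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)  # clamp outliers
            counts[i] += 1
        # Small smoothing term so empty bins don't blow up the log below
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring service would compute something like this per input feature (and on the model’s output scores) on a schedule, and alert or trigger retraining when the index crosses a threshold.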

Another area Abacus.AI has long focused on is model explainability and de-biasing, so it’s making that available as a module as well, as well as its real-time machine learning feature store that helps organizations create, store and share their machine learning features and deploy them into production.

As for the funding, Reddy tells me the company didn’t really have to raise a new round at this point. After the company announced its first round earlier this year, there was quite a lot of interest from others to also invest. “So we decided that we may as well raise the next round because we were seeing adoption, we felt we were ready product-wise. But we didn’t have a large enough sales team. And raising a little early made sense to build up the sales team,” she said.

Reddy also stressed that unlike some of the company’s competitors, Abacus.AI is trying to build a full-stack self-service solution that can essentially compete with the offerings of the big cloud vendors. That — and the engineering talent to build it — doesn’t come cheap.

Image Credits: Abacus.AI

It’s no surprise then that Abacus.AI plans to use the new funding to increase its R&D team, but it will also increase its go-to-market team from two to ten in the coming months. While the company is betting on a self-service model — and is seeing good traction with small- and medium-sized companies — you still need a sales team to work with large enterprises.

Come January, the company also plans to launch support for more languages and more machine vision use cases.

“We are proud to be leading the Series B investment in Abacus.AI, because we think that Abacus.AI’s unique cloud service now makes state-of-the-art AI easily accessible for organizations of all sizes, including start-ups,” Yanda Erlich, a partner at Coatue Ventures, told me. “Abacus.AI’s end-to-end autonomous AI service powered by their Neural Architecture Search invention helps organizations with no ML expertise easily deploy deep learning systems in production.”

 

Nov 5, 2020

Alibaba passes IBM in cloud infrastructure market with over $2B in revenue

When Alibaba entered the cloud infrastructure market in earnest in 2015 it had ambitious goals, and it has been growing steadily. Today, the Chinese e-commerce giant announced quarterly cloud revenue of $2.194 billion. With that number, it has passed IBM’s $1.65 billion revenue result (according to Synergy Research market share numbers), a significant milestone.

But while $2 billion is a large figure, it’s one worth keeping in perspective. For example, Amazon announced $11.6 billion in cloud infrastructure revenue for its most recent quarter, while Microsoft’s Azure came in second place with $5.9 billion.

Google Cloud has held onto third place, as it has for as long as we’ve been covering the cloud infrastructure market. In its most recent numbers, Synergy pegged Google at 9% market share, or approximately $2.9 billion in revenue.

While Alibaba is still a fair bit behind Google, today’s numbers put the company firmly in fourth place, well ahead of IBM. It’s doubtful it could catch Google anytime soon, especially as that company has become more focused under CEO Thomas Kurian, but it is still fairly remarkable that Alibaba managed to pass IBM, a stalwart of enterprise computing for decades, as a relative newcomer to the space.

The 60% growth represented a slight increase from the previous quarter’s 59%, but basically means it held steady, something that’s not easy to do as a company reaches a certain revenue plateau. In its earnings call today, Daniel Zhang, chairman and CEO at Alibaba Group, said that in China, which remains the company’s primary market, digital transformation driven by the pandemic was a primary factor in keeping growth steady.

“Cloud is a fast-growing business. If you look at our revenue breakdown, obviously, cloud is enjoying a very, very fast growth. And what we see is that all the industries are in the process of digital transformation. And moving to the cloud is a very important step for the industries,” Zhang said in the call.

He believes that eventually most business will be done in the cloud and that the growth could continue over the medium term, as there are still many companies that haven’t made the switch yet but will do so over time.

John Dinsdale, an analyst at Synergy Research, says that while China remains its primary market, the company does have a presence outside the country too, and can afford to play the long game in terms of the current geopolitical situation with trade tensions between the U.S. and China.

“Alibaba has already made some strides outside of China and Hong Kong. While the scale is rather small compared with its Chinese operations, Alibaba has established a data center and cloud presence in a range of countries, including six more APAC countries, U.S., U.K. and UAE. Among these, it is the market leader in both Indonesia and Malaysia,” Dinsdale told TechCrunch.

In its most recent data released a couple of weeks ago, prior to today’s numbers, Synergy broke down the market this way: “Amazon 33%, Microsoft 18%, Google 9%, Alibaba 5%, IBM 5%, Salesforce 3%, Tencent 2%, Oracle 2%, NTT 1%, SAP 1% – to the nearest percentage point.”

Nov 2, 2020

AWS launches its next-gen GPU instances

AWS today announced the launch of its newest GPU-equipped instances. Dubbed P4d, these new instances are launching a decade after AWS launched its first set of Cluster GPU instances. This new generation is powered by Intel Cascade Lake processors and eight of Nvidia’s A100 Tensor Core GPUs. These instances, AWS promises, offer up to 2.5x the deep learning performance of the previous generation — and training a comparable model should be about 60% cheaper with these new instances.

Image Credits: AWS

For now, there is only one size available, the p4d.24xlarge instance, in AWS slang, and the eight A100 GPUs are connected over Nvidia’s NVLink communication interface and offer support for the company’s GPUDirect interface as well.

With 320 GB of high-bandwidth GPU memory and 400 Gbps networking, this is obviously a very powerful machine. Add to that the 96 CPU cores, 1.1 TB of system memory and 8 TB of SSD storage, and it’s maybe no surprise that the on-demand price is $32.77 per hour (though that price goes down to less than $20/hour for one-year reserved instances and $11.57 for three-year reserved instances).
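
A quick back-of-the-envelope on those quoted rates (launch pricing as cited above; current prices may differ):

```python
on_demand = 32.77    # $/hour, p4d.24xlarge on demand
three_year = 11.57   # $/hour effective with a three-year reservation

savings = 1 - three_year / on_demand   # fraction saved by reserving for 3 years
monthly = on_demand * 24 * 30          # a 30-day month at on-demand rates
```

That works out to roughly 65% off the on-demand rate for a three-year commitment, and about $23,600 for a single instance running a full 30-day month on demand, before any cluster-scale multiplication.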

Image Credits: AWS

On the extreme end, you can combine 4,000 or more GPUs into an EC2 UltraCluster, as AWS calls these machines, for high-performance computing workloads at what is essentially a supercomputer-scale machine. Given the price, you’re not likely to spin up one of these clusters to train your model for your toy app anytime soon, but AWS has already been working with a number of enterprise customers to test these instances and clusters, including Toyota Research Institute, GE Healthcare and Aon.

“At [Toyota Research Institute], we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, Technical Lead, Infrastructure Engineering at TRI. “The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”
