Jan 27, 2021

SAP launches ‘RISE with SAP,’ a concierge service for digital transformation

SAP today announced a new offering it calls ‘RISE with SAP,’ a solution meant to help the company’s customers through their digital transformations and become what SAP calls ‘intelligent enterprises.’ RISE is a subscription offering that bundles a set of services and products.

SAP’s head of product success Sven Denecken (and its COO for S/4HANA) described it as “the best concierge service you can get for your digital transformation” when I talked to him earlier this week. “We need to help our clients to embrace that change that they see currently,” he said. “Transformation is a journey. Every client wants to become that smarter, faster and that nimbler business, but they, of course, also see that they are faced with challenges today and in the future. This continuous transformation is what is happening to businesses. And we do know from working together with them, that actually they agree with those fundamentals. They want to be an intelligent enterprise. They want to adapt and change. But the key question is how to get there? And the key question they ask us is, please help us to get there.”

With RISE with SAP, businesses get a single point of contact at SAP to guide them through their transformation journey, as well as access to the SAP partner ecosystem.

The first step in this process, Denecken stressed, isn’t necessarily to bring in new technology, though that is also part of it, but to help businesses redesign and optimize their business processes and implement the best practices in their verticals — and then measure the outcome. “Business process redesign means that you analyze how your business processes perform. How can you get tailored recommendations? How can you benchmark against industry standards? And this helps you to set the tone and also to motivate your people — your IT, your business people — to adapt,” Denecken described. He also noted that in order for a digital transformation project to succeed, IT and business leaders and employees have to work together.

In part, that includes new technology offerings, such as adopting robotic process automation (RPA). As Denecken stressed, all of this builds on top of the work SAP has done with its customers over the years to define business processes and KPIs.

On the technical side, SAP is obviously offering its own services, including its Business Technology Platform, and cloud infrastructure, but it will also support customers on all of the large cloud providers. Also included in RISE is support for more than 2,200 APIs to integrate various on-premises, cloud and non-SAP systems, access to SAP’s low-code and no-code capabilities and, of course, its database and analytics offerings.

“Geopolitical tensions, environmental challenges and the ongoing pandemic are forcing businesses to deal with change faster than ever before,” said Christian Klein, SAP’s CEO, in today’s announcement. “Companies that can adapt their business processes quickly will thrive – and SAP can help them achieve this. This is what RISE with SAP is all about: It helps customers continuously unlock new ways of running businesses in the cloud to stay ahead of their industry.”

With this new offering, SAP is now providing its customers with a number of solutions that were previously available through its partner ecosystem. Denecken doesn’t see this as SAP competing with its own partners, though. Instead, he argues that this is very much a partner play and that this new solution will likely only bring more customers to its partners as well.

“Needless to say, this has been a negotiation with those partners,” he said. “Because yes, it’s sometimes topics that we now take over they [previously] did. But we are looking for scale here. The need in the market for digital transformation has just started. And this is where we see that this is definitely a big offering, together with partners.”

Jan 21, 2021

Cloud infrastructure startup CloudNatix gets $4.5 million seed round led by DNX Ventures

CloudNatix founder and chief executive officer Rohit Seth. Image Credits: CloudNatix

CloudNatix, a startup that provides infrastructure for businesses with multiple cloud and on-premise operations, announced it has raised $4.5 million in seed funding. The round was led by DNX Ventures, an investment firm that focuses on United States and Japanese B2B startups, with participation from Cota Capital. Existing investors Incubate Fund, Vela Partners and 468 Capital also contributed.

The company also added DNX Ventures managing partner Hiro Rio Maeda to its board of directors.

CloudNatix was founded in 2018 by chief executive officer Rohit Seth, who previously held lead engineering roles at Google. The company’s platform helps businesses reduce IT costs by analyzing their infrastructure spending and then using automation to make IT operations across multiple clouds more efficient. The company’s typical customer spends between $500,000 and $50 million on infrastructure each year, and uses at least one cloud service provider in addition to on-premise networks.

Built on open-source software like Kubernetes and Prometheus, CloudNatix works with all major cloud providers and on-premise networks. For DevOps teams, it helps configure and manage infrastructure that runs both legacy and modern cloud-native applications, and enables them to transition more easily from on-premise networks to cloud services.

CloudNatix competes most directly with VMware and Red Hat OpenShift. But both of those services are limited to their base platforms, while CloudNatix’s advantage is that it is agnostic to base platforms and cloud service providers, Seth told TechCrunch.

The company’s seed round will be used to scale its engineering, customer support and sales teams.


Jan 13, 2021

Stacklet raises $18M for its cloud governance platform

Stacklet, a startup that is commercializing the Cloud Custodian open-source cloud governance project, today announced that it has raised an $18 million Series A funding round. The round was led by Addition, with participation from Foundation Capital and new individual investor Liam Randall, who is joining the company as VP of business development. Addition and Foundation Capital also invested in Stacklet’s seed round, which the company announced last August. This new round brings the company’s total funding to $22 million.

Stacklet helps enterprises manage their data governance stance across different clouds, accounts, policies and regions, with a focus on security, cost optimization and regulatory compliance. The service offers its users a set of pre-defined policy packs that encode best practices for access to cloud resources, though users can obviously also specify their own rules. In addition, Stacklet offers a number of analytics functions around policy health and resource auditing, as well as a real-time inventory and change management logs for a company’s cloud assets.
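
Cloud Custodian policies themselves are written as declarative YAML and run by the custodian CLI, so the snippet below is not Stacklet’s or Custodian’s actual syntax. It is just a hedged Python/boto3 sketch of the kind of rule a policy pack encodes, here flagging S3 buckets that lack default encryption:

```python
# Illustrative only: a boto3 approximation of a simple governance rule
# (find S3 buckets without default encryption). Real Cloud Custodian
# policies express this declaratively in YAML instead.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"non-compliant: {name} has no default encryption")
        else:
            raise
```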

The company was co-founded by Travis Stanfield (CEO) and Kapil Thangavelu (CTO). Both bring a lot of industry expertise to the table. Stanfield spent time as an engineer at Microsoft and led DealerTrack Technologies, while Thangavelu worked at Canonical and most recently on Amazon’s AWSOpen team. Thangavelu is also one of the co-creators of the Cloud Custodian project, which was first incubated at Capital One, where the two co-founders met, and is now a sandbox project under the Cloud Native Computing Foundation’s umbrella.

“When I joined Capital One, they had made the executive decision to go all-in on cloud and close their data centers,” Thangavelu told me. “I got to join on the ground floor of that movement and Custodian was born as a side project, looking at some of the governance and security needs that large regulated enterprises have as they move into the cloud.”

As companies have sped up their move to the cloud during the pandemic, the need for products like Stacklet’s has also increased. The company isn’t naming most of its customers, but it has disclosed FICO as a design partner. Stacklet isn’t purely focused on the enterprise, though. “Once the cloud infrastructure becomes — for a particular organization — large enough that it’s not knowable in a single person’s head, we can deliver value for you at that time and certainly, whether it’s through the open source or through Stacklet, we will have a story there.” The Cloud Custodian open-source project is already seeing serious use among large enterprises, though, and Stacklet obviously benefits from that as well.

“In just 8 months, Travis and Kapil have gone from an idea to a functioning team with 15 employees, signed early Fortune 2000 design partners and are well on their way to building the Stacklet commercial platform,” Foundation Capital’s Sid Trivedi said. “They’ve done all this while sheltered in place at home during a once-in-a-lifetime global pandemic. This is the type of velocity that investors look for from an early-stage company.”

Looking ahead, the team plans to use the new funding to continue to develop the product, which should be generally available later this year, expand both its engineering and its go-to-market teams, and continue to grow the open-source community around Cloud Custodian.

Dec 8, 2020

AWS expands on SageMaker capabilities with end-to-end features for machine learning

Nearly three years after it was first launched, Amazon Web Services’ SageMaker platform has gotten a significant upgrade in the form of new features, making it easier for developers to automate and scale each step of the process of building new automation and machine learning capabilities, the company said.

As machine learning moves into the mainstream, business units across organizations will find applications for automation, and AWS is trying to make the development of those bespoke applications easier for its customers.

“One of the best parts of having such a widely adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables,” said AWS vice president of machine learning, Swami Sivasubramanian. “Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug and run custom machine learning models with greater visibility, explainability and automation at scale.”

Already companies like 3M, ADP, AstraZeneca, Avis, Bayer, Capital One, Cerner, Domino’s Pizza, Fidelity Investments, Lenovo, Lyft, T-Mobile and Thomson Reuters are using SageMaker tools in their own operations, according to AWS.

The company’s new products include Amazon SageMaker Data Wrangler, which the company says provides a way to normalize data from disparate sources so the data is consistently easy to use. Data Wrangler can also ease the process of grouping disparate data sources into features to highlight certain types of data. The Data Wrangler tool contains more than 300 built-in data transformers that can help customers normalize, transform and combine features without having to write any code.

Amazon also unveiled the Feature Store, which allows customers to create repositories that make it easier to store, update, retrieve and share machine learning features for training and inference.
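
For a sense of how that looks in practice, here is a minimal sketch using the SageMaker Python SDK’s Feature Store classes; the bucket, role and feature names are placeholders rather than anything from AWS’s announcement:

```python
# A minimal Feature Store sketch: define a feature group from a DataFrame,
# create it, then ingest rows. Placeholders are marked in angle brackets.
import time

import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()

df = pd.DataFrame(
    {
        "customer_id": ["c-1", "c-2"],
        "lifetime_value": [1200.0, 340.5],
        "event_time": [1607000000.0, 1607000060.0],  # required event-time column
    }
)
df["customer_id"] = df["customer_id"].astype("string")  # SDK expects pandas string dtype

feature_group = FeatureGroup(name="customers", sagemaker_session=session)
feature_group.load_feature_definitions(data_frame=df)  # infer feature types from the DataFrame

feature_group.create(
    s3_uri="s3://<your-offline-store-bucket>",   # placeholder
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn="<your-sagemaker-execution-role>",  # placeholder
    enable_online_store=True,                    # serve features for real-time inference
)

# Creation is asynchronous; wait before ingesting.
while feature_group.describe()["FeatureGroupStatus"] == "Creating":
    time.sleep(5)

feature_group.ingest(data_frame=df, max_workers=2, wait=True)
```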

Another new tool that Amazon Web Services touted was Pipelines, its workflow management and automation toolkit. Pipelines is designed to provide orchestration and automation features not dissimilar from those found in traditional programming. Using Pipelines, developers can define each step of an end-to-end machine learning workflow, the company said in a statement. Developers can use the tools to re-run an end-to-end workflow from SageMaker Studio using the same settings to get the same model every time, or they can re-run the workflow with new data to update their models.
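
A rough sketch of what defining such a workflow looks like with the SageMaker Python SDK is below; the processing script, container image, role and S3 paths are all placeholders, and a real pipeline would typically add parameters, model registration and conditional steps:

```python
# A rough two-step pipeline: a processing step feeding a training step.
# Image URIs, role, script name and S3 paths are placeholders.
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

role = "<your-sagemaker-execution-role>"  # placeholder

processor = SKLearnProcessor(
    framework_version="0.23-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

prep_step = ProcessingStep(
    name="PrepareData",
    processor=processor,
    inputs=[ProcessingInput(source="s3://<bucket>/raw", destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
    code="preprocess.py",  # your own preprocessing script
)

estimator = Estimator(
    image_uri="<your-training-image>",  # placeholder
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://<bucket>/model",
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={
        "train": TrainingInput(
            # Wire the processing step's output into the training step.
            s3_data=prep_step.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri
        )
    },
)

pipeline = Pipeline(name="demo-pipeline", steps=[prep_step, train_step])
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
execution = pipeline.start()    # re-running with the same settings reproduces the model
```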

To address the longstanding issues with data bias in artificial intelligence and machine learning models, Amazon launched SageMaker Clarify. First announced today, the tool is designed to provide bias detection across the machine learning workflow, so developers can build with an eye toward better transparency into how models were set up. There are open-source tools that can run these tests, Amazon acknowledged, but they are manual and require a lot of heavy lifting from developers, according to the company.
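
As an illustration, a pre-training bias check with the SageMaker Python SDK’s clarify module might look roughly like this; the dataset schema, facet column and S3 locations are made up for the example:

```python
# A hedged pre-training bias check with SageMaker Clarify.
# Column names, facet and S3 locations are illustrative placeholders.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = "<your-sagemaker-execution-role>"  # placeholder

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://<bucket>/train.csv",    # placeholder
    s3_output_path="s3://<bucket>/clarify-report",   # placeholder
    label="approved",                                # target column (illustrative)
    headers=["approved", "age", "income", "gender"], # illustrative schema
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # the favorable outcome
    facet_name="gender",            # the sensitive attribute to check
)

# Computes pre-training bias metrics (e.g., class imbalance) and writes a report to S3.
clarify_processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
)
```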

Other products designed to simplify the machine learning application development process include SageMaker Debugger, which enables developers to train models faster by monitoring system resource utilization and alerting developers to potential bottlenecks; Distributed Training, which makes it possible to train large, complex, deep learning models faster than current approaches by automatically splitting data across multiple GPUs to accelerate training times; and SageMaker Edge Manager, a machine learning model management tool for edge devices, which allows developers to optimize, secure, monitor and manage models deployed on fleets of edge devices.
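
To make the Debugger piece a bit more concrete, here is a hedged sketch of attaching one of its built-in rules to a training job via the SageMaker Python SDK; the training image, role and data path are placeholders:

```python
# A minimal sketch: attach a built-in Debugger rule so SageMaker flags a
# stalled training loss automatically. Image, role and data path are placeholders.
import sagemaker
from sagemaker.debugger import Rule, rule_configs
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<your-training-image>",       # placeholder
    role="<your-sagemaker-execution-role>",  # placeholder
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    sagemaker_session=session,
    # Built-in rule: raise an issue if the training loss stops decreasing.
    rules=[Rule.sagemaker(rule_configs.loss_not_decreasing())],
)

estimator.fit({"train": "s3://<your-bucket>/train"})  # placeholder S3 path
```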

Last but not least, Amazon unveiled SageMaker JumpStart, which provides developers with a searchable interface to find algorithms and sample notebooks so they can get started on their machine learning journey. The company said it would give developers new to machine learning the option to select several pre-built machine learning solutions and deploy them into SageMaker environments.

Dec 1, 2020

AWS updates its edge computing solutions with new hardware and Local Zones

AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outposts service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.

In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.
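
Local Zones are opt-in on a per-account basis, so before launching resources in one you enable its zone group. A hedged boto3 sketch (the Los Angeles group name below is illustrative) might look like this:

```python
# Opt in to a Local Zone group, then list the local zones visible to the account.
# The "us-west-2-lax-1" group name (Los Angeles) is illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Local Zones are disabled by default and must be opted into per zone group.
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["local-zone"]}],
)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["OptInStatus"])
```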

The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.

As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail in building their hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. None of the existing solutions from other vendors got any traction, Jassy argues, because of this (though AWS’s competitors would surely deny it).

The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.

With Outposts, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.

Dec 1, 2020

AWS adds natural language search service for business intelligence from its data sets

When Amazon Web Services launched QuickSight, its business intelligence service, back in 2016, the company wanted to provide product information and customer information for business users — not just developers.

At the time, the natural language processing technologies available weren’t robust enough to give customers the tools to search databases effectively using queries in plain speech.

Now, as those technologies have matured, Amazon is coming back with a significant upgrade called QuickSight Q, which allows users to just ask a simple question and get the answers they need, according to Andy Jassy’s keynote at AWS re:Invent.

“We will provide natural language to provide what we think the key learning is,” said Jassy. “I don’t like that our users have to know which databases to access or where data is stored. I want them to be able to type into a search bar and get the answer to a natural language question.”

That’s what QuickSight Q aims to do. It’s a direct challenge to a number of business intelligence startups and another instance of the way machine learning and natural language processing are changing business processes across multiple industries.

“The way Q works. Type in a question in natural language [like]… ‘Give me the trailing twelve month sales of product X?’… You get an answer in seconds. You don’t have to know tables or have to know data stores.”

It’s a compelling use case and gets at the way AWS is integrating machine learning to provide more no-code services to customers. “Customers didn’t hire us to do machine learning,” Jassy said. “They hired us to answer the questions.”

Nov 12, 2020

Mirantis brings extensions to its Lens Kubernetes IDE, launches a new Kubernetes distro

Earlier this year, Mirantis, the company that now owns Docker’s enterprise business, acquired Lens, a desktop application that provides developers with something akin to an IDE for managing their Kubernetes clusters. At the time, Mirantis CEO Adrian Ionel told me that the company wants to offer enterprises the tools to quickly build modern applications. Today, it’s taking another step in that direction with the launch of an extensions API for Lens that will take the tool far beyond its original capabilities.

In addition to this update to Lens, Mirantis also today announced a new open-source project: k0s. The company describes it as “a modern, 100% upstream vanilla Kubernetes distro that is designed and packaged without compromise.”

It’s a single optimized binary without any OS dependencies (besides the kernel). Based on upstream Kubernetes, k0s supports Intel and Arm architectures and can run on any Linux host or on Windows Server 2019 worker nodes. Given this flexibility, the team argues that k0s should work for virtually any use case, ranging from local development clusters to private data centers, telco clusters and hybrid cloud solutions.

“We wanted to create a modern, robust and versatile base layer for various use cases where Kubernetes is in play. Something that leverages vanilla upstream Kubernetes and is versatile enough to cover use cases ranging from typical cloud based deployments to various edge/IoT type of cases,” said Jussi Nummelin, senior principal engineer at Mirantis and founder of k0s. “Leveraging our previous experiences, we really did not want to start maintaining the setup and packaging for various OS distros. Hence the packaging model of a single binary to allow us to focus more on the core problem rather than different flavors of packaging such as debs, rpms and what-nots.”

Mirantis, of course, has a bit of experience in the distro game. In its earliest iteration, back in 2013, the company offered one of the first major OpenStack distributions, after all.

Image Credits: Mirantis

As for Lens, the new API, which will go live next week to coincide with KubeCon, will enable developers to extend the service with support for other Kubernetes-integrated components and services.

“Extensions API will unlock collaboration with technology vendors and transform Lens into a fully featured cloud native development IDE that we can extend and enhance without limits,” said Miska Kaipiainen, the co-founder of the Lens open-source project and senior director of engineering at Mirantis. “If you are a vendor, Lens will provide the best channel to reach tens of thousands of active Kubernetes developers and gain distribution to your technology in a way that did not exist before. At the same time, the users of Lens enjoy quality features, technologies and integrations easier than ever.”

The company has already lined up a number of popular CNCF projects and vendors in the cloud-native ecosystem to build integrations. These include Kubernetes security vendors Aqua and Carbonetes, API gateway maker Ambassador Labs and AIOps company Carbon Relay. Venafi, nCipher, Tigera, Kong and StackRox are also currently working on their extensions.

“Introducing an extensions API to Lens is a game-changer for Kubernetes operators and developers, because it will foster an ecosystem of cloud-native tools that can be used in context with the full power of Kubernetes controls, at the user’s fingertips,” said Viswajith Venugopal, StackRox software engineer and developer of KubeLinter. “We look forward to integrating KubeLinter with Lens for a more seamless user experience.”

Nov 2, 2020

AWS launches its next-gen GPU instances

AWS today announced the launch of its newest GPU-equipped instances. Dubbed P4d, these new instances are launching a decade after AWS launched its first set of Cluster GPU instances. This new generation is powered by Intel Cascade Lake processors and eight of Nvidia’s A100 Tensor Core GPUs. These instances, AWS promises, offer up to 2.5x the deep learning performance of the previous generation — and training a comparable model should be about 60% cheaper with these new instances.

Image Credits: AWS

For now, there is only one size available (the p4d.24xlarge, in AWS slang), and its eight A100 GPUs are connected over Nvidia’s NVLink communication interface, with support for the company’s GPUDirect interface as well.

With 320 GB of high-bandwidth GPU memory and 400 Gbps networking, this is obviously a very powerful machine. Add to that the 96 CPU cores, 1.1 TB of system memory and 8 TB of SSD storage, and it’s maybe no surprise that the on-demand price is $32.77 per hour (though that price goes down to less than $20/hour for one-year reserved instances and $11.57 for three-year reserved instances).

Image Credits: AWS

On the extreme end, you can combine 4,000 or more GPUs into an EC2 UltraCluster, as AWS calls these machines, for high-performance computing workloads at what is essentially a supercomputer-scale machine. Given the price, you’re not likely to spin up one of these clusters to train your model for your toy app anytime soon, but AWS has already been working with a number of enterprise customers to test these instances and clusters, including Toyota Research Institute, GE Healthcare and Aon.

“At [Toyota Research Institute], we’re working to build a future where everyone has the freedom to move,” said Mike Garrison, Technical Lead, Infrastructure Engineering at TRI. “The previous generation P3 instances helped us reduce our time to train machine learning models from days to hours and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.”

Nov 1, 2020

Warren gets $1.4 million to help local cloud infrastructure providers compete against Amazon and other giants

Started as a side project by its founders, Warren is now helping regional cloud infrastructure service providers compete against Amazon, Microsoft, IBM, Google and other tech giants. Based in Tallinn, Estonia, Warren’s self-service distributed cloud platform is gaining traction in Southeast Asia, one of the world’s fastest-growing cloud service markets, and Europe. It recently closed a $1.4 million seed round led by Passion Capital, with plans to expand in South America, where it has already launched in Brazil.

Warren’s seed funding also included participation from Lemonade Stand and angel investors like former Nokia vice president Paul Melin and Marek Kiisa, co-founder of funds Superangel and NordicNinja.

The leading global cloud providers are aggressively expanding their international businesses by growing their marketing teams and data centers around the world (for example, over the past few months, Microsoft has launched a new data center region in Austria, expanded in Brazil and announced it will build a new region in Taiwan as it competes against Amazon Web Services).

But demand for customized service and control over data still prompts many companies, especially smaller ones, to pick local cloud infrastructure providers instead, Warren co-founder and chief executive officer Tarmo Tael told TechCrunch.

“Local providers pay more attention to personal sales and support, in local language, to all clients in general, and more importantly, take the time to focus on SME clients to provide flexibility and address their custom needs,” he said. “Whereas global providers give a personal touch maybe only to a few big clients in the enterprise sectors.” Many local providers also offer lower prices and give a large amount of bandwidth for free, attracting SMEs.

He added that data sovereignty “plays an important role in choosing their cloud platform for many of the clients.”

In 2015, Tael and co-founder Henry Vaaderpass began working on the project that eventually became Warren while running a development agency for e-commerce sites. From the beginning, the two wanted to develop a product of their own and tested several ideas out, but weren’t really excited by any of them, he said. At the same time, the agency’s e-commerce clients were running into challenges as their businesses grew.

Tael and Vaaderpass’s clients tended to pick local cloud infrastructure providers because of lower costs and more personalized support. But setting up new e-commerce projects with scalable infrastructure was costly because many local cloud infrastructure providers use different platforms.

“So we started looking for tools to use for managing our e-commerce projects better and more efficiently,” Tael said. “As we didn’t find what we were looking for, we saw this as an opportunity to build our own.”

After creating their first prototype, Tael and Vaaderpass realized that it could be used by other development teams, and decided to seek angel funding from investors, like Kiisa, who have experience working with cloud data centers or infrastructure providers.

Southeast Asia, one of the world’s fastest-growing cloud markets, is an important part of Warren’s business. Warren will continue to expand in Southeast Asia, while focusing on other developing regions with large domestic markets, like South America (starting with Brazil). Tael said the startup is also in discussion with potential partners in other markets, including Russia, Turkey and China.

Warren’s current clients include Estonian cloud provider Pilw.io and Indonesian cloud provider IdCloudHost. Tael said working with Warren means its customers spend less time dealing with technical issues related to infrastructure software, so their teams, including developers, can instead focus on supporting clients and managing other services they sell.

The company’s goal is to give local cloud infrastructure providers the ability to meet increasing demand, and eventually expand internationally, with tools to handle more installations and end users. These include features like automated maintenance and DevOps processes that streamline feature testing and the handling of different platforms.

Ultimately, Warren wants to connect providers in a network that end users can access through a single API and user interface. It also envisions the network as a community where Warren’s clients can share resources and, eventually, have a marketplace for their apps and services.

In terms of competition, Tael said local cloud infrastructure providers often turn to OpenStack, Virtuozzo, Stratoscale or Mirantis. The advantage these companies currently have over Warren is a wider network, but Warren is busy building out its own. The company will be able to connect several locations to one provider by the first quarter of 2021. After that, Tael said, it will “gradually connect providers to each other, upgrading our user management and billing services to handle all that complexity.”

Oct 19, 2020

The OpenStack Foundation becomes the Open Infrastructure Foundation

This has been a long time coming, but the OpenStack Foundation today announced that it is changing its name to “Open Infrastructure Foundation,” starting in 2021.

The announcement, which the foundation made at its virtual developer conference, doesn’t exactly come as a surprise. Over the course of the last few years, the organization started adding new projects that went well beyond the core OpenStack project, and renamed its conference to the “Open Infrastructure Summit.” The organization actually filed for the “Open Infrastructure Foundation” trademark back in April.

Image Credits: OpenStack Foundation

After years of hype, the open-source OpenStack project hit a bit of a wall in 2016, as the market started to consolidate. The project itself, which helps enterprises run their private cloud, found its niche in the telecom space, though, and continues to thrive as one of the world’s most active open-source projects. Indeed, I regularly hear from OpenStack vendors that they are now seeing record sales numbers — despite the lack of hype. With the project being stable, though, the Foundation started casting a wider net and added additional projects like the popular Kata Containers runtime and CI/CD platform Zuul.

“We are officially transitioning and becoming the Open Infrastructure Foundation,” longtime OpenStack Foundation executive director Jonathan Bryce told me. “That is something that I think is an awesome step that’s built on the success that our community has spawned both within projects like OpenStack, but also as a movement […], which is [about] how do you give people choice and control as they build out digital infrastructure? And that is, I think, an awesome mission to have. And that’s what we are recognizing and acknowledging and setting up for another decade of doing that together with our great community.”

In many ways, it’s been more of a surprise that the organization waited as long as it did. As the foundation’s COO Mark Collier told me, the team waited because it wanted to be sure that it did this right.

“We really just wanted to make sure that all the stuff we learned when we were building the OpenStack community and with the community — that started with a simple idea of ‘open source should be part of cloud, for infrastructure.’ That idea has just spawned so much more open source than we could have imagined. Of course, OpenStack itself has gotten bigger and more diverse than we could have imagined,” Collier said.

As part of today’s announcement, the group also announced that its board approved four new members at its Platinum tier, its highest membership level: Ant Group, the Alibaba affiliate behind Alipay; embedded systems specialist Wind River; China’s FiberHome (which was previously a Gold member); and Facebook Connectivity. These companies will join the new foundation in January. To become a Platinum member, companies must contribute $350,000 per year to the foundation and have at least two full-time employees contributing to its projects.

“If you look at those companies that we have as Platinum members, it’s a pretty broad set of organizations,” Bryce noted. “AT&T, the largest carrier in the world. And then you also have a company Ant, who’s the largest payment processor in the world and a massive financial services company overall — over to Ericsson, that does telco, Wind River, that does defense and manufacturing. And I think that speaks to that everybody needs infrastructure. If we build a community — and we successfully structure these communities to write software with a goal of getting all of that software out into production, I think that creates so much value for so many people: for an ecosystem of vendors and for a great group of users and a lot of developers love working in open source because we work with smart people from all over the world.”

The OpenStack Foundation’s existing members are also on board, and Bryce and Collier hinted at several new members that will join soon but didn’t quite get everything in place for today’s announcement.

We can probably expect the new foundation to start adding new projects next year, but it’s worth noting that the OpenStack project continues apace. The latest of the project’s twice-yearly releases, dubbed “Victoria,” launched last week, with additional Kubernetes integrations, improved support for various accelerators and more. Nothing will really change for the project now that the foundation is changing its name — though it may end up benefitting from a reenergized and more diverse community that will build out projects at its periphery.
