Jul 29, 2021
--

4 key areas SaaS startups must address to scale infrastructure for the enterprise

Startups and SMBs are usually the first to adopt many SaaS products. But as these customers grow in size and complexity — and as you rope in larger organizations — scaling your infrastructure for the enterprise becomes critical for success.

Below are four tips on how to advance your company’s infrastructure to support and grow with your largest customers.

Address your customers’ security and reliability needs

If you’re building SaaS, odds are you’re holding very important customer data. Regardless of what you build, that makes you a threat vector for attacks on your customers. While security is important for all customers, the stakes certainly get higher the larger they grow.

Given the stakes, it’s paramount to build infrastructure, products and processes that address your customers’ growing security and reliability needs. That includes the ethical and moral obligation you have to make sure your systems and practices meet and exceed any claim you make about security and reliability to your customers.

Here are security and reliability requirements large customers typically ask for:

Formal SLAs around uptime: If you’re building SaaS, customers expect it to be available all the time. Large customers using your software for mission-critical applications will expect to see formal SLAs in contracts committing to 99.9% uptime or higher. As you build infrastructure and product layers, you need to be confident in your uptime and be able to measure uptime on a per-customer basis so you know whether you’re meeting your contractual obligations (see the sketch after this list).

While it’s hard to prioritize asks from your largest customers, you’ll find that their collective feedback will pull your product roadmap in a specific direction.

Real-time status of your platform: Most larger customers will expect to see your platform’s historical uptime and have real-time visibility into events and incidents as they happen. As you mature and specialize, creating this visibility for customers also drives more collaboration between your customer operations and infrastructure teams. This collaboration is valuable to invest in, as it provides insight into how customers are experiencing a particular degradation in your service and allows you to communicate back what you have found so far and what your ETA is.

Backups: As your customers grow, be prepared for expectations around backups — not just in terms of how long it takes to recover the whole application, but also around backup periodicity, location of your backups and data retention (e.g., are you holding on to the data too long?). If you’re building your backup strategy, thinking about future flexibility around backup management will help you stay ahead of these asks.
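To make the per-customer uptime measurement mentioned under the SLA item concrete, here is a minimal sketch, assuming you already record downtime incidents per customer as time windows; the data model and names are illustrative, not a prescribed implementation:

```python
from datetime import datetime

# Illustrative only: downtime incidents per customer as (start, end) windows,
# e.g. pulled from your monitoring or incident-management tooling.
incidents = {
    "customer-a": [
        (datetime(2021, 7, 3, 10, 0), datetime(2021, 7, 3, 10, 25)),
        (datetime(2021, 7, 19, 2, 0), datetime(2021, 7, 19, 2, 10)),
    ],
}

def monthly_uptime(customer: str, window_start: datetime, window_end: datetime) -> float:
    """Return uptime as a fraction of the measurement window for one customer."""
    total = (window_end - window_start).total_seconds()
    down = sum(
        (min(end, window_end) - max(start, window_start)).total_seconds()
        for start, end in incidents.get(customer, [])
        if end > window_start and start < window_end
    )
    return 1 - down / total

uptime = monthly_uptime("customer-a", datetime(2021, 7, 1), datetime(2021, 8, 1))
# A 99.9% SLA allows roughly 43 minutes of downtime in a 30-day month.
print(f"{uptime:.4%}", "SLA met" if uptime >= 0.999 else "SLA breached")
```

Running this calculation per customer, rather than only for the platform as a whole, is what tells you whether a specific contract’s 99.9% commitment was actually met in a given month.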
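For the backup item, a similarly hedged sketch of a retention check, assuming each backup is tagged with its creation time; the retention periods are placeholder values, not recommendations for any particular compliance regime:

```python
from datetime import datetime, timedelta

# Placeholder retention windows for the example, not a recommendation.
RETENTION = {
    "daily": timedelta(days=30),     # keep daily snapshots for 30 days
    "monthly": timedelta(days=365),  # keep monthly snapshots for a year
}

def expired(kind: str, created_at: datetime, now: datetime) -> bool:
    """True if a backup has outlived its retention window and should be pruned."""
    return now - created_at > RETENTION[kind]

print(expired("daily", datetime(2021, 5, 1), datetime(2021, 7, 29)))    # True
print(expired("monthly", datetime(2021, 5, 1), datetime(2021, 7, 29)))  # False
```

Keeping policies like these explicit and configurable is what lets you answer an enterprise customer’s questions about periodicity and retention without re-architecting your backup pipeline.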

Apr 08, 2021
--

Immersion cooling to offset data centers’ massive power demands gains a big booster in Microsoft

LiquidStack does it. So does Submer. They’re both dropping servers carrying sensitive data into goop in an effort to save the planet. Now they’re joined by one of the biggest tech companies in the world in their efforts to improve the energy efficiency of data centers, because Microsoft is getting into the liquid-immersion cooling market.

Microsoft is using a liquid it developed in-house that’s engineered to boil at 122 degrees Fahrenheit (lower than the boiling point of water) to act as a heat sink, reducing the temperature inside the servers so they can operate at full power without any risks from overheating.

The vapor from the boiling fluid is converted back into a liquid through contact with a cooled condenser in the lid of the tank that stores the servers.

“We are the first cloud provider that is running two-phase immersion cooling in a production environment,” said Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development in Redmond, Washington, in a statement on the company’s internal blog. 

While that claim may be true, liquid cooling is a well-known approach to moving heat away from components to keep systems working. Cars use liquid cooling to keep their motors humming as they head out on the highway.

As technology companies confront the physical limits of Moore’s Law, the demand for faster, higher-performance processors means designing new architectures that can handle more power, the company wrote in a blog post. Power flowing through central processing units has increased from 150 watts to more than 300 watts per chip, and the GPUs responsible for much of Bitcoin mining, artificial intelligence applications and high-end graphics each consume more than 700 watts per chip.

It’s worth noting that Microsoft isn’t the first tech company to apply liquid cooling to data centers and the distinction that the company uses of being the first “cloud provider” is doing a lot of work. That’s because bitcoin mining operations have been using the tech for years. Indeed, LiquidStack was spun out from a bitcoin miner to commercialize its liquid immersion cooling tech and bring it to the masses.

“Air cooling is not enough”

More power flowing through the processors means hotter chips, which means the need for better cooling or the chips will malfunction.

“Air cooling is not enough,” said Christian Belady, vice president of Microsoft’s datacenter advanced development group in Redmond, in an interview for the company’s internal blog. “That’s what’s driving us to immersion cooling, where we can directly boil off the surfaces of the chip.”

For Belady, the use of liquid cooling technology brings the density and compression of Moore’s Law up to the datacenter level.

The results, from an energy consumption perspective, are impressive. Microsoft investigated liquid immersion as a cooling solution for high-performance computing applications such as AI, and among other things the investigation revealed that two-phase immersion cooling reduced power consumption for any given server by 5% to 15% (every little bit helps).

Meanwhile, companies like Submer claim they reduce energy consumption by 50%, water use by 99%, and take up 85% less space.

For cloud computing companies, the ability to keep these servers up and running even during spikes in demand, when they’d consume even more power, adds flexibility and ensures uptime even when servers are overtaxed, according to Microsoft.

“[We] know that with Teams when you get to 1 o’clock or 2 o’clock, there is a huge spike because people are joining meetings at the same time,” Marcus Fontoura, a vice president on Microsoft’s Azure team, said on the company’s internal blog. “Immersion cooling gives us more flexibility to deal with these burst-y workloads.”

At this point, data centers are a critical component of the internet infrastructure that much of the world relies on for… well… pretty much every tech-enabled service. That reliance, however, has come at a significant environmental cost.

“Data centers power human advancement. Their role as a core infrastructure has become more apparent than ever and emerging technologies such as AI and IoT will continue to drive computing needs. However, the environmental footprint of the industry is growing at an alarming rate,” Alexander Danielsson, an investment manager at Norrsken VC noted last year when discussing that firm’s investment in Submer.

Solutions under the sea

If submerging servers in experimental liquids offers one potential solution to the problem — then sinking them in the ocean is another way that companies are trying to cool data centers without expending too much power.

Microsoft has already been operating an undersea data center for the past two years. The company trotted out the tech last year as part of a push to aid in the search for a COVID-19 vaccine.

These pre-packed, shipping container-sized data centers can be spun up on demand and run deep under the ocean’s surface for sustainable, high-efficiency and powerful compute operations, the company said.

The liquid cooling project is most similar to Microsoft’s Project Natick, which is exploring the potential of underwater datacenters that are quick to deploy and can operate for years on the seabed sealed inside submarine-like tubes without any onsite maintenance by people.

In those data centers, a nitrogen atmosphere takes the place of an engineered fluid, and the servers are cooled with fans and a heat exchanger that pumps seawater through a sealed tube.

Startups are also staking claims to cool data centers out on the ocean (the seaweed is always greener in somebody else’s lake).

Nautilus Data Technologies, for instance, has raised over $100 million (according to Crunchbase) to develop data centers dotting the surface of Davy Jones’ locker. The company is currently developing a data center project co-located with a sustainable energy project in a tributary near Stockton, Calif.

With its two-phase immersion cooling tech, Microsoft is hoping to bring the benefits of ocean cooling onto the shore. “We brought the sea to the servers rather than put the datacenter under the sea,” Microsoft’s Alissa said in a company statement.

Ioannis Manousakis, a principal software engineer with Azure (left), and Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development (right), walk past a container at a Microsoft datacenter where computer servers in a two-phase immersion cooling tank are processing workloads. Photo by Gene Twedt for Microsoft.

Sep 24, 2020
--

NUVIA raises $240M from Mithril to make climate-ready enterprise chips

Climate change is on everyone’s minds these days, what with the outer Bay Area on fire, orange skies above San Francisco, and a hurricane season that is bearing down on the East Coast with alacrity (and that’s just the United States in the past two weeks).

A major — and growing — source of those emissions is data centers, the cloud infrastructure that powers most of our devices and experiences. That’s led to some novel ideas, such as Microsoft’s underwater data center Project Natick, which just came back to the surface for testing a bit more than a week ago.

Yet, for all the fun experiments, there is a bit more of an obvious solution: just make the chips more energy efficient.

That’s the thesis of NUVIA, which was founded by three ex-Apple chip designers who led the design of the “A” series chip line for the company’s iPhones and iPads for years. Those chips are wicked fast within a very tight energy envelope, and NUVIA’s premise is essentially what happens when you take those sorts of energy constraints (and the experience of its chip design team) and apply them to the data center.

We did a deep profile of the company last year when it announced its $53 million Series A, so definitely read that to understand the founding story and the company’s mission. Now, about a year later, it’s back with news of a whole bunch more funding.

NUVIA announced today that it has closed on a $240 million Series B round led by Mithril Capital, with a bunch of others involved listed below.

Since we last chatted with the company, we now have a bit more detail of what it’s working on. It has two products under development, a system-on-chip (SoC) unit dubbed “Orion” and a CPU core dubbed “Phoenix.” The company previewed a bit of Phoenix’s performance last month, although as with most chip companies, it is almost certainly too early to make any long-term predictions about how the technology will settle in with existing and future chips coming to the market.

NUVIA’s view is that chips are limited to about 250-300 watts of power given the cooling and power constraints of most data centers. As more cores become common per chip, each core is going to have to make do with less power while maintaining performance. NUVIA’s tech is trying to solve that problem, lowering total cost of ownership for data center operators while also improving overall energy efficiency.
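As a rough, back-of-the-envelope illustration of that constraint (the power budget and core counts below are assumptions for the example, not NUVIA’s figures):

```python
# Hold the chip's power budget fixed and vary the core count.
# The numbers are illustrative assumptions, not NUVIA's figures.
chip_power_budget_watts = 300  # roughly the data center ceiling cited above

for cores in (32, 64, 96, 128):
    per_core = chip_power_budget_watts / cores
    print(f"{cores} cores -> about {per_core:.1f} W available per core")
```

The more cores share the same fixed budget, the less power each one gets, which is why performance per watt becomes the competitive axis.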

There’s a lot more work to be done, of course, so expect to see more product announcements and previews from the company as it gets its technology further finalized. With $240 million more in the bank, though, it certainly has the resources to make some progress.

Shortly after we chatted with the company last year, Apple sued company founder and CEO Gerard Williams III for breach of contract, with the company arguing that its former chip designer was trying to poach employees for his nascent startup. Williams counter-sued earlier this year, and the two parties are now in the discovery phase of their lawsuit, which remains ongoing.

In addition to lead investor Mithril, the round was done “in partnership with” the founders of semiconductor giant Marvell (Sehat Sutardja and Weili Dai), funds managed by BlackRock, Fidelity, and Temasek, plus Atlantic Bridge and Redline Capital, along with Series A investors Capricorn Investment Group, Dell Technologies Capital, Mayfield, Nepenthe LLC, and WRVI Capital.

Sep 27, 2019
--

Google will soon open a cloud region in Poland

Google today announced its plans to open a new cloud region in Warsaw, Poland, to better serve its customers in Central and Eastern Europe.

This move is part of Google’s overall investment in expanding the physical footprint of its data centers. Only a few days ago, after all, the company announced that, in the next two years, it would spend $3.3 billion on its data center presence in Europe alone.

Google Cloud currently operates 20 different regions with 61 availability zones. Warsaw, like most of Google’s regions, will feature three availability zones and launch with all the standard core Google Cloud services, including Compute Engine, App Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner and BigQuery.

To launch the new region in Poland, Google is partnering with Domestic Cloud Provider (a.k.a. Chmury Krajowej, which itself is a joint venture of the Polish Development Fund and PKO Bank Polski). Domestic Cloud Provider (DCP) will become a Google Cloud reseller in the country and build managed services on top of Google’s infrastructure.

“Poland is in a period of rapid growth, is accelerating its digital transformation, and has become an international software engineering hub,” writes Google Cloud CEO Thomas Kurian. “The strategic partnership with DCP and the new Google Cloud region in Warsaw align with our commitment to boost Poland’s digital economy and will make it easier for Polish companies to build highly available, meaningful applications for their customers.”

May 21, 2019
--

Praqma puts Atlassian’s Data Center products into containers

It’s KubeCon + CloudNativeCon this week and in the slew of announcements, one name stood out: Atlassian. The company is best known as the maker of tools that allow developers to work more efficiently, not as a cloud infrastructure provider. In this age of containerization, though, even Atlassian can bask in the glory that is Kubernetes, because the company today announced that its channel partner Praqma is launching Atlassian Software in Kubernetes (ASK), a new solution that allows enterprises to run and manage their on-premises applications, like Jira Data Center, as containers with the help of Kubernetes.

Praqma is now making ASK available as open source.

As the company notes in today’s announcement, running a Data Center application and ensuring high availability can be a lot of work using today’s methods. With ASK and by containerizing the applications, scaling and management should become easier — and downtime more avoidable.

“Availability is key with ASK. Automation keeps mission-critical applications running whatever happens,” Praqma’s team explains. “If a Jira server fails, Data Center will automatically redirect traffic to healthy servers. If an application or server crashes Kubernetes automatically reconciles by bringing up a new application. There’s also zero downtime upgrades for Jira.”
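To illustrate the reconciliation behavior the Praqma team describes, here is a generic, simplified sketch of the Kubernetes controller pattern in Python; it is not ASK’s actual implementation, and the names and numbers are made up for the example:

```python
import time

DESIRED_REPLICAS = 3            # e.g. three Jira Data Center nodes
cluster = ["node-1", "node-2"]  # simulated observed state: one node has crashed

def reconcile() -> None:
    """Compare observed state with desired state and close the gap."""
    missing = DESIRED_REPLICAS - len(cluster)
    for _ in range(missing):
        replacement = f"node-{len(cluster) + 1}"
        cluster.append(replacement)  # in Kubernetes, a new pod would be scheduled
        print(f"started {replacement} to restore the desired replica count")

# A real controller runs this loop continuously; three iterations suffice here.
for _ in range(3):
    reconcile()
    time.sleep(1)
```

The point is simply that the desired state (three healthy Jira nodes, say) is declared once, and the control loop keeps closing the gap whenever reality drifts from it.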

ASK handles the scaling and most admin tasks, in addition to offering a monitoring solution based on the open-source Grafana and Prometheus projects.

Containers are slowly becoming the distribution medium of choice for a number of vendors. As enterprises move their existing applications to containers, it makes sense for them to also expect that they can manage their existing on-premises applications from third-party vendors in the same systems. For some vendors, that may mean a shift away from per-server licensing to per-seat licensing, so there are business implications to this, but in general, it’s a logical move for most.

Apr 02, 2018
--

Microsoft launches 2 new Azure regions in Australia

Microsoft continues its steady pace of opening up new data centers and launching new regions for its Azure cloud. Today, the company announced the launch of two new regions in Australia. To deliver these new regions, Azure Australia Central and Central 2, Microsoft entered a strategic partnership with Canberra Data Centers and unsurprisingly, the regions are located in the country’s capital territory around Canberra. These new central regions complement Microsoft’s existing data center presence in Australia, which previously focused on the business centers of Sydney and Melbourne.

Given the location in Canberra, it’s also no surprise that Microsoft is putting an emphasis on its readiness for handling government workloads on its platform. Throughout its announcement, the company also emphasizes that all of its Australian data centers are the right choice for its customers in New Zealand.

Julia White, Microsoft corporate VP for Azure, told me last month that the company’s strategy around its data center expansion has always been about offering a lot of small regions to allow it to be close to its customers (and, in turn, to allow its customers to be close to their own customers, too). “The big distinction is the number of regions we have,” White said. “Microsoft started its infrastructure approach focused on enterprise organizations and built lots of regions because of that. We didn’t pick this regional approach because it’s easy or because it’s simple, but because we believe this is what our customers really want.”

Azure currently consists of 50 available or announced regions. Over time, more of these regions will also feature multiple availability zones, though for now this recently announced feature is only available in two of them.

Mar 16, 2018
--

Google expands its Cloud Platform region in the Netherlands

Google today announced that it has expanded its recently launched Cloud Platform region in the Netherlands with an additional zone. The investment, which is worth a reported 500 million euros, expands the existing Netherlands region from two to three zones. With this, all four of the Central European Google Cloud Platform regions now feature three zones (which are akin to what AWS would call “availability zones”) that allow developers to build highly available services across multiple data centers.

Google typically aims to have at least three zones in every region, so today’s announcement to expand its region in the Dutch province of Groningen doesn’t come as a major surprise.

With this move, Google is also making Cloud Spanner, Cloud Bigtable, Managed Instance Groups, and Cloud SQL available in the region.

Over the course of the last two years, Google has worked hard to expand its global data center footprint. While it still can’t compete with the likes of AWS and Azure, both of which currently offer more regions, the company now has enough of a presence to be competitive in most markets.

In the near future, Google also plans to open regions in Los Angeles, Finland, Osaka and Hong Kong. The major blank spots on its current map remain Africa, China (for rather obvious reasons) and Eastern Europe, including Russia.

Mar 14, 2018
--

Microsoft announces its first cloud regions in the Middle East

Microsoft today announced its plans for a major expansion of the Microsoft Cloud with the launch of its first cloud regions in the Middle East. These new regions, which are scheduled to go online in 2019, will be located in Abu Dhabi and Dubai and will host the company’s usual Azure, Office 365 and Dynamics 365 services.

“Microsoft has been present in the Middle East for more than two decades and is deeply invested in the region in many ways,” said Sayed Hashish, Regional General Manager, Microsoft Gulf, in today’s announcement. “Driven by strong customer demand for cloud computing, local data centers were the logical next step given the enormous opportunity that the cloud presents. In areas like digital transformation, and the development of new intelligent services, our ambition is for the Microsoft Cloud to form a strategic part of the backbone for regional economic development.”

The Middle East has been relatively late to adopt cloud computing, but that also means it has a lot of room left to grow. Microsoft is clearly trying to get ahead of this trend.

It’s worth noting that Amazon, too, has already announced its plans for a region in Bahrain which will open in about a year, while Google has not announced any plans to enter this market yet.

In addition to the new Middle East regions, Microsoft also today announced its first region in Switzerland (with data centers around Geneva and Zurich), which is scheduled to go online in 2019. In Germany, the company is launching an additional cloud region and in France, the Microsoft Cloud is now generally available.

In total, Microsoft now offers 50 regions around the globe, with plans for 12 new regions in the works already.

Aug 31, 2017
--

Facebook to open source LogDevice for storing logs from distributed data centers

Facebook is planning to open source LogDevice, the company’s custom-built solution for storing logs collected from distributed data centers. The company made the announcement as part of its Scale conference.

Apr 06, 2017
--

Microsoft’s Azure Stack preview adds support for Azure Functions and App Service

Azure Stack is one of Microsoft’s most ambitious projects in the data center space. The goal of Azure Stack is, at its core, to bring many of the features of Microsoft’s Azure cloud computing platform into the enterprise data center. While there are some obvious advantages to using public cloud services like Azure, AWS or Google Cloud, after all, for many customers, running their…
