Jul 22, 2019

Google Cloud makes it easier to set up continuous delivery with Spinnaker

Google Cloud today announced Spinnaker for Google Cloud Platform, a new solution that makes it easier to install and run the Spinnaker continuous delivery (CD) service on Google’s cloud.

Spinnaker was created inside Netflix and is now jointly developed by Netflix and Google. Netflix open-sourced it back in 2015 and over the course of the last few years, it became the open-source CD platform of choice for many enterprises. Today, companies like Adobe, Box, Cisco, Daimler, Samsung and others use it to speed up their development process.

With Spinnaker for Google Cloud Platform, which runs on the Google Kubernetes Engine, Google is making the install process for the service as easy as a few clicks. Once up and running, the Spinnaker install includes all of the core tools, as well as Deck, the user interface for the service. Users pay for the resources used by the Google Kubernetes Engine, as well as Cloud Memorystore for Redis, Google Cloud Load Balancing and potentially other resources they use in the Google Cloud.

The company has pre-configured Spinnaker for testing and deploying code on Google Kubernetes Engine, Compute Engine and App Engine, though it also will work with any other public or on-prem cloud. It’s also integrated with Cloud Build, Google’s recently launched continuous integration service, and features support for automatic backups and integrated auditing and monitoring with Google’s Stackdriver.
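
To make the pipeline idea concrete, here is a minimal conceptual sketch of the kind of staged delivery pipeline Spinnaker runs: an ordered set of stages executed in sequence, halting on failure. The stage names and structure are illustrative only, not Spinnaker's actual pipeline format or API.

```python
# Conceptual sketch of a Spinnaker-style delivery pipeline: ordered stages
# (trigger, bake, deploy, verify) run in sequence, stopping on the first failure.
# Stage names and structure are invented for illustration.
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]  # (stage name, action returning success)

def run_pipeline(stages: List[Stage]) -> bool:
    """Run stages in order; stop at the first failure, as a CD pipeline would."""
    for name, action in stages:
        print(f"Running stage: {name}")
        if not action():
            print(f"Stage '{name}' failed; halting pipeline.")
            return False
    return True

# Hypothetical stages mirroring the GKE/Compute Engine/App Engine targets above.
pipeline = [
    ("trigger: Cloud Build finished", lambda: True),
    ("bake image",                    lambda: True),
    ("deploy canary to GKE",          lambda: True),
    ("verify canary health",          lambda: True),
    ("promote to production",         lambda: True),
]

if __name__ == "__main__":
    run_pipeline(pipeline)
```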

“We want to make sure that the solution is great both for developers and DevOps or SRE teams,” says Matt Duftler, tech lead for Google’s Spinnaker effort, in today’s announcement. “Developers want to get moving fast with the minimum of overhead. Platform teams can allow them to do that safely by encoding their recommended practice into Spinnaker, using Spinnaker for GCP to get up and running quickly and start onboarding development teams.”

 

Jul 01, 2019

We’ll talk even more Kubernetes at TC Sessions: Enterprise with Microsoft’s Brendan Burns and Google’s Tim Hockin

You can’t go to an enterprise conference these days without talking containers — and specifically the Kubernetes container management system. It’s no surprise, then, that we’ll do the same at our inaugural TC Sessions: Enterprise event on September 5 in San Francisco. As we already announced last week, Kubernetes co-founder Craig McLuckie and Aparna Sinha, Google’s director of product management for Kubernetes, will join us to talk about the past, present and future of containers in the enterprise.

In addition, we can now announce that two other Kubernetes co-founders will join us: Google principal software engineer Tim Hockin, who currently works on Kubernetes and the Google Container Engine, and Microsoft distinguished engineer Brendan Burns, who was the lead engineer for Kubernetes during his time at Google.

With this, we’ll have three of the four Kubernetes co-founders onstage to talk about the five-year-old project.

Before joining the Kubernetes effort, Hockin worked on internal Google projects like Borg and Omega, as well as the Linux kernel. On the Kubernetes project, he worked on core features and early design decisions involving networking, storage, node, multi-cluster, resource isolation and cluster sharing.

While his colleagues Craig McLuckie and Joe Beda decided to parlay their work on Kubernetes into a startup, Heptio, which they then successfully sold to VMware for about $550 million, Burns took a different route and joined the Microsoft Azure team three years ago.

I can’t think of a better group of experts to talk about the role Kubernetes is playing in reshaping how enterprises build software.

If you want a bit of a preview, here is my conversation with McLuckie, Hockin and Microsoft’s Gabe Monroy about the history of the Kubernetes project.

Early-Bird tickets are now on sale for $249; students can grab a ticket for just $75. Book your tickets here before prices go up.

May 09, 2019

AWS remains in firm control of the cloud infrastructure market

It has to be a bit depressing to be in the cloud infrastructure business if your name isn’t Amazon. Sure, there’s a huge, growing market, and the companies chasing Amazon are growing even faster. Yet it seems no matter how fast they grow, Amazon remains a dot on the horizon.

It seems inconceivable that AWS can continue to hold sway over such a large market for so long, but as we’ve pointed out before, it has been able to maintain its position through a true first-mover advantage. The other players didn’t even show up until several years after Amazon launched its first service in 2006, and they are paying the price for failing to see, as Amazon did, how computing was about to change.

They certainly see it now, whether it’s IBM, Microsoft or Google, or Tencent and Alibaba, both of which are growing fast in the China/Asia markets. All of these companies are trying to find the formula to help differentiate themselves from AWS and give them some additional market traction.

Cloud market growth

Interestingly, even though companies have begun to move with increasing urgency to the cloud, the pace of growth slowed a bit in the first quarter to a 42 percent rate, according to data from Synergy Research, but that doesn’t mean the end of this growth cycle is anywhere close.

Apr 11, 2019

Google Cloud makes some strong moves to differentiate itself from AWS and Microsoft

Google Cloud held its annual customer conference, Google Cloud Next, this week in San Francisco. It had a couple of purposes. For starters, it could introduce customers to new CEO Thomas Kurian for the first time since his hiring at the end of last year. And secondly, and perhaps more importantly, it could demonstrate that it can offer a value proposition that is distinct from AWS and Microsoft.

Kurian’s predecessor, Diane Greene, was fond of saying that it was still early days for the cloud market, and she’s still right, but while the pie has continued to grow substantially, Google’s share of the market has stayed stubbornly in single digits. It needed to use this week’s conference as at least a springboard to showcase its strengths.

Its lack of commercial cloud market clout has always been a bit of a puzzler. This is Google after all. It runs Google Search and YouTube and Google Maps and Google Docs. These are massive services that rarely go down. You would think being able to run these massive services would translate into massive commercial success, but so far it hasn’t.

Missing ingredients

Even though Greene brought her own considerable enterprise cred to GCP, having been a co-founder at VMware, the company that really made the cloud possible by popularizing the virtual machine, she wasn’t able to significantly change the company’s commercial cloud fortunes.

In a conversation with TechCrunch’s Frederic Lardinois, Kurian talked about missing ingredients, like having people to talk to (or maybe a throat to choke). “A number of customers told us ‘we just need more people from you to help us.’ So that’s what we’ll do,” Kurian told Lardinois.

But, of course, it’s never one thing when it comes to a market as complex as cloud infrastructure. Sure, you can add more bodies in customer support or sales, or more aggressively pursue high-value enterprise customers, or whatever Kurian has identified as holes in GCP’s approach up until now, but it still requires a compelling story, and Google took a big step toward having the ingredients for a new story this week.

Changing position

Google is trying to position itself in the same way as any cloud vendor going after AWS: it is selling itself as the hybrid cloud company that can help with your digital transformation. It’s a common strategy, but Google did more than throw out the usual talking points this week. It walked the walk, too.

For starters, it introduced Anthos, a single tool to manage your workloads wherever they live, even in a rival cloud. This is a big deal, and if it works as described, it gives that new beefed-up sales team at Google Cloud a stronger story to tell around integration. As my colleague Frederic Lardinois described it:

So with Anthos, Google will offer a single managed service that will let you manage and deploy workloads across clouds, all without having to worry about the different environments and APIs. That’s a big deal and one that clearly delineates Google’s approach from its competitors’. This is Google, after all, managing your applications for you on AWS and Azure.
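
To illustrate the "single control plane, many environments" idea that makes Anthos interesting, here is a conceptual sketch. The Target classes and deploy() interface are invented for illustration; they are not the actual Anthos API.

```python
# Conceptual sketch: one workload definition, one call, many environments.
# The classes below are hypothetical, not Anthos's real interface.
from abc import ABC, abstractmethod

class Target(ABC):
    """An environment a workload can be deployed to: GCP, AWS, on-prem."""
    @abstractmethod
    def deploy(self, workload: str) -> None: ...

class GKETarget(Target):
    def deploy(self, workload: str) -> None:
        print(f"Deploying {workload} to a GKE cluster on GCP")

class AWSTarget(Target):
    def deploy(self, workload: str) -> None:
        print(f"Deploying {workload} to a Kubernetes cluster on AWS")

class OnPremTarget(Target):
    def deploy(self, workload: str) -> None:
        print(f"Deploying {workload} to an on-prem cluster")

def deploy_everywhere(workload: str, targets: list[Target]) -> None:
    """The control-plane idea: the caller never touches per-cloud APIs."""
    for target in targets:
        target.deploy(workload)

deploy_everywhere("checkout-service:v42", [GKETarget(), AWSTarget(), OnPremTarget()])
```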

AWS hasn’t made many friends in the open-source community of late, and Google reiterated that it was going to be the platform that is friendly to open-source projects. To that end, it announced a number of major partnerships.

Finally, the company took a serious look at verticals, trying to put together packages of Google Cloud services designed specifically for a given industry. As an example, it put together a package for retailers that included special services to help keep you up and running during peak demand; recommendation tools that suggest “if you like this, you might be interested in these items”; contact center AI; and other tools specifically geared toward the retail market. You can expect the company will be doing more of this to make the platform more attractive to a given market space.


All of this and more, way too much to summarize in one article, was exactly what Google Cloud needed to do this week. Now comes the hard part. The company has come up with some good ideas, and now it has to go out and sell them.

Nobody has ever claimed that Google lacked good technology. That has always been an obvious strength, but the company has struggled to translate it into substantial market share. That is Kurian’s challenge. As Greene used to say, in baseball terms, it’s still early innings. And it really is, but the game is starting to move along, and Kurian needs to get the team moving in the right direction if it expects to be competitive.

Apr 10, 2019

Google Cloud takes aim at verticals starting with new set of tools for retailers

Google might not be Adobe or Salesforce, but it has a particular set of skills that fit nicely with retailer requirements and can, over time, help improve the customer experience, even if that just means making sure the website or app stays up, even during peak demand. Today at Google Cloud Next, the company showed off a package of solutions as an example of its vertical strategy.

Just this morning, the company announced a new phase of its partnership with Salesforce, where it’s using its contact center AI tools and chatbot technology in combination with Salesforce data to produce a product that plays to each company’s strengths and helps improve the customer service experience.

But Google didn’t stop with a high-profile partnership. It has a few tricks of its own for retailers, starting with the classic retail Black Friday scenario. The easiest way to explain the value of cloud scaling is to look at an event like Black Friday, when you know servers are going to be bombarded with traffic.

The cloud has always been good at scaling up for those kinds of events, but it’s not perfect, as Amazon learned last year when its own site slowed down on Prime Day. Google wants to help companies avoid those kinds of disasters, because a slow or downed website translates into lots of lost revenue.
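
To show the scaling arithmetic behind surviving a traffic spike, here is a minimal sketch of a reactive scaling policy: size the fleet from observed load, within a floor and a ceiling. The numbers and the policy itself are illustrative assumptions, not any cloud provider's actual algorithm.

```python
# Toy reactive autoscaling policy: replicas sized to observed requests/second,
# clamped between a minimum (headroom) and a maximum (cost ceiling).
import math

def desired_replicas(current_rps: float,
                     rps_per_replica: float = 500.0,
                     min_replicas: int = 3,
                     max_replicas: int = 200) -> int:
    """Return how many replicas are needed to serve current_rps."""
    needed = math.ceil(current_rps / rps_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# An ordinary Tuesday vs. a Black Friday surge:
print(desired_replicas(1_200))    # -> 3 (the floor keeps headroom)
print(desired_replicas(60_000))   # -> 120
```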

The company offers eCommerce Hosting, designed specifically for online retailers, and it is offering a special premium program, so retailers get “white glove treatment with technical architecture reviews and peak season operations support…” according to the company. In other words, it wants to help these companies avoid disastrous, money-losing results when a site goes down due to demand.

In addition, Google is offering real-time inventory tools, so customers and clerks can know exactly what stock is on hand. It’s applying its AI expertise here as well, with tools like its Contact Center AI solution to help deliver better customer service experiences, and Cloud Vision technology that lets customers point their cameras at a product and see similar or related products. There is also Recommendations AI, a tool that says, if you bought these things, you might like this too.
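
The intuition behind "you might like this too" recommendations can be shown with a toy co-occurrence model: count which products appear together in past baskets and suggest the most frequent companions. Recommendations AI is far more sophisticated than this; the sketch below only captures the underlying idea, with made-up data.

```python
# Toy "bought together" recommender: count co-occurrences across baskets,
# then recommend the items most often seen alongside a given product.
from collections import Counter
from itertools import permutations

baskets = [
    {"tent", "sleeping_bag", "lantern"},
    {"tent", "sleeping_bag"},
    {"sleeping_bag", "lantern", "stove"},
]

co_occurs: dict[str, Counter] = {}
for basket in baskets:
    for a, b in permutations(basket, 2):
        co_occurs.setdefault(a, Counter())[b] += 1

def recommend(item: str, k: int = 2) -> list[str]:
    """Items most often bought alongside `item`."""
    return [other for other, _ in co_occurs.get(item, Counter()).most_common(k)]

print(recommend("tent"))  # -> ['sleeping_bag', 'lantern']
```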

The company counts retail customers like Shopify and Ikea. In addition, it is working with systems integrator partners like Accenture, Capgemini and Deloitte, and software partners like Salesforce, SAP and Tableau.

All of this is about building a set of services designed specifically for a given vertical to help that industry take advantage of the cloud. It’s one more way for Google Cloud to bring solutions to market and help increase its market share.

Apr 10, 2019

Google Cloud announces Traffic Director, a networking management tool for service mesh

With each new set of technologies comes a new set of terms. In the containerized world, applications are broken down into discrete pieces, or microservices. As these services proliferate, they form a service mesh: a network of services and the interactions that take place among them. Each new concept like this requires a management layer, especially so that network administrators can understand and control it; in this case, the service mesh.

Today at Google Cloud Next, the company announced the Beta of Traffic Director for open service mesh, specifically to help network managers understand what’s happening in their service mesh.

“To accelerate adoption and reduce the toil of managing service mesh, we’re excited to introduce Traffic Director, our new GCP-managed, enterprise-ready configuration and traffic control plane for service mesh that enables global resiliency, intelligent load balancing, and advanced traffic control capabilities like canary deployments,” Brad Calder, VP of engineering for technical infrastructure at Google Cloud, wrote in a blog post introducing the tool.
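
The canary deployments Calder mentions come down to weighted traffic splitting: route a small share of requests to the new version and the rest to the stable one. Traffic Director does this in the load-balancing layer; the sketch below, with invented service names, only shows the routing decision itself.

```python
# Toy weighted canary split: roughly `canary_weight` of requests go to the
# new version, the rest to the stable one. Backend names are hypothetical.
import random

def pick_backend(canary_weight: float = 0.05) -> str:
    """Send roughly canary_weight of traffic to the canary version."""
    return "service-v2-canary" if random.random() < canary_weight else "service-v1-stable"

# Sanity check: with a 5% weight, about 5% of requests hit the canary.
sample = [pick_backend() for _ in range(10_000)]
print(sample.count("service-v2-canary") / len(sample))  # ~0.05
```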

Traffic Director provides a way for operations teams to deploy a service mesh on their networks and have more control over how it works and interacts with the rest of the system. The tool works with virtual machines, via Compute Engine on GCP, or in a containerized approach with GKE on GCP.

The product is just launching into Beta today, but the road map includes additional security features and support for hybrid environments, and eventually integration with Anthos, the hybrid management tool the company introduced yesterday at Google Cloud Next.

Apr 10, 2019

Google Cloud unveils new identity tools based on Zero Trust framework

Google Cloud announced some new identity tools today at Google Cloud Next, designed to simplify identity and access management within the context of the BeyondCorp Zero Trust security model.

Zero Trust, as the name implies, means you have to assume you can’t trust anyone using your network. In the days before the cloud, you could set up a firewall and with some reasonable degree of certainty assume people inside had permission to be there. The cloud changed that, and Zero Trust was born to help provide a more modern security posture that took that into account.

The company wants to make it easier for developers to build identity into applications without a lot of heavy lifting. It sees identity not just as a way to access applications, but as an integral part of the security layer, especially in the context of the BeyondCorp approach. If you know who the person is, and can understand the context of how they are interacting with you, that can give strong clues as to whether the person is who they say they are.

This is about more than protecting your applications; it’s about making sure that your entire system, from your virtual machines to your APIs, is similarly protected. “Over the past few months, we added context-aware access capabilities in Beta to Cloud Identity-Aware Proxy (IAP) and VPC Service Controls to help protect web apps, VMs and Google Cloud Platform (GCP) APIs. Today, we are making these capabilities generally available in Cloud IAP, as well as extending them in Beta to Cloud Identity to help you protect access to G Suite apps,” the company wrote in an introductory blog post.

Diagram: Google

This context-aware access layer protects all of these areas across the cloud. “Context-aware access allows you to define and enforce granular access to apps and infrastructure based on a user’s identity and the context of their request. This can help increase your organization’s security posture while giving users an easy way to more securely access apps or infrastructure resources, from virtually any device, anywhere,” the company wrote.
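
The shape of such a context-aware decision can be sketched in a few lines: identity alone is not enough, so device state and request context feed the allow/deny decision too. The fields and policy below are invented for illustration; they are not Google's actual evaluation logic.

```python
# Conceptual Zero Trust access check: every condition must hold, and trust is
# never assumed from network location. All fields and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    group: str            # e.g. "engineering"
    device_managed: bool  # corp-managed device?
    device_patched: bool  # OS up to date?
    country: str          # where the request originates

def allow(req: AccessRequest, resource_group: str, allowed_countries: set[str]) -> bool:
    """Grant access only when identity, device posture and context all check out."""
    return (req.group == resource_group
            and req.device_managed
            and req.device_patched
            and req.country in allowed_countries)

req = AccessRequest("alice", "engineering", device_managed=True,
                    device_patched=True, country="US")
print(allow(req, "engineering", {"US", "CA"}))  # -> True
```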

The G Suite protection is in beta, but the rest is generally available starting today.

Apr 09, 2019

Apigee jumps on hybrid bandwagon with new API for hybrid environments

This year at Google Cloud Next, the theme is all about supporting hybrid environments, so it shouldn’t come as a surprise that Apigee, the API management company Google bought in 2016 for $625 million, is also getting into the act. Today, Apigee announced the Beta of Apigee hybrid, a new product designed for hybrid environments.

Amit Zavery, who recently joined Google Cloud after many years at Oracle, and Nandan Sridhar, describe the new product in a joint blog post as “a new deployment option for the Apigee API management platform that lets you host your runtime anywhere—in your data center or the public cloud of your choice.”

As with Anthos, the company’s approach to hybrid management announced earlier today, the idea is to have a single way to manage your APIs no matter where you choose to run them.

“With Apigee hybrid, you get a single, full-featured API management solution across all your environments, while giving you control over your APIs and the data they expose and ensuring a unified strategy across all APIs in your enterprise,” Zavery and Sridhar wrote in the blog post announcing the new approach.
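
The split Zavery and Sridhar describe can be pictured as one management plane in Google's cloud with API runtimes hosted wherever you choose. The structure and field names below are invented for illustration, not Apigee's actual configuration format.

```python
# Conceptual sketch of the Apigee hybrid split: management plane in the cloud,
# runtimes anywhere. All names and fields are hypothetical.
deployment = {
    "management_plane": "GCP (hosted by Google)",   # analytics, config, policy
    "runtimes": [                                   # where API traffic is served
        {"name": "dc-east",  "location": "on-prem data center"},
        {"name": "gcp-west", "location": "GKE on GCP"},
        {"name": "aws-eu",   "location": "Kubernetes on AWS"},
    ],
}

# One place to answer "where do my APIs run?", regardless of environment:
for runtime in deployment["runtimes"]:
    print(f'{runtime["name"]}: {runtime["location"]}')
```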

The announcement is part of an overall strategy by the company to support a customer’s approach to computing across a range of environments, often referred to as hybrid cloud. In the Cloud Native world, the idea is to present a single fabric to manage your deployments, regardless of location.

This appears to be an extension of that idea, which makes sense given that Google developed and open-sourced Kubernetes, which is at the forefront of containerization and Cloud Native computing. While this isn’t pure Cloud Native computing, it keeps true to that ethos, and it fits the scope of Google Cloud’s approach to computing in general, especially as it is being defined at this year’s conference.

Apr 09, 2019

Google Cloud challenges AWS with new open-source integrations

Google today announced that it has partnered with a number of top open-source data management and analytics companies to integrate their products into its Google Cloud Platform and offer them as managed services operated by its partners. The partners here are Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j and Redis Labs.

The idea here, Google says, is to provide users with a seamless user experience and the ability to easily leverage these open-source technologies in Google’s cloud. But there is a lot more at play here, even though Google never quite says so. That’s because Google’s move here is clearly meant to contrast its approach to open-source ecosystems with Amazon’s. It’s no secret that Amazon’s AWS cloud computing platform has a reputation for taking some of the best open-source projects and then forking those and packaging them up under its own brand, often without giving back to the original project. There are some signs that this is changing, but a number of companies have recently taken action and changed their open-source licenses to explicitly prevent this from happening.

That’s where things get interesting, because those companies include Confluent, Elastic, MongoDB, Neo4j and Redis Labs — and those are all partnering with Google on this new project, though it’s worth noting that InfluxData is not taking this new licensing approach and that while DataStax uses lots of open-source technologies, its focus is very much on its enterprise edition.

“As you are aware, there has been a lot of debate in the industry about the best way of delivering these open-source technologies as services in the cloud,” Manvinder Singh, the head of infrastructure partnerships at Google Cloud, said in a press briefing. “Given Google’s DNA and the belief that we have in the open-source model, which is demonstrated by projects like Kubernetes, TensorFlow, Go and so forth, we believe the right way to solve this is to work closely together with companies that have invested their resources in developing these open-source technologies.”

So while AWS takes these projects and then makes them its own, Google has decided to partner with these companies. While Google and its partners declined to comment on the financial arrangements behind these deals, chances are we’re talking about some degree of profit-sharing here.

“Each of the major cloud players is trying to differentiate what it brings to the table for customers, and while we have a strong partnership with Microsoft and Amazon, it’s nice to see that Google has chosen to deepen its partnership with Atlas instead of launching an imitation service,” Sahir Azam, the senior VP of Cloud Products at MongoDB told me. “MongoDB and GCP have been working closely together for years, dating back to the development of Atlas on GCP in early 2017. Over the past two years running Atlas on GCP, our joint teams have developed a strong working relationship and support model for supporting our customers’ mission critical applications.”

As for the actual functionality, the core principle here is that Google will deeply integrate these services into its Cloud Console; for example, similar to what Microsoft did with Databricks on Azure. These will be managed services and Google Cloud will handle the invoicing and the billings will count toward a user’s Google Cloud spending commitments. Support will also run through Google, so users can use a single service to manage and log tickets across all of these services.

Redis Labs CEO and co-founder Ofer Bengal echoed this. “Through this partnership, Redis Labs and Google Cloud are bringing these innovations to enterprise customers, while giving them the choice of where to run their workloads in the cloud,” he said. “Customers now have the flexibility to develop applications with Redis Enterprise using the fully integrated managed services on GCP. This will include the ability to manage Redis Enterprise from the GCP console, provisioning, billing, support, and other deep integrations with GCP.”

Apr 09, 2019

Google Cloud is bringing two new data centers online in 2020

At Google Cloud Next today, the company announced it is bringing two brand new data centers online in the 2020 time frame, with one in Seoul, South Korea and one in Salt Lake City, Utah.

The company, like many of its web-scale peers, has had the data center-building pedal to the metal over the last several years. It has grown to 15 regions, with each region hosting multiple zones, for a total of 45 zones. In all, the company has a presence in 13 countries and says it has made an impressive $47 billion (with a B) in capital expenditures from 2016 through 2018.

Google Data Center Map. Photo: Google

“We’re going to be announcing the availability in early 2020 of Seoul, South Korea. So we are announcing a region there with three zones for customers to build their applications. Again, customers, either multinationals that are looking to serve their customers in that market or local customers that are looking to go global. This really helps address their needs and allows them to serve the customers in the way that they want to,” Dominic Preuss, director of product management said.

He added, “Similarly, Salt Lake City is our third region in the western United States along with Oregon and Los Angeles. And so it allows developers to build distributed applications across multiple regions in the western United States.”

In addition, the company announced that its new data center in Osaka, Japan is expected to come online some time in the coming weeks. One in Jakarta, Indonesia, currently under construction, is expected to come online in the first half of next year.
