Jul 12, 2018

Google’s Apigee teams up with Informatica to extend its API ecosystem

Google acquired API management service Apigee back in 2016, but it’s been pretty quiet around the service since then. Today, however, Apigee announced a number of smaller updates that introduce a few new integrations with the Google Cloud Platform, as well as a major new partnership with cloud data management and integration firm Informatica that essentially makes Informatica the preferred integration partner for Google Cloud.

Like most partnerships in this space, the deal with Informatica involves some co-selling and marketing agreements, but that alone wouldn’t be all that interesting. What makes this deal stand out is that Google is actually baking some of Informatica’s tools right into the Google Cloud dashboard. This will allow Apigee users to tap Informatica’s wide range of integrations with third-party enterprise applications, while Informatica users will be able to publish their APIs through Apigee and have that service manage them.

Some of Google’s competitors, including Microsoft, have built their own integration services. As Google Cloud director of product management Ed Anuff told me, that wasn’t really on Google’s road map. “It takes a lot of know-how to build a rich catalog of connectors,” he said. “You could go and build an integration platform, but if you don’t have that, you can’t address your customers’ needs.” Instead, Google went looking for a partner that already has this large catalog and plenty of credibility in the enterprise space.

Similarly, Informatica’s senior VP and GM for big data, cloud and data integration Ronen Schwartz noted that many of his company’s customers are now looking to move into the cloud and this move will make it easier for Informatica’s customers to bring their services into Apigee and open them up for external applications. “With this partnership, we are bringing the best of breed of both worlds to our customers,” he said. “And we are doing it now and we are making it available in an integrated, optimized way.”

Jul 3, 2018

Google Cloud’s COO departs after 7 months

At the end of last November, Google announced that Diane Bryant, who at the time was on a leave of absence from her position as the head of Intel’s data center group, would become Google Cloud’s new COO. This was a major coup for Google, but it wasn’t meant to last. After only seven months on the job, Bryant has left Google Cloud, as Business Insider first reported today.

“We can confirm that Diane Bryant is no longer with Google. We are grateful for the contributions she made while at Google and we wish her the best in her next pursuit,” a Google spokesperson told us when we reached out for comment.

The reasons for Bryant’s departure are currently unclear. It’s no secret that Intel is looking for a new CEO and Bryant would fit the bill. Intel also famously likes to recruit insiders as its leaders, though I would be surprised if the company’s board had already decided on a replacement. Bryant spent more than 25 years at Intel and her hire at Google looked like it would be a good match, especially given that Google’s position behind Amazon and Microsoft in the cloud wars means that it needs all the executive talent it can get.

When Bryant was hired, Google Cloud CEO Diane Greene noted that “Diane’s strategic acumen, technical knowledge and client focus will prove invaluable as we accelerate the scale and reach of Google Cloud.” According to the most recent analyst reports, Google Cloud’s market share has ticked up a bit — and its revenue has increased at the same time — but Google remains a distant third in the competition and it doesn’t look like that’s changing anytime soon.

Jun 26, 2018

With Cloud Filestore, the Google Cloud gets a new storage option

Google is giving developers a new storage option in its cloud. Cloud Filestore, which will launch into beta next month, essentially offers a fully managed network attached storage (NAS) service in the cloud. This means that companies can now easily run applications that need a traditional file system interface on the Google Cloud Platform.

Traditionally, developers who wanted access to a standard file system, rather than the object storage and database options Google already offered, had to rig up a file server with a persistent disk. Filestore does away with all of this and simply allows Google Cloud users to spin up storage as needed.

The promise of Filestore is that it offers high throughput, low latency and high IOPS. The service will come in two tiers: premium and standard. The premium tier will cost $0.30 per GB per month and promises throughput of 700 MB/s and 30,000 IOPS, no matter the storage capacity. Standard-tier Filestore storage will cost $0.20 per GB per month, but performance scales with capacity and doesn’t hit its peak until you store more than 10TB of data in Filestore.
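To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python comparing the two tiers at a few capacities, using only the per-GB rates quoted above. It assumes billing is a simple capacity-times-rate product; actual invoices may differ once the service leaves beta.

```python
# Rough monthly cost comparison for the two Filestore tiers, using the
# per-GB prices quoted above. Illustrative only; assumes simple
# capacity * rate billing with no discounts.

PREMIUM_PER_GB = 0.30   # $ per GB per month
STANDARD_PER_GB = 0.20  # $ per GB per month

def monthly_cost(capacity_gb: float, rate_per_gb: float) -> float:
    return capacity_gb * rate_per_gb

for tb in (1, 10, 20):
    gb = tb * 1024
    print(f"{tb:>2} TB -> premium ${monthly_cost(gb, PREMIUM_PER_GB):,.2f}/mo, "
          f"standard ${monthly_cost(gb, STANDARD_PER_GB):,.2f}/mo")
```

At 10TB, for example, the standard tier runs roughly $2,048 a month versus about $3,072 for premium, which is right around the capacity where standard-tier performance catches up.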

Google launched Filestore at an event in Los Angeles that mostly focused on the entertainment and media industry. There are plenty of enterprise applications in those verticals that need a shared file system, but the same can be said for many other industries that rely on similar applications.

The Filestore beta will launch next month. Because it’s still in beta, Google isn’t making any uptime promises right now and there is no ETA for when the service will come out of beta.

Jun 19, 2018

Google injects Hire with AI to speed up common tasks

Since Google Hire launched last year, it has been trying to make it easier for hiring managers to manage the data and tasks associated with the hiring process, while maybe tweaking LinkedIn along the way. Today the company announced some AI-infused enhancements that it says will help save time and energy spent on manual processes.

“By incorporating Google AI, Hire now reduces repetitive, time-consuming tasks, like scheduling interviews, into one-click interactions. This means hiring teams can spend less time with logistics and more time connecting with people,” Google’s Berit Hoffmann, Hire product manager, wrote in a blog post announcing the new features.

The first piece involves making it easier and faster to schedule interviews with candidates. This is a multi-step activity that involves choosing appropriate interviewers, finding a time and date that works for all parties and booking a room in which to conduct the interview. Organizing these kinds of logistics tends to eat up a lot of time.

“To streamline this process, Hire now uses AI to automatically suggest interviewers and ideal time slots, reducing interview scheduling to a few clicks,” Hoffmann wrote.

Photo: Google

Another common hiring chore is finding keywords in a resume. Hire’s AI now finds these words for a recruiter automatically by analyzing terms in a job description or search query and highlighting relevant words, including synonyms and acronyms, in a resume, saving the time spent manually searching for them.

Photo: Google
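To make the keyword feature described above a bit more concrete, here is a toy Python sketch of the general idea: expand a query term with known synonyms and acronyms, then flag any matches in the resume text. The synonym map and function are purely hypothetical illustrations; Google has not published how Hire’s matching actually works.

```python
# Toy keyword highlighter: expand a query term with synonyms/acronyms and
# flag matches in a resume. A conceptual sketch only, not Google Hire's
# actual implementation, which is not public.

import re

SYNONYMS = {  # hypothetical synonym/acronym map
    "javascript": ["js", "ecmascript"],
    "machine learning": ["ml", "deep learning"],
}

def highlight(resume: str, query: str) -> str:
    terms = [query.lower()] + SYNONYMS.get(query.lower(), [])
    pattern = re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)
    return pattern.sub(lambda m: f"[{m.group(0)}]", resume)

print(highlight("Senior JS engineer, strong ECMAScript background", "javascript"))
# -> Senior [JS] engineer, strong [ECMAScript] background
```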

Finally, another standard part of the hiring process is making phone calls, lots of phone calls. To make this easier, the latest version of Google Hire has a new click-to-call function. Simply click the phone number and it dials automatically, registering the call in a call log for easy recall or auditing.

While Microsoft has LinkedIn and Office 365, Google has G Suite and Google Hire. The strategy behind Hire is to allow hiring personnel to work in the G Suite tools they are immersed in every day and incorporate Hire functionality within those tools.

It’s not unlike CRM tools that integrate with Outlook or Gmail because that’s where salespeople spend a good deal of their time anyway. The idea is to reduce the time spent switching between tools and make the process a more integrated experience.

While none of these features individually will necessarily wow you, together they make use of Google AI to simplify common tasks and reduce some of the tedium associated with everyday hiring work.

Jun 13, 2018

Salesforce deepens data sharing partnership with Google

Last fall at Dreamforce, Salesforce announced a deepening friendship with Google. That began to take shape in January with integration between Salesforce CRM data and Google Analytics 360 and Google BigQuery. Today, the two cloud giants announced the next step, as the companies will share data between Google Analytics 360 and the Salesforce Marketing Cloud.

This particular data sharing partnership makes even more sense as the companies can share web analytics data with marketing personnel to deliver ever more customized experiences for users (or so the argument goes, right?).

That connection certainly didn’t escape Salesforce’s VP of product marketing, Bobby Jania. “Now, marketers are able to deliver meaningful consumer experiences powered by the world’s number one marketing platform and the most widely adopted web analytics suite,” Jania told TechCrunch.

Brent Leary, owner of the consulting firm CRM Essentials says the partnership is going to be meaningful for marketers. “The tighter integration is a big deal because a large portion of Marketing Cloud customers are Google Analytics/GA 360 customers, and this paves the way to more seamlessly see what activities are driving successful outcomes,” he explained.

The partnership involves four integrations that effectively allow marketers to round-trip data between the two platforms. For starters, consumer insights from both Marketing Cloud and Google Analytics 360 will be brought together into a single analytics dashboard inside Marketing Cloud. Conversely, Marketing Cloud data will be viewable inside Google Analytics 360, both for attribution analysis and to deliver more customized web experiences using Marketing Cloud information. All three of these integrations will be generally available starting today.

A fourth element of the partnership being announced today won’t be available in beta until the third quarter of this year. “For the first time ever, audiences created inside the Google Analytics 360 platform can be activated outside of Google. So in this case, I’m able to create an audience inside of Google Analytics 360 and then I’m able to activate that audience in Marketing Cloud,” Jania explained.

An audience is like a segment, so if you have a group of like-minded individuals in the Google Analytics tool, you can simply transfer it to Salesforce Marketing Cloud and send more relevant emails to that group.

This data sharing capability removes a lot of the labor involved in trying to monitor data stored in two places, but of course it also raises questions about data privacy. Jania was careful to point out that the two platforms are not sharing specific information about individual consumers, which could be in violation of the new GDPR data privacy rules that went into effect in Europe at the end of last month.

“What we’re [sharing] is either metadata or aggregated reporting results. Just to be clear, there’s no personally identifiable data that is flowing between the systems, so everything here is 100% GDPR-compliant,” Jania said.

But Leary says it might not be so simple, especially in light of recent data sharing abuses. “With Facebook having to open up about how they’re sharing consumer data with other organizations, companies like Salesforce and Google will have to be more careful than ever before about how the consumer data they make available to their corporate customers will be used by them. It’s a whole new level of scrutiny that has to be a part of the data sharing equation,” Leary said.

The announcements were made today at the Salesforce Connections conference taking place in Chicago this week.

Jun 7, 2018

Google Cloud announces the beta of single-tenant instances

One of the characteristics of cloud computing is that when you launch a virtual machine, it gets distributed wherever it makes the most sense for the cloud provider. That usually means sharing servers with other customers in what is known as a multi-tenant environment. But what about times when you want a physical server dedicated just to you?

To help meet those kinds of demands, Google announced the beta of Google Compute Engine sole-tenant nodes, which have been designed for use cases such as regulatory or compliance requirements, where you need full control of the underlying physical machine and sharing is not desirable.

“Normally, VM instances run on physical hosts that may be shared by many customers. With sole-tenant nodes, you have the host all to yourself,” Google wrote in a blog post announcing the new offering.

Diagram: Google

Google has tried to be as flexible as possible, letting customers choose exactly what configuration they want in terms of CPU and memory. Customers can also let Google choose the dedicated server that’s best at any particular moment, or manually select the server if they want that level of control. In either case, they will be assigned a dedicated machine.

If you want to play with this, there is a free tier and then various pricing tiers for a variety of computing requirements. Regardless of your choice, you will be charged on a per-second basis with a one-minute minimum charge, according to Google.
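Because per-second billing with a one-minute floor is easy to get wrong in cost estimates, here is a small sketch of the rule exactly as described above. The hourly rate is a made-up placeholder, not a real Google price; consult the published sole-tenant pricing for actual numbers.

```python
# The billing rule described above: charged per second, with a one-minute
# minimum. The rate below is a hypothetical placeholder, not a real price.

HYPOTHETICAL_RATE_PER_HOUR = 4.00  # placeholder $/hour

def billed_dollars(runtime_seconds: int, rate_per_hour: float) -> float:
    billable_seconds = max(runtime_seconds, 60)  # one-minute minimum charge
    return billable_seconds * rate_per_hour / 3600

print(f"${billed_dollars(45, HYPOTHETICAL_RATE_PER_HOUR):.3f}")   # billed as 60s -> $0.067
print(f"${billed_dollars(3600, HYPOTHETICAL_RATE_PER_HOUR):.2f}") # one full hour -> $4.00
```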

Since this feature is still in beta, it’s worth noting that it is not covered under any SLA. Microsoft and Amazon have similar offerings.

Jun 6, 2018

Four years after its release, Kubernetes has come a long way

On June 6, 2014, Kubernetes was released for the first time. At the time, nobody could have predicted that, four years later, the project would become a de facto standard for container orchestration or that the biggest tech companies in the world would be backing it. That would come later.

If you think back to June 2014, containerization was just beginning to take off thanks to Docker, which was popularizing the concept with developers, but it was still so early that there was no standard way to manage those containers.

Google had been using containers as a way to deliver applications for years and ran a tool called Borg to handle orchestration. It’s called an orchestrator because, much like the conductor of an orchestra, it decides when a container is launched and when it shuts down once it has completed its job.
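To make the conductor analogy a little more concrete, the heart of any orchestrator is a reconciliation loop: compare the state you want with the state you observe, and start or stop containers to close the gap. The sketch below is a toy illustration of that idea in Python, not actual Borg or Kubernetes code.

```python
# Toy reconciliation loop: the core idea behind container orchestration.
# Compare desired replica counts to what is actually running and act on
# the difference. Purely illustrative; not Borg or Kubernetes internals.

desired = {"web": 3, "worker": 2}   # replicas we want per application
observed = {"web": 1, "worker": 3}  # replicas currently running

def reconcile(desired: dict, observed: dict) -> None:
    for app, want in desired.items():
        have = observed.get(app, 0)
        if have < want:
            print(f"{app}: launching {want - have} container(s)")
        elif have > want:
            print(f"{app}: stopping {have - want} container(s)")
        observed[app] = want  # assume the actions succeeded

reconcile(desired, observed)
# A real orchestrator runs this loop continuously against live cluster state.
```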

At the time, two Google engineers, Craig McLuckie and Joe Beda, who would later go on to start Heptio, were looking at developing an orchestration tool like Borg for companies that might not have Google’s depth of engineering talent to make such a system work. They wanted to spread this idea of how Google develops distributed applications to other developers.

Hello world

Before that first version hit the streets, what would become Kubernetes developed out of a need for an orchestration layer that Beda and McLuckie had been considering for a long time. They were both involved in bringing Google Compute Engine, Google’s Infrastructure as a Service offering, to market, but they felt like there was something missing in the tooling that would fill in the gaps between infrastructure and platform service offerings.

“We had long thought about trying to find a way to bring a sort of a more progressive orchestrated way of running applications in production. Just based on our own experiences with Google Compute Engine, we got to see firsthand some of the challenges that the enterprise faced in moving workloads to the cloud,” McLuckie explained.

He said that they also understood some of the limitations associated with virtual machine-based workloads and they were thinking about tooling to help with all of that. “And so we came up with the idea to start a new project, which ultimately became Kubernetes.”

Let’s open source it

When Google began developing Kubernetes in March 2014, it wanted nothing less than to bring container orchestration to the masses. It was a big goal and McLuckie, Beda and teammate Brendan Burns believed the only way to get there was to open source the technology and build a community around it. As it turns out, they were spot on with that assessment, but couldn’t have been 100 percent certain at the time. Nobody could have.

Photo: Cloud Native Computing Foundation

“If you look at the history, we made the decision to open source Kubernetes and make it a community-oriented project much sooner than conventional wisdom would dictate and focus on really building a community in an open and engaged fashion. And that really paid dividends as Kubernetes has accelerated and effectively become the standard for container orchestration,” McLuckie said.

The next thing they did was to create the Cloud Native Computing Foundation (CNCF) as an umbrella organization for the project. If you think about it, this project could have gone in several directions, as current CNCF director Dan Kohn described in a recent interview.

Going cloud native

Kohn said Kubernetes was unique in a couple of ways. First of all, it was based on existing technology developed over many years at Google. “Even though Kubernetes code was new, the concepts and engineering and know-how behind it was based on 15 years at Google building Borg (And a Borg replacement called Omega that failed),” Kohn said. The other thing was that Kubernetes was designed from the beginning to be open sourced.

Photo: Swapnil Bhartiya on Flickr. Used under CC BY-SA 2.0 license

He pointed out that Google could have gone in a few directions with Kubernetes. It could have created a commercial product and sold it through Google Cloud. It could have open sourced it but kept a strong central lead, as it did with Go. It could have gone to the Linux Foundation and said it wanted to create a stand-alone Kubernetes Foundation. But it didn’t do any of these things.

McLuckie says they decided to do something entirely different and place it under the auspices of the Linux Foundation, but not as a stand-alone Kubernetes project. Instead, they wanted to create a new framework for cloud native computing itself, and the CNCF was born. “The CNCF is a really important staging ground, not just for Kubernetes, but for the technologies that needed to come together to really complete the narrative, to make Kubernetes a much more comprehensive framework,” McLuckie explained.

Getting everyone going in the same direction

Over the last few years, we have watched as Kubernetes has grown into a container orchestration standard. Last summer, in quick succession, a slew of major enterprise players joined the CNCF: AWS, Oracle, Microsoft, VMware and Pivotal all signed on, joining Red Hat, Intel, IBM, Cisco and others who were already members.

Cloud Native Computing Foundation Platinum members

Each of these players no doubt wanted to control the orchestration layer, but they saw Kubernetes gaining momentum so rapidly that they had little choice but to go along. Kohn jokes that having all these big-name players on board is like herding cats, but bringing them in has been the goal all along. He said it just happened much faster than he thought it would.

David Aronchick, who runs the open source Kubeflow Kubernetes machine learning project at Google, was working on Kubernetes in the early days, and in a recent interview he told TechCrunch he is still shocked by how quickly it has grown. “I couldn’t have predicted it would be like this. I joined in January, 2015 and took on project management for Google Kubernetes. I was stunned at the pent up demand for this kind of thing,” he said.

As it has grown, it has become readily apparent that McLuckie was right about building that cloud native framework instead of a stand-alone Kubernetes foundation. Today there are dozens of adjacent projects and the organization is thriving.

Nobody is more blown away by this than McLuckie himself, who says seeing Kubernetes hit these various milestones since its initial release has been amazing for him and his team to watch. “It’s just been a series of these wonderful kind of moments as Kubernetes has gained a head of steam, and it’s been so much fun to see the community really rally around it.”

Jun 4, 2018

The new Gmail will roll out to all users next month

Google today announced that the new version of Gmail will launch into general availability and become available to all G Suite users next month. The exact date remains up in the air, but my guess is that it’ll be sooner rather than later.

The new Gmail offers features like message snoozing, attachment previews, a sidebar for both Google apps like Calendar and third-party services like Trello, offline support, confidential messages that self-destruct after a set time, and more. It’s also the only edition of Gmail that currently allows you to try out Smart Compose, which tries to complete your sentences for you.

Here is what the rollout will look like for G Suite users. Google didn’t detail what the plan for regular users will look like, but if you’re not a G Suite user, you can already try the new Gmail today anyway, and chances are stragglers will get switched over to the new version at a similar pace as G Suite users.

Starting in July, G Suite admins will be able to immediately transition all of their users to the new Gmail, but users can still opt out for another twelve weeks. After that time is up, all G Suite users will move to the new Gmail experience.

Admins can also give users the option to try the new Gmail at their own pace or — and this is the default setting — they can just wait another four weeks and then Google will automatically give users the option to opt in.

Eight weeks after general availability, so sometime in September, all users will be migrated automatically but can still opt out for another four weeks.
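If the stacked windows are hard to follow, the arithmetic itself is simple. Here is a quick sketch using a hypothetical mid-July general-availability date, since Google has only said “next month” and not an exact day.

```python
# Rollout timeline math for the windows described above. The GA date is a
# hypothetical assumption; Google has not announced an exact day.

from datetime import date, timedelta

ga = date(2018, 7, 15)                              # assumed GA date
auto_migration = ga + timedelta(weeks=8)            # "eight weeks after GA"
final_opt_out_ends = auto_migration + timedelta(weeks=4)

print(auto_migration)       # 2018-09-09: everyone migrated automatically
print(final_opt_out_ends)   # 2018-10-07: last opt-out window expires
```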

That all sounds a bit more complicated than necessary, but the main gist here is: chances are you’ll get access to the new Gmail next month and if you hate it, you can still opt out for a bit longer. Then, if you still hate it, you are out of luck because come October, you will be using the new Gmail no matter what.

May 6, 2018

Kubernetes stands at an important inflection point

Last week at KubeCon and CloudNativeCon in Copenhagen, we saw an open source community coming together, full of vim and vigor and radiating positive energy as it recognized its growing clout in the enterprise world. Kubernetes, which came out of Google just a few years ago, has gained acceptance and popularity astonishingly rapidly — and that has raised both a sense of possibility and a boatload of questions.

At this year’s European version of the conference, the community seemed to be coming to grips with that rapid growth as large corporate organizations like Red Hat, IBM, Google, AWS and VMware all came together with developers and startups trying to figure out exactly what they had here with this new thing they found.

The project has been gaining acceptance as the de facto container orchestration tool, and as that happened, it was no longer about simply getting a project off the ground and proving that it could work in production. It now required a greater level of tooling and maturity that previously wasn’t necessary because it was simply too soon.

As this has happened, the various members who make up this growing group of users need to figure out, mostly on the fly, how to make it all work when it is no longer just a couple of developers and a laptop. There are now big boy and big girl implementations, and they require a new level of sophistication to make them work.

Against this backdrop, we saw a project that appeared to be at an inflection point. Much like a startup that realizes it actually achieved the product-market fit it had hypothesized, the Kubernetes community has to figure out how to take this to the next level — and that reality presents some serious challenges and enormous opportunities.

A community in transition

The Kubernetes project falls under the auspices of the Cloud Native Computing Foundation (or CNCF for short). Consider that at the opening keynote, CNCF director Dan Kohn was brimming with enthusiasm, proudly rattling off numbers to a packed audience, showing the enormous growth of the project.

Photo: Ron Miller

If you wanted proof of Kubernetes’ (and by extension cloud native computing’s) rapid ascension, consider that attendance at KubeCon in Copenhagen last week numbered 4,300 registered participants, triple the attendance in Berlin just last year.

The hotel and conference center were buzzing with conversation. In every corner and hallway, on every bar stool in the hotel’s open lobby bar, at breakfast in the large breakfast room, by the many coffee machines scattered throughout the venue and even out in the city, people chatted, debated and discussed Kubernetes, and the energy was palpable.

David Aronchick, who now runs the open source Kubeflow Kubernetes machine learning project at Google, was running Kubernetes in the early days (way back in 2015) and he was certainly surprised to see how big it has become in such a short time.

“I couldn’t have predicted it would be like this. I joined in January, 2015 and took on project management for Google Kubernetes. I was stunned at the pent up demand for this kind of thing,” he said.

Growing up

Yet there was great demand, and with each leap forward and each new level of maturity came a new set of problems to solve, which in turn has created opportunities for new services and startups to fill the many gaps. As Aparna Sinha, the Kubernetes group product manager at Google, said in her conference keynote, enterprise companies want a level of certainty that earlier adopters were willing to forgo when they took the plunge into the new and exciting world of containers.

Photo: Cloud Native Computing Foundation

As she pointed out, for others to be pulled along and for this to truly reach another level of adoption, it’s going to require some enterprise-level features and that includes security, a higher level of application tooling and a better overall application development experience. All these types of features are coming, whether from Google or from the myriad of service providers who have popped up around the project to make it easier to build, deliver and manage Kubernetes applications.

Sinha says that one of the reasons the project has been able to take off as quickly as it has is that its roots lie in a container orchestration tool called Borg, which the company has been using internally for years. While that evolved into what we know today as Kubernetes, it certainly required some significant repackaging to work outside of Google. Yet that early refinement at Google gave it an enormous head start over the average open source project — which could account for its meteoric rise.

“When you take something so well established and proven in a global environment like Google and put it out there, it’s not just like any open source project invented from scratch when there isn’t much known and things are being developed in real time,” she said.

For every action

One thing everyone seemed to recognize at KubeCon was that in spite of the head start and early successes, there remains much work to be done and many issues to resolve. The companies using it today mostly still fall under the early adopter moniker. This remains true even though there are some full-blown enterprise implementations, like CERN, the European physics organization, which has spun up 210 Kubernetes clusters, or JD.com, the Chinese internet shopping giant, which has 20,000 servers running Kubernetes, with its largest cluster consisting of over 5,000 servers. Still, it’s fair to say that most companies aren’t that far along yet.

Photo: Ron Miller

But the strength of an enthusiastic open source community like Kubernetes and cloud native computing in general means that there are companies, some new and some established, trying to solve these problems, and the multitude of new ones that seem to pop up with each new milestone and each solved issue.

As Abby Kearns, who runs another open source project, the Cloud Foundry Foundation, put it in her keynote, part of the beauty of open source is all those eyeballs on it to solve the scads of problems that are inevitably going to pop up as projects expand beyond their initial scope.

“Open source gives us the opportunity to do things we could never do on our own. Diversity of thought and participation is what makes open source so powerful and so innovative,” she said.

It’s worth noting that several speakers pointed out that diversity of thought also required actual diversity of membership to truly expand ideas to other ways of thinking and other life experiences. That too remains a challenge, as it does in technology and society at large.

In spite of this, Kubernetes has grown and developed rapidly, while benefiting from a community that so enthusiastically supports it. The challenge ahead is to take that early enthusiasm and translate it into more actual business use cases. That is the inflection point where the project finds itself, and the question is whether it will be able to take that next step toward broader adoption or reach a peak and fall back.

May 4, 2018

Google Kubeflow, machine learning for Kubernetes, begins to take shape

Ever since Google created Kubernetes as an open source container orchestration tool, the company has seen it blossom in ways it might never have imagined. As the project gains in popularity, we are seeing many adjunct programs develop. Today, Google announced the release of version 0.1 of the Kubeflow open source tool, which is designed to bring machine learning to Kubernetes containers.

While Google has long since moved Kubernetes into the Cloud Native Computing Foundation, it continues to be actively involved, and Kubeflow is one manifestation of that. The project was first announced at the end of last year at KubeCon in Austin, but it is beginning to gain some momentum.

David Aronchick, who runs Kubeflow for Google, led the Kubernetes team for 2.5 years before moving to Kubeflow. He says the idea behind the project is to enable data scientists to take advantage of running machine learning jobs on Kubernetes clusters. Kubeflow lets machine learning teams take existing jobs and simply attach them to a cluster without a lot of adapting.

With today’s announcement, the project begins to move ahead, and according to a blog post announcing the milestone, it brings a new level of stability while adding a slew of new features that the community has been requesting. These include JupyterHub for collaborative and interactive training on machine learning jobs, and TensorFlow training and hosting support, among other elements.
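For a sense of what attaching a TensorFlow job to a cluster looks like in practice, here is a hedged sketch that submits a TFJob custom resource using the official kubernetes Python client. The resource layout follows the early v1alpha1 API as best understood, and the container image is hypothetical; field names may differ between Kubeflow releases, so treat the spec as an assumption rather than a reference.

```python
# Hedged sketch: submitting a TFJob custom resource to a Kubeflow-enabled
# cluster with the official `kubernetes` Python client. The v1alpha1 spec
# layout below is an assumption based on early Kubeflow and may not match
# your release exactly; the container image is hypothetical.

from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

tfjob = {
    "apiVersion": "kubeflow.org/v1alpha1",
    "kind": "TFJob",
    "metadata": {"name": "example-training-job"},
    "spec": {
        "replicaSpecs": [{
            "replicas": 1,
            "tfReplicaType": "MASTER",  # assumed v1alpha1 field name
            "template": {"spec": {
                "containers": [{
                    "name": "tensorflow",
                    "image": "gcr.io/my-project/my-model:latest",  # hypothetical
                }],
                "restartPolicy": "OnFailure",
            }},
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org", version="v1alpha1",
    namespace="default", plural="tfjobs", body=tfjob,
)
```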

Aronchick emphasizes that as an open source project, you can bring whatever tools you like, and you are not limited to TensorFlow, despite the fact that this early version release does include support for Google’s machine learning tools. You can expect additional tool support as the project develops further.

In just over four months since the original announcement, the community has grown quickly, with over 70 contributors and over 20 contributing organizations, along with over 700 commits in 15 repositories. You can expect the next version, 0.2, sometime this summer.
