May
09
2018
--

StubHub bets on Pivotal and Google Cloud as it looks to go beyond tickets

StubHub is best known as a destination for buying and selling event tickets. The company operates in 48 countries and sells a ticket every 1.3 seconds. But it wants to go beyond that and provide its users with a far more comprehensive set of services around entertainment. To do that, it’s working on changing its development culture and infrastructure to become more nimble. As the company announced today, it’s betting on Google Cloud and Pivotal Cloud Foundry as the infrastructure for this move.

StubHub CTO Matt Swann told me that the idea behind going with Pivotal — and the twelve-factor app model that comes with it — is to help the company accelerate its journey and give it the option to run new apps in both on-premises and cloud environments.

“We’re coming from a place where we are largely on premise,” said Swann. “Our aim is to become increasingly agile — where we are going to focus on building balanced and focused teams with a global mindset.” To do that, Swann said, the team decided to go with the platforms best suited to enable that approach and to “remove the muck that comes with how developers work today.”

As for Google, Swann noted that this was an easy decision because the team wanted to leverage that company’s infrastructure and machine learning tools like Cloud ML. “We are aiming to build some of the most powerful AI systems focused on this space so we can be ahead of our customers,” he said. Given the number of users, StubHub sits on top of a lot of data — and that’s exactly what you need when you want to build AI-powered services. What exactly these services will look like remains to be seen; Swann has only been on the job for six months. We can probably expect to see more from the company in this space in the coming months.

“Digital transformation is on the mind of every technology leader, especially in industries requiring the capability to rapidly respond to changing consumer expectations,” said Bill Cook, President of Pivotal. “To adapt, enterprises need to bring together the best of modern developer environments with software-driven customer experiences designed to drive richer engagement.”

StubHub has already spun up its new development environment and plans to launch all new apps on this new infrastructure. Swann acknowledged that the company won’t be switching all of its workloads over to the new setup soon. But he does expect that the company will hit a tipping point in the next year or so.

He also noted that this overall transformation means that the company will look beyond its own walls and toward working with more third-party APIs, especially with regard to transportation services and merchants that offer services around events.

Throughout our conversation, Swann also stressed that this isn’t a technology change for the sake of it.

Apr
25
2018
--

Google Cloud expands its bet on managed database services

Google announced a number of updates to its cloud-based database services today. For the most part, we’re not talking about any groundbreaking new products here, but all of these updates address specific pain points that enterprises suffer when they move to the cloud.

As Google Director of Product Management Dominic Preuss told me ahead of today’s announcements, Google has long seen itself as a thought leader in the database space. For the longest time, though, that thought leadership was all about things like the Bigtable paper and didn’t really manifest itself in the form of products. Projects like the globally distributed Cloud Spanner database are now allowing Google Cloud to put its stamp on this market.

Preuss also noted that many of Google’s enterprise users often start with lifting and shifting their existing workloads to the cloud. Once they have done that, though, they are also looking to launch new applications in the cloud — and at that point, they typically want managed services that free them from having to do the grunt work of managing their own infrastructure.

Today’s announcements mostly fit into this mold of offering enterprises the kind of managed database services they are asking for.

The first of these is the beta launch of Cloud Memorystore for Redis, a fully managed in-memory data store for users who need in-memory caching for capacity buffering and similar use cases.
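The main use case here is the cache-aside pattern: check the in-memory store first and only hit the slower backing database on a miss. A minimal sketch of that pattern follows; the `FakeRedis` class is an illustrative stand-in for a real Redis client (in production you would point a client such as redis-py at the Memorystore endpoint, whose `get`/`setex` calls this toy class mirrors), and all names here are assumptions, not part of the Memorystore API.

```python
import time

# Illustrative stand-in for a Redis client; mirrors the get/setex
# calls a real client would make against a Memorystore endpoint.
class FakeRedis:
    def __init__(self):
        self._store = {}

    def get(self, key):
        value, expires_at = self._store.get(key, (None, 0))
        if value is not None and time.monotonic() < expires_at:
            return value
        return None

    def setex(self, key, ttl_seconds, value):
        # Store a value with a time-to-live, like Redis SETEX.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

cache = FakeRedis()

def expensive_lookup(event_id):
    # Placeholder for a slow database query or API call.
    return f"details-for-{event_id}"

def get_event(event_id):
    # Cache-aside: try the cache first, fall back to the source,
    # then populate the cache with a TTL for subsequent requests.
    cached = cache.get(event_id)
    if cached is not None:
        return cached
    value = expensive_lookup(event_id)
    cache.setex(event_id, 60, value)
    return value
```

On a second call with the same ID, `get_event` is served entirely from the cache, which is the capacity-buffering behavior the managed service is meant to provide.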

Google is also launching a new feature for Cloud Bigtable, the company’s NoSQL database service for big data workloads. Bigtable now features regional replication (or at least it will, once this has rolled out to all users within the next week or so). The general idea here is to give enterprises that previously used Cassandra for their on-premises workloads an alternative in the Google Cloud portfolio, and this cross-zone replication increases the availability and durability of the data they store in the service.

With this update, Google is also making Cloud SQL for PostgreSQL generally available with a 99.95 percent SLA, and it’s adding commit timestamps to Cloud Spanner.

What’s next for Google’s database portfolio? Unsurprisingly, Preuss wouldn’t say, but he did note that the company wants to help enterprises move as many of their workloads to the cloud as they can — and for the most part, that means managed services.

Apr
05
2018
--

Google Cloud gives developers more insights into their networks

Google Cloud is launching a new feature today that will give its users a new way to monitor and optimize how their data flows between their servers in the Google Cloud and other Google Services, on-premises deployments and virtually any other internet endpoint. As the name implies, VPC Flow Logs are meant for businesses that already use Google’s Virtual Private Cloud features to isolate their resources from other users.

VPC Flow Logs monitors and logs all the network flows (both UDP and TCP) that are sent from and received by the virtual machines inside a VPC, including traffic between Google Cloud regions. All of that data can be exported to Stackdriver Logging or BigQuery, if you want to keep it in the Google Cloud, or you can use Cloud Pub/Sub to export it to other real-time analytics or security platforms. The data updates every five seconds and Google promises that using this service has no impact on the performance of your deployed applications.
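As a toy illustration of the kind of per-flow roll-up those exported records enable (the sort of query you might run in BigQuery over the exported logs), the sketch below sums bytes per source/destination pair. Note that the field names here are simplified assumptions for illustration, not the actual Flow Logs record schema.

```python
from collections import defaultdict

# Toy flow-log records; real VPC Flow Logs entries carry many more
# fields, and these key names are simplified for illustration.
records = [
    {"src": "10.0.0.2", "dest": "10.0.1.5", "bytes": 1200},
    {"src": "10.0.0.2", "dest": "10.0.1.5", "bytes": 800},
    {"src": "10.0.0.3", "dest": "10.0.1.5", "bytes": 450},
]

def bytes_per_flow(records):
    # Aggregate traffic volume per (source, destination) pair --
    # the basic building block of usage and cost analysis.
    totals = defaultdict(int)
    for r in records:
        totals[(r["src"], r["dest"])] += r["bytes"]
    return dict(totals)
```

Running `bytes_per_flow(records)` immediately shows which pairs dominate traffic, which is the kind of visibility the announcement is about.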

As the company notes in today’s announcement, this will allow network operators to get far more insight into the details of how the Google network performs and to troubleshoot issues if they arise. In addition, it will allow them to optimize their network usage and costs by giving them more information about their global traffic.

All of this data is also quite useful for performing forensics when it looks like somebody may have gotten into your network, too. If that’s your main use case, though, you probably want to export your data to a specialized security information and event management (SIEM) platform from vendors like Splunk or ArcSight.

Jan
04
2018
--

Google Cloud launches preemptible GPUs with a 50% discount

Google Cloud today announced the launch of preemptible GPUs. Like Google’s preemptible VMs (and AWS’s comparable spot instances), these GPUs come at a significant discount — in this case, 50 percent. But in return, Google may shut them down at any point if it needs these resources. All you get is a 30-second warning. You also can only use any given preemptible GPU for up to… Read More
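Workloads on preemptible instances typically poll the instance metadata server, which exposes a documented `preempted` endpoint, so they can checkpoint within that 30-second window. A minimal sketch, assuming the documented endpoint behavior (the helper and function names are my own, not a Google client library):

```python
import urllib.request

# Documented GCE metadata endpoint for preemption status; it
# returns the literal text TRUE or FALSE.
METADATA_URL = ("http://metadata.google.internal/computeMetadata/"
                "v1/instance/preempted")

def parse_preempted(body):
    # Interpret the metadata server's response body.
    return body.strip() == "TRUE"

def check_preempted():
    # Queries the metadata server; requires the Metadata-Flavor
    # header, per the metadata API. Only works on a GCE instance.
    req = urllib.request.Request(
        METADATA_URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return parse_preempted(resp.read().decode())
```

A batch job might call `check_preempted()` between work units and flush a checkpoint to durable storage as soon as it returns true.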

Nov
30
2017
--

Google Cloud brings in former Intel exec Diane Bryant as COO

There are now two Dianes running the show at Google Cloud. The company announced that Diane Bryant has been hired as the COO of the division. She joins Diane Greene, who came on board as Senior VP of Google Cloud in November 2015. Greene appeared to be excited about the prospect of her joining the team. “I can’t think of a person with more relevant experience and talents. She is… Read More

Oct
03
2017
--

Webinar October 4, 2017: Databases in the Hosted Cloud


Join Percona’s Chief Evangelist, Colin Charles, as he presents Databases in the Hosted Cloud on Wednesday, October 4, 2017, at 7:00 am PDT / 10:00 am EDT (UTC-7).


Today you can use hosted MySQL, MariaDB, Percona Server for MySQL and PostgreSQL from several cloud providers as a database as a service (DBaaS). Learn the differences, the access methods and the level of control you have for the various hosted cloud database offerings:

  • Amazon RDS including Aurora
  • Google Cloud SQL
  • Rackspace OpenStack DBaaS
  • Oracle Cloud’s MySQL Service

The administration tools and ideologies behind each are completely different, and you are in a “locked-down” environment. Some considerations include:

  • Different backup strategies
  • Planning for multiple data centers for availability
  • Where do you host your application?
  • How do you get the most performance out of the solution?
  • What does this all cost?
  • Monitoring

Growth topics include:

  • How do you move from one DBaaS to another?
  • How do you move from a DBaaS to your own hosted platform?

Register for the webinar here.

Colin Charles, Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team for MariaDB Server in 2009, has worked on MySQL since 2005 and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within many open source communities and has spoken on the conference circuit.


Sep
28
2017
--

Google Compute Engine now lets you play nesting dolls with your VMs

Here is a cloud computing feature that may seem a bit odd at first but that does actually have its uses. Google’s Compute Engine today launched the beta of a new feature called “nested virtualization.” As the name implies, this essentially allows you to run VMs inside of VMs. But why would you want to do that? “Nested virtualization makes it easier for enterprise users… Read More

Sep
26
2017
--

Google Cloud acquires cloud identity management company Bitium

Google Cloud announced today that it has acquired Bitium, a company focused on offering enterprise-grade identity management and access tools, such as single sign-on, for cloud-based applications. This will basically help Google better manage enterprise cloud customer implementation across an organization, including doing things like setting security levels and access policies for… Read More

Jul
20
2017
--

Google Cloud gets a new networking algorithm that boosts internet throughput

Google today announced that TCP BBR, a new congestion-control algorithm, is now available to its Cloud Platform users. The general idea here is to improve on the existing congestion-control algorithms for internet traffic, which have been around since the 1980s and which typically only take packet loss into account (when networking buffers fill up, routers will discard any new packets). Read More
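Instead of waiting for loss, BBR builds a model of the path from two measured quantities, the bottleneck bandwidth and the minimum round-trip time, and paces sending around their product (the bandwidth-delay product, i.e. how much data "fits" in the pipe). A back-of-envelope sketch of that arithmetic, with illustrative numbers:

```python
def bandwidth_delay_product(bottleneck_bps, min_rtt_s):
    # BBR continuously estimates the bottleneck bandwidth and the
    # minimum RTT and targets their product as the amount of data
    # to keep in flight, rather than filling buffers until packets
    # are dropped as loss-based algorithms do.
    return bottleneck_bps * min_rtt_s

# Example: a 100 Mbit/s bottleneck with a 40 ms round-trip time.
bdp_bits = bandwidth_delay_product(100e6, 0.040)  # 4,000,000 bits
bdp_bytes = bdp_bits / 8                          # 500,000 bytes
```

Keeping roughly one bandwidth-delay product in flight achieves full throughput without the queuing delay that loss-based schemes induce by filling router buffers.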

Mar
19
2017
--

Galvanize will teach students how to use IBM Watson APIs with new machine learning course

As part of IBM’s annual InterConnect conference in Las Vegas, the company is announcing a new machine learning course in partnership with workspace and education provider Galvanize to familiarize students with IBM’s suite of Watson APIs. These APIs simplify the process of building tools that rely on language, speech and vision analysis. Going by the admittedly clunky name IBM… Read More
