May 06, 2018
--

Kubernetes stands at an important inflection point

Last week at KubeCon and CloudNativeCon in Copenhagen, we saw an open source community coming together, full of vim and vigor and radiating positive energy as it recognized its growing clout in the enterprise world. Kubernetes, which came out of Google just a few years ago, has gained acceptance and popularity astonishingly rapidly — and that has raised both a sense of possibility and a boatload of questions.

At this year’s European version of the conference, the community seemed to be coming to grips with that rapid growth as large corporate organizations like Red Hat, IBM, Google, AWS and VMware all came together with developers and startups trying to figure out exactly what they had here with this new thing they found.

The project has been gaining acceptance as the de facto container orchestration tool, and as that has happened, the work is no longer about simply getting a project off the ground and proving it could work in production. It now requires a greater level of tooling and maturity that previously wasn’t necessary because it was simply too soon.

As this has happened, the various members who make up this growing group of users have had to figure out, mostly on the fly, how to make it all work when it is no longer just a couple of developers and a laptop. There are now full-scale production implementations, and they require a new level of sophistication to make them work.

Against this backdrop, we saw a project that appeared to be at an inflection point. Much like a startup that realizes it actually achieved the product-market fit it had hypothesized, the Kubernetes community has to figure out how to take this to the next level — and that reality presents some serious challenges and enormous opportunities.

A community in transition

The Kubernetes project falls under the auspices of the Cloud Native Computing Foundation (or CNCF for short). Consider that at the opening keynote, CNCF director Dan Kohn was brimming with enthusiasm, proudly rattling off numbers to a packed audience, showing the enormous growth of the project.

Photo: Ron Miller

If you wanted proof of Kubernetes’ (and by extension cloud native computing’s) rapid ascension, consider that attendance at KubeCon in Copenhagen last week numbered 4,300 registered participants, triple the attendance in Berlin just a year earlier.

The hotel and conference center were buzzing with conversation. In every corner and hallway, on every bar stool in the hotel’s open lobby bar, at breakfast in the large breakfast room, by the many coffee machines scattered throughout the venue, and even throughout the city, people chatted, debated and discussed Kubernetes. The energy was palpable.

David Aronchick, who now runs the open source Kubeflow Kubernetes machine learning project at Google, was running Kubernetes in the early days (way back in 2015) and he was certainly surprised to see how big it has become in such a short time.

“I couldn’t have predicted it would be like this. I joined in January 2015 and took on project management for Google Kubernetes. I was stunned at the pent-up demand for this kind of thing,” he said.

Growing up

Yet there was great demand, and with each leap forward and each new level of maturity came a new set of problems to solve, which in turn has created opportunities for new services and startups to fill the many gaps. As Aparna Sinha, the Kubernetes group product manager at Google, said in her conference keynote, enterprise companies want a level of certainty that earlier adopters were willing to forgo when they took the plunge into the new and exciting world of containers.

Photo: Cloud Native Computing Foundation

As she pointed out, for others to be pulled along and for this to truly reach another level of adoption, it’s going to require some enterprise-level features, including security, a higher level of application tooling and a better overall application development experience. All these types of features are coming, whether from Google or from the myriad service providers who have popped up around the project to make it easier to build, deliver and manage Kubernetes applications.

Sinha says that one of the reasons the project has been able to take off as quickly as it has is that its roots lie in Borg, a container orchestration tool the company had been using internally for years. While Borg evolved into what we know today as Kubernetes, it required significant repackaging to work outside of Google. Yet that early refinement inside Google gave it an enormous head start over the average open source project — which could account for its meteoric rise.

“When you take something so well established and proven in a global environment like Google and put it out there, it’s not just like any open source project invented from scratch when there isn’t much known and things are being developed in real time,” she said.

For every action

One thing everyone seemed to recognize at KubeCon was that in spite of the head start and early successes, there remains much work to be done and many issues to resolve. The companies using it today mostly still fall under the early adopter moniker, even though there are some full-blown enterprise implementations: CERN, the European physics organization, has spun up 210 Kubernetes clusters, and JD.com, the Chinese internet shopping giant, runs Kubernetes on 20,000 servers, with its largest cluster consisting of more than 5,000. Still, it’s fair to say that most companies aren’t that far along yet.

Photo: Ron Miller

But the strength of an enthusiastic open source community like the one around Kubernetes and cloud native computing in general means that there are companies, some new and some established, trying to solve these problems, along with the multitude of new ones that seem to pop up with each new milestone and each solved issue.

As Abby Kearns, who runs another open source project, the Cloud Foundry Foundation, put it in her keynote, part of the beauty of open source is all those eyeballs on it to solve the scads of problems that are inevitably going to pop up as projects expand beyond their initial scope.

“Open source gives us the opportunity to do things we could never do on our own. Diversity of thought and participation is what makes open source so powerful and so innovative,” she said.

It’s worth noting that several speakers pointed out that diversity of thought also required actual diversity of membership to truly expand ideas to other ways of thinking and other life experiences. That too remains a challenge, as it does in technology and society at large.

In spite of this, Kubernetes has grown and developed rapidly, benefiting from a community that enthusiastically supports it. The challenge ahead is to take that early enthusiasm and translate it into more actual business use cases. That is the inflection point where the project finds itself, and the question is whether it will be able to take the next step toward broader adoption or reach a peak and fall back.

Apr 30, 2018
--

Red Hat’s CoreOS launches a new toolkit for managing Kubernetes applications

CoreOS, the Linux distribution and container management startup Red Hat acquired for $250 million earlier this year, today announced the Operator Framework, a new open source toolkit for managing Kubernetes applications.

CoreOS first talked about operators in 2016. The general idea here is to encode the best practices for deploying and managing container-based applications as code. “The way we like to think of this is that the operators are basically a picture of the best employee you have,” Red Hat OpenShift product manager Rob Szumski told me. Ideally, the Operator Framework frees up the operations team from doing all the grunt work of managing applications and allows them to focus on higher-level tasks. And at the same time, it also removes the error-prone humans from the process since the operator will always follow the company rulebook.

“To make the most of Kubernetes, you need a set of cohesive APIs to extend in order to service and manage your applications that run on Kubernetes,” CoreOS CTO Brandon Philips explains in today’s announcement. “We consider Operators to be the runtime that manages this type of application on Kubernetes.”

As Szumski told me, the CoreOS team developed many of these best practices in building and managing its own Tectonic container platform (and from the community that uses it). Once written, the operators watch over the Kubernetes cluster and can handle upgrades, for example, and when things go awry, they can react to failures within milliseconds.
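The control loop at the heart of an operator can be sketched in just a few lines. The following is an illustrative sketch of the reconcile pattern only; the function and field names are invented for this example and are not the Operator Framework’s actual API:

```python
# Illustrative sketch of the control-loop pattern an operator follows:
# compare the desired state (the "rulebook") with the observed state of
# the cluster and emit corrective actions. A real operator would use the
# Operator SDK and the Kubernetes API instead of plain dicts.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    # Scale up or down to match the desired replica count.
    if observed.get("replicas") != desired["replicas"]:
        actions.append(("scale", desired["replicas"]))
    # Trigger a rolling upgrade when the running version lags behind.
    if observed.get("version") != desired["version"]:
        actions.append(("upgrade", desired["version"]))
    return actions

desired = {"replicas": 3, "version": "1.2.0"}
observed = {"replicas": 2, "version": "1.1.0"}
print(reconcile(desired, observed))  # → [('scale', 3), ('upgrade', '1.2.0')]
```

In practice this loop runs continuously, which is what lets an operator react to failures as soon as the observed state drifts from the desired one.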

The overall Operator Framework consists of three pieces: an SDK for building, testing and packaging the actual operator; the Operator Lifecycle Manager for deploying operators to a Kubernetes cluster and managing them; and the Operator Metering tool for metering Kubernetes usage, for enterprises that need to do chargebacks or want to charge their customers based on usage.

The metering tool doesn’t quite seem to fit into the overall goal here, but as Szumski told me, it’s something a lot of businesses have been looking for and CoreOS actually argues that this is a first for Kubernetes.

Today’s CoreOS/Red Hat announcement only marks the start of a week that will likely see numerous other Kubernetes-related announcements. That’s because the Cloud Native Computing Foundation is hosting its KubeCon developer conference in the next few days, and virtually every company in the container ecosystem will attend the event with some kind of announcement.

Apr 23, 2018
--

Heptio launches an open-source load balancer for Kubernetes and OpenStack

Heptio is one of the more interesting companies in the container ecosystem. In part, that’s due to the simple fact that it was founded by Craig McLuckie and Joe Beda, two of the three engineers behind the original Kubernetes project, but also because of the technology it’s developing and the large amount of funding it has raised to date.

As the company announced today, it saw its revenue grow 140 percent from the last quarter of 2017 to the first quarter of 2018. In addition, Heptio says its headcount quadrupled since the beginning of 2017. Without any actual numbers, that kind of data doesn’t mean all that much. It’s easy to achieve high-growth numbers if you’re starting out from zero, after all. But it looks like things are going well at the company and that the team is finding its place in the fast-growing Kubernetes ecosystem.

In addition to announcing these numbers, the team also today launched a new open-source project that will join the company’s existing stable of tools, like the cluster-recovery tool Ark and the Kubernetes cluster-monitoring tool Sonobuoy.

This new tool, Heptio Gimbal, has a very specific use case that is probably only of interest to a relatively small number of users — but for them, it’ll be a lifeline. Gimbal, which Heptio developed together with Yahoo Japan subsidiary Actapio, helps enterprises route traffic into both Kubernetes clusters and OpenStack deployments. Many enterprises now run these technologies in parallel, and while some are now moving beyond OpenStack and toward a more Kubernetes-centric architecture, they aren’t likely to do away with their OpenStack investments anytime soon.

“We approached Heptio to help us modernize our infrastructure with Kubernetes without ripping out legacy investments in OpenStack and other back-end systems,” said Norifumi Matsuya, CEO and president at Actapio. “Application delivery at scale is key to our business. We needed faster service discovery and canary deployment capability that provides instant rollback and performance measurement. Gimbal enables our developers to address these challenges, which at the macro-level helps them increase their productivity and optimize system performance.”

Gimbal uses many of Heptio’s existing open-source tools, as well as the Envoy proxy, which is part of the Cloud Native Computing Foundation’s stable of cloud-native projects. For now, Gimbal only supports one specific OpenStack release (the “Mitaka” release from 2016), but the team is looking at adding support for VMware and EC2 in the future.
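Traffic splitting of this kind is typically expressed as Envoy route configuration. A minimal hand-written sketch of the idea follows; the host names, prefixes and cluster names are invented for illustration and are not taken from Gimbal itself:

```yaml
# Illustrative Envoy-style route config: requests under /legacy go to an
# OpenStack-hosted backend, everything else to a Kubernetes service.
# All names below are invented for this example.
route_config:
  virtual_hosts:
    - name: example-app
      domains: ["*"]
      routes:
        - match: { prefix: "/legacy" }
          route: { cluster: openstack-backend }
        - match: { prefix: "/" }
          route: { cluster: kubernetes-backend }
```

Because the proxy sits in front of both environments, shifting a route from the OpenStack cluster to the Kubernetes one is a config change rather than an application change, which is what enables the canary deployments and instant rollback Actapio describes.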

Apr 20, 2018
--

The Final Countdown: Are You Ready for Percona Live 2018?

It’s hard to believe Percona Live 2018 starts on Monday! We’re looking forward to seeing everyone in Santa Clara next week! Here are some quick highlights to remember:

  • In addition to all the amazing sessions and keynotes we’ve announced, we’ll be hosting the MySQL Community Awards and the Lightning Talks on Monday during the Opening Reception.
  • We’ve also got a great lineup of demos in the exhibit hall all day Tuesday and Wednesday – be sure to stop by and learn more about open source database products and tools.
  • On Monday, we have a special China Track now available from Alibaba Cloud, PingCAP and Shannon Systems. We’ve just put a $20.00 ticket on sale for that track, and if you have already purchased any of our other tickets, you are also welcome to attend those four sessions.
  • Don’t forget to make your reservation at the Community Dinner. It’s a great opportunity to socialize with everyone and Pythian is always a wonderful host!

Thanks to everyone who is sponsoring, presenting and attending! The community is what makes this event successful and so much fun to be a part of!

The post The Final Countdown: Are You Ready for Percona Live 2018? appeared first on Percona Database Performance Blog.

Apr 20, 2018
--

Kubernetes and Cloud Foundry grow closer

Containers are eating the software world — and Kubernetes is the king of containers. So if you are working on any major software project, especially in the enterprise, you will run into it sooner or later. Cloud Foundry, which hosted its semi-annual developer conference in Boston this week, is an interesting example of this.

Outside of the world of enterprise developers, Cloud Foundry remains a bit of an unknown entity, despite having users in at least half of the Fortune 500 companies (though in the startup world, it has almost no traction). If you are unfamiliar with Cloud Foundry, you can think of it as somewhat similar to Heroku, but as an open-source project with a large commercial ecosystem and the ability to run it at scale on any cloud or on-premises installation. Developers write their code (following the twelve-factor methodology), define what it needs to run and Cloud Foundry handles all of the underlying infrastructure and — if necessary — scaling. Ideally, that frees up the developer from having to think about where their applications will run and lets them work more efficiently.
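The twelve-factor methodology mentioned above includes, among other things, keeping deploy-specific configuration in the environment rather than in the code, which is what lets a platform like Cloud Foundry inject endpoints and credentials at deploy time. A minimal sketch of that one factor; the variable names and defaults here are invented for illustration:

```python
# Twelve-factor config: read deploy-specific settings from the environment
# so the platform, not the code, decides where the app runs and what it
# connects to. Names and defaults are illustrative only.
import os

def load_config(env=os.environ):
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "port": int(env.get("PORT", "8080")),
    }

config = load_config({"PORT": "9000"})
print(config["port"])  # → 9000
```

Because the code never hard-codes an address or port, the same artifact can run unchanged on a laptop, on-premises, or on any cloud — exactly the portability Cloud Foundry is selling.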

To enable all of this, the Cloud Foundry Foundation made a very early bet on containers, even before Docker was a thing. Since Kubernetes wasn’t around at the time, the various companies involved in Cloud Foundry came together to build their own container orchestration system, which still underpins much of the service today. As it took off, though, the pressure to bring support for Kubernetes grew inside of the Cloud Foundry ecosystem. Last year, the Foundation announced its first major move in this direction by launching its Kubernetes-based Container Runtime for managing containers, which sits next to the existing Application Runtime. With this, developers can use Cloud Foundry to run and manage their new (and existing) monolithic apps and run them in parallel with the new services they develop.

But remember how Cloud Foundry also still uses its own container service for the Application Runtime? There is really no reason to do that now that Kubernetes (and the various other projects in its ecosystem) has become the default for handling containers. It’s maybe no surprise, then, that there is now a Cloud Foundry project that aims to rip out the old container management systems and replace them with Kubernetes. The container management piece isn’t what differentiates Cloud Foundry, after all. Instead, it’s the developer experience — and at the end of the day, the whole point of Cloud Foundry is that developers shouldn’t have to care about the internal plumbing of the infrastructure.

There is another aspect to how the Cloud Foundry ecosystem is embracing Kubernetes, too. Since Cloud Foundry is also just software, there’s nothing stopping you from running it on top of Kubernetes, too. And with that, it’s no surprise that some of the largest Cloud Foundry vendors, including SUSE and IBM, are doing exactly that.

The SUSE Cloud Application Platform, which is a certified Cloud Foundry distribution, can run on any public cloud Kubernetes infrastructure, including the Microsoft Azure Container Service. As the SUSE team told me, that means it’s not just easier to deploy, but also far less resource-intensive to run.

Similarly, IBM is now offering Cloud Foundry on top of Kubernetes for its customers, though it’s only calling this an experimental product for now. IBM’s GM of Cloud Developer Services Don Boulia stressed that IBM’s customers were mostly looking for ways to run their workloads in an isolated environment that isn’t shared with other IBM customers.

Boulia also stressed that for most customers, it’s not about Kubernetes versus Cloud Foundry. For most of his customers, using Kubernetes by itself is very much about moving their existing applications to the cloud. And for new applications, those customers are then opting to run Cloud Foundry.

That’s something the SUSE team also stressed. One pattern SUSE has seen is that potential customers come to it with the idea of setting up a container environment and then, over the course of the conversation, decide to implement Cloud Foundry as well.

Indeed, the message of this week’s event was very much that Kubernetes and Cloud Foundry are complementary technologies. That’s something Chen Goldberg, Google’s Director of Engineering for Container Engine and Kubernetes, also stressed during a panel discussion at the event.

Both the Cloud Foundry Foundation and the Cloud Native Computing Foundation (CNCF), the home of Kubernetes, are under the umbrella of the Linux Foundation. They take somewhat different approaches to their communities, with Cloud Foundry stressing enterprise users far more than the CNCF. There are probably some politics at play here, but for the most part, the two organizations seem friendly enough — and they do share a number of members. “We are part of CNCF and part of the Cloud Foundry Foundation,” Pivotal CEO Rob Mee told our own Ron Miller. “Those communities are increasingly sharing tech back and forth and evolving together. Not entirely independent and not competitive either. Lot of complexity and subtlety. CNCF and Cloud Foundry are part of a larger ecosystem with complementary and converging tech.”

We’ll likely see more of this technology sharing — and maybe collaboration — between the CNCF and Cloud Foundry going forward. The CNCF is, after all, the home of a number of very interesting projects for building cloud-native applications that do have their fair share of use cases in Cloud Foundry, too.

Apr 12, 2018
--

Percona Live 2018 Featured Talk: Containerizing Databases at New Relic (What We Learned) with Joshua Galbraith and Bryant Vinisky


Welcome to another interview blog for the rapidly approaching Percona Live 2018. Each post in this series highlights a Percona Live 2018 featured talk at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post highlights Joshua Galbraith, Senior Software Engineer, and Bryant Vinisky, Site Reliability Engineer at New Relic. Their session talk is titled Containerizing Databases at New Relic: What We Learned. There are many trade-offs when containerizing databases, and many open source options, such as Kubernetes and Apache Mesos, for orchestrating database containers. In our conversation, we discussed what containers can bring to a database environment:

Percona: Who are you, and how did you get into databases? What was your path to your current responsibilities?

Joshua: My name is Joshua Galbraith, and I’m a Senior Software Engineer and Technical Product Manager on the Database Engineering team at New Relic. I’ve been working with open-source databases for the past ten years. I started my software engineering career by writing scripts to load several years of high-resolution geophysical sensor data from flat files into a Postgres database. I’ve been writing software that interacts with databases and tools to make operating them easier ever since. I’ve run MySQL, MongoDB, ElasticSearch, Cassandra and Redis in a variety of production environments. A little over three years ago, I co-founded a DBaaS startup and built a container-based platform for Apache Cassandra. I’ve been containerizing databases ever since — most recently by working on a project called Megabase at New Relic.

Bryant: Hello, my name is Bryant Vinisky. I currently work on the Database Engineering team at New Relic as a Senior Site Reliability Engineer. My professional experience with databases and the open source ecosystem started over eight years ago when I joined the engineering team at NWEA and helped roll out a SaaS platform for delivering adaptive assessments to students over the web. That platform was backed by a number of different database and related technologies — namely PostgreSQL, MongoDB and Redis.

Though in the beginning, my role didn’t formally involve databases, they were clearly a very important part of the greater system. Armed with an unquenchable curiosity, I was never shy about crossing boundaries into DB land. On the development side, much of the tooling and side projects I worked on over the years frequently touched databases for a storage backend. As the assessment platform scaled out to support hundreds of thousands of concurrent users, my role shifted to a reliability focus, often involving pre-release load testing of the major system components as well as doing follow up for problems that occurred on the production system. Much of the work involved dealing with the databases as a scaling bottleneck.

Containers came into the picture for me over two years ago when I moved to New Relic, where they had been using containers for stateless services for years and had largely skipped over VMs. It wasn’t long before we started exploring containers as a solution for stateful database services with our Megabase project, an area where I’ve spent a lot of my time since.

Your tutorial is titled “Containerizing Databases at New Relic: What We Learned”. How did you decide on containers as your database solution?

Joshua/Bryant: At New Relic, we had already built a Container Fabric for our stateless services. We knew we needed to provide databases to our internal teams in a way that was fast, cost-efficient and repeatable. Using containers would get us most of the way towards those goals, but we didn’t have a proven pattern that we could follow to reach them. We heard about other companies succeeding in building similar solutions, and we knew that we were moving away from virtual machines. Containers seemed to fit the bill.

What are the benefits and drawbacks of containerizing databases?

Joshua/Bryant: Containers are great for packaging and deployment. They allow us to deploy a known version of configuration, code and environment in a deterministic way. Unfortunately, containers are not perfect mechanisms for resource isolation, especially when large amounts of persistent storage are required. The challenge of containerizing databases is to make the right trade-offs between portability and performance, complexity and ease-of-operation given the specific context and goals of your team and organization.

Were specific database software (MySQL, Redis, PostgreSQL) easier to containerize? Why?

Joshua/Bryant: In general, the databases that are easiest to containerize are the ones that manage their own state and make themselves effectively stateless from the point of view of the scheduler and container orchestration framework. Databases that provide cluster membership, fault tolerance and data replication are easier to run with a traditional orchestration framework. Databases that do not do these things on their own require additional “sidecar” services and/or application-specific frameworks to operate reliably.

Why should people attend your talk? What do you hope people will take away from it?

Joshua/Bryant: As a team, we’ve learned a lot of lessons about what to do, and not to do, when running stateful applications in containers. We’d like to pass that knowledge on to our audience. We also want to send people away with enough information to make the right decisions about whether or not to containerize the databases they are responsible for, and how to do it in a way that is successful given their own goals and context. At the very least, we hope it will be entertaining — and maybe we’ll learn some things too.

What are you looking forward to at Percona Live (besides your talk)?

Joshua/Bryant: We’re very excited to hear about open-source tools like Vitess and ProxySQL. I’m also looking forward to hearing about the latest monitoring and performance analysis tools. I always love deep-dives into specific database problems faced by a company, and I love talking to people in the expo hall. I always come away from Percona Live events a little more excited about my day-to-day work and the future of open-source databases.

Want to find out more about this Percona Live 2018 featured talk, and containerizing stateful databases? Register for Percona Live 2018, and see Joshua and Bryant’s session talk Containerizing Databases at New Relic: What We Learned. Register now to get the best price! Use the discount code SeeMeSpeakPL18 for 10% off.

Percona Live Open Source Database Conference 2018 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference will be April 23-25, 2018 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.


Apr 11, 2018
--

Calling All Polyglots: Percona Live 2018 Keynote Schedule Now Available!

We’ve posted the Percona Live 2018 keynote addresses for the seventh annual Percona Live Open Source Database Conference 2018, taking place April 23-25, 2018 at the Santa Clara Convention Center in Santa Clara, CA.

This year’s keynotes explore topics ranging from how cloud and open source database adoption accelerates business growth, to leading-edge emerging technologies, to the importance of MySQL 8.0, to the growing popularity of PostgreSQL.

We’re excited by the great lineup of speakers, including our friends at Alibaba Cloud, Grafana, Microsoft, Oracle, Upwork and VividCortex, the innovative leaders on the Cool Technologies panel, and Brendan Gregg from Netflix, who will discuss how to get the most out of your database on a Linux OS, using his experiences at Netflix to highlight examples.  

With the theme of “Championing Open Source Databases,” the conference will feature multiple tracks, including MySQL, MongoDB, Cloud, PostgreSQL, Containers and Automation, Monitoring and Ops, and Database Security. Once again, Percona will be offering a low-cost database 101 track for beginning users who want to learn how to use and operate open source databases.

The Percona Live 2018 keynotes include:

Tuesday, April 24, 2018

  • Open Source for the Modern Business – Peter Zaitsev of Percona will discuss how, as open source database adoption continues to grow in enterprise organizations, the expectations and definitions of what constitutes success continue to change. A single technology for everything is no longer an option; welcome to the polyglot world. The talk will include several compelling open source projects and trends of interest to the open source database community, and will be followed by a round of lightning talks taking a closer look at some of those projects.
  • Cool Technologies Showcase – Four industry leaders will introduce key emerging industry developments. Andy Pavlo of Carnegie Mellon University will discuss the requirements for enabling autonomous database optimizations. Nikolay Samokhvalov of PostgreSQL.org will discuss new PostgreSQL tools. Sugu Sougoumarane of PlanetScale Data will explore how Vitess became a high-performance, scalable and available MySQL clustering cloud solution in line with today’s NewSQL storage systems. Shuhao Wu of Shopify explains how to use Ghostferry as a data migration tool for incompatible cloud platforms.
  • State of the Dolphin 8.0 – Tomas Ulin of Oracle will discuss the focus, strategy, investments and innovations that are evolving MySQL to power next-generation web, mobile, cloud and embedded applications – and why MySQL 8.0 is the most significant MySQL release in its history.
  • Linux Performance 2018 – Brendan Gregg of Netflix will summarize recent performance features to help users get the most out of their Linux systems, whether they are databases or application servers. Topics include the KPTI patches for Meltdown, eBPF for performance observability, Kyber for disk I/O scheduling, BBR for TCP congestion control, and more.

Wednesday, April 25, 2018

  • Panel Discussion: Database Evolution in the Cloud – An expert panel of industry leaders, including Lixun Peng of Alibaba, Sunil Kamath of Microsoft, and Baron Schwartz of VividCortex, will discuss the rapid changes occurring with databases deployed in the cloud and what that means for the future of databases, management and monitoring and the role of the DBA and developer.
  • Future Perfect: The New Shape of the Data Tier – Baron Schwartz of VividCortex will discuss the impact of macro trends such as cloud computing, microservices, containerization, and serverless applications. He will explore where these trends are headed, touching on topics such as whether we are about to see basic administrative tasks become more automated, the role of open source and free software, and whether databases as we know them today are headed for extinction.
  • MongoDB at Upwork – Scott Simpson of Upwork, the largest freelancing website for connecting clients and freelancers, will discuss how MongoDB is used at Upwork, how the company chose the database, and how Percona helps make the company successful.

We will also present the Percona Live 2018 Community Awards and Lightning Talks on Monday, April 23, 2018, during the Opening Night Reception. Don’t miss the first day of tutorials and Opening Night Reception!

Register for the conference on the Percona Live Open Source Database Conference 2018 website.

Sponsorships

Limited Sponsorship opportunities for Percona Live 2018 Open Source Database Conference are still available, and offer the opportunity to interact with the DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solution vendors, and entrepreneurs who typically attend the event. Contact live@percona.com for sponsorship details.

  • Diamond Sponsors – Percona, VividCortex
  • Platinum Sponsors – Alibaba Cloud, Microsoft
  • Gold Sponsors – Facebook, Grafana
  • Bronze Sponsors – Altinity, BlazingDB, Box, Dynimize, ObjectRocket, PingCAP, Shannon Systems, SolarWinds, TimescaleDB, TwinDB, Yelp
  • Contributing Sponsors – cPanel, GitHub, Google Cloud, Navicat
  • Media Sponsors – Database Trends & Applications, Datanami, EnterpriseTech, HPCWire, ODBMS.org, Packt

The post Calling All Polyglots: Percona Live 2018 Keynote Schedule Now Available! appeared first on Percona Database Performance Blog.

Mar
28
2018
--

GoDaddy to move most of its infrastructure to AWS, not including domain management for its 75M domains

It really is Go Time for GoDaddy. Amazon's cloud services provider AWS and GoDaddy, the domain registration and management giant, may have competed in the past when it comes to providing small businesses with web services, but today the two took a step closer together. AWS said that GoDaddy is now migrating "the majority" of its infrastructure to AWS in a multi-year deal that will also see AWS reselling some of GoDaddy's products, namely Managed WordPress and GoCentral, for managing domains and building and running websites.

The deal, whose financial terms are not being disclosed, is wide-ranging, but it will not include domain management for the 75 million domains GoDaddy currently has under management, a company spokesperson confirmed to me.

“GoDaddy is not migrating the domains it manages to AWS,” said Dan Race, GoDaddy’s VP of communications. “GoDaddy will continue to manage all customer domains. Domain management is obviously a core business for GoDaddy.”

The move underscores Amazon’s continuing expansion as a powerhouse in cloud hosting and related services, providing a one-stop shop for customers who come for one product and stay for everything else (not unlike its retail strategy in that regard). It is also a reminder of how the economies of scale in the cloud business make it financially challenging to compete if you are not already one of the big players, or lack the deep pockets to sustain your business as you look to grow. GoDaddy has been a direct victim of those economics: just last summer, it killed off Cloud Servers, its AWS-style business for building, testing and scaling cloud services on GoDaddy infrastructure. It was already hosting some services on AWS prior to this deal; its enterprise-grade Managed WordPress service, for example, was already running there.

The AWS deal also highlights how GoDaddy is trimming operational costs to improve its overall balance sheet under Scott Wagner, the COO who took over as CEO from Blake Irving at the beginning of this year. 

“As a technology provider with more than 17 million customers, it was very important for GoDaddy to select a cloud provider with deep experience in delivering a highly reliable global infrastructure, as well as an unmatched track record of technology innovation, to support our rapidly expanding business,” said Charles Beadnall, CTO at GoDaddy, in a statement.

“AWS provides a superior global footprint and set of cloud capabilities, which is why we selected them to meet our needs today and into the future. By operating on AWS, we’ll be able to innovate at the speed and scale we need to deliver powerful new tools that will help our customers run their own ventures and be successful online,” he continued.

AWS said that GoDaddy will be using AWS’s Elastic Container Service for Kubernetes (EKS) and Elastic Compute Cloud (EC2) P3 instances, as well as its machine learning, analytics, database and container services. Race told TechCrunch that the infrastructure components being migrated currently run on GoDaddy’s own systems and will be moved over gradually as part of the multi-year migration.
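To make the services named above concrete, here is a minimal, purely illustrative sketch of how a company might pair an EKS control plane with a P3 GPU node group using eksctl, a community CLI for EKS. GoDaddy’s actual configuration is not public; the cluster and node-group names below are hypothetical.

```yaml
# Hypothetical illustration only — not GoDaddy's real setup.
# An eksctl ClusterConfig combining an EKS control plane with
# an EC2 P3 node group (NVIDIA V100 GPUs, suited to ML workloads).
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: example-cluster        # hypothetical name
  region: us-west-2

nodeGroups:
  - name: gpu-workers          # hypothetical name
    instanceType: p3.2xlarge   # EC2 P3 instance type
    desiredCapacity: 2
```

A cluster like this would be created with `eksctl create cluster -f cluster.yaml`, after which standard Kubernetes tooling (kubectl, Helm) works against it as with any other cluster.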

“As a large, high-growth business, GoDaddy will be able to leverage AWS to innovate for its customers around the world,” said Mike Clayville, VP, worldwide commercial sales at AWS, in a statement. “Our industry-leading services will enable GoDaddy to leverage emerging technologies like machine learning, quickly test ideas, and deliver new tools and solutions to their customers with greater frequency. We look forward to collaborating with GoDaddy as they build anew in the cloud and innovate new solutions to help people turn their ideas into reality online.”

Feb
28
2018
--

OpenStack gets support for virtual GPUs and new container features

OpenStack, the open-source infrastructure project that aims to give enterprises the equivalent of AWS for their private clouds, today announced the launch of its 17th release, dubbed “Queens.” After all of those releases, you’d think that there isn’t all that much new that the OpenStack community could add to the project, but just as the large public clouds keep adding…

Feb
08
2018
--

Sylabs launches Singularity Pro, a container platform for high performance computing

Sylabs, the commercial company behind the open source Singularity container engine, announced its first commercial product today, Singularity Pro. Sylabs was launched in 2015 to create a container platform specifically designed for scientific and high performance computing use cases, two areas that founder and CEO Gregory Kurtzer says were left behind in the containerization movement over…
