May
07
2018
--

As Kubernetes grows, a startup ecosystem develops in its wake

Kubernetes, the open source container orchestration tool, came out of Google several years ago and has gained traction amazingly fast. With each step in its growth, it has created opportunities for companies to develop businesses on top of the open source project.

The beauty of open source is that when it works, you build a base platform and an economic ecosystem follows in its wake. That’s because a project like Kubernetes (or any successful open source offering) generates new requirements as a natural extension of its own growth and development.

Those requirements represent opportunities for new projects, of course, but also for startups looking to build companies adjacent to that open source community. Before that can happen, however, a couple of key pieces have to fall into place.

Ingredients for success

For starters, you need the big corporates to get behind it. In the case of Kubernetes, over a six-week period last year between July and the beginning of September, some of the best-known enterprise technology companies, including AWS, Oracle, Microsoft, VMware and Pivotal, joined the Cloud Native Computing Foundation (CNCF), the professional organization behind the open source project, in quick succession. This was a signal that Kubernetes was becoming a standard of sorts for container orchestration.

Surely these big companies would have preferred (and tried) to control the orchestration layer themselves, but they soon found that their customers preferred to use Kubernetes, and they had little choice but to follow the clear trend that was developing around the project.

Photo: Georgijevic on Getty Images

The second piece that has to come together for an open source community to flourish is that a significant group of developers has to accept it and start building stuff on top of the platform — and Kubernetes got that too. Consider that, according to the CNCF, a total of 400 projects have been developed on the platform by 771 developers contributing over 19,000 commits since the launch of Kubernetes 1.0 in 2015. Since last August, the most recent date for which the CNCF has numbers, developer contributions have increased by 385 percent. That’s a ton of momentum.

Cue the investors

When you have those two ingredients in place — developers and large vendors — you can begin to gain velocity. As more companies and more developers come, the community continues to grow, and that’s what we’ve been seeing with Kubernetes.

As that happens, it typically doesn’t take long for investors to take notice, and according to the CNCF, there has been over $4 billion in investment so far in cloud native companies — this from a project that didn’t even exist that long ago.

Photo: Fitria Ramli / EyeEm on Getty Images.

That investment has taken the form of venture capital funding startups trying to build something on top of Kubernetes, and we’ve seen some big raises. Earlier this month, Hasura raised a $1.6 million seed round for a packaged version of Kubernetes designed specifically to meet the needs of developers. Just last week, Upbound, a new startup from Seattle, got $9 million in its Series A round to help manage multi-cluster and multi-cloud environments in a standard (cloud-native) way. A little further up the maturity curve, Heptio has raised over $33 million, with its most recent round being a $25 million Series B last September. Finally, there is CoreOS, which raised almost $50 million before being sold to Red Hat for $250 million in January.

CoreOS wasn’t alone by any means, as we’ve seen other exits over the last year or two, with organizations scooping up cloud native startups. In particular, when the largest organizations, like Microsoft, Oracle and Red Hat, buy relatively young startups, they are often looking for talent, customers and products to get up to speed more quickly in a growing technology area like Kubernetes.

Growing an economic ecosystem

Kubernetes has grown and developed into an economic powerhouse in a short period of time, as dozens of side projects have developed around it, creating even more opportunity for companies of all sizes to build products and services to meet an ever-growing set of needs in a virtuous cycle of investment, innovation and economic activity.

Cloud Native Computing Foundation projects. Photo: Cloud Native Computing Foundation

If this project continues to grow, chances are it will gain even more investment as companies continue to flow toward containers and Kubernetes, and even more startups develop to help create products to meet new needs as a result.

May
06
2018
--

Kubernetes stands at an important inflection point

Last week at KubeCon and CloudNativeCon in Copenhagen, we saw an open source community coming together, full of vim and vigor and radiating positive energy as it recognized its growing clout in the enterprise world. Kubernetes, which came out of Google just a few years ago, has gained acceptance and popularity astonishingly rapidly — and that has raised both a sense of possibility and a boatload of questions.

At this year’s European version of the conference, the community seemed to be coming to grips with that rapid growth as large corporate organizations like Red Hat, IBM, Google, AWS and VMware all came together with developers and startups trying to figure out exactly what they had here with this new thing they found.

The project has been gaining acceptance as the de facto container orchestration tool, and as that happened, it was no longer about simply getting a project off the ground and proving that it could work in production. It now requires a greater level of tooling and maturity that previously wasn’t necessary because it was simply too soon.

As this has happened, the various members who make up this growing group of users need to figure out, mostly on the fly, how to make it all work when it is no longer just a couple of developers and a laptop. There are now big boy and big girl implementations, and they require a new level of sophistication to make them work.

Against this backdrop, we saw a project that appeared to be at an inflection point. Much like a startup that realizes it actually achieved the product-market fit it had hypothesized, the Kubernetes community has to figure out how to take this to the next level — and that reality presents some serious challenges and enormous opportunities.

A community in transition

The Kubernetes project falls under the auspices of the Cloud Native Computing Foundation (or CNCF for short). Consider that at the opening keynote, CNCF director Dan Kohn was brimming with enthusiasm, proudly rattling off numbers to a packed audience, showing the enormous growth of the project.

Photo: Ron Miller

If you wanted proof of Kubernetes’ (and by extension cloud native computing’s) rapid ascension, consider that attendance at KubeCon in Copenhagen last week numbered 4,300 registered participants, triple the attendance in Berlin just last year.

The hotel and conference center were buzzing with conversation. In every corner and hallway, on every bar stool in the hotel’s open lobby bar, at breakfast in the large breakfast room, by the many coffee machines scattered throughout the venue, and even throughout the city, people chatted, debated and discussed Kubernetes, and the energy was palpable.

David Aronchick, who now runs the open source Kubeflow Kubernetes machine learning project at Google, was running Kubernetes in the early days (way back in 2015), and he was certainly surprised to see how big it has become in such a short time.

“I couldn’t have predicted it would be like this. I joined in January 2015 and took on project management for Google Kubernetes. I was stunned at the pent-up demand for this kind of thing,” he said.

Growing up

Yet there was great demand, and with each leap forward and each new level of maturity came a new set of problems to solve, which in turn has created opportunities for new services and startups to fill in the many gaps. As Aparna Sinha, the Kubernetes group product manager at Google, said in her conference keynote, enterprise companies want some level of certainty that earlier adopters were willing to forgo to take a plunge into the new and exciting world of containers.

Photo: Cloud Native Computing Foundation

As she pointed out, for others to be pulled along and for this to truly reach another level of adoption, it’s going to require some enterprise-level features and that includes security, a higher level of application tooling and a better overall application development experience. All these types of features are coming, whether from Google or from the myriad of service providers who have popped up around the project to make it easier to build, deliver and manage Kubernetes applications.

Sinha says that one of the reasons the project has been able to take off as quickly as it has is that its roots lie in a container orchestration tool called Borg, which the company has been using internally for years. While that evolved into what we know today as Kubernetes, it certainly required some significant repackaging to work outside of Google. Yet that early refinement at Google gave it an enormous head start over an average open source project — which could account for its meteoric rise.

“When you take something so well established and proven in a global environment like Google and put it out there, it’s not just like any open source project invented from scratch when there isn’t much known and things are being developed in real time,” she said.

For every action

One thing everyone seemed to recognize at KubeCon was that, in spite of the head start and early successes, there remains much work to be done and many issues to resolve. The companies using it today mostly still fall under the early adopter moniker. This remains true even though there are some full-blown enterprise implementations, like CERN, the European physics organization, which has spun up 210 Kubernetes clusters, or JD.com, the Chinese internet shopping giant, which has 20,000 servers running Kubernetes, with the largest cluster consisting of over 5,000 servers. Still, it’s fair to say that most companies aren’t that far along yet.

Photo: Ron Miller

But the strength of an enthusiastic open source community like the one around Kubernetes and cloud native computing in general means that there are companies, some new and some established, trying to solve these problems, and the multitude of new ones that seem to pop up with each new milestone and each solved issue.

As Abby Kearns, who runs another open source project, the Cloud Foundry Foundation, put it in her keynote, part of the beauty of open source is all those eyeballs on it to solve the scads of problems that are inevitably going to pop up as projects expand beyond their initial scope.

“Open source gives us the opportunity to do things we could never do on our own. Diversity of thought and participation is what makes open source so powerful and so innovative,” she said.

It’s worth noting that several speakers pointed out that diversity of thought also required actual diversity of membership to truly expand ideas to other ways of thinking and other life experiences. That too remains a challenge, as it does in technology and society at large.

In spite of this, Kubernetes has grown and developed rapidly while benefiting from a community that so enthusiastically supports it. The challenge ahead is to take that early enthusiasm and translate it into more actual business use cases. That is the inflection point where the project finds itself, and the question is whether it will be able to take that next step toward broader adoption or reach a peak and fall back.

May
04
2018
--

Google Kubeflow, machine learning for Kubernetes, begins to take shape

Ever since Google created Kubernetes as an open source container orchestration tool, the company has seen it blossom in ways it might never have imagined. As the project gains in popularity, we are seeing many adjunct programs develop. Today, Google announced the release of version 0.1 of the Kubeflow open source tool, which is designed to bring machine learning to Kubernetes containers.

While Google has long since moved Kubernetes into the Cloud Native Computing Foundation, it continues to be actively involved, and Kubeflow is one manifestation of that. The project was first announced only at the end of last year at KubeCon in Austin, but it is beginning to gain some momentum.

David Aronchick, who runs Kubeflow for Google, led the Kubernetes team for 2.5 years before moving to Kubeflow. He says the idea behind the project is to enable data scientists to take advantage of running machine learning jobs on Kubernetes clusters. Kubeflow lets machine learning teams take existing jobs and simply attach them to a cluster without a lot of adapting.

With today’s announcement, the project begins to move ahead and, according to a blog post announcing the milestone, brings a new level of stability while adding a slew of new features that the community has been requesting. These include JupyterHub for collaborative and interactive training on machine learning jobs, and TensorFlow training and hosting support, among other elements.
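
To make that concrete, here is a minimal sketch of what submitting a training job to a Kubeflow-enabled cluster might look like using the official Kubernetes Python client. The TFJob group, version and field layout reflect the project’s early alpha API and are assumptions; check your installed version (for example with kubectl get crd) before relying on them.

```python
# Hypothetical sketch: submit a TFJob custom resource to a cluster that
# already has Kubeflow installed. The "kubeflow.org/v1alpha1" API group and
# the replicaSpecs layout are assumptions based on Kubeflow's early releases.
from kubernetes import client, config

config.load_kube_config()  # reuse the current kubectl context
api = client.CustomObjectsApi()

tf_job = {
    "apiVersion": "kubeflow.org/v1alpha1",
    "kind": "TFJob",
    "metadata": {"name": "mnist-train", "namespace": "kubeflow"},
    "spec": {
        "replicaSpecs": [{
            "replicas": 1,
            "template": {"spec": {
                "containers": [{
                    "name": "tensorflow",
                    # placeholder image; any container holding your training code
                    "image": "registry.example.com/mnist-train:latest",
                }],
                "restartPolicy": "OnFailure",
            }},
        }],
    },
}

api.create_namespaced_custom_object(
    group="kubeflow.org", version="v1alpha1",
    namespace="kubeflow", plural="tfjobs", body=tf_job)
```

The point of the design is visible even in this toy: the training job is just another Kubernetes object, so existing jobs attach to a cluster without special infrastructure.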

Aronchick emphasizes that, as an open source project, Kubeflow lets you bring whatever tools you like; you are not limited to TensorFlow, despite the fact that this early version release does include support for Google’s machine learning tools. You can expect additional tool support as the project develops further.

In just over four months since the original announcement, the community has grown quickly, with over 70 contributors and over 20 contributing organizations, along with over 700 commits in 15 repositories. You can expect the next version, 0.2, sometime this summer.

May
02
2018
--

Upbound grabs $9M Series A to automate multi-cloud management

Kubernetes, the open source container orchestration tool, does a great job of managing a single cluster, but Upbound, a new Seattle-based startup, wants to extend this ability to manage multiple Kubernetes clusters across multi-cloud environments. It’s a growing requirement as companies deploy ever-larger numbers of clusters and choose a multi-vendor approach to cloud infrastructure services.

Today, the company announced a $9 million Series A investment led by GV (formerly Google Ventures), along with numerous unnamed angel investors from the cloud-native community. As part of the deal, GV’s Dave Munichiello will be joining the company’s board of directors.

It’s important to note that the company is currently working on the product and could be a year away from a release, but the vision is certainly compelling. As Upbound CEO and founder Bassam Tabbara says, his company’s solution could allow customers to run, scale and optimize their workloads across clusters, regions and clouds as a single entity.

That level of control could enable them to set rules and policies across those clusters and clouds. For example, a customer might control costs by creating a rule to find the cloud with the lowest cost for processing a given job, or provide failover control across regions and clouds — all automatically. It would provide the general ability to have highly granular control across multiple environments that isn’t really possible now, Tabbara explained.
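
Upbound hadn’t shipped a product at the time of writing, so there is no real API to show; the sketch below is purely hypothetical, with every name invented, and only illustrates the lowest-cost placement rule Tabbara describes.

```python
# Purely hypothetical illustration of a cross-cloud placement rule -- none of
# these names come from a real Upbound API. The policy: run a job on the
# cheapest healthy cluster, failing over automatically when one is down.
from dataclasses import dataclass
from typing import List

@dataclass
class Cluster:
    name: str
    cloud: str
    cost_per_cpu_hour: float  # assumed pricing signal fed in per provider
    healthy: bool

def place_job(clusters: List[Cluster], cpu_hours: float) -> Cluster:
    """Pick the healthy cluster with the lowest estimated cost for this job."""
    candidates = [c for c in clusters if c.healthy]
    if not candidates:
        raise RuntimeError("no healthy cluster available for failover")
    return min(candidates, key=lambda c: c.cost_per_cpu_hour * cpu_hours)

clusters = [
    Cluster("gke-us", "gcp", 0.033, True),
    Cluster("eks-eu", "aws", 0.041, True),
    Cluster("aks-us", "azure", 0.037, False),  # simulated regional outage
]
print(place_job(clusters, cpu_hours=120).name)  # -> gke-us
```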

That vision of enterprise portability is certainly something that caught the eye of GV’s Munichiello. “Upbound presents a credible approach to multi-cloud computing built on the success of Kubernetes, and as a response to the growing enterprise demand for hybrid and multi-cloud environments,” he said in a statement.

Companies are working with multiple Kubernetes clusters today. As an example, CERN, the European physics organization, is running 210 clusters. JD.com, the Chinese shopping site, has over 20,000 servers running Kubernetes; the largest cluster is made up of 5,000 servers. As these projects scale, they require a tool to help manage their workloads across these larger environments.

The company’s founder isn’t new to cloud-native computing or open source. Tabbara was part of the team responsible for producing the open source project Rook, an offshoot of Kubernetes and a Cloud Native Computing Foundation Sandbox project. Rook helps orchestrate distributed storage systems running in cloud native environments, much as Kubernetes does for containerized environments. That project provided some of the groundwork for what Upbound is trying to do on a broader scale beyond pure storage.

The computing world is suddenly all about abstraction. We started with virtual machines, which allowed you to take an individual server and turn it into multiple virtual machines. That led to containers, which let you launch hundreds of containers on that same machine. Kubernetes is an open source container orchestration tool that has rapidly gained acceptance by allowing operations to treat a cluster of Kubernetes nodes as a single entity, making it much easier to launch and manage containers.
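
For a concrete sense of what treating a cluster as a single entity means, here is a minimal sketch using the official Kubernetes Python client: you ask the cluster, not any particular server, to run the containers, and the scheduler places them across nodes. The deployment name and image are placeholders.

```python
# Minimal sketch: ask the cluster (not a specific machine) to run 100 copies
# of a container; Kubernetes decides which nodes they land on.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1DeploymentSpec(
        replicas=100,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="hello", image="nginx:stable"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```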

Upbound launched last fall and currently has eight employees, but Tabbara says they are actively seeking new engineers. The nature of the business is distributed workloads, and he says the workforce will be similar: they won’t have to work in Seattle. He says the plan is to use and contribute to open source whenever possible, and to open source parts of the product when it’s available.

Apr
21
2018
--

Pivotal CEO talks IPO and balancing life in Dell family of companies

Pivotal has kind of a strange role for a company. On one hand, it’s part of the EMC federation of companies that Dell acquired in 2016 for a cool $67 billion, but it’s also an independently operated entity within that broader Dell family of companies — and that has to be a fine line to walk.

Whatever the challenges, the company went public yesterday and joined VMware as a separately traded company within Dell. CEO Rob Mee says the company took the step of IPOing because it wanted additional capital.

“I think we can definitely use the capital to invest in marketing and R&D. The wider technology ecosystem is moving quickly. It does take additional investment to keep up,” Mee told TechCrunch just a few hours after his company rang the bell at the New York Stock Exchange.

As for that relationship of being a Dell company, he said that Michael Dell let him know early on after the EMC acquisition that he understood the company’s position. “From the time Dell acquired EMC, Michael was clear with me: You run the company. I’m just here to help. Dell is our largest shareholder, but we run independently. There have been opportunities to test that [since the acquisition] and it has held true,” Mee said.

Mee says that independence is essential because Pivotal has to remain technology-agnostic, and it can’t favor Dell products and services without compromising that mission. “It’s necessary because our core product is a cloud-agnostic platform. Our core value proposition is independence from any provider — and Dell and VMware are infrastructure providers,” he said.

That said, Mee can also play both sides because he can build products and services that do align with Dell and VMware offerings. “Certainly the companies inside the Dell family are customers of ours. Michael Dell has encouraged the IT group to adopt our methods and they are doing so,” he said. They have also started working more closely with VMware, announcing a container partnership last year.

Photo: Ron Miller

Overall though he sees his company’s mission in much broader terms, doing nothing less than helping the world’s largest companies transform their organizations. “Our mission is to transform how the world builds software. We are focused on the largest organizations in the world. What is a tailwind for us is that the reality is these large companies are at a tipping point of adopting how they digitize and develop software for strategic advantage,” Mee said.

The stock closed up 5 percent last night, but Mee says this isn’t about a single day. “We do very much focus on the long term. We have been executing to a quarterly cadence and have behaved like a public company inside Pivotal [even before the IPO]. We know how to do that while keeping an eye on the long term,” he said.

Mar
31
2018
--

Red Hat looks beyond Linux

The Red Hat Linux distribution is turning 25 years old this week. What started as one of the earliest Linux distributions is now the most successful open-source company, and its success was a catalyst for others to follow its model. Today’s open-source world is very different from those heady days in the mid-1990s when Linux looked to be challenging Microsoft’s dominance on the desktop, but Red Hat is still going strong.

To put all of this into perspective, I sat down with the company’s current CEO (and former Delta Air Lines COO) Jim Whitehurst to talk about the past, present and future of the company, and open-source software in general. Whitehurst took the Red Hat CEO position 10 years ago, so while he wasn’t there in the earliest days, he definitely witnessed the evolution of open source in the enterprise, which is now more widespread than ever.

“Ten years ago, open source at the time was really focused on offering viable alternatives to traditional software,” he told me. “We were selling layers of technology to replace existing technology. […] At the time, it was open source showing that we can build open-source tech at lower cost. The value proposition was that it was cheaper.”

At the time, he argues, the market was about replacing Windows with Linux or IBM’s WebSphere with JBoss. And that defined Red Hat’s role in the ecosystem, too, which was less about technological innovation than about packaging. “For Red Hat, we started off taking these open-source projects and making them usable for traditional enterprises,” said Whitehurst.

Jim Whitehurst, Red Hat president and CEO (photo by Joan Cros/NurPhoto via Getty Images)

About five or six years ago, something changed, though. Large corporations, including Google and Facebook, started open sourcing their own projects because they didn’t look at some of the infrastructure technologies they opened up as competitive advantages. Instead, having them out in the open allowed them to profit from the ecosystems that formed around those projects. “The biggest part is it’s not just Google and Facebook finding religion,” said Whitehurst. “The social tech around open source made it easy to make projects happen. Companies got credit for that.”

He also noted that developers now look at their open-source contributions as part of their resumé. With an increasingly mobile workforce that regularly moves between jobs, companies that want to compete for talent are almost forced to open source at least some of the technologies that don’t give them a competitive advantage.

As the open-source ecosystem evolved, so did Red Hat. As enterprises started to understand the value of open source (and stopped being afraid of it), Red Hat shifted from simply talking to potential customers about savings to showing how open source can help them drive innovation. “We’ve gone from being commoditizers to being innovators. The tech we are driving is now driving net new innovation,” explained Whitehurst. “We are now not going in to talk about saving money but to help drive innovation inside a company.”

Over the last few years, that included making acquisitions to help drive this innovation. In 2015, Red Hat bought IT automation service Ansible, for example, and last month, the company closed its acquisition of CoreOS, one of the larger independent players in the Kubernetes container ecosystem — all while staying true to its open-source roots.

There is only so much innovation you can do around a Linux distribution, though, and as a public company, Red Hat also had to look beyond that core business and build on it to better serve its customers. In part, that’s what drove the company to launch services like OpenShift, for example, a container platform that sits on top of Red Hat Enterprise Linux and — not unlike the original Linux distribution — integrates technologies like Docker and Kubernetes and makes them more easily usable inside an enterprise.

The reason for that? “I believe that containers will be the primary way that applications will be built, deployed and managed,” he told me, and argued that his company, especially after the CoreOS acquisition, is now a leader in both containers and Kubernetes. “When you think about the importance of containers to the future of IT, it’s a clear value for us and for our customers.”

The other major open-source project Red Hat is betting on is OpenStack. That may come as a bit of a surprise, given that popular opinion in the last year or so has shifted against the massive project that wants to give enterprises an open-source, on-premises alternative to AWS and other cloud providers. “There was a sense among big enterprise tech companies that OpenStack was going to be their savior from Amazon,” Whitehurst said. “But even OpenStack, flawlessly executed, put you where Amazon was five years ago. If you’re Cisco or HP or any of those big OEMs, you’ll say that OpenStack was a disappointment. But from our view as a software company, we are seeing good traction.”

Because OpenStack is especially popular among telcos, Whitehurst believes it will play a major role in the shift to 5G. “When we are talking to telcos, […] we are very confident that OpenStack will be the platform for 5G rollouts.”

With OpenShift and OpenStack, Red Hat believes that it has covered both the future of application development and the infrastructure on which those applications will run. Looking a bit further ahead, though, Whitehurst also noted that the company is starting to look at how it can use artificial intelligence and machine learning to make its own products smarter and more secure, but also at how it can use its technologies to enable edge computing. “Now that large enterprises are also contributing to open source, we have a virtually unlimited amount of material to bring our knowledge to,” he said.

 

Mar
26
2018
--

The Linux Foundation launches a deep learning foundation

Despite its name, the Linux Foundation has long been about more than just Linux. These days, it’s a foundation that provides support to other open source foundations and projects like Cloud Foundry, the Automotive Grade Linux initiative and the Cloud Native Computing Foundation. Today, the Linux Foundation is adding yet another foundation to its stable: the LF Deep Learning Foundation.

The idea behind the LF Deep Learning Foundation is to “support and sustain open source innovation in artificial intelligence, machine learning, and deep learning while striving to make these critical new technologies available to developers and data scientists everywhere.”

The founding members of the new foundation include Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa and ZTE. Others will likely join in the future.

“We are excited to offer a deep learning foundation that can drive long-term strategy and support for a host of projects in the AI, machine learning, and deep learning ecosystems,” said Jim Zemlin, executive director of The Linux Foundation.

The foundation’s first official project is the Acumos AI Project, a collaboration between AT&T and Tech Mahindra that was already hosted by the Linux Foundation. Acumos AI is a platform for developing, discovering and sharing AI models and workflows.

Like similar Linux Foundation-based organizations, the LF Deep Learning Foundation will offer different membership levels for companies that want to support the project, as well as a membership level for non-profits. All LF Deep Learning members have to be Linux Foundation members, too.

Mar
13
2018
--

Don’t Get Hit with a Database Disaster: Database Security Compliance

In this post, we discuss database security compliance, what you should be looking at and where to get more information.

As Percona’s Chief Customer Officer, I get the opportunity to talk with a lot of customers. Hearing first-hand about both the problems their technical teams face and the business challenges their companies experience is incredibly valuable for understanding what the market is facing in general. Not every problem you see has a purely technical solution, and not every good technical solution solves the core business problem.

Matt Yonkovit, Percona CCO

As database technology advances and data continues to be the lifeblood of most modern applications, DBAs will have a say in business-level strategic planning more than ever. This coincides with the advances in technology and automation that make many classic manual DBA jobs and tasks obsolete. Traditional DBAs are evolving into a blend of system architect, data strategist and master database architect. I want to talk about the business problems that not only the C-suite cares about, but that DBAs as a whole need to care about in the near future.

Let’s start with one topic everyone should have near the top of their list: security.

We did a recent survey of our customers, and their biggest concern right now is security and compliance.

Not long ago, most DBAs I knew dismissed this topic as “someone else’s problem” (I remember being told that the database is only as secure as the network, so fix the network!). Long gone are the days when network security was enough. Even the DBAs who did worry about security only did so within the limited scope of what the database system could provide out of the box. Again, not enough.

So let me run an experiment:

Raise your hand if your company has some bigger security initiative this year. 

I’m betting a lot of you raised your hand!

Security is not new to the enterprise. It’s been a priority for years now. However, it did not receive a hyper-focus in the open source database space until the last three years or so. Why? There have been a number of high-profile database security breaches in the last year, all highlighting the need for better database security. This series of serious data breaches has exposed how fragile some companies’ security protocols are. If that were not enough, new government regulations and laws have made data protection non-optional. This means you have to take the security of your database seriously, or there could be fines and penalties.

Government regulations are nothing new, but the breadth and depth of these regulations are growing, opening up a whole new challenge for database systems and administrators. GDPR was signed into law two years ago (you can read more here: https://en.wikipedia.org/wiki/General_Data_Protection_Regulation and https://www.dataiq.co.uk/blog/summary-eu-general-data-protection-regulation) and is scheduled to take effect on May 25, 2018. This has many businesses scrambling not only to understand the impact, but to figure out how they need to comply. These regulations redefine simple things, like what constitutes “personal data” (for instance, your anonymous buying preferences or location history, even without your name).

New requirements also mean some areas get a bit more complicated as they approach the gray area of definition. For instance, GDPR guarantees the right to be forgotten. What does this mean? In theory, it means end-users can request that all their personal information is removed from your systems as if they did not exist. Seems simple, but in reality, you can go as far down the rabbit hole as you want. Does your application support this already? What about legacy applications? Even if the apps can handle it, does this mean previously taken database backups have to forget you as well? There is a lot to process for sure.
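
To make the rabbit hole concrete, here is a minimal sketch of one small piece of a right-to-be-forgotten workflow: anonymizing a user’s rows in place so referential integrity survives. The schema and connection details are invented, and a real implementation would also have to reach replicas, logs and, as noted above, previously taken backups.

```python
# Minimal illustrative sketch (invented schema): anonymize one customer's
# personally identifiable columns in place. Backups, replicas and log files
# all need their own handling -- this only covers the live table.
import mysql.connector  # assumes the mysql-connector-python package

conn = mysql.connector.connect(user="app", password="secret",
                               host="127.0.0.1", database="shop")
cur = conn.cursor()

customer_id = 42  # the data subject making the erasure request
cur.execute(
    "UPDATE customers "
    "SET name = 'erased', phone = NULL, "
    "    email = CONCAT('erased-', id, '@invalid.example') "
    "WHERE id = %s",
    (customer_id,),
)
conn.commit()
conn.close()
```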

So what are the things you can do?

  1. Educate yourself and understand expectations, even if you weren’t involved in compliance discussions before.
  2. Start working now on incremental improvements to your data security. This is especially true in the areas where you have some control, without massive changes to the application. Encryption at rest is a great place to start if you don’t have it (see the sketch after this list).
  3. Start talking with others in the organization about how to identify and protect personal information.
  4. Look to increase security by default by getting involved in new applications early in the design phase.
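
On that encryption-at-rest point, here is a minimal sketch of enabling InnoDB encryption on an existing MySQL table. It assumes MySQL 5.7+ with a keyring plugin already configured; the connection details and table name are placeholders.

```python
# Minimal sketch: encrypt an existing InnoDB table at rest. Assumes a keyring
# plugin is already loaded (the statement fails without one) and
# file-per-table tablespaces. Connection details and table name are placeholders.
import mysql.connector

conn = mysql.connector.connect(user="dba", password="secret",
                               host="127.0.0.1", database="shop")
cur = conn.cursor()
# Rewrites the tablespace with encryption enabled -- can be slow on big tables.
cur.execute("ALTER TABLE customers ENCRYPTION='Y'")
conn.close()
```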

The good news is you are not alone in tackling this challenge. Every company must address it. Because of this focus on security, we felt strongly about ensuring we had a security track at Percona Live 2018 this year. These talks from Fastly, Facebook, Percona, and others provide information on how companies around the globe are tackling these security issues. In true open source fashion, we are better when we learn and grow from one another.

What are the Percona Live 2018 security talks?

We have a ton of great security content this year at Percona Live, across a bunch of technologies and open source software. Some of the more interesting Percona Live 2018 security talks are:

Want to attend Percona Live 2018 security talks? Register for Percona Live 2018. Register now to get the best price! Use the discount code SeeMeSpeakPL18 for 10% off.

Percona Live Open Source Database Conference 2018 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference will be April 23-25, 2018 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Mar
07
2018
--

Percona Live 2018 Featured Talk: Securing Your Data on PostgreSQL with Payal Singh

Welcome to another interview blog for the rapidly approaching Percona Live 2018. Each post in this series highlights a Percona Live 2018 featured talk at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post highlights Payal Singh, DBA at OmniTI Computer Consulting Inc. Her talk is titled Securing Your Data on PostgreSQL. There is often a lack of understanding about how best to manage minimum basic application security features – especially with major security features being released with every major version of PostgreSQL. In our conversation, we discussed how Payal works to improve application security using Postgres:

Percona: Who are you, and how did you get into databases? What was your path to your current responsibilities?

Payal: I’m primarily a data addict. I fell in love with databases when I was first taught about them in high school. The declarative SQL syntax was intuitive to me, and efficient compared to other languages I had used (C and C++). I realized that if given the opportunity, I’d choose to become a database administrator. I joined OmniTI in the summer of 2012 as a web engineer intern during my Masters, but grabbed the chance to work on an internal database migration project. Working with the DBA team gave me a lot of new insight and exposure, especially into open source databases. The more I learned, the more I loved my job. Right after completing my Masters, I joined OmniTI as a full-time database administrator, and never looked back!

Percona: Your talk is titled “Securing Your Data on PostgreSQL”. Why do you think that security (or the lack of it) is such an issue?

Payal: Securing your data is critical. In my experience, the one reason people using commercial databases are apprehensive about switching to open source alternatives is a lack of exposure to their security features. If you look at open source databases today, specifically PostgreSQL, it has the most advanced security features: data encryption, auditing and row-level security, to name a few. People don’t know about them, though. As a FOSS project, we don’t have a centralized marketing team to advertise these features to our potential user base, which makes it necessary to spread information through other channels. Speaking about it at a popular conference like Percona Live is one of them!

In addition to public awareness, Postgres is advancing at a lightning pace. With each new major version released every year, a bunch of new security features and major improvements to existing ones are added. So much so that it becomes challenging to keep up with all these features, even for existing Postgres users. My talk on Postgres security aims to inform current as well as prospective Postgres users about the advanced security features that exist, their use cases, useful tips for using them, the gotchas, what’s lacking and what’s currently under development.
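
For readers new to one of the features Payal mentions, row-level security, here is a minimal sketch of what it looks like in PostgreSQL 9.5 and later, driven from Python. The table, column and session-setting names are invented for illustration.

```python
# Minimal sketch of PostgreSQL row-level security (9.5+). The schema and the
# "app.customer_id" session setting are invented placeholders.
import psycopg2  # assumes the psycopg2 package

conn = psycopg2.connect(dbname="shop", user="dba")
cur = conn.cursor()

# Once enabled, ordinary roles only see rows that a policy allows
# (superusers and the table owner bypass RLS by default).
cur.execute("ALTER TABLE orders ENABLE ROW LEVEL SECURITY")
cur.execute("""
    CREATE POLICY customer_isolation ON orders
    USING (customer_id = current_setting('app.customer_id')::int)
""")
conn.commit()

# An application role's session would then scope itself before querying:
#   SET app.customer_id = '42';
#   SELECT * FROM orders;  -- returns only customer 42's rows
```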

Percona: Is PostgreSQL better or worse with security and security options than either MySQL or MongoDB? Why?

Payal: I may be a little biased, but I think Postgres is the best database from a security point of view. MySQL is pretty close, though! There are quite a few reasons why I consider Postgres to be the best, but I’d like to save that discussion for my talk at Percona Live! For starters, though, I think that Postgres’s authentication and role architecture is significantly clearer and more straightforward than MySQL’s implementation. Focusing strictly on security, I’d also say that access control and management is more granular and customizable in Postgres than it is in MySQL – although here I’d have to say MySQL’s ACL is easier and more intuitive to manage.

Percona: What is the biggest challenge for database security we are facing?

Payal: For all the databases? I’d say with the rapid growth of IoT, encrypted data processing is a huge requirement that none of the well-known databases currently provide. Even encryption of data at rest outside of the IoT context requires more attention. It is one of the few things that a DBMS can do as a last-ditch effort to protect its data in SQL injection attacks, if all other layers of security (network, application layer, etc.) have failed (which very often is the case).

Percona: Why should people attend your talk? What do you hope people will take away from it? 

Payal: My talk is a run-through of all current and future Postgres security features, from the basic to the very advanced and niche. It is not an isolated talk that assumes Postgres is the only database in the world. I often compare and contrast other database implementations of similar security features as well. Not only is it a decent one-hour primer for people new and interested in Postgres, but also a good way to weigh the pros and cons among databases from a security viewpoint.

Percona: What are you looking forward to at Percona Live (besides your talk)?

Payal: I’m looking forward to all the great talks! I got a lot of information out of the talks at Percona Live last year. The tutorials on new MySQL features were especially great!

Want to find out more about this Percona Live 2018 featured talk, and Payal and PostgreSQL security? Register for Percona Live 2018, and see her talk Securing Your Data on PostgreSQL. Register now to get the best price! Use the discount code SeeMeSpeakPL18 for 10% off.

Percona Live Open Source Database Conference 2018 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference will be April 23-25, 2018 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Feb
28
2018
--

OpenStack gets support for virtual GPUs and new container features

OpenStack, the open-source infrastructure project that aims to give enterprises the equivalent of AWS for their private clouds, today announced the launch of its 17th release, dubbed “Queens.” After all of those releases, you’d think that there isn’t all that much new that the OpenStack community could add to the project, but just as the large public clouds keep adding…
