Feb
21
2019
--

Percona Server for MongoDB Operator 0.2.1 Early Access Release Is Now Available

Percona announces the availability of the Percona Server for MongoDB Operator 0.2.1 early access release.

The Percona Server for MongoDB Operator simplifies the deployment and management of Percona Server for MongoDB in a Kubernetes or OpenShift environment. It extends the Kubernetes API with a new custom resource for deploying, configuring and managing the application through the whole life cycle.

Note: PerconaLabs is one of the open source GitHub repositories for unofficial scripts and tools created by Percona staff. These handy utilities can help save you time and effort.

Percona software builds located in the Percona-Lab repository are not officially released software, and also aren’t covered by Percona support or services agreements.

You can install the Percona Server for MongoDB Operator on Kubernetes or OpenShift. While the operator does not support all the Percona Server for MongoDB features in this early access release, instructions on how to install and configure it are already available along with the operator source code in our GitHub repository.

The Percona Server for MongoDB Operator on Percona-Lab is an early access release. Percona doesn’t recommend it for production environments.

Improvements

  • Backups to S3-compatible storage
  • CLOUD-117: Error-proofing functionality was included in this release. It disallows unsafe configurations by default, preventing users from configuring a cluster with more than one Arbiter node or a Replica Set with fewer than three nodes.
    • For those who still need such configurations, this protection can be disabled by setting allowUnsafeConfigurations=true in the deploy/cr.yaml file.
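
For reference, here is a minimal sketch of where that setting might sit in deploy/cr.yaml. The apiVersion, resource name and surrounding field names are illustrative, based on the example file shipped in the operator's repository, and may differ between versions:

```yaml
# deploy/cr.yaml (fragment, illustrative field names)
apiVersion: psmdb.percona.com/v1alpha1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  # Disable the safety checks described above; use with care
  allowUnsafeConfigurations: true
  replsets:
    - name: rs0
      size: 3
```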

Fixed Bugs

  • CLOUD-105: The Service-per-Pod feature used with the LoadBalancer didn’t work with cluster sizes other than 1.
  • CLOUD-137: PVC assigned to the Arbiter Pod had the same size as PVC of the regular Percona Server for MongoDB Pods, despite the fact that Arbiter doesn’t store data.

Percona Server for MongoDB is an enhanced, open source and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB Community Edition. It supports MongoDB protocols and drivers. Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine, as well as several enterprise-grade features. It requires no changes to MongoDB applications or code.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.
Feb
20
2019
--

Google’s hybrid cloud platform is now in beta

Last July, at its Cloud Next conference, Google announced the Cloud Services Platform, its first real foray into bringing its own cloud services into the enterprise data center as a managed service. Today, the Cloud Services Platform (CSP) is launching into beta.

It’s important to note that the CSP isn’t — at least for the time being — Google’s way of bringing all of its cloud-based developer services to the on-premises data center. In other words, this is a very different project from something like Microsoft’s Azure Stack. Instead, the focus is on the Google Kubernetes Engine, which allows enterprises to run their applications in both their own data centers and on virtually any cloud platform that supports containers.

As Google Cloud engineering director Chen Goldberg told me, the idea here is to help enterprises innovate and modernize. “Clearly, everybody is very excited about cloud computing, on-demand compute and managed services, but customers have recognized that the move is not that easy,” she said, and noted that the vast majority of enterprises are adopting a hybrid approach. And while containers are obviously still a very new technology, she feels good about this bet because most enterprises are already adopting containers and Kubernetes — and they are doing so at exactly the same time as they are adopting cloud and especially hybrid clouds.

It’s important to note that CSP is a managed platform. Google handles all of the heavy lifting like upgrades and security patches. And for enterprises that need an easy way to install some of the most popular applications, the platform also supports Kubernetes applications from the GCP Marketplace.

As for the tech itself, Goldberg stressed that this isn’t just about Kubernetes. The service also uses Istio, for example, the increasingly popular service mesh that makes it easier for enterprises to secure and control the flow of traffic and API calls between its applications.

With today’s release, Google is also launching its new CSP Config Management tool to help users create multi-cluster policies and set up and enforce access controls, resource quotas and more. CSP also integrates with Google’s Stackdriver Monitoring service and continuous delivery platforms.

“On-prem is not easy,” Goldberg said, and given that this is the first time the company is really supporting software in a data center that is not its own, that’s probably an understatement. But Google also decided that it didn’t want to force users into a specific set of hardware specifications like Azure Stack does, for example. Instead, CSP sits on top of VMware’s vSphere server virtualization platform, which most enterprises already use in their data centers anyway. That surely simplifies things, given that this is a very well-understood platform.

Jan
28
2019
--

Percona Server for MongoDB Operator 0.2.0 Early Access Release Is Now Available

Percona announces the availability of the Percona Server for MongoDB Operator 0.2.0 early access release.

The Percona Server for MongoDB Operator simplifies the deployment and management of Percona Server for MongoDB in a Kubernetes or OpenShift environment. It extends the Kubernetes API with a new custom resource for deploying, configuring and managing the application through the whole life cycle.

Note: PerconaLabs is one of the open source GitHub repositories for unofficial scripts and tools created by Percona staff. These handy utilities can help save you time and effort.

Percona software builds located in the Percona-Lab repository are not officially released software, and also aren’t covered by Percona support or services agreements.

You can install the Percona Server for MongoDB Operator on Kubernetes or OpenShift. While the operator does not support all the Percona Server for MongoDB features in this early access release, instructions on how to install and configure it are already available along with the operator source code in our GitHub repository.

The Percona Server for MongoDB Operator on Percona-Lab is an early access release. Percona doesn’t recommend it for production environments. 

New features

  • Percona Server for MongoDB backups are now supported and can be performed on a schedule or on demand.
  • Percona Server for MongoDB Operator now supports Replica Set Arbiter nodes to reduce disk IO and occupied space if needed.
  • Service per Pod operation mode implemented in this version allows assigning external or internal static IP addresses to the Replica Set nodes.
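
As a rough sketch, these new features might be switched on in deploy/cr.yaml along the following lines. All field names here are illustrative, modeled on the example configuration in the operator's repository, and may not match your operator version exactly:

```yaml
# deploy/cr.yaml (fragment, illustrative field names)
spec:
  replsets:
    - name: rs0
      size: 2
      arbiter:
        enabled: true            # data-less voting member: keeps an odd vote count
        size: 1                  # while reducing disk IO and occupied space
      expose:
        enabled: true            # Service-per-Pod mode: one Service per node,
        exposeType: LoadBalancer # giving each Replica Set node a static address
  backup:
    enabled: true
    tasks:
      - name: daily-backup
        enabled: true
        schedule: "0 0 * * *"    # standard cron syntax: every day at midnight
```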

Improvements

  • CLOUD-76: Several Percona Server for MongoDB clusters can now share one namespace.

Fixed Bugs

  • CLOUD-97: The Replica Set watcher was not stopped automatically after the custom resource deletion.
  • CLOUD-46: When k8s-mongodb-initiator was running on an already-initialized Replica Set, it still attempted to initiate it.
  • CLOUD-45: The operator was unnecessarily removing MongoDB nodes from the Replica Set during a Pod update.
  • CLOUD-51: It was not possible to set requests without limits in the custom resource configuration.
  • CLOUD-52: It was not possible to set limits without requests in the custom resource configuration.
  • CLOUD-89: The k8s-mongodb-initiator was exiting with exit code 1 instead of 0 if the Replica Set initiation had already happened, e.g., when a custom resource was deleted and recreated without deleting PVC data.
  • CLOUD-96: The operator was crashing after a re-create of a custom resource that still had old PVC data, which caused it to skip Replica Set initiation.

Percona Server for MongoDB is an enhanced, open source and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB Community Edition. It supports MongoDB protocols and drivers. Percona Server for MongoDB extends MongoDB Community Edition functionality by including the Percona Memory Engine, as well as several enterprise-grade features. It requires no changes to MongoDB applications or code.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system.
Jan
08
2019
--

Percona Live 2019 Tracks

Percona Live, the Open Source Database Conference 2019 in North America, has moved to Austin, Texas: a cool place to be, and host to many big names in the tech space. Read what Dave Stokes, MySQL Community Manager for Oracle, has to say in favor of Austin.

If you need a conference ticket for Austin, put in your proposal now!

Those who are successful with their presentation or tutorial submissions will receive a pass to the full three days of the event. Closing date for the call for papers is Sunday, January 20.

Percona is adopting an industry trend by organizing the conference into 13 separate tracks with one Percona expert coordinating community input for each one. We believe subject-specific mini-committees of experts should provide better results than a single mega-committee covering everything.

The MySQL track is being led by Alkin Tezuysal, Senior Technical Manager.

MariaDB is the responsibility of Sveta Smirnova, Principal Support Escalation Specialist.

MongoDB is being driven by Consultant Doug Duncan.

PostgreSQL is being pushed forward by Avinash Vallarapu, PostgreSQL Support Tech Lead.

Other Open Source Databases: this important challenge has been handed to Senior Support Engineer Agustín Gallego.

Java Development for Open Source Databases might be of interest to developers and is being led by Rodrigo Trindade, Service Delivery Manager.

The Kubernetes track is being headed by Mykola Marzhan, our Kubernetes Technical Lead.

Database Security and Compliance will be overseen by Denis Farar, General Counsel and VP of HR (but make no mistake, this is still a track where tech content is very welcome).

Automation & AI topics, at the leading edge of database technology challenges, are the responsibility of Max Bubenick, Platform Lead.

Observability & Monitoring talk selection will be led by Roma Novikov, Director of Platform Engineering – so get those PMM and other OS monitoring proposals at the ready!

Polyglot Persistence is in the hands of our Senior Software Engineer Ibrar Ahmed, who is waiting to hear all about your experiences with cross-database applications, data exchange and how to meet the challenges of a hybrid database world.

Migration to Open Source Databases, a similar-but-different track full of challenges parallel to those of polyglot applications, is being watched over by Marco Tusa, Managing Consultant.

The Business & Enterprise track will be driven by Brian Walters, Director of Solution Engineering, who is keen to hear your case studies and experiences of the impact of open source databases on your processes and organizations.

Cloud is a special case, since it touches on virtually all aspects of open source database technology. If your talk has particular relevance to ‘cloud’ then please add this track with your submission. Similarly Innovative Technologies can apply across the board, and if you have something to share that is truly new, then add that to your track list. Those that are most exciting in the context of cloud or innovative in their approach may be selected for their cloud or innovation merit, whichever track they belong to.

Our track champions will engage with community experts to select papers and shape content. If you would like to contribute by taking on talk selection, please let me know.

New speakers, and those with less experience, are welcome. We are here to help, so first check out my community blog post with links to info and video workshops on how to put together a selection-worthy proposal. Even old hands might find some inspiration!

All in all, we think this is a great move, with the track champions contributing their passion, experience and knowledge of contemporary open source issues to the development of excellent content. Although we’re changing several things at once, no one gets a prize for standing still. We hope you’ll continue to support this great, open source, database-focused event and grow with us! Put a note in your diary to join us from May 28 – 30 in Austin, Texas.

Finally, if you would like to get in touch with any of our track champions, please let me know.

Dec
11
2018
--

The Cloud Native Computing Foundation adds etcd to its open-source stable

The Cloud Native Computing Foundation (CNCF), the open-source home of projects like Kubernetes and Vitess, today announced that its technical committee has voted to bring a new project on board. That project is etcd, the distributed key-value store that was first developed by CoreOS (now owned by Red Hat, which in turn will soon be owned by IBM). Red Hat has now contributed this project to the CNCF.

Etcd, which is written in Go, is already a major component of many Kubernetes deployments, where it functions as a source of truth for coordinating clusters and managing the state of the system. Other open-source projects that use etcd include Cloud Foundry, and companies that use it in production include Alibaba, ING, Pinterest, Uber, The New York Times and Nordstrom.

“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF, in today’s announcement. “Etcd is a fantastic addition to our community of projects.”

Today, etcd has well over 450 contributors and nine maintainers from eight different companies. The fact that it ended up at the CNCF is only logical, given that the foundation is also the host of Kubernetes. With this, the CNCF now plays host to 17 projects that fall under its “incubated technologies” umbrella. In addition to etcd, these include OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor. Kubernetes, Prometheus and Envoy have already graduated from this incubation stage.

That’s a lot of projects for one foundation to manage, but the CNCF community is also extraordinarily large. This week alone about 8,000 developers are converging on Seattle for KubeCon/CloudNativeCon, the organization’s biggest event yet, to talk all things containers. It surely helps that the CNCF has managed to bring competitors like AWS, Microsoft, Google, IBM and Oracle under a single roof to collaboratively work on building these new technologies. There is a risk of losing focus here, though, something that happened to the OpenStack project when it went through a similar growth and hype phase. It’ll be interesting to see how the CNCF will manage this as it brings on more projects (with Istio, the increasingly popular service mesh, being a likely candidate for coming over to the CNCF as well).

Dec
03
2018
--

Percona Live 2019 Call for Papers is Now Open!

Announcing the opening of the Percona Live 2019 Open Source Database Conference call for papers. It will be open from now until January 20, 2019. The Percona Live Open Source Database Conference 2019 takes place May 28-30 in Austin, Texas.

Our theme this year is CONNECT. ACCELERATE. INNOVATE.

As a speaker at Percona Live, you’ll have the opportunity to CONNECT with your peers—open source database experts and enthusiasts who share your commitment to improving knowledge and exchanging ideas. ACCELERATE your projects and career by presenting at the premier open source database event, a great way to build your personal and company brands. And influence the evolution of the open source software movement by demonstrating how you INNOVATE!

Community initiatives remain core to the open source ethos, and we are proud of the contribution we make with Percona Live in showcasing thought leading practices in the open source database world.

With a nod to innovation, this year we are introducing a business track to benefit those business leaders who are exploring the use of open source and are interested in learning more about its costs and benefits.

Speaking Opportunities

The Percona Live Open Source Database Conference 2019 Call for Papers is open until January 20, 2019. We invite you to submit your speaking proposal for breakout, tutorial or lightning talk sessions. Classes and talks are invited for Foundation (either entry-level or of general interest to all), Core (intermediate), and Masterclass (advanced) levels.

  • Breakout Session. Broadly cover a technology area using specific examples. Sessions should be either 25 minutes or 50 minutes in length (including Q&A).
  • Tutorial Session. Present a technical session that aims for a level between a training class and a conference breakout session. We encourage attendees to bring and use laptops for working on detailed and hands-on presentations. Tutorials will be three or six hours in length (including Q&A).
  • Lightning Talk. Give a five-minute presentation focusing on one key point that interests the open source community: technical, lighthearted or entertaining talks on new ideas, a successful project, a cautionary story, a quick tip or demonstration.

If your proposal is selected for breakout or tutorial sessions, you will receive a complimentary full conference pass.

Topics and Themes

We want proposals that cover the many aspects of application development using all open source databases, as well as new and interesting ways to monitor and manage database environments. Did you just embrace open source databases this year? What are the technical and business values of moving to or using open source databases? How did you convince your company to make the move? Was there tangible ROI?

Best practices and current trends, including design, application development, performance optimization, HA and clustering, cloud, containers and new technologies –  what’s holding your focus? Share your case studies, experiences and technical knowledge with an engaged audience of open source peers.

In the submission entry, indicate which of these themes your proposal best fits: tutorial; business needs; case studies/use cases; operations; or development. Also include which track(s) from the list below would be best suited to your talk.

Tracks

The conference committee is looking for proposals that cover the many aspects of using, deploying and managing open source databases, including:

  • MySQL. Do you have an opinion on what is new and exciting in MySQL? With the release of MySQL 8.0, are you using the latest features? How and why? Are they helping you solve any business issues, or making deployment of applications and websites easier, faster or more efficient? Did the new release influence you to change to MySQL? What do you see as the biggest impact of the MySQL 8.0 release? Do you use MySQL in conjunction with other databases in your environment?
  • MariaDB. Talks highlighting MariaDB and MariaDB compatible databases and related tools. Discuss the latest features, how to optimize performance, and demonstrate the best practices you’ve adopted from real production use cases and applications.
  • PostgreSQL. Why do you use PostgreSQL as opposed to other SQL options? Have you done a comparison or benchmark of PostgreSQL vs. other types of databases related to your applications? Why, and what were the results? How does PostgreSQL help you with application performance or deployment? How do you use PostgreSQL in conjunction with other databases in your environment?
  • MongoDB. Has the 4.0 release improved your experience in application development or time-to-market? How are the new features making your database environment better? What is it about MongoDB 4.0 that excites you? What are your experiences with Atlas? Have you moved to it, and has it lived up to its promises? Do you use MongoDB in conjunction with other databases in your environment?
  • Polyglot Persistence. How are you using multiple open source databases together? What tools and technologies are helping you to get them interacting efficiently? In what ways are multiple databases working together helping to solve critical business issues? What are the best practices you’ve discovered in your production environments?
  • Observability and Monitoring. How are you designing your database-powered applications for observability? What monitoring tools and methods are providing you with the best application and database insights for running your business? How are you using tools to troubleshoot issues and bottlenecks? How are you observing your production environment in order to understand the critical aspects of your deployments? 
  • Kubernetes. How are you running open source databases on Kubernetes, OpenShift and other container platforms? What software are you using to facilitate their use? What best practices and processes are making containers a vital part of your business strategy?
  • Automation and AI. How are you using automation to run databases at scale? Are you using automation to create self-running, self-healing, and self-tuning databases? Is machine learning and artificial intelligence (AI) helping you create a new generation of database automation?
  • Migration to Open Source Databases. How are you migrating to open source databases? Are you migrating on-premises or to the cloud? What are the tools and strategies you’ve used that have been successful, and what have you learned during and after the migration? Do you have real-world migration stories that illustrate how best to migrate?
  • Database Security and Compliance. All of us have experienced security and compliance challenges. From new legislation like GDPR, PCI and HIPAA, exploited software bugs, or new threats such as ransomware attacks, when is enough “enough”? What are your best practices for preventing incursions? How do you maintain compliance as you move to the cloud? Are you finding that security and compliance requirements are preventing your ability to be agile?
  • Other Open Source Databases. There are many, many great open source database software and solutions we can learn about. Submit other open source database talk ideas – we welcome talks for both established database technologies as well as the emerging new ones that no one has yet heard about (but should).
  • Business and Enterprise. Has your company seen big improvements in ROI from using Open Source Databases? Are there efficiency levels or interesting case studies you want to share? How did you convince your company to move to Open Source?

How to Respond to the Call for Papers

For information on how to submit your proposal, visit our call for papers page.

Sponsorship

If you would like to obtain a sponsor pack for Percona Live Open Source Database Conference 2019, you will find more information including a prospectus on our sponsorship page. You are welcome to contact me, Bronwyn Campbell, directly.

Nov
06
2018
--

VMware acquires Heptio, the startup founded by 2 co-founders of Kubernetes

During its big customer event in Europe, VMware announced another acquisition to step up its game in helping enterprises build and run containerised, Kubernetes-based architectures: it has acquired Heptio, a startup out of Seattle that was co-founded by Joe Beda and Craig McLuckie, who were two of the three people who co-created Kubernetes back at Google in 2014 (it has since been open sourced).

Beda and McLuckie and their team will all be joining VMware in the transaction.

Terms of the deal are not being disclosed — VMware said in a release that they are not material to the company — but as a point of reference, when Heptio last raised money — a $25 million Series B in 2017, with investors including Lightspeed, Accel and Madrona — it was valued at $117 million post-money, according to data from PitchBook.

Given the pedigree of Heptio’s founders, this is a signal of the big bet that VMware is taking on Kubernetes, and the belief that it will become an increasing cornerstone in how enterprises run their businesses. The larger company already works with 500,000+ customers globally, and 75,000 partners. It’s not clear how many customers Heptio worked with but they included large, tech-forward businesses like Yahoo Japan.

It’s also another endorsement of the ongoing rise of open source and its role in cloud architectures, a paradigm that got its biggest boost at the end of October with IBM’s acquisition of RedHat, one of the biggest tech acquisitions of all time at $34 billion.

Heptio provides professional services for enterprises that are adopting or already use Kubernetes, offering training and support and building open-source projects for managing specific aspects of Kubernetes and related container clusters. This deal is about VMware using that expertise to expand the business funnel and margins for Kubernetes within its wider cloud, on-premise and hybrid storage and computing services.

“Kubernetes is emerging as an open framework for multi-cloud infrastructure that enables enterprise organizations to run modern applications,” said Paul Fazzone, senior vice president and general manager, Cloud Native Apps Business Unit, VMware, in a statement. “Heptio products and services will reinforce and extend VMware’s efforts with PKS to establish Kubernetes as the de facto standard for infrastructure across clouds upon closing. We are thrilled that the Heptio team led by Craig and Joe will be joining VMware to help us guide customers as they move to a multi-cloud world.”

VMware and its Pivotal business already offer Kubernetes-related services by way of PKS, which lets organizations run cloud-agnostic apps. Heptio will become a part of that wider portfolio.

“The team at Heptio has been focused on Kubernetes, creating products that make it easier to manage multiple clusters across multiple clouds,” said Craig McLuckie, CEO and co-founder of Heptio. “And now we will be tapping into VMware’s cloud native resources and proven ability to execute, amplifying our impact. VMware’s interest in Heptio is a recognition that there is so much innovation happening in open source. We are jointly committed to contribute even more to the community—resources, ideas and support.”

VMware has made some 33 acquisitions overall, according to Crunchbase, but this appears to have been the first specifically to boost its position in Kubernetes.

The deal is expected to close by fiscal Q4 2019, VMware said.

Oct
11
2018
--

New Relic acquires Belgium’s CoScale to expand its monitoring of Kubernetes containers and microservices

New Relic, a provider of analytics and monitoring around a company’s internal and external facing apps and services to help optimise their performance, is making an acquisition today as it continues to expand a newer area of its business, containers and microservices. The company has announced that it has purchased CoScale, a provider of monitoring for containers and microservices, with a specific focus on Kubernetes.

Terms of the deal — which will include the team and technology — are not being disclosed, as it will not have a material impact on New Relic’s earnings. The larger company is traded on the NYSE (ticker: NEWR), has been on a strong upswing in the last two years, and its current market cap is around $4.6 billion.

Originally founded in Belgium, CoScale had raised $6.4 million and was last valued at $7.9 million, according to PitchBook. Investors included Microsoft (via its ScaleUp accelerator), PMV and the Qbic Fund, two Belgian investors.

“We are thrilled to bring CoScale’s knowledge and deeply technical team into the New Relic fold,” noted Ramon Guiu, senior director of product management at New Relic. “The CoScale team members joining New Relic will focus on incorporating CoScale’s capabilities and experience into continuing innovations for the New Relic platform.”

The deal underscores how New Relic has had to shift in the last couple of years: when the company was founded years ago, application monitoring was a relatively easy task, with the web and a specified number of services the limit of what needed attention. But services, apps and functions have become increasingly complex and now tap data stored across a range of locations and devices, and processing everything generates a lot of computing demand.

New Relic first added container and microservices monitoring to its stack in 2016. That’s a somewhat late arrival to the area, but New Relic CEO Lew Cirne believes it came at just the right time, dovetailing New Relic’s changes with wider shifts in the market.

“We think those changes have actually been an opportunity for us to further differentiate and further strengthen our thesis that the New Relic way is really the most logical way to address this,” he told my colleague Ron Miller last month. As Ron wrote, Cirne’s take is that New Relic has always been centered on the code, as opposed to the infrastructure where it’s delivered, and that has helped it make adjustments as the delivery mechanisms have changed.

New Relic already provides monitoring for Kubernetes, Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (EKS), Microsoft Azure Kubernetes Service (AKS) and Red Hat OpenShift, and the idea is that CoScale will help it ramp up across that range, while also adding Docker and OpenShift to the mix, as well as offering new services down the line to serve the DevOps community.

“The visions of New Relic and CoScale are remarkably well aligned, so our team is excited that we get to join New Relic and continue on our journey of helping companies innovate faster by providing them visibility into the performance of their modern architectures,” said CoScale CEO Stijn Polfliet, in a statement. “[Co-founder] Fred [Ryckbosch] and I feel like this is such an exciting space and time to be in this market, and we’re thrilled to be teaming up with the amazing team at New Relic, the leader in monitoring modern applications and infrastructure.”

Oct
10
2018
--

Google Cloud expands its networking feature with Cloud NAT

It’s a busy week for news from Google Cloud, which is hosting its Next event in London. Today, the company used the event to launch a number of new networking features. The marquee launch today is Cloud NAT, a new service that makes it easier for developers to build cloud-based services that don’t have public IP addresses and can only be accessed from applications within a company’s virtual private cloud.

As Google notes, building this kind of setup was already possible, but it wasn’t easy. Obviously, this is a pretty common use case, though, so with Cloud NAT, Google now offers a fully managed service that handles all the network address translation (hence the NAT) and provides access to these private instances behind the Cloud NAT gateway.

Cloud NAT supports Google Compute Engine virtual machines as well as Google Kubernetes Engine containers, and offers both a manual mode where developers can specify their IPs and an automatic mode where IPs are automatically allocated.
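
As a rough sketch of what setting this up might look like with the gcloud CLI, the commands below create a Cloud Router and attach a NAT configuration in automatic mode. The resource names are placeholders, and the flags follow the Cloud NAT documentation for the beta, so they may have changed since:

```shell
# Create a Cloud Router in the region where the private instances live
gcloud compute routers create my-nat-router \
    --network=my-vpc \
    --region=us-central1

# Attach a NAT configuration in automatic mode, where external IPs are
# allocated for you; manual mode would pass --nat-external-ip-pool instead
gcloud compute routers nats create my-nat-config \
    --router=my-nat-router \
    --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```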

Also new in today’s release is Firewall Rules Logging, which is now in beta. Using this feature, admins can audit, verify and analyze the effects of their firewall rules. That means when there are repeated connection attempts that the firewall blocked, you can now analyze those and see whether somebody was up to no good or whether somebody misconfigured the firewall. Because the data is only delayed by about five seconds, the service provides near real-time access to this data — and you can obviously tie this in with other services like Stackdriver Logging, Cloud Pub/Sub and BigQuery to create alerts and further analyze the data.
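Logging is toggled per firewall rule. As a rough sketch (the rule name is a placeholder, and since the feature is in beta the command sits under the `beta` track):

```shell
# Turn on logging for an existing firewall rule; matched connections
# then flow into Stackdriver Logging, from where they can be exported
# to Pub/Sub or BigQuery for alerting and analysis.
gcloud beta compute firewall-rules update allow-ssh \
    --enable-logging
```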

Also new today are managed TLS certificates for HTTPS load balancers. The idea here is to take the hassle out of managing TLS certificates (the kind of certificates that ensure a user's browser creates a secure connection to your app) when there is a load balancer in play. This feature, too, is now in beta.
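With a Google-managed certificate, you declare the domains and Google handles issuance and renewal. A minimal sketch, with hypothetical certificate, proxy, and domain names:

```shell
# Create a Google-managed certificate for the given domain
# (beta at the time of this article).
gcloud beta compute ssl-certificates create www-cert \
    --domains=www.example.com

# Attach it to the load balancer's HTTPS target proxy.
gcloud compute target-https-proxies update my-https-proxy \
    --ssl-certificates=www-cert
```

Provisioning completes once the domain's DNS points at the load balancer's IP, after which renewal is automatic.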

Oct
09
2018
--

Cloud Foundry expands its support for Kubernetes

Not too long ago, the Cloud Foundry Foundation was all about Cloud Foundry, the open source platform as a service (PaaS) project that’s now in use by most of the Fortune 500 enterprises. This project is the Cloud Foundry Application Runtime. A year ago, the Foundation also announced the Cloud Foundry Container Runtime that helps businesses run the Application Platform and their container-based applications in parallel. In addition, Cloud Foundry has also long been the force behind BOSH, a tool for building, deploying and managing cloud applications.

The addition of the Container Runtime a year ago seemed to muddle the organization's mission a bit, but now that the dust has settled, the intent here is starting to become clearer. As Cloud Foundry CTO Chip Childers told me, what enterprises are mostly using the Container Runtime for is running the pre-packaged applications they get from their vendors. “The Container Runtime — or really any deployment of Kubernetes — when used next to or in conjunction with the App Runtime, that’s where people are largely landing packaged software being delivered by an independent software vendor,” he told me. “Containers are the new CD-ROM. You just want to land it in a good orchestration platform.”

Because the Application Runtime launched well before Kubernetes was a thing, the Cloud Foundry project built its own container service, called Diego.

Today, the Cloud Foundry Foundation is launching two new Kubernetes-related projects that take the integration between the two to a new level. The first is Project Eirini, which was launched by IBM and is now being worked on by Suse and SAP as well. This project has been a long time in the making and it’s something the community has expected for a while. It basically allows developers to choose between the existing Diego orchestrator and Kubernetes when it comes to deploying applications written for the Application Runtime. That’s a big deal for Cloud Foundry.

“What Eirini does, is it takes that Cloud Foundry Application Runtime — that core PaaS experience that the [Cloud Foundry] brand is so tied to and it allows the underlying Diego scheduler to be replaced with Kubernetes as an option for those use cases that it can cover,” Childers explained. He added that there are still some use cases the Diego container management system is better suited for than Kubernetes. One of those is better Windows support — something that matters quite a bit to the enterprise companies that use Cloud Foundry. Childers also noted that the multi-tenancy guarantees of Kubernetes are a bit less stringent than Diego’s.

The second new project is CF Containerization, which was initially developed by Suse. As the name implies, CF Containerization basically allows you to package the core Cloud Foundry Application Runtime and deploy it in Kubernetes clusters with the help of the BOSH deployment tool. This is pretty much what Suse is already using to ship its Cloud Foundry distribution.

Clearly then, Kubernetes is becoming part and parcel of what the Cloud Foundry PaaS service will sit on top of and what developers will use to deploy the applications they write for it in the near future. At first glance, this focus on Kubernetes may look like it’s going to make Cloud Foundry superfluous, but it’s worth remembering that, at its core, the Cloud Foundry Application Runtime isn’t about infrastructure but about a developer experience and methodology that aims to manage the whole application development lifecycle. If Kubernetes can be used to help manage that infrastructure, then the Cloud Foundry project can focus on what it does best, too.
