Dec 11, 2023
--

Storage Strategies for PostgreSQL on Kubernetes

Deploying PostgreSQL on Kubernetes is not new and can be easily streamlined through various Operators, including Percona’s. There is a wealth of options for how you can approach storage configuration in Percona Operator for PostgreSQL, and in this blog post, we review various storage strategies, from the basics to more sophisticated use cases.

The basics

Setting StorageClass

The StorageClass resource in Kubernetes allows users to set various parameters of the underlying storage. For example, you can choose the public cloud storage type (gp3, io2, etc.) or set the file system.

You can check existing storage classes by running the following command:

$ kubectl get sc
NAME                      PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
premium-rwo               pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   54m
regionalpd-storageclass   pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   false                  51m
standard                  kubernetes.io/gce-pd    Delete          Immediate              true                   54m
standard-rwo (default)    pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   54m

As you can see, standard-rwo is the default StorageClass, meaning that if you don’t specify anything, the Operator will use it.

To instruct Percona Operator for PostgreSQL which storage class to use, set it in the spec.instances[].dataVolumeClaimSpec section:

    dataVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      storageClassName: STORAGE_CLASS_NAME
      resources:
        requests:
          storage: 1Gi
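
For context, here is a minimal sketch of a full Custom Resource with this claim spec in place; the cluster name, instance group name, replica count, and storage class below are illustrative placeholders:

apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: cluster1
spec:
  instances:
  - name: instance1
    replicas: 3
    dataVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      storageClassName: premium-rwo
      resources:
        requests:
          storage: 1Gi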

Separate volume for Write-Ahead Logs

Write-Ahead Logs (WALs) record every transaction in your PostgreSQL deployment. They are useful for point-in-time recovery and for minimizing your Recovery Point Objective (RPO). In Percona Operator, it is possible to have a separate volume for WALs to minimize the impact on performance and storage capacity. To set it, use the spec.instances[].walVolumeClaimSpec section:

    walVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      storageClassName: STORAGE_CLASS_NAME
      resources:
        requests:
          storage: 1Gi

If you enable walVolumeClaimSpec, the Operator will create two volumes per replica Pod, one for data and one for WAL:

cluster1-instance1-8b2m-pgdata   Bound    pvc-2f919a49-d672-49cb-89bd-f86469241381   1Gi        RWO            standard-rwo   36s
cluster1-instance1-8b2m-pgwal    Bound    pvc-bf2c26d8-cf42-44cd-a053-ccb6abadd096   1Gi        RWO            standard-rwo   36s
cluster1-instance1-ncfq-pgdata   Bound    pvc-7ab7e59f-017a-4655-b617-ff17907ace3f   1Gi        RWO            standard-rwo   36s
cluster1-instance1-ncfq-pgwal    Bound    pvc-51baffcf-0edc-472f-9c95-7a0cea3e6507   1Gi        RWO            standard-rwo   36s
cluster1-instance1-w4d8-pgdata   Bound    pvc-c60282ed-3599-4033-afc7-e967871efa1b   1Gi        RWO            standard-rwo   36s
cluster1-instance1-w4d8-pgwal    Bound    pvc-ef530cb4-82fb-4661-ac76-ee7fda1f89ce   1Gi        RWO            standard-rwo   36s

Changing storage size

If your StorageClass and storage interface (CSI) support volume expansion, you can simply change the storage size in the Custom Resource manifest. The Operator will do the rest and expand the storage automatically. This is a zero-downtime operation and is limited only by the capabilities of the underlying storage.
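
As an illustrative sketch of that workflow, assuming the storage class supports online expansion and your cluster manifest lives in cr.yaml (a placeholder file name):

# Verify that the StorageClass allows expansion (should print "true")
$ kubectl get sc standard-rwo -o jsonpath='{.allowVolumeExpansion}'

# In cr.yaml, raise the request under dataVolumeClaimSpec (e.g., storage: 1Gi -> 2Gi),
# then re-apply the manifest and let the Operator and the CSI driver do the rest
$ kubectl apply -f cr.yaml

# Watch the PVCs being resized in place
$ kubectl get pvc --watch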

Changing storage

It is also possible to change storage capabilities, such as the filesystem, IOPS, and type. Right now, this is done by creating a new storage class and applying it to a new instance group:

spec:
  instances:
  - name: new-group
    dataVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      storageClassName: NEW_STORAGE_CLASS
      resources:
        requests:
          storage: 2Gi

Creating a new instance group replicates the data to new replica nodes. This is done without downtime, but replication might introduce additional load on the primary node and the network. 

There is work in progress under Kubernetes Enhancement Proposal (KEP) #3780. It will allow users to change various volume attributes on the fly rather than through the storage class.
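
The work in that area introduces a VolumeAttributesClass resource. As a rough sketch only: the API is alpha and still changing, so the apiVersion, driver name, and parameters below are illustrative and will vary by Kubernetes release and CSI driver:

apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttributesClass
metadata:
  name: fast-volumes
driverName: pd.csi.storage.gke.io
parameters:
  iops: "4000"
  throughput: "200"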

Data persistence

Finalizers

By default, the Operator keeps the storage and secret resources if the cluster is deleted. We do this to protect users from human error and other mishaps. This way, the user can quickly start the cluster again, reusing the existing storage and secrets.

This default behavior can be changed by enabling a finalizer in the Custom Resource: 

apiVersion: pgv2.percona.com/v2
kind: PerconaPGCluster
metadata:
  name: cluster1
  finalizers:
  - percona.com/delete-pvc
  - percona.com/delete-ssl

This is useful for non-production clusters where you don’t need to keep the data. 
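
As a sketch, assuming the custom resource name resolves as perconapgcluster for kubectl, you can also set the finalizers on an already-running cluster; the cluster name is a placeholder:

# Illustrative only: a JSON merge patch replaces the whole finalizers list,
# so include any existing finalizers you want to keep
$ kubectl patch perconapgcluster cluster1 --type merge \
    -p '{"metadata":{"finalizers":["percona.com/delete-pvc","percona.com/delete-ssl"]}}'

# Deleting the cluster now also removes its PVCs and TLS secrets
$ kubectl delete perconapgcluster cluster1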

StorageClass data protection

There are extreme cases where human error is inevitable. For example, someone might delete the whole Kubernetes cluster or a namespace. Fortunately, the StorageClass resource comes with a reclaimPolicy option, which can instruct the Container Storage Interface (CSI) driver to keep the underlying volumes. This option is not controlled by the Operator, and you should set it for the StorageClass separately:

apiVersion: storage.k8s.io/v1
kind: StorageClass
...
provisioner: pd.csi.storage.gke.io
- reclaimPolicy: Delete
+ reclaimPolicy: Retain

In this case, even if Kubernetes resources are deleted, the physical storage is still there.
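
Keep in mind that reclaimPolicy is applied when a volume is provisioned, so it does not change PersistentVolumes that already exist; those can be switched individually, for example:

# Flip an existing PersistentVolume to Retain so it survives PVC deletion
$ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'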

Regional disks

Regional disks are available on Azure and Google Cloud but not yet on AWS. In a nutshell, a regional disk is a disk that is replicated across two availability zones (AZs).

Kubernetes Regional disks

To use regional disks, you need a storage class that specifies the AZs in which the volume will be available and replicated:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: regionalpd-storageclass
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-central1-a
    - us-central1-b
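
To consume it, reference the class in the cluster’s volume claim spec, for example:

    dataVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      storageClassName: regionalpd-storageclass
      resources:
        requests:
          storage: 1Gi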

There are some scenarios where regional disks can help with cost reduction. Let’s review three PostgreSQL topologies:

  1. Single node with regular disks
  2. Single node with regional disks
  3. PostgreSQL Highly Available cluster with regular disks

If we apply availability zone failure to these topologies, we will get the following:

  1. Single node with regular disks is the cheapest one, but in case of AZ failure, recovery might take hours or even days – depending on the data.
  2. With a single node and regional disks, you will not spend a dime on compute for replicas, yet you will still recover within minutes.
  3. PostgreSQL cluster provides the best availability, but also comes with high compute costs.

                          Single PostgreSQL node,   Single PostgreSQL node,   PostgreSQL HA,
                          regular disk              regional disks            regular disks
Compute costs             $                         $                         $$
Storage costs             $                         $$                        $$
Network costs             $0                        $0                        $
Recovery Time Objective   Hours                     Minutes                   Seconds

Local storage

One of the ways to reduce your total cost of ownership (TCO) for stateful workloads on Kubernetes and boost your performance is to use local storage as opposed to network disks. Public clouds provide instances with NVMe SSDs that can be utilized in k8s with tools like OpenEBS, Portworx, and more. Local storage is consumed through regular storage classes, and the details deserve a separate blog post.

Kubernetes Local storage
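
As a minimal sketch of the plain-Kubernetes path (without an extra storage layer like OpenEBS or Portworx), local volumes are typically exposed through a no-provisioner StorageClass; the name below is illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-nvme
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer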

Conclusion

In this blog post, we discussed the basics of storage configuration and saw how to fine-tune various storage parameters. There are different topologies, needs, and corresponding strategies for running PostgreSQL on Kubernetes, and depending on your cost, performance, and availability requirements, you have a wealth of options with Percona Operators.

Try out the Percona Operator for PostgreSQL by following the quickstart guide here.

Join the Percona Kubernetes Squad – a group of database professionals at the forefront of innovating database operations on Kubernetes within their organizations and beyond. The Squad is dedicated to providing its members with unwavering support as we all navigate the cloud-native landscape.

Apr 29, 2021
--

Wasabi scores $112M Series C on $700M valuation to take on cloud storage hyperscalers

Taking on Amazon S3 in the cloud storage game would seem to be a foolhardy proposition, but Wasabi has found a way to build storage cheaply and pass the savings on to customers. Today the Boston-based startup announced a $112 million Series C investment on a $700 million valuation.

Fidelity Management & Research Company led the round with participation from previous investors. It reports that it has now raised $219 million in equity so far, along with additional debt financing, but it takes a lot of money to build a storage business.

CEO David Friend says that business is booming and he needed the money to keep it going. “The business has just been exploding. We achieved a roughly $700 million valuation on this round, so you can imagine that business is doing well. We’ve tripled in each of the last three years and we’re ahead of plan for this year,” Friend told me.

He says that demand continues to grow and he’s been getting requests internationally. That was one of the primary reasons he went looking for more capital. What’s more, data sovereignty laws require that certain types of sensitive data like financial and healthcare be stored in-country, so the company needs to build more capacity where it’s needed.

He says they have nailed down the process of building storage, typically inside co-location facilities, and during the pandemic they actually became more efficient as they hired a firm to put together the hardware for them onsite. They also put channel partners like managed service providers (MSPs) and value added resellers (VARs) to work by incentivizing them to sell Wasabi to their customers.

Wasabi storage starts at $5.99 per terabyte per month. That’s a heck of a lot cheaper than Amazon S3, which starts at $0.023 per gigabyte for the first 50 terabytes, or $23.00 a terabyte, considerably more than Wasabi’s offering.

But Friend admits that Wasabi still faces headwinds as a startup. No matter how cheap it is, companies want to be sure it’s going to be there for the long haul and a round this size from an investor with the pedigree of Fidelity will give the company more credibility with large enterprise buyers without the same demands of venture capital firms.

“Fidelity to me was the ideal investor. […] They don’t want a board seat. They don’t want to come in and tell us how to run the company. They are obviously looking toward an IPO or something like that, and they are just interested in being an investor in this business because cloud storage is a virtually unlimited market opportunity,” he said.

He sees his company as the typical kind of market irritant. He says that his company has run away from competitors in his part of the market and the hyperscalers are out there not paying attention because his business remains a fraction of theirs for the time being. While an IPO is far off, he took on an institutional investor this early because he believes it’s possible eventually.

“I think this is a big enough market we’re in, and we were lucky to get in at just the right time with the right kind of technology. There’s no doubt in my mind that Wasabi could grow to be a fairly substantial public company doing cloud infrastructure. I think we have a nice niche cut out for ourselves, and I don’t see any reason why we can’t continue to grow,” he said.

Nov 19, 2020
--

Amazon S3 Storage Lens gives IT visibility into complex S3 usage

As your S3 storage requirements grow, it gets harder to understand exactly what you have, and this is especially true when your storage crosses multiple regions. This could have broad implications for administrators, who are forced to build their own solutions to get that missing visibility. AWS changed that this week when it announced a new product called Amazon S3 Storage Lens, a way to understand highly complex S3 storage environments.

The tool provides analytics that help you understand what’s happening across your S3 object storage installations, and to take action when needed. “This is the first cloud storage analytics solution to give you organization-wide visibility into object storage, with point-in-time metrics and trend lines as well as actionable recommendations,” the company wrote in a blog post announcing the new service.

Amazon S3 Storage Lens Console

Image Credits: Amazon

The idea is to present a set of 29 metrics in a dashboard that help you “discover anomalies, identify cost efficiencies and apply data protection best practices,” according to the company. IT administrators can get a view of their storage landscape and can drill down into specific instances when necessary, such as if there is a problem that requires attention. The product comes out of the box with a default dashboard, but admins can also create their own customized dashboards, and even export S3 Lens data to other Amazon tools.

For companies with complex storage requirements, as in thousands or even tens of thousands of S3 storage instances, that have had to kludge together ways to understand what’s happening across their systems, this gives them a single view across it all.

S3 Storage Lens is now available in all AWS regions, according to the company.

Nov 17, 2020
--

Dropbox shifts business product focus to remote work with Spaces update

In a September interview at TechCrunch Disrupt, Dropbox co-founder and CEO Drew Houston talked about how the pandemic had forced the company to rethink what work means, and how his company is shifting with the new requirements of a work-from-home world. Today, the company announced broad changes to Dropbox Spaces, the product introduced last year, to make it a collaboration and project management tool designed with these new requirements in mind.

Dropbox president Timothy Young says that the company has always been about making it easy to access files wherever you happen to be and whatever device you happen to be on, whether that was in a consumer or business context. As the company has built out its business products over the last several years, that involved sharing content internally or externally. Today’s announcement is about helping teams plan and execute around the content you create with a strong project focus.

“Now what we’re basically trying to do is really help distributed teams stay organized, collaborate together and keep moving along, but also do so in a really secure way and support IT, administrators and companies with some features around that as well, while staying true to Dropbox principles,” Young said.

This involves updating Spaces to be a full-fledged project management tool designed with a distributed workforce in mind. Spaces connects to other tools like your calendar, people directory, project management software — and of course files. You can create a project, add people and files, then set up a timeline and assign and track tasks. In addition, you can access meetings directly from Spaces and communicate with team members, who can be inside or outside the company.

Houston suggested a product like this could be coming in his September interview when he said:

“Back in March we started thinking about this, and how [the rapid shift to distributed work] just kind of happened. It wasn’t really designed. What if you did design it? How would you design this experience to be really great? And so starting in March we reoriented our whole product road map around distributed work,” he said.

Along these same lines, Young says the company itself plans to continue to be a remote first company even after the pandemic ends, and will continue to build tools to make it easier to collaborate and share information with that personal experience in mind.

Today’s announcement is a step in that direction. Dropbox Spaces has been in private beta and should be available at the beginning of next year.

Nov 9, 2020
--

Qumulo update adds NVMe caching for more efficient use of flash storage

Qumulo, the Seattle-based data storage startup, announced a bunch of updates today, including support for NVMe caching, an approach that should enable customers to access faster flash storage at a lower price point.

NVMe flash storage development is evolving quickly, driving down prices while increasing performance, a win-win situation for large data producers, but it’s still more expensive than traditional drives. Qumulo CEO Bill Richter pointed out that the software still has to take advantage of these changing flash storage dynamics.

To that end, the company claims that with its new NVMe caching capability, it is giving customers the ability to access faster flash storage for the same price as spinning disks by optimizing the software to more intelligently manage data on its platform and take advantage of the higher-performance storage.

The company is also announcing the ability to dynamically scale using the latest technologies such as chips, memory and storage in an automated way. Further, it’s providing automated data encryption at no additional charge, and new instant updates, which it says can be implemented without any downtime. Finally, it has introduced a new interface to make it easier for customers to move their data from on premises data storage to Amazon S3.

Richter says that the company’s mission has always been about creating, managing and consuming massive amounts of file-based data. As the pandemic has taken hold, more companies are moving their data and applications to the cloud.

“The major secular trends that underpin Qumulo’s mission — the massive amount of file-based content, and the use of cloud computing to solve the content challenge, have both accelerated during the pandemic and we have received really clear signs of that,” he said.

Qumulo was founded in 2012 and has raised $351 million. Its most recent raise was a hefty $125 million last July on a valuation over $1.2 billion.

Oct 26, 2020
--

Dropbox begins shift to high-efficiency Western Digital Shingled Magnetic Recording disks

Last year, Dropbox talked about making a shift to Shingled Magnetic Recording, or SMR disks for short, because of the efficiency they can give a high-volume storage platform like theirs. Today, Western Digital announced that Dropbox was one of the first companies to qualify their Ultrastar® DC HC650 20TB host-managed SMR hard disks.

Dropbox’s modern infrastructure story goes back to 2017, when it decided to shift most of its business from being hosted on AWS to building its own infrastructure. As it moved through the process of making that transition in the following years, it was looking for new storage technology ideas to help drive down the cost of running its own massive storage system.

As principal engineer James Cowling told TechCrunch last year, one of the ideas that emerged was using SMR:

What emerged was SMR, which has high storage density and a lower price point. Moving to SMR gave Dropbox the ability to do more with less, increasing efficiency and lowering overall costs — an essential step for a company trying to do this on its own. “It required expertise obviously, but it was also exciting to bring a lot of efficiencies in terms of cost and storage efficiency, while pulling down boundaries between software and hardware,” Cowling said.

As it turns out, Dropbox VP of engineering Andrew Fong says that the company has been working with Western Digital for a number of years and the new SMR technology is the latest step in that partnership.

Western Digital says these drives deliver this cost savings through increased storage density and lower power requirements. “When considering exabyte-scale needs, and associated capital and operating cost of the data center, the long-term value in terms of lower cost-per-TB, higher density, low power and high reliability can help benefit the bottom line,” the company said in a statement.

Time will tell if these disks deliver as promised, but they certainly show a lot of potential for a high-volume user like Dropbox.

Oct 20, 2020
--

Egnyte introduces new features to help deal with security/governance during pandemic

The pandemic has put stress on companies dealing with a workforce that is mostly — and sometimes suddenly — working from home. That has led to rising needs for security and governance tooling, something that Egnyte is looking to meet with new features aimed at helping companies cope with file management during the pandemic.

Egnyte is an enterprise file storage and sharing (EFSS) company, though it has added security services and other tools over the years.

“It’s no surprise that there’s been a rapid shift to remote work, which has I believe led to mass adoption of multiple applications running on multiple clouds, and tied to that has been a nonlinear reaction of exponential growth in data security and governance concerns,” Vineet Jain, co-founder and CEO at Egnyte, explained.

There’s a lot of data at stake.

Egnyte’s announcements today are in part a reaction to the changes that COVID has brought, a mix of net-new features and capabilities that were on its road map, but accelerated to meet the needs of the changing technology landscape.

What’s new?

The company is introducing a new feature called Smart Cache to make sure that content (wherever it lives) that an individual user accesses most will be ready whenever they need it.

“Smart Cache uses machine learning to predict the content most likely to be accessed at any given site, so administrators don’t have to anticipate usage patterns. The elegance of the solution lies in that it is invisible to the end users,” Jain said. The end result of this capability could be lower storage and bandwidth costs, because the system can make this content available in an automated way only when it’s needed.

Another new feature is email scanning and governance. As Jain points out, email is often a company’s largest data store, but it’s also a conduit for phishing attacks and malware. So Egnyte is introducing an email governance tool that keeps an eye on this content, scanning it for known malware and ransomware and blocking files from being put into distribution when it identifies something that could be harmful.

As companies move more files around it’s important that security and governance policies travel with the document, so that policies can be enforced on the file wherever it goes. This was true before COVID-19, but has only become more true as more folks work from home.

Finally, Egnyte is using machine learning for auto-classification of documents to apply policies to documents without humans having to touch them. By identifying the document type automatically, whether it has personally identifying information or it’s a budget or planning document, Egnyte can help customers auto-classify and apply policies about viewing and sharing to protect sensitive materials.

Egnyte is reacting to the market needs as it makes changes to the platform. While the pandemic has pushed this along, these are features that companies with documents spread out across various locations can benefit from regardless of the times.

The company is over $100 million ARR today, and grew 22% in the first half of 2020. Whether the company can accelerate that growth rate in H2 2020 is not yet clear. Regardless, Egnyte is a budding IPO candidate for 2021 if market conditions hold.

Sep 16, 2020
--

Pure Storage acquires data service platform Portworx for $370M

Pure Storage, the public enterprise data storage company, today announced that it has acquired Portworx, a well-funded startup that provides a cloud-native storage and data-management platform based on Kubernetes, for $370 million in cash. This marks Pure Storage’s largest acquisition to date and shows how important this market for multicloud data services has become.

Current Portworx enterprise customers include the likes of Carrefour, Comcast, GE Digital, Kroger, Lufthansa, and T-Mobile. At the core of the service is its ability to help users migrate their data and create backups. It creates a storage layer that allows developers to then access that data, no matter where it resides.

Pure Storage will use Portworx’s technology to expand its hybrid and multicloud services and provide Kubernetes-based data services across clouds.

Image Credits: Portworx

“I’m tremendously proud of what we’ve built at Portworx: An unparalleled data services platform for customers running mission-critical applications in hybrid and multicloud environments,” said Portworx CEO Murli Thirumale. “The traction and growth we see in our business daily shows that containers and Kubernetes are fundamental to the next-generation application architecture and thus competitiveness. We are excited for the accelerated growth and customer impact we will be able to achieve as a part of Pure.”

When the company raised its Series C round last year, Thirumale told me that Portworx had expanded its customer base by over 100% and its bookings increased by 376% from 2018 to 2019.

“As forward-thinking enterprises adopt cloud-native strategies to advance their business, we are thrilled to have the Portworx team and their groundbreaking technology joining us at Pure to expand our success in delivering multicloud data services for Kubernetes,” said Charles Giancarlo, chairman and CEO of Pure Storage. “This acquisition marks a significant milestone in expanding our Modern Data Experience to cover traditional and cloud native applications alike.”

Sep 15, 2020
--

Dropbox CEO Drew Houston says the pandemic forced the company to reevaluate what work means

Dropbox CEO and co-founder Drew Houston, appearing at TechCrunch Disrupt today, said that COVID has accelerated a shift to distributed work that we have been talking about for some time, and these new ways of working will not simply go away when the pandemic is over.

“When you think more broadly about the effects of the shift to distributed work, it will be felt well beyond when we go back to the office. So we’ve gone through a one-way door. This is maybe one of the biggest changes to knowledge work since that term was invented in 1959,” Houston told TechCrunch Editor-In-Chief Matthew Panzarino.

That change has prompted Dropbox to completely rethink the product set over the last six months, as the company has watched the way people work change in such a dramatic way. He said even though Dropbox is a cloud service, no SaaS tool in his view was purpose-built for this new way of working and we have to reevaluate what work means in this new context.

“Back in March we started thinking about this, and how [the rapid shift to distributed work] just kind of happened. It wasn’t really designed. What if you did design it? How would you design this experience to be really great? And so starting in March we reoriented our whole product road map around distributed work,” he said.

He also broadly hinted that the fruits of that redesign are coming down the pike. “We’ll have a lot more to share about our upcoming launches in the future,” he said.

Houston said that his company has adjusted well to working from home, but when they had to shut down the office, he was in the same boat as every other CEO when it came to running his company during a pandemic. Nobody had a blueprint on what to do.

“When it first happened, I mean there’s no playbook for running a company during a global pandemic so you have to start with making sure you’re taking care of your customers, taking care of your employees, I mean there’s so many people whose lives have been turned upside down in so many ways,” he said.

But as he checked in on the customers, he saw them asking for new workflows and ways of working, and he recognized there could be an opportunity to design tools to meet these needs.

“I mean this transition was about as abrupt and dramatic and unplanned as you can possibly imagine, and being able to kind of shape it and be intentional is a huge opportunity,” Houston said.

Houston debuted Dropbox in 2008 at the precursor to TechCrunch Disrupt, then called the TechCrunch 50. He mentioned that the Wi-Fi went out during his demo, proving the hazards of live demos, but offered words of encouragement to this week’s TechCrunch Disrupt Battlefield participants.

Although his is now a public company on a $1.8 billion run rate, Houston went through all the stages of a startup, from getting funding to eventually going public. Even today, as a mature public company, Dropbox is still evolving and changing as it adapts to shifting requirements in the marketplace.

Jul 16, 2020
--

Qumulo scores $125M Series E on $1.2B valuation as storage biz accelerates

Qumulo, a Seattle storage startup helping companies store vast amounts of data, announced a $125 million Series E investment today on a $1.2 billion valuation.

BlackRock led the round with help from Highland Capital Partners, Madrona Venture Group, Kleiner Perkins and new investor Amity Ventures. The company reports it has now raised $351 million.

CEO Bill Richter says the valuation is more than 2x its most recent round, a $93 million Series D in 2018. While the valuation puts his company in the unicorn club, he says that it’s more important than simple bragging rights. “It puts us in the category of raising at a billion-plus dollar level during a very complicated environment in the world. Actually, that’s probably the more meaningful news,” he told TechCrunch.

It typically hasn’t been easy raising money during the pandemic, but Richter reports the company started getting inbound interest in March just before things started shutting down nationally. What’s more, as the company’s quarter closed at the end of April, they had grown almost 100% year over year, and beaten their pre-COVID revenue estimate. He says they saw that as a signal to take additional investment.

“When you’re putting up nearly 100% year over year growth in an environment like this, I think it really draws a lot of attention in a positive way,” he said. And that attention came in the form of a huge round that closed this week.

What’s driving that growth is that the amount of unstructured data, which plays to the company’s storage strength, is accelerating during the pandemic as companies move more of their activities online. He says that when you combine that with a shift to the public cloud, he believes that Qumulo is well positioned.

Today the company has 400 customers and more than 300 employees, with plans to add another 100 before year’s end. As he adds those employees, he says that part of the company’s core principles includes building a diverse workforce. “We took the time as an organization to write out a detailed set of hiring practices that are designed to root out bias in the process,” he said.

One of the keys to that is looking at a broad set of candidates, not just the ones you’ve known from previous jobs. “The reason for that is that when you force people to go through hiring practices, you open up the position to a broader, more diverse set of candidates and you stop the cycle of continuously creating what I call ‘club memberships’, where if you were a member of the club before you’re a member in the future,” he says.

The company has been around since 2012 and spent the first couple of years conducting market research before building its first product. In 2014 it released a storage appliance, but over time it has shifted more toward hybrid solutions.
