Jan
04
2024
--

MySQL General Tablespaces: A Powerful Storage Option for Your Data

Managing storage and performance efficiently in your MySQL database is crucial, and general tablespaces offer flexibility in achieving this. This blog discusses general tablespaces and explores their functionalities, benefits, and practical usage, along with illustrative examples. What are MySQL general tablespaces? In contrast to the single system tablespace that holds system tables by default, general tablespaces are […]

Oct
16
2023
--

Adding Transparent Data Encryption to PostgreSQL with pg_tde: Please Test

PG_TDE is an experimental transparent data encryption access method for PostgreSQL 16 and beyond. This software is under active development and at a very early stage of design and implementation. In the spirit of open and transparent communication, we would appreciate your feedback and invite PostgreSQL users to test the extension and provide feedback either via the GitHub repository or in the forum.

What is TDE?

Transparent Data Encryption (TDE) offers encryption at the file level and solves the problem of protecting data at rest. This is something that is available in other databases but not provided in upstream, vanilla Postgres.

Percona has received user feedback that this would be a useful feature, so we are working on this as an open source extension for Postgres that anyone can deploy. Percona co-founder Peter Zaitsev’s blog on why PostgreSQL needs TDE highlights some of the technical and business reasons why you might want TDE. Since PostgreSQL doesn’t have TDE features yet, Percona wants to provide the TDE feature as an extension to PostgreSQL.

Running pg_tde

The following examples use Docker to demonstrate what is needed to test pg_tde.

stoker@testa:~$ sudo docker run --name pg-tde -e POSTGRES_PASSWORD=mysecretpassword -d perconalab/postgres-tde-ext
2ccbe758f32348e286cb277aed17c1c3f9c880b37f92303bd2266a334096b0b1
Log in to PostgreSQL

We specified POSTGRES_PASSWORD in the docker run command above.

stoker@testa:~$ sudo docker run -it --rm postgres psql -h 172.17.0.2 -U postgres
Password for user postgres:
psql (16.0 (Debian 16.0-1.pgdg120+1))
Type "help" for help.

Verify that pg_tde is installed

Use the psql \dx command to double-check that pg_tde is installed.

postgres=# \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description
---------+---------+------------+------------------------------
 pg_tde  | 1.0     | public     | pg_tde access method
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Another way to check is to try to create the pg_tde extension; the server should inform you that it is already installed, as below. Note that if you create a new database, you will have to create the extension inside it again.

postgres=# CREATE EXTENSION pg_tde;
ERROR: extension "pg_tde" already exists
postgres=#

Now, we can create a table that uses pg_tde.

postgres=# CREATE TABLE sbtest1 ( id SERIAL, k INTEGER DEFAULT '0' NOT NULL, PRIMARY KEY (id)) USING pg_tde;
CREATE TABLE

And now you can insert data, delete data, update data, and do all the DML you are used to with PostgreSQL.
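For example, a simple INSERT behaves exactly as it would on a regular heap table (the values below are illustrative):

postgres=# INSERT INTO sbtest1 (k) VALUES (1), (2), (3);
INSERT 0 3

We can then inspect the stored rows, including the system columns: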

postgres=# SELECT cmin, cmax, xmin, xmax, ctid, * FROM sbtest1;

 cmin | cmax | xmin | xmax |  ctid  | id | k
------+------+------+------+--------+----+----
    0 |    0 |  744 |    0 | (0,1)  |  1 |  1
    0 |    0 |  744 |    0 | (0,2)  |  2 |  2
    0 |    0 |  744 |    0 | (0,3)  |  3 |  3
    0 |    0 |  744 |    0 | (0,7)  |  7 |  7
    0 |    0 |  744 |    0 | (0,8)  |  8 |  8
    0 |    0 |  744 |    0 | (0,9)  |  9 |  9
    0 |    0 |  744 |    0 | (0,10) | 10 | 10
    0 |    0 |  746 |    0 | (0,11) | 11 | 11
    0 |    0 |  746 |    0 | (0,12) | 12 | 12
    0 |    0 |  746 |    0 | (0,13) | 13 | 13
(10 rows)

postgres=#

Please test pg_tde

Percona wants to make pg_tde your choice of TDE encryption, and for that, we need as many people testing and providing feedback as possible.

Follow the directions above or on the GitHub repository. Please let us know what you like and what you dislike about pg_tde. Let Percona know about any issues you discover, tell us what additional tooling around pg_tde you would like to have, and share any other feedback.

This is open source software, and the old adage about having many eyeballs on the code to ensure its quality is applicable here, as Percona wants your input.

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.

 

Download Percona Distribution for PostgreSQL Today!

Apr
20
2023
--

Using Encryption-at-Rest for PostgreSQL in Kubernetes

encryption-at-rest in PostgreSQL

Data-at-rest encryption is essential for compliance with regulations that require the protection of sensitive data. Encryption can help organizations comply with regulations and avoid legal consequences and fines. It is also critical for securing sensitive data and avoiding data breaches.

PostgreSQL does not natively support Transparent Data Encryption (TDE). TDE is a database encryption technique that encrypts data at the column or table level, as opposed to full-disk encryption (FDE), which encrypts the entire disk.

As for FDE, there are multiple options available for PostgreSQL. In this blog post, you will learn:

  • how to leverage FDE on Kubernetes with Percona Operator for PostgreSQL
  • how to start using encrypted storage for an already running cluster

Prepare

In most public clouds, block storage is not encrypted by default. To enable storage encryption in Kubernetes, you need to modify the StorageClass resource. This instructs the Container Storage Interface (CSI) driver to provision encrypted storage volumes on your block storage (AWS EBS, GCP Persistent Disk, Ceph, etc.).

The configuration of the storage class depends on your storage plugin. For example, in Google Kubernetes Engine (GKE), you need to create the key in Cloud Key Management Service (KMS) and set it in the StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-enc-sc
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  disk-encryption-kms-key: KMS_KEY_ID

Get KMS_KEY_ID by following the instructions in this document.

For AWS EBS, you just need to add an encrypted field; the key in AWS KMS will be generated automatically.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-enc-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  encrypted: 'true'
  fsType: ext4
  type: gp2
volumeBindingMode: WaitForFirstConsumer

Read more about storage encryption in the documentation of your cloud provider or storage project of your choice.

Try it out

Once you have the StorageClass created, it is time to use it. I will use Percona Operator for PostgreSQL v2 (currently in tech preview) in my tests, but the same approach can be used with any Percona Operator.

Deploy the operator by following our installation instructions. I will use the regular kubectl way:

kubectl apply -f deploy/bundle.yaml --server-side

Create a new cluster with encrypted storage

To create the cluster with encrypted storage, you must set the correct storage class in the Custom Resource.

spec:
  ...
  instances:
  - name: instance1
  ...
    dataVolumeClaimSpec:
      storageClassName: my-enc-sc
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

Apply the custom resource:

kubectl apply -f deploy/cr.yaml

The cluster should be up and running, backed by encrypted storage.
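To verify, you can check that the cluster's PersistentVolumeClaims were provisioned from the encrypted StorageClass (claim names will vary):

kubectl get pvc
# The STORAGECLASS column should show my-enc-sc for the cluster volumes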

Encrypt storage for existing cluster

This task boils down to switching from one StorageClass to another. With version two of the Operator, we have a notion of instance groups. They are absolutely fantastic for testing new configurations, including compute and storage.

  1. Start with a regular cluster with two nodes – Primary and Replica. Storage is not encrypted. (0-fde-pg.yaml)
  2. Add another instance group with two nodes, but this time with encrypted storage (1-fde-pg.yaml). To do that, we change the spec.instances section:
      - name: instance2
        replicas: 2
        dataVolumeClaimSpec:
          storageClassName: my-enc-sc
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi

    Wait for replication to complete and see the traffic hitting new nodes.

  3. Terminate nodes with unencrypted storage by removing the old instance group from the Custom Resource (2-fde-pg.yaml).

Now your cluster runs using encrypted storage.

Conclusion

It is quite interesting that PostgreSQL does not have built-in data-at-rest encryption. Peter Zaitsev wrote a blog post about it in the past – Why PostgreSQL Needs Transparent Database Encryption (TDE) – explaining why it is needed.

Storage-level encryption allows you to keep your data safe, but it has its limitations. The top limitations are:

  1. You can’t encrypt database objects granularly, only the whole storage.
  2. As a consequence of (1), you cannot encrypt different data with different keys, which might be a blocker for compliance and regulations.
  3. Physical backups, where files are copied from the disk, are not encrypted.

Even with these limitations, encrypting the data is highly recommended. Try out our operator and let us know what you think.

  • Please use this forum for general discussions.
  • Submit a JIRA issue for bugs, improvements, or feature requests.
  • For commercial support, please use our contact page.

The Percona Kubernetes Operators automate the creation, alteration, or deletion of members in your Percona Distribution for MySQL, MongoDB, or PostgreSQL environment.

 

Learn More About Percona Kubernetes Operators

Sep
23
2022
--

Keep Your Data Safe with Percona

September has been an extremely fruitful month (especially for black-hat hackers) for news about data leaks and breaches:

  1. Uber suffers computer system breach, alerts authorities
  2. GTA 6 source code and videos leaked after Rockstar Games hack
  3. Revolut breach: personal and banking data exposed

In this blog post, we want to remind you how to keep your data safe when running your favorite open source databases.

Network exposure

Search engines like Shodan make it easy to find publicly available databases; over 3.6 million MySQL servers were found exposed on the Internet.

The best practice here is to run database servers in an isolated private network, separated even from the rest of your corporate network. In this case, you have a low risk of exposure even if a server is misconfigured.

If for some reason you run your database on a server in a public network, you can still avoid network exposure:

  • Bind your server to localhost or to the server’s private IP address

For example, for MySQL, use the bind-address option in your my.cnf:

bind-address = 192.168.0.123

  • Configure the operating system’s firewall to block access through the public network interface
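With iptables, for instance, you could drop MySQL traffic arriving on the public interface (the interface name and port here are examples):

iptables -A INPUT -i eth0 -p tcp --dport 3306 -j DROP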

Users and passwords

To complement the network exposure story, ensure that your users cannot connect from just any IP address. Taking MySQL as an example, the following GRANT command allows connections from one of the private networks only:

GRANT ALL ON db1.* TO 'perconaAdmin'@'192.168.0.0/255.255.0.0';

MySQL also has an auth_socket plugin, which controls connections to the database through Unix sockets. Read more in this blog post: Use MySQL Without a Password (and Still be Secure).

Minimize the risk and do not use default usernames and passwords. SecLists is a good catalog of bad password choices: MySQL, PostgreSQL, and a miscellaneous list. Percona Platform provides users with Advisors (read more below) that preemptively check for misconfigured grants, weak passwords, and more.

So now we agree that a strong password is a must. Did you know that you can enforce it? This Percona post on Improving MySQL Password Security with the Validation Plugin shows how.
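As a minimal sketch for MySQL 8.0 (the policy values here are only examples):

INSTALL COMPONENT 'file://component_validate_password';
SET GLOBAL validate_password.policy = 'STRONG';
SET GLOBAL validate_password.length = 14;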

A strong password is set – great! To make your system even more resilient to security risks, it is recommended to have a password rotation policy. This policy can be executed manually but can also be automated through various integrations, such as LDAP, KMIP, HashiCorp Vault, and many more. For example, this document describes how Percona Server for MongoDB can work with LDAP.

Encryption

There are two types of encryption to consider for databases, and ideally you will use both of them:

  1. Transport encryption – secure the traffic between client and server and between cluster nodes
  2. Data-at-rest encryption (or Transparent Data Encryption – TDE) – encrypt the data on a disk to prevent unauthorized access

Transport

With an unencrypted connection between the client and the server, someone with access to the network could watch all your traffic and steal credentials and sensitive data. We recommend enabling network encryption by default; several Percona blog posts cover the details.
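As one small example, MySQL lets you require TLS for a specific account (the account name is illustrative):

ALTER USER 'app'@'%' REQUIRE SSL;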

Data-at-rest

Someone could get access to the physical disk or network block storage and read the data. To mitigate this risk, you can encrypt the data on disk. This can be done at the file system level, at the block storage level, or within the database storage engine itself.

Tools like fscrypt or the built-in encryption in ZFS can help with file system encryption. Public clouds provide built-in encryption for their network storage solutions (e.g., AWS EBS, GCP Persistent Disk). Private storage solutions, like Ceph, also support data-at-rest encryption at the block level.

Percona takes security seriously, which is why we recommend enabling data-at-rest encryption by default, especially for production workloads. Percona Server for MySQL and Percona Server for MongoDB provide a wide variety of options to perform TDE at the database level.
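For instance, with a keyring component configured, Percona Server for MySQL can encrypt individual InnoDB tables (the table here is illustrative):

CREATE TABLE customers (id INT PRIMARY KEY) ENCRYPTION='Y';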

Preventive measures

Mistakes and misconfiguration can happen, and it would be great if there were a mechanism to alert you about issues before it is too late. Guess what – we have it!

Percona Monitoring and Management (PMM) comes with Advisors – checks that identify potential security threats, vulnerabilities, data loss or data corruption, and other issues. Advisors are the software representation of Percona’s years of expertise in database security and performance.

By connecting PMM to Percona Platform, users get more sophisticated Advisors for free, whereas our paid customers get even deeper database checks that uncover various misconfiguration and non-compliance issues.

Learn more about Percona Platform with PMM on our website and check if your databases are secured and fine-tuned right away.

If you still believe you need more help, please let us know through our Community Forums or contact the Percona team directly.

Sep
20
2022
--

Column-Level Encryption in MySQL

In a post written earlier this year – Percona Server for MySQL Encryption Options and Choices – I discussed some of the options around encryption in MySQL.  Being such a complex topic, that post was meant to clarify and highlight various aspects of “encryption” at different levels.  I recently had this topic come up again, specifically around column-level encryption and its various options, so I wanted to cover it in more detail.

As of the current release of Percona Server for MySQL, there is no built-in way to define a single column as encrypted.  Ideally, you could pass some metadata in a CREATE statement and encryption would just happen automatically, such as this:
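Something like the following imaginary syntax (this does not exist in MySQL today):

-- Hypothetical, non-working syntax for illustration only
CREATE TABLE users (
  id INT NOT NULL AUTO_INCREMENT,
  ssn VARCHAR(11) ENCRYPTED,
  PRIMARY KEY (id)
);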

Unfortunately, this option isn’t available and we need to do some data manipulation at or prior to read/write time.

Built-in MySQL encryption functions

One of the most common approaches is to use the built-in MySQL encryption functions described here: https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html.  The standard flows look roughly like this:
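A minimal sketch of those flows (table, column, and key names are illustrative; since AES_ENCRYPT returns binary data, the column should be VARBINARY or BLOB):

-- Write path: encrypt the value at insert time
INSERT INTO users (ssn_enc) VALUES (AES_ENCRYPT('123-45-6789', 'mySecretKey'));

-- Read path: decrypt the value at read time
SELECT CAST(AES_DECRYPT(ssn_enc, 'mySecretKey') AS CHAR) AS ssn FROM users;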

This works perfectly fine, and the data will be stored encrypted for that column.  If an unauthorized person were to gain access to the table data, they would be unable to read that column without the key.  The biggest concern with this approach is that the plaintext key and plaintext data are BOTH specified in the query.  This leads to potential leaks in log files (slow query log, binary log, etc.) as well as potential sniffing over non-secure networks (i.e., connections not using SSL).

Also, key storage can become more cumbersome.  If you plan to share the same key for the entire table, key rotation and management can become non-trivial.  General best practices recommend that you rotate keys at least once a year.  On large tables, this could be a massive undertaking.  Let’s look at the envelope encryption pattern as an alternative.

Envelope encryption

In contrast to using built-in encryption, envelope encryption uses the concept of individual data keys.  While there are many ways to approach this, my personal experience uses the AWS KMS service.  In KMS, you can create what is called a “Customer Master Key” (CMK).  This is great for encrypting small strings like passwords but is limited to encrypting strings up to 4KB.

A more flexible approach is to use individual data keys, encrypt the data locally in the application, and then store the encrypted data along with the encrypted data key.  This can be done at various levels of granularity – from the per-row level to the table level, to the database level.   Here is the general process for envelope encryption:
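A minimal Python sketch of the encryption path, assuming AWS KMS via boto3 and the cryptography library for the local encryption (the key alias and field names are illustrative):

import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

# 1. Ask KMS for a fresh data key; it is returned both in plaintext
#    and encrypted under the master key (CMK).
resp = kms.generate_data_key(KeyId="alias/my-master-key", KeySpec="AES_256")
plaintext_key = resp["Plaintext"]       # used locally, never stored
encrypted_key = resp["CiphertextBlob"]  # safe to store

# 2. Encrypt the sensitive value locally with the plaintext key.
cipher = Fernet(base64.urlsafe_b64encode(plaintext_key))
ciphertext = cipher.encrypt(b"123-45-6789")

# 3. Store the ciphertext and the encrypted data key together,
#    then discard the plaintext key.
row = {"ssn_enc": ciphertext, "key_enc": encrypted_key}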

When you need to decrypt the data, you first decrypt the key and then use that to decrypt the data:
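Continuing the sketch above:

# 1. Ask KMS to decrypt the stored data key; KMS works out which
#    master key encrypted it, so no key ID is needed.
plaintext_key = kms.decrypt(CiphertextBlob=row["key_enc"])["Plaintext"]

# 2. Use the recovered key to decrypt the value locally.
cipher = Fernet(base64.urlsafe_b64encode(plaintext_key))
ssn = cipher.decrypt(row["ssn_enc"])  # b"123-45-6789"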

Because both the data key and the data are encrypted, both are safe to store together.  This is one of the main benefits of this method.  You can have hundreds to millions of data keys independent of one another but protected by a single master key.  This allows you to deactivate all of the individual keys by disabling one single master key.  

It also simplifies key rotation – you can simply rotate the master key and start using the new key to generate new data keys. Note that this doesn’t re-encrypt the data, but it does allow you to follow the best practice of periodically rotating keys.  As long as you don’t delete any of the old keys, KMS can determine which key to use for decryption from the key itself and automatically decrypt your data (i.e. you don’t have to specify which key encrypted the data).  

I’ve included a link to a sample script in Python that shows this process in more detail.  While there isn’t actually an active database in the demo, the code prints out the INSERT and SELECT statements that could be run against a live server: https://github.com/mbenshoof/kms-envelope-demo

Challenges with column-level encryption

Searching

It should go without saying that introducing column-level encryption isn’t a trivial task.  One of the biggest challenges is reviewing how encrypted data is retrieved.  For example, if I store social security numbers encrypted individually, then how do I search for the user with an SSN of 123-45-6789?  If I use a shared key for the entire table/schema, then it is possible with proper indexing: I just have to pass the encrypted value to the WHERE clause, and if it exists, it should be found.

However, on a per-row model where each row uses a unique key, this is no longer possible.  As I don’t know the key the value was encrypted with, I can’t search for the encrypted value in the table.  In cases like this, you might consider a one-way hash field that could be searched against.  For example, you could store the SHA256 hash of an SSN as an additional column for searching but then decrypt any other sensitive information.
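A sketch of that pattern (schema and names are illustrative):

-- Store a searchable one-way hash next to the encrypted value
ALTER TABLE users
  ADD COLUMN ssn_sha256 CHAR(64) NOT NULL,
  ADD INDEX idx_ssn_sha256 (ssn_sha256);

-- Look up by hash, then decrypt the matching row in the application
SELECT id, ssn_enc, key_enc FROM users
WHERE ssn_sha256 = SHA2('123-45-6789', 256);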

CPU and space overhead

The other challenge is the additional write/read overhead of handling encryption and decryption.  While this may or may not be an issue depending on the use case, the extra CPU needed either on the application side or the MySQL side could come into play.  You will need to consider the extra processing required and factor that in when planning the size of your resources.

Additionally, depending on the encryption libraries used, there can be additional space needed to store encrypted values.  In some cases, the encrypted value (when stored in base64 specifically) may end up requiring a higher storage footprint.  The space could be compounded if using an index on an additional hash column.  For small values (like SSN), the hash may be much larger than the actual data.  This can result in a much higher storage footprint when applied to millions of records. 

Wrapping up

Encryption and security are very important and complicated topics.  When considering column-level encryption in MySQL, you definitely have some options.  The easiest way would be to just leverage the built-in encryption functions in MySQL.  However, you can take things a step further and handle all encryption and decryption in your application.  

As is always the case with complicated topics like this, the choice and approach depend entirely on your requirements and use case.  Just know that with the flexibility of MySQL, there is most likely a design and approach that works for you!

Jul
07
2021
--

Opaque raises $9.5M seed to secure sensitive data in the cloud

Opaque, a new startup born out of Berkeley’s RISELab, announced a $9.5 million seed round today to build a solution to access and work with sensitive data in the cloud in a secure way, even with multiple organizations involved. Intel Capital led today’s investment with participation by Race Capital, The House Fund and FactoryHQ.

The company helps customers work with secure data in the cloud while making sure the data they are working on is not being exposed to cloud providers, other research participants or anyone else, says company president Raluca Ada Popa.

“What we do is we use this very exciting hardware mechanism called Enclave, which [operates] deep down in the processor — it’s a physical black box — and only gets decrypted there. […] So even if somebody has administrative privileges in the cloud, they can only see encrypted data,” she explained.

Company co-founder Ion Stoica, who was a co-founder at Databricks, says the startup’s solution helps resolve two conflicting trends. On one hand, businesses increasingly want to make use of data, but at the same time are seeing a growing trend toward privacy. Opaque is designed to resolve this by giving customers access to their data in a safe and fully encrypted way.

The company describes the solution as “a novel combination of two key technologies layered on top of state-of-the-art cloud security—secure hardware enclaves and cryptographic fortification.” This enables customers to work with data — for example to build machine learning models — without exposing the data to others, yet while generating meaningful results.

Popa says this could be helpful for hospitals working together on cancer research, who want to find better treatment options without exposing a given hospital’s patient data to other hospitals, or banks looking for money laundering without exposing customer data to other banks, as a couple of examples.

Investors were likely attracted to the pedigree of Popa, a computer security and applied crypto professor at UC Berkeley, and Stoica, who is also a Berkeley professor and co-founded Databricks. Both helped found RISELab at Berkeley, where they developed the solution and spun it out as a company.

Mark Rostick, vice president and senior managing director at lead investor Intel Capital, says his firm has been working with the founders since the startup’s earliest days, recognizing the potential of this solution to help companies solve complex problems even when multiple organizations are involved in sharing sensitive data.

“Enterprises struggle to find value in data across silos due to confidentiality and other concerns. Confidential computing unlocks the full potential of data by allowing organizations to extract insights from sensitive data while also seamlessly moving data to the cloud without compromising security or privacy,” Rostick said in a statement.

He added, “Opaque bridges the gap between data security and cloud scale and economics, thus enabling inter-organizational and intra-organizational collaboration.”

 

Apr
20
2021
--

Cape Privacy announces $20M Series A to help companies securely share data

Cape Privacy, the early-stage startup that wants to make it easier for companies to share sensitive data in a secure and encrypted way, announced a $20 million Series A today.

Evolution Equity Partners led the round with participation from new investors Tiger Global Management, Ridgeline Partners and Downing Lane. Existing investors Boldstart Ventures, Version One Ventures, Haystack, Radical Ventures and a slew of individual investors also participated. The company has now raised approximately $25 million, including a $5 million seed investment we covered last June.

Cape Privacy CEO Ché Wijesinghe says that the product has evolved quite a bit since we last spoke. “We have really focused our efforts on encrypted learning, which is really the core technology, which was fundamental to allowing the multi-party compute capabilities between two organizations or two departments to work and build machine learning models on encrypted data,” Wijesinghe told me.

Wijesinghe says that a key business case involves a retail company owned by a private equity firm sharing data with a large financial services company, which uses the data to feed its machine learning models. When sharing customer data like this, it’s essential to do it in a secure way, and that is what Cape Privacy claims as its primary value prop.

He said that while the data sharing piece is the main focus of the company, it has data governance and compliance components to be sure that entities sharing data are doing so in a way that complies with internal and external rules and regulations related to the type of data.

While the company is concentrating on financial services for now, because Wijesinghe has been working with these companies for years, he sees use cases far beyond a single vertical, including pharmaceuticals, government, healthcare, telco and manufacturing.

“Every single industry needs this and so we look at the value of what Cape’s encrypted learning can provide as really being something that can be as transformative and be as impactful as what SSL was for the adoption of the web browser,” he said.

Richard Seewald, founding and managing partner at lead investor Evolution Equity Partners likes that ability to expand the product’s markets. “The application in Financial Services is only the beginning. Cape has big plans in life sciences and government where machine learning will help make incredible advances in clinical trials and counter-terrorism for example. We anticipate wide adoption of Cape’s technology across many use cases and industries,” he said.

The company has recently expanded to 20 people, and Wijesinghe, who is half Asian, takes DEI seriously. “We’ve been very, very deliberate about our DEI efforts, and I think one of the things that we pride ourselves in is that we do foster a culture of acceptance, that it’s not just about diversity in terms of color, race, gender, but we just hired our first nonbinary employee,” he said.

Part of making people feel comfortable and included involves training so that fellow employees have a deeper understanding of the cultural differences. The company certainly has diversity across geographies with employees in 10 different time zones.

The company is obviously remote with a spread like that, but once the pandemic is over, Wijesinghe sees bringing people together on occasion with New York City as the hub for the company, where people from all over the world can fly in and get together.

Apr
16
2021
--

Enterprise security attackers are one password away from your worst day

If the definition of insanity is doing the same thing over and over and expecting a different outcome, then one might say the cybersecurity industry is insane.

Criminals continue to innovate with highly sophisticated attack methods, but many security organizations still use the same technological approaches they did 10 years ago. The world has changed, but cybersecurity hasn’t kept pace.

Distributed systems, with people and data everywhere, mean the perimeter has disappeared. And the hackers couldn’t be more excited. The same technology approaches, like correlation rules, manual processes and reviewing alerts in isolation, do little more than remedy symptoms while hardly addressing the underlying problem.

The current risks aren’t just technology problems; they’re also problems of people and processes.

Credentials are supposed to be the front gates of the castle, but as the SOC is failing to change, it is failing to detect. The cybersecurity industry must rethink its strategy to analyze how credentials are used and stop breaches before they become bigger problems.

It’s all about the credentials

Compromised credentials have long been a primary attack vector, but the problem has only grown worse in the mid-pandemic world. The acceleration of remote work has increased the attack footprint as organizations struggle to secure their networks while employees work from unsecured connections. In April 2020, the FBI said that cybersecurity attacks reported to the organization grew by 400% compared to before the pandemic. Just imagine where that number is now in early 2021.

It only takes one compromised account for an attacker to enter the active directory and create their own credentials. In such an environment, all user accounts should be considered potentially compromised.

Nearly all of the hundreds of breach reports I’ve read have involved compromised credentials. More than 80% of hacking breaches are now enabled by brute force or the use of lost or stolen credentials, according to the 2020 Data Breach Investigations Report. The most effective and commonly used strategy is credential stuffing, where digital adversaries break in, exploit the environment, then move laterally to gain higher-level access.

Dec
14
2020
--

5 questions every IT team should be able to answer

Now more than ever, IT teams play a vital role in keeping their businesses running smoothly and securely. With all of the assets and data that are now broadly distributed, a CEO depends on their IT team to ensure employees remain connected and productive and that sensitive data remains protected.

CEOs often visualize and measure things in terms of dollars and cents, and in the face of continuing uncertainty, IT — along with most other parts of the business — is facing intense scrutiny and tightening budgets. So, it is more important than ever for IT teams to be able to demonstrate that they’ve made sound technology investments and have the agility needed to operate successfully in the face of continued uncertainty.

For a CEO to properly understand risk exposure and make the right investments, IT departments have to be able to confidently communicate what types of data are on any given device at any given time.

Here are five questions that IT teams should be ready to answer when their CEO comes calling:

What have we spent our money on?

Or, more specifically, exactly how many assets do we have? And, do we know where they are? While these seem like basic questions, they can be shockingly difficult to answer … much more difficult than people realize. The last several months in the wake of the COVID-19 outbreak have been the proof point.

With the mass exodus of machines leaving the building and disconnecting from the corporate network, many IT leaders found themselves guessing just how many devices had been released into the wild and gone home with employees.

One CIO we spoke to estimated they had “somewhere between 30,000 and 50,000 devices” that went home with employees, meaning there could have been up to 20,000 that were completely unaccounted for. The complexity was further compounded as old devices were pulled out of desk drawers and storage closets to get something into the hands of employees who were not equipped to work remotely. Companies had endpoints connecting to the corporate network and systems that they hadn’t seen for years — meaning they were out of date from a security perspective as well.

This level of uncertainty is obviously unsustainable and introduces a tremendous amount of security risk. Every endpoint that goes unaccounted for not only means wasted spend but also increased vulnerability, greater potential for breach or compliance violation, and more. In order to mitigate these risks, there needs to be a permanent connection to every device that can tell you exactly how many assets you have deployed at any given time — whether they are in the building or out in the wild.

Are our devices and data protected?

Device and data security go hand in hand; without the ability to see every device that is deployed across an organization, it becomes next to impossible to know what data is living on those devices. When employees know they are leaving the building and going to be off network, they tend to engage in “data hoarding.”

Dec
01
2020
--

Google launches Android Enterprise Essentials, a mobile device management service for small businesses

Google today introduced a new mobile management and security solution, Android Enterprise Essentials, which, despite its name, is actually aimed at small to medium-sized businesses. The company explains this solution leverages Google’s experience in building Android Enterprise device management and security tools for larger organizations in order to come up with a simpler solution for those businesses with smaller budgets.

The new service includes the basics in mobile device management, with features that allow smaller businesses to require their employees to use a lock screen and encryption to protect company data. It also prevents users from installing apps outside the Google Play Store via the Google Play Protect service, and allows businesses to remotely wipe all the company data from phones that are lost or stolen.

As Google explains, smaller companies often handle customer data on mobile devices, but many of today’s remote device management solutions are too complex for small business owners and are often complicated to get up and running.

Android Enterprise Essentials attempts to make the overall setup process easier by eliminating the need to manually activate each device. And because the security policies are applied remotely, there’s nothing the employees themselves have to configure on their own phones. Instead, businesses that want to use the new solution will just buy Android devices from a reseller to hand out or ship to employees with policies already in place.

Though primarily aimed at smaller companies, Google notes the solution may work for select larger organizations that want to extend some basic protections to devices that don’t require more advanced management solutions. The new service can also help companies get started with securing their mobile device inventory, before they move up to more sophisticated solutions over time, including those from third-party vendors.

The company has been working to better position Android devices for use in the workplace over the past several years, with programs like Android for Work, Android Enterprise Recommended, partnerships focused on ridding the Play Store of malware, advanced device protections for high-risk users, endpoint management solutions, and more.

Google says it will roll out Android Enterprise Essentials initially with distributors Synnex in the U.S. and Tech Data in the U.K. In the future, it will make the service available through additional resellers as it takes the solution global in early 2021. Google will also host an online launch event and demo in January for interested customers.
