Oct 05, 2020
--

Strike Graph raises $3.9M to help automate security audits

Compliance automation isn’t exactly the most exciting topic, but security audits are big business, and companies that aim to get a SOC 2, ISO 27001 or FedRAMP certification can often spend six figures to get through the process with the help of an auditing service. Seattle-based Strike Graph, which is launching today and announcing a $3.9 million seed funding round, wants to automate as much of this process as possible.

The company’s funding round was led by Madrona Venture Group, with participation from Amplify.LA, Revolution’s Rise of the Rest Seed Fund and Green D Ventures.

Strike Graph co-founder and CEO Justin Beals tells me that the idea for the company came to him during his time as CTO at machine learning startup Koru (which had a bit of an odd exit last year). To get enterprise adoption for that service, the company had to get a SOC 2 security certification. “It was a real challenge, especially for a small company. In talking to my colleagues, I just recognized how much of a challenge it was across the board. And so when it was time for the next startup, I was just really curious,” he told me.


Together with his co-founder Brian Bero, he incubated the idea at Madrona Venture Labs, where he spent some time as Entrepreneur in Residence after Koru.

Beals argues that today’s process tends to be slow, inefficient and expensive. The idea behind Strike Graph, unsurprisingly, is to remove as many of these inefficiencies as possible. It is worth noting that the company doesn’t provide the actual audit service; businesses will still need to hire an auditing firm for that. But Beals also argues that the bulk of what companies are paying for today is pre-audit preparation.

“We do all that preparation work and preparing you and then, after your first audit, you have to go and renew every year. So there’s an important maintenance of that information.”


When customers come to Strike Graph, they fill out a risk assessment. The company takes that and can then provide them with controls for how to improve their security posture — both to pass the audit and to secure their data. Beals also noted that soon, Strike Graph will be able to help businesses automate the collection of evidence for the audit (say your encryption settings) and can pull that in regularly. Certifications like SOC 2, after all, require companies to have ongoing security practices in place and get re-audited every 12 months. Automated evidence collection will launch in early 2021, once the team has built out the first set of its integrations to collect that data.

That’s also where the company, which mostly targets mid-size businesses, plans to spend a lot of its new funding. In addition, the company plans to focus on its marketing efforts, mostly around content marketing and educating its potential customers.

“Every company, big or small, that sells a software solution must address a broad set of compliance requirements in regards to security and privacy. Obtaining the certifications can be a burdensome, opaque and expensive process. Strike Graph is applying intelligent technology to this problem — they help the company identify the appropriate risks, enable the audit to run smoothly and then automate the compliance and testing going forward,” said Hope Cochran, managing director at Madrona Venture Group. “These audits were a necessary pain when I was a CFO, and Strike Graph’s elegant solution brings together teams across the company to move the business forward faster.”

Mar 03, 2017
--

MongoDB Audit Log: Why and How

This blog post is another in the series on the Percona Server for MongoDB 3.4 bundle release. This time, we’ll talk about the MongoDB audit log.

Percona’s development team has always made investing in the open-source community a priority – especially for MongoDB. As part of this commitment, Percona continues to build MongoDB Enterprise Server features into our free, open-source alternative, Percona Server for MongoDB. One of the key features that we have added to Percona Server for MongoDB is audit logging. Auditing your MongoDB environment strengthens your security and helps you keep track of who did what in your database.

In this blog post, we will show how to enable this functionality, what general actions can be logged, and how you can filter for only the information that is important for your use case.

Enable Audit Log

Audit messages can be logged to syslog, to the console, or to a file (in JSON or BSON format). In most cases, it’s preferable to log to a file in BSON format (the performance impact is smaller than with JSON). In the last section, you can find some simple examples of how to further query this type of file.

Enable the audit log in the command line or the config file with:

mongod --dbpath /var/lib/mongodb --auditDestination file --auditFormat BSON --auditPath /var/lib/mongodb/auditLog.bson

auditLog:
   destination: file
   format: BSON
   path: /var/lib/mongodb/auditLog.bson

Just note that until this bug is fixed and released, if you’re using Percona Server for MongoDB with the --fork option when starting the mongod instance, you’ll have to provide an absolute path for the audit log file instead of a relative one.

Actions logged

Generally speaking, the following actions can be logged:

  • Authentication and authorization
  • Cluster operations
  • Read and write operations (logged under the authCheck event; require the auditAuthorizationSuccess parameter to be enabled)
  • Schema operations
  • Custom application messages (logged under the applicationMessage event if the client/app issues a logApplicationMessage command; the user needs the clusterAdmin role, or one that inherits from it, to issue this command)

You can see the whole list of actions logged here.

By default, MongoDB doesn’t log all the read and write operations. So if you want to track those, you’ll have to enable the auditAuthorizationSuccess parameter; they will then be logged under the authCheck event. Note that this can have a serious performance impact.

Also, this parameter can be enabled dynamically on an already running instance with the audit log set up, while some other settings can’t be changed once configured.

Enable logging of CRUD operations in the command line or config file:

mongod --dbpath /var/lib/mongodb --setParameter auditAuthorizationSuccess=true --auditDestination file --auditFormat BSON --auditPath /var/lib/mongodb/auditLog.bson

auditLog:
  destination: file
  format: BSON
  path: /var/lib/mongodb/auditLog.bson
setParameter: { auditAuthorizationSuccess: true }

Or to enable it on the running instance, issue this command in the client:

db.adminCommand( { setParameter: 1, auditAuthorizationSuccess: true } )

Filtering

If you don’t want to track all the events MongoDB is logging by default, you can specify filters in the command line or the config file. Filters need to be valid JSON queries on the audit log message (format available here). In the filters, you can use standard query selectors ($eq, $in, $gt, $lt, $ne, …) as well as regex. Note that you can’t change the filters dynamically after the start.

Also, Percona Server for MongoDB 3.2 and 3.4 use slightly different message formats: 3.2 uses a “params” field, while 3.4 uses “param”, just like MongoDB. When filtering on those fields, you’ll want to account for the difference.
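If you need a single filter that works on both versions, one option, assuming the audit filter accepts the standard $or operator (worth verifying on your release), is to match either field name. An illustrative config sketch:

```yaml
auditLog:
  destination: file
  format: BSON
  path: /var/lib/mongodb/auditLog.bson
  filter: '{ $or: [ { "param.ns": /^test\./ }, { "params.ns": /^test\./ } ] }'
```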

Filter only events from one user:

mongod --dbpath /var/lib/mongodb --auditDestination file --auditFormat BSON --auditPath /var/lib/mongodb/auditLog.bson --auditFilter '{ "users.user": "prod_app" }'

auditLog:
  destination: file
  format: BSON
  path: /var/lib/mongodb/auditLog.bson
  filter: '{ "users.user": "prod_app" }'

Filter events from several users based on username prefix (using regex):

mongod --dbpath /var/lib/mongodb --auditDestination file --auditFormat BSON --auditPath /var/lib/mongodb/auditLog.bson --auditFilter '{ "users.user": /^prod_app/ }'

auditLog:
  destination: file
  format: BSON
  path: /var/lib/mongodb/auditLog.bson
  filter: '{ "users.user": /^prod_app/ }'

Filtering multiple event types by using standard query selectors:

mongod --dbpath /var/lib/mongodb --auditDestination file --auditFormat BSON --auditPath /var/lib/mongodb/auditLog.bson --auditFilter '{ atype: { $in: [ "dropCollection", "dropDatabase" ] } }'

auditLog:
  destination: file
  format: BSON
  path: /var/lib/mongodb/auditLog.bson
  filter: '{ atype: { $in: [ "dropCollection", "dropDatabase" ] } }'

Filter read and write operations on all the collections in the test database (notice the escaped dot in the regex):

mongod --dbpath /var/lib/mongodb --auditDestination file --auditFormat BSON --auditPath /var/lib/mongodb/auditLog.bson --setParameter auditAuthorizationSuccess=true --auditFilter '{ atype: "authCheck", "param.command": { $in: [ "find", "insert", "delete", "update", "findandmodify" ] }, "param.ns": /^test\./ }'

auditLog:
  destination: file
  format: BSON
  path: /var/lib/mongodb/auditLog.bson
  filter: '{ atype: "authCheck", "param.command": { $in: [ "find", "insert", "delete", "update", "findandmodify" ] }, "param.ns": /^test\./ }'
setParameter: { auditAuthorizationSuccess: true }

Example messages

Here are two example messages from an audit log file. The first one is from a failed client authentication; the second one shows a user trying to insert a document into a collection for which they have no write authorization.

> bsondump auditLog.bson
{"atype":"authenticate","ts":{"$date":"2017-02-14T14:11:29.975+0100"},"local":{"ip":"127.0.1.1","port":27017},"remote":{"ip":"127.0.0.1","port":42634},"users":[],"roles":[],"param":{"user":"root","db":"admin","mechanism":"SCRAM-SHA-1"},"result":18}

> bsondump auditLog.bson
{"atype":"authCheck","ts":{"$date":"2017-02-14T14:15:49.161+0100"},"local":{"ip":"127.0.1.1","port":27017},"remote":{"ip":"127.0.0.1","port":42636},"users":[{"user":"antun","db":"admin"}],"roles":[{"role":"read","db":"admin"}],"param":{"command":"insert","ns":"test.orders","args":{"insert":"orders","documents":[{"_id":{"$oid":"58a3030507bd5e3486b1220d"},"id":1.0,"item":"paper clips"}],"ordered":true}},"result":13}
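The numeric result field at the end of each message is a MongoDB error code: 0 means success, while the failed authentication above reports 18 (AuthenticationFailed) and the rejected insert reports 13 (Unauthorized). As a small, hypothetical Python helper for translating the codes you encounter:

```python
# Map the MongoDB error codes that appear in audit messages to names.
# 0, 13 and 18 are the ones seen in the examples above; extend the
# map for whatever other codes show up in your own logs.
RESULT_CODES = {
    0: "OK",
    13: "Unauthorized",
    18: "AuthenticationFailed",
}

def describe_result(code):
    """Return a human-readable name for an audit 'result' code."""
    return RESULT_CODES.get(code, "Unknown({})".format(code))

print(describe_result(18))  # the failed authentication above
print(describe_result(13))  # the unauthorized insert above
```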

Querying audit log for specific event

The audit log feature is now working, and we have some data in the BSON binary file. How do we query it to find a specific event of interest? There are many ways to do that, simple and complex, using different tools (Apache Drill or Elasticsearch come to mind), but for the purposes of this blog post, we’ll show two simple ones.

The first way, which doesn’t require exporting the data anywhere, is to use the bsondump tool to convert BSON to JSON and pipe it into jq (a command-line JSON processor) to query the data. Install jq on Ubuntu/Debian with:

sudo apt-get install jq

Or on CentOS with:

sudo yum install epel-release
sudo yum install jq

Then, if we want to know who created a database with the name “prod” for example, we can use something like this (I’m sure you’ll find better ways to use the jq tool for querying this kind of data):

> bsondump auditLog.bson | jq -c 'select(.atype == "createDatabase") | select(.param.ns == "prod")'
{"atype":"createDatabase","ts":{"$date":"2017-02-17T12:13:48.142+0100"},"local":{"ip":"127.0.1.1","port":27017},"remote":{"ip":"127.0.0.1","port":47896},"users":[{"user":"prod_app","db":"admin"}],"roles":[{"role":"root","db":"admin"}],"param":{"ns":"prod"},"result":0}
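As a side note, the same kind of one-off query can be done without jq by piping bsondump’s JSON output into a few lines of Python. A sketch, assuming the message format shown above:

```python
import json

def find_events(lines, atype, ns):
    """Yield parsed audit messages matching an event type and namespace."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        doc = json.loads(line)
        if doc.get("atype") == atype and doc.get("param", {}).get("ns") == ns:
            yield doc

# Feed it the JSON lines produced by `bsondump auditLog.bson`, e.g.:
#   import sys
#   for event in find_events(sys.stdin, "createDatabase", "prod"):
#       print(event["ts"]["$date"], [u["user"] for u in event.get("users", [])])
```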

In the second example, we’ll use the mongorestore tool to import data into another instance of mongod, and then just query it like a normal collection:

> mongorestore -d auditdb -c auditcol auditLog.bson
2017-02-17T12:28:56.756+0100    checking for collection data in auditLog.bson
2017-02-17T12:28:56.797+0100    restoring auditdb.auditcol from auditLog.bson
2017-02-17T12:28:56.858+0100    no indexes to restore
2017-02-17T12:28:56.858+0100    finished restoring auditdb.auditcol (142 documents)
2017-02-17T12:28:56.858+0100    done

The import is done, and now we can query the collection for the same data from the MongoDB client:

> use auditdb
switched to db auditdb
> db.auditcol.find({atype: "createDatabase", param: {ns: "prod"}})
{ "_id" : ObjectId("58a6de78bdf080b8e8982a4f"), "atype" : "createDatabase", "ts" : { "$date" : "2017-02-17T12:13:48.142+0100" }, "local" : { "ip" : "127.0.1.1", "port" : 27017 }, "remote" : { "ip" : "127.0.0.1", "port" : 47896 }, "users" : [ { "user" : "prod_app", "db" : "admin" } ], "roles" : [ { "role" : "root", "db" : "admin" } ], "param" : { "ns" : "prod" }, "result" : 0 }

It looks like the audit log in MongoDB/Percona Server for MongoDB is a solid feature. Setting up tracking for information that is valuable to you only depends on your use case.

Feb 15, 2016
--

MySQL Auditing with MariaDB Auditing Plugin

This blog will address how the MariaDB Auditing Plugin can help monitor database activity to help with security, accountability and troubleshooting.

Why Audit Your Databases?

Auditing is an essential task for monitoring your database environment. By auditing your database, you can achieve accountability for actions taken or content accessed within your environment. You will also deter users (or others) from inappropriate actions.

If there is any bad behavior, you can investigate suspicious activity. For example, if a user is deleting data from tables, the admins could audit all connections to the database and all deletions of rows. You can also use auditing to notify admins when an unauthorized user manipulates or deletes data, or when a user has more privileges than expected.

Auditing Plugins Available for MySQL

As Sergei Glushchenko said in a previous blog, MySQL version 5.5.3 and later provides the Audit Plugin API, which can be used to write an audit plugin. The API provides notification for the following events:

  • messages written to general log (LOG)
  • messages written to error log (ERROR)
  • query results sent to client (RESULT)
  • logins (including failed) and disconnects (CONNECT)

All current audit plugins for MySQL produce an audit log as a result of their work. They differ in record format, filtering capabilities and verbosity of log records.

  • MySQL Enterprise Audit Plugin – This plugin is not open source and is only available with MySQL Enterprise, which has a significant cost attached to it. It is the most stable and robust.
  • Percona Audit Log Plugin – Percona provides an open-source auditing solution that installs with Percona Server 5.5.37+ and 5.6.17+. This plugin has quite a few output options: it can write XML or JSON, or log to syslog. Percona’s implementation is the first to be a drop-in replacement for the MySQL Enterprise Audit Plugin. Because it has some internal hooks into the server to be feature-compatible with Oracle’s plugin, it is not available as a standalone plugin for other versions of MySQL. This plugin is actively maintained by Percona.
  • McAfee MySQL Audit Plugin – Has been around the longest and is widely used. It is open source and robust, though it does not use the official auditing API. It isn’t updated as often as one might like, and there haven’t been any new features in some time, but it was recently updated to support MySQL 5.7.
  • MariaDB Audit Plugin – The only plugin that claims to support MySQL, Percona Server and MariaDB. It is open source and constantly upgraded with new versions of MariaDB. Versions 1.2 and later are the most stable; versions below that may be unstable (I have seen them crash production servers) and also log passwords in clear text, so it is risky to run them in production.

About the MariaDB Auditing Plugin

The MariaDB Auditing Plugin provides auditing functionality for not only MariaDB, but Percona Server and MySQL as well. It is installed with MariaDB or available as a plugin for Percona Server and MySQL.

I worked with the MariaDB Auditing Plugin because I was using MySQL Community, without an enterprise license, which means the Enterprise plugin and Percona’s plugin were off the table. We wanted a plugin that used MySQL’s built-in auditing API, not a custom one that reads known memory blocks and is sensitive to upgrades, such as McAfee’s plugin.

Get the Plugin

To get the MariaDB Auditing Plugin, download the .so from here: https://mariadb.com/products/connectors-plugins.

You can manually install the .so file into your plugin directory (e.g., /usr/lib/mysql/plugin on Debian). Find the directory with:

SHOW GLOBAL VARIABLES LIKE 'plugin_dir';

I highly recommend packaging it if you intend to do any automation (chef, puppet) or upgrades in the future.

Packaging

Similar steps can be performed with fpm.
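For reference, the whole package could also be produced in a single fpm invocation. This is an illustrative sketch (package name, version and paths are assumptions; check fpm’s documentation for your version):

```shell
# Build a .deb containing only the plugin .so; --prefix places the
# file under the MySQL plugin directory inside the package (illustrative)
fpm -s dir -t deb \
    -n mariadb-server-audit-plugin -v 1.2.0 \
    --prefix /usr/lib/mysql/plugin \
    server_audit.so
```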

Create a directory structure for the debian package:

$ mkdir mariadb-server-audit-plugin-1.2.0
$ cd mariadb-server-audit-plugin-1.2.0
$ mkdir -p usr/lib/mysql/plugin

Copy plugin into package directory:

$ cp /path/to/server_audit.so usr/lib/mysql/plugin

Debianize the package directory:

$ dh_make --createorig

Delete example files:

$ cd debian/ ; rm -f *.ex

Configure the package:

$ echo "usr/lib/mysql/plugin/server_audit.so" > debian/install
$ echo "usr/lib/mysql/plugin/server_audit.so" > debian/source/include-binaries

Build the .deb:

$ dpkg-buildpackage -us -uc

Verify package version:

$ dpkg-deb -W mariadb-server-audit-plugin_1.2.0-1_amd64.deb
mariadb-server-audit-plugin     1.2.0-1

Install

Stopping MySQL first is not required but highly recommended (INSTALL PLUGIN and UNINSTALL PLUGIN tend to fail for this plugin, depending on what else is happening in your environment):

$ service mysql stop

Install with dpkg:

$ dpkg -i mariadb-server-audit-plugin_1.2.0-1_amd64.deb

Configuration

Reference https://mariadb.com/kb/en/mariadb/server_audit-system-variables/ for more information on configuration.

Add the following to my.cnf (if you don’t want to restart, you can also set these in SQL with SET GLOBAL):

# load plugin
plugin-load=server_audit=server_audit.so
# do not allow users to uninstall plugin
server_audit=FORCE_PLUS_PERMANENT
# only audit connections and DDL queries
server_audit_events=CONNECT,QUERY_DDL
# enable logging
server_audit_logging=ON
# any users who don't need auditing (csv)
server_audit_excl_users='root'
# or you can use server_audit_incl_users='jayj'

Log destination

When selecting the log destination, pick one method: file or syslog. It is dangerous to configure both, so decide on your logging strategy ahead of time.

# flat file
server_audit_output_type=FILE
server_audit_file_path=/var/log/mysql/audit.log
server_audit_file_rotate_size=1000000
server_audit_file_rotations=9
# syslog
server_audit_output_type=SYSLOG
server_audit_syslog_facility=LOG_LOCAL6
server_audit_syslog_ident=mysql_audit
server_audit_syslog_info=this-host.name
server_audit_syslog_priority=LOG_INFO

Verify Install

$ service mysql start
$ mysql
mysql> SHOW PLUGINS;
+--------------+--------+-------+-----------------+---------+
| Name         | Status | Type  | Library         | License |
+--------------+--------+-------+-----------------+---------+
...
| SERVER_AUDIT | ACTIVE | AUDIT | server_audit.so | GPL     |
+--------------+--------+-------+-----------------+---------+
24 rows in set (0.00 sec)
mysql> SELECT * FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME='SERVER_AUDIT'\G
*************************** 1. row ***************************
           PLUGIN_NAME: SERVER_AUDIT
        PLUGIN_VERSION: 1.2
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: AUDIT
   PLUGIN_TYPE_VERSION: 3.2
        PLUGIN_LIBRARY: server_audit.so
PLUGIN_LIBRARY_VERSION: 1.3
         PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
    PLUGIN_DESCRIPTION: Audit the server activity
        PLUGIN_LICENSE: GPL
           LOAD_OPTION: FORCE_PLUS_PERMANENT
1 row in set (0.01 sec)

Check the logs

$ tail server_audit.log
20130927 01:00:00,localhost.localdomain,root,localhost,1,1,QUERY,,'SET GLOBAL server_audit_logging=ON',0
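Each line is a comma-separated record: timestamp, server host, username, client host, connection id, query id, operation, database, object/query, return code. Since the quoted query can itself contain commas, a parser should take the fixed fields from the left and the return code from the right. A hypothetical Python sketch:

```python
# Fixed fields at the start of every server_audit log line.
FIELDS = ("timestamp", "serverhost", "username", "host",
          "connectionid", "queryid", "operation", "database")

def parse_audit_line(line):
    """Split a server_audit log line into named fields.

    The quoted query may contain commas, so split the fixed fields
    off the left, the return code off the right, and keep whatever
    is in between as the object/query text.
    """
    head = line.rstrip("\n").split(",", len(FIELDS))
    record = dict(zip(FIELDS, head[:len(FIELDS)]))
    rest = head[len(FIELDS)]
    obj, _, retcode = rest.rpartition(",")
    record["object"] = obj.strip("'")
    record["retcode"] = retcode
    return record

line = ("20130927 01:00:00,localhost.localdomain,root,localhost,1,1,"
        "QUERY,,'SET GLOBAL server_audit_logging=ON',0")
print(parse_audit_line(line)["operation"])  # QUERY
```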

Log rotation

I recommend starting here and setting up an Elasticsearch cluster with Logstash and Kibana, also known as the ELK stack. This allows you to aggregate and search your logs to find problems. Note that, despite the original file name, the following snippet is logrotate syntax, so it belongs under /etc/logrotate.d/:

$ cat /etc/logrotate.d/10-mysqlaudit.conf
# keep in /var/log, as the syslog user usually can't access /var/log/mysql
/var/log/mysql-audit.log {
    daily
    rotate 7
    missingok
    create 640 syslog adm
    compress
    sharedscripts
    postrotate
    reload rsyslog >/dev/null 2>&1 || true
    endscript
}
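Note that server_audit_syslog_facility=LOG_LOCAL6 only tags the messages; for rsyslog to write that facility to its own file, a selector rule is also needed. An illustrative sketch (file name and path are assumptions):

```
# /etc/rsyslog.d/10-mysqlaudit.conf (illustrative)
local6.*    /var/log/mysql-audit.log
```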

Conclusion

The MariaDB Auditing Plugin is quick and easy to install and bring into your current logging or auditing solution.

Once you have installed auditing, you can detect problems with an authorization or access control implementation. It lets you create audit policies that you expect will never generate an audit record because the data is protected; if these policies do generate audit records, you know that other security controls are not properly implemented.

Auditing information can help you troubleshoot performance or application issues and lets you see exactly what SQL queries are being processed.
