Oct
02
2018
--

NYC wants to build a cyber army

Empires rise and fall, and none more so than business empires. Whole industries that once dominated the planet are now little more than memories, while new industries quietly grow into massive behemoths.

New York City has certainly seen its share of empires. Today, the city is a global center of finance, real estate, legal services, technology, and many, many more industries. It hosts the headquarters of roughly 10% of the Fortune 500, and the metro’s GDP is roughly equivalent to that of Canada.

So much wealth and power, and all under constant attack. The value of technology and data has skyrocketed, and so has the value of stealing and disrupting the services that rely upon it. Cyber crime and cyber wars are adding up: according to a report published jointly by McAfee and the Center for Strategic and International Studies, the costs of these operations run to hundreds of billions of dollars – and New York’s top industries, such as financial services, bear the brunt of the losses.

Yet, New York City has hardly been a bastion for the cybersecurity industry. Boston and Washington DC are far stronger today on the Acela corridor, and San Francisco and Israel have both made huge impacts on the space. Now, NYC’s leaders are looking to build a whole new local empire that might just act as a bulwark for its other leading ecosystems.

Today, the New York City Economic Development Corporation (NYCEDC) announced the launch of Cyber NYC, a $30 million “catalyzing” investment designed to rapidly grow the city’s ecosystem and infrastructure for cybersecurity.

James Patchett, CEO of New York City Economic Development Corporation. (Photo from NYCEDC)

James Patchett, CEO of NYCEDC, explained in an interview with TechCrunch that cybersecurity is “both an incredible opportunity and also a huge threat.” He noted that “the financial industry has been the lifeblood of this city for our entire history,” and the costs of cybercrime are rising quickly. “It’s a lose-lose if we fail to invest in the innovation that keeps the city strong” but “it’s a win if we can create all of that innovation here and the corresponding jobs,” he said.

The Cyber NYC program is made up of a constellation of programs:

  • Partnering with Jerusalem Venture Partners, an accelerator called Hub.NYC will develop enterprise cybersecurity companies by connecting them with advisors and customers. The program will be hosted in a nearly 100,000 square foot building in SoHo.
  • Partnering with SOSA, the city will create a new, 15,000 square foot Global Cyber Center co-working facility in Chelsea, where talented individuals in the cyber industry can hang out and learn from each other through event programming and meetups.
  • With Fullstack Academy and LaGuardia Community College, a Cyber Boot Camp will be created to enhance the ability of local workers to find jobs in the cybersecurity space.
  • Through an “Applied Learning Initiative,” students will be able to earn a “CUNY-Facebook Master’s Degree” in cybersecurity. The program has participation from the City University of New York, New York University, Columbia University, Cornell Tech, and iQ4.
  • With Columbia University’s Technology Ventures, NYCEDC will introduce a program called Inventors to Founders that will work to commercialize university research.

NYCEDC’s map of the Cyber NYC initiative. (Photo from NYCEDC)

In addition to Facebook, other companies have made commitments to the program, including Goldman Sachs, MasterCard, PricewaterhouseCoopers, and edX.org. Two Goldman execs, Chief Operational Risk Officer Phil Venables and Chief Information Security Officer Andy Ozment, have joined the initiative’s advisory boards.

The NYCEDC estimates that there are roughly 6,000 cybersecurity professionals currently employed in New York City. Through these programs, it estimates that the number could increase by another 10,000. Patchett said that “it is as close to a no-brainer in economic development because of the opportunity and the risk.”

From Jerusalem to New York

To tackle its ambitious cybersecurity goals, the NYCEDC is partnering with two venture firms, Jerusalem Venture Partners (JVP) and SOSA, with significant experience investing, operating, and growing companies in the sector.

Jerusalem-based JVP is an established investor that should help founders at Hub.NYC get access to smart capital, sector expertise, and the entrepreneurial experience needed to help their startups scale. JVP invests in early-, late-, and growth-stage companies focused on cybersecurity, big data, media, and enterprise software.

JVP will run Hub.NYC, a startup accelerator that will help cybersecurity startups connect with customers and mentors. (Photo from JVP)

Erel Margalit, who founded the firm in 1993, said that “If you look at what JVP has done … we create ecosystems.” Working with Jerusalem’s metro government, Margalit and the firm pioneered a number of institutions, such as accelerators, that turned Israel into an economic powerhouse in the cybersecurity industry. His social and economic work eventually led him to the Knesset, Israel’s unicameral legislature, where he served as an MP from 2015 to 2017 with the Labor Party.

Israel is a very small country, though, with a relative dearth of large companies, which poses a huge challenge for startups looking to scale up. “Today if you want to build the next-generation leading companies, you have to be not only where the ideas are being brewed, but also where the solutions are being [purchased],” Margalit explained. “You need to be working with the biggest customers in the world.”

That place, in his mind, is New York City. It’s a city he has known since his youth – he worked at Moshe’s Moving in NYC while attending Columbia as a grad student, where he earned his PhD in philosophy. Now, he can pack up his own success from Israel and scale it up to an even larger ecosystem.

Since its founding, JVP has successfully raised $1.1 billion across eight funds, including a $60 million fund specifically focused on the cybersecurity space. Over the same period, the firm has seen 32 successful exits, including cybersecurity companies CyberArk (IPO in 2014) and CyActive (acquired by PayPal in 2015).

JVP’s efforts in the cybersecurity space also go beyond the investment process, with the firm recently establishing an incubator, known as JVP Cyber Labs, specifically focused on identifying, nurturing and building the next wave of Israeli cybersecurity and big data companies.

On average, the firm has focused on deals in the $5-$10 million range, with a general proclivity for earlier-stage companies where the firm can take a more hands-on mentorship role. Some of JVP’s notable active portfolio companies include Source Defense, which uses automation to protect against website supply chain attacks, ThetaRay, which uses big data to analyze threats, and Morphisec, which sells endpoint security solutions.

Opening up innovation with SOSA

The self-described “open-innovation platform,” SOSA is a global network of corporations, investors, and entrepreneurs that connects major institutions with innovative startups tackling core needs.

SOSA works closely with its partner startups, providing investor sourcing, hands-on mentorship and the physical resources needed to achieve growth. The group’s areas of expertise include cybersecurity, fintech, automation, energy, mobility, and logistics. Though headquartered in Tel Aviv, SOSA recently opened an innovation lab in New York, backed by major partners including HP, RBC, and Jefferies.

With the eight-floor Global Cyber Center located in Chelsea, it is turning its attention to an even more ambitious agenda. Uzi Scheffer, CEO of SOSA, said to TechCrunch in a statement that “The Global Cyber Center will serve as a center of gravity for the entire cybersecurity industry where they can meet, interact and connect to the finest talent from New York, the States, Israel and our entire global network.”

SOSA’s new building in Chelsea will be a center for the cybersecurity community (Photo from SOSA)

With an already established presence in New York, SOSA’s local network could help spur the local corporate participation key to the EDC’s plan, while SOSA’s broader global network can help achieve aspirations of turning New York City into a global cybersecurity leader.

It is no coincidence that both of the EDC’s venture partners are familiar with the Israeli cybersecurity ecosystem. Israel has long been viewed as a leader in cybersecurity innovation and policy, and has benefited from the same successful public-private sector coordination New York hopes to replicate.

Furthermore, while New York hopes to create organic growth within its own local ecosystem, the partnerships could also benefit the city if leading Israeli cybersecurity companies look to relocate due to the limited size of the Israeli market.

Big plans, big results?

While we spent comparatively less time discussing them, the NYCEDC’s educational programs are particularly interesting. Students will be able to take classes at any university in the five-member consortium, and transfer credits freely, a concept that the NYCEDC bills as “stackable certificates.”

Meanwhile, Facebook has partnered with the City University of New York to create a professional master’s degree program to train up a new class of cybersecurity leaders. The idea is to provide a pathway to a widely-respected credential without having to take too much time off of work. NYCEDC CEO Patchett said, “you probably don’t have the time to take two years off to do a master’s program,” and so the program’s flexibility should provide better access to more professionals.

Together, all of these disparate programs add up to a bold attempt to put New York City on the map for cybersecurity. Talent development, founder development, customer development – all have been addressed with capital and new initiatives.

Will the community show up at initiatives like the Global Cyber Center, pictured here? (Photo from SOSA)

Yet, despite the time that NYCEDC has spent to put all of these partners together cohesively under one initiative, the real challenge starts with getting the community to participate and build upon these nascent institutions. “What we hear from folks a lot of time,” Patchett said to us, is that “there is no community for cyber professionals in New York City.” Now the buildings have been placed, but the people need to walk through the front doors.

The city wants these programs to be self-sustaining as soon as possible. “In all cases, we don’t want to support these ecosystems forever,” Patchett said. “If we don’t think they’re financially sustainable, we haven’t done our job right.” He believes that “there should be a natural incentive to invest once the ecosystem is off the ground.”

As the world encounters an ever-increasing array of cyber threats, old empires can falter – and new empires can grow. Cybersecurity may well be one of the next great industries, and it may just provide the needed defenses to ensure that New York City’s other empires can live another day.

Sep
27
2018
--

Alphabet’s Chronicle launches an enterprise version of VirusTotal

VirusTotal, the virus and malware scanning service owned by Alphabet’s Chronicle, launched an enterprise-grade version of its service today. VirusTotal Enterprise offers significantly faster and more customizable malware search, as well as a new feature called Private Graph, which allows enterprises to create their own private visualizations of their infrastructure and malware that affects their machines.

The Private Graph makes it easier for enterprises to create an inventory of their internal infrastructure and users to help security teams investigate incidents (and where they started). In the process of building this graph, VirusTotal also looks at commonalities between different nodes to detect changes that could signal potential issues.

The company stresses that these graphs are obviously kept private. That’s worth noting because VirusTotal already offered a similar tool for its premium users — the VirusTotal Graph. All of the information there, however, was public.

As for the faster and more advanced search tools, VirusTotal notes that its service benefits from Alphabet’s massive infrastructure and search expertise. This allows VirusTotal Enterprise to offer a 100x speed increase, as well as better search accuracy. Using the advanced search, the company notes, a security team could now extract the icon from a fake application, for example, and then return all malware samples that share that same icon.
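The Enterprise search itself is not publicly documented here, but for readers who want to experiment, VirusTotal’s public v2 API (current at the time of writing) already supports hash-based report lookups. A minimal sketch, with a placeholder API key; the hash is the well-known EICAR test file MD5:

```
[root@workstation ~]# curl -s "https://www.virustotal.com/vtapi/v2/file/report?apikey=YOUR_VT_API_KEY&resource=44d88612fea8a8f36de82e1278abb02f"
```

The JSON response lists per-engine detection verdicts for the submitted hash.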

VirusTotal says that it plans to “continue to leverage the power of Google infrastructure” and expand this enterprise service over time.

Google acquired VirusTotal back in 2012. For the longest time, the service didn’t see too many changes, but earlier this year, Google’s parent company Alphabet moved VirusTotal under the Chronicle brand and the development pace seems to have picked up since.

Sep
25
2018
--

Snyk raises $22M on a $100M valuation to detect security vulnerabilities in open source code

Open source software is now a $14 billion+ market and growing fast, in use in one way or another in 95 percent of all enterprises. But that expansion comes with a shadow: open source components can come with vulnerabilities, and so their widespread use in apps becomes a liability to a company’s cybersecurity.

Now, a startup out of the UK called Snyk, which has built a way to detect when those apps or components are compromised, is announcing a $22 million round of funding to meet the demand from enterprises wanting to tackle the issue head on.

Led by Accel, with participation from GV plus previous investors Boldstart Ventures and Heavybit, this Series B notably is the second round raised by Snyk within seven months — it raised a $7 million Series A in March. That’s a measure of how the company is growing (and how enthusiastic investors are about what it has built so far). The startup is not disclosing its valuation but a source close to the deal says it is around $100 million now (it’s raised about $33 million to date).

As another measure of Snyk’s growth, the company says it now has over 200 paying customers and 150,000 users, with revenues growing five-fold in the last nine months. In March, it had 130 paying customers.

(Current clients include ASOS, Digital Ocean, New Relic and Skyscanner, the company said.)

Snyk plays squarely in the middle of how the landscape for enterprise services exists today. It provides options for organisations to use it on-premises, via the cloud, or in a hybrid version of the two, with a range of paid and free tiers to get users acquainted with the service.

Guy Podjarny, the company’s CEO who co-founded Snyk with Assaf Hefetz and Danny Grander, explained that Snyk works in two parts. First, the startup has built a threat intelligence system “that listens to open source activity.” Tapping into open-conversation platforms — for example, GitHub commits and forum chatter — Snyk uses machine learning to detect potential mentions of vulnerabilities. It then funnels these to a team of human analysts, “who verify and curate the real ones in our vulnerability DB.”

Second, the company analyses source code repositories — including, again, GitHub as well as BitBucket — “to understand which open source components each one uses, flag the ones that are vulnerable, and then auto-fix them by proposing the right dependency version to use and through patches our security team builds.”
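In practice, that workflow surfaces through Snyk’s command-line tool. A minimal sketch of a typical session follows; the CLI commands shown existed at the time of writing, and exact output is omitted:

```
# Install and authenticate the Snyk CLI (requires Node.js/npm)
npm install -g snyk
snyk auth

# From a project directory: test dependencies against Snyk's vulnerability DB
snyk test

# Interactively apply upgrades and patches for the issues found
snyk wizard

# Continuously monitor the project for newly disclosed vulnerabilities
snyk monitor
```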

Open source components don’t have more vulnerabilities than closed source ones, he added, “but their heavy reuse makes those vulnerabilities more impactful.” Components can be used in thousands of applications, and by Snyk’s estimation, some 77 percent of those applications will end up with components that have security vulnerabilities. “As a result, the chances of an organisation being breached through a vulnerable open source component are far greater than a security flaw purely in their code.”

Podjarny says the plan is not to tackle proprietary code longer term but to expand how it can monitor apps built on open source.

“Our focus is on two fronts – building security tools developers love, and fixing open source security,” he said. “We believe the risk from insecure use of open source code is far greater than that of your own code, and is poorly addressed in the industry. We do intend to expand our protection from fixing known vulnerabilities in open source components to monitoring and securing them in runtime, flagging and containing malicious and compromised components.”

While this is a relatively new area for security teams to monitor and address, he added that the Equifax breach highlighted what might happen in the worst-case scenario if such issues go undetected. Snyk is not the only company that has identified the gap in the market. Black Duck focuses on flagging non-compliant open source licences, and offers some security features as well.

However, it is Snyk — whose name derives from a play on the word “sneak”, combined with the acronym meaning “so now you know” — that seems to be catching the most attention at the moment.

“Some of the largest data breaches in recent years were the result of unfixed vulnerabilities in open source dependencies; as a result, we’ve seen the adoption of tools to monitor and remediate such vulnerabilities grow exponentially,” said Philippe Botteri, partner at Accel, who is joining the board with this round. “We’ve also seen the ownership of application security shifting towards developers. We feel that Snyk is uniquely positioned in the market given the team’s deep security domain knowledge and developer-centric mindset, and are thrilled to join them on this mission of bringing security tools to developers.”

Sep
24
2018
--

Backing up Percona Server for MySQL with keyring_vault plugin enabled

Percona XtraBackup with keyring_vault

To use Percona XtraBackup with the keyring_vault plugin enabled, you need to take some special measures to secure a working backup. This post addresses how to back up Percona Server for MySQL with the keyring_vault plugin enabled. We also run through the steps needed to restore the backup from the master to a slave.

This is the second of a two-part series on setting up HashiCorp Vault with Percona Server for MySQL with the keyring_vault plugin. The first part is Using the keyring_vault plugin with Percona Server for MySQL 5.7.

Backing up from the master

First you need to install the latest Percona XtraBackup 2.4 package. In this tutorial I used this version:

[root@mysql1 ~]# xtrabackup --version
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=1
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)

Create a transition key using any method you prefer. Percona XtraBackup will use this transition key to encrypt the keys of the files being backed up. Make sure to keep the transition key safe: if you lose it, the backup will be unrecoverable.

[root@mysql1 ~]# openssl rand -base64 24
NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT

You can store the transition-key in Vault and retrieve it later:

[root@mysql1 ~]# # Store the transition-key to the Vault server
[root@mysql1 ~]# curl -H "Content-Type: application/json" -H "X-Vault-Token: be515093-b1a8-c799-b237-8e04ea90ad7a" --cacert "/etc/vault_ca/vault.pem" -X PUT -d '{"value": "NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT"}' "https://192.168.0.114:8200/v1/secret/dc1/master/transition_key"
[root@mysql1 ~]# # Retrieve the transition-key from the Vault server
[root@mysql1 ~]# curl -s -H "X-Vault-Token: be515093-b1a8-c799-b237-8e04ea90ad7a" --cacert "/etc/vault_ca/vault.pem" -X GET "https://192.168.0.114:8200/v1/secret/dc1/master/transition_key" | jq .data.value
"NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT"
[root@mysql1 ~]# # Delete the transition-key from the Vault server
[root@mysql1 ~]# curl -H "Content-Type: application/json" -H "X-Vault-Token: be515093-b1a8-c799-b237-8e04ea90ad7a" --cacert "/etc/vault_ca/vault.pem" -X DELETE "https://192.168.0.114:8200/v1/secret/dc1/master/transition_key"

We will stream the backup to the slave server using netcat. First, run this on the slave:

[root@mysql2 ~]# ncat -l 9999 | cat - > backup.xbstream

Then, on the master, I used --stream=xbstream, since the backup fails with --stream=tar, as reported in PXB-1571. Run the xtrabackup command like this:

[root@mysql1 ~]# xtrabackup --stream=xbstream --backup --target-dir=backup/ --transition-key=NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT | nc 192.168.0.117 9999
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=1 --transition-key=*
xtrabackup: recognized client arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=1 --transition-key=* --user=root --stream=xbstream --backup=1 --target-dir=backup/
180715 01:28:56  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'root'  (using password: NO).
180715 01:28:56  version_check Connected to MySQL server
180715 01:28:56  version_check Executing a version check against the server...
180715 01:28:56  version_check Done.
180715 01:28:56 Connecting to MySQL server host: localhost, user: root, password: not set, port: not set, socket: not set
Using server version 5.7.22-22-log
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 0, set to 65536
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 50331648
InnoDB: Number of pools: 1
180715 01:28:56 Added plugin 'keyring_vault.so' to load list.
180715 01:28:56 >> log scanned up to (2616858)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 2 for mysql/plugin, old maximum was 0
...
180715 01:28:58 Finished backing up non-InnoDB tables and files
180715 01:28:58 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
xtrabackup: The latest check point (for incremental): '2616849'
xtrabackup: Stopping log copying thread.
.180715 01:28:58 >> log scanned up to (2616865)
180715 01:28:58 Executing UNLOCK TABLES
180715 01:28:58 All tables unlocked
180715 01:28:58 [00] Streaming ib_buffer_pool to
180715 01:28:58 [00]        ...done
180715 01:28:58 Backup created in directory '/root/backup/'
180715 01:28:58 [00] Streaming
180715 01:28:58 [00]        ...done
180715 01:28:58 [00] Streaming
180715 01:28:58 [00]        ...done
180715 01:28:58 Saving xtrabackup_keys.
xtrabackup: Transaction log of lsn (2616849) to (2616865) was copied.
Shutting down plugin 'keyring_vault'
180715 01:28:58 completed OK!

Restoring the backup on the Slave server

Extract the backup to a temporary location:

[root@mysql2 backup]# xbstream -x < ../backup.xbstream

And then prepare it with the following command. Notice that we are still using the same transition key we used when backing up the database on the master server.

[root@mysql2 ~]# xtrabackup --prepare --target-dir=backup/ --transition-key=NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT
xtrabackup: recognized server arguments: --innodb_checksum_algorithm=crc32 --innodb_log_checksum_algorithm=strict_crc32 --innodb_data_file_path=ibdata1:12M:autoextend --innodb_log_files_in_group=2 --innodb_log_file_size=50331648 --innodb_fast_checksum=0 --innodb_page_size=16384 --innodb_log_block_size=512 --innodb_undo_directory=./ --innodb_undo_tablespaces=0 --server-id=1 --redo-log-version=1 --transition-key=*
xtrabackup: recognized client arguments: --innodb_checksum_algorithm=crc32 --innodb_log_checksum_algorithm=strict_crc32 --innodb_data_file_path=ibdata1:12M:autoextend --innodb_log_files_in_group=2 --innodb_log_file_size=50331648 --innodb_fast_checksum=0 --innodb_page_size=16384 --innodb_log_block_size=512 --innodb_undo_directory=./ --innodb_undo_tablespaces=0 --server-id=1 --redo-log-version=1 --transition-key=* --prepare=1 --target-dir=backup/
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)
xtrabackup: cd to /root/backup/
xtrabackup: This target seems to be not prepared yet.
...
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 2617384
180715 01:31:10 completed OK!

Configure keyring_vault.conf on slave

Create the keyring_vault.conf file with the following contents:

[root@mysql2 ~]# cat /var/lib/mysql-keyring/keyring_vault.conf
vault_url = https://192.168.0.114:8200
secret_mount_point = secret/dc1/slave
token = be515093-b1a8-c799-b237-8e04ea90ad7a
vault_ca = /etc/vault_ca/vault.pem

Notice that it uses the same token as the master server but has a different secret_mount_point. The same CA certificate will be used across all servers connecting to this Vault server.

Use the --copy-back option to finalize the backup restoration

Next use the --copy-back option to copy the files from the temporary backup location to the mysql data directory on the slave. Observe that during this phase XtraBackup generates a new master key, stores it in the Vault server and re-encrypts tablespace headers using this key.

[root@mysql2 ~]# xtrabackup --copy-back --target-dir=backup/ --transition-key=NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT --generate-new-master-key --keyring-vault-config=/var/lib/mysql-keyring/keyring_vault.conf
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=2 --transition-key=* --generate-new-master-key=1
xtrabackup: recognized client arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=2 --transition-key=* --generate-new-master-key=1 --copy-back=1 --target-dir=backup/
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)
180715 01:32:28 Loading xtrabackup_keys.
180715 01:32:28 Loading xtrabackup_keys.
180715 01:32:29 Generated new master key with ID 'be1ba51c-87c0-11e8-ac1c-00163e79c097-2'.
...
180715 01:32:29 [01] Encrypting /var/lib/mysql/mysql/plugin.ibd tablespace header with new master key.
180715 01:32:29 [01] Copying ./mysql/servers.ibd to /var/lib/mysql/mysql/servers.ibd
180715 01:32:29 [01]        ...done
180715 01:32:29 [01] Encrypting /var/lib/mysql/mysql/servers.ibd tablespace header with new master key.
180715 01:32:29 [01] Copying ./mysql/help_topic.ibd to /var/lib/mysql/mysql/help_topic.ibd
180715 01:32:29 [01]        ...done
180715 01:32:29 [01] Encrypting /var/lib/mysql/mysql/help_topic.ibd tablespace header with new master key.
180715 01:32:29 [01] Copying ./mysql/help_category.ibd to /var/lib/mysql/mysql/help_category.ibd
180715 01:32:29 [01]        ...done
180715 01:32:29 [01] Encrypting /var/lib/mysql/mysql/help_category.ibd tablespace header with new master key.
180715 01:32:29 [01] Copying ./mysql/help_relation.ibd to /var/lib/mysql/mysql/help_relation.ibd
180715 01:32:29 [01]        ...done
...
180715 01:32:30 [01] Encrypting /var/lib/mysql/encryptedschema/t1.ibd tablespace header with new master key.
180715 01:32:30 [01] Copying ./encryptedschema/db.opt to /var/lib/mysql/encryptedschema/db.opt
180715 01:32:30 [01]        ...done
...
180715 01:32:31 [01] Copying ./xtrabackup_binlog_pos_innodb to /var/lib/mysql/xtrabackup_binlog_pos_innodb
180715 01:32:31 [01]        ...done
180715 01:32:31 [01] Copying ./xtrabackup_master_key_id to /var/lib/mysql/xtrabackup_master_key_id
180715 01:32:31 [01]        ...done
180715 01:32:31 [01] Copying ./ibtmp1 to /var/lib/mysql/ibtmp1
180715 01:32:31 [01]        ...done
Shutting down plugin 'keyring_vault'
180715 01:32:31 completed OK!

Once that’s done, change file/directory ownership to mysql.

[root@mysql2 ~]# chown -R mysql:mysql /var/lib/mysql/

Start the mysqld instance on the slave server, configured similarly to the master configuration in the first part of this series:

early-plugin-load="keyring_vault=keyring_vault.so"
loose-keyring_vault_config="/var/lib/mysql-keyring/keyring_vault.conf"
encrypt_binlog=ON
innodb_encrypt_online_alter_logs=ON
innodb_encrypt_tables=ON
innodb_temp_tablespace_encrypt=ON
master_verify_checksum=ON
binlog_checksum=CRC32
log_bin=mysqld-bin
binlog_format=ROW
server-id=2
log-slave-updates

[root@mysql2 ~]# systemctl status mysqld
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2018-07-15 01:32:59 UTC; 6h ago
     Docs: man:mysqld(8)
           http://dev.mysql.com/doc/refman/en/using-systemd.html
  Process: 1390 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 1372 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 1392 (mysqld)
   CGroup: /system.slice/mysqld.service
           └─1392 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
Jul 15 01:32:58 mysql2 systemd[1]: Starting MySQL Server...
Jul 15 01:32:59 mysql2 systemd[1]: Started MySQL Server.
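With the instance running, you can optionally confirm that the restored tablespaces are encrypted. In MySQL 5.7, the ENCRYPTION flag surfaces in the CREATE_OPTIONS column of INFORMATION_SCHEMA.TABLES; the query below is a standard check (the encryptedschema database is the one used earlier in this tutorial):

```
mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS
    ->   FROM INFORMATION_SCHEMA.TABLES
    ->  WHERE CREATE_OPTIONS LIKE '%ENCRYPTION%';
```

Encrypted tables, such as encryptedschema.t1, should appear with ENCRYPTION="Y" in CREATE_OPTIONS.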

From here, you should be able to create the replication user on the master, and then configure slave replication based on the coordinates in the xtrabackup_binlog_info file. You can follow this section of the manual on starting replication.
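As a hedged illustration of that final step (the binlog coordinates, master hostname, and credentials below are placeholders, not values captured in this tutorial), read the coordinates from the backup and point the slave at the master:

```
[root@mysql2 ~]# cat /var/lib/mysql/xtrabackup_binlog_info
mysqld-bin.000002	1234
[root@mysql2 ~]# mysql -uroot -p
mysql> CHANGE MASTER TO
    ->   MASTER_HOST='mysql1',           -- placeholder master address
    ->   MASTER_USER='repl',             -- replication user created on the master
    ->   MASTER_PASSWORD='***',
    ->   MASTER_LOG_FILE='mysqld-bin.000002',
    ->   MASTER_LOG_POS=1234;
mysql> START SLAVE;
```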

For further reference, please read the manual related to Encrypted InnoDB tablespace backups.

Is validating your security strategy a concern?

Do you need to demonstrate that the security strategy that you have implemented for your databases is sufficient and appropriate? Perhaps you could benefit from a professional database audit? It could provide the reassurance that your organization needs.

The post Backing up Percona Server for MySQL with keyring_vault plugin enabled appeared first on Percona Database Performance Blog.

Sep
24
2018
--

Yubico’s new security keys now support FIDO2

Yubico, the company behind the popular YubiKey security keys, today announced the launch of its 5 Series keys. The company argues that these new keys, which start at $45, are the first multi-protocol security keys that support the FIDO2 standard. With this, Yubico argues, the company will be able to replace password-based authentication, which is often a hassle and insecure, with stronger hardware-based authentication.

“Innovation is core to all we do, from the launch of the original YubiKey ten years ago, to the concept of one authentication device across multiple services, and today as we are accelerating into the passwordless era,” said Stina Ehrensvard, the CEO and founder of Yubico in today’s announcement. “The YubiKey 5 Series can deliver single-factor, two-factor, or multi-factor secure login, supporting many different use cases on different platforms for different verticals with a variety of authentication scenarios.”

The company made the announcement ahead of Microsoft’s Ignite conference this week, where Microsoft, too, is expected to make a number of security announcements around the future of passwords.

“Passwordless login brings a monumental change to how business users and consumers will securely log in to applications and services,” said Alex Simons, the corporate vice president of Microsoft’s Identity Division. “With FIDO2, Microsoft is working to remove the dependency on password-based logins, with support from devices like the YubiKey 5.”

For the most part, the new keys look very much like the existing ones, but new to the series is the YubiKey 5 NFC, which supports all of the major security protocols over both USB and NFC. The addition of NFC makes it a better option for those who want to use the same key on their desktops, laptops, and mobile phones or tablets.

Supported protocols, in addition to FIDO2, include FIDO U2F, smart card (PIV), Yubico OTP, OpenPGP, OATH-TOTP, OATH-HOTP, and Challenge-Response.

The new keys will come in all of the standard Yubico form factors, including the large USB-A key with NFC support, as well as smaller versions and those for USB-C devices.

In its press release, Yubico stresses that its keys are manufactured and programmed in the USA and Sweden. The fact that it’s saying that is no accident, given that Google recently launched its own take on security keys (after years of recommending Yubikeys). Google’s keys, however, are built by a Chinese company, and while Google writes its own firmware for them, there are plenty of skeptics out there who are wary of a key manufactured in China.

Sep
21
2018
--

Securing PostgreSQL as an Enterprise-Grade Environment

In this post, we review how you can build an enhanced and secure PostgreSQL database environment using community software. We look at the features available in PostgreSQL that, when implemented, provide improved security.

As discussed in the introductory blog post of this series, our webinar on October 10, 2018 highlights important aspects an enterprise should consider for its PostgreSQL environments. This series of blog posts, each addressing a particular aspect of an enterprise-grade PostgreSQL environment, complements the webinar. This post addresses security.

Authentication Layer

Client connections to PostgreSQL Server using host based authentication

PostgreSQL uses a host-based authentication file (pg_hba.conf) to authorize incoming connections. Each entry in this file is a combination of five fields: type, database, user, address, and method. A client is allowed to connect to a database only when the combination of username, database, and client hostname matches an entry in the pg_hba.conf file.

Consider the following entry in the pg_hba.conf file:

# TYPE DATABASE USER ADDRESS METHOD
host percona pguser 192.168.0.14/32 md5

This entry says that connections from host 192.168.0.14 are allowed only for user pguser, and only to the database percona. The md5 method forces password authentication.

The order of the entries in the pg_hba.conf file matters. If you have an entry that rejects connections from a given server followed by another that allows connections from it, the first entry in the order is considered. So, in this case, the connection is rejected.
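For instance, a sketch of two conflicting entries, reusing the addresses from the example above:

```
# TYPE DATABASE USER   ADDRESS         METHOD
host   percona  pguser 192.168.0.14/32 reject
host   percona  pguser 192.168.0.14/32 md5
```

Because the reject entry comes first, pguser is refused even though the later entry would have allowed the connection.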

This is the first layer of protection. If the criteria in this Access Control List (ACL) are not satisfied, PostgreSQL discards the request without even attempting server authentication.

Server Authentication

Historically, PostgreSQL has used an MD5 digest as the default password hash. The problem with pure MD5 hashing is that this function always returns the same hash for a given password, which makes an MD5 digest more susceptible to password cracking. Newer versions of PostgreSQL implement SCRAM authentication (Salted Challenge Response Authentication Mechanism), which stores passwords as salted and iterated hashes to strengthen PostgreSQL against offline attacks; SCRAM-SHA-256 support was introduced in PostgreSQL 10. What matters most in terms of “enterprise-grade” security is that PostgreSQL supports industry-standard authentication methods out of the box, such as SSL certificates, PAM/LDAP, and Kerberos.
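On PostgreSQL 10 and later, switching an instance to SCRAM is mostly a configuration change. A sketch (note that existing MD5 password hashes remain MD5 until each password is reset after changing password_encryption):

```
# postgresql.conf
password_encryption = 'scram-sha-256'

# pg_hba.conf
host percona pguser 192.168.0.14/32 scram-sha-256
```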

Authorization Layer

User management through roles and privileges

It is always recommended to implement segregation of users through roles and privileges. There may be several user accounts in your PostgreSQL server: only a few of them may be application accounts, while the rest are developer or admin accounts. For such cases, PostgreSQL allows you to create multiple roles, each of which can be assigned a set of privileges. Thus, instead of managing user privileges individually, you maintain standard roles and assign the appropriate role to each user. Through roles, database access can be standardized, which simplifies user management and avoids granting too much or too little privilege to a given user.

For example, we might have six roles:

app_read_write
app_read_only
dev_read_write
dev_read_only
admin_read_write
admin_read_only

Now, if you need to create a new dev user who should have only read access, grant the appropriate role, such as dev_read_only:

GRANT dev_read_only to avi_im_developer;
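For completeness, here is a sketch of how a role like dev_read_only might itself be defined (the schema name app is hypothetical):

```sql
CREATE ROLE dev_read_only NOLOGIN;
GRANT USAGE ON SCHEMA app TO dev_read_only;
GRANT SELECT ON ALL TABLES IN SCHEMA app TO dev_read_only;
-- cover tables created in the future as well:
ALTER DEFAULT PRIVILEGES IN SCHEMA app GRANT SELECT ON TABLES TO dev_read_only;
```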

Row level Security

Starting with version 9.5, PostgreSQL implements row level security, which can limit access to a subset of the records/rows in a table. Usually a user is granted a mix of SELECT, INSERT, DELETE, and UPDATE privileges on a given table, which allows access to all records in the table. Through row level security, however, such privileges can be restricted to a subset of the records by means of a policy, which in turn can be assigned to a role.

In the next example, we create an employee table and two manager accounts. We then enable row level security on the table and create a policy that allows the managers to only view/modify their own subordinates’ records:

CREATE SCHEMA scott;  -- the schema must exist; created here so the example runs as-is
CREATE TABLE scott.employee (id INT, first_name VARCHAR(20), last_name VARCHAR(20), manager VARCHAR(20));
INSERT INTO scott.employee VALUES (1,'avinash','vallarapu','carina');
INSERT INTO scott.employee VALUES (2,'jobin','augustine','stuart');
INSERT INTO scott.employee VALUES (3,'fernando','laudares','carina');
CREATE USER carina WITH ENCRYPTED PASSWORD 'carina';
CREATE USER stuart WITH ENCRYPTED PASSWORD 'stuart';
CREATE ROLE managers;
GRANT managers TO carina, stuart;
GRANT SELECT, INSERT, UPDATE, DELETE ON scott.employee TO managers;
GRANT USAGE ON SCHEMA scott TO managers;
ALTER TABLE scott.employee ENABLE ROW LEVEL SECURITY;
-- each manager sees only rows where the manager column matches their username
CREATE POLICY employee_managers ON scott.employee TO managers USING (manager = current_user);

Connecting as each manager, we can see that only their own subordinates’ records are visible:

$ psql -d percona -U carina
psql (10.5)
Type "help" for help.
percona=> select * from scott.employee ;
 id | first_name | last_name | manager
----+------------+-----------+---------
  1 | avinash    | vallarapu | carina
  3 | fernando   | laudares  | carina
(2 rows)
$ psql -d percona -U stuart
psql (10.5)
Type "help" for help.
percona=> select * from scott.employee ;
 id | first_name | last_name | manager
----+------------+-----------+---------
  2 | jobin      | augustine | stuart
(1 row)

You can read more about row level security in the manual page.

Data Security

1. Encryption of data over the wire using SSL

PostgreSQL allows you to use SSL to encrypt data in motion. In addition, you may enable certificate-based authentication to ensure that the communication happens between trusted parties. SSL support is implemented through OpenSSL, so it requires the OpenSSL package to be installed on your PostgreSQL server and PostgreSQL to be built with --with-openssl support.

The following entry in a pg_hba.conf file says that connections to any database and from any user are allowed from host 192.168.0.13, as long as the communication is encrypted over SSL:

# TYPE DATABASE USER ADDRESS METHOD
hostssl all all 192.168.0.13/32 md5

Optionally, you may instead require client certificate authentication, so that a connection is only established when the client presents a valid certificate:

# TYPE DATABASE USER ADDRESS METHOD
hostssl all all 192.168.0.13/32 cert clientcert=1

2. Encryption at Rest – pgcrypto

The pgcrypto module provides cryptographic functions for PostgreSQL, allowing selected columns to be stored encrypted. pgcrypto implements PGP encryption, which is part of the OpenPGP (RFC 4880) standard, and supports both symmetric-key and public-key encryption. Besides the advanced features offered by PGP, pgcrypto also offers functions for simple encryption based on raw ciphers; these functions just run a cipher over the data, without the key management PGP provides.
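A minimal sketch of symmetric PGP encryption with pgcrypto (table, column, and key values are hypothetical; in production the key would come from the application rather than being hard-coded in SQL):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE TABLE customer (id INT, card_number BYTEA);
-- encrypt on insert; pgp_sym_encrypt returns bytea
INSERT INTO customer VALUES (1, pgp_sym_encrypt('4111111111111111', 'my_secret_key'));
-- decrypt on read
SELECT pgp_sym_decrypt(card_number, 'my_secret_key') FROM customer WHERE id = 1;
```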

Accounting and Auditing

Logging in PostgreSQL

PostgreSQL allows you to log either all statements or a subset of them, based on parameter settings. When logging_collector is enabled, you can log all DDL or DML statements, or any statement running for longer than a given duration. To avoid write overload on the data directory, you may also move log_directory to a different location. Here are a few important parameters to review when logging activities on your PostgreSQL server:

log_connections
log_disconnections
log_lock_waits
log_statement
log_min_duration_statement
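A sketch of how these might look in postgresql.conf for an audit-oriented setup (values are illustrative, not recommendations):

```
logging_collector = on
log_directory = '/var/log/postgresql'   # moved off the data directory
log_connections = on
log_disconnections = on
log_lock_waits = on
log_statement = 'ddl'                   # none | ddl | mod | all
log_min_duration_statement = 1000       # log statements taking longer than 1s
```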

Please note that detailed logging takes additional disk space and, depending on the activity on your PostgreSQL server, may impose significant write IO overhead. You should be careful when enabling logging, and only do so after understanding the overhead and performance degradation it may cause for your workload.

Auditing – pgaudit and set_user

Some essential auditing features in PostgreSQL are implemented as extensions, which can be enabled at will on highly secured environments with regulatory requirements.

pgaudit helps audit the activities happening in the database. If a user has intentionally obfuscated DDL or DML, pgaudit logs both the statement the user submitted and the sub-statement that was actually executed to the PostgreSQL log file.

set_user provides a method of controlled privilege escalation. If properly implemented, it provides the highest level of auditing, allowing the monitoring of even SUPERUSER actions.

You can read more about pgaudit here.
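Once the pgaudit extension is installed, enabling it might look like this (a sketch; the log classes to audit depend on your regulatory requirements):

```
# postgresql.conf
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'ddl, role'     # audit DDL and role/privilege changes
pgaudit.log_parameter = on    # include statement parameters in the audit log
```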

Security Bug Fixes

The PostgreSQL Global Development Group (PGDG) takes security bugs seriously. Security vulnerabilities can be reported directly to security@postgresql.org, and the list of security issues fixed for all supported PostgreSQL versions can be found here. Security fixes are delivered through minor version upgrades, which is the main reason it is advised to always keep PostgreSQL servers on the latest minor version.

If you liked this post…

Please join Percona’s PostgreSQL Support Technical Lead, Avinash Vallarapu; Senior Support Engineer, Fernando Laudares; and Senior Support Engineer, Jobin Augustine, on Wednesday, October 10, 2018 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4), as they demonstrate an enterprise-grade PostgreSQL® environment built using a combination of open source tools and extensions.

Register Now

The post Securing PostgreSQL as an Enterprise-Grade Environment appeared first on Percona Database Performance Blog.

Sep
17
2018
--

Using the keyring_vault Plugin with Percona Server for MySQL 5.7

This is the first of a two-part series on using the keyring_vault plugin with Percona Server for MySQL 5.7. The second part will walk you through how to use Percona XtraBackup to back up from this instance, restore it to another server, and set that server up as a slave with the keyring_vault plugin enabled.

What is the keyring_vault plugin?

keyring_vault is a plugin that allows the database to interface with a HashiCorp Vault server to store and secure encryption keys. The Vault server then acts as a centralized encryption key management solution, which is critical for security and for compliance with various security standards.

Configuring Vault

Create SSL certificates to be used by Vault. You can use the sample ssl.conf template below to generate the necessary files.

[root@vault1 ~]# cat /etc/sslkeys/ssl.conf
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
C = US
ST = NC
L =  R
O = Percona
CN = *
[v3_req]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:TRUE
subjectAltName = @alt_names
[alt_names]
IP = 192.168.0.114

Then run the two commands below to generate the cert and key files and the certificate chain:

$ openssl req -config ssl.conf -x509 -days 365 -batch -nodes -newkey rsa:2048 -keyout vault.key -out vault.crt
$ cat vault.key vault.crt > vault.pem

Once the SSL certificates are created, start Vault with the sample configuration below. Take note that you should follow the suggested best practices when deploying Vault in production; this example is just a simple working setup.

[root@vault1 ~]# cat /etc/vault.hcl
listener "tcp" {
address = "192.168.0.114:8200"
tls_cert_file="/etc/sslkeys/vault.crt"
tls_key_file="/etc/sslkeys/vault.key"
}
storage "file" {
path = "/var/lib/vault"
}

Assuming Vault started up fine and you are able to unseal it, the next step is to create the policy file. For more details on initializing and unsealing Vault, please read the manual here.

[root@vault1 ~]# cat /etc/vault/policy/dc1.hcl
path "secret/*" {
capabilities = ["list"]
}
path "secret/dc1/*" {
capabilities = ["create", "read", "delete", "update", "list"]
}

Create a Vault policy named dc1-secrets using the dc1.hcl file like this:

[root@vault1 ~]# vault policy write dc1-secrets /etc/vault/policy/dc1.hcl -ca-cert=/etc/sslkeys/vault.pem
Success! Uploaded policy: dc1-secrets

Next, create a token associated with the newly created policy:

[root@vault1 ~]# vault token create -policy=dc1-secrets -ca-cert=/etc/sslkeys/vault.pem > dc1-token
[root@vault1 ~]# cat dc1-token
Key                  Value
---                  -----
token                be515093-b1a8-c799-b237-8e04ea90ad7a
token_accessor       4c1ba5c5-3fed-e9bb-d230-5bf1392e2d7e
token_duration       8760h
token_renewable      true
token_policies       ["dc1-secrets" "default"]
identity_policies    []
policies             ["dc1-secrets" "default"]

Setting up MySQL

The following instructions should work on Percona Server for MySQL 5.7.20-18 and later versions.

Configure my.cnf with the following variables:

early-plugin-load="keyring_vault=keyring_vault.so"
loose-keyring_vault_config="/var/lib/mysql-keyring/keyring_vault.conf"
encrypt_binlog=ON
innodb_encrypt_online_alter_logs=ON
innodb_encrypt_tables=ON
innodb_temp_tablespace_encrypt=ON
master_verify_checksum=ON
binlog_checksum=CRC32
log_bin=mysqld-bin
binlog_format=ROW
server-id=1
log-slave-updates

Create the keyring_vault.conf file in the path above with the following contents:

[root@mysql1 ~]# cat /var/lib/mysql-keyring/keyring_vault.conf
vault_url = https://192.168.0.114:8200
secret_mount_point = secret/dc1/master
token = be515093-b1a8-c799-b237-8e04ea90ad7a
vault_ca = /etc/vault_ca/vault.pem

Here we are using the vault.pem file generated by combining the vault.crt and vault.key files. Observe that our secret_mount_point is secret/dc1/master. We want to make sure that this mount point is unique across all servers; this is in fact advised in the manual here.

Ensure that the CA certificate is owned by the mysql user:

[root@mysql1 ~]# ls -la /etc/vault_ca/
total 24
drwxr-xr-x  2 mysql mysql   41 Jul 14 11:39 .
drwxr-xr-x 63 root  root  4096 Jul 14 13:17 ..
-rw-------  1 mysql mysql 1139 Jul 14 11:39 vault.pem

Initialize the MySQL data directory on the Master:

[root@mysql1 ~]# mysqld --initialize-insecure --datadir=/var/lib/mysql --user=mysql

For production systems we do not recommend using the --initialize-insecure option; it is used here only to skip additional steps in this tutorial.

Finally, start the mysqld instance, then test the setup by creating an encrypted table.

[root@mysql1 ~]# systemctl status mysqld
● mysqld.service - MySQL Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2018-07-14 23:53:16 UTC; 2s ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Process: 1401 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
Process: 1383 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
Main PID: 1403 (mysqld)
CGroup: /system.slice/mysqld.service
└─1403 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
Jul 14 23:53:16 mysql1 systemd[1]: Starting MySQL Server...
Jul 14 23:53:16 mysql1 systemd[1]: Started MySQL Server.
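The encrypted-table test mentioned above might look like the following (the database and table names are hypothetical):

```sql
mysql> CREATE DATABASE enctest;
mysql> CREATE TABLE enctest.t1 (id INT PRIMARY KEY) ENCRYPTION='Y';
mysql> SELECT table_schema, table_name, create_options
       FROM information_schema.tables WHERE table_name = 't1';
```

If the keyring_vault setup is working, the CREATE TABLE succeeds and create_options should report ENCRYPTION="Y"; otherwise the statement fails with a keyring error.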

At this point you should have a Percona Server for MySQL instance with tablespace encryption using Vault.

Researching database security?

You might also enjoy this pre-recorded webinar on securing your database servers from external attacks, presented by my colleague Colin Charles.

The post Using the keyring_vault Plugin with Percona Server for MySQL 5.7 appeared first on Percona Database Performance Blog.

Sep
14
2018
--

Encryption of the InnoDB System Tablespace and Parallel Doublewrite Buffer

In my last post I compared the data at-rest encryption features available for MySQL and MariaDB. As noted at the time, some of the features available for Percona Server for MySQL were still in development, and the latest version (5.7.23) sees two of them released as ALPHA quality.

Encrypting the InnoDB system tablespace

The first of the new features is InnoDB system tablespace encryption via innodb_sys_tablespace_encrypt, which would provide encryption of the following data:

  • the change buffer, which caches changes to secondary index pages resulting from DML operations on pages that are not in the InnoDB buffer pool
  • the undo logs, if they have not been configured to be stored in separate undo tablespaces
  • data from any tables that exist in the main tablespace, which occurs when innodb_file_per_table is disabled

There are related changes on the horizon that would allow this to be applied to an existing instance. For now, however, it is only available for new instances, as it can only be applied during bootstrap. This means that using it with an existing cluster requires a logical restore of your data. I should restate that this is an ALPHA feature and is not production-ready.

There are some extra points to note about this new variable:

  • an instance with an encrypted tablespace cannot be downgraded to use a version prior to 5.7.23, due to the inability to read the tablespace
  • as noted, it is not currently possible to convert the tablespace between encrypted and unencrypted states, or vice versa
  • the key for the system tablespace can be manually rotated using ALTER INSTANCE ROTATE INNODB MASTER KEY as per any other tablespace

Encrypting the parallel doublewrite buffer

To complement the encryption of the system tablespace, it is also possible to encrypt the parallel doublewrite buffer using innodb_parallel_dblwr_encrypt, a feature unique to Percona Server for MySQL. This means that data for an encrypted tablespace is also written only in encrypted form in the parallel doublewrite buffer, while unencrypted tablespace data remains in plaintext. Unlike innodb_sys_tablespace_encrypt, innodb_parallel_dblwr_encrypt can be set dynamically on an existing instance.
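Since the variable is dynamic, a minimal sketch of enabling it on a running 5.7.23 instance would be:

```sql
SET GLOBAL innodb_parallel_dblwr_encrypt = ON;
-- to persist the setting across restarts, also add to my.cnf:
-- innodb_parallel_dblwr_encrypt = ON
```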

There are more encryption features planned, or already in development, for Percona Server for MySQL, so watch this space!

The post Encryption of the InnoDB System Tablespace and Parallel Doublewrite Buffer appeared first on Percona Database Performance Blog.

Sep
10
2018
--

Using ProxySQL to connect to IPv6-only databases over IPv4

It’s 2018. Maybe now is the time to start migrating your network to IPv6, and your database infrastructure is a great place to start. Unfortunately, many legacy applications don’t offer the option to connect to MySQL directly over IPv6 (sometimes even if passed a hostname). We can work around this by using ProxySQL’s IPv6 support, which was added in version 1.3. This allows us to proxy incoming IPv4 connections to IPv6-only database servers.

Note that by default ProxySQL only listens on IPv4. We don’t recommend changing that until this bug is resolved, as it causes ProxySQL to segfault frequently when listening on IPv6.

In this example I’ll use centos7-pxc57-1 as my database server. It’s running Percona XtraDB Cluster (PXC) 5.7 on CentOS 7, and is only accessible over IPv6. It is one node of a three-node cluster, but I’ll treat it as a standalone server for this example; one node of a synchronous cluster can be thought of as equivalent to the entire cluster, and vice versa. Using the PXC plugin for ProxySQL to split reads from writes is the subject of a future blog post.

The application server, centos7-app01, runs the hypothetical legacy application.

Note: We use default passwords throughout this example. You should always change the default passwords.

We have changed the IPv6 address in these examples. Any resemblance to real IPv6 addresses, living or dead, is purely coincidental.

  • 2a01:5f8:261:748c::74 is the IPv6 address of the ProxySQL server
  • 2a01:5f8:261:748c::71 is the Percona XtraDB node

Step 1: Install ProxySQL for your distribution

Packages are available here, but in this case I’m going to use the version provided by the Percona yum repository:

[...]
Installed:
proxysql.x86_64 0:1.4.9-1.1.el7
Complete!

Step 2: Configure ProxySQL to listen on IPv4 TCP port 3306 by editing /etc/proxysql.cnf and starting it

[root@centos7-app1 ~]# vim /etc/proxysql.cnf
[root@centos7-app1 ~]# grep interfaces /etc/proxysql.cnf
interfaces="127.0.0.1:3306"
[root@centos7-app1 ~]# systemctl start proxysql

Step 3: Configure ACLs on the destination database server to allow ProxySQL to connect over IPv6

mysql> GRANT SELECT on sys.* to 'monitor'@'2a01:5f8:261:748c::74' IDENTIFIED BY 'monitor';
Query OK, 0 rows affected, 1 warning (0.25 sec)
mysql> GRANT ALL ON legacyapp.* TO 'legacyappuser'@'2a01:5f8:261:748c::74' IDENTIFIED BY 'super_secure_password';
Query OK, 0 rows affected, 1 warning (0.25 sec)

Step 4: Add the IPv6 address of the destination server to ProxySQL and add users

We need to configure the IPv6 server as a mysql_server inside ProxySQL. We also need to add a user to ProxySQL as it will reuse these credentials when connecting to the backend server. We’ll do this by connecting to the admin interface of ProxySQL on port 6032:

[root@centos7-app1 ~]# mysql -h127.0.0.1 -P6032 -uadmin -padmin
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.
Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.30 (ProxySQL Admin Module)
Copyright (c) 2009-2018 Percona LLC and/or its affiliates
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> INSERT INTO mysql_servers(hostgroup_id,hostname,port) VALUES (1,'2a01:5f8:261:748c::71',3306);
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO mysql_users(username, password, default_hostgroup) VALUES ('legacyappuser', 'super_secure_password', 1);
Query OK, 1 row affected (0.00 sec)
mysql> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
mysql> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.27 sec)
mysql> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.01 sec)
mysql> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.30 sec)
mysql> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
mysql> SAVE MYSQL VARIABLES TO DISK;
Query OK, 95 rows affected (0.12 sec)

Step 5: Configure your application to connect to ProxySQL over IPv4 on localhost4 (IPv4 localhost)

This is application specific and so not shown here, but I’d configure my application to use localhost4, as this is in /etc/hosts by default and points to 127.0.0.1 rather than ::1.

Step 6: Verify

As I don’t have the application here, I’ll verify with mysql-client. Remember that ProxySQL is listening on 127.0.0.1 port 3306, so we connect via ProxySQL on IPv4 (the use of 127.0.0.1 rather than a hostname is just to make this explicit):

[root@centos7-app1 ~]# mysql -h127.0.0.1 -ulegacyappuser -psuper_secure_password
mysql: [Warning] Using a password on the command line interface can be insecure.
mysql> SELECT host FROM information_schema.processlist WHERE ID=connection_id();
+-----------------------------+
| host                        |
+-----------------------------+
| 2a01:5f8:261:748c::74:57546 |
+-----------------------------+
1 row in set (0.00 sec)
mysql> CREATE TABLE legacyapp.legacy_test_table(id int);
Query OK, 0 rows affected (0.83 sec)

The query above shows the remote host (from MySQL’s point of view) for the current connection. As you can see, MySQL sees this connection established over IPv6. So to recap, we connected to MySQL on an IPv4 IP address (127.0.0.1) and were successfully proxied to a backend IPv6 server.

The post Using ProxySQL to connect to IPv6-only databases over IPv4 appeared first on Percona Database Performance Blog.

Aug
28
2018
--

Very Good Security makes data ‘unhackable’ with $8.5M from Andreessen

“You can’t hack what isn’t there,” Very Good Security co-founder Mahmoud Abdelkader tells me. His startup assumes the liability of storing sensitive data for other companies, substituting dummy credit card or Social Security numbers for the real ones. Then when the data needs to be moved or operated on, VGS injects the original info without clients having to change their code.

It’s essentially a data bank that allows businesses to stop storing confidential info under their unsecured mattress. Or you could think of it as Amazon Web Services for data instead of servers. Given all the high-profile breaches of late, it’s clear that many companies can’t be trusted to house sensitive data. Andreessen Horowitz is betting that they’d rather leave it to an expert.

That’s why the famous venture firm is leading an $8.5 million Series A for VGS, and its partner Alex Rampell is joining the board. The round also includes NYCA, Vertex Ventures, Slow Ventures and PayPal mafioso Max Levchin. The cash builds on VGS’ $1.4 million seed round, and will pay for its first big marketing initiative and more salespeople.

“Hey! Stop doing this yourself!” Abdelkader asserts. “Put it on VGS and we’ll let you operate on your data as if you possess it with none of the liability.” While no data is ever 100 percent unhackable, putting it in VGS’ meticulously secured vaults means clients don’t have to become security geniuses themselves and instead can focus on what’s unique to their business.

“Privacy is a part of the UN Declaration of Human Rights. We should be able to build innovative applications without sacrificing our privacy and security,” says Abdelkader. He got his start in the industry by reverse-engineering games like StarCraft to build cheats and trainer software. But after studying discrete mathematics, cryptology and number theory, he craved a headier challenge.

Abdelkader co-founded Y Combinator-backed payment system Balanced in 2010, which also raised cash from Andreessen. But out-muscled by Stripe, Balanced shut down in 2015. While transitioning customers over to fellow YC alumni Stripe, Balanced received interest from other companies wanting it to store their data so they could be PCI-compliant.

Very Good Security co-founder and CEO Mahmoud Abdelkader

Now Abdelkader and his VP from Balanced, Marshall Jones, have returned with VGS to sell that as a service. It’s targeting startups that handle data like payment card information, Social Security numbers and medical info, though eventually it could invade the larger enterprise market. It can quickly help these clients achieve compliance certifications for PCI, SOC2, EI3PA, HIPAA and other standards.

VGS’ innovation comes in replacing this data with “format preserving aliases” that are privacy safe. “Your app code doesn’t know the difference between this and actually sensitive data,” Abdelkader explains. In 30 minutes of integration, apps can be reworked to route traffic through VGS without ever talking to a salesperson. VGS locks up the real strings and sends the aliases to you instead, then intercepts those aliases and swaps them with the originals when necessary.

“We don’t actually see your data that you vault on VGS,” Abdelkader tells me. “It’s basically modeled after prison. The valuables are stored in isolation.” That means a business’ differentiator is their business logic, not the way they store data.

For example, fintech startup LendUp works with VGS to issue virtual credit card numbers that are replaced with fake numbers in LendUp’s databases. That way, if it’s hacked, users don’t get their cards stolen. But when those card numbers are sent to a processor to actually make a payment, the real card numbers are subbed in at the last minute.

VGS charges per data record and operation, with the first 500 records and 100,000 sensitive API calls free; $20 a month gets clients double that, and then they pay 4 cents per record and 2 cents per operation. VGS provides access to insurance too, working with a variety of underwriters. Policies start at $1 million and can be much larger for Fortune 500s and other big companies, which might want $20 million per incident.

Obviously, VGS has to be obsessive about its own security. A breach of its vaults could kill its brand. “I don’t sleep. I worry I’ll miss something. Are we a giant honey pot?” Abdelkader wonders. “We’ve invested a significant amount of our money into 24/7 monitoring for intrusions.”

Beyond the threat of hackers, VGS also has to battle with others picking away at part of its stack or trying to compete with the whole, like TokenEx, HP’s Voltage, Thales’ Vormetric, Oracle and more. But it’s do-it-yourself security that’s the status quo and what VGS is really trying to disrupt.

But VGS has a big accruing advantage. Each time it works with a client’s partners, such as Experian or TransUnion for a company running credit checks, it already has a relationship with those partners the next time another client needs to connect with them. Abdelkader hopes that, “Effectively, we become a standard of data security and privacy. All the institutions will just say ‘why don’t you use VGS?’”

That standard only works if it’s constantly evolving to win the cat-and-mouse game versus attackers. While a company is worrying about the particular value it adds to the world, these intelligent human adversaries can find a weak link in their security — costing them a fortune and ruining their relationships. “I’m selling trust,” Abdelkader concludes. That peace of mind is often worth the price.
