Jun 10, 2019
--

Vectra lands $100M Series E investment for AI-driven network security

Vectra, a seven-year-old company that helps customers detect intrusions at the network level, whether in the cloud or on premises, announced a $100 million Series E funding round today led by TCV. Existing investors, including Khosla Ventures and Accel, also participated in the round, which brings the total raised to more than $200 million, according to the company.

As company CEO Hitesh Sheth explained, there are two primary types of intrusion detection. The first is endpoint detection; the second is his company’s area of coverage, network detection and response, or NDR. He says that adding a layer of artificial intelligence improves the overall results.

“One of the keys to our success has been applying AI to network traffic, the networking side of NDR, to look for the signal in the noise. And we can do this across the entire infrastructure, from the data center to the cloud all the way into end user traffic including IoT,” he explained.
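
To make the “signal in the noise” idea concrete, here is a toy sketch of one such signal: spotting beaconing in connection metadata. It is illustrative only; Vectra does not publish Cognito’s models, and the function and sample data below are made up.

from statistics import pstdev

# Toy "signal in the noise" detector (illustrative; not Vectra's models).
# Machine-like command-and-control beaconing stands out against bursty
# human traffic because its inter-connection timing barely varies.
def beaconing_score(timestamps):
    """Lower jitter between connections means more machine-like traffic."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return 1.0 / (1.0 + pstdev(gaps))  # 1.0 == perfectly periodic

human = [0, 7, 9, 31, 34, 80, 95, 160]      # bursty browsing, seconds
implant = [0, 60, 120, 180, 240, 300, 360]  # check-in every 60 seconds
print(beaconing_score(human))    # low score: irregular gaps
print(beaconing_score(implant))  # 1.0: perfectly periodic beacon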

He said that as companies move their data to the cloud, they are looking for ways to ensure the security of their most valuable data assets, and he says his company’s NDR solution can provide that. In fact, securing the cloud side of the equation is one of the primary investment focuses for this round.

Tim McAdam, from lead investor TCV, says that the AI piece is a real differentiator for Vectra and one that attracted his firm to invest in the company. He said that while he realized that AI is an overused term these days, after talking to 30 customers he heard over and over again that Vectra’s AI-driven solution was a differentiator over competing products. “All of them have decided to standardize on the Vectra Cognito because to a person, they spoke of the efficacy and the reduction of their threat vectors as a result of standardizing on Vectra,” McAdam told TechCrunch.

The company was founded in 2012 and currently has 240 employees. With this funding, that number is expected to double within 12 to 18 months.

May 28, 2019
--

FireEye snags security effectiveness testing startup Verodin for $250M

When FireEye reported its earnings last month, the outlook was a little light, so the security vendor decided to be proactive and make a big purchase. Today, the company announced it has acquired Verodin for $250 million; the deal has already closed.

The startup had raised over $33 million since it opened its doors five years ago, according to Crunchbase data, and would appear to have given investors a decent return. With Verodin, FireEye gets a security validation vendor, that is, a company that can review an existing security setup and find gaps in coverage.

That would seem to be a handy tool to have in your security arsenal, which could explain the price tag. It could also help set FireEye apart from the broader market, or fill a gap in its own platform.

FireEye CEO Kevin Mandia certainly sees the potential of his latest purchase. “Verodin gives us the ability to automate security effectiveness testing using the sophisticated attacks we spend hundreds of thousands of hours responding to, and provides a systematic, quantifiable, and continuous approach to security program validation,” he said in a statement.
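
As a rough illustration of what “security effectiveness testing” means in practice, the sketch below replays harmless stand-ins for attack techniques and reports which ones the defensive stack failed to flag. Everything here (the attack catalogue, send_probe and detections) is a hypothetical stub, not a FireEye or Verodin API.

# Minimal sketch of security validation (hypothetical stubs throughout).
SIMULATED_ATTACKS = {
    "lateral-movement": "SMB admin$ write from a workstation segment",
    "dns-exfil": "thousands of TXT lookups to a newly seen domain",
}

def send_probe(name, description):
    """Replay a harmless stand-in for a real attack technique."""
    print("replaying %s: %s" % (name, description))

def detections():
    """Names the security stack alerted on; a real tool would query the
    SIEM, here the result is hard-coded for the example."""
    return {"dns-exfil"}

for name, description in SIMULATED_ATTACKS.items():
    send_probe(name, description)
for name in sorted(set(SIMULATED_ATTACKS) - detections()):
    print("GAP: %s went undetected" % name)  # a coverage gap to fix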

Chris Key, Verodin co-founder and chief executive officer, sees the purchase through the standard acquisition lens. “By joining FireEye, Verodin extends its ability to help customers take a proactive approach to understanding and mitigating the unique risks, inefficiencies and vulnerabilities in their environments,” he said in a statement. In other words, as part of a bigger company, we’ll do more faster.

While FireEye plans to incorporate Verodin into its on-prem and managed services, it will continue to sell the solution as a stand-alone product, as well.

May 21, 2019
--

Google says some G Suite user passwords were stored in plaintext since 2005

Google says a small number of its enterprise customers mistakenly had their passwords stored on its systems in plaintext.

The search giant disclosed the exposure Tuesday but declined to say exactly how many enterprise customers were affected. “We recently notified a subset of our enterprise G Suite customers that some passwords were stored in our encrypted internal systems unhashed,” said Google vice president of engineering Suzanne Frey.

Passwords are typically scrambled using a hashing algorithm to prevent them from being read by humans. G Suite administrators are able to manually upload, set and recover new user passwords for company users, which helps in situations where new employees are onboarded. But Google said it discovered in April that the way it implemented password setting and recovery for its enterprise offering in 2005 was faulty, and improperly stored a copy of the password in plaintext.
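
For readers unfamiliar with the distinction, the sketch below shows what hashed storage looks like in practice, using only the Python standard library. It is a generic illustration, not Google’s implementation: only the salt and digest are stored, so a leaked record cannot be turned back into the password.

import hashlib, hmac, os

def hash_password(password):
    """Return (salt, digest); store these instead of the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("wrong guess", salt, digest)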

Google has since removed the feature.

No consumer Gmail accounts were affected by the security lapse, said Frey.

“To be clear, these passwords remained in our secure encrypted infrastructure,” said Frey. “This issue has been fixed and we have seen no evidence of improper access to or misuse of the affected passwords.”

Google has more than 5 million enterprise customers using G Suite.

Google said it also discovered a second security lapse earlier this month as it was troubleshooting new G Suite customer sign-ups. Since January, the company said, it had been improperly storing “a subset” of unhashed G Suite passwords on its internal systems for up to two weeks. Those systems, Google said, were only accessible to a limited number of authorized Google staff.

“This issue has been fixed and, again, we have seen no evidence of improper access to or misuse of the affected passwords,” said Frey.

Google said it has notified G Suite administrators to warn of the password security lapse, and will reset account passwords for those who have yet to change them.

A spokesperson confirmed Google has informed data protection regulators of the exposure.

Google becomes the latest company to have admitted storing sensitive data in plaintext in the past year. Facebook said in March that “hundreds of millions” of Facebook and Instagram passwords were stored in plaintext. Twitter and GitHub also admitted similar security lapses last year.

May 15, 2019
--

Egnyte brings native G Suite file support to its platform

Egnyte announced today that customers can now store G Suite files inside its storage, security and governance platform. This builds on the support the company previously had for Office 365 documents.

Egnyte CEO and co-founder Vineet Jain says that while many enterprise customers have seen the value of a collaborative office suite like G Suite, they might have stayed away because of compliance concerns (whether that was warranted or not).

He said that Google has been working for some time on an API that allows companies like Egnyte to decouple G Suite documents from Google Drive. Previously, if you wanted to use G Suite, you had no choice but to store the documents in Google Drive.

Jain acknowledges that the actual integration is pretty much the same as his competitors because Google determined the features. In fact, Box and Dropbox announced similar capabilities over the last year, but he believes his company has some differentiating features on its platform.

“I honestly would be hard pressed to tell you this is different than what Box or Dropbox is doing, but when you look at the overall context of what we’re doing…I think our advanced governance features are a game changer,” Jain told TechCrunch.

What that means is that G Suite customers can open a document and get the same editing experience as they would get were they inside Google Drive, while getting all the compliance capabilities built into Egnyte via Egnyte Protect. What’s more, they can store the files wherever they like, whether that’s in Egnyte itself, an on-premises file store or any cloud storage option that Egnyte supports, for that matter.

G Suite documents stored on the Egnyte platform

Long before it was commonplace, Egnyte tried to differentiate itself from a crowded market by being a hybrid play where files can live on-premises or in the cloud. It’s a common way of looking at cloud strategy now, but it wasn’t always the case.

Jain has always emphasized a disciplined approach to growing the company, and it has grown to 15,000 customers and 600 employees over 11 years in business. He won’t share exact revenue, but says the company is generating “multi-millions in revenue” each month.

He has been talking about an IPO for some time, and that remains a goal for the company. In a recent letter to employees that Egnyte shared with TechCrunch, Jain put it this way. “Our leadership team, including our board members, have always looked forward to an IPO as an interim milestone — and that has not changed. However, we now believe this company has the ability to not only be a unicorn but to be a multi-billion dollar company in the long-term. This is a mindset that we all need to have moving forward,” he wrote.

Egnyte was founded in 2007 and has raised more than $137 million, according to Crunchbase data.

May 2, 2019
--

Takeaways from F8 and Facebook’s next phase

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Josh Constine and Frederic Lardinois discuss major announcements that came out of Facebook’s F8 conference and dig into how Facebook is trying to redefine itself for the future.

Though touted as a developer-focused conference, Facebook spent much of F8 discussing privacy upgrades, how the company is improving its social impact, and a series of new initiatives on the consumer and enterprise side. Josh and Frederic discuss which announcements seem to make the most strategic sense, and which may create attractive (or unattractive) opportunities for new startups and investment.

“This F8 was aspirational for Facebook. Instead of being about what Facebook is, and accelerating the growth of it, this F8 was about Facebook, and what Facebook wants to be in the future.

That’s not the newsfeed, that’s not pages, that’s not profiles. That’s marketplace, that’s Watch, that’s Groups. With that change, Facebook is finally going to start to decouple itself from the products that have dragged down its brand over the last few years through a series of nonstop scandals.”

Josh and Frederic dive deeper into Facebook’s plans around its redesign, Messenger, Dating, Marketplace, WhatsApp, VR, smart home hardware and more. The two also dig into the biggest news, or lack thereof, on the developer side, including Facebook’s Ax and BoTorch initiatives.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

Apr 24, 2019
--

VDOO secures $32M for a platform that uses AI to detect and fix vulnerabilities on IoT devices

Our universe of connected things is expanding by the day: the number of objects with embedded processors now exceeds the number of smartphones globally and is projected to reach some 18 billion devices by 2022. But just as that number is growing, so are the opportunities for malicious hackers to use these embedded devices to crack into networks, disrupting how these objects work and stealing information, a problem that analysts estimate will cost $18.3 billion to address by 2023. Now, an Israeli startup called VDOO has raised $32 million to address this, with a platform that identifies and fixes security vulnerabilities in IoT devices, and then tests to make sure that the fixes work.

The funding is being led by WRVI Capital and GGV Capital and also includes strategic investments from NTT DOCOMO (which works with VDOO), MS&AD Ventures (the venture arm of the global cyber insurance firm), and Avigdor Willenz (who founded both Galileo Technologies and Annapurna Labs, respectively acquired by Marvell and Amazon). 83North, Dell Technology Capital and David Strohm, who backed VDOO in its previous round of $13 million in January 2018, also participated, bringing the total raised by VDOO now to $45 million.

VDOO — a reference to the Hebrew word that sounds like “vee-doo” and means “making sure” — was cofounded by Netanel Davidi (co-CEO), Uri Alter (also co-CEO) and Asaf Karas (CTO). Davidi and Alter previously co-founded Cyvera, a pioneer in endpoint security that was acquired by Palo Alto Networks and became the basis for its endpoint security product; Karas, meanwhile, comes to VDOO with extensive experience working for, among other places, the Israeli Defense Forces.

In an interview, Davidi noted that the company was created to address one of the biggest shortfalls of IoT.

“Many embedded systems have a low threshold for security because they were not created with security in mind,” he said, noting that this is partly due to concerns of how typical security fixes might impact performance, and the fact that this has typically not been a core competency for hardware makers, but something that is considered after devices are in the market. At the same time, a lot of security solutions today in the IoT space have focused on monitoring, but not fixing, he added. “Most companies have good solutions for the visibility of their systems, and are able to identify vulnerabilities on the network, but are not sufficient at protecting devices themselves.”

The sheer number of devices on the market and their spread across a range of deployments from manufacturing and other industrial scenarios, through to in-home systems that can be vulnerable even when not connected to the internet, also makes for a complicated and uneven landscape.

VDOO’s approach was to conceive of a very lightweight implementation that sits on a small group of devices — “small” is relative here: the set was 16,000 objects — applying machine learning to “learn” how different security vulnerabilities might behave to discover adjacent hacks that hadn’t yet been identified.

“For any kind of vulnerability, using deep binary analysis capabilities, we try to understand the broader idea, to figure out how a similar vulnerability can emerge,” he said.

Part of the approach is to pare down security requirements and solutions to those pertinent to the device in question, and providing clear guidance to vendors for how to best avoid problems in the first place at the development stage. VDOO then also generates specific “tailor-made on-device micro-agents” to continue the detection and repair process. (Davidi likened it to a modern approach to some cancer care: preventive measures such as periodic monitoring checks; followed by a “tailored immunotherapy” based on prior analysis of DNA.)

It currently supports Linux- and Android-based operating systems as well as FreeRTOS, with support for more systems coming soon, Davidi said. It sells its services primarily to device makers, who can push over-the-air updates to their devices after they have been purchased and deployed, keeping them up to date with the latest fixes. Typical devices currently secured with VDOO tech include safety and security devices such as surveillance cameras, NVRs and DVRs, fire alarm systems, access controls, routers, switches and access points, Davidi said.

It’s the focus on providing security services for hardware makers, in fact, that helps VDOO stand out from the others in the field.

“Among all startups for embedded systems, VDOO is the first to introduce a unique, holistic approach focusing on the device vendors, which are the focal enabler in truly securing devices,” said Lip-Bu Tan, founding partner of WRVI Capital. “We are delighted to back VDOO’s technology, and the exceptional team that has created advanced tools to allow vendors to secure devices as much as possible without in-house security know-how. For the first time in many decades, I see a clear demand for security being raised constantly in many meetings with leading OEMs worldwide, as well as software giants.”

Over the last 18 months, as VDOO has continued to expand its own reach, it has picked up customers along the way after identifying vulnerabilities in their devices. Its dataset covers some 70 million embedded systems’ binaries and more than 16,000 versions of embedded systems, and it has worked with customers to identify and address 150 zero-day vulnerabilities and 100,000 security issues that would have potentially impacted 1.5 billion devices.

Interestingly, while VDOO is building its own IP, it is also working with a number of vendors to provide many of the fixes. Davidi says that VDOO and those vendors go through fairly rigorous screening processes before integrating, and the hope is that down the line there will be more automation brought in for the “fixing” element using third-party solutions.

“VDOO brings a unique end-to-end security platform, answering the global connectivity trend and the emerging threats targeting embedded devices, to provide security as an essential enabler of extensive connected devices adoption. With its differentiated capabilities, VDOO has succeeded in acquiring global customers, including many top-tier brands. Moreover, VDOO’s ability to uncover and mitigate weaknesses created by external suppliers fits perfectly into our Supply Chain Security investment strategy,” said Glenn Solomon, managing partner at GGV Capital, in a statement. “This funding, together with the company’s great technology, skilled entrepreneurs and one of the best teams we have seen, will allow VDOO to maintain its leadership position in IoT security and expand geographies while continuing to develop its state-of-the-art technology.”

Valuation is currently not being disclosed.

Apr 18, 2019
--

MySQL-python: Adding caching_sha2_password and TLSv1.2 Support

python not connecting to MySQL

Python 2 reaches EOL on 2020-01-01, and one of its most commonly used third-party packages is MySQL-python. Since MySQL-python does not support Python 3, if you have not yet migrated away from both of these then you may have come across some issues when using more recent versions of MySQL and enforcing a secured installation. This post will look at two specific issues that you may come across (caching_sha2_password in MySQL 8.0, and TLSv1.2 with MySQL >=5.7.10 when using OpenSSL) that will prevent you from connecting to a MySQL database, and will buy you a little extra time to migrate your code.

For CentOS 7, MySQL-python is built against the client library provided by the mysql-devel package, which does not support some of the newer features, such as caching_sha2_password (the new default authentication plugin as of MySQL 8.0.4) and TLSv1.2. This means that you cannot take advantage of these security features and therefore leave yourself with an increased level of vulnerability.

So, what can we do about this? Thankfully, it is very simple to resolve, so long as you don’t mind getting your hands dirty!

Help! MySQL-python won’t connect to my MySQL 8.0 instance

First of all, let’s take a look at the issues that the latest version of MySQL-python (MySQL-python-1.2.5-1.el7.rpm for CentOS 7) has when connecting to a MySQL 8.0 instance. We will use a Docker container to help test this out by installing MySQL-python along with the Percona Server 8.0 client so that we can compare the two.

=> docker run --rm -it --network host --name rpm-build centos:latest
# yum install -y -q MySQL-python
warning: /var/cache/yum/x86_64/7/base/packages/MySQL-python-1.2.5-1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for MySQL-python-1.2.5-1.el7.x86_64.rpm is not installed
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-6.1810.2.el7.centos.x86_64 (@CentOS)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
# yum install -q -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm; percona-release setup ps80
* Enabling the Percona Original repository
<*> All done!
The percona-release package now contains a percona-release script that can enable additional repositories for our newer products.
For example, to enable the Percona Server 8.0 repository use:
  percona-release setup ps80
Note: To avoid conflicts with older product versions, the percona-release setup command may disable our original repository for some products.
For more information, please visit:
  https://www.percona.com/doc/percona-repo-config/percona-release.html
* Disabling all Percona Repositories
* Enabling the Percona Server 8.0 repository
* Enabling the Percona Tools repository
<*> All done!
# yum install -q -y percona-server-client
warning: /var/cache/yum/x86_64/7/ps-80-release-x86_64/packages/percona-server-shared-8.0.15-5.1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 8507efa5: NOKEY
Public key for percona-server-shared-8.0.15-5.1.el7.x86_64.rpm is not installed
Importing GPG key 0x8507EFA5:
 Userid     : "Percona MySQL Development Team (Packaging key) <mysql-dev@percona.com>"
 Fingerprint: 4d1b b29d 63d9 8e42 2b21 13b1 9334 a25f 8507 efa5
 Package    : percona-release-1.0-11.noarch (installed)
 From       : /etc/pki/rpm-gpg/PERCONA-PACKAGING-KEY

Next we will create a client config to save some typing later on and then check that we can connect to the MySQL instance that is already running; if you don’t have MySQL running on your local machine then you can install it in the container.

# cat <<EOS > /root/.my.cnf
> [client]
> user = testme
> password = my-secret-pw
> host = 127.0.0.1
> protocol = tcp
> ssl-mode = REQUIRED
> EOS
mysql> /* hide passwords */ PAGER sed "s/AS '.*' REQUIRE/AS 'xxx' REQUIRE/" ;
PAGER set to 'sed "s/AS '.*' REQUIRE/AS 'xxx' REQUIRE/"'
mysql> SHOW CREATE USER CURRENT_USER();
CREATE USER 'testme'@'%' IDENTIFIED WITH 'caching_sha2_password' AS 'xxx' REQUIRE SSL PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK PASSWORD HISTORY DEFAULT PASSWORD REUSE INTERVAL DEFAULT
mysql> SELECT @@version, @@version_comment, ssl_version, ssl_cipher, user FROM sys.session_ssl_status INNER JOIN sys.processlist ON thread_id = thd_id AND conn_id = CONNECTION_ID();
8.0.12-1        Percona Server (GPL), Release '1', Revision 'b072566'   TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256     testme@localhost
All looks good here, so now we will check that MySQLdb can also connect:
# /usr/bin/python -c "import MySQLdb; dbc=MySQLdb.connect(read_default_file='~/.my.cnf').cursor(); dbc.execute('select @@version, @@version_comment, ssl_version, ssl_cipher, user from sys.session_ssl_status inner join sys.processlist on thread_id = thd_id and conn_id = connection_id()'); print dbc.fetchall()"
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib64/python2.7/site-packages/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 193, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2059, "Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib64/mysql/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory")

Changing the user’s authentication plugin

We have hit the first issue, because MySQL 8.0 introduced the caching_sha2_password plugin and made it the default authentication plugin, so we can’t connect at all. However, we can gain access by changing the grants for the user to use the old default plugin and then test again.

mysql> ALTER USER 'testme'@'%' IDENTIFIED WITH mysql_native_password BY "my-secret-pw";
# /usr/bin/python -c "import MySQLdb; dbc=MySQLdb.connect(read_default_file='~/.my.cnf').cursor(); dbc.execute('select @@version, @@version_comment, ssl_version, ssl_cipher, user from sys.session_ssl_status inner join sys.processlist on thread_id = thd_id and conn_id = connection_id()'); print dbc.fetchall()"
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib64/python2.7/site-packages/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 193, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (1045, "Access denied for user 'testme'@'localhost' (using password: YES)")

Configuring SSL options

We still can’t connect, so what can be wrong? Well, we haven’t added any SSL details to the config other than specifying that we need to use SSL, so we will add the necessary options and make sure that we can connect.

# cat <<EOS >> /root/.my.cnf
> ssl-ca = /root/certs/ca.pem
> ssl-cert = /root/certs/client-cert.pem
> ssl-key = /root/certs/client-key.pem
> EOS
# mysql -Bse "select 1"
1
# /usr/bin/python -c "import MySQLdb; dbc=MySQLdb.connect(read_default_file='~/.my.cnf').cursor(); dbc.execute('select @@version, @@version_comment, ssl_version, ssl_cipher, user from sys.session_ssl_status inner join sys.processlist on thread_id = thd_id and conn_id = connection_id()'); print dbc.fetchall()"
(('8.0.12-1', "Percona Server (GPL), Release '1', Revision 'b072566'", 'TLSv1', 'ECDHE-RSA-AES256-SHA', 'testme@localhost'),)

Forcing TLSv1.2 or later to further secure connections

That looks much better now, but if you look closely then you will notice that the MySQLdb connection is using TLSv1, which will make your security team either sad, angry or perhaps both as the connection can be downgraded! We want to secure the installation and keep data over the wire safe from prying eyes, so we will remove TLSv1 and also TLSv1.1 from the list of versions accepted by MySQL and leave only TLSv1.2 (TLSv1.3 is not supported with our current MySQL 8.0 and OpenSSL versions). Any guesses what will happen now?

mysql> SET PERSIST_ONLY tls_version = "TLSv1.2"; RESTART;
# /usr/bin/python -c "import MySQLdb; dbc=MySQLdb.connect(read_default_file='~/.my.cnf').cursor(); dbc.execute('select @@version, @@version_comment, ssl_version, ssl_cipher, user from sys.session_ssl_status inner join sys.processlist on thread_id = thd_id and conn_id = connection_id()'); print dbc.fetchall()"
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib64/python2.7/site-packages/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 193, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2026, 'SSL connection error: error:00000001:lib(0):func(0):reason(1)')

MySQLdb can no longer connect to MySQL, sadly with a rather cryptic message! However, as we made the change that triggered this we don’t need to decipher it and can now start looking to add TLSv1.2 support to MySQLdb, so roll up your sleeves!

Solution: Build a new RPM

In order to build a new RPM we will need to do a little extra work in the container first of all, but it doesn’t take long and is pretty simple to do. We are going to install the necessary packages to create a basic build environment and then rebuild the MySQL-python RPM from its current source RPM.

## Download packages
# yum install -q -y rpm-build yum-utils gnupg2 rsync deltarpm gcc
Package yum-utils-1.1.31-50.el7.noarch already installed and latest version
Package gnupg2-2.0.22-5.el7_5.x86_64 already installed and latest version
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
## Create build directory tree
# install -d /usr/local/builds/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
## Configure the RPM macro
# echo "%_topdir /usr/local/builds/rpmbuild" > ~/.rpmmacros
## Switch to a temporary directory to ease cleanup
# cd "$(mktemp -d)"; pwd
/tmp/tmp.xqxdBeLkXs
## Download the source RPM
# yumdownloader --source -q MySQL-python
Enabling updates-source repository
Enabling base-source repository
Enabling extras-source repository
## Extract the source RPM
# rpm2cpio "$(ls -1 MySQL-python*src.rpm)" | cpio -idmv
MySQL-python-1.2.5.zip
MySQL-python-no-openssl.patch
MySQL-python.spec
234 blocks

We are now ready to start making some changes to the source code and build specifications, but first of all we need to take note of another change that occurred in MySQL 8.0. Older code will reference my_config.h, which has since been removed and is no longer required for building; the fix for this will be shown below.

## Adjust the spec file to use percona-server-devel and allow a build number
# sed -i "s/mysql-devel/percona-server-devel/g; s/Release: 1/Release: %{buildno}/" MySQL-python.spec
## Store the ZIP filename and extract
# SRC_ZIP="$(ls -1 MySQL-python*.zip)"; unzip "${SRC_ZIP}"
## Store the source directory and remove the include of my_config.h
# SRC_DIR=$(find . -maxdepth 1 -type d -name "MySQL-python*"); sed -i 's/#include "my_config.h"/#define NO_MY_CONFIG/' "${SRC_DIR}/_mysql.c"
## View our _mysql.c change
# fgrep -m1 -B3 -A1 -n NO_MY_CONFIG "${SRC_DIR}/_mysql.c"
41-#if defined(MS_WINDOWS)
42-#include <config-win.h>
43-#else
44:#define NO_MY_CONFIG
45-#endif
## Update the source
# zip -uv "${SRC_ZIP}" "${SRC_DIR}/_mysql.c"
updating: MySQL-python-1.2.5/_mysql.c   (in=84707) (out=17314) (deflated 80%)
total bytes=330794, compressed=99180 -> 70% savings

Now that we have adjusted the source code and specification we can start work on the new package so that we can once again connect to MySQL.

## Sync the source to the build tree
# rsync -ac ./ /usr/local/builds/rpmbuild/SOURCES/
## Copy the new specification file to the build tree
# cp -a MySQL-python.spec /usr/local/builds/rpmbuild/SPECS/
## Build a new source RPM with our updates
# rpmbuild --define "buildno 2" -bs /usr/local/builds/rpmbuild/SPECS/MySQL-python.spec
Wrote: /usr/local/builds/rpmbuild/SRPMS/MySQL-python-1.2.5-2.el7.src.rpm
## Use the new source RPM to install any missing dependencies
# yum-builddep -q -y /usr/local/builds/rpmbuild/SRPMS/MySQL-python-1.2.5-2.el7.src.rpm &>>debug.log
## Build a new RPM
# rpmbuild --define "buildno 2" --rebuild /usr/local/builds/rpmbuild/SRPMS/MySQL-python-1.2.5-2.el7.src.rpm &>>debug.log
# tail -n1 debug.log
+ exit 0

All went well and so we can now install the new package and see if it worked.

## Install the local RPM
# yum localinstall -q -y /usr/local/builds/rpmbuild/RPMS/x86_64/MySQL-python-1.2.5-2.el7.x86_64.rpm
## Check to see which libmysqlclient is used
# ldd /usr/lib64/python2.7/site-packages/_mysql.so | fgrep libmysqlclient
        libmysqlclient.so.21 => /usr/lib64/mysql/libmysqlclient.so.21 (0x00007f61660be000)
## Test the connection
# /usr/bin/python -c "import MySQLdb; dbc=MySQLdb.connect(read_default_file='~/.my.cnf').cursor(); dbc.execute('select @@version, @@version_comment, ssl_version, ssl_cipher, user from sys.session_ssl_status inner join sys.processlist on thread_id = thd_id and conn_id = connection_id()'); print dbc.fetchall()"
(('8.0.12-1', "Percona Server (GPL), Release '1', Revision 'b072566'", 'TLSv1.2', 'ECDHE-RSA-AES128-GCM-SHA256', 'testme@localhost'),)

Almost there… now force user authentication with the caching_sha2_password plugin

Hurrah! We can once again connect to the database and this time we are using TLSv1.2 and have thus increased our security. There is one thing left to do now though. Earlier on we needed to change the authentication plugin, so we will now change it back for extra security and see if all is still well.

mysql> ALTER USER 'testme'@'%' IDENTIFIED WITH caching_sha2_password BY "my-secret-pw";
# /usr/bin/python -c "import MySQLdb; dbc=MySQLdb.connect(read_default_file='~/.my.cnf').cursor(); dbc.execute('select @@version, @@version_comment, ssl_version, ssl_cipher, user from sys.session_ssl_status inner join sys.processlist on thread_id = thd_id and conn_id = connection_id()'); print dbc.fetchall()"
(('8.0.12-1', "Percona Server (GPL), Release '1', Revision 'b072566'", 'TLSv1.2', 'ECDHE-RSA-AES128-GCM-SHA256', 'testme@localhost'),)

Mission successful! Hopefully, if you find yourself in need of a little extra time to migrate away from MySQLdb, this will help.


Image modified from photo by David Clode on Unsplash

Apr 12, 2019
--

Homeland Security warns of security flaws in enterprise VPN apps

Several enterprise virtual private networking apps are vulnerable to a security bug that can allow an attacker to remotely break into a company’s internal network, according to a warning issued by Homeland Security’s cybersecurity division.

An alert was published Friday by the government’s Cybersecurity and Infrastructure Security Agency following a public disclosure by CERT/CC, the vulnerability disclosure center at Carnegie Mellon University.

The VPN apps built by four vendors — Cisco, Palo Alto Networks, Pulse Secure and F5 Networks — improperly store authentication tokens and session cookies on a user’s computer. These aren’t your traditional consumer VPN apps used to protect your privacy, but enterprise VPN apps that are typically rolled out by a company’s IT staff to allow remote workers to access resources on a company’s network.

The apps generate tokens from a user’s password, and those tokens are stored on the computer to keep the user logged in without having to reenter a password every time. If stolen, these tokens can allow access to that user’s account without needing the password.

But with access to a user’s computer — such as through malware — an attacker could steal those tokens and use them to gain access to a company’s network with the same level of access as the user. That includes company apps, systems and data.
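
The sketch below illustrates the flaw class. The cache path and endpoint are hypothetical, not the layout of any of the named VPN clients: the point is simply that a token written to disk unprotected can be lifted and replayed by any code running as the same user, with no password required.

import urllib.request
from pathlib import Path

# Hypothetical location where a client caches its session token.
TOKEN_CACHE = Path.home() / ".corp-vpn" / "session.token"

def replay(token):
    """Present the stolen token exactly as the legitimate client would."""
    req = urllib.request.Request(
        "https://vpn.example.com/portal",        # placeholder endpoint
        headers={"Cookie": "session=" + token},  # no password needed
    )
    return urllib.request.urlopen(req)

if TOKEN_CACHE.exists():
    replay(TOKEN_CACHE.read_text().strip())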

So far, only Palo Alto Networks has confirmed its GlobalProtect app was vulnerable. The company issued a patch for both its Windows and Mac clients.

Neither Cisco nor Pulse Secure has patched its apps. F5 Networks is said to have known about the insecure token storage since at least 2013, but advised users to roll out two-factor authentication instead of releasing a patch.

CERT warned that hundreds of other apps could be affected — but more testing was required.

Apr 11, 2019
--

Armis nabs $65M Series C as IoT security biz grows in leaps and bounds

Armis is helping companies protect IoT devices on the network without using an agent, and it’s apparently a problem that is resonating with the market, as the startup reports 700 percent growth in the last year. That caught the attention of investors, who awarded them a $65 million Series C investment to help keep accelerating that growth.

Sequoia Capital led the round with help from new investors Insight Venture Partners and Intermountain Ventures. Returning investors Bain Capital Ventures, Red Dot Capital Partners and Tenaya Capital also participated. Today’s investment brings the total raised to $112 million, according to the company.

The company is solving a hard problem around device management on a network. If you have devices where you cannot apply an agent to track them, how do you manage them? Nadir Izrael, company co-founder and CTO, says you have to do it very carefully because even scanning for ports could be too much for older devices and they could shut down. Instead, he says that Armis takes a passive approach to security, watching and learning and understanding what normal device behavior looks like — a kind of behavioral fingerprinting.

“We observe what devices do on the network. We look at their behavior, and we figure out from that everything we need to know,” Izrael told TechCrunch. He adds, “Armis in a nutshell is a giant device behavior crowdsourcing engine. Basically, every client of Armis is constantly learning how devices behave. And those statistical models, those machine learning models, they get merged into master models.”
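
A toy version of that idea is sketched below: learn a per-device statistical baseline from normal traffic, then flag observations that deviate sharply from it. It is illustrative only; Armis does not publish its models, and the data and threshold here are made up.

from statistics import mean, stdev

# Bytes-per-minute samples observed for a device while learning "normal".
baseline = {"camera-01": [510, 530, 495, 520, 505, 515, 500, 525]}

def is_anomalous(device, observed, threshold=4.0):
    """Flag readings far outside the device's learned behavior."""
    history = baseline[device]
    return abs(observed - mean(history)) > threshold * stdev(history)

print(is_anomalous("camera-01", 512))   # False: within the normal range
print(is_anomalous("camera-01", 9800))  # True: e.g. an exfiltration burst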

Whatever they are doing, they seem to have hit upon a security pain point. They announced a $30 million Series B almost exactly a year ago, and they went back for more because they were growing quickly and needed the capital to hire people to keep up.

That kind of growth is a challenge for any startup. The company expects to double its 125-person workforce before the end of the year, and is working to put systems in place to incorporate those new people and serve all of those new customers.

The company plans to hire more people in sales and marketing, of course, but they will concentrate on customer support and building out partnership programs to get some help from systems integrators, ISVs and MSPs, who can do some of the customer hand-holding for them.

Apr 10, 2019
--

The right way to do AI in security

Artificial intelligence applied to information security can engender images of a benevolent Skynet, sagely analyzing more data than imaginable and making decisions at lightspeed, saving organizations from devastating attacks. In such a world, humans are barely needed to run security programs, their jobs largely automated out of existence, relegating them to a role as the button-pusher on particularly critical changes proposed by the otherwise omnipotent AI.

Such a vision is still in the realm of science fiction. AI in information security is more like an eager, callow puppy attempting to learn new tricks – minus the disappointment written on their faces when they consistently fail. No one’s job is in danger of being replaced by security AI; if anything, a larger staff is required to ensure security AI stays firmly leashed.

Arguably, AI’s highest use case currently is to add futuristic sheen to traditional security tools, rebranding timeworn approaches as trailblazing sorcery that will revolutionize enterprise cybersecurity as we know it. The current hype cycle for AI appears to be the roaring, ferocious crest at the end of a decade that began with bubbly excitement around the promise of “big data” in information security.

But what lies beneath the marketing gloss and quixotic lust for an AI revolution in security? How did AI ascend to supplant the lustrous zest around machine learning (“ML”) that dominated headlines in recent years? Where is there true potential to enrich information security strategy for the better – and where is it simply an entrancing distraction from more useful goals? And, naturally, how will attackers plot to circumvent security AI to continue their nefarious schemes?

How did AI grow out of this stony rubbish?

The year AI debuted as the “It Girl” in information security was 2017. The year prior, MIT completed a study showing “human-in-the-loop” AI outperformed AI and humans individually in attack detection. Likewise, DARPA conducted the Cyber Grand Challenge, a battle testing AI systems’ offensive and defensive capabilities. Until this point, security AI was imprisoned in the contrived halls of academia and government. Yet the history of two vendors exhibits how enthusiasm surrounding security AI was driven more by growth marketing than by user needs.
