Aug
21
2019
--

Box introduces Box Shield with increased security controls and threat protection

Box has always had to balance sharing content broadly with protecting it as it moves through the world, but the more you share, the more likely something is to go wrong, such as the misconfigured shared links that surfaced earlier this year. In an effort to make the system more secure, the company today announced Box Shield in beta, a set of tools to help employees sharing Box content better understand whom they are sharing with, while helping the security team see when content is being misused.

Link sharing is a natural part of what companies do with Box, and as Chief Product and Chief Strategy Officer Jeetu Patel says, you don’t want to change the way people use Box. Instead, he says, it’s his job to make that sharing easier to secure, and that is the goal with today’s announcement.

“We’ve introduced Box Shield, which embeds these content controls and protects the content in a way that doesn’t compromise user experience, while ensuring safety for the administrator and the company, so their intellectual property is protected,” Patel explained.

He says this involves two components. The first is about raising user awareness and helping them understand what they’re sharing. In fact, sometimes companies use Box as a content management backend to distribute files like documentation on the internet on purpose. They want them to be indexed in Google. Other times, however, it’s through misuse of the file-sharing component, and Box wants to fix that with this release by making it clear who they are sharing with and what that means.

Box has updated the experience on its web and mobile products to make it much clearer, through messaging and interface design, what the sharing level a user has chosen means. Of course, some users will ignore all these messages, so there is a second component that gives administrators more control.

Box Shield Smart Access: access controls (Photo: Box)

This involves helping customers build guardrails into the product to prevent leakage of an entire category of documents that you would never want leaked, like internal business plans, salary lists or financial documents, or even to granularly protect particular files or folders. “The second thing we’re trying to do is make sure that Box itself has some built-in security guardrails and boundary conditions that can help people reduce the risk around employee negligence or inadvertent disclosures, and then make sure that you have some very precision-based, granular security controls that can be applied to classifications that you’ve set on content,” he explained.

In addition, the company wants to help customers detect when employees are abusing content, perhaps sharing sensitive data like customer lists with a personal account, and flag these for the security team. This involves flagging anomalous downloads, suspicious sessions or unusual locations inside Box.

The tool also can work with existing security products already in place, so that whatever classification has been applied in Box travels with a file, and anomalies or misuse can be captured by the company’s security apparatus before the file leaves the company’s boundaries.

While Patel acknowledges there is no way to prevent user misuse or abuse in all cases, by implementing Box Shield, the company is attempting to provide customers with a set of tools to help them reduce the possibility of it going undetected. Box Shield is in private beta today and will be released in the fall.

Aug
06
2019
--

Slack makes some key security enhancements

As Slack makes its way deeper into the enterprise, it needs to layer on more sophisticated security measures like the encryption key management feature it released last year. Today, the company published a blog post outlining its latest security strategy, and while it still doesn’t include end-to-end encryption of Slack messaging, it is a big step forward.

For many companies, there is a minimum level of security they will require before they use a tool like Slack company-wide, and this is particularly true for regulated industries. Slack is trying to answer some of these concerns with today’s post.

As for end-to-end (E2E) encryption, Slack believes it would adversely affect the user experience and says there hasn’t been a lot of customer demand for it so far. “If we were to add E2E encryption, it would result in limited functionality in Slack. With EKM (encryption key management), you gain cryptographic controls, providing visibility and opportunity for key revocation with granularity, control and no sacrifice to user experience,” a Slack spokesperson told TechCrunch.

Today, the company provides the ability for admins to require Touch ID or Face ID or to enter a passcode on a mobile device. In addition, if a user reports a device stolen, admins can wipe Slack conversations remotely, although this is currently only available through an API.

Coming soon is a new administrative dashboard, where admins can manage all of this kind of security in a single place. They will even be able to detect if a person is using a jailbroken phone and shut down that phone’s access. In addition, they will be able to force upgrades by blocking access until the person downloads the latest version of Slack.

Later this year, admins will be able to block files downloaded from Slack desktop that come from outside of a set of pre-approved IP addresses. And on the mobile side, they will be able to force file links to open in an approved browser.

All of these features are designed to make administrators feel more comfortable using Slack in a secure and reliable way. One of Slack’s big strengths is its ability to integrate with other pieces of the enterprise software ecosystem, but companies still want control over what files are shared and how they open across devices. These new tools go a long way toward easing those types of concerns.

Aug
05
2019
--

Cybereason raises $200 million for its enterprise security platform

Cybereason, which uses machine learning to increase the number of endpoints a single analyst can manage across a network of distributed resources, has raised $200 million in new financing from SoftBank Group and its affiliates. 

It’s a sign of the belief that SoftBank has in the technology, since the Japanese investment firm is basically doubling down on commitments it made to the Boston-based company four years ago.

The company first came to our attention five years ago when it raised $25 million in financing from investors, including CRV, Spark Capital and Lockheed Martin.

Cybereason’s technology processes and analyzes data in real time across an organization’s daily operations and relationships. It looks for anomalies in behavior across nodes on networks and uses those anomalies to flag suspicious activity.

The company also provides reporting tools to inform customers of the root cause, the timeline, the person involved in the breach or breaches, which tools they use and what information was being disseminated within and outside of the organization.

For co-founder Lior Div, Cybereason’s work is the continuation of the six years of training and service he spent working with the Israeli army’s 8200 Unit, the military incubator for half of the security startups pitching their wares today. After his time in the military, Div worked for the Israeli government as a private contractor reverse-engineering hacking operations.

Over the last two years, Cybereason has expanded the scope of its service to a network that spans 6 million endpoints tracked by 500 employees, with offices in Boston, Tel Aviv, Tokyo and London.

“Cybereason’s big data analytics approach to mitigating cyber risk has fueled explosive expansion at the leading edge of the EDR domain, disrupting the EPP market. We are leading the wave, becoming the world’s most reliable and effective endpoint prevention and detection solution because of our technology, our people and our partners,” said Div, in a statement. “We help all security teams prevent more attacks, sooner, in ways that enable understanding and taking decisive action faster.”

The company said it will use the new funding to accelerate its sales and marketing efforts across all geographies and push further ahead with research and development to make more of its security operations autonomous.

“Today, there is a shortage of more than three million level 1-3 analysts,” said Yonatan Striem-Amit, chief technology officer and co-founder, Cybereason, in a statement. “The new autonomous SOC enables SOC teams of the future to harness technology where manual work is being relied on today and it will elevate L1 analysts to spend time on higher value tasks and accelerate the advanced analysis L3 analysts do.”

Most recently the company was behind the discovery of Operation SoftCell, the largest nation-state cyber espionage attack on telecommunications companies. 

That attack, which was either conducted by Chinese-backed actors or made to look like it was conducted by Chinese-backed actors, according to Cybereason, targeted a select group of users in an effort to acquire cell phone records.

As we wrote at the time:

… hackers have systematically broken in to more than 10 cell networks around the world to date over the past seven years to obtain massive amounts of call records — including times and dates of calls, and their cell-based locations — on at least 20 individuals.

Researchers at Boston-based Cybereason, who discovered the operation and shared their findings with TechCrunch, said the hackers could track the physical location of any customer of the hacked telcos — including spies and politicians — using the call records.

Lior Div, Cybereason’s co-founder and chief executive, told TechCrunch it’s “massive-scale” espionage.

Call detail records — or CDRs — are the crown jewels of any intelligence agency’s collection efforts. These call records are highly detailed metadata logs generated by a phone provider to connect calls and messages from one person to another. Although they don’t include the recordings of calls or the contents of messages, they can offer detailed insight into a person’s life. The National Security Agency has for years controversially collected the call records of Americans from cell providers like AT&T and Verizon (which owns TechCrunch), despite the questionable legality.

It’s not the first time that Cybereason has uncovered major security threats.

Back when it had just raised capital from CRV and Spark, Cybereason’s chief executive was touting its work with a defense contractor who’d been hacked. Again, the suspected culprit was the Chinese government.

As we reported, during one of the early product demos for a private defense contractor, Cybereason identified a full-blown attack by the Chinese — 10,000 usernames and passwords were leaked, and the attackers had access to nearly half of the organization on a daily basis.

The security breach was too sensitive to be shared with the press, but Div says that the FBI was involved and that the company had no indication that they were being hacked until Cybereason detected it.

Jul
30
2019
--

Network (Transport) Encryption for MongoDB

Why do I need Network encryption?

In our previous blog post, MongoDB Security vs. Five ‘Bad Guys’, there’s an overview of the five main areas of security functions.

Let’s say you’ve enabled #1 and #2 (Authentication and Authorization) plus #4 and #5 (Storage encryption a.k.a. encryption-at-rest, and Auditing) mentioned in the previous blog post. Only authenticated users will be connecting, and they will only be doing what they’re authorized to. With storage encryption configured properly, the database data can’t be decrypted even if the server’s disk is stolen or accidentally given away.

You will have some pretty tight database servers indeed. However, consider the following movement of user data over the network:

  • Clients sending updates to the database (to a mongos, or mongod if unsharded).
  • A mongos or mongod sending query results back to a client.
  • Between replica set members as they replicate to each other.
  • mongos nodes retrieving collection data from the shards before relaying it to the user.
  • Shards talking with each other when chunks of sharded collections are being migrated between them.

As it moves, the user collection data is no longer within the database ‘fortress’. It’s riding in plain, unencrypted TCP packets, and it can be grabbed off the wire with tcpdump etc. as shown here:

~$ #mongod primary is running on localhost:28051 in this example.
~$ #
~$ #In a different terminal I run: 
~$ #  mongo --port 28051 -u akira -p secret --quiet --eval 'db.getSiblingDB("payments").TheImportantCollection.findOne()'
~$ 
~$ sudo ngrep -d lo . 'port 28051'
interface: lo (127.0.0.0/255.0.0.0)
filter: ( port 28051 ) and ((ip || ip6) || (vlan && (ip || ip6)))
match: .
####
...
...
T 127.0.0.1:51632 -> 127.0.0.1:28051 [AP] #19
  ..........................find.....TheImportantCollection..filter.......lim
  it........?.singleBatch...lsid......id........-H..HN.n.`..}{..$clusterTime.
  X....clusterTime...../%9].signature.3....hash.........>.9...(.j. ..H4. .key
  Id.....fK.]...$db.....payments..                                           
#
T 127.0.0.1:28051 -> 127.0.0.1:51632 [AP] #20
  ....4................s....cursor......firstBatch......0......_id..........c
  ustomer.d....fn.....Smith..gn.....Ken..city.....Georgeville..street1.....1 
  Wishful St...postcode.....45031...order_ids.........id..........ns. ...paym
  ents.TheImportantCollection...ok........?.operationTime...../%9].$clusterTi
  me.X....clusterTime...../%9].signature.3....hash.........>.9...(.j. ..H4. .
  keyId.....fK.]...                                                          
#
T 127.0.0.1:51632 -> 127.0.0.1:28051 [AP] #21
  \....................G....endSessions.&....0......id........-H..HN.n.`..}{.
  ..$db.....admin..                                                          
#
T 127.0.0.1:28051 -> 127.0.0.1:51632 [AP] #22
  ....5.....................ok........?.operationTime...../%9].$clusterTime.X
  ....clusterTime...../%9].signature.3....hash.........>.9...(.j. ..H4. .keyI
  d.....fK.]...                                                              
###^Cexit

The key names and strings such as customer name and address are visible at a glance. This is proof that the TCP data isn’t encrypted. It is moving around in the plain. (You can use “mongoreplay monitor” if you want to see numeric and other non-ascii-string data in a fully human-readable way.)

(If you can unscramble the ascii soup above and see the whole BSON document in your head – great! But you failed the “I am not a robot” test so now you have to stop reading this web page.)

For comparison, this is what the same ngrep command prints when I change to using TLS in the client <-> database connection.

~$ #ngrep during: mongo --port 28051 --ssl --sslCAFile /data/tls_certs_etc/root.crt \
~$ #  --sslPEMKeyFile /data/tls_certs_etc/client.foo_app.pem -u akira -p secret --quiet \
~$ #  --eval 'db.getSiblingDB("payments").TheImportantCollection.findOne()'
~$ 
~$ sudo ngrep -d lo . 'port 28051'
interface: lo (127.0.0.0/255.0.0.0)
filter: ( port 28051 ) and ((ip || ip6) || (vlan && (ip || ip6)))
match: .
####
...
...
T 127.0.0.1:51612 -> 127.0.0.1:28051 [AP] #23
  .........5nYe.).I.M..H.T..j...r".4{.1\.....>...N.Vm.........C..m.V....7.nP.
  f..Z37......}..c?...$.......edN..Qj....$....O[Zb...[...v.....<s.T..m8..u.u3
  R.?....5;...$.F.h...]....@...uq....."..F.M(^.b.....cv.../............\.z..9
  hY........Bz.QEu...`z.W..O@...\.K..C.....N..=.......}.                     
#
T 127.0.0.1:28051 -> 127.0.0.1:51612 [AP] #24
  .....*......4...p.t...G5!.Od...e}.......b.dt.\.xo..^0g.F'.;""..a.....L....#
  DXR.H..)..b.3`.y.vB{@...../..;lOn.k.$7R.]?.M.!H..BC.7........8..k..Rl2.._..
  .pa..-.u...t..;7T8s. z4...Q.....+.Y.\B.............B`.R.(.........~@f..^{.s
  .....\}.D[&..?..m.j#jb.....*.a..`. J?".........Z...J.,....B6............M>.
  ....J....N.H.).!:...B.g2...lua.....5......L9.?.a3....~.G..:...........VB..v
  ........E..[f.S."+...W...A...3...0.G5^.                                    
#
T 127.0.0.1:51612 -> 127.0.0.1:28051 [AP] #25
  ....m..m.5...u...i.H|..L;...M..~#._.v.....7..e...7w.0.......[p..".E=...a.?.
  G{{TS&.........s\..).U.vwV.t...t..2.%..                                    
#
T 127.0.0.1:28051 -> 127.0.0.1:51612 [AP] #26
  .....?..u.*.j...^.LF]6...I.5..5...X.....?..IR(v.T..sX.`....>..Vos.v...z.3d.
  .z.(d.DFs..j.SIA.d]x..s{7..{....n.[n{z.'e'...r............\..#.<<.Y.5.K....
  .....[......[6.....2......[w.5....H                                        
###^Cexit

 

No more plain data to see! The high-loss ascii format being printed by ngrep above can’t provide genuine satisfaction that this is perfectly encrypted binary data, but I hope I’ve demonstrated a quick, useful way to do a ‘sanity check’ that you are using TLS and are not still sending data in the plain.

Note: I’ve used ngrep because I found it made the shortest example. If you prefer tcpdump you can capture the dump with tcpdump <interface filter> <bpf filter> -w <dump file>, then open with the Wireshark GUI or view it with tshark -r <dump file> -V on the command line. And for real satisfaction that the TLS traffic is cryptographically protected data, you can print the captured data in hexadecimal / binary format (as opposed to ‘ascii’) and run an entropy assessment on it.
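For example, using the loopback interface and port from the demo above (the capture file name is arbitrary):

~$ sudo tcpdump -i lo 'port 28051' -w mongo_capture.pcap
~$ # ... run the mongo shell query in another terminal, then stop tcpdump with Ctrl-C ...
~$ tshark -r mongo_capture.pcap -V | less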

What’s the risk, really?

It’s probably a very difficult task for a hypothetical spy who was targeting you 1-to-1 to find and occupy a place in your network where they can just read the TCP traffic as a man-in-the-middle. But wholesale network scanners, who don’t know or care who any target is beforehand, will find any place that happens to have a gate open on the day they were passing by.

The scrambled look of raw TCP data is no obstacle to them, even if it is to you, the DBA or server or application programmer. They’ve already scripted the unpacking of all the protocols. I assume the technical problem for blackhat hackers is more a big-data one (too much copied data to process). As an aside, I hypothesize that they are already pioneering a lot of edge-computing techniques.

It is true that data captured on the move between servers might be only a tiny fraction of the whole data. But if you are making backups by the dump method once a day, and the traffic between the database server and the backup store server is being listened to, then it would be the full database.

How to enable MongoDB network encryption

MongoDB traffic is not encrypted until you create a set of TLS/SSL certificates and keys, and apply them in the mongod and mongos configuration files of your entire cluster (or non-sharded replica set). If you are an experienced TLS/SSL admin, or you are a DBA who has been given a certificate and key set by security administrators elsewhere in your organization, then I think you will find enabling MongoDB’s TLS easy – just distribute the files, reference them in the net.ssl.* options, and stop and restart all the nodes. Gradually enabling without downtime takes longer but is still possible by using rolling restarts, changing net.ssl.mode from disabled -> allowSSL -> preferSSL -> requireSSL (doc link) at each restart.
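As a minimal sketch, the relevant config section might look like the following; the CA file path matches the earlier shell examples, and the server PEM file name is a placeholder:

net:
  ssl:
    mode: requireSSL                            # or allowSSL / preferSSL at intermediate rolling-restart steps
    PEMKeyFile: /data/tls_certs_etc/server.pem  # this node's certificate + private key (placeholder name)
    CAFile: /data/tls_certs_etc/root.crt        # root certificate that peer certificates chain to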

Conversely, if you are an experienced DBA and it will be your first time creating and distributing TLS certificates and keys, be prepared to spend some time learning about it first.

The way the certificates and PEM key files are created varies according to the following choices:

  • Whether you use an external certificate authority or make a new root certificate just for these MongoDB clusters
  • Whether you are using it just for the internal system authentication between mongod and mongos nodes, or are enabling TLS for clients too
  • How strict you will be making certificates (e.g. with host limitations)
  • Whether you need the ability to revoke certificates

To repeat the first point in this section: if you have a security administration team who already know and control these public key infrastructure (PKI) components – ask them for help, in the interests of saving time and being more certain you’re getting certificates that conform with internal policy.

Self-made test certificates

Percona Security Team note: This is not a best practice, even though it is in the documentation as a tutorial; we recommend you do not use this in production deployments.

So you want to get hands-on with TLS configuration of MongoDB a.s.a.p.? You’ll need certificates and PEM key files. Having the patience to fully master certificate administration would be a virtue, but you are not that virtuous. So you are going to use the existing tutorials (links below) to create self-signed certificates.

The quickest way to create certificates is (a condensed example follows this list):

  • Make a new root certificate
  • Generate server certificates (i.e. the ones the mongod and mongos nodes use for net.ssl.PEMKeyFile) from that root certificate
  • Generate client certificates from the new root certificate too
    • Skip setting CN / “subject” fields that limit the hosts or domains the client certificate can be used on
  • Self-sign those certificates
  • Skip making revocation certificates
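Condensed into OpenSSL commands, that flow might look like the sketch below; all subject fields and file names are placeholders (the O and CN values just echo the log example later in this post):

~$ # New root certificate (self-signed); no host restrictions, no revocation setup.
~$ openssl req -newkey rsa:4096 -nodes -x509 -days 365 \
     -subj "/C=JP/O=MongoTestCorp/CN=testRootCA" \
     -keyout root.key -out root.crt
~$ # Server certificate signed by that root (repeat similarly for a client certificate).
~$ openssl req -newkey rsa:4096 -nodes \
     -subj "/C=JP/O=MongoTestCorp/CN=svrA80v" \
     -keyout server.key -out server.csr
~$ openssl x509 -req -in server.csr -CA root.crt -CAkey root.key \
     -CAcreateserial -days 365 -out server.crt
~$ # The PEM key file mongod expects (net.ssl.PEMKeyFile) is key + certificate concatenated.
~$ cat server.key server.crt > server.pem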

The weaknesses in these certificates are:

  • A man in the middle attack is possible (MongoDB doc link):
    “MongoDB can use any valid TLS/SSL certificate issued by a certificate authority or a self-signed certificate. If you use a self-signed certificate, although the communications channel will be encrypted, there will be no validation of server identity. Although such a situation will prevent eavesdropping on the connection, it leaves you vulnerable to a man-in-the-middle attack. Using a certificate signed by a trusted certificate authority will permit MongoDB drivers to verify the server’s identity.”
  • What will happen if someone gets a copy of one of them?
    • If they get the client or a server certificate, they will be able to decrypt traffic or spoof being an SSL encrypting-and-decrypting network peer on the network edges to those nodes.
    • When using self-signed certificates, you distribute a copy of the root certificate with the server or client certificate to every mongod, mongos, and client app. I.e. it’s as likely to be misplaced or stolen as a single client or server certificate. With the root certificate, spoofing can be done on any edge in the network.
    • You can’t revoke a stolen client or server certificate and cut its holder off from further access. You’re stuck with it. You’ll have to completely replace all the server-side and client certificates, with cluster-wide downtime (at least for MongoDB < 4.2).

Examples on how to make self-signed certificates:

  • This snippet from MongoDB’s Kevin Adistimba is the most concise I’ve seen.
  • This replica set setup tutorial from Percona’s Corrado Pandiani includes similar instructions, with more MongoDB context on the page.

Reference in the MongoDB docs:

  • Various configuration file examples
  • Three detailed appendix entries on how to make OpenSSL Certificates for Testing

Troubleshooting

I like the brevity of the SSL Troubleshooting page in Gabriel Ciciliani’s MongoDB administration cool tips presentation from Percona Live Europe ’18. Speaking from my own experience: before enabling them in the MongoDB config, it’s crucial to make sure the PEM files (both server and client ones) pass the ‘openssl verify’ test command against the root / CA certificate they’re derived from. Absolutely, 100% do this before trying to use them in your mongodb config.
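For example, with the file names from the ngrep demo earlier (the output you want is “OK”):

~$ openssl verify -CAfile /data/tls_certs_etc/root.crt /data/tls_certs_etc/client.foo_app.pem
/data/tls_certs_etc/client.foo_app.pem: OK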

If “openssl verify”-confirmed certificates still produce a MongoDB replica set or cluster you can’t connect to, then add the --sslAllowInvalidHostnames option when connecting with the mongo shell, and/or net.ssl.allowInvalidHostnames in the mongod/mongos configuration. This is a differential diagnosis to see if the hostname requirements of the certificates are the only thing causing the SSL rules to reject them.
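For example, extending the mongo shell command from the ngrep demo earlier (the hostname here is a placeholder):

~$ mongo --host svrA80v.example.net --port 28051 --ssl \
     --sslCAFile /data/tls_certs_etc/root.crt \
     --sslPEMKeyFile /data/tls_certs_etc/client.foo_app.pem \
     --sslAllowInvalidHostnames -u akira -p secret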

If you find it takes --sslAllowInvalidHostnames to make it work, it means the CN subject field and/or SAN fields in the certificate need to be edited until they match the hostnames and domains that the SSL lib identifies the hosts as. Don’t be tempted to just conveniently forget about it; disabling hostname verification is a gap that might be leveraged into a man-in-the-middle attack.

If you are still experiencing trouble, my next step would be to check the mongod logs. You will find lines matching the grep expression ‘NETWORK .*SSL’ in the log if there are rejections. (This might become “TLS” in later versions.) E.g.

2019-07-25T16:34:49.981+0900 I NETWORK  [conn11] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:33456 (connection id: 11)

You might also try grepping for '[EW] NETWORK' to look for all network errors and warnings.
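Concretely, assuming a common log file location (yours is wherever systemLog.path points):

~$ grep 'NETWORK .*SSL' /var/log/mongodb/mongod.log
~$ grep '[EW] NETWORK' /var/log/mongodb/mongod.log   # all network errors and warnings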

For SSL there is no need to raise the logging verbosity to see errors and warnings; from what I can see in ssl_manager_openssl.cpp, those all come at the default log verbosity of 0. Only if you want to confirm normal, successful connections would I advise briefly raising log verbosity in the config file to level 2 for the exact log ‘component’ (in this case “network”). (Don’t forget to turn it off soon after – forgetting you set log level 2 is a great way to fill your disk.) But for this topic, the only thing I think log level 2 will add is “Accepted TLS connection from peer” confirmations like the following.

2019-07-25T16:29:41.779+0900 D NETWORK  [conn18] Accepted TLS connection from peer: emailAddress=akira.kurogane@nowhere.com,CN=svrA80v,OU=testmongocluster,O=MongoTestCorp,L=Tokyo,ST=Tokyo,C=JP

Take a peek in the code

Certificate acceptance rules are a big topic and I am not the author to cover it. But take a look at the SSLManagerOpenSSL::parseAndValidatePeerCertificate(…) function in ssl_manager_openssl.cpp as a starting point if you’d like to be a bit more familiar with MongoDB’s application.

Jul
30
2019
--

Confluera snags $9M Series A to help stop cyberattacks in real time

Just yesterday, we experienced yet another major breach when Capital One announced it had been hacked and years of credit card application information had been stolen. Another day, another hack, but the question is how companies can protect themselves in the face of an onslaught of attacks. Confluera, a Palo Alto startup, wants to help with a new tool that purports to stop these kinds of attacks in real time.

Today the company, which launched last year, announced a $9 million Series A investment led by Lightspeed Venture Partners. It also has the backing of several influential technology execs, including John W. Thompson, who is chairman of Microsoft and former CEO at Symantec; Frank Slootman, CEO at Snowflake and formerly CEO at ServiceNow; and Lane Bess, former CEO of Palo Alto Networks.

What has attracted this interest is the company’s approach to cybersecurity. “Confluera is a real-time cybersecurity company. We are delivering the industry’s first platform to deterministically stop cyberattacks in real time,” company co-founder and CEO Abhijit Ghosh told TechCrunch.

To do that, Ghosh says, his company’s solution watches across the customer’s infrastructure, finds issues and recommends ways to mitigate the attack. “We see the problem that there are too many solutions which have been used. What is required is a platform that has visibility across the infrastructure, and uses security information from multiple sources to make that determination of where the attacker currently is and how to mitigate that,” he explained.

Microsoft chairman John Thompson, who is also an investor, says this is more than just real-time detection or real-time remediation. “It’s not just the audit trail and telling them what to do. It’s more importantly blocking the attack in real time. And that’s the unique nature of this platform, that you’re able to use the insight that comes from the science of the data to really block the attacks in real time.”

It’s early days for Confluera, as it has 19 employees and three customers using the platform so far. For starters, it will be officially launching next week at Black Hat. After that, it has to continue building out the product and prove that it can work as described to stop the types of attacks we see on a regular basis.

Jul
24
2019
--

Duo’s Wendy Nather to talk security at TC Sessions: Enterprise

When it comes to enterprise security, how do you move fast without breaking things?

Enter Duo’s Wendy Nather, who will join us at TC Sessions: Enterprise in San Francisco on September 5, where we will get the inside track on how to keep enterprise networks secure without slowing growth.

Nather is head of advisory CISOs at Duo Security, a Cisco company, and one of the most respected and trusted voices in the cybersecurity community as a regular speaker on a range of topics, from threat intelligence to risk analysis, incident response, data security and privacy issues.

Prior to her role at Duo, she was the research director at the Retail ISAC, and served as the research director of the Information Security Practice at independent analyst firm 451 Research.

She also led IT security for the EMEA region of the investment banking division of Swiss Bank Corporation — now UBS.

Nather also co-authored “The Cloud Security Rules,” and was listed as one of SC Magazine’s Women in IT Security “Power Players” in 2014.

We’re excited to have Nather discuss some of the challenges startups and enterprises face in security — threats from both inside and outside the firewall. Companies large and small face similar challenges, from keeping data in to keeping hackers out. How do companies navigate the litany of issues and threats without hampering growth?

Who else will we have onstage, you ask? Good question! We’ll be joined by some of the biggest names and the smartest and most prescient people in the industry, including Bill McDermott at SAP, Scott Farquhar at Atlassian, Julie Larson-Green at Qualtrics, Aaron Levie at Box and Andrew Ng at Landing AI and many, many more. See the whole agenda right here.

Early-bird tickets are on sale right now! For just $249 you can see Nather and these other awesome speakers live at TC Sessions: Enterprise. But hurry, early-bird sales end on August 9; after that, prices jump up by $100. Book here.

If you’re a student on a budget, don’t worry, we’ve got a super-reduced ticket for just $75 when you apply for a student ticket right here.

Enterprise-focused startups can bring the whole crew when you book a Startup Demo table for just $2,000. Each table gives you a primo location to be seen by attendees, investors and other sponsors, in addition to four tickets to enjoy the show. We only have a limited number of demo tables and we will sell out. Book yours here.

Jul
22
2019
--

Serverless, Inc. expands free Framework to include monitoring and security

Serverless development has largely been a lonely pursuit until recently, but Serverless, Inc. has been offering a free framework for intrepid programmers since 2015. At first, that involved development, deployment and testing, but today the company announced it is expanding into monitoring and security to make it an end-to-end tool — and it’s available for free.

Serverless computing isn’t actually server-free, but it’s a form of computing that provides a way to use only the computing resources you need to carry out a given function — and no more. When the process is complete, the resources effectively go away. That has the potential to be more cost-effective than having a server that’s always on, regardless of whether you’re using it or not. That requires a new way of thinking about how developers write code.

While serverless offers a compelling value proposition, up until Serverless, Inc. came along with some developer tooling, early adherents were pretty much stuck building their own tooling to develop, deploy and test their programs. Today’s announcement expands the earlier free Serverless, Inc. Framework to provide a more complete set of serverless developer tools.

Company founder and CEO Austen Collins says that he has been thinking a lot about what developers need to develop and deploy serverless programs, and talking to customers. He says that they really craved a more integrated approach to serverless development than has been available until now.

“What we’re trying to do is build this perfectly integrated solution for developers and developer teams because we want to enable them to innovate as much as possible and be as autonomous as possible,” Collins told TechCrunch. He says at the same time, he recognizes that operations need to connect to other tools, and the Serverless Framework provides hooks into other systems, as well.


The new tooling includes an integrated environment, so that once you deploy, you can simply click an error or security event and drill down to a dashboard for more information about the issue. You can click for further detail to see the exact spot in the code where the issue occurred, which should make it easier to resolve more quickly.

While no tool is 100% comprehensive, and most large organizations, and even individual developers, will have a set of tools they prefer to use, this is an attempt to build a one-stop solution for serverless developers for the first time. That in itself is significant, as serverless moves beyond early adopters and begins to become more of a mainstream kind of programming and deployment option. People starting now probably won’t want to cobble together their own toolkits, and the Serverless, Inc. Framework gives them a good starting point.

Serverless, Inc. was founded by Collins in 2015 out of a need for serverless computing tooling. The company has raised more than $13.5 million since inception.

Jul
18
2019
--

InCountry raises $15M for its cloud-based private data storage-as-a-service solution

The rise of data breaches, along with an expanding raft of regulations (now numbering 80 different regional regimes, and growing) have thrust data protection — having legal and compliant ways of handling personal user information — to the top of the list of things that an organization needs to consider when building and operating their businesses. Now a startup called InCountry, which is building both the infrastructure for these companies to securely store that personal data in each jurisdiction, as well as a comprehensive policy framework for them to follow, has raised a Series A of $15 million. The funding is coming in just three months after closing its seed round — underscoring both the attention this area is getting and the opportunity ahead.

The funding is being led by three investors: Arbor Ventures of Singapore, Global Founders Capital of Berlin and Mubadala of Abu Dhabi. Previous investors Caffeinated Capital, Felicis Ventures, Charles River Ventures and Team Builder Ventures (along with others that are not being named) also participated. It brings the total raised to date to $21 million.

Peter Yared, the CEO and founder, pointed out in an interview the geographic diversity of the three lead backers: he described this as a strategic investment, which has resulted from InCountry already expanding its work in each region. (As one example, he pointed out a new law in the UAE requiring all health data of its citizens to be stored in the country — regardless of where it originated.)

As a result, the startup will be opening offices in each of the regions and launching a new product, InCountry Border, to focus on encryption and data handling that keep data inside specific jurisdictions. This will sit alongside the company’s compliance consultancy as well as its infrastructure business.

“We’re only 28 people and only six months old,” Yared said. “But the proposition we offer — requiring no code changes, but allowing companies to automatically pull out and store the personally identifiable information in a separate place, without anything needed on their own back end, has been a strong pull. We’re flabbergasted with the meetings we’ve been getting.” (The alternative, of companies storing this information themselves, has become massively unpalatable, given all the data breaches we’ve seen, he pointed out.)

In part because of the nature of data protection, in its short six months of life, InCountry has already come out of the gates with a global viewpoint and global remit.

It’s already active in 65 countries — which means it’s already equipped to store, process and regulate profile data in the country of origin in these markets — but that is actually just the tip of the iceberg. The company points out that more than 80 countries around the world have data sovereignty regulations, and that in the U.S., some 25 states already have data privacy laws. Violating these can have disastrous consequences for a company’s reputation, not to mention its bottom line: In Europe, the U.K. data regulator is now fining companies the equivalent of hundreds of millions of dollars when they violate GDPR rules.

This ironically is translating into a big business opportunity for startups that are building technology to help companies cope with this. Just last week, OneTrust raised a $200 million Series A to continue building out its technology and business funnel — the company is a “gateway” specialist, building the welcome screens that you encounter when you visit sites to accept or reject a set of cookies and other data requests.

Yared says that while InCountry is very young and is still working on its channel strategy — it’s mainly working directly with companies at this point — there is a clear opportunity both to partner with others within the ecosystem as well as integrators and others working on cloud services and security to build bigger customer networks.

That speaks to the complexity of the issue, and the different entry points that exist to solve it.

“The rapidly evolving and complex global regulatory landscape in our technology driven world is a growing challenge for companies,” said Melissa Guzy of Arbor Ventures, in a statement. Guzy is joining the board with this round. “InCountry is the first to provide a comprehensive solution in the cloud that enables companies to operate globally and address data sovereignty. We’re thrilled to partner and support the company’s mission to enable global data compliance for international businesses.”

Jul
17
2019
--

Dust Identity secures $10M Series A to identify objects with diamond dust

The idea behind Dust Identity was originally born in an MIT lab where the founders developed the base technology for uniquely identifying objects using diamond dust. Since then, the startup has been working to create a commercial application for the advanced technology, and today it announced a $10 million Series A round led by Kleiner Perkins, which also led its $2.3 million seed round last year.

Airbus Ventures and Lockheed Martin Ventures, New Science Ventures, Angular Ventures and Castle Island Ventures also participated in the round. Today’s investment brings the total raised to $12.3 million.

The company has an unusual idea of applying a thin layer of diamond dust to an object with the goal of proving that that object has not been tampered with. While using diamond dust may sound expensive, the company told TechCrunch last year at the time of its seed round funding that it uses low-cost industrial diamond waste, rather than the expensive variety you find in jewelry stores.

As CEO and co-founder Ophir Gaathon told TechCrunch last year, “Once the diamonds fall on the surface of a polymer epoxy, and that polymer cures, the diamonds are fixed in their position, fixed in their orientation, and it’s actually the orientation of those diamonds that we developed a technology that allows us to read those angles very quickly.”

Ilya Fushman, who is leading the investment for Kleiner, says the company is offering a unique approach to identity and security for objects. “At a time when there is a growing trust gap between manufacturers and suppliers, Dust Identity’s diamond particle tag provides a better solution for product authentication and supply chain security than existing technologies,” he said in a statement.

The presence of strategic investors Airbus and Lockheed Martin shows that big industrial companies see a need for advanced technology like this in the supply chain. It’s worth noting that the company partnered with enterprise computing giant SAP last year to provide a blockchain interface for physical objects, where they store the Dust Identity identifier on the blockchain. Although the startup has a relationship with SAP, it remains blockchain agnostic, according to a company spokesperson.

While it’s still early days for the company, it has attracted attention from a broad range of investors and intends to use the funding to continue building and expanding the product in the coming year. To this point, it has implemented pilot programs and early deployments across a range of industries, including automotive, luxury goods, cosmetics and oil, gas and utilities.

Jul
12
2019
--

MongoDB Security vs. Five ‘Bad Guys’

Most any commercially mature DBMS provides the following five ways to secure the data you keep inside it:

  • Authentication of user connections (== Identity)
  • Authorization (== DB command permissions) (a.k.a. Role-based access control)
  • Network Encryption (a.k.a. Transport encryption)
  • Storage Encryption (a.k.a. Encryption-at-rest)
  • Auditing (MongoDB Enterprise or Percona Server for MongoDB only)

MongoDB is no exception. All of these have been present for quite a while, although infamously the first versions set “--auth” off by default, and this is still in effect. (See more in the “Auth – Still disabled by default” section later.)

This article is an overview of all five, plus some important clarification to sort out Authentication vs. Authorization. Network encryption, storage encryption, and auditing will be expanded on in other articles.

MongoDB Security: The bad guy line-up

So what exactly do the five security subsystems do? Where does the responsibility and purpose of one stop, and the next start?

I think the easiest way to explain it is to highlight the ‘bad guy’ each one repels.

  • Authentication — An unknown person, whom you didn’t realize had network access to the database server, who just ‘walks in’ and looks at, copies, or damages the database data.
  • Authorization (a.k.a. Access control) — A user or application that reads, alters, or deletes data other than what they were supposed to. This ‘bad guy’ is usually a colleague who does it by accident, so it’s mostly for safety rather than security, but it prevents malicious cases too.
  • Network Encryption — Someone who takes a copy of the data being transferred over a network link somewhere between server A and server B.
  • Storage Encryption — Someone who breaks into your datacenter and steals your server’s hard disk so they can read the data files on it. In practice, they would probably steal the file data over the network or get the disk in a second-hand hardware sale, but the concept is still someone who obtains a copy of the underlying database files.
  • Auditing — A privileged database user who knows how to cover up their tracks after altering database data. An important caveat – all bets are off if a unix account with the privilege to overwrite the audit log is controlled by the db-abusing adversary.


Authentication and authorization must be activated in unison, and auditing requires authentication as a prerequisite, but otherwise they can be used independently of each other, and there is little if any entanglement of code between one of the subsystems above and another.

Apart from the caveats in the paragraph above, if you believe that certain bad guys are not a problem for you, then you don’t have to use the relevant subsystem.

Which ones should I use, must I use?

Excluding those who are building a public sandpit or honey-trap, no-one can say the “Authentication” bad guy or the “Authorization” bad guy would be an acceptable visitor to their database. So you should at least be using Authorization and Authentication.

Network encryption is almost a no-brainer for me too, but I assume insecure networks as a matter of course. If your MongoDB cluster and all its clients are, for example, inside a virtual private network that you believe has no firewall holes and no privilege escalation risk from other apps then no, you don’t need network encryption.

But if you are a network security expert who is good enough to guarantee your VPN is risk-free, I assume you’re also skilled at TLS/SSL certificate generation. In this case, you will find that setting up TLS/SSL in MongoDB is pretty easy, so why not do it too?

Storage encryption (a.k.a. encryption-at-rest) and Auditing are only high value when certain other risks are eliminated (e.g. unix root access can’t be gained by an attacker on the servers running the live mongod nodes). Storage encryption has a slight performance cost. If Auditing is used suitably, there is not much performance cost, but beware that performance will quickly choke if audit filters are made too broad.

Having said that, it is worth repeating that these last two are high value to some users. If you know you are one of those, please refer to our later articles on them.

Where in the config?

Although it’s all security, it isn’t configured all in the same place.

  • Authentication — security.authorization (and/or …keyfile or …clusterAuthMode, which imply/force it too). security.sasl and security.ldap are sub-sections for those optional authentication methods.
  • Authorization — enabled simultaneously with “Authentication” above. Users and roles are not in config files – they are stored within the db itself, typically in the admin db’s system.users and system.roles collections.
  • Network encryption — net.ssl (n.b. not inside the security.* section of the config file).
  • Storage encryption — security.enableEncryption.
  • Auditing — the auditLog section of the config file, especially the auditLog.filter option.

The above config pointers are just the root positions; in total there are dozens of different settings underneath them.
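To see them side by side, here is a minimal sketch of a config file touching each area. The option names are real, but every file path and the audit filter are placeholders, and storage encryption and auditing require MongoDB Enterprise or Percona Server for MongoDB:

security:
  authorization: enabled              # authentication + authorization (the "--auth" equivalent)
  keyFile: /etc/mongodb/keyfile       # internal authentication between cluster nodes
  enableEncryption: true              # storage encryption (encryption-at-rest)
  encryptionKeyFile: /etc/mongodb/encryption-keyfile
net:
  ssl:                                # network encryption; n.b. not under security.*
    mode: requireSSL
    PEMKeyFile: /etc/mongodb/server.pem
    CAFile: /etc/mongodb/root.crt
auditLog:
  destination: file
  format: JSON
  path: /var/log/mongodb/audit.json
  filter: '{ atype: "authenticate" }'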

Authentication and Authorization

Question: “These Authxxxx and Authyyyy words … the same thing right?”

The Answer: 1) No, 2) Yes, 3) Yes, 4) No.

  1. No: Authentication and authorization are not the same things because they are two parts of the software that do different things

Authentication  == User Identity, by means of credential checking.

Authorization == Assigning and enforcing DB object and DB command permissions.

  2. Yes: Authentication and authorization are kind of a single unit because enabling Authentication automatically enables Authorization too.

I assume it was made like this because it matches user expectations from other, older databases, and besides, why authenticate if you don’t want to stop unknown users from accessing or changing data? Authorization is enabled in unison with authentication, so connections from unknown users will have no privilege to do anything with database data.

Authorization requires the user name (verified by Authentication) to know which privileges apply to a connection’s requests, so it can’t be enabled independently of the other either.

  3. Yes: Authentication and authorization are sort of the same thing in unfortunate, legacy naming of configuration options.

The command-line argument for enabling authentication (which forces authorization to be on too) is simply “--auth”. Even worse, the configuration file option name for the same thing is security.authorization rather than security.authentication. When you use it, though, the first thing being enabled is Authentication, and Authorization is only enabled as an after-effect.

  4. No: There is one exception to the ‘Authentication and authorization on together’ rule: during initial setup, Authentication is disabled for localhost connections. This is brief though – you get one opportunity to create the first user, then the exception privilege is dropped.

Another exception is when a 3.4+ replica set or cluster uses security.transitionToAuth, but its point is obvious and I won’t expand on it here.

Auth – Still disabled by default

MongoDB’s first versions set “--auth” off by default. This has been widely regarded as a bad move.

You might think that by now (mid-2019, v4.0) it would be on by default – but you’d be wrong. Blank configuration still equates to authorization being off, albeit with startup warnings (on and off again in v3.2, on again v3.4) and various exposure reductions such as localhost becoming the only default-bound network device in v3.6.

Feeling nervous about your MongoDB instances now? If the mongod config files do not have security.authorization set to “enabled”, nor include a security.keyfile or security.clusterAuthMode setting which forces it on, then you are not using authentication. You can try this quick mongo shell one-liner (with no user credential arguments set) to double-check whether you have authentication and authorization enabled:

mongo --host <target_host>:<port> --quiet --eval 'db.adminCommand({listDatabases: 1})'

The response you want to see is an “unauthorized” error. If you get a list of the database names on the other hand, sorry, you have a naked MongoDB deployment.

Note: Stick with the “listDatabases” example above for simplicity. There are some commands such as ismaster that don’t require authorization at any time. If you use those, you aren’t proving or disproving anything about auth mode.

External Authentication

As most people intuitively expect, this is about allowing users to be authenticated by an external service. As one exception, it can’t be used for the internal mongodb __system user, but using it for any real human user or client-application service account is perfectly suitable. In concrete terms, the external auth service will be a Kerberos KDC, or an ActiveDirectory or OpenLDAP server.

Using external authentication doesn’t prevent you from having ordinary MongoDB user accounts at the same time. A common situation is that one DBA user was created in the normal way for setup and maintenance reasons (i.e. with db.createUser(…) in the “admin” db), but every other account is managed centrally in Kerberos or LDAP.
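For illustration, that one locally-stored DBA user might be created like this in the mongo shell (the user name, password, and role choice are placeholders):

use admin
db.createUser({
  user: "dba_maint",               // placeholder name
  pwd: "useAStrongPasswordHere",   // placeholder password
  roles: [ { role: "root", db: "admin" } ]
})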

Internal Authentication

Confusingly MongoDB “internal authentication” doesn’t mean the opposite of the external authentication discussed just above.

It would have been better named ‘peer mongodb node authentication as the __system user’. A mongod node running with authentication enabled won’t trust that any TCP peer is another mongod or mongos node just because it talks like one. Rather, it requires that the peer authenticate by proof of a shared secret.

Keyfile Internal Authentication (Default)

In the basic case, the shared secret is a keyfile saved in an identical copy distributed to each mongod and mongos node in the cluster. “Key” suggests an asymmetric encryption key, but in reality it is just a password, even if you generated it from /dev/random, etc., per the documentation’s advice.

Once the password is used successfully, a mongod node will permit commands coming from the authenticated peer to run as the “__system” superuser.

Unfunny fact: if someone has a copy of the keyfile, they can simply strip control and non-printing chars from it to make the password string that will let them connect as the “__system” user:

mongo --authenticationDatabase local -u __system -p "$(tr -d '\011-\015\040' < /path/to/keyfile)"

Don’t panic if you try this right now as the mongod (or root) user on one of your MongoDB servers and it succeeds. These unix users already have the permissions to disable security and restart nodes anyway. It is not an extra vulnerability if they can do it. There won’t be accidental read-privilege leaking either – mongod will abort on startup if the keyfile is in anything other than 400 (or 600) file permissions mode.

It is, however, a security failure if users who aren’t DBAs (or the server admins with root) are able to read a copy of the keyfile. This can happen by accidentally saving the keyfile in world-readable source control, or putting it in deployment ‘recipes’. An intermediate risk increase is when the keyfile is distributed with mongos nodes owned and run as one of the application team’s unix users instead of “mongod” or another DBA-team-owned unix user.

X.509 Internal Authentication

The x.509 authentication mechanism does actually use asymmetric public/private keys, unlike the “security.keyfile” above. It must be used in conjunction with TLS/SSL.

It can be used for client connections as well as internal authentication. Information regarding x.509 authentication is spread over two places in the documentation as a result.

The benefit of x.509 is that, compared to the really-just-a-big-password ‘keyfile’ above, it is less likely that one of the keys deployed with mongod and mongos nodes can be abused by an attacker who gets a copy of it. It depends on how strictly the x.509 certificates are set up, however. To be practical: if you do not have a dedicated security team that understands x.509 concepts and best practices, and takes on the administrative responsibility for them, you won’t be getting the best out of x.509. These better practices include tightening down which hosts a certificate will work on, and being able to revoke and roll over certificates.

Following up

That’s the end of the ‘five bad guys’ overview of MongoDB security, with some clarification about Authentication and Authorization thrown in for good measure.

To give them the space they deserve, the following two subsystems will be covered in later articles:

  • Network encryption
  • Storage encryption (a.k.a. disk encryption or encryption-at-rest)

For Auditing please see this earlier Percona blog MongoDB Audit Log.

