Oct 12, 2018
--

Track PostgreSQL Row Changes Using Public/Private Key Signing

PostgreSQL encryption and authorisation

Authorisations and encryption/decryption within a database system establish the basic guidelines for protecting your database by guarding against malicious structural or data changes.

What are authorisations?

Authorisations are the access privileges that control what a user can and cannot do on the database server for one or more databases. Think of them as a five-star hotel smart card: it gives you access to all the facilities meant for you, but doesn’t let you open every door. Privileged staff, meanwhile, carry master keys that open any door.

Similarly, in the database world, granting permissions secures the system by allowing specific actions by specific users or user groups, while still letting the database administrator perform whatever actions he/she wishes. PostgreSQL provides user management: you can create users, and grant and revoke their privileges.
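As a minimal sketch of how that looks in SQL (the role name, password, and table here are hypothetical, not from this article):

```sql
-- Create a login role that may read the table but not change it.
CREATE ROLE reporter LOGIN PASSWORD 'change-me';
GRANT SELECT ON student_marks TO reporter;
-- Make sure the role holds no modification privileges.
REVOKE INSERT, UPDATE, DELETE ON student_marks FROM reporter;
```

Connected as reporter, SELECT queries succeed while an UPDATE fails with a permission-denied error.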

Encryption

Encryption and decryption can protect your data, obfuscate schema structure, and help hide code from prying eyes. Encryption hides the valuable information and ensures that there are no mischievous changes in the code or data that may be considered harmful. In almost all cases, data encryption and decryption happen on the database server. This is like hiding your stuff somewhere in your room so that nobody else can see it, at the cost of making it harder to access.

PostgreSQL also provides encryption through pgcrypto (a PostgreSQL extension). There are cases, though, where you don’t want to hide the data but don’t want people to update it either; for those, you can revoke the privileges to modify the data.
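As a brief sketch of the pgcrypto route (the table, payload, and passphrase below are made up for illustration), symmetric encryption and decryption happen server-side in SQL:

```sql
-- Enable pgcrypto, then encrypt on write and decrypt on read.
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE TABLE secrets (id INTEGER, payload BYTEA);
INSERT INTO secrets VALUES (1, pgp_sym_encrypt('top secret', 'passphrase'));
SELECT pgp_sym_decrypt(payload, 'passphrase') FROM secrets WHERE id = 1;
```

Note that because the passphrase travels to the server, this protects data at rest but not from a privileged server-side observer.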

Data modifications

But what if an admin user modifies the data? How can you tell that the data has changed? If somebody changes the data and you don’t know about it, that is more dangerous than losing the data outright, because you are relying on data which may no longer be valid.

Logs in database systems allow us to trace changes back and “potentially” identify what was changed, unless those logs are removed by the administrator.

Now consider if you could leave your stuff out in the open in your room and still, in case of any changes, tell that something had been tampered with. In database terms, that translates to data without encryption, but with your very own signature. One option is to add a column to your database table which keeps a checksum for the data, generated on the client side using the user’s own private key. Any change in the data means the checksum no longer matches, so one can easily identify that the data has changed. The signing happens on the client side, ensuring that only users with the required private key can insert the data, while anyone with the public key can validate it.

Public/Private Keys

An asymmetric cryptographic system uses pairs of keys: public keys and private keys. A private key is known only to its owner and is used for signing or decrypting data. Public keys are shared with other stakeholders, who may use them to encrypt messages for the owner or to validate messages the owner has signed.

Generate Private / Public Key

Private Key

$ openssl genrsa -aes128 -passout pass:password -out key.private.pem 2048
Generating RSA private key, 2048 bit long modulus

Public Key

$ openssl rsa -in key.private.pem -passin pass:password -pubout -out key.public.pem
writing RSA key

Signing Data

Create a sample table tbl_marks and insert a sample row into it. We’ll need an additional column to hold the signature, which understandably increases the table size.

postgres=# CREATE TABLE tbl_marks (id INTEGER, name TEXT, marks INTEGER, hash TEXT);

Let’s add a row that we’d like to validate.

postgres=# INSERT INTO tbl_marks VALUES(1, 'Alice', 80);

We will select the data and store the result into psql variables using the \gset command (https://www.postgresql.org/docs/current/static/app-psql.html). The complete row will be saved into the “row” psql variable.

postgres=# SELECT row(id,name,marks) FROM tbl_marks WHERE id = 1;
     row   
---------------
(1,Alice,80)
(1 row)
postgres=# \gset
postgres=# SELECT :'row' as row;
     row   
---------------
(1,Alice,80)
(1 row)

Now let’s generate a signature for the data stored in the “row” variable.

postgres=# \set sign_command `echo :'row' | openssl dgst -sha256 -sign key.private.pem | openssl base64 | tr -d '\n' | tr -d '\r'`
Enter pass phrase for key.private.pem:

The signed hash is stored in the “sign_command” psql variable. Let’s now add it to the data row in the tbl_marks table.

postgres=# UPDATE tbl_marks SET hash = :'sign_command' WHERE id = 1;
UPDATE 1

Validating Data

Our data row now contains data with a valid signature. Let’s try to validate it. We are going to select our data into the “row” psql variable and the signature hash into the “hash” psql variable.

postgres=# SELECT row(id,name,marks), hash FROM tbl_marks;
     row      |                      hash
--------------+-------------------------------------------------
 (1,Alice,80) | U23g3RwaZmbeZpYPmwezP5xvbIs8ILupW7jtrat8ixA ...
(1 row)
postgres=# \gset

Let’s now validate the data using a public key.

postgres=# \set verify_command `echo :'hash' | awk '{gsub(/.{65}/,"&\n")}1' | openssl base64 -d -out v && echo :'row' | openssl dgst -sha256 -verify key.public.pem -signature v`
postgres=# select :'verify_command' as verify;
  verify    
-------------
Verified OK
(1 row)

Perfect! The data is validated, and all of this happened on the client side. Now imagine somebody doesn’t like that Alice got 80 marks and decides to reduce them to 30. Nobody would know whether the teacher gave Alice 80 or 30 unless somebody goes and checks the database logs. We’ll give Alice 30 marks now.

postgres=# UPDATE tbl_marks SET marks = 30 WHERE id = 1;
UPDATE 1

The school admin now decides to check that all data is correct before giving out the final results. The school admin has the teacher’s public key and tries to validate the data.

postgres=# SELECT row(id,name,marks), hash FROM tbl_marks;
     row      |                        hash
--------------+----------------------------------------------------
 (1,Alice,30) | yO20vyPRPR+HgW9D2nMSQstRgyGmCxyS9bVVrJ8tC7nh18iYc...
(1 row)
postgres=# \gset

postgres=# \set verify_command `echo :'hash' | awk '{gsub(/.{65}/,"&\n")}1' | openssl base64 -d -out v && echo :'row' | openssl dgst -sha256 -verify key.public.pem -signature v`
postgres=# SELECT :'verify_command' AS verify;
        verify
----------------------
 Verification Failure
(1 row)

As expected, the validation fails. Nobody other than the teacher had the private key to sign that data, and any tampering is easily identifiable.

This might not be the most efficient way of securing a dataset, but it is definitely an option if you want to keep the data unencrypted yet easily detect any unauthorised changes. All the signing and verification load is shifted to the client side, reducing load on the server. It allows only users with the private key to update the data, and anybody with the associated public key to validate it.

The example used psql as the client application for signing, but you can do this from any client that can call the required openssl functions, or use the openssl binaries directly for signing and verification.
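For instance, the whole sign-and-verify round trip can be sketched with the openssl binaries alone, no database involved (this uses a throwaway key without a passphrase, unlike the key generated earlier):

```shell
# Generate a throwaway RSA key pair (no passphrase, for brevity).
openssl genrsa -out key.private.pem 2048 2>/dev/null
openssl rsa -in key.private.pem -pubout -out key.public.pem 2>/dev/null

# Sign the textual form of the row and base64-encode the signature.
row="(1,Alice,80)"
printf '%s\n' "$row" | openssl dgst -sha256 -sign key.private.pem | openssl base64 > row.sig

# Verify: decode the signature and check it against the row.
openssl base64 -d < row.sig > row.sig.bin
printf '%s\n' "$row" | openssl dgst -sha256 -verify key.public.pem -signature row.sig.bin
# prints: Verified OK
```

Any change to the row text between signing and verification makes the last command report a verification failure instead.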

Oct 10, 2018
--

Shasta Ventures is doubling down on security startups with 3 new hires

Early-stage venture capital firm Shasta Ventures has brought on three new faces to beef up its enterprise software and security portfolio amid a big push to “go deeper” into cybersecurity, per Shasta’s managing director Doug Pepper.

Balaji Yelamanchili (above left), the former general manager and executive vice president of Symantec’s enterprise security business unit, joins as a venture partner on the firm’s enterprise software team. He was previously a senior vice president at Oracle and Dell EMC. Pepper says Yelamanchili will be sourcing investments and may take board seats in “certain cases.”

The firm has also tapped Salesforce’s former chief information security officer Izak Mutlu (above center) as an executive-in-residence, a role in which he’ll advise Shasta portfolio companies. Mutlu spent 11 years at the cloud computing company managing IT security and compliance.

Drew Harman, the final new hire, joins from InterWest as a board partner and will work closely with the chief executive officers of Shasta’s startups. Harman has worked in enterprise software for 25 years across a number of roles. He is currently on the boards of the cloud-based monetization platform Aria, enterprise content marketing startup NewsCred, customer retention software provider Totango and others.

“There’s no area today that’s more important than cybersecurity,” Pepper told TechCrunch. “The business of venture has gotten increasingly competitive and it demands more focus than ever before. We aren’t looking for generalists, we are looking for domain experts.”

Shasta’s security investments include email authentication service Valimail, which raised a $25 million Series B in May. Airspace Systems, a startup that built “kinetic capture” technologies that can identify offending unmanned aircraft and take them down, raised a $20 million round with participation from Shasta in March. And four-year-old Stealth Security, a startup that defends companies from automated bot attacks, secured an $8 million investment from Shasta in February.

The Menlo Park-based firm filed to raise $300 million for its fifth flagship VC fund in 2016. A year later, it announced a specialty vehicle geared toward augmented and virtual reality app development. With more than $1 billion under management, the firm also backs consumer, IoT, robotics and space-tech companies across the U.S.

In the last year, Shasta has promoted Nikhil Basu Trivedi, Nitin Chopra and Jacob Mullins from associate to partner, as well as added two new associates, Natalie Sandman and Rachel Star.

Oct 10, 2018
--

Egnyte hauls in $75M investment led by Goldman Sachs

Egnyte launched in 2007 just two years after Box, but unlike its enterprise counterpart, which went all-cloud and raised hundreds of millions of dollars, Egnyte saw a different path with a slow and steady growth strategy and a hybrid niche, recognizing that companies were going to keep some content in the cloud and some on prem. Up until today it had raised a rather modest $62.5 million, and hadn’t taken a dime since 2013, but that all changed when the company announced a whopping $75 million investment.

The entire round came from a single investor, Goldman Sachs’ Private Capital Investing arm, a part of Goldman’s Special Situations group. Holger Staude, vice president of Goldman Sachs Private Capital Investing, will join Egnyte’s board under the terms of the deal. He says Goldman liked what it saw: a steady company poised for bigger growth with the right influx of capital. In fact, the company has had more than eight straight quarters of growth and has been cash flow positive since Q4 2016.

“We were impressed by the strong management team and the company’s fiscal discipline, having grown their top line rapidly without requiring significant outside capital for the past several years. They have created a strong business model that we believe can be replicated with success at a much larger scale,” Staude explained.

Company CEO Vineet Jain helped start the company as a way to store and share files in a business context, but over the years, he has built that into a platform that includes security and governance components. Jain also saw a market poised for growth with companies moving increasing amounts of data to the cloud. He felt the time was right to take on more significant outside investment. His first step, he said, was to build a list of investors, but Goldman shone through.

“Goldman had reached out to us before we even started the fundraising process. There was inbound interest. They were more aggressive compared to others. Given there was prior conversations, the path to closing was shorter,” he said.

He wouldn’t discuss a specific valuation, but did say they have grown 6x since the 2013 round and he got what he described as “a decent valuation.” As for an IPO, he predicted this would be the final round before the company eventually goes public. “This is our last fund raise. At this level of funding, we have more than enough funding to support a growth trajectory to IPO,” he said.

Philosophically, Jain has always believed that it wasn’t necessary to hit the gas until he felt the market was really there. “I started off from a point of view to say, keep building a phenomenal product. Keep focusing on a post sales experience, which is phenomenal to the end user. Everything else will happen. So this is where we are,” he said.

Jain indicated the round isn’t about taking on money for money’s sake. He believes that this is going to fuel a huge growth stage for the company. He doesn’t plan to focus these new resources strictly on the sales and marketing department, as you might expect. He wants to scale every department in the company including engineering, post-sales and customer success.

Today the company has 450 employees and more than 14,000 customers across a range of sizes and sectors including Nasdaq, Thoma Bravo, AppDynamics and Red Bull. The deal closed at the end of last month.

Oct 9, 2018
--

Microsoft shows off government cloud services with JEDI due date imminent

Just a day after Google decided to drop out of the Pentagon’s massive $10 billion, 10-year JEDI cloud contract bidding, Microsoft announced increased support services for government clients. In a long blog post, the company laid out its government focused cloud services.

While today’s announcement is not directly related to JEDI, the timing is interesting, coming just three days ahead of the October 12th deadline for submitting RFPs; it is about showing just how comprehensive the company’s government-specific cloud services are.

In a blog post, Microsoft corporate vice president for Azure, Julia White made it clear the company was focusing hard on the government business. “In the past six months we have added over 40 services and features to Azure Government, as well as publishing a new roadmap for the Azure Government regions providing ongoing transparency into our upcoming releases,” she wrote.

“Moving forward, we are simplifying our approach to regulatory compliance for federal agencies, so that our government customers can gain access to innovation more rapidly. In addition, we are adding new options for buying and onboarding cloud services to make it easier to move to the cloud. Finally, we are bringing an array of new hybrid and edge capabilities to government to ensure that government customers have full access to the technology of the intelligent edge and intelligent cloud era,” White added.

While much of the post was around the value proposition of Azure in general such as security, identity, artificial intelligence and edge data processing services, there were a slew of items aimed specifically at the government clients.

For starters, the company is increasing its FedRAMP compliance, a series of regulations designed to ensure vendors deliver cloud services securely to federal government customers. Specifically Microsoft is moving from FedRAMP moderate to high ratings on 50 services.

“By taking the broadest regulatory compliance approach in the industry, we’re making commercial innovation more accessible and easier for government to adopt,” White wrote.

In addition, Microsoft announced it’s expanding Azure Secret Regions, a solution designed specifically for dealing with highly classified information in the cloud. This one appears to take direct aim at JEDI. “We are making major progress in delivering this cloud designed to meet the regulatory and compliance requirements of the Department of Defense and the Intelligence Community. Today, we are announcing these newest regions will be available by the end of the first quarter of 2019. In addition, to meet the growing demand and requirements of the U.S. Government, we are confirming our intent to deliver Azure Government services to meet the highest classification requirements, with capabilities for handling Top Secret U.S. classified data,” White wrote.

The company’s announcement, which included many pieces that had been previously announced, is clearly designed to show off its government chops at a time when a major government contract is up for grabs. The company announced Azure Stack for Government in August, another piece mentioned in this blog post.

Oct 4, 2018
--

BlackBerry races ahead of security curve with quantum-resistant solution

Quantum computing represents tremendous promise to completely alter technology as we’ve known it, allowing operations that weren’t previously possible with traditional computing. The downside of these powerful machines is that they could be strong enough to break conventional cryptography schemes. Today, BlackBerry announced a new quantum-resistant code signing service to help battle that possibility.

The service is meant to anticipate a problem that doesn’t exist yet. Perhaps that’s why BlackBerry hedged its bets in the announcement saying, “The new solution will allow software to be digitally signed using a scheme that will be hard to break with a quantum computer.” Until we have fully functioning quantum computers capable of breaking current encryption, we probably won’t know for sure if this works.

But give BlackBerry credit for getting ahead of the curve and trying to solve a problem that has concerned technologists as quantum computers begin to evolve. The solution, which will be available next month, is actually the product of a partnership between BlackBerry and Isara Corporation, a company whose mission is to build quantum-safe security solutions. BlackBerry is using Isara’s cryptographic libraries to help sign and protect code as security evolves.

“By adding the quantum-resistant code signing server to our cybersecurity tools, we will be able to address a major security concern for industries that rely on assets that will be in use for a long time. If your product, whether it’s a car or critical piece of infrastructure, needs to be functional 10-15 years from now, you need to be concerned about quantum computing attacks,” Charles Eagan, BlackBerry’s chief technology officer, said in a statement.

While experts argue over how long it could take to build a fully functioning quantum computer, most agree that computers of between 50 and 100 qubits are needed to begin realizing that vision. IBM released a 20-qubit computer last year and introduced a 50-qubit prototype. A qubit represents a single unit of quantum information.

At TechCrunch Disrupt last month, Dario Gil, IBM’s vice president of artificial intelligence and quantum computing, and Chad Rigetti, a former IBM researcher who is founder and CEO at Rigetti Computing, predicted we could be just three years away from the point where a quantum computer surpasses traditional computing.

IBM Quantum Computer. Photo: IBM

Whether it happens that quickly or not remains to be seen, but experts have been expressing security concerns around quantum computing as they grow more powerful, and BlackBerry is addressing that concern by coming up with a solution today, arguing that if you are creating critical infrastructure you need to future-proof your security.

BlackBerry, once known for highly secure phones, and one of the earliest popular business smartphones, has pivoted to be more of a security company in recent years. This announcement, made at the BlackBerry Security Summit, is part of the company’s focus on keeping enterprises secure.

Oct 3, 2018
--

Palo Alto Networks to acquire RedLock for $173M to beef up cloud security

Palo Alto Networks launched in 2005 in the age of firewalls. As we all know by now, the enterprise expanded beyond the cozy confines of a firewall long ago, and vendors like Palo Alto have moved to securing data in the cloud too. To that end, the company announced today its intent to pay $173 million for RedLock, an early-stage startup that helps companies make sure their cloud instances are locked down and secure.

The cloud vendors take responsibility for securing their own infrastructure, and for the most part the major vendors have done a decent job. What they can’t do is save their customers from themselves and that’s where a company like RedLock comes in.

As we’ve seen time and again, data has been exposed in cloud storage services like Amazon S3, not through any fault of Amazon itself, but because a faulty configuration has left the data exposed to the open internet. RedLock watches configurations like this and warns companies when something looks amiss.

When the company emerged from stealth just a year ago, Varun Badhwar, company founder and CEO told TechCrunch that this is part of Amazon’s shared responsibility model. “They have diagrams where they have responsibility to secure physical infrastructure, but ultimately it’s the customer’s responsibility to secure the content, applications and firewall settings,” Badhwar told TechCrunch last year.

Badhwar speaking in a video interview about the acquisition says they have been focused on helping developers build cloud applications safely and securely, whether that’s Amazon Web Services, Microsoft Azure or Google Cloud Platform. “We think about [RedLock] as guardrails or as bumper lanes in a bowling alley and just not letting somebody get that gutter ball and from a security standpoint, just making sure we don’t deviate from the best practices,” he explained.

“We built a technology platform that’s entirely cloud-based and very quick time to value since customers can just turn it on through API’s, and we love to shine the light and show our customers how to safely move into public cloud,” he added.

The acquisition will also fit nicely with Evident.io, a cloud infrastructure security startup the company acquired in March for $300 million. Badhwar believes that customers will benefit from Evident’s compliance capabilities being combined with RedLock’s analytics capabilities to provide a more complete cloud security solution.

RedLock launched in 2015 and has raised $12 million. The $173 million purchase would appear to be a great return for the investors who put their faith in the startup.

Oct 2, 2018
--

NYC wants to build a cyber army

Empires rise and fall, and none more so than business empires. Whole industries that once dominated the planet are just a figment in memory’s eye, while new industries quietly grow into massive behemoths.

New York City has certainly seen its share of empires. Today, the city is a global center of finance, real estate, legal services, technology, and many, many more industries. It hosts the headquarters of roughly 10% of the Fortune 500, and the metro’s GDP is roughly equivalent to that of Canada.

So much wealth and power, and all under constant attack. The value of technology and data has skyrocketed, and so has the value of stealing and disrupting the services that rely upon it. Cyber crime and cyber wars are adding up: according to a report published jointly by McAfee and the Center for Strategic and International Studies, the costs of these operations are in the hundreds of billions of dollars, and New York’s top industries such as financial services bear the brunt of the losses.

Yet, New York City has hardly been a bastion for the cybersecurity industry. Boston and Washington DC are far stronger today on the Acela corridor, and San Francisco and Israel have both made huge impacts on the space. Now, NYC’s leaders are looking to build a whole new local empire that might just act as a bulwark for its other leading ecosystems.

Today, the New York City Economic Development Corporation (NYCEDC) announced the launch of Cyber NYC, a $30 million “catalyzing” investment designed to rapidly grow the city’s ecosystem and infrastructure for cybersecurity.

James Patchett, CEO of New York City Economic Development Corporation. (Photo from NYCEDC)

James Patchett, CEO of NYCEDC, explained in an interview with TechCrunch that cybersecurity is “both an incredible opportunity and also a huge threat.” He noted that “the financial industry has been the lifeblood of this city for our entire history,” and the costs of cybercrime are rising quickly. “It’s a lose-lose if we fail to invest in the innovation that keeps the city strong” but “it’s a win if we can create all of that innovation here and the corresponding jobs,” he said.

The Cyber NYC program is made up of a constellation of programs:

  • Partnering with Jerusalem Venture Partners, an accelerator called Hub.NYC will develop enterprise cybersecurity companies by connecting them with advisors and customers. The program will be hosted in a nearly 100,000 square foot building in SoHo.
  • Partnering with SOSA, the city will create a new, 15,000 square foot Global Cyber Center co-working facility in Chelsea, where talented individuals in the cyber industry can hang out and learn from each other through event programming and meetups.
  • With Fullstack Academy and LaGuardia Community College, a Cyber Boot Camp will be created to enhance the ability of local workers to find jobs in the cybersecurity space.
  • Through an “Applied Learning Initiative,” students will be able to earn a “CUNY-Facebook Master’s Degree” in cybersecurity. The program has participation from the City University of New York, New York University, Columbia University, Cornell Tech, and iQ4.
  • With Columbia University’s Technology Ventures, NYCEDC will introduce a program called Inventors to Founders that will work to commercialize university research.

NYCEDC’s map of the Cyber NYC initiative. (Photo from NYCEDC)

In addition to Facebook, other companies have made commitments to the program, including Goldman Sachs, MasterCard, PricewaterhouseCoopers, and edX.org. Two Goldman execs, Chief Operational Risk Officer Phil Venables and Chief Information Security Officer Andy Ozment, have joined the initiative’s advisory boards.

The NYCEDC estimates that there are roughly 6,000 cybersecurity professionals currently employed in New York City. Through these programs, it estimates that the number could increase by another 10,000. Patchett said that “it is as close to a no-brainer in economic development because of the opportunity and the risk.”

From Jerusalem to New York

To tackle its ambitious cybersecurity goals, the NYCEDC is partnering with two venture firms, Jerusalem Venture Partners (JVP) and SOSA, with significant experience investing, operating, and growing companies in the sector.

Jerusalem-based JVP is an established investor that should help founders at Hub.NYC get access to smart capital, sector expertise, and the entrepreneurial experience needed to help their startups scale. JVP invests in early-, late-, and growth-stage companies focused on cybersecurity, big data, media, and enterprise software.

JVP will run Hub.NYC, a startup accelerator that will help cybersecurity startups connect with customers and mentors. (Photo from JVP)

Erel Margalit, who founded the firm in 1993, said that “If you look at what JVP has done … we create ecosystems.” Working with Jerusalem’s metro government, Margalit and the firm pioneered a number of institutions such as accelerators that turned Israel into an economic powerhouse in the cybersecurity industry. His social and economic work eventually led him to the Knesset, Israel’s unicameral legislature, where he served as an MP from 2015-2017 with the Labor Party.

Israel is a very small country with a relative dearth of large companies though, a huge challenge for startups looking to scale up. “Today if you want to build the next-generation leading companies, you have to be not only where the ideas are being brewed, but also where the solutions are being [purchased],” Margalit explained. “You need to be working with the biggest customers in the world.”

That place, in his mind, is New York City. It’s a city he has known since his youth – he worked at Moshe’s Moving IN NYC while attending Columbia as a grad student where he got his PhD in philosophy. Now, he can pack up his own success from Israel and scale it up to an even larger ecosystem.

Since its founding, JVP has successfully raised $1.1 billion across eight funds, including a $60 million fund specifically focused on the cybersecurity space. Over the same period, the firm has seen 32 successful exits, including cybersecurity companies CyberArk (IPO in 2014) and CyActive (acquired by PayPal in 2013).

JVP’s efforts in the cybersecurity space also go beyond the investment process, with the firm recently establishing an incubator, known as JVP Cyber Labs, specifically focused on identifying, nurturing and building the next wave of Israeli cybersecurity and big data companies.

On average, the firm has focused on deals in the $5-$10 million range, with a general proclivity for earlier-stage companies where the firm can take a more hands-on mentorship role. Some of JVP’s notable active portfolio companies include Source Defense, which uses automation to protect against website supply chain attacks, ThetaRay, which uses big data to analyze threats, and Morphisec, which sells endpoint security solutions.

Opening up innovation with SOSA

The self-described “open-innovation platform,” SOSA is a global network of corporations, investors, and entrepreneurs that connects major institutions with innovative startups tackling core needs.

SOSA works closely with its partner startups, providing investor sourcing, hands-on mentorship and the physical resources needed to achieve growth. The group’s areas of expertise include cybersecurity, fintech, automation, energy, mobility, and logistics. Though headquartered in Tel Aviv, SOSA recently opened an innovation lab in New York, backed by major partners including HP, RBC, and Jefferies.

With the eight-floor Global Cyber Center located in Chelsea, it is turning its attention to an even more ambitious agenda. Uzi Scheffer, CEO of SOSA, said to TechCrunch in a statement that “The Global Cyber Center will serve as a center of gravity for the entire cybersecurity industry where they can meet, interact and connect to the finest talent from New York, the States, Israel and our entire global network.”

SOSA’s new building in Chelsea will be a center for the cybersecurity community (Photo from SOSA)

With an already established presence in New York, SOSA’s local network could help spur the local corporate participation key to the EDC’s plan, while SOSA’s broader global network can help achieve aspirations of turning New York City into a global cybersecurity leader.

It is no coincidence that both of the EDC’s venture partners are familiar with the Israeli cybersecurity ecosystem. Israel has long been viewed as a leader in cybersecurity innovation and policy, and has benefited from the same successful public-private sector coordination New York hopes to replicate.

Furthermore, while New York hopes to create organic growth within its own local ecosystem, the partnerships could also benefit the city if leading Israeli cybersecurity companies look to relocate due to the limited size of the Israeli market.

Big plans, big results?

While we spent comparatively less time discussing them, the NYCEDC’s educational programs are particularly interesting. Students will be able to take classes at any university in the five-member consortium, and transfer credits freely, a concept that the NYCEDC bills as “stackable certificates.”

Meanwhile, Facebook has partnered with the City University of New York to create a professional master’s degree program to train up a new class of cybersecurity leaders. The idea is to provide a pathway to a widely-respected credential without having to take too much time off of work. NYCEDC CEO Patchett said, “you probably don’t have the time to take two years off to do a masters program,” and so the program’s flexibility should provide better access to more professionals.

Together, all of these disparate programs add up to a bold attempt to put New York City on the map for cybersecurity. Talent development, founder development, customer development – all have been addressed with capital and new initiatives.

Will the community show up at initiatives like the Global Cyber Center, pictured here? (Photo from SOSA)

Yet, despite the time that NYCEDC has spent to put all of these partners together cohesively under one initiative, the real challenge starts with getting the community to participate and build upon these nascent institutions. “What we hear from folks a lot of time,” Patchett said to us, is that “there is no community for cyber professionals in New York City.” Now the buildings have been placed, but the people need to walk through the front doors.

The city wants these programs to be self-sustaining as soon as possible. “In all cases, we don’t want to support these ecosystems forever,” Patchett said. “If we don’t think they’re financially sustainable, we haven’t done our job right.” He believes that “there should be a natural incentive to invest once the ecosystem is off the ground.”

As the world encounters an ever-increasing array of cyber threats, old empires can falter – and new empires can grow. Cybersecurity may well be one of the next great industries, and it may just provide the needed defenses to ensure that New York City’s other empires can live another day.

Sep
27
2018
--

Alphabet’s Chronicle launches an enterprise version of VirusTotal

VirusTotal, the virus and malware scanning service owned by Alphabet’s Chronicle, launched an enterprise-grade version of its service today. VirusTotal Enterprise offers significantly faster and more customizable malware search, as well as a new feature called Private Graph, which allows enterprises to create their own private visualizations of their infrastructure and the malware that affects their machines.

The Private Graph makes it easier for enterprises to create an inventory of their internal infrastructure and users to help security teams investigate incidents (and where they started). In the process of building this graph, VirusTotal also looks at commonalities between different nodes to be able to detect changes that could signal potential issues.

The company stresses that these graphs are obviously kept private. That’s worth noting because VirusTotal already offered a similar tool for its premium users — the VirusTotal Graph. All of the information there, however, was public.

As for the faster and more advanced search tools, VirusTotal notes that its service benefits from Alphabet’s massive infrastructure and search expertise. This allows VirusTotal Enterprise to offer a 100x speed increase, as well as better search accuracy. Using the advanced search, the company notes, a security team could now extract the icon from a fake application, for example, and then return all malware samples that share the same icon.

VirusTotal says that it plans to “continue to leverage the power of Google infrastructure” and expand this enterprise service over time.

Google acquired VirusTotal back in 2012. For the longest time, the service didn’t see too many changes, but earlier this year, Google’s parent company Alphabet moved VirusTotal under the Chronicle brand and the development pace seems to have picked up since.

Sep
25
2018
--

Snyk raises $22M on a $100M valuation to detect security vulnerabilities in open source code

Open source software is now a $14 billion+ market and growing fast, in use in one way or another in 95 percent of all enterprises. But that expansion comes with a shadow: open source components can come with vulnerabilities, and so their widespread use in apps becomes a liability to a company’s cybersecurity.

Now, a startup out of the UK called Snyk, which has built a way to detect when those apps or components are compromised, is announcing a $22 million round of funding to meet the demand from enterprises wanting to tackle the issue head on.

Led by Accel, with participation from GV plus previous investors Boldstart Ventures and Heavybit, this Series B notably is the second round raised by Snyk within seven months — it raised a $7 million Series A in March. That’s a measure of how the company is growing (and how enthusiastic investors are about what it has built so far). The startup is not disclosing its valuation but a source close to the deal says it is around $100 million now (it’s raised about $33 million to date).

As another measure of Snyk’s growth, the company says it now has over 200 paying customers and 150,000 users, with revenues growing five-fold in the last nine months. In March, it had 130 paying customers.

(Current clients include ASOS, Digital Ocean, New Relic and Skyscanner, the company said.)

Snyk plays squarely in the middle of how the landscape for enterprise services exists today. It provides options for organisations to use it on-premises, via the cloud, or in a hybrid version of the two, with a range of paid and free tiers to get users acquainted with the service.

Guy Podjarny, the company’s CEO who co-founded Snyk with Assaf Hefetz and Danny Grander, explained that Snyk works in two parts. First, the startup has built a threat intelligence system “that listens to open source activity.” Tapping into open-conversation platforms — for example, GitHub commits and forum chatter — Snyk uses machine learning to detect potential mentions of vulnerabilities. It then funnels these to a team of human analysts, “who verify and curate the real ones in our vulnerability DB.”

Second, the company analyses source code repositories — including, again, GitHub as well as BitBucket — “to understand which open source components each one uses, flag the ones that are vulnerable, and then auto-fix them by proposing the right dependency version to use and through patches our security team builds.”

Open source components don’t have more vulnerabilities than closed source ones, he added, “but their heavy reuse makes those vulnerabilities more impactful.” Components can be used in thousands of applications, and by Snyk’s estimation, some 77 percent of those applications will end up with components that have security vulnerabilities. “As a result, the chances of an organisation being breached through a vulnerable open source component are far greater than a security flaw purely in their code.”

Podjarny says the plan is not to tackle proprietary code longer term but to expand how it can monitor apps built on open source.

“Our focus is on two fronts – building security tools developers love, and fixing open source security,” he said. “We believe the risk from insecure use of open source code is far greater than that of your own code, and is poorly addressed in the industry. We do intend to expand our protection from fixing known vulnerabilities in open source components to monitoring and securing them in runtime, flagging and containing malicious and compromised components.”

While this is a relatively new area for security teams to monitor and address, he added that the Equifax breach highlighted what might happen in the worst-case scenario if such issues go undetected. Snyk is not the only company that has identified the gap in the market. Black Duck focuses on flagging non-compliant open source licences, and offers some security features as well.

However, it is Snyk — whose name derives from a play on the word “sneak”, combined with the acronym meaning “so now you know” — that seems to be catching the most attention at the moment.

“Some of the largest data breaches in recent years were the result of unfixed vulnerabilities in open source dependencies; as a result, we’ve seen the adoption of tools to monitor and remediate such vulnerabilities grow exponentially,” said Philippe Botteri, partner at Accel, who is joining the board with this round. “We’ve also seen the ownership of application security shifting towards developers. We feel that Snyk is uniquely positioned in the market given the team’s deep security domain knowledge and developer-centric mindset, and are thrilled to join them on this mission of bringing security tools to developers.”

Sep
24
2018
--

Backing up Percona Server for MySQL with keyring_vault plugin enabled

Percona XtraBackup with keyring_vault

To use Percona XtraBackup with the keyring_vault plugin enabled, you need to take some special measures to secure a working backup. This post addresses how to back up Percona Server for MySQL with the keyring_vault plugin enabled. We also run through the steps needed to restore the backup from the master to a slave.

This is the second of a two-part series on setting up Hashicorp Vault with Percona Server for MySQL with the keyring_vault plugin. The first part is Using the keyring_vault plugin with Percona Server for MySQL 5.7.

Backing up from the master

First, you need to install the latest Percona XtraBackup 2.4 package. In this tutorial I used this version:

[root@mysql1 ~]# xtrabackup --version
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=1
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)

Create a transition key using any method you prefer. This transition key will be used by Percona XtraBackup to encrypt the keys of the files being backed up. Make sure you do not lose the transition key, or the backup will be unrecoverable.

[root@mysql1 ~]# openssl rand -base64 24
NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT
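If you prefer to generate the transition key programmatically rather than shelling out to openssl, a minimal Python sketch producing the same kind of value (24 random bytes, base64-encoded) could look like this. The function name is ours, not part of any XtraBackup API:

```python
import base64
import secrets

# Generate a random transition key, base64-encoded -- equivalent in
# shape to the `openssl rand -base64 24` command used above.
def make_transition_key(nbytes: int = 24) -> str:
    return base64.b64encode(secrets.token_bytes(nbytes)).decode("ascii")

key = make_transition_key()
print(key, len(key))  # 24 raw bytes always encode to 32 base64 characters
```

Any source of cryptographically strong randomness works; what matters is that the exact same string is supplied to --transition-key on both backup and restore.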

You can store the transition-key in Vault and retrieve it later:

[root@mysql1 ~]# # Store the transition-key to the Vault server
[root@mysql1 ~]# curl -H "Content-Type: application/json" -H "X-Vault-Token: be515093-b1a8-c799-b237-8e04ea90ad7a" --cacert "/etc/vault_ca/vault.pem" -X PUT -d '{"value": "NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT"}' "https://192.168.0.114:8200/v1/secret/dc1/master/transition_key"
[root@mysql1 ~]# # Retrieve the transition-key from the Vault server
[root@mysql1 ~]# curl -s -H "X-Vault-Token: be515093-b1a8-c799-b237-8e04ea90ad7a" --cacert "/etc/vault_ca/vault.pem" -X GET "https://192.168.0.114:8200/v1/secret/dc1/master/transition_key" | jq .data.value
"NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT"
[root@mysql1 ~]# # Delete the transition-key from the Vault server
[root@mysql1 ~]# curl -H "Content-Type: application/json" -H "X-Vault-Token: be515093-b1a8-c799-b237-8e04ea90ad7a" --cacert "/etc/vault_ca/vault.pem" -X DELETE "https://192.168.0.114:8200/v1/secret/dc1/master/transition_key"
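The same store/retrieve/delete calls can be made from Python using only the standard library. This is a hedged sketch mirroring the curl commands above: the Vault address, token, and KV version 1 path layout are taken from this post's examples, and you would need to adapt them to your own setup (KV version 2 uses a different path scheme, secret/data/...):

```python
import json
import urllib.request

# Values from the examples in this post -- replace with your own.
VAULT_ADDR = "https://192.168.0.114:8200"
TOKEN = "be515093-b1a8-c799-b237-8e04ea90ad7a"

def kv1_url(path: str) -> str:
    """Build a Vault KV v1 API URL for the given secret path."""
    return f"{VAULT_ADDR}/v1/{path}"

def vault_request(method: str, path: str, value=None):
    """Prepare a PUT/GET/DELETE request against the KV v1 store."""
    data = json.dumps({"value": value}).encode() if value is not None else None
    req = urllib.request.Request(kv1_url(path), data=data, method=method)
    req.add_header("X-Vault-Token", TOKEN)
    if data is not None:
        req.add_header("Content-Type", "application/json")
    # Send with urllib.request.urlopen(req, context=...) using an SSL
    # context loaded from vault.pem, mirroring curl's --cacert flag.
    return req

req = vault_request("PUT", "secret/dc1/master/transition_key",
                    "NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT")
print(req.get_method(), req.full_url)
```

The request is only constructed here, not sent, since sending requires a reachable Vault server and its CA certificate.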

We will stream the backup to the slave server using netcat. First, run this on the slave:

[root@mysql2 ~]# ncat -l 9999 | cat - > backup.xbstream

Then, on the master, I used --stream=xbstream, since streaming with --stream=tar fails (as reported in PXB-1571). Run the xtrabackup command like this:

[root@mysql1 ~]# xtrabackup --stream=xbstream --backup --target-dir=backup/ --transition-key=NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT | nc 192.168.0.117 9999
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=1 --transition-key=*
xtrabackup: recognized client arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=1 --transition-key=* --user=root --stream=xbstream --backup=1 --target-dir=backup/
180715 01:28:56  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'root'  (using password: NO).
180715 01:28:56  version_check Connected to MySQL server
180715 01:28:56  version_check Executing a version check against the server...
180715 01:28:56  version_check Done.
180715 01:28:56 Connecting to MySQL server host: localhost, user: root, password: not set, port: not set, socket: not set
Using server version 5.7.22-22-log
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 0, set to 65536
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 50331648
InnoDB: Number of pools: 1
180715 01:28:56 Added plugin 'keyring_vault.so' to load list.
180715 01:28:56 >> log scanned up to (2616858)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 2 for mysql/plugin, old maximum was 0
...
180715 01:28:58 Finished backing up non-InnoDB tables and files
180715 01:28:58 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
xtrabackup: The latest check point (for incremental): '2616849'
xtrabackup: Stopping log copying thread.
.180715 01:28:58 >> log scanned up to (2616865)
180715 01:28:58 Executing UNLOCK TABLES
180715 01:28:58 All tables unlocked
180715 01:28:58 [00] Streaming ib_buffer_pool to
180715 01:28:58 [00]        ...done
180715 01:28:58 Backup created in directory '/root/backup/'
180715 01:28:58 [00] Streaming
180715 01:28:58 [00]        ...done
180715 01:28:58 [00] Streaming
180715 01:28:58 [00]        ...done
180715 01:28:58 Saving xtrabackup_keys.
xtrabackup: Transaction log of lsn (2616849) to (2616865) was copied.
Shutting down plugin 'keyring_vault'
180715 01:28:58 completed OK!

Restoring the backup on the Slave server

Extract the backup to a temporary location:

[root@mysql2 backup]# xbstream -x < ../backup.xbstream

Then prepare it with the following command. Notice that we are still using the same transition key we used when backing up the database on the master server.

[root@mysql2 ~]# xtrabackup --prepare --target-dir=backup/ --transition-key=NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT
xtrabackup: recognized server arguments: --innodb_checksum_algorithm=crc32 --innodb_log_checksum_algorithm=strict_crc32 --innodb_data_file_path=ibdata1:12M:autoextend --innodb_log_files_in_group=2 --innodb_log_file_size=50331648 --innodb_fast_checksum=0 --innodb_page_size=16384 --innodb_log_block_size=512 --innodb_undo_directory=./ --innodb_undo_tablespaces=0 --server-id=1 --redo-log-version=1 --transition-key=*
xtrabackup: recognized client arguments: --innodb_checksum_algorithm=crc32 --innodb_log_checksum_algorithm=strict_crc32 --innodb_data_file_path=ibdata1:12M:autoextend --innodb_log_files_in_group=2 --innodb_log_file_size=50331648 --innodb_fast_checksum=0 --innodb_page_size=16384 --innodb_log_block_size=512 --innodb_undo_directory=./ --innodb_undo_tablespaces=0 --server-id=1 --redo-log-version=1 --transition-key=* --prepare=1 --target-dir=backup/
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)
xtrabackup: cd to /root/backup/
xtrabackup: This target seems to be not prepared yet.
...
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 2617384
180715 01:31:10 completed OK!

Configure keyring_vault.conf on slave

Create the keyring_vault.conf file with the following contents:

[root@mysql2 ~]# cat /var/lib/mysql-keyring/keyring_vault.conf
vault_url = https://192.168.0.114:8200
secret_mount_point = secret/dc1/slave
token = be515093-b1a8-c799-b237-8e04ea90ad7a
vault_ca = /etc/vault_ca/vault.pem

Notice that it uses the same token as the master server but has a different secret_mount_point. The same CA certificate will be used across all servers connecting to this Vault server.

Use the --copy-back option to finalize backup restoration

Next, use the --copy-back option to copy the files from the temporary backup location to the mysql data directory on the slave. Observe that during this phase XtraBackup generates a new master key, stores it in the Vault server, and re-encrypts the tablespace headers using this key.

[root@mysql2 ~]# xtrabackup --copy-back --target-dir=backup/ --transition-key=NSu7kfUgcTTIY2ym7Qu6jnYOotOuMIeT --generate-new-master-key --keyring-vault-config=/var/lib/mysql-keyring/keyring_vault.conf
xtrabackup: recognized server arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=2 --transition-key=* --generate-new-master-key=1
xtrabackup: recognized client arguments: --datadir=/var/lib/mysql --log_bin=mysqld-bin --server-id=2 --transition-key=* --generate-new-master-key=1 --copy-back=1 --target-dir=backup/
xtrabackup version 2.4.12 based on MySQL server 5.7.19 Linux (x86_64) (revision id: 170eb8c)
180715 01:32:28 Loading xtrabackup_keys.
180715 01:32:28 Loading xtrabackup_keys.
180715 01:32:29 Generated new master key with ID 'be1ba51c-87c0-11e8-ac1c-00163e79c097-2'.
...
180715 01:32:29 [01] Encrypting /var/lib/mysql/mysql/plugin.ibd tablespace header with new master key.
180715 01:32:29 [01] Copying ./mysql/servers.ibd to /var/lib/mysql/mysql/servers.ibd
180715 01:32:29 [01]        ...done
180715 01:32:29 [01] Encrypting /var/lib/mysql/mysql/servers.ibd tablespace header with new master key.
180715 01:32:29 [01] Copying ./mysql/help_topic.ibd to /var/lib/mysql/mysql/help_topic.ibd
180715 01:32:29 [01]        ...done
180715 01:32:29 [01] Encrypting /var/lib/mysql/mysql/help_topic.ibd tablespace header with new master key.
180715 01:32:29 [01] Copying ./mysql/help_category.ibd to /var/lib/mysql/mysql/help_category.ibd
180715 01:32:29 [01]        ...done
180715 01:32:29 [01] Encrypting /var/lib/mysql/mysql/help_category.ibd tablespace header with new master key.
180715 01:32:29 [01] Copying ./mysql/help_relation.ibd to /var/lib/mysql/mysql/help_relation.ibd
180715 01:32:29 [01]        ...done
...
180715 01:32:30 [01] Encrypting /var/lib/mysql/encryptedschema/t1.ibd tablespace header with new master key.
180715 01:32:30 [01] Copying ./encryptedschema/db.opt to /var/lib/mysql/encryptedschema/db.opt
180715 01:32:30 [01]        ...done
...
180715 01:32:31 [01] Copying ./xtrabackup_binlog_pos_innodb to /var/lib/mysql/xtrabackup_binlog_pos_innodb
180715 01:32:31 [01]        ...done
180715 01:32:31 [01] Copying ./xtrabackup_master_key_id to /var/lib/mysql/xtrabackup_master_key_id
180715 01:32:31 [01]        ...done
180715 01:32:31 [01] Copying ./ibtmp1 to /var/lib/mysql/ibtmp1
180715 01:32:31 [01]        ...done
Shutting down plugin 'keyring_vault'
180715 01:32:31 completed OK!

Once that’s done, change file/directory ownership to mysql.

[root@mysql2 ~]# chown -R mysql:mysql /var/lib/mysql/

Start the mysqld instance on the slave server configured similarly to the master configuration in the first part of this series.

early-plugin-load="keyring_vault=keyring_vault.so"
loose-keyring_vault_config="/var/lib/mysql-keyring/keyring_vault.conf"
encrypt_binlog=ON
innodb_encrypt_online_alter_logs=ON
innodb_encrypt_tables=ON
innodb_temp_tablespace_encrypt=ON
master_verify_checksum=ON
binlog_checksum=CRC32
log_bin=mysqld-bin
binlog_format=ROW
server-id=2
log-slave-updates

[root@mysql2 ~]# systemctl status mysqld
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2018-07-15 01:32:59 UTC; 6h ago
     Docs: man:mysqld(8)
           http://dev.mysql.com/doc/refman/en/using-systemd.html
  Process: 1390 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 1372 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 1392 (mysqld)
   CGroup: /system.slice/mysqld.service
           └─1392 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
Jul 15 01:32:58 mysql2 systemd[1]: Starting MySQL Server...
Jul 15 01:32:59 mysql2 systemd[1]: Started MySQL Server.

From here, you should be able to create the replication user on the master, and then configure slave replication based on the coordinates in the xtrabackup_binlog_info file. You can follow this section of the manual on starting replication.
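As a small aid for that last step, here is a hedged sketch that reads the replication coordinates out of the xtrabackup_binlog_info file left in the backup and prints a CHANGE MASTER TO template. The file layout assumed here (binlog file name and position, tab-separated, possibly followed by a GTID set) is common XtraBackup output, but verify it against your own backup; the helper names and the sample values are ours:

```python
# Parse xtrabackup_binlog_info and emit a CHANGE MASTER TO statement.
def parse_binlog_info(line: str):
    """Return (binlog_file, position) from an xtrabackup_binlog_info line."""
    parts = line.strip().split("\t")
    return parts[0], int(parts[1])

def change_master_stmt(host: str, user: str, info_line: str) -> str:
    binlog_file, pos = parse_binlog_info(info_line)
    return (
        f"CHANGE MASTER TO MASTER_HOST='{host}', MASTER_USER='{user}', "
        f"MASTER_LOG_FILE='{binlog_file}', MASTER_LOG_POS={pos};"
    )

# Sample line shaped like this tutorial's output; read the real file
# from the restored datadir instead.
sample = "mysqld-bin.000002\t2616865\n"
print(change_master_stmt("192.168.0.114", "repl", sample))
```

You would still create the replication user on the master first (GRANT REPLICATION SLAVE), run the generated statement on the slave along with MASTER_PASSWORD, and then START SLAVE.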

For further reference, please read the manual related to Encrypted InnoDB tablespace backups.

Is validating your security strategy a concern?

Do you need to demonstrate that the security strategy that you have implemented for your databases is sufficient and appropriate? Perhaps you could benefit from a professional database audit? It could provide the reassurance that your organization needs.

The post Backing up Percona Server for MySQL with keyring_vault plugin enabled appeared first on Percona Database Performance Blog.
