Oct
16
2018
--

GitHub launches Actions, its workflow automation tool

For the longest time, GitHub was all about storing source code and sharing it either with the rest of the world or your colleagues. Today, the company, which is in the process of being acquired by Microsoft, is taking a step in a different but related direction by launching GitHub Actions. Actions allow developers to not just host code on the platform but also run it. We’re not talking about a new cloud to rival AWS here, but instead about something more akin to a very flexible IFTTT for developers who want to automate their development workflows, whether that is sending notifications or building a full continuous integration and delivery pipeline.

This is a big deal for GitHub. Indeed, Sam Lambert, GitHub’s head of platform, described it to me as “the biggest shift we’ve had in the history of GitHub.” He likened it to Shortcuts in iOS — just more flexible. “Imagine an infinitely more flexible version of Shortcuts, hosted on GitHub and designed to allow anyone to create an action inside a container to augment and connect their workflow.”

GitHub users can use Actions to build their continuous delivery pipelines, and the company expects that many will do so. And that’s pretty much the first thing most people will think about when they hear about this new project. GitHub’s own description of Actions in today’s announcement definitely fits that bill, too. “Easily build, package, release, update, and deploy your project in any language—on GitHub or any external system—without having to run code yourself,” the company writes. But it’s about more than that.

“I see CI/CD as one narrow use case of actions. It’s so, so much more,” Lambert stressed. “And I think it’s going to revolutionize DevOps because people are now going to build best in breed deployment workflows for specific applications and frameworks, and those become the de facto standard shared on GitHub. […] It’s going to do everything we did for open source again for the DevOps space and for all those different parts of that workflow ecosystem.”

That means you can use it to send a text message through Twilio every time someone uses the ‘urgent issue’ tag in your repository, for example. Or you can write a one-line command that searches your repository with a basic grep command. Or really run any other code you want, because all you have to do to turn any code in your repository into an Action is write a Dockerfile for it so that GitHub can run it. “As long as there is a Dockerfile, we can build it, run it and connect it to your workflow,” Lambert explained. If you don’t want to write a Dockerfile, though, there’s also a visual editor you can use to build your workflow.
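
To make that concrete, here is a minimal sketch of what such an Action’s Dockerfile could look like; the notify.sh script and the base image are invented for illustration, not taken from GitHub’s announcement:

# A hypothetical Action: package a script from the repository as a container
FROM alpine:3.8
COPY notify.sh /notify.sh
RUN chmod +x /notify.sh
# GitHub builds the image and runs this entrypoint when the workflow triggers
ENTRYPOINT ["/notify.sh"]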

As Corey Wilkerson, GitHub’s head of product engineering, also noted, many of these Actions already exist in repositories on GitHub today. And there are now over 96 million repositories on GitHub, so that makes for a lot of potential Actions that will be available from the start.

With Actions, which is now in limited public beta, developers can set up the workflow to build, package, release, update and deploy their code without having to run the code themselves.

Now developers could host those Actions themselves — they are just Docker containers, after all — but GitHub will also host and run the code for them. And that includes developers on the free open source plan.

Over time — and Lambert seemed to be in favor of this — GitHub could also allow developers to sell their workflows and Actions through the GitHub marketplace. For now, that’s not an option, but it’s definitely something the company has been thinking about. Lambert also noted that this could be a way for open source developers who don’t want to build an enterprise version of their tools (and the sales force that goes with that) to monetize their efforts.

While GitHub will make its own actions available to developers, this is an open platform and others in the GitHub community can contribute their own actions, too.

GitHub will slowly open Actions to developers, starting with daily batches for the time being. You can sign up for access here.

In addition to Actions, GitHub also announced a number of other new features on its platform. As the company stressed during today’s event, its mission is to make the life of developers easier — and most of the new features may be small, but they do indeed make it easier for developers to do their jobs.

So what else is new? GitHub Connect, which connects the silo of GitHub Enterprise with the open source repositories on its public site, is now generally available, for example. GitHub Connect enables new features like unified search, which can search through both the open source code on the site and internal code, as well as a new Unified Business Identity feature that brings together the multiple GitHub Business accounts that many businesses now manage (thanks, shadow IT) under a single umbrella to improve billing, licensing and permissions.

The company also today launched three new courses in its Learning Lab that make it easier for developers to get started with the service, as well as a business version of Learning Lab for larger organizations.

What’s maybe even more interesting for developers whose companies use GitHub Enterprise, though, is that the company will now allow admins to enable a new feature that will display those developers’ work as part of their public profile. Given that GitHub is now the de facto resume for many developers, that’s a big deal. Much of their work, after all, isn’t in open source or in building side projects, but in the day-to-day work at their companies.

The other new features the company announced today are pretty much all about security. The new GitHub Security Advisory API, for example, makes it easier for developers to find threats in their code through automatic vulnerability scans, while the new security vulnerability alerts for Java and .NET projects extend GitHub’s existing alerts to these two languages. If your developers are prone to putting their security tokens into public code, you can now rest easier, since GitHub will also start scanning all public repositories for known token formats. If it finds one, it’ll alert you and you can set off to create a new one.

Oct
15
2018
--

Jeff Bezos is just fine taking the Pentagon’s $10B JEDI cloud contract

Some tech companies might have a problem taking money from the Department of Defense, but Amazon isn’t one of them, as CEO Jeff Bezos made clear today at the Wired25 conference. Just last week, Google pulled out of the running for the Pentagon’s $10 billion, 10-year JEDI cloud contract, but Bezos suggested that he was happy to take the government’s money.

Bezos has been surprisingly quiet about the contract up until now, but his company has certainly attracted plenty of attention from the companies competing for the JEDI deal. Just last week IBM filed a formal protest with the Government Accountability Office claiming that the contract was stacked in favor of one vendor. And while it didn’t name that vendor directly, the clear implication was that it was the company run by Bezos.

Last summer Oracle filed a protest of its own, complaining that it believed the government had set up the contract to favor Amazon, a charge DOD spokesperson Heather Babb denied. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said last month.

While competitors are clearly worried about Amazon, which has a substantial lead in the cloud infrastructure market, the company itself had kept quiet on the deal until now. Bezos framed his company’s support in terms of patriotism and leadership.

“Sometimes one of the jobs of the senior leadership team is to make the right decision, even when it’s unpopular. And if big tech companies are going to turn their back on the US Department of Defense, this country is going to be in trouble,” he said.

“I know everyone is conflicted about the current politics in this country, but this country is a gem,” he added.

While Google tried to frame its decision as taking a principled stand against misuse of technology by the government, Bezos chose another tack, stating that all technology can be used for good or ill. “Technologies are always two-sided. You know there are ways they can be misused as well as used, and this isn’t new,” Bezos told Wired25.

He’s not wrong, of course, but it’s hard not to look at the size of the contract and see it as purely a business decision on his part. Amazon is as hot for that $10 billion contract as any of its competitors. What’s different in this talk is that Bezos made it sound like a purely patriotic decision, rather than an economic one.

The Pentagon’s JEDI contract could have a value of up to $10 billion over a maximum length of 10 years. The contract is framed as a two-year deal with two three-year options and a final two-year option. The DOD can opt out before exercising any of the options.

Bidding for the contract closed last Friday. The DOD is expected to choose the winning vendor next April.

Oct
15
2018
--

Twilio acquires email API platform SendGrid for $2 billion in stock

Twilio, the ubiquitous communications platform, today announced its plan to acquire the API-centric email platform SendGrid for about $2 billion in an all-stock transaction. That’s Twilio’s largest acquisition to date, but also one that makes a lot of sense given that both companies aim to make building communications platforms easier for developers.

“The two companies share the same vision, the same model, and the same values,” said Twilio co-founder and CEO Jeff Lawson in today’s announcement. “We believe this is a once-in-a-lifetime opportunity to bring together the two leading developer-focused communications platforms to create the unquestioned platform of choice for all companies looking to transform their customer engagement.”

SendGrid will become a wholly owned subsidiary of Twilio and its common stock will be converted into Twilio stock. The companies expect the acquisition to close in the first half of 2019, after it has been cleared by the authorities.

Twilio’s current focus is on omnichannel communication, and email is obviously a major part of that. And while it offers plenty of services around voice, video and chat, email hasn’t been on its radar in the same way. This acquisition now allows it to quickly build up expertise in this area and expand its services there.

SendGrid went public in 2017. At the time, it priced its stock at $16. Today, before the announcement, the company was trading at just under $31, though that price obviously spiked after the news broke. That’s still down from a high of more than $36.50 last month, but in line with the overall movement of the market in recent weeks.

Today’s announcement comes shortly before Twilio’s annual developer conference, so I expect we’ll hear a lot more about its plans for SendGrid later this week.

We asked Twilio for more details about its plans for SendGrid after the acquisition closes. We’ll update this post once we hear more.

Oct
15
2018
--

Talking Drupal #180 – Media in 8.6

In episode 180, we talk with Ivan Zugec from Webwash about Media in Drupal 8.6. www.talkingdrupal.com/180

Topics

  • Drupal 8 Media
  • What’s new in Media 8.6
  • Related modules
  • Current limitations 

Resources

Ivan Zugec on Drupal.org 

Ivan’s Website

Webwash

Managing Media Assets using Core Media in Drupal 8 

New Media Management Functionality in Drupal 8.6

Hosts

Stephen Cross – www.ParallaxInfoTech.com @stephencross

John Picozzi – www.oomphinc.com @johnpicozzi

Nic Laflin – www.nLighteneddevelopment.com @nicxvan

Ivan Zugec – www.Webwash.net @ivanzugec

Oct
15
2018
--

Celonis brings intelligent process automation software to cloud

Celonis has been helping companies analyze and improve their internal processes using machine learning. Today the company announced it was providing that same solution as a cloud service with a few nifty improvements you won’t find on prem.

The new approach, called Celonis Intelligent Business Cloud, allows customers to analyze a workflow, find inefficiencies and identify improvements very quickly. Companies typically follow a workflow that has developed over time and very rarely think about why it developed the way it did, or how to fix it. If they do, it usually involves bringing in consultants to help. Celonis instead puts software and machine learning to bear on the problem.

Co-founder and CEO Alexander Rinke says that his company deals with massive volumes of data and moving all of that to the cloud makes sense. “With Intelligent Business Cloud, we will unlock that [on prem data], bring it to the cloud in a very efficient infrastructure and provide much more value on top of it,” he told TechCrunch.

The idea is to speed up the whole ingestion process, allowing a company to see the inefficiencies in their business processes very quickly. Rinke says it starts with ingesting data from sources such as Salesforce or SAP and then creating a visual view of the process flow. There may be hundreds of variants from the main process workflow, but you can see which ones would give you the most value to change, based on the number of times the variation occurs.

Screenshot: Celonis

By packaging the Celonis tools as a cloud service, the company is reducing the complexity of running and managing them. It is also introducing an app store with over 300 pre-packaged options for popular products like Salesforce and ServiceNow, and popular processes like order-to-cash. This should also help get customers up and running much more quickly.

New Celonis App Store. Screenshot: Celonis

The cloud service also includes an Action Engine, which Rinke describes as a big step toward moving Celonis from being purely analytical to operational. “Action Engine focuses on changing and improving processes. It gives workers concrete info on what to do next. For example, in process analysis, it would notice on-time delivery isn’t great because order to cash is too slow. It helps accelerate changes in system configuration,” he explained.

Celonis Action Engine. Screenshot: Celonis

The new cloud service is available today. Celonis was founded in 2011. It has raised over $77 million. The most recent round was a $50 million Series B on a valuation over $1 billion.

Oct
15
2018
--

Truphone, an eSIM mobile carrier that works with Apple, raises another $71M, now valued at $507M

Truphone — a UK startup that provides global mobile voice and data services by way of an eSIM model for phones, tablets and IoT devices — said that it has raised another £18 million ($23.7 million) in funding; plus it said it has secured £36 million ($47 million) more “on a conditional basis” to expand its business after signing “a number of high-value deals.”

It doesn’t specify which deals these are, but Truphone was an early partner of Apple’s to provide eSIM-based connectivity to the iPad — that is, a way to access a mobile carrier without having to swap in a physical SIM card, which has up to now been the standard for GSMA-based networks. Truphone is expanding on this by offering a service for the new iPhone XS and XR models, taking advantage of the dual SIM capability in these devices. Truphone says that strategic partners of the company include Apple (“which chose Truphone as the only carrier to offer global data, voice and text plans on the iPad and iPhone digital eSIM”); Synopsys, which has integrated Truphone’s eSIM technology into its chipset designs; and Workz Group, a SIM manufacturer, which has a license from Truphone for its GSMA-accredited remote SIM provisioning platform and SIM operating system.

The company said that this funding, which was made by way of a rights issue, values Truphone at £386 million ($507 million at today’s rates) post-money. Truphone told TechCrunch that the funding came from Vollin Holdings and Minden Worldwide — two investment firms with ties to Roman Abramovich, the Russian oligarch who also owns the Chelsea football club, among other things — along with unspecified minority shareholders. Collectively, Abramovich-connected entities control more than 80 percent of the company.

We have asked the company for more detail on what the conditions are for the additional £36 million in funding to be released, and all it is willing to say is that “it’s KPI-driven and related to the speed of growth in the business.” It’s unclear what the state of the business is at the moment, because Truphone has not updated its accounts at Companies House (they are overdue). We have asked about that, too.

For some context, Truphone most recently raised money almost exactly a year ago, when it picked up £255 million also by way of a rights issue, and also from the same two big investors. The large amount that time was partly being raised to retire debt. That deal was done at a valuation of £370 million ($491 million at the time of the deal). Going just on sterling values, this is a slight down-round.

Truphone, however, says that business is strong right now:

“The appetite for our technology has been enormous and we are thrilled that our investors have given us the opportunity to accelerate and scale these groundbreaking products to market,” said Ralph Steffens, CEO, Truphone, in a statement. “We recognised early on that the more integrated the supply chain, the smoother the customer experience. That recognition paid off—not just for our customers, but for our business. Because we have this capability, we can move at a speed and proficiency that has never before been seen in our industry. This investment is particularly important because it is testament not just to our investors’ confidence in our ambitions, but pride in our accomplishments and enthusiasm to see more of what we can do.”

Truphone is one of a handful of providers that is working with Apple to provide plans for the digital eSIM by way of the MyTruphone app. Essentially this will give users an option for international data plans while travelling — Truphone’s network covers 80 countries — without having to swap out the SIMs for their home networks.

The eSIM technology is bigger than the iPhone itself, of course: some believe it could be the future of how we connect on mobile networks. On phones and tablets, it does away with users ordering, and inserting or swapping small, fiddly chips into their devices (that ironically is also one reason that carriers have been resistant to eSIMs traditionally: it makes it much easier for their customers to churn away). And in IoT networks where you might have thousands of connected, unmanned devices, this becomes one way of scaling those networks.

“eSIM technology is the next big thing in telecommunications and the impact will be felt by everyone involved, from consumers to chipset manufacturers and all those in-between,” said Steve Alder, chief business development officer at Truphone. “We’re one of only a handful of network operators that work with the iPhone digital eSIM. Choosing Truphone means that your new iPhone works across the world—just as it was intended.” Of note, Alder was the person who brokered the first iPhone carrier deal in the UK, when he was with O2.

However, one thing to consider when sizing up the eSIM market is that the rollout has been slow so far: there are around 10 countries where carriers support eSIM for handsets. Combining that with machine-to-machine deployments, the market is projected to be worth $254 million this year. However, forecasts put the market size at $978 million by 2023, possibly pushed along by hardware companies like Apple making it an increasingly central part of the proposition, initially as a complement to a “home carrier.”

Truphone has not released numbers detailing how many devices are using its eSIM services at the moment — either among enterprises or consumers — but it has said that customers include more than 3,500 multinational enterprises in 196 countries. We have asked for more detail and will update this post as we learn more.

Oct
12
2018
--

Anaplan hits the ground running with strong stock market debut up over 42 percent

You might think that Anaplan CEO Frank Calderoni would have had a few sleepless nights this week. His company picked a bad week to go public, as market instability rocked tech stocks. Still, he wasn’t worried, and today the company had by any measure a successful debut, with the stock soaring over 42 percent. As of 4 pm ET, it hit $24.18, up from the IPO price of $17. Not a bad way to launch your company.

Stock Chart: Yahoo Finance

“I feel good because it really shows the quality of the company, the business model that we have and how we’ve been able to build a growing successful business, and I think it provides us with a tremendous amount of opportunity going forward,” Calderoni told TechCrunch.

Calderoni joined the company a couple of years ago, seemingly out of Silicon Valley central casting: a former CFO at Red Hat and Cisco, with stints at IBM and SanDisk along the way. He said he has often wished that a tool like Anaplan had been around when he was in charge of a several-thousand-person planning operation at Cisco. While that operation was successful, he indicated, it could have been even more so with such a tool.

“The planning phase has not had much change in several decades. I’ve been part of it and I’ve dealt with a lot of the pain. And so having something like Anaplan, I see it really being a disrupter in the planning space because of the breadth of the platform that we have. And then it goes across organizations to sales, supply chain, HR and finance, and as we say, really connects the data, the people and the plan to make for better decision making as a result of all that,” he said.

Calderoni describes Anaplan as a planning and data analysis tool. In his previous jobs, he says, he spent a ton of time just gathering data and making sure they had the right data, but precious little time on analysis. In his view, Anaplan lets companies concentrate more on the crucial analysis phase.

“Anaplan allows customers to really spend their time on what I call forward planning where they can start to run different scenarios and be much more predictive, and hopefully be able to, as we’ve seen a lot of our customers do, forecast more accurately,” he said.

Anaplan was founded in 2006 and raised almost $300 million along the way. It achieved a lofty valuation of $1.5 billion in its last round, a $60 million raise in 2017. The company has just under 1,000 customers, including Del Monte, VMware, Box and United.

Calderoni says although the company has 40 percent of its business outside the US, there are plenty of markets left to conquer, and it hopes to use today’s cash infusion in part to continue expanding into a worldwide company.

Oct
12
2018
--

IBM files formal JEDI protest a day before bidding process closes

IBM announced yesterday that it has filed a formal protest with the U.S. Government Accountability Office over the structure of the Pentagon’s winner-take-all $10 billion, 10-year JEDI cloud contract. The protest came just a day before the bidding process was scheduled to close. As IBM put it in a blog post, it took issue with the single-vendor approach. It is certainly not alone.

Just about every vendor short of Amazon, which has remained mostly quiet, has been complaining about this strategy. IBM certainly faces a tough fight going up against Amazon and Microsoft.

IBM doesn’t disguise the fact that it thinks the contract has been written for Amazon to win, and it believes the one-vendor approach simply doesn’t make sense. “No business in the world would build a cloud the way JEDI would and then lock in to it for a decade. JEDI turns its back on the preferences of Congress and the administration, is a bad use of taxpayer dollars and was written with just one company in mind,” IBM wrote in the blog post explaining why it was protesting the deal before a decision was made or the bidding was even closed.

For the record, DOD spokesperson Heather Babb told TechCrunch last month that the bidding is open and no vendor is favored. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said.

Much like Oracle, which filed a protest of its own back in August, IBM is a traditional vendor that was late to the cloud. It began a journey to build a cloud business in 2013 when it purchased Infrastructure as a Service vendor SoftLayer and has been using its checkbook to buy software services to add on top of SoftLayer ever since. IBM has concentrated on building cloud services around AI, security, big data, blockchain and other emerging technologies.

Both IBM and Oracle have a problem with the one-vendor approach, especially one that locks in the government for a 10-year period. It’s worth pointing out that the contract actually is an initial two-year deal with two additional three-year options and a final two-year option. The DOD has left open the possibility that this might not go the entire 10 years.

It’s also worth putting the contract in perspective. While 10 years and $10 billion is nothing to sneeze at, neither is it as market-altering as it might appear, not when some are predicting the cloud will be a $100 billion a year market very soon.

IBM uses the blog post as a kind of sales pitch for why it’s a good choice, while at the same time pointing out the flaws in the single-vendor approach and complaining that the deal is geared toward a single unnamed vendor that we all know is Amazon.

The bidding process closes today, and unless something changes as a result of these protests, the winner will be selected next April.

Oct
12
2018
--

Track PostgreSQL Row Changes Using Public/Private Key Signing

PostgreSQL encryption and authorization

Authorisations and encryption/decryption within a database system establish the basic guidelines for protecting your database by guarding against malicious structural or data changes.

What are authorisations?

Authorisations are the access privileges that mainly control what a user can and cannot do on the database server for one or more databases. Consider this to be like granting a key that unlocks specific doors, or like a five-star hotel smart card: it gives you access to all the facilities that are meant for you, but doesn’t let you open every door. Privileged staff, on the other hand, have master keys which let them open any door.

Similarly, in the database world, granting permissions secures the system by allowing specific actions by specific users or user groups, while still letting the database administrator perform whatever actions he or she wishes. PostgreSQL provides user management where you can create users, and grant and revoke their privileges.
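
As a quick sketch of what that looks like in practice (the role and table names here are invented for illustration):

postgres=# CREATE ROLE teacher LOGIN PASSWORD 'secret';      -- create a user
postgres=# GRANT SELECT, INSERT ON students TO teacher;      -- the doors this key opens
postgres=# REVOKE UPDATE, DELETE ON students FROM teacher;   -- these stay locked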

Encryption

Encryption and decryption can protect your data, obfuscate schema structure and help hide code from prying eyes. Encryption/decryption hides valuable information and ensures that there are no mischievous changes in the code or data that may be considered harmful. In almost all cases, data encryption and decryption happen on the database server. This is more like hiding your stuff somewhere in your room so that nobody can see it, but also making your stuff difficult to access.

PostgreSQL also provides encryption via pgcrypto (a PostgreSQL extension). There are some cases where you don’t want to hide the data, but don’t want people to update it either; there, you can revoke the privileges to modify the data.
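
For reference, a minimal pgcrypto sketch might look like this (the passphrase is invented for the example):

postgres=# CREATE EXTENSION pgcrypto;
postgres=# SELECT pgp_sym_decrypt(pgp_sym_encrypt('top secret', 'passphrase'), 'passphrase');
 pgp_sym_decrypt
-----------------
 top secret
(1 row)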

Data modifications

But what if an admin user modifies the data? How can you identify that the data has changed? If somebody changes the data and you don’t know about it, that is more dangerous than losing your data altogether, as you are relying on data which may no longer be valid.

Logs in database systems allow us to trace back changes and “potentially” identify what was changed—unless those logs are removed by the administrator.

So consider whether you could leave your stuff openly in your room and still, in case of any changes, identify that something was tampered with. In database terms, that translates to data without encryption, but with your very own signature. One option is to add a column to your database table which keeps a checksum for the data, generated on the client side using the user’s own private key. Any change in the data would mean that the checksum no longer matches, and hence one can easily identify that the data has changed. The data signing happens on the client side, thereby ensuring that only users with the required private key can insert the data, and anyone with the public key can validate it.

Public/Private Keys

An asymmetric cryptographic system uses pairs of keys: public keys and private keys. Private keys are known only to the owner(s) and are used for signing or decrypting data. Public keys are shared with other stakeholders, who may use them to encrypt messages or validate messages signed by the owner.

Generate Private / Public Key

Private Key

$ openssl genrsa -aes128 -passout pass:password -out key.private.pem
Generating RSA private key, 2048 bit long modulus

Public Key

$ openssl rsa -in key.private.pem -passin pass:password -pubout -out key.public.pem
writing RSA key

Signing Data

Create a sample table tbl_marks and insert a sample row into it. We’ll need to add an additional column for the signature, which will understandably increase the table size.

postgres=# CREATE TABLE tbl_marks (id INTEGER, name TEXT, marks INTEGER, hash TEXT);

Let’s add a row that we’d like to validate.

postgres=# INSERT INTO tbl_marks VALUES(1, 'Alice', 80);

We will select the data and store the value into the query buffer using the \gset command (https://www.postgresql.org/docs/current/static/app-psql.html). The complete row will be saved into the “row” psql variable.

postgres=# SELECT row(id,name,marks) FROM tbl_marks WHERE id = 1;
     row   
---------------
(1,Alice,80)
(1 row)
postgres=# \gset
postgres=# SELECT :'row' as row;
     row   
---------------
(1,Alice,80)
(1 row)

Now let’s generate a signature for the data stored in the “row” variable.

postgres=# \set sign_command `echo :'row' | openssl dgst -sha256 -sign key.private.pem | openssl base64 | tr -d '\n' | tr -d '\r'`
Enter pass phrase for key.private.pem:

The signed hash is stored in the “sign_command” psql variable. Let’s now add it to the data row in the tbl_marks table.

postgres=# UPDATE tbl_marks SET hash = :'sign_command' WHERE id = 1;
UPDATE 1

Validating Data

So our data row now contains the data along with a valid signature. Let’s try to validate it. We are going to select our data into the “row” psql variable and the signature hash into the “hash” psql variable.

postgres=# SELECT row(id,name,marks), hash FROM tbl_marks;
     row      |                     hash
--------------+-----------------------------------------------
 (1,Alice,80) | U23g3RwaZmbeZpYPmwezP5xvbIs8ILupW7jtrat8ixA ...
(1 row)
postgres=# \gset

Let’s now validate the data using a public key.

postgres=# \set verify_command `echo :'hash' | awk '{gsub(/.{65}/,"&\n")}1' | openssl base64 -d -out v && echo :'row' | openssl dgst -sha256 -verify key.public.pem -signature v`
postgres=# select :'verify_command' as verify;
  verify    
-------------
Verified OK
(1 row)

Perfect! The data is validated, and all of this happened on the client side. Now imagine somebody doesn’t like that Alice got 80 marks and decides to reduce her marks to 30. Nobody knows whether the teacher gave Alice 80 or 30 unless somebody goes and checks the database logs. We’ll give Alice 30 marks now.

postgres=# UPDATE tbl_marks SET marks = 30;
UPDATE 1

The school admin now decides to check that all data is correct before giving out the final results. Armed with the teacher’s public key, the admin tries to validate the data.

postgres=# SELECT row(id,name,marks), hash FROM tbl_marks;
     row      |                     hash
--------------+--------------------------------------------------
 (1,Alice,30) | yO20vyPRPR+HgW9D2nMSQstRgyGmCxyS9bVVrJ8tC7nh18iYc...
(1 row)
postgres=# \gset

postgres=# \set verify_command `echo :'hash' | awk '{gsub(/.{65}/,"&\n")}1' | openssl base64 -d -out v && echo :'row' | openssl dgst -sha256 -verify key.public.pem -signature v`
postgres=# SELECT :'verify_command' AS verify;
      verify      
----------------------
Verification Failure

As expected, the validation fails. Nobody other than the teacher has the private key to sign that data, so any tampering is easily identifiable.

This might not be the most efficient way of securing a dataset, but it is definitely an option if you want to keep the data unencrypted yet easily detect any unauthorised changes. All the load for signing and verification is shifted onto the client side, thereby reducing load on the server. It allows only users with the private key to update the data, and anybody with the associated public key to validate it.

The example used psql as the client application for signing, but you can do this with any client that can call the required openssl functions, or directly use the openssl binaries for signing and verification.
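
For instance, the same signing and verification can be scripted directly against the openssl binaries, outside psql; a sketch with invented file names:

$ echo '(1,Alice,80)' > row.txt
$ openssl dgst -sha256 -sign key.private.pem -out row.sig row.txt
Enter pass phrase for key.private.pem:
$ openssl dgst -sha256 -verify key.public.pem -signature row.sig row.txt
Verified OK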

Oct
11
2018
--

How to Fix ProxySQL Configuration When it Won’t Start

With the exception of the three configuration variables described here, ProxySQL will only parse the configuration files the first time it is started, or if the proxysql.db file is missing for some other reason.

If we want to change any of this data, we need to do so via ProxySQL’s admin interface and then save the changes to disk. That’s fine if ProxySQL is running, but what if it won’t start because of these values?
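
When ProxySQL is up, that normally looks something like this, assuming the default admin credentials and admin port (adjust for your installation); note that a change to mysql-interfaces only takes effect after a restart:

[root@centos7-pxc57-4 ~]# mysql -u admin -padmin -h 127.0.0.1 -P6032
mysql> UPDATE global_variables SET variable_value='127.0.0.1:6033' WHERE variable_name='mysql-interfaces';
mysql> SAVE MYSQL VARIABLES TO DISK;  -- persist the change to proxysql.db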

For example, perhaps we accidentally configured ProxySQL to run on port 3306 and restarted it, but there’s already a production MySQL instance running on this port. ProxySQL won’t start, so we can’t edit the value that way:

2018-10-02 09:18:33 network.cpp:53:listen_on_port(): [ERROR] bind(): Address already in use

We could delete proxysql.db and have ProxySQL reload the configuration files, but that would mean any changes we didn’t mirror into the configuration files would be lost.
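
If losing those changes is acceptable, the reset itself is simple; a sketch, assuming the default paths (we move the file aside rather than delete it, to keep a way back):

[root@centos7-pxc57-4 ~]# mv /var/lib/proxysql/proxysql.db /var/lib/proxysql/proxysql.db.bak
[root@centos7-pxc57-4 ~]# /etc/init.d/proxysql start

With no proxysql.db present, ProxySQL re-reads its configuration file (/etc/proxysql.cnf by default) on startup.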

Another option is to edit ProxySQL’s database file using sqlite3:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:3306
sqlite> UPDATE global_variables SET variable_value='127.0.0.1:6033' WHERE variable_name='mysql-interfaces';
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:6033

Or if we have a few edits to make we may prefer to do so with a text editor:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .output /tmp/global_variables
sqlite> .dump global_variables
sqlite> .exit

The above commands will dump the global_variables table into a file in SQL format, which we can then edit:

[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:3306');
[root@centos7-pxc57-4 proxysql]# vim /tmp/global_variables
[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:6033');

Now we need to restore this data. We’ll use the .restore command to empty the table (since we’re “restoring” from a backup file that doesn’t exist) and then read our edited dump back in:

[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .restore global_variables
sqlite> .read /tmp/global_variables
sqlite> .exit

Once we’ve made the change, we should be able to start ProxySQL again:

[root@centos7-pxc57-4 proxysql]# /etc/init.d/proxysql start
Starting ProxySQL: DONE!
[root@centos7-pxc57-4 proxysql]# lsof -i | grep proxysql
proxysql 15171 proxysql 19u IPv4 265881 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 20u IPv4 265882 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 21u IPv4 265883 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 22u IPv4 265884 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 23u IPv4 266635 0t0 TCP *:6032 (LISTEN)

While you are here

You might enjoy my recent post Using ProxySQL to connect to IPV6-only databases over IPV4

You can download ProxySQL from Percona repositories, and you might also want to check out our recorded webinars that feature ProxySQL too.
