Aug
15
2019
--

Alibaba cloud biz is on a run rate over $4B

Alibaba announced its earnings today, and the Chinese e-commerce giant got a nice lift from its cloud business, which grew 66% to more than $1.1 billion, or a run rate surpassing $4 billion.

It’s not exactly on par with Amazon, which reported cloud revenue of $8.381 billion last quarter, more than double Alibaba’s yearly run rate, but it’s been a steady rise for the company, which really began taking the cloud seriously as a side business in 2015.

At that time, Alibaba Cloud’s president Simon Hu boasted to Reuters that his company would overtake Amazon in four years. It is not even close to doing that, but it has done well to get to more than a billion a quarter in just four years.

In fact, in its most recent data for the Asia-Pacific region, Synergy Research, a firm that closely tracks the public cloud market, found that Amazon was still number one overall in the region. Alibaba was first in China, but fourth in the region outside of China, with the market’s Big 3 — Amazon, Microsoft and Google — coming in ahead of it. These numbers were based on Q1 data before today’s numbers were known, but they provide a sense of where the market is in the region.


Synergy’s John Dinsdale says the company’s growth has been impressive, outpacing the market growth rate overall. “Alibaba’s share of the worldwide cloud infrastructure services market was 5% in Q2 — up by almost a percentage point from Q2 of last year, which is a big deal in terms of absolute growth, especially in a market that is growing so rapidly,” Dinsdale told TechCrunch.

He added, “The great majority of its revenue does indeed come from China (and Hong Kong), but it is also making inroads in a range of other APAC country markets — Indonesia, Malaysia, Singapore, India, Australia, Japan and South Korea. While numbers are relatively small, it has also got a foothold in EMEA and some operations in the U.S.”

The company was busy last quarter adding more than 300 new products and features in the period ending June 30th (and reported today). That included changes and updates to core cloud offerings, security, data intelligence and AI applications, according to the company.

While the cloud business still isn’t a serious threat to the industry’s Big Three, especially outside its core Asia-Pacific market, it’s still growing steadily and accounted for almost 7% of Alibaba’s total of $16.74 billion in revenue for the quarter — and that’s not bad at all.

Aug
15
2019
--

A Faster, Lightweight Trigger Function in C for PostgreSQL

We have been writing blog posts about how to write simple extensions in the C language, and a slightly more complex one by Ibrar, which were well received by the PostgreSQL user community. We then observed that many PostgreSQL users create simple triggers for small auditing requirements and then feel the performance impact of those triggers on their transactions. So we started discussing how simple, lightweight, and fast a trigger function can be when written in C. Trigger functions are generally written in a high-level language like PL/pgSQL, which has a higher overhead during execution and can impact transactions, and thereby application performance.

This blog post is an attempt to create a simple trigger function to address one of the common use-cases of triggers, which is to update auditing columns in a table.

In this post, we are going to introduce the SPI (Server Programming Interface) functions for novice users. Toward the end of the post, we share quick benchmark results to illustrate the benefits.

Example of Audit timestamp

Let's take a simple case and assume we have a table that holds transaction details. The auditing requirement says that each tuple should carry a timestamp for when it was inserted and when it was last updated.

CREATE TABLE transdtls(
  transaction_id int,
  cust_id int,
  amount  int,
...
  insert_ts timestamp,
  update_ts timestamp
);

For demonstration purposes, let's trim away the other columns and create a table with only the three essential columns.

CREATE TABLE transdtls(
  transaction_id int,
  insert_ts timestamp,
  update_ts timestamp
);

Developing Trigger Function

The trigger function can also be developed and packaged as an extension, which we discussed in a previous blog post, so we are not going to repeat those steps here. The difference is that the files are named "trgr" instead of "addme" as in the previous blog, and the Makefile is modified to refer to the "trgr" files. This name need not be the same as the function name "trig_test" in the C source detailed below.

In the end, the following files are available in the development folder:

$ ls
Makefile trgr--0.0.1.sql trgr.c trgr.control

trgr.c is the main source file, with the following content:

#include <stdio.h>
#include <time.h>
#include "postgres.h"
#include "utils/rel.h"
#include "executor/spi.h"
#include "commands/trigger.h"
#include "utils/fmgrprotos.h"
#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

extern Datum trig_test(PG_FUNCTION_ARGS);

PG_FUNCTION_INFO_V1(trig_test);

Datum
trig_test(PG_FUNCTION_ARGS)
{
    TriggerData *trigdata = (TriggerData *) fcinfo->context;
    //TupleDesc   tupdesc;
    HeapTuple   tuple;
    HeapTuple   rettuple;
    int         attnum = 0;
    Datum       datumVal;

    //Get the structure of the tuple in the table.
    //tupdesc = trigdata->tg_relation->rd_att;

    //Make sure that the function is called from a trigger
    if (!CALLED_AS_TRIGGER(fcinfo))
        elog(ERROR, "are you sure you are calling from trigger manager?");

    //If the trigger is part of an UPDATE event
    if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
    {
        //attnum = SPI_fnumber(tupdesc,"update_ts");
        attnum = 3;
        tuple = trigdata->tg_newtuple;
    }
    else   //If the trigger is part of INSERT event
    {
        //attnum = SPI_fnumber(tupdesc,"insert_ts");
        attnum = 2;
        tuple = trigdata->tg_trigtuple;
    }
    //Get the current timestamp using "now"
    datumVal = DirectFunctionCall3(timestamp_in, CStringGetDatum("now"), ObjectIdGetDatum(InvalidOid), Int32GetDatum(-1));

    //Connect to Server and modify the tuple
    SPI_connect();
    rettuple = SPI_modifytuple(trigdata->tg_relation, tuple, 1, &attnum, &datumVal, NULL);
    if (rettuple == NULL)
    {
        if (SPI_result == SPI_ERROR_ARGUMENT || SPI_result == SPI_ERROR_NOATTRIBUTE)
                elog(ERROR, "SPI_result failed! SPI_ERROR_ARGUMENT or SPI_ERROR_NOATTRIBUTE");
         elog(ERROR, "SPI_modifytuple failed!");
    }
    SPI_finish();                           /* don't forget say Bye to SPI mgr */
    return PointerGetDatum(rettuple);
}

and trgr--0.0.1.sql has the following content:

CREATE OR REPLACE FUNCTION trig_test() RETURNS trigger
     AS 'MODULE_PATHNAME','trig_test'
LANGUAGE C STRICT;

Now it is a matter of building, installing, and creating the extension.

$ make
$ sudo make install
psql> create extension trgr;

In case you don't want to develop it as an extension, you can compile it into a shared object (.so) file, copy that file to the PostgreSQL library folder (on my Ubuntu laptop, /usr/lib/postgresql/11/lib/), and then define the function. You can even specify the full path of the shared object file, like this:

CREATE FUNCTION trig_test() RETURNS trigger     
  AS '/usr/lib/postgresql/11/lib/trgr.so'
LANGUAGE C;

Using Trigger Function

Using the C trigger function is no different from using a regular PL/pgSQL trigger function: just attach the function to the table for all INSERT and UPDATE events.

CREATE TRIGGER transtrgr
 BEFORE INSERT OR UPDATE ON public.transdtls 
FOR EACH ROW EXECUTE PROCEDURE public.trig_test();
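
As a quick sanity check (an illustrative sketch, not part of the original post), insert a row and then update it, and confirm that the trigger populates the audit columns:

-- The BEFORE INSERT path should fill insert_ts and leave update_ts NULL
INSERT INTO transdtls (transaction_id) VALUES (1);
SELECT transaction_id, insert_ts, update_ts FROM transdtls;

-- The BEFORE UPDATE path should now fill update_ts
UPDATE transdtls SET transaction_id = 1 WHERE transaction_id = 1;
SELECT transaction_id, insert_ts, update_ts FROM transdtls;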

Benchmarking

For a fair comparison with a trigger function written in PL/pgSQL, a similar function is created as follows:

CREATE OR REPLACE FUNCTION transtrgr_pl()
  RETURNS TRIGGER AS $$
  BEGIN
     if  (TG_OP = 'UPDATE') then
        NEW.update_ts = now();
     else 
        NEW.insert_ts = now();
     end if;
    RETURN NEW;
  END;
  $$ language 'plpgsql';

In terms of line count and code readability, PL/pgSQL wins, and the development and debugging time required is much less.

For the performance benchmark, three cases are compared:

  1. The PostgreSQL client/application provides the audit timestamp, so the trigger can be avoided.
  2. A trigger function written in C.
  3. A trigger function written in PL/pgSQL.

Here are the performance numbers in milliseconds for 1 million bulk inserts; obviously, a smaller number is better.
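
The bulk-insert load for such a test can be generated entirely in SQL; here is a minimal sketch assuming the three-column transdtls table above (not necessarily the exact statement used for these numbers), timed with psql's \timing:

-- Insert 1 million rows; the audit columns are filled by the trigger under test
INSERT INTO transdtls (transaction_id)
SELECT g FROM generate_series(1, 1000000) AS g;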


Caveats

  1. The first case, with no trigger on the database side, takes the least time. But the application and the network have to take up the extra load, which is not considered in this test.
  2. The C function is a bit hardcoded, with an attribute number like

    attnum = 3;

    If we want a generic trigger function that looks up a specific column name, we can use the SPI_fnumber function instead:

    attnum = SPI_fnumber(tupdesc,"update_ts");

    Such a generic trigger function can be used on multiple tables (a usage sketch follows below). Obviously, this involves more processing; those lines are commented out in the source code above. On repeated tests, the average execution time increases to 1826.722 ms. Still, it is considerably faster than the PL/pgSQL trigger function.
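
Since the generic version looks up the target column by name at run time, the same function could be attached to any table that has insert_ts and update_ts columns; a hypothetical sketch (another_audited_table is not part of this post):

-- Reuse the same C trigger function on another audited table
CREATE TRIGGER another_trgr
 BEFORE INSERT OR UPDATE ON public.another_audited_table
FOR EACH ROW EXECUTE PROCEDURE public.trig_test();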


Aug
15
2019
--

Incorta raises $30M Series C for ETL-free data processing solution

Incorta, a startup founded by former Oracle executives who want to change the way we process large amounts of data, announced a $30 million Series C today led by Sorenson Capital.

Other investors participating in the round included GV (formerly Google Ventures), Kleiner Perkins, M12 (formerly Microsoft Ventures), Telstra Ventures and Ron Wohl. Today’s investment brings the total raised to $75 million, according to the company.

Incorta CEO and co-founder Osama Elkady says he and his co-founders were compelled to start Incorta because they saw so many companies spending big bucks for data projects that were doomed to fail. “The reason that drove me and three other guys to leave Oracle and start Incorta is because we found out with all the investment that companies were making around data warehousing and implementing advanced projects, very few of these projects succeeded,” Elkady told TechCrunch.

A typical data project involves ETL (extract, transform, load): a process that takes data out of one database, transforms it to make it compatible with the target database, and loads it into that target.

It takes time to do all of that, and Incorta is trying to make access to the data much faster by stripping out this step. Elkady says that this allows customers to make use of the data much more quickly, claiming they are reducing the process from one that took hours to one that takes just seconds. That kind of performance enhancement is garnering attention.

Rob Rueckert, managing director for lead investor Sorenson Capital, sees a company that’s innovating in a mature space. “Incorta is poised to upend the data warehousing market with innovative technology that will end 30 years of archaic and slow data warehouse infrastructure,” he said in a statement.

The company says revenue is growing by leaps and bounds, reporting 284% year over year growth (although they did not share specific numbers). Customers include Starbucks, Shutterfly and Broadcom.

The startup, which launched in 2013, currently has 250 employees, with developers in Egypt and main operations in San Mateo, Calif. They recently also added offices in Chicago, Dubai and Bangalore.

Aug
14
2019
--

VMware says it’s looking to acquire Pivotal

VMware today confirmed that it is in talks to acquire software development platform Pivotal Software, the service best known for commercializing the open-source Cloud Foundry platform. The proposed transaction would see VMware acquire all outstanding Pivotal Class A stock for $15 per share, a significant markup over Pivotal’s current share price (which unsurprisingly shot up right after the announcement).

Pivotal’s shares have struggled since the company’s IPO in April 2018. The company was originally spun out of EMC Corporation (now DellEMC) and VMware in 2012 to focus on Cloud Foundry, an open-source software development platform that is currently in use by the majority of Fortune 500 companies. A lot of these enterprises are working with Pivotal to support their Cloud Foundry efforts. Dell itself continues to own the majority of VMware and Pivotal, and VMware also owns an interest in Pivotal already and sells Pivotal’s services to its customers, as well. It’s a bit of an ouroboros of a transaction.

Pivotal Cloud Foundry was always the company’s main product, but it also offered additional consulting services on top of that. Despite improving its execution since going public, Pivotal still lost $31.7 million in its last financial quarter as its stock price traded at just over half of the IPO price. Indeed, the $15 per share VMware is offering is identical to Pivotal’s IPO price.

An acquisition by VMware would bring Pivotal’s journey full circle, though this is surely not the journey the Pivotal team expected. VMware is a Cloud Foundry Foundation platinum member, together with Pivotal, DellEMC, IBM, SAP and Suse, so I wouldn’t expect any major changes in VMware’s support of the overall open-source ecosystem behind Pivotal’s core platform.

It remains to be seen whether the acquisition will indeed happen, though. In a press release, VMware acknowledged the discussion between the two companies but noted that “there can be no assurance that any such agreement regarding the potential transaction will occur, and VMware does not intend to communicate further on this matter unless and until a definitive agreement is reached.” That’s the kind of sentence lawyers like to write. I would be quite surprised if this deal didn’t happen, though.

Buying Pivotal would also make sense in the grand scheme of VMware’s recent acquisitions. Earlier this year, the company acquired Bitnami, and last year it acquired Heptio, the startup founded by two of the three co-founders of the Kubernetes project, which now forms the basis of many new enterprise cloud deployments and, most recently, Pivotal Cloud Foundry.

Aug
14
2019
--

Every TC Sessions: Enterprise 2019 ticket includes a free pass to Disrupt SF

Shout out to all the savvy enterprise software startuppers. Here’s a quick, two-part money-saving reminder. Part one: TC Sessions: Enterprise 2019 is right around the corner on September 5, and you have only two days left to buy an early-bird ticket and save yourself $100. Part two: for every Session ticket you buy, you get one free Expo-only pass to TechCrunch Disrupt SF 2019.

Save money and increase your ROI by completing one simple task: buy your early-bird ticket today.

About 1,000 members of enterprise software’s powerhouse community will join us for a full day dedicated to exploring the current and future state of enterprise software. It’s certainly tech’s 800-pound gorilla — a $500 billion industry. Some of the biggest names and brightest minds will be on hand to discuss critical issues all players face — from early-stage startups to multinational conglomerates.

The day’s agenda features panel discussions, main-stage talks, break-out sessions and speaker Q&As on hot topics including intelligent marketing automation, the cloud, data security, AI and quantum computing, just to name a few. You’ll hear from people like SAP CEO Bill McDermott; Aaron Levie, Box co-founder; Jim Clarke, director of Quantum Hardware at Intel and many, many more.

Customer experience is always a hot topic, so be sure to catch this main-stage panel discussion with Amit Ahuja (Adobe), Julie Larson-Green (Qualtrics) and Peter Reinhardt (Segment):

The Trials and Tribulations of Experience Management: As companies gather more data about their customers and employees, it should theoretically improve their experience, but myriad challenges face companies as they try to pull together information from a variety of vendors across disparate systems, both in the cloud and on prem. How do you pull together a coherent picture of your customers, while respecting their privacy and overcoming the technical challenges?

TC Sessions: Enterprise 2019 takes place in San Francisco on September 5. Take advantage of this two-part money-saving opportunity. Buy your early-bird ticket by August 16 at 11:59 p.m. (PT) to save $100. And score a free Expo-only pass to TechCrunch Disrupt SF 2019 for every ticket you buy. We can’t wait to see you in September!

Interested in sponsoring TC Sessions: Enterprise? Fill out this form and a member of our sales team will contact you.

Aug
14
2019
--

Why chipmaker Broadcom is spending big bucks for aging enterprise software companies

Last year Broadcom, a chipmaker, raised eyebrows when it acquired CA Technologies, an enterprise software company with a broad portfolio of products, including a sizable mainframe software tools business. It paid close to $19 billion for the privilege.

Then last week, the company opened up its wallet again and forked over $10.7 billion for Symantec’s enterprise security business. That’s almost $30 billion for two aging enterprise software companies. There has to be some sound strategy behind these purchases, right? Maybe.

Here’s the thing about older software companies. They may not out-innovate the competition anymore, but what they have going for them is a backlog of licensing revenue that appears to have value.

Aug
14
2019
--

Slack announces new admin features for larger organizations

Slack has been working to beef up the product recently for its larger customers. A couple of weeks ago that involved more sophisticated security tools. Today, it was the admins’ turn to get a couple of new tools that help make it easier to manage Slack in larger settings.

For starters, Slack has created an Announcements channel as a way to send a message to the entire organization. It would typically be used to communicate about administrative matters like changes in HR policy or software updates. The Announcements channel allows admins to limit who can send messages, and who can respond, so the channels stay clean and limit chatter.

Illan Frank, director of product for enterprise at Slack, says that companies have been demanding this ability because they need a clean channel with reliable information from a trusted source.

“With this feature, [admins] can set this channel up as an announcement-only channel with the right folks in [IT or HR] who can make announcements, and now this is a clean, controlled environment for important announcements and updates,” Frank explained.

The other piece Slack is announcing today is new APIs for creating templated workspaces. This is especially useful in environments where users have to create a bevy of new spaces frequently. Picture a university with professors setting up spaces for each of their classes with a set of tools for students, who all have to join the space.

Doing this manually, especially when everybody is setting them up at the same time at the beginning of a semester, could be tedious and chaotic, but by providing programmatic templated workflows, it brings a level of automation to the process.

Frank says while workspaces in and of themselves are not new, the automation layer is. “What is new about this is the API and the ability to automate the creation and management of these connectors [programmatically with code],” he said.

For starters, it will allow automated workspace creation based on information in Web forms. Later, the company will be adding scripting capabilities to build even more sophisticated workflows with automated configuration, apps and content.

Finally, Slack is automating the approval process for tools used inside Slack channels or workspaces. Pre-approved applications can be added to Slack automatically, while those not on the approved list would have to go through a separate process to get approved.

The Announcements tool is available starting today for customers with Plus and Enterprise Grid plans. The API and approval tools will be available soon for Enterprise Grid customers.

Aug
14
2019
--

How to Manage ProxySQL Cluster with Core and Satellite Nodes

In this post, we will manage a ProxySQL Cluster with “core” and “satellite” nodes. What does that mean? Well, in a cluster of hundreds of nodes, a single change made on any one node replicates to the entire cluster.

Any mistake or change will replicate to all nodes, and this can make it difficult to find the most recently updated node or the source of truth.

Before continuing, you need to install and configure a ProxySQL Cluster. You can check my previous blog posts for more information.

The idea behind “core” and “satellite” nodes is to designate only a few nodes as masters (aka core) and the rest of the nodes as slaves (aka satellite). Any change on a “core” node will be replicated to all core and satellite nodes, but a change on a “satellite” node will not be replicated. This is useful when managing a large number of nodes because it minimizes manual errors and false-positive changes, sparing us the difficult task of finding the problematic node among all the nodes in the cluster.

This works in ProxySQL versions 1.4 and 2.0.

How does it work?

When you configure a classic ProxySQL Cluster, every node listens to every other node. With this feature, all nodes listen only to the few nodes you designate as “core” nodes.

Any change made on a node that is not listed in the proxysql_servers table will not be replicated, because no node is listening on its admin port waiting for changes.

Each node opens one thread per server listed in the proxysql_servers table and connects to that IP on the admin port (6032 by default), waiting for any change in four tables: mysql_servers, mysql_users, proxysql_servers, and mysql_query_rules. The only relationship between a core and a satellite node is that the satellite node connects to the core node and waits for changes.
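
As a side note (not from the original post), one way to observe this checksum-based synchronization is the stats_proxysql_servers_checksums table exposed by recent ProxySQL releases in the admin interface; column names may vary slightly between versions:

-- Run on the admin interface (port 6032) of any node; each row shows the
-- version/epoch of a peer's table as seen by this node
SELECT hostname, port, name, version, epoch, diff_check
FROM stats_proxysql_servers_checksums
ORDER BY hostname, name;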

How to configure

It's easy: on every node in the cluster, core and satellite alike, we configure only the IPs of the core nodes in the proxysql_servers table. If you read my previous posts and already configured a ProxySQL Cluster, let's clean the previous configuration from the following tables so we can test from scratch:

delete from mysql_query_rules;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

delete from mysql_servers;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

delete from mysql_users;
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

delete from proxysql_servers;
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

Suppose we have 100 ProxySQL nodes, for example, and here is the list of hostnames and IPs from our instances:

proxysql_node1 = 10.0.0.1
proxysql_node2 = 10.0.0.2
...
proxysql_node100 = 10.0.0.100

And we want to configure and use only 3 core nodes, so we select the first 3 nodes from the cluster:

proxysql_node1 = 10.0.0.1
proxysql_node2 = 10.0.0.2
proxysql_node3 = 10.0.0.3

And the rest of the nodes will be the satellite nodes:

proxysql_node4 = 10.0.0.4
proxysql_node5 = 10.0.0.5
...
proxysql_node100 = 10.0.0.100

We will configure the proxysql_servers table with only those 3 IPs on every node, so all ProxySQL nodes (proxysql_node1 through proxysql_node100) will listen for changes only on those 3 nodes.

DELETE FROM proxysql_servers;

INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.0.1',6032,0,'proxysql_node1');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.0.2',6032,0,'proxysql_node2');
INSERT INTO proxysql_servers (hostname,port,weight,comment) VALUES ('10.0.0.3',6032,0,'proxysql_node3');

LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;

Now all nodes from proxysql_node4 to proxysql_node100 are the satellite nodes.

We can see something like this in the proxysql.log file:

2019-08-09 15:50:14 [INFO] Created new Cluster Node Entry for host 10.0.0.1:6032
2019-08-09 15:50:14 [INFO] Created new Cluster Node Entry for host 10.0.0.2:6032
2019-08-09 15:50:14 [INFO] Created new Cluster Node Entry for host 10.0.0.3:6032
...
2019-08-09 15:50:14 [INFO] Cluster: starting thread for peer 10.0.0.1:6032
2019-08-09 15:50:14 [INFO] Cluster: starting thread for peer 10.0.0.2:6032
2019-08-09 15:50:14 [INFO] Cluster: starting thread for peer 10.0.0.3:6032

How to Test

I'll create a new entry in the mysql_users table on a core node to test whether replication from core to satellite is working.

Connect to proxysql_node1 and run the following queries:

INSERT INTO mysql_users(username,password, active, default_hostgroup) VALUES ('user1','123456', 1, 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

Now, from any satellite node (for example, proxysql_node4), check the ProxySQL log file for updates. If this is working, we see something like this:

2019-08-09 18:49:24 [INFO] Cluster: detected a new checksum for mysql_users from peer 10.0.1.113:6032, version 3, epoch 1565376564, checksum 0x5FADD35E6FB75557 . Not syncing yet ...
2019-08-09 18:49:26 [INFO] Cluster: detected a peer 10.0.1.113:6032 with mysql_users version 3, epoch 1565376564, diff_check 3. Own version: 2, epoch: 1565375661. Proceeding with remote sync
2019-08-09 18:49:27 [INFO] Cluster: detected a peer 10.0.1.113:6032 with mysql_users version 3, epoch 1565376564, diff_check 4. Own version: 2, epoch: 1565375661. Proceeding with remote sync
2019-08-09 18:49:27 [INFO] Cluster: detected peer 10.0.1.113:6032 with mysql_users version 3, epoch 1565376564
2019-08-09 18:49:27 [INFO] Cluster: Fetching MySQL Users from peer 10.0.1.113:6032 started
2019-08-09 18:49:27 [INFO] Cluster: Fetching MySQL Users from peer 10.0.1.113:6032 completed
2019-08-09 18:49:27 [INFO] Cluster: Loading to runtime MySQL Users from peer 10.0.1.113:6032
2019-08-09 18:49:27 [INFO] Cluster: Saving to disk MySQL Query Rules from peer 10.0.1.113:6032

Then check if the previous update exists in the mysql_users table on proxysql_node4 or any other satellite node. These updates should exist in the mysql_users and runtime_mysql_users tables.

admin ((none))>select * from runtime_mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+

admin ((none))>select * from mysql_users;
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| username | password                                  | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 1       | 0        | 10000           |
| user1    | *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 | 1      | 0       | 0                 |                | 0             | 1                      | 0            | 0       | 1        | 10000           |
+----------+-------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+

The final test is to create a new MySQL user on a satellite node. Connect to proxysql_node4 and run the following queries to create a new user:

INSERT INTO mysql_users(username,password, active, default_hostgroup) VALUES ('user2','123456', 1, 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

In the ProxySQL log on proxysql_node4, we see the following output:

[root@ip-10-0-1-10 ~]# tail -f /var/lib/proxysql/proxysql.log -n100
...
2019-08-09 18:59:12 [INFO] Received LOAD MYSQL USERS TO RUNTIME command
2019-08-09 18:59:12 [INFO] Received SAVE MYSQL USERS TO DISK command

The last thing to check is the ProxySQL log file on a core node, to see whether it picked up the mysql_users change. Below is the output from proxysql_node1:

[root@proxysql proxysql]# tail /var/lib/proxysql/proxysql.log -n100
...
2019-08-09 19:09:21 [INFO] ProxySQL version 1.4.14-percona-1.1
2019-08-09 19:09:21 [INFO] Detected OS: Linux proxysql 4.14.77-81.59.amzn2.x86_64 #1 SMP Mon Nov 12 21:32:48 UTC 2018 x86_64

As you can see, there are no updates, because core nodes do not listen for changes from satellite nodes; they only listen for changes on other core nodes.
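
A direct way to confirm this is to query mysql_users on a core node for the user that was created on the satellite; with the example above, it should return no rows:

-- On a core node (e.g. proxysql_node1): user2 was created on a satellite
-- node, so it should not have been replicated here
SELECT username, active, default_hostgroup
FROM mysql_users
WHERE username = 'user2';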

Finally, this feature is really useful when you have many servers to manage. I hope you can test it!

Aug
14
2019
--

MySQL 8 and MySQL 5.7 Memory Consumption on Small Devices

While we often run MySQL on larger-scale systems in production, for test and dev we sometimes want to run MySQL on the tiniest cloud instances possible, or just run it on our laptops. In these cases, MySQL 8 and MySQL 5.7 memory consumption is quite important.

In comparing MySQL 8 vs. MySQL 5.7, you should know that MySQL 8 uses more memory. In basic tests on a 1GB VM with MySQL 8 and MySQL 5.7 (actually, their Percona Server versions) running the same light workload, I see the following vmstat output:

MySQL 5.7 vmstat output

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  0  65280  71608  58352 245108    0    0  2582  3611 1798 8918 18  9 11 33 30
 4  0  65280  68288  58500 247512    0    0  2094  2662 1769 8508 19  9 13 30 29
 3  1  65280  67780  58636 249656    0    0  2562  3924 1883 9323 20  9  7 37 27
 4  1  65280  66196  58720 251072    0    0  1936  3949 1587 7731 15  7 11 36 31

MySQL 8.0 vmstat output

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa st
 9  1 275356  62280  60832 204736    0    0  2197  5245 2638 13453 24 13  2 34 27
 3  0 275356  60548  60996 206340    0    0  1031  3449 2446 12895 25 11 10 28 26
 7  1 275356  78188  59564 190632    0    1  2448  5082 2677 13661 26 13  6 30 25
 4  1 275356  76516  59708 192096    0    0  2247  3750 2401 12210 22 12  4 38 24

As you can see, MySQL 8 uses some 200MB more swap and also uses less OS cache, signaling more memory being allocated and at least “committed.” If we look at the “top” output we see:

MySQL 5.7 top output (screenshot)

MySQL 8.0 top output (screenshot)

This also shows more resident memory and virtual memory used by MySQL 8. Virtual memory, in particular, is “scary,” as it is well in excess of the 1GB of physical memory available on these VMs. Of course, virtual memory usage (VSZ) is a poor indicator of the actual memory needs of modern applications, but it does corroborate the higher-memory-needs story.

In reality, though, as we know from the vmstat output, neither MySQL 8 nor MySQL 5.7 is swapping with this light load, even though there isn't much “room” left. If you have more than a handful of connections, or wish to run some applications on the same VM, you would get swapping (or the OOM killer if you have not enabled swap).

It would be an interesting exercise to see how low I can drive MySQL 5.7 and MySQL 8 memory consumption, but I will leave that for another project. Here are the settings I used for this test:

[mysqld]
innodb_buffer_pool_size=256M
innodb_buffer_pool_instances=1
innodb_log_file_size=1G
innodb_flush_method=O_DIRECT
innodb_numa_interleave=1
innodb_flush_neighbors=0
log_bin
server_id=1
expire_logs_days=1
log_output=file
slow_query_log=ON
long_query_time=0
log_slow_rate_limit=1
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
innodb_monitor_enable=all
userstat=1
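
As a complementary check (not part of the original test), you can also ask the server itself how much memory it thinks it has allocated, via the Performance Schema memory instrumentation exposed through the sys schema; note that the memory/% instruments are enabled by default in MySQL 8 but only partially in 5.7:

-- Total memory currently allocated as reported by the server
SELECT * FROM sys.memory_global_total;

-- Top allocations by instrument, handy for comparing 5.7 vs. 8.0 side by side
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;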

Summary: When moving to MySQL 8 in a development environment, keep in mind that it will require more memory than MySQL 5.7 with the same settings.

Aug
14
2019
--

Procore brings 3D construction models to iOS

Today, Procore, a construction software company, announced Procore BIM (Building Information Modeling), a new tool that takes advantage of Apple hardware advances to bring the 3D construction model to iOS.

Dave McCool, senior product manager at Procore, says that for years architects and engineers have been working with 3D models of complex buildings on desktop computers and laptops, but these models never made it into the hands of the tradespeople actually working on the building. This forced them to make trips to the job site office to see the big picture whenever they ran into issues, a process that was inefficient and costly.

Procore has created a 3D model that corresponds to a virtual version of the 2D floor plan and runs on an iOS device. Touching a space on the floor plan opens a corresponding spot in the 3D model. What’s more, Procore has created a video game-like experience, so that contractors can use a virtual joystick to move around a 3D representation of the building, or they can use gestures to move around the rendering.

Procore BIM running on an iPhone (Photo: Procore)

The app has been designed so that it can run on an iPhone 7, but for optimal performance, Procore recommends using an iPad Pro. The software takes advantage of Apple Metal, which gives developers “near direct” access to the GPU on these devices. This ability to tap into GPU power speeds up performance and makes this level of sophisticated rendering feasible on iOS devices.

McCool says that this enables tradespeople to find the particular area on the drawing where their part of the project needs to go much more easily and intuitively, whether it’s wiring, duct work or plumbing. As he pointed out, it can get crowded in the space above a ceiling or inside a utility room, and the various teams need to work together to make sure they are putting their parts in the correct spot. Working with this tool helps make that placement crystal clear.

It’s essentially been designed to gamify the experience in order to help tradespeople who aren’t necessarily technically savvy operate the tool themselves and find their way around a drawing in 3D, while reducing the number of trips to the office to have a discussion with the architects or engineers to resolve issues.

This is the latest tool from a company that has been producing construction software since 2002. As a company spokesperson said, early on the company founder had to wire routers on the site to allow workers to use the earliest versions. Today, it offers a range of construction software to track financials, project, labor and safety management information.

Procore BIM will be available starting next month.
