Jul
20
2019
--

Urban Ghoul series now available!

This Urban Fantasy series is now available from all major ebook retailers. You can buy and binge-read all 4 books if you like. Click below to find out more and read an excerpt.

Jul
19
2019
--

Upcoming Webinar 7/23: 10 Common Mistakes Java Developers Make when Writing SQL


Please join Percona’s Senior Support Engineer Charly Batista as he presents “10 Common Mistakes (Java) Developers Make when Writing SQL” on Tuesday, July 23rd, 2019 at 8:00 AM EDT (UTC-4).

Register Now

It’s easy for Java developers (and users of other OO languages) to mix object-oriented and imperative thinking. But when it comes to writing SQL, the nightmare begins! Firstly, SQL is a declarative language and has nothing to do with either the OO or the imperative paradigm. It is relatively easy to express a condition in SQL, but it is not so easy to express it optimally – and it is even harder to translate it into the OO paradigm. Secondly, developers need to think in terms of sets and relational algebra, even if unconsciously!

In this talk, we’ll see the most common mistakes that developers make in OO, especially Java, when writing SQL code, and how we can avoid them.

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

Speakers:
Charly Batista
Senior Support Engineer

Charly worked as a Java architect for many years, using many different database technologies. He helped to design some of the features of the system used by the Brazilian Postal Service, the largest Java project in Latin America at that time. He also helped to design the database for the Brazilian REDESIM project, the system responsible for municipal taxation in Brazil. He now lives in China and works as a Senior Support Engineer at Percona.

Jul
19
2019
--

Assessing MySQL Performance Amongst AWS Options – Part Two

Compare Amazon RDS to Percona Server

See part one of this series here

This post is part two of my series “Assessing MySQL Performance Amongst AWS Options”, taking a look at how current Amazon RDS services – Amazon Aurora and Amazon RDS for MySQL – compare with Percona Server with InnoDB and RocksDB engines on EC2 instances. This time around, I am reviewing the total cost of one test run for each database as well as seeing which databases are the most efficient.

First, a quick recap of the evaluation scenario:

The benchmark scripts

For these evaluations, we use the sysbench-tpcc Lua scripts with a scale factor of 500 warehouses/10 tables. This is the equivalent of 5000 warehouses of the official TPC-C benchmark.
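
For reference, a sysbench-tpcc run matching this scale looks roughly like the sketch below. The connection settings, thread count, and report interval are illustrative placeholders rather than the exact values used in these tests; only the table count, scale factor, and 30-minute duration come from the setup described in this post.

# Load 10 tables at scale factor 500 (placeholder host and credentials)
$ ./tpcc.lua --db-driver=mysql --mysql-host=<db-host> --mysql-user=sbtest \
    --mysql-password=<secret> --mysql-db=sbtest --tables=10 --scale=500 \
    --threads=56 prepare

# Run the workload for 30 minutes, reporting throughput every 10 seconds
$ ./tpcc.lua --db-driver=mysql --mysql-host=<db-host> --mysql-user=sbtest \
    --mysql-password=<secret> --mysql-db=sbtest --tables=10 --scale=500 \
    --threads=56 --time=1800 --report-interval=10 run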

Amazon MySQL Environments

These are the AWS MySQL environments under analysis:

  • Amazon RDS Aurora
  • Amazon RDS for MySQL with the InnoDB storage engine
  • Percona Server for MySQL with the InnoDB storage engine on Amazon EC2
  • Percona Server for MySQL with the RocksDB storage engine on Amazon EC2

Technical Setup – Server

These general notes apply across the board:

  • AWS region us-east-1(N.Virginia) was used for all tests
  • Server and client instances were spawned in the same availability zone
  • All data for tests were prepared in advance, stored as snapshots, and restored before the test
  • Encryption was not used

And we believe that these configuration notes allow for a fair comparison of the different technologies:

  • AWS EBS optimization was enabled for EC2 instances
  • For RDS/Amazon Aurora only a primary DB instance was created and used
  • In the case of RDS/MySQL, a single-AZ deployment was used
  • EC2/Percona Server for MySQL tests were run with binary log enabled

Finally, here are the individual server configurations per environment:

Server test #1: Amazon RDS Aurora

  • Database server: Aurora MySQL 5.7
  • DB instances: r5.large, r5.xlarge, r5.2xlarge, r5.4xlarge
  • Volume: ~450GB used (>15000 IOPS)

Server test #2: Amazon RDS for MySQL with InnoDB Storage Engine

  • Database server: MySQL Server 5.7.25
  • RDS instances: db.m5.large, db.m5.xlarge, db.m5.2xlarge, db.m5.4xlarge
  • Volumes (allocated space):
    • gp2: 5400GB (~16000 IOPS)
    • io1: 700GB (15000 IOPS)

Server test #3: Percona Server for MySQL with InnoDB Storage Engine

  • Database server: Percona Server 5.7.25
  • EC2 instances: m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge
  • Volumes (allocated space):
    • gp2: 5400GB (~16000 IOPS)
    • io1: 700GB (15000 IOPS)

Server test #4: Percona Server for MySQL with RocksDB using LZ4 compression

  • Database server: Percona Server 5.7.25
  • EC2 instances: m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge
  • Volumes (allocated space):
    • gp2: 5400GB (~16000 IOPS)
    • io1: 350GB (15000 IOPS)

Technical Setup – Client

For the client, common to all tests, we used an m5.xlarge EC2 instance. And now that we have established the setup, let’s take a look at what we found.

Costs

Now we are getting down to the $’s! First, let’s review the total cost of one test run for each database:

Sorting the costs of one test run from cheapest to most expensive, we see this order emerge:

  1. EC2/gp2 carrying server tests #3 or #4 featuring Percona Server for MySQL [represents the LEAST cost in $’s]
  2. RDS/gp2 carrying server test #2, RDS/MySQL
  3. EC2/io1 carrying server tests #3 or #4
  4. RDS/io1 carrying server test #2, RDS/MySQL
  5. RDS/Aurora, server test #1  [GREATEST COST IN $’s]

How does that translate to $’s? Let’s find out what the structure of these costs looks like for every database. Before we study that, though, there are some things to bear in mind:

  • Our calculations include only server-side costs
  • Per instance, the price we used as a baseline was the RESERVED INSTANCE STANDARD 1-YEAR TERM rate
  • For RDS/Amazon Aurora, the values for volume size and amount of I/O requests represent real data obtained from CloudWatch metrics (VolumeBytesUsed for used volume space and VolumeReadIOPs+VolumeWriteIOPs for I/O used) after the test run
  • In the case of Percona Server/RocksDB, the database on disk is 5x smaller thanks to LZ4 compression, so we used a half-sized io1 volume – 350GB vs 700GB for either Percona Server with InnoDB or RDS/MySQL. This still complies with the io1 limit of at most 50 provisioned IOPS per GB (350GB x 50 = 17500, comfortably above 15000).
  • The duration set for each test run was 30 minutes

Our total cost formulas

These are the formulas we used in calculating these costs:

  • EC2/gp2, EC2/io1, RDS/gp2, RDS/io1
    • total cost = server instance size cost + allocated volume size cost + requested amount of IOPS cost
  • RDS/Amazon Aurora
    • total cost = server instance size cost + allocated volume size cost + actually used amount of I/O cost
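
As an illustration of how the first formula composes, here is a minimal sketch using awk as a calculator. Every price in it is a hypothetical placeholder, not an actual AWS rate and not a rate used in our calculations; only the 30-minute run duration, the 700GB volume size, and the 15000 IOPS figure come from the setup above.

$ awk 'BEGIN {
    run_hours      = 0.5      # one 30-minute test run
    hours_in_month = 24 * 30

    instance_rate = 1.00      # hypothetical $/hour for the instance
    volume_rate   = 0.10      # hypothetical $/GB-month for the allocated volume
    iops_rate     = 0.05      # hypothetical $/provisioned IOPS-month

    instance_cost = instance_rate * run_hours
    volume_cost   = volume_rate * 700 * run_hours / hours_in_month
    iops_cost     = iops_rate * 15000 * run_hours / hours_in_month

    printf "total cost of one test run: $%.2f\n", instance_cost + volume_cost + iops_cost
  }'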

The results

Here are our calculations in chart form:

One interesting observation here is that, as you can see from the cost structure chart, the most significant part of the costs is IO provisioning – either the requested amount of IOPS (EC2/io1 or RDS/io1) or the amount of I/O actually used (RDS/Aurora). In the former case, the cost is a function of time; in the latter, it depends only on the number of I/O requests actually issued.

Let’s check how these costs might look if we provision EC2/io1 and RDS/io1 volumes and RDS/Aurora storage for one month. From the cost structure, it’s clear that the RDS/Aurora 4xlarge DB instance performed 51M I/O requests in half an hour, so we effectively got 51000000 (I/O requests) / 1800 (seconds) ~= 28000 IOPS.

EC2/io1:    28000 IOPS * $0.065 per provisioned IOPS-month                         =  $1820/month
RDS/io1:    28000 IOPS * $0.10 per provisioned IOPS-month                          =  $2800/month
RDS/Aurora: 102M I/O requests per hour * $0.20 per 1M requests * 24 hours * 30 days = $14688/month
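
As a quick sanity check, the same arithmetic can be reproduced with a throwaway awk one-liner, using only the prices quoted above ($0.065 and $0.10 per provisioned IOPS-month, $0.20 per million Aurora I/O requests):

$ awk 'BEGIN {
    hours_in_month = 24 * 30
    printf "EC2/io1:    $%.0f\n", 28000 * 0.065                 # provisioned IOPS * price per IOPS-month
    printf "RDS/io1:    $%.0f\n", 28000 * 0.10
    printf "RDS/Aurora: $%.0f\n", 102 * 0.20 * hours_in_month   # 102M requests/hour * price per 1M requests
  }'
EC2/io1:    $1820
RDS/io1:    $2800
RDS/Aurora: $14688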

In this way, provisioning 28000 IOPS costs 8x less on EC2/io1 and 5x less on RDS/io1 than paying for the equivalent I/O on RDS/Aurora. That means that, to be cost-efficient, the throughput of RDS/Aurora should be at least 5x or even 8x better than that of EC2 or RDS with an io1 volume.

Conclusion: the IO provisioning factor should be taken into account when planning deployments with io1 volumes or RDS/Aurora.

Efficiency

Now it’s time to review which databases perform the most efficiently by analyzing their transaction/cost ratio:

Below you can find the minimum and maximum prices for 1000 transactions for each of the database servers in our tests, again running from cheapest to most expensive in $ terms:

Server                                       Min $’s per 1000 TX (config)    Max $’s per 1000 TX (config)
Server test #4 EC2#Percona Server/RocksDB    0.42  (4xlarge/io1)             1.93  (large/io1)
Server test #3 EC2#Percona Server/InnoDB     1.66  (4xlarge/gp2)             12.11 (large/io1)
Server test #2 RDS#MySQL/InnoDB              2.23  (4xlarge/gp2)             22.3  (large/io1)
Server test #1 RDS#Amazon Aurora             8.29  (4xlarge)                 13.31 (xlarge)

Some concluding thoughts

  • EC2#Percona Server/RocksDB offers the lowest price per 1000 transactions – $0.42 on an m5.4xlarge instance with a 350GB io1 volume/15000 IOPS
  • RDS/MySQL looked to be the most expensive in this evaluation – $22.3 per 1000 transactions on a db.m5.large with a 700GB io1 volume/15000 IOPS
  • The lowest price for each database was obtained on 4xlarge instances; the most expensive came from large instances (xlarge in the case of RDS/Aurora)
  • IO provisioning is a key factor that impacts run costs
  • For both EC2 and RDS, gp2 at 5400GB (~16000 IOPS) is the most cost-effective storage choice
  • RDS/Aurora – the lowest price per 1000 transactions is $8.29, but that is 4x more expensive than the best price for 1000 transactions on RDS/MySQL, 5x more expensive than on EC2#Percona/InnoDB, and 20x more expensive than on EC2#Percona/RocksDB. That means that despite Amazon Aurora showing very good throughput (actually the best among the InnoDB-like engines), it may not be as cost-effective as other options.

One Final Note

When estimating your expenses, you will need to keep in mind that each company is different in terms of what they offer, how they build and manage those offerings, and of course, their pricing structure and cost per transaction. When comparing against AWS, you also need to account for the expense of building and managing yourself the things that AWS handles for you; that overhead is effectively built into their price. We can see, however, that in these examples MyRocks is definitely a cost-effective solution when comparing direct costs.

Jul
18
2019
--

Investor Jocelyn Goldfein to join us on AI panel at TechCrunch Sessions: Enterprise

Artificial intelligence is quickly becoming a foundational technology for enterprise software development and startups have begun addressing a variety of issues around using AI to make software and processes much more efficient.

To that end, we are delighted to announce that Jocelyn Goldfein, a Managing Director at Zetta Venture Partners, will be joining us on a panel to discuss AI in the enterprise. It will take place at the TechCrunch Sessions: Enterprise show on September 5 at the Yerba Buena Center in San Francisco.

It’s not just startups that are involved in AI in the enterprise. Some of the biggest names in enterprise software, including Salesforce Einstein, Adobe Sensei and IBM Watson, have been addressing the need for AI to help solve the enterprise data glut.

Computers can process large amounts of information much more quickly than humans, and as enterprise companies generate increasing amounts of data, they need help understanding it all as the volume of information exceeds human capacity to sort through it.

Goldfein brings a deep engineering background to her investment work. She served as a VP of engineering at VMware and as an engineering director at Facebook, where she led the project that adopted machine learning for the News Feed ranker, launched major updates in photos and search, and helped spearhead Facebook’s pivot to mobile. Goldfein drove significant reforms in Facebook hiring practices and is a prominent evangelist for women in computer science. As an investor, she primarily is focused on startups using AI to take more efficient approaches to infrastructure, security, supply chains and worker productivity.

At TC Sessions: Enterprise, she’ll be joining Bindu Reddy from Reality Engines along with other panelists to discuss the growing role of AI in enterprise software with TechCrunch editors. You’ll learn why AI startups are attracting investor attention and how AI in general could fundamentally transform enterprise software.

Prior to joining Zetta, Goldfein had stints at Facebook and VMware, as well as startups Datify, MessageOne and Trilogy/pcOrder.

Early Bird tickets to see Jocelyn at TC Sessions: Enterprise are on sale for just $249 when you book here; but hurry, prices go up by $100 soon! Students, grab your discounted tickets for just $75 here.

Jul
18
2019
--

InCountry raises $15M for its cloud-based private data storage-as-a-service solution

The rise of data breaches, along with an expanding raft of regulations (now numbering 80 different regional regimes, and growing), has thrust data protection — having legal and compliant ways of handling personal user information — to the top of the list of things that an organization needs to consider when building and operating its business. Now a startup called InCountry, which is building both the infrastructure for these companies to securely store that personal data in each jurisdiction and a comprehensive policy framework for them to follow, has raised a Series A of $15 million. The funding comes just three months after the company closed its seed round — underscoring both the attention this area is getting and the opportunity ahead.

The funding is being led by three investors: Arbor Ventures of Singapore, Global Founders Capital of Berlin and Mubadala of Abu Dhabi. Previous investors Caffeinated Capital, Felicis Ventures, Charles River Ventures and Team Builder Ventures (along with others that are not being named) also participated. It brings the total raised to date to $21 million.

Peter Yared, the CEO and founder, pointed out in an interview the geographic diversity of the three lead backers: he described this as a strategic investment, which has resulted from InCountry already expanding its work in each region. (As one example, he pointed out a new law in the UAE requiring all health data of its citizens to be stored in the country — regardless of where it originated.)

As a result, the startup will be opening offices in each of the regions and launching a new product, InCountry Border, to focus on encryption and data handling that keep data inside specific jurisdictions. This will sit alongside the company’s compliance consultancy as well as its infrastructure business.

“We’re only 28 people and only six months old,” Yared said. “But the proposition we offer — requiring no code changes, but allowing companies to automatically pull out and store the personally identifiable information in a separate place, without anything needed on their own back end, has been a strong pull. We’re flabbergasted with the meetings we’ve been getting.” (The alternative, of companies storing this information themselves, has become massively unpalatable, given all the data breaches we’ve seen, he pointed out.)

In part because of the nature of data protection, in its short six months of life, InCountry has already come out of the gates with a global viewpoint and global remit.

It’s already active in 65 countries — which means it’s already equipped to store, process and regulate profile data in the country of origin in these markets — but that is actually just the tip of the iceberg. The company points out that more than 80 countries around the world have data sovereignty regulations, and that in the U.S., some 25 states already have data privacy laws. Violating these can have disastrous consequences for a company’s reputation, not to mention its bottom line: In Europe, the U.K. data regulator is now fining companies the equivalent of hundreds of millions of dollars when they violate GDPR rules.

This ironically is translating into a big business opportunity for startups that are building technology to help companies cope with this. Just last week, OneTrust raised a $200 million Series A to continue building out its technology and business funnel — the company is a “gateway” specialist, building the welcome screens that you encounter when you visit sites to accept or reject a set of cookies and other data requests.

Yared says that while InCountry is very young and is still working on its channel strategy — it’s mainly working directly with companies at this point — there is a clear opportunity both to partner with others within the ecosystem as well as integrators and others working on cloud services and security to build bigger customer networks.

That speaks to the complexity of the issue, and the different entry points that exist to solve it.

“The rapidly evolving and complex global regulatory landscape in our technology driven world is a growing challenge for companies,” said Melissa Guzy of Arbor Ventures, in a statement. Guzy is joining the board with this round. “InCountry is the first to provide a comprehensive solution in the cloud that enables companies to operate globally and address data sovereignty. We’re thrilled to partner and support the company’s mission to enable global data compliance for international businesses.”

Jul
18
2019
--

Resolving MongoDB Stack Traces


When a MongoDB server crashes, you will usually find what is called a “stack trace” in its log file. But what is it, and what purpose does it serve? Let’s simulate a simple crash so we can dig into it.

Crashing a test server

In a test setup with a freshly installed MongoDB server, we connect to it and create some test data:

$ mongo
MongoDB shell version v3.6.12
(...)
> use test
switched to db test
> db.albums.insert({ name: "The Wall" })
WriteResult({ "nInserted" : 1 })
> db.albums.find()
{ "_id" : ObjectId("5d237cef9affce6d7e4e8345"), "name" : "The Wall" }

On a separate connection to the server, we change the ownership of the MongoDB data files, so the mongod user will no longer have access to them:

$ sudo chown root:root /var/lib/mongo/*

Going back to the mongo session, we try to add a new record and it fails, as expected:

> db.albums.insert({ name: "The Division Bell" })
2019-07-08T17:27:40.275+0000 E QUERY    [thread1] Error: error doing query: failed: network error while attempting to run command 'insert' on host '127.0.0.1:27017'  :
DB.prototype.runCommand@src/mongo/shell/db.js:168:1
DBCollection.prototype._dbCommand@src/mongo/shell/collection.js:173:1
Bulk/executeBatch@src/mongo/shell/bulk_api.js:903:22
Bulk/this.execute@src/mongo/shell/bulk_api.js:1154:21
DBCollection.prototype.insert@src/mongo/shell/collection.js:317:22
@(shell):1:1
2019-07-08T17:27:40.284+0000 I NETWORK  [thread1] trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2019-07-08T17:27:40.284+0000 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2019-07-08T17:27:40.284+0000 I NETWORK  [thread1] reconnect 127.0.0.1:27017 (127.0.0.1) failed failed

Looking at the error log we confirm the server has crashed, leaving a stack trace (also called a “backtrace”) behind:

$ sudo cat /var/log/mongodb/mongod.log 
(...)
2019-07-08T17:27:39.666+0000 E STORAGE  [thread2] WiredTiger error (13) [1562606859:666004][24742:0x7f70a3501700], log-server: __directory_list_worker, 48: /home/vagrant/db/journal: directory-list: opendir: Permission denied
(...)
2019-07-08T17:27:39.666+0000 E STORAGE  [thread2] WiredTiger error (-31804) [1562606859:666313][24742:0x7f70a3501700], log-server: __wt_panic, 523: the process must exit and restart: WT_PANIC: WiredTiger library panic
(...)
----- BEGIN BACKTRACE -----
(...)
 mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x5618a3ab92c1]
 mongod(+0x22744D9) [0x5618a3ab84d9]
 mongod(+0x22749BD) [0x5618a3ab89bd]
 libpthread.so.0(+0xF6D0) [0x7f70a6cff6d0]
 libc.so.6(gsignal+0x37) [0x7f70a6959277]
 libc.so.6(abort+0x148) [0x7f70a695a968]
 mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x5618a21e064c]
 mongod(+0xA6D9EE) [0x5618a22b19ee]
 mongod(+0xADEEF1) [0x5618a2322ef1]
 mongod(__wt_err_func+0x90) [0x5618a217b742]
 mongod(__wt_panic+0x3F) [0x5618a217bb62]
 mongod(+0xB3DFB2) [0x5618a2381fb2]
 libpthread.so.0(+0x7E25) [0x7f70a6cf7e25]
 libc.so.6(clone+0x6D) [0x7f70a6a21bad]
-----  END BACKTRACE  -----
Aborted

But what can we infer from these somewhat cryptic lines full of hexadecimal content?

Inspecting the MongoDB stack trace

At the bottom of the stack trace, we can see a list of function names and addresses. Note that the resolution of most functions worked reasonably well in the example above because the mongod binary used by our test server is not stripped of symbols (if yours is, you will need to install the respective debugsymbols/debuginfo package and use the mongod binary it provides to resolve the stack trace):

$ file `which mongod`
/usr/bin/mongod: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=1d0fd59529274e06c35e6dc4c74e0ef08caf931c, not stripped

This means we can actually extract them from the mongod binary with the help of a tool such as nm, from GNU Development Tools:

$ nm -n /usr/bin/mongod > mongod.symbols

Function names appear all mangled though:

$ tail mongod.symbols 
0000000003125f88 u _ZNSt9money_getIwSt19istreambuf_iteratorIwSt11char_traitsIwEEE2idE
0000000003125f90 u _ZNSt10moneypunctIwLb1EE2idE
0000000003125f98 u _ZNSt10moneypunctIwLb0EE2idE
0000000003125fa0 b _ZZN9__gnu_cxx27__verbose_terminate_handlerEvE11terminating
0000000003125fc0 b _ZZN12_GLOBAL__N_112get_catalogsEvE10__catalogs
0000000003126008 b _ZGVZN12_GLOBAL__N_112get_catalogsEvE10__catalogs
0000000003126020 b _ZZN12_GLOBAL__N_112get_catalogsEvE10__catalogs
0000000003126068 b _ZGVZN12_GLOBAL__N_112get_catalogsEvE10__catalogs
0000000003126080 B __wt_process
00000000031260e8 A _end

We can use another tool from the same toolkit, c++filt, to demangle them. For example:

$ echo _ZNSt9money_getIwSt19istreambuf_iteratorIwSt11char_traitsIwEEE2idE | c++filt 
std::money_get<wchar_t, std::istreambuf_iterator<wchar_t, std::char_traits<wchar_t> > >::id

In fact, we can process the whole stack trace with c++filt all at once…

$ cat <<EOT | c++filt
> mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x5618a3ab92c1]
> mongod(+0x22744D9) [0x5618a3ab84d9]
> mongod(+0x22749BD) [0x5618a3ab89bd]
> libpthread.so.0(+0xF6D0) [0x7f70a6cff6d0]
> libc.so.6(gsignal+0x37) [0x7f70a6959277]
> libc.so.6(abort+0x148) [0x7f70a695a968]
> mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x5618a21e064c]
> mongod(+0xA6D9EE) [0x5618a22b19ee]
> mongod(+0xADEEF1) [0x5618a2322ef1]
> mongod(__wt_err_func+0x90) [0x5618a217b742]
> mongod(__wt_panic+0x3F) [0x5618a217bb62]
> mongod(+0xB3DFB2) [0x5618a2381fb2]
> libpthread.so.0(+0x7E25) [0x7f70a6cf7e25]
> libc.so.6(clone+0x6D) [0x7f70a6a21bad]
> EOT

… and get it fully demangled, with C++ function and method names easily recognizable now:

mongod(mongo::printStackTrace(std::basic_ostream<char, std::char_traits<char> >&)+0x41) [0x5618a3ab92c1]
mongod(+0x22744D9) [0x5618a3ab84d9]
mongod(+0x22749BD) [0x5618a3ab89bd]
libpthread.so.0(+0xF6D0) [0x7f70a6cff6d0]
libc.so.6(gsignal+0x37) [0x7f70a6959277]
libc.so.6(abort+0x148) [0x7f70a695a968]
mongod(mongo::fassertFailedNoTraceWithLocation(int, char const*, unsigned int)+0x0) [0x5618a21e064c]
mongod(+0xA6D9EE) [0x5618a22b19ee]
mongod(+0xADEEF1) [0x5618a2322ef1]
mongod(__wt_err_func+0x90) [0x5618a217b742]
mongod(__wt_panic+0x3F) [0x5618a217bb62]
mongod(+0xB3DFB2) [0x5618a2381fb2]
libpthread.so.0(+0x7E25) [0x7f70a6cf7e25]
libc.so.6(clone+0x6D) [0x7f70a6a21bad]
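
The remaining frames that show only an offset, such as mongod(+0x22744D9), can usually be resolved as well. Since this binary is not stripped, addr2line (which we cover in more detail further down) can translate such an offset into a function name, assuming the module-relative offsets printed by mongod line up with the binary’s link-time addresses, which is normally the case:

$ addr2line -e /usr/bin/mongod -ifC 0x22744D9

Alternatively, the sorted mongod.symbols file we generated earlier can be searched for the closest symbol address at or below the offset in question.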

While easily reproducible, this was not a very interesting example: the change in ownership of the database files caused WiredTiger to panic on the insert without leaving a particularly informative trace behind. Let’s have a look at another one.

A more realistic example

Despite being somewhat old, bug SERVER-13751 (mongod crash on geo nearSphere query) provides a realistic yet easy-to-reproduce example of a simple routine that crashed MongoDB 2.6.0 (this bug is, in fact, a duplicate of SERVER-13666, but it provides a simpler test case). Here’s how to get to it.

1) First, we download these old binaries and start a MongoDB server:

$ wget http://downloads.mongodb.org/linux/mongodb-linux-x86_64-2.6.0.tgz
$ tar zxvf mongodb-linux-x86_64-2.6.0.tgz
$ cd mongodb-linux-x86_64-2.6.0/bin
$ mkdir /home/vagrant/db
$ ./mongod --dbpath /home/vagrant/db

2) In a second terminal window, we connect to the MongoDB server we just started and run a simplified version of the routine described in the bug, which consists of creating a 2dsphere index and querying for a point described with invalid coordinates:

$ cd mongodb-linux-x86_64-2.6.0/bin
$ ./mongo
> db.places.ensureIndex({loc:"2dsphere"})
> db.places.find({loc:{$nearSphere: [200.4905, 300.2646]}})

Now when we look back at the first terminal we find the server has crashed, leaving the following stack trace:

./mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11bd301]
./mongod() [0x11bc6de]
/lib64/libc.so.6(+0x36340) [0x7f11cd866340]
/lib64/libc.so.6(gsignal+0x37) [0x7f11cd8662c7]
/lib64/libc.so.6(abort+0x148) [0x7f11cd8679b8]
./mongod(_ZN5mongo13fassertFailedEi+0x13a) [0x11421ea]
./mongod(_ZN15LogMessageFatalD1Ev+0x1d) [0x125d58d]
./mongod(_ZN5S2Cap13FromAxisAngleERK7Vector3IdERK7S1Angle+0x169) [0x1267699]
./mongod(_ZN5mongo11S2NearStage11nextAnnulusEv+0xd2) [0xabd142]
./mongod(_ZN5mongo11S2NearStage4workEPm+0x1fb) [0xabf2cb]
./mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd66a7f]
./mongod(_ZN5mongo11newRunQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x958) [0xd4acf8]
./mongod() [0xb96382]
./mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x442) [0xb98962]
./mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x9f) [0x76b76f]
./mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x4fb) [0x117367b]
/lib64/libpthread.so.0(+0x7dd5) [0x7f11ce62bdd5]
/lib64/libc.so.6(clone+0x6d) [0x7f11cd92e02d]

Processing the stack trace with c++filt, we get:

./mongod(mongo::printStackTrace(std::basic_ostream<char, std::char_traits<char> >&)+0x21) [0x11bd301]
./mongod() [0x11bc6de]
/lib64/libc.so.6(+0x36340) [0x7f11cd866340]
/lib64/libc.so.6(gsignal+0x37) [0x7f11cd8662c7]
/lib64/libc.so.6(abort+0x148) [0x7f11cd8679b8]
./mongod(mongo::fassertFailed(int)+0x13a) [0x11421ea]
./mongod(LogMessageFatal::~LogMessageFatal()+0x1d) [0x125d58d]
./mongod(S2Cap::FromAxisAngle(Vector3<double> const&, S1Angle const&)+0x169) [0x1267699]
./mongod(mongo::S2NearStage::nextAnnulus()+0xd2) [0xabd142]
./mongod(mongo::S2NearStage::work(unsigned long*)+0x1fb) [0xabf2cb]
./mongod(mongo::PlanExecutor::getNext(mongo::BSONObj*, mongo::DiskLoc*)+0xef) [0xd66a7f]
./mongod(mongo::newRunQuery(mongo::Message&, mongo::QueryMessage&, mongo::CurOp&, mongo::Message&)+0x958) [0xd4acf8]
./mongod() [0xb96382]
./mongod(mongo::assembleResponse(mongo::Message&, mongo::DbResponse&, mongo::HostAndPort const&)+0x442) [0xb98962]
./mongod(mongo::MyMessageHandler::process(mongo::Message&, mongo::AbstractMessagingPort*, mongo::LastError*)+0x9f) [0x76b76f]
./mongod(mongo::PortMessageServer::handleIncomingMsg(void*)+0x4fb) [0x117367b]
/lib64/libpthread.so.0(+0x7dd5) [0x7f11ce62bdd5]
/lib64/libc.so.6(clone+0x6d) [0x7f11cd92e02d]

This particular mongod binary is stripped of symbols:

$ file mongod
mongod: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped

So in order to resolve the stack trace, we need to first obtain one that is not:

$ wget http://downloads.mongodb.org/linux/mongodb-linux-x86_64-debugsymbols-2.6.0.tgz
$ tar zxvf mongodb-linux-x86_64-debugsymbols-2.6.0.tgz
$ cd mongodb-linux-x86_64-debugsymbols-2.6.0/bin

We can now extract the actual function names from the addresses between the brackets using addr2line (option “-f” prints the function name, and adding “-i” also prints the enclosing functions when the innermost one was inlined; option “-C” demangles the names, much like c++filt):

$ addr2line -e mongod -ifC 0x1267699
S2Cap::FromAxisAngle(Vector3<double> const&, S1Angle const&)
/srv/10gen/mci-exec/mci/git@github.commongodb/mongo.git/mongodb-mongo-v2.6/src/third_party/s2/s2cap.cc:35

One of the greatest values of working with Open Source software is being able to have a direct look at this exact piece of code, which translates to https://github.com/mongodb/mongo/blob/v2.6/src/third_party/s2/s2cap.cc#L35 :

S2Cap S2Cap::FromAxisAngle(S2Point const& axis, S1Angle const& angle) {
  DCHECK(S2::IsUnitLength(axis));
  DCHECK_GE(angle.radians(), 0);
  return S2Cap(axis, GetHeightForAngle(angle.radians()));
}

Note that the actual fix for this bug didn’t come from modifying this function, which is re-used from a third party (another beauty of working with Open Source!), but from making sure the arguments passed to it (which compose the point’s coordinates) are validated beforehand.

There is a home for bugs

If you ever run into a MongoDB server crash, I hope this little set of instructions can serve as a reference in helping you make sense of the stack trace that will (hopefully) have been left behind. You can then search for bugs at https://jira.mongodb.org if you’re running a MongoDB server, or at https://jira.percona.com/projects/PSMDB if you’re running Percona Server for MongoDB. If you can’t find a bug that matches your crash, please consider filing a new one; providing a clear stack trace alongside the exact binary version you’re using is a must. If you are able to reproduce the problem at will and can provide a reproducible test case as well, like the ones we showed above, that will not only make the life of our developers easier, it also increases the likelihood of getting the bug fixed much, much faster.

Jul
18
2019
--

VComply raises $2.5 million seed round led by Accel to simplify risk and compliance management

Risk and compliance management platform VComply announced today that it has picked up a $2.5 million seed round led by Accel Partners for its international growth plan. The funding will be used to acquire more customers in the United States, open a new office in the United Kingdom to support customers in Europe and expand its presence in New Zealand and Australia.

The company was founded in 2016 by CEO Harshvardhan Kariwala and has customers in a wide range of industries, including Acreage Holdings, Ace Energy Solutions, CHD, the United Kingdom’s Department of International Trade and Burger King. It currently claims about 4,000 users in more than 100 countries. VComply is meant to be used by all departments in a company, with compliance information organized into a central dashboard.

While there is already a roster of governance, risk and compliance management solutions on the market (including ones from Oracle, HPE, Thomson Reuters, IBM and other established enterprise software companies), VComply’s competitive edge may be its flexibility, simple user interface and easy deployment (the company claims customers can onboard and start using the solution for compliance tasks in about 30 minutes). It also seeks out smaller companies whose needs have not been met by compliance solutions meant for large enterprises.

Kariwala told TechCrunch in an email that he began thinking of creating a new risk and compliance solution while working at his first startup, LIME Learning Systems, an education management platform, after being hit with a $4,000 penalty due to a non-compliance issue.

“Believe me, $4,000 really hurts when you’re bootstrapped and trying to save every single cent you can. In this case, I had asked our outsourced accounting partners to manage this compliance and they forgot!” he said. After talking to other entrepreneurs, he realized compliance posed a challenge for most of them. LIME’s team built an internal compliance tracking tool for their own use, but also shared it with other people. After getting good feedback, Kariwala realized that despite the many governance, risk and compliance management solutions already on the market, there was still a gap, especially for smaller businesses.

VComply is designed so organizations can customize it for their industry’s regulations and standards, as well as their own workflow and data needs, with competitive pricing for small to medium-sized organizations (a subscription starts at $3,999 a year).

“Most of the traditional GRC solutions that exist today are expensive, have a steep learning curve and entail a prolonged deployment. Not only are they expensive, they are also rigid, which means that organizations have little to no control or flexibility,” Kariwala said. “A GRC tool is often looked at as an expense, while it should really be treated as an investment. It is particularly the SMB sector that suffers the most. With the current solutions costing thousands of dollars (and sometimes millions), it becomes the least of their priorities to invest in a GRC platform, and as a result they fall prey to heightened risks and hefty penalties for non-compliance.”

In a press statement, Accel partner Dinesh Katiyar said, “The first generation of GRC solutions primarily allowed companies to comply with industry-mandated regulations. However, the modern enterprise needs to govern its operations to maintain integrity and trust, and monitor internal and external risks to stay successful. That is where VComply shines, and we’re delighted to be partnering with a company that can redefine the future of enterprise risk management.”

Jul
18
2019
--

Intel announces deep, multi-year partnership with SAP

Intel announced a deep partnership with SAP today around using advanced Intel technology to optimize SAP software tools. Specifically, the company plans to tune its Intel Xeon Scalable processors and Intel Optane DC persistent memory for SAP’s suite of applications.

The multi-year partnership includes giving SAP early access to emerging Intel technologies and building a Center of Excellence. “We’re announcing a multi-year technology partnership that’s focused on optimizing Intel’s platform innovations… across the entire portfolio of SAP’s end-to-end enterprise software applications including SAP S/4HANA,” Rajeeb Hazra, corporate vice president of Intel’s Enterprise and Government Business, told TechCrunch.

He says that this will cover broad areas of Intel technology, including CPU, accelerators, data center, persistent memory and software infrastructure. “We’re taking all of that data-centric portfolio to move data faster, store data more efficiently and process all kinds of data for all kinds of workloads,” he explained.

The idea is to work closely together to help customers understand and use the two sets of technologies in tandem in a more efficient manner. “The goal here is [to expose] a broad portfolio of Intel technologies for the data-centric era, close collaboration with SAP to accelerate the pace of innovation of SAP’s entire broad suite of enterprise class applications, while making it easier for customers to see, test and deploy this technology,” he said.

Irfan Khan, president of Platform and Technologies at SAP, says this partnership should help deliver better performance across the SAP suite of products including SAP S/4HANA, its in-memory database product. “Our expanded partnership with Intel will accelerate our customers’ move to SAP S/4HANA by allowing organizations to unlock the value of data assets with greater ease and operate with increased visibility, focus and agility,” Khan said in a statement.

Hazra says that this is part of a broader enterprise strategy the company has been undertaking for many years, but it is focusing specifically on SAP for this agreement because of its position in the enterprise software ecosystem. He believes that by partnering with SAP at this level, the two companies can gain further insight that could help customers as they use advanced technologies like AI and machine learning.

“This partnership is [significant for us] given SAP’s focus and position in the markets that they serve with enterprise class applications, and the importance of what they’re doing for our core enterprise customers in those areas of the enterprise. This includes the emerging areas of machine learning and AI. With their suite [of products], it gives those customers the ability to accelerate innovation in their businesses by being able to see, touch, feel and consume this innovation much more efficiently,” he said.

Jul
17
2019
--

Southeast Asian cloud communications platform Wavecell acquired by 8×8 in deal worth $125 million

Wavecell, a cloud-communications platform for companies in Southeast Asia, announced today that it has been acquired by 8×8 in a deal worth about $125 million. The acquisition will help San Jose, Calif.-based 8×8 expand in Asia, where Wavecell already has offices in Singapore, Indonesia, the Philippines, Thailand and Hong Kong.

Wavecell’s cloud API platform, which includes SMS, chat, video and voice messaging, is used by companies such as Paidy, Lalamove and Tokopedia. It has relationships with 192 network operators and partners like WhatsApp and claims its infrastructure is used to share more than two billion messages each year.

The terms of the deal include $69 million in cash and about $56 million in 8×8 common shares. Wavecell was founded in 2010, and its investors included Qualgro VC, Wavemaker Partners and MDI Ventures.

In a prepared statement, 8×8 CEO Vik Verma said “8×8 is now the only cloud provider that owns the full, global-scale, cloud-native, technology stack offering voice, video, messaging, and contact center delivered both as pre-packaged applications and as enterprise-class APIs. We’re excited to welcome the Wavecell employees to the 8×8 family. We now have a significant market presence in Asia and expect to continue to expand in the region and globally in order to meet evolving customer requirements.”

Jul
17
2019
--

AT&T signs $2 billion cloud deal with Microsoft

While AWS leads the cloud infrastructure market by a wide margin, Microsoft isn’t doing too badly, ensconced firmly in second place, the only other company with double-digit share. Today, it announced a big deal with AT&T that encompasses both Azure cloud infrastructure services and Office 365.

A person with knowledge of the contract pegged the combined deal at a tidy $2 billion, a nice feather in Microsoft’s cloud cap. According to a Microsoft blog post announcing the deal, AT&T has a goal to move most of its non-networking workloads to the public cloud by 2024, and Microsoft just got itself a big slice of that pie, surely one that rivals AWS, Google and IBM (which closed the $34 billion Red Hat deal last week) would dearly have loved to get.

As you would expect, Microsoft CEO Satya Nadella spoke of the deal in lofty terms around transformation and innovation. “Together, we will apply the power of Azure and Microsoft 365 to transform the way AT&T’s workforce collaborates and to shape the future of media and communications for people everywhere,” he said in a statement in the blog post announcement.

To that end, they are looking to collaborate on emerging technologies like 5G and believe that by combining Azure with AT&T’s 5G network, the two companies can help customers create new kinds of applications and solutions. As an example cited in the blog post, they could see using the speed of the 5G network combined with Azure AI-powered live voice translation to help first responders communicate instantaneously with someone who speaks a different language.

It’s worth noting that while this deal to bring Office 365 to AT&T’s 250,000 employees is a nice win, that part of the deal falls under the SaaS umbrella, so it won’t help with Microsoft’s cloud infrastructure market share. Still, any way you slice it, this is a big deal.
