Apr 30, 2018

Red Hat’s CoreOS launches a new toolkit for managing Kubernetes applications

CoreOS, the Linux distribution and container management startup Red Hat acquired for $250 million earlier this year, today announced the Operator Framework, a new open source toolkit for managing Kubernetes applications.

CoreOS first talked about operators in 2016. The general idea here is to encode the best practices for deploying and managing container-based applications as code. “The way we like to think of this is that the operators are basically a picture of the best employee you have,” Red Hat OpenShift product manager Rob Szumski told me. Ideally, the Operator Framework frees up the operations team from doing all the grunt work of managing applications and allows them to focus on higher-level tasks. And at the same time, it also removes the error-prone humans from the process since the operator will always follow the company rulebook.

“To make the most of Kubernetes, you need a set of cohesive APIs to extend in order to service and manage your applications that run on Kubernetes,” CoreOS CTO Brandon Philips explains in today’s announcement. “We consider Operators to be the runtime that manages this type of application on Kubernetes.”

As Szumski told me, the CoreOS team developed many of these best practices in building and managing its own Tectonic container platform (and from the community that uses it). Once written, the operators watch over the Kubernetes cluster and can handle upgrades, for example, and when things go awry, they can react to failures within milliseconds.

The overall Operator Framework consists of three pieces: an SDK for building, testing and packaging operators; the Operator Lifecycle Manager for deploying operators to a Kubernetes cluster and managing them; and the Operator Metering tool, which meters Kubernetes usage for enterprises that need to do chargebacks or that want to charge their customers based on usage.

The metering tool doesn’t quite seem to fit into the overall goal here, but as Szumski told me, it’s something a lot of businesses have been looking for and CoreOS actually argues that this is a first for Kubernetes.

Today’s CoreOS/Red Hat announcement only marks the start of a week that’ll likely see numerous other Kubernetes-related announcements. That’s because the Cloud Native Computing Foundation is holding its KubeCon developer conference in the next few days, and virtually every company in the container ecosystem will attend the event with some kind of announcement.

Apr 30, 2018

Suki raises $20M to create a voice assistant for doctors

After an extensive career at Google, Motorola, and Flipkart, Punit Soni decided to spend a lot of time sitting in doctors’ offices to figure out what to do next.

It was there that Soni said he figured out one of the most annoying pain points for doctors in any office: writing down notes and documentation. That’s why he decided to start Suki — previously Robin AI — to create a way for doctors to simply start talking aloud to take notes when working with patients, rather than having to put everything into a medical record system or write those notes down by hand. That seemed like the lowest-hanging fruit: an opportunity to make life significantly easier for doctors who see dozens of patients, he said.

“We decided we had found a powerful constituency who were burning out because of just documentation,” Soni said. “They have underlying EMR systems that are much older in design. The solution aligns with the commoditization of voice and machine learning. If you put it all together, if we can build a system for doctors and allow doctors to use it in a relatively easy way, they’ll use it to document all the interactions they do with patients. If you have access to all data right from a horse’s mouth, you can use that to solve all the other problems on the health stack.”

The company said it has raised a $15 million funding round led by Venrock, with participation from First Round, Social+Capital, Nat Turner of Flatiron Health, Marc Benioff, and other individual Googlers and angels. Venrock also previously led a $5 million seed financing round, bringing the company’s total funding to around $20 million. It’s also changing its name from Robin AI to Suki, though the reason is actually a pretty simple one: “Suki” is a better wake word for a voice assistant than “Robin” because odds are there’s someone named Robin in the office.

The challenge for a company like Suki is not actually the voice recognition part. Indeed, that’s why Soni said they are actually starting a company like this today: voice recognition is commoditized. Trying to start a company like Suki four years ago would have meant having to build that kind of technology from scratch, but thanks to incredible advances in machine learning over just the past few years, startups can quickly move on to the core business problems they hope to solve rather than focusing on early technical challenges.

Instead, Suki’s problem is one of understanding language. It has to ingest everything that a doctor is saying, parse it, and figure out what goes where in a patient’s documentation. That problem is even more complex because each doctor has a different way of documenting their work with a patient, meaning it has to take extra care in building a system that can scale to any number of doctors. As with any company, the more data it collects over time, the better those results get — and the more defensible the business becomes, because it can be the best product.

“Whether you bring up the iOS app or want to bring it in a website, doctors have it in the exam room,” Soni said. “You can say, ‘Suki, make sure you document this, prescribe this drug, and make sure this person comes back to me for a follow-up visit.’ It takes all that, it captures it into a clinically comprehensive note and then pushes it to the underlying electronic medical record. [Those EMRs] are the system of record, it is not our job to day-one replace these guys. Our job is to make sure doctors and the burnout they are having is relieved.”

Given that voice recognition is commoditized, there will likely be others looking to build a scribe for doctors as well. There are startups like Saykara looking to do something similar, and in these situations it often seems like the companies that are able to capture the most data first are able to become the market leaders. And there’s also a chance that a larger company — like Amazon, which has made its interest in healthcare already known — may step in with its comprehensive understanding of language and find its way into the doctors’ office. Over time, Soni hopes that as it gets more and more data, Suki can become more intelligent and more than just a simple transcription service.

“You can see this arc where you’re going from an Alexa, to a smarter form of a digital assistant, to a device that’s a little bit like a chief resident of a doctor,” Soni said. “You’ll be able to say things like, ‘Suki, pay attention,’ and all it needs to do is listen to your conversation with the patient. I’m not building a medical transcription company. I’m basically trying to build a digital assistant for doctors.”

Apr 30, 2018

A Look at MyRocks Performance


In this blog post, I’ll look at MyRocks performance through some benchmark testing.

As the MyRocks storage engine (based on the RocksDB key-value store http://rocksdb.org ) is now available as part of Percona Server for MySQL 5.7, I wanted to take a look at how it performs on a relatively high-end server and SSD storage. I wanted to check how it performs for different amounts of available memory for the given database size. This is similar to the benchmark I published a while ago for InnoDB (https://www.percona.com/blog/2010/04/08/fast-ssd-or-more-memory/).

In this case, I plan to use a sysbench-tpcc benchmark (https://www.percona.com/blog/2018/03/05/tpcc-like-workload-sysbench-1-0/) and I will execute it for both MyRocks and InnoDB. We’ll use InnoDB as a baseline.

For the benchmark, I will use 100 TPC-C warehouses, with a set of 10 tables (to shift the bottleneck from row contention). This should give roughly 90GB of data (when loaded into InnoDB) and is roughly equivalent to the data size of 1,000 warehouses.

To vary the memory size, I will change innodb_buffer_pool_size from 5GB to 100GB for InnoDB, and rocksdb_block_cache_size for MyRocks.
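As a minimal sketch of how one memory point of that sweep might look in my.cnf (using only the two variables named above; the full configuration files are listed in the Extras section at the end of this post):

# InnoDB run, one point of the 5GB-100GB sweep
innodb_buffer_pool_size=5G
# MyRocks run, the same sweep applied to the block cache
rocksdb_block_cache_size=5G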

For MyRocks, we will use LZ4 as the default compression on disk. The data size in the MyRocks storage engine is 21GB. It is interesting to note that the uncompressed size in MyRocks is 70GB on storage.

For both engines, I did not use FOREIGN KEYS, as MyRocks does not support them at the moment.

MyRocks does not support SELECT .. FOR UPDATE statements in REPEATABLE-READ mode in the Percona Server for MySQL implementation. However, “SELECT .. FOR UPDATE” is used in this benchmark. So I had to use READ-COMMITTED mode, which is supported.
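For reference, a minimal sketch of switching the isolation level, either per session before starting the benchmark connections or server-wide:

-- per session
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- or server-wide in my.cnf: transaction-isolation=READ-COMMITTED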

The most important setting I used was to enable binary logs, for the following reasons:

  1. Any serious production deployment uses binary logs
  2. With binary logs disabled, MyRocks is affected by a suboptimal transaction coordinator

I used the following settings for binary logs:

  • binlog_format = ‘ROW’
  • binlog_row_image=minimal
  • sync_binlog=10000 (I am not using 0, as this causes serious stalls during binary log rotations, when the content of the binary log is flushed to storage all at once)

While I am not a full expert in MyRocks tuning yet, I used recommendations from this page: https://github.com/facebook/mysql-5.6/wiki/my.cnf-tuning. The Facebook-MyRocks engineering team also provided me input on the best settings for MyRocks.

Let’s review the results for different memory sizes.

This first chart shows throughput jitter. This helps to understand the distribution of throughput results. Throughput is measured every 1 second, and on the chart I show all measurements after 2000 seconds of a run (the total length of each run is 3600 seconds). So I show the last 1600 seconds of each run (to remove warm-up phases):

MyRocks Performance

To better quantify results, let’s take a look at them on a boxplot. The quickest way to understand boxplots is to take a look at the middle line. It represents a median of measurements (see more at https://www.percona.com/blog/2012/02/23/some-fun-with-r-visualization/):

MyRocks Performance 2

Before we jump to the summary of results, let’s take a look at a variation of the throughput for both InnoDB and MyRocks. We will zoom to a 1-second resolution chart for 100 GB of allocated memory:

MyRocks Performance 3

We can see that there is a lot of variation, with periodic 1-second performance drops for MyRocks. At this moment, I do not know what causes these drops.

So let’s take a look at the average throughput for each engine for different memory settings (the results are in tps, and more is better):

Memory (GB) InnoDB (tps) MyRocks (tps)
5 849.0664 4205.714
10 1321.9 4298.217
20 1808.236 4333.424
30 2275.403 4394.413
40 2968.101 4459.578
50 3867.625 4503.215
60 4756.551 4571.163
70 5527.853 4576.867
80 5984.642 4616.538
90 5949.249 4620.87
100 5961.2 4599.143

 

This is where MyRocks behaves differently from InnoDB. InnoDB benefits greatly from additional memory, up to the size of the working dataset. After that, there is no reason to add more memory.

At the same time, interestingly, MyRocks does not benefit much from additional memory.

Basically, MyRocks performs as expected for a write-optimized engine. You can refer to my article How Three Fundamental Data Structures Impact Storage and Retrieval for more details. 

In conclusion, InnoDB performs better (compared to itself) when the working dataset fits (or almost fits) into available memory, while MyRocks can operate (and outperform InnoDB) on small memory sizes.

IO and CPU usage

It is worth looking at resource utilization for each engine. I took vmstat measurements for each run so that we can analyze IO and CPU usage.

First, let’s review writes per second (in KB/sec). Please keep in mind that these writes include binary log writes too, not just writes from the storage engine.

Memory (GB) InnoDB (KB/s written) MyRocks (KB/s written)
5 244754.4 87401.54
10 290602.5 89874.55
20 311726 93387.05
30 313851.7 93429.92
40 316890.6 94044.94
50 318404.5 96602.42
60 276341.5 94898.08
70 217726.9 97015.82
80 184805.3 96231.51
90 187185.1 96193.6
100 184867.5 97998.26

 

We can also calculate how many writes per transaction each storage engine performs:

MyRocks Performance 4

This chart shows the essential difference between InnoDB and MyRocks. MyRocks, being a write-optimized engine, uses a constant amount of writes per transaction.

For InnoDB, the amount of writes greatly depends on the memory size. The less memory we have, the more writes it has to perform.
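To make this concrete, dividing the write rates above by the throughput from the earlier table: at 5GB, InnoDB writes roughly 244,754 KB/s ÷ 849 tps ≈ 288 KB per transaction versus about 87,402 ÷ 4,206 ≈ 21 KB for MyRocks, while at 100GB InnoDB drops to roughly 184,868 ÷ 5,961 ≈ 31 KB per transaction and MyRocks stays at about 21 KB.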

What about reads?

The following table shows reads in KB per second.

Memory (GB) InnoDB (KB/s read) MyRocks (KB/s read)
5 218343.1 171957.77
10 171634.7 146229.82
20 148395.3 125007.81
30 146829.1 110106.87
40 144707 97887.6
50 132858.1 87035.38
60 98371.2 77562.45
70 42532.15 71830.09
80 3479.852 66702.02
90 3811.371 64240.41
100 1998.137 62894.54

 

We can translate this to the number of reads per transaction:

MyRocks Performance 5

This shows MyRocks’ read-amplification. The allocation of more memory helps to decrease IO reads, but not as much as for InnoDB.
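Again, dividing by throughput makes the difference concrete: at 5GB, InnoDB reads roughly 218,343 KB/s ÷ 849 tps ≈ 257 KB per transaction versus about 171,958 ÷ 4,206 ≈ 41 KB for MyRocks; at 100GB, InnoDB reads drop to roughly 0.3 KB per transaction (the dataset fits in the buffer pool), while MyRocks still reads about 62,895 ÷ 4,599 ≈ 14 KB per transaction.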

CPU usage

Let’s also review CPU usage for each storage engine. Let’s start with InnoDB:

MyRocks Performance 6

The chart shows that for 5GB memory size, InnoDB spends most of its time in IO waits (green area), and the CPU usage (blue area) increases with more memory.

This is the same chart for MyRocks:

MyRocks Performance 7

In tabular form (vmstat CPU percentages: us = user, sys = system, wa = IO wait, id = idle):

Memory, GB engine us sys wa id
5 InnoDB 8 2 57 33
5 MyRocks 56 11 18 15
10 InnoDB 12 3 57 28
10 MyRocks 57 11 18 13
20 InnoDB 16 4 55 25
20 MyRocks 58 11 19 11
30 InnoDB 20 5 50 25
30 MyRocks 59 11 19 10
40 InnoDB 26 7 44 24
40 MyRocks 60 11 20 9
50 InnoDB 35 8 38 19
50 MyRocks 60 11 21 7
60 InnoDB 43 10 36 10
60 MyRocks 61 11 22 6
70 InnoDB 51 12 34 4
70 MyRocks 61 11 23 5
80 InnoDB 55 12 31 1
80 MyRocks 61 11 23 5
90 InnoDB 55 12 32 1
90 MyRocks 61 11 23 4
100 InnoDB 55 12 32 1
100 MyRocks 61 11 24 4

 

We can see that MyRocks uses a lot of CPU (in us+sys state) no matter how much memory is allocated. This leads to the conclusion that MyRocks performance is limited more by CPU performance than by available memory.

MyRocks directory size

As MyRocks writes all changes and compacts SST files down the road, it would be interesting to see how the data directory size changes during the benchmark, so we can estimate our storage needs. Here is a chart of the data directory size:

MyRocks Performance 8

We can see that the data directory grows from 20GB at the start to 31GB during the benchmark. It is interesting to observe the data growing until compaction shrinks it.

Conclusion

In conclusion, I can say that MyRocks performance increases as the ratio of dataset size to memory increases, outperforming InnoDB by almost five times in the case of 5GB memory allocation. Throughput variation is something to be concerned about, but I hope this gets improved in the future.

MyRocks does not require a lot of memory and shows constant write IO, while using most of the CPU resources.

I think this potentially makes MyRocks a great choice for cloud database instances, where both memory and IO can cost a lot, since MyRocks deployments would be cheaper to run there.

I will follow up with further cloud-oriented benchmarks.

Extras

Raw results, scripts and config

My goal is to provide fully repeatable benchmarks. To this end, I’m sharing all the scripts and settings I used in the following GitHub repo:

https://github.com/Percona-Lab-results/201803-sysbench-tpcc-myrocks

MyRocks settings

rocksdb_max_open_files=-1
rocksdb_max_background_jobs=8
rocksdb_max_total_wal_size=4G
rocksdb_block_size=16384
rocksdb_table_cache_numshardbits=6
# rate limiter
rocksdb_bytes_per_sync=16777216
rocksdb_wal_bytes_per_sync=4194304
rocksdb_compaction_sequential_deletes_count_sd=1
rocksdb_compaction_sequential_deletes=199999
rocksdb_compaction_sequential_deletes_window=200000
rocksdb_default_cf_options="write_buffer_size=256m;target_file_size_base=32m;max_bytes_for_level_base=512m;max_write_buffer_number=4;level0_file_num_compaction_trigger=4;level0_slowdown_writes_trigger=20;level0_stop_writes_trigger=30;max_write_buffer_number=4;block_based_table_factory={cache_index_and_filter_blocks=1;filter_policy=bloomfilter:10:false;whole_key_filtering=0};level_compaction_dynamic_level_bytes=true;optimize_filters_for_hits=true;memtable_prefix_bloom_size_ratio=0.05;prefix_extractor=capped:12;compaction_pri=kMinOverlappingRatio;compression=kLZ4Compression;bottommost_compression=kLZ4Compression;compression_opts=-14:4:0"
rocksdb_max_subcompactions=4
rocksdb_compaction_readahead_size=16m
rocksdb_use_direct_reads=ON
rocksdb_use_direct_io_for_flush_and_compaction=ON

InnoDB settings

# files
 innodb_file_per_table
 innodb_log_file_size=15G
 innodb_log_files_in_group=2
 innodb_open_files=4000
# buffers
 innodb_buffer_pool_size= 200G
 innodb_buffer_pool_instances=8
 innodb_log_buffer_size=64M
# tune
 innodb_doublewrite= 1
 innodb_support_xa=0
 innodb_thread_concurrency=0
 innodb_flush_log_at_trx_commit= 1
 innodb_flush_method=O_DIRECT_NO_FSYNC
 innodb_max_dirty_pages_pct=90
 innodb_max_dirty_pages_pct_lwm=10
 innodb_lru_scan_depth=1024
 innodb_page_cleaners=4
 join_buffer_size=256K
 sort_buffer_size=256K
 innodb_use_native_aio=1
 innodb_stats_persistent = 1
 #innodb_spin_wait_delay=96
# perf special
 innodb_adaptive_flushing = 1
 innodb_flush_neighbors = 0
 innodb_read_io_threads = 4
 innodb_write_io_threads = 2
 innodb_io_capacity=2000
 innodb_io_capacity_max=4000
 innodb_purge_threads=4
 innodb_adaptive_hash_index=1

Hardware spec

Supermicro server:

  • CPU:
    • Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
    • 2 sockets / 28 cores / 56 threads
  • Memory: 256GB of RAM
  • Storage: SAMSUNG  SM863 1.9TB Enterprise SSD
  • Filesystem: ext4
  • Percona-Server-5.7.21-20
  • OS: Ubuntu 16.04.4, kernel 4.13.0-36-generic

The post A Look at MyRocks Performance appeared first on Percona Database Performance Blog.

Apr 30, 2018

Keep Sensitive Data Secure in a Replication Setup


This blog post describes how to keep sensitive data secure on slave servers in a MySQL async replication setup.

Almost every web application has sensitive data: passwords, SSNs, credit cards, emails, etc. Splitting the database into secure and “public” parts allows you to restrict which users and application components have access to the sensitive data.

Field encryption

This approach is based on MySQL encryption functions or on client-side encryption: the authorized user knows a secret, but the encrypted data is still distributed to all slaves.

  • If possible, use hashes with a big enough salt, and do not store real sensitive data in the database. A good example is passwords. An end-user sends the login and password, the application/SQL code calculates the hash with a salt value unique for each end-user, and compares the hash with the value stored in the database. Even if an attacker gets the hashes, it’s still hard or even impossible to extract the real passwords for all users. Make sure that you are using a good random number generator for the salt and the application-side secret, and a good hash function (not MD5). A minimal sketch of this approach follows this list.
  • Encryption is not suitable if you are going to provide public access to your database (via slave dumps in sql/csv/xml/json format).
  • Encryption is a complex topic. Check here for a good blog post explaining hashing usage, and try to find a security consultant if you are inventing some “new” method of storing and encrypting data.
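As a minimal sketch of the salted-hash approach from the first point above (shown in SQL for brevity; the table and column names are made up, and in practice a dedicated password-hashing function on the application side is preferable to plain SHA2):

-- store only a per-user random salt and the salted hash, never the password itself
create table user_auth(id int, salt varbinary(16), pwd_hash varbinary(32), primary key(id));
SET @salt = RANDOM_BYTES(16);
insert into user_auth values (1, @salt, UNHEX(SHA2(CONCAT(@salt, 'end-user password'), 256)));
-- verification: recompute the hash with the stored salt and compare
select id from user_auth where id = 1 and pwd_hash = UNHEX(SHA2(CONCAT(salt, 'end-user password'), 256));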

Field encryption example

I’m using a single server setup, because the most important part of data separation should be done on the application side. The secure part of the application has a secret passphrase. For example, you can place the code working with authentication, full profile and payments on a separate server and use a dedicated MySQL account.

create database encrypted;
use encrypted;
-- c2 stores binary ciphertext from AES_ENCRYPT, so VARBINARY is used rather than VARCHAR
create table t(c1 int, c2 varbinary(255), rnd_pad varbinary(16), primary key(c1));
SET block_encryption_mode = 'aes-256-cbc';
SET @key_str = SHA2('My secret passphrase',512);
SET @init_vector = RANDOM_BYTES(16);
insert into t (c1,c2, rnd_pad) values (1, AES_ENCRYPT('Secret', @key_str, @init_vector), @init_vector);
-- decrypt data
select c1, AES_DECRYPT(c2,@key_str, rnd_pad) from t;

Summary

  • GOOD: Master and slave servers have exactly the same data and no problems with replication.
  • GOOD: Even if two different end-users have exactly the same password, the stored values are different due to random bytes in the init vector for AES encryption.
  • GOOD: Both encryption and random number generation use an external library (openssl).
  • CONF: It’s important to have binlog_format=ROW to avoid sending the secret to slave servers.
  • CONF: Do not allow end-users to change data without changing the init_vector, especially for small strings without random padding. Each update should cause init_vector re-generation.
  • BAD: Encrypted data is still sent to slave servers. If the encryption algorithm or protocol is broken, it is possible to get access to data from an insecure part of the application.
  • BAD: The described protocol still could be insecure.

Replication filters

There are two types of replication filters: master-side filters (binlog-*-db) and slave-side filters (replicate-*).

Both could cause replication breakage. Replication filters were created for STATEMENT-based replication and are problematic with modern binlog_format=ROW + gtid_mode=on setup. You can find several cases related to database-level slave-side filters in this blog post. If you still need slave-side filtering, use per-table replicate-wild-*-table options.
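For example, a slave-side per-table filter in my.cnf could look like this (a sketch; the schema name is just a placeholder):

# slave-side: do not replicate any table in the sensitive schema
replicate-wild-ignore-table=secure_db.%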

Master-side

Even if binary logging is disabled for a specific database, a statement can still end up in the binary log if it is a DDL statement, or if binlog_format is STATEMENT or MIXED and the ignored database is not the statement’s default database. For details, see the reference manual for the binlog-do-db option. To avoid replication issues, use ROW-based replication and run SET SESSION sql_log_bin=0; before each DDL statement executed against the ignored database. It’s not a good idea to use binlog-do-db, because you lose control of what should be replicated.

Why is binary log filtering useful here? Changing the sql_log_bin variable is prohibited inside transactions, which makes consistent updates of multiple entities inside the database problematic if you rely on it. sql_log_bin is DANGEROUS: please do not use it on the application side in production as a substitute for binlog-ignore-db. If you need it for database administration, make sure that you always type the “session” keyword before sql_log_bin.
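For reference, a minimal master-side my.cnf sketch, assuming the sensitive schema is called test as in the example below:

[mysqld]
binlog_format=ROW
# with ROW format, row changes to tables in the test schema are not written to the binary log
binlog-ignore-db=test
# DDL is still filtered by the default database, hence the SET SESSION sql_log_bin=0 around the CREATE TABLE below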

We still want the ability to hide just one column of a table. But if we are ignoring the whole database, we also need to provide a way to read the non-secure data on slaves or via restricted MySQL accounts. This is possible with triggers and views:

create database test;
set session sql_log_bin=0;
create table test.t(c1 int, c2 int, primary key(c1));
set session sql_log_bin=1;
create database test_insecure;
create table test_insecure.t(c1 int, c2 int default NULL, primary key(c1));
use test
delimiter //
create trigger t_aft_ins
after insert
 on test.t FOR EACH ROW
BEGIN
  INSERT test_insecure.t (c1) values (NEW.c1);
END //
create trigger t_aft_upd
after update
 on test.t FOR EACH ROW
BEGIN
  UPDATE test_insecure.t SET c1 = NEW.c1 WHERE c1 = OLD.c1;
END //
create trigger t_aft_del
after delete
 on test.t FOR EACH ROW
BEGIN
  DELETE FROM test_insecure.t WHERE c1 = OLD.c1;
END //
delimiter ;
-- just on slave:
create database test;
create view test.t as select * from test_insecure.t;
-- typical usage
INSERT INTO test.t values(1,1234);
SELECT * from test.t; -- works on both master and slave, c2 field will have NULL value on slave.

Summary

  • BAD: The data is not the same on the master and slaves. It potentially breaks replication. It’s not possible to use a slave’s backup to restore the master or promote the slave as a new master.
  • BAD: Triggers could reduce DML statement performance.
  • GOOD: The sensitive data is not sent to slaves at all (and not written to binary log).
  • GOOD: It works with GTID.
  • GOOD: It requires no application changes (or almost no application changes).
  • GOOD: binlog-ignore-db allows us to not use the dangerous sql_log_bin variable after initial table creation.

The post Keep Sensitive Data Secure in a Replication Setup appeared first on Percona Database Performance Blog.

Apr 30, 2018

Emissary wants to make sales networking obsolete

There is nothing meritocratic about sales. A startup may have the best product, the best vision, and the most compelling presentation, only to discover that their sales team is talking to the wrong decision-maker or not making the right kind of small talk. Unfortunately, that critical information — that network intelligence — isn’t written down in a book somewhere or on an online forum, but generally is uncovered by extensive networking and gossip.

For David Hammer and his team at Emissary, that is a problem to solve. “I am not sure I want a world where the best networkers win,” he explained to me.

Emissary is a hybrid SaaS marketplace which connects sales teams on one side with people (called emissaries, naturally) who can guide them through the sales process at companies they are familiar with. The best emissaries are generally ex-executives and employees who have recently left the target company, and therefore understand the decision-making processes and the politics of the organization. “Our first mission is pretty simple: there should be an Emissary on every deal out there,” Hammer said.

Expert networks, such as GLG, have been around for years, but have traditionally focused on investors willing to shell out huge dollars to understand a company’s strategic thinking. Emissary’s goal is to be much more democratized, targeting a broader range of both decision-makers and customers. Its product is designed to be intelligent, encouraging customers to ask for help before a sales process falters. The startup has raised $14 million to date according to Crunchbase, with Canaan leading the last series A round.

While Emissary is certainly a creative startup, it’s the questions it poses about knowledge arbitrage, labor markets, and ethics that I think are most interesting.

Sociologists of science generally distinguish between two forms of knowledge, concepts descended from the work of famed scholar Michael Polanyi. The first is explicit knowledge — the stuff you find in books and on TechCrunch. These are facts and figures — a funding round was this size, or the CEO of a company is this individual. The other form is tacit knowledge. The quintessential example is riding a bike — one has to learn by doing it, and no number of physics or mechanics textbooks are going to help a rider avoid falling down.

While org charts may be explicit knowledge, tacit knowledge is the core of all organizations. It’s the politics, the people, the interests, the culture. There is no handbook on these topics, but anyone who has worked in an organization long enough knows exactly the process for getting something done.

That knowledge is critical and rare, and thus ripe for monetization. That was the original inspiration for Hammer when he set out to build a new startup. “Why does Google ever make a bad decision?” Hammer asked at the time. Here you have the company with the most data in the world and the tools to search through it. “How do they not have the information they need?” The answer is that it has all the explicit knowledge in the world, but none of the implicit knowledge required.

That thinking eventually led into sales, where the information asymmetry between a customer and a salesperson was obvious. “The more I talked to sales people, the more I realized that they needed to understand how their account thinks,” Hammer said. Sales automation tools are great, but what message should someone be sending, and to who? That’s a much harder problem to solve, but ultimately the one that will lead to a signed deal. Hammer eventually realized that there were individuals who could arbitrage their valuable knowledge for a price.

That monetization creates a new labor market for these sorts of consultants. For employees at large companies, they can now leave, take a year off or even retire, and potentially get paid to talk about what they know about an organization. Hammer said that “people are fundamentally looking for ways to be helpful,” and while the pay is certainly a major highlight, a lot of people see an opportunity to just get engaged. Clearly that proposition is attractive, since the platform has more than 10,000 emissaries today.

What makes this market more fascinating long-term though is whether this can transition from a part-time, between-jobs gig into something more long-term and professional. Could people specialize in something like “how does Oracle purchase things,” much as how there is an infrastructure of people who support companies working through the government procurement system?

Hammer demurred a bit on this point, noting that “so much of that is being on the other side of those walls.” It’s not any easier for a potential consultant to learn the decision-making outside of a company than it is for a salesperson. Furthermore, the knowledge of an internal company’s processes degrades, albeit at different rates depending on the organization. Some companies experience rapid change and turnover, while knowledge of other companies may last a decade or more.

All that said, Hammer believes that there will come a tipping point when companies start to recommend emissaries to help salespeople through their own processes. Some companies who are self-aware and acknowledge their convoluted procurement procedures may eventually want salespeople to be advised by people who can smooth the process for all sides.

Obviously, with money and knowledge trading hands, there are significant concerns about ethics. “Ethics have to be at the center of what we do,” Hammer said. “They are not sharing deep confidential information, they’re sharing knowledge about the culture of the organization.” Emissary has put in place procedures to monitor ethics compliance. “Emissaries can not work with competitors at the same time,” he said. Furthermore, emissaries obviously have to have left their companies, so they can’t influence the buying decision itself.

Networking has been the millstone of every salesperson. It’s time consuming, and there is little data on what calls or coffees might improve a sale or not. If you take Emissary’s vision to its asymptote though, all that could potentially be replaced. Under the guidance of people in the know, the fits and starts of sales could be transformed into a smooth process with the right talking points at just the right time. Maybe the best products could win after all.

Apr 28, 2018

DocuSign CEO: ‘we’re becoming a verb,’ company up 37% following public debut

DocuSign CEO Dan Springer was all smiles at the Nasdaq on Friday, following the company’s public debut.

And he had a lot to be happy about. After pricing the IPO at a better-than-expected $29, the company raised $629 million. Then DocuSign finished its first day of trading at $39.73, up 37% in its debut.

Springer, who took over DocuSign just last year, spoke with TechCrunch in a video interview about the direction of the company. “We’ve figured out a way to help businesses really transform the way they operate,” he said about the document-signing business. The goal is to “make their life more simple.”

But when asked about the competitive landscape which includes Adobe Sign and HelloSign, Springer was confident that DocuSign is well-positioned to remain the market leader. “We’re becoming a verb,” he said. Springer believes that DocuSign has convinced large enterprises that it is the most secure platform.

Yet the IPO was a long time coming. The company was formed in 2003 and raised over $500 million over the years from Sigma Partners, Ignition Partners, Frazier Technology Partners, Bain Capital Ventures and Kleiner Perkins, amongst others. It is not uncommon for a venture-backed company to take a decade to go public, but 15 years is atypical even for those that ever reach this coveted milestone.

Dell Technologies Capital president Scott Darling, who sits on the board of DocuSign, said that now was the time to go public because he believes the company “is well positioned to continue aggressively pursuing the $25 billion e-signature market and further revolutionizing how business agreements are handled in the digital age.”

Sales are growing, but it is not yet profitable. DocuSign brought in $518.5 million in revenue for its fiscal year ending in 2018. This is an increase from $381.5 million last year and $250.5 million the year before. Losses for this year were $52.3 million, reduced from $115.4 million last year and $122.6 million for 2016.

Springer says DocuSign won’t be in the red for much longer. The company is “on that fantastic path to GAAP profitability.” He believes that international expansion is a big opportunity for growth.

Apr 27, 2018

This Week In Data with Colin Charles 37: Percona Live 2018 Wrap Up


Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Percona Live Santa Clara 2018 is now over! All things considered, I think it went off quite well; if you have any comments/complaints/etc., please don’t hesitate to drop me a line. I believe a survey will be going out as to where you’d like to see the conference in 2019 – yes, it is no longer going to be at the Santa Clara Convention Centre.

I was pleasantly surprised that several people came up to me saying they read this column and enjoy it. Thank you!

The whole conference was abuzz with MySQL 8.0 GA chatter. Many seemed to enjoy the PostgreSQL focus too, now that Percona announced PostgreSQL support.

Congratulations as well to the MySQL Community Awards 2018 winners.

Releases

Link List

Upcoming appearances

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

The post This Week In Data with Colin Charles 37: Percona Live 2018 Wrap Up appeared first on Percona Database Performance Blog.

Apr 27, 2018

MySQL 8.0 GA: Quality or Not?


What does Anton Ego – a fictional restaurant critic from the Pixar movie Ratatouille – have to do with MySQL 8.0 GA?

When it comes to being a software critic, a lot.

In many ways, the work of a software critic is easy. We risk very little and thrive on negative criticism, which is fun to read and write.

But what about those who give their many hours of code development, and those who have tested such code before release? How about the many people behind the scenes who brought together packaging, documentation, multiple hours of design, marketing, online resources and more?

And all of that, I might add, is open source! Free for the world to take, copy, adapt and even incorporate in full or in part into their own open development.

It is in exactly that area that the team at MySQL shines once again – they have, from humble beginnings, built up colossally powerful database software that handles much of the world’s data, fast.

Used in every area of life – aerospace, defense, education, finances, government, healthcare, pharma, manufacturing, media, retail, telecoms, hospitality, and finally the web – it truly is a community effort.

My little contribution to this effort is first and foremost to say: well done! Well done for such an all-in-all huge endeavor. When I tested MySQL 8.0, I experienced something new: an extraordinarily clean bug report screen when I unleashed our bug hunting rats, ahem, I mean tools. This was somewhat unexpected. Usually, new releases are a fun playground even for seasoned QA engineers who look for the latest toy to break.

I have a suspicion that the team at Oracle either uses newly-improved bug-finding tools or perhaps they included some of our methods and tools in their setup. In either case, it is, was and will be welcome.

When the unexpected occurs, a fight or flight syndrome happens. I tend to be a fighter, so I upped the battle and managed to find about 30 bugs, with 21 bugs logged already. Quite a few of them are Sig 11’s in release builds. Signal 11 exceptions are unexpected crashes, and release builds are the exact same build you would download at dev.mysql.com.

The debug build also had a number of issues, but fewer than expected, leading me to the conclusions drawn above. Since Oracle engineers marked many of the issues logged as security bugs, I didn’t list them here. I’ll give Oracle some time to fix them, but I might add them later.

In summary, my personal recommendation is this: unless you are a funky new web company thriving on the latest technology, give Oracle the opportunity to make a few small point bugfix releases before adopting MySQL 8.0 GA. After that, provided the upgrade prerequisites are met and your software application is compatible, go for it and upgrade.

Before that, this is a great time to start checking out the latest and greatest that MySQL 8.0 GA has to offer!

All in all, I like what I saw, and I expect MySQL 8.0 GA to have a bright future.

Signed, a seasoned software critic.

The post MySQL 8.0 GA: Quality or Not? appeared first on Percona Database Performance Blog.

Apr 27, 2018

DocuSign pops 30% and Smartsheet 23% in their debuts on Nasdaq and NYSE

Enterprise tech IPOs continue to roar in 2018. Today, not one but two enterprise tech companies, DocuSign and Smartsheet, saw their share prices pop as they made their debuts on to the public markets.

As of 1:33 New York time, DocuSign is trading at $40.22, up 39 percent from its IPO price and giving the company a market cap of over $6 billion. Smartsheet is at $19.35, up 29 percent and giving it a market cap of $1.8 billion. We’ll continue to update these numbers during the day.

Smartsheet was first out of the gates. Trading on NYSE under the ticker SMAR, the company clocked an opening price of $18.40. This represented a pop of 22.7 percent on its IPO pricing of $15 yesterday evening — itself a higher figure than the expected range of $12-$14. The company, whose primary product is a workplace collaboration and project management platform (it competes with the likes of Basecamp, Wrike and Asana), raised $150 million in its IPO and is currently trading around $18.30/share.

Later in the day, DocuSign — a company that facilitates e-signatures and other features to speed up contractual negotiations online, competing against the likes of AdobeSign and HelloSign — also started to trade, and it saw an even bigger pop. Trading on Nasdaq under DOCU, the stock opened at $37.75, which worked out to a jump of 30 percent on its IPO price last night of $29. Like Smartsheet, DocuSign had priced its IPO higher than the expected range of $26-$28, raising $629 million in the process.

In the case of both companies, they are coming to the market with net losses on their balance sheets, but evidence of strong revenue growth. And in what seems to be a generally strong market for IPOs, combined with the generally positive climate for cloud-based enterprise services (with both Microsoft and Amazon crediting their cloud businesses for their own strong earnings), that rising tide appears to be lifting these two boats.

DocuSign reported $518.5 million in revenue for its fiscal year ending in 2018 in its IPO filings, up from $381.5 million last year and $250.5 million in 2016. Losses were $52.3 million, but that figure was halved over 2017, when it posted a net loss of $115.4 million. DocuSign’s customers include T-Mobile, Salesforce, Morgan Stanley and Bank of America.

Smartsheet reported 3.6 million users in its IPO filings, with business customers including Cisco and Starbucks. The company brought in $111.3 million in revenue for its fiscal 2018 year, but as with many SaaS companies, it’s going public with a loss. Specifically, it reported a loss of $49.1 million for 2018, up from net losses of $15.2 million and $14.3 million in 2017 and 2016, respectively.

Other strong enterprise tech public offerings this year have included Dropbox, Zscaler, Cardlytics, Zuora and Pivotal. All of them closed above their opening prices, in what is shaping up to be a huge year for tech IPOs overall.

We’ll update the pricing as the day progresses.

Apr 27, 2018

Microsoft makes managing and updating Windows 10 easier for its business users

With the Windows 10 April 2018 Update, Microsoft is launching a number of new features for its desktop operating system today. Most of those apply to all users, but in addition to all the regular feature updates, the company also today announced a couple of new features and tools specifically designed for its business users with Microsoft 365 subscriptions that combine a license for Windows 10 with an Office 365 subscription and device management tools.

According to Brad Anderson, Microsoft’s corporate VP for its enterprise and mobility services, the overall thinking behind all of these new features is to make it easier for businesses to give their employees access to a “modern desktop.” In Microsoft’s parlance, that’s basically a desktop that’s part of a Microsoft 365 subscription. But in many ways, this isn’t so much about the employees as about the IT departments that support them. For them, these updates will likely simplify their day-to-day lives.

The most headline-grabbing feature of today’s update is probably the addition of an S mode to Windows 10. As the name implies, this allows admins to switch a Windows 10 Enterprise device into the more restricted and secure Windows 10 S mode, where users can only install applications from a centrally managed Microsoft Store. Until now, the only way to do this was to buy a Windows 10 S device, but now admins can automatically configure any device that runs Windows 10 Enterprise to go into S mode.

It’s no secret that Windows 10 S as a stand-alone operating system wasn’t exactly a hit (and launching it at an education-focused event with the Surface Laptop probably didn’t help). The overall idea is sound, though, and probably quite attractive to many an IT department.

“We built S mode as a way to enable IT to ensure what’s installed on a device,” Anderson told me. “It’s the most secure way to provision Windows.”

The main surprise here is actually that S mode is already available now, since it was only in March that Microsoft’s Joe Belfiore said that it would launch next year.

Another part of this update is what Microsoft calls “delivery optimization” for updates. With this, a single device can download an update and then distribute it to other Windows 10 devices over the local network. Downloads take a while and eat up a lot of bandwidth, after all. And to monitor those deployments, the Windows Analytics dashboard now includes a tab for keeping tabs on them.

Another new deployment feature Microsoft is launching today is an improvement to the AutoPilot service. AutoPilot allows IT to distribute laptops to employees without first setting them up to a company’s specifications. Once a user logs in, the system will check what needs to be done and then applies those settings, provisions policies and installs apps as necessary. With this update, AutoPilot now includes an enrollment status page that does all of this before the user ever gets to the desktop. That way, users can’t get in the way of the set-up process and IT knows that everything is up to spec.

A number of PC vendors are now also supporting AutoPilot out of the box, including Lenovo and Dell, with HP, Toshiba and Fujitsu planning to launch their AutoPilot-enabled PCs later this year.

To manage all of this, Microsoft is also launching a new Microsoft 365 admin center today that brings all the previously disparate configurations and monitoring tools of Office 365 and Microsoft 365 under a single roof.

One other aspect of this launch is an addition to Microsoft 365 for firstline workers. Windows 10 in S mode is one part of this, but the company is also updating the Office mobile apps licensing terms to add the company’s iOS and Android apps to the Office 365 E1, F1 and Business Essential licenses. For now, though, only access to Outlook for iOS and Android is available under these licenses. Support for Word, PowerPoint, Excel and OneNote will launch in the next few months.
