Jan 28, 2019

Dropbox snares HelloSign for $230M, gets workflow and e-signature

Dropbox announced today that it intends to purchase HelloSign, a company that provides lightweight document workflow and e-signature services. The company paid a hefty $230 million for the privilege.

Dropbox’s SVP of engineering, Quentin Clark, sees this as more than simply bolting on electronic signature functionality to the Dropbox solution. For him, the workflow capabilities that HelloSign added in 2017 were really key to the purchase.

“What is unique about HelloSign is that the investment they’ve made in APIs and the workflow products is really so aligned with our long-term direction,” Clark told TechCrunch. “It’s not just a thing to do one more activity with Dropbox, it’s really going to help us pursue that broader vision,” he added. That vision involves extending the storage capabilities that are at the core of the Dropbox solution.

This can also be seen in the context of the Extensions capability that Dropbox added last year. HelloSign was actually one of the companies involved at launch. While Clark says the company will continue to encourage companies to extend the Dropbox solution, today’s acquisition gives it a capability of its own that doesn’t require a partnership and is already connected to Dropbox via Extensions.

Fast integration

Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis, who has been following this market for many years, says the fact that HelloSign is an Extensions partner should allow much faster integration than would normally happen in an acquisition like this. “Simple document processes that relate to small and medium business are still largely manual. The fact that HelloSign has solutions for things like real estate, insurance and customer/employee onboarding, plus the existing extension to Dropbox, means it can be leveraged quickly for revenue growth by Dropbox,” Pelz-Sharpe explained.

He added that the size of the deal shows there is high demand for these kinds of capabilities. “It is a very high multiple, but in such a fast growth area not an unreasonable one to demand for a startup showing such growth potential. The price suggests that there were almost certainly other highly motivated bidders for the deal,” he said.

HelloSign CEO Joseph Walla says being part of Dropbox gives HelloSign access to resources of a much larger public company, which should allow it to reach a broader market than it could on its own. “Together with Dropbox, we can bring more seamless document workflows to even more customers and dramatically accelerate our impact,” Walla said in a blog post announcing the deal.

HelloSign remains standalone

Whitney Bouck, COO at HelloSign, who previously held stints at Box and EMC Documentum, said the company will remain an independent entity. That means it will continue to operate with its current management structure as part of the Dropbox family. In fact, Clark indicated that all of the HelloSign employees will be offered employment at Dropbox as part of the deal.

“We’re going to remain effectively a standalone business within the Dropbox family, so that we can continue to focus on developing the great products that we have and delivering value. So the good news is that our customers won’t really experience any massive change. They just get more opportunity,” Bouck said.

Alan Lepofsky, an analyst at Constellation Research who specializes in enterprise workflow, sees HelloSign giving Dropbox an enterprise-class workflow tool, but adds that the addition of Bouck and her background in enterprise content management is also a nice bonus for Dropbox in this deal. “While this is not an acqui-hire, Dropbox does end up with Whitney Bouck, a proven leader in expanding offerings into enterprise scale accounts. I believe she could have a large impact in Dropbox’s battle with her former employer Box,” Lepofsky told TechCrunch.

Clark said that it was too soon to say exactly how Dropbox will bundle and incorporate HelloSign functionality beyond the Extensions. But he expects that the company will find a way to integrate the two products where it makes sense, even while HelloSign operates as a separate company with its own customers.

When you consider that HelloSign, a Bay Area startup that launched in 2011, raised just $16 million, it appears to be an impressive return for investors and a solid exit for the company. 

The deal is expected to close in Q1 and is, per usual, dependent on regulatory approval.

Jan 28, 2019

Talking Drupal #195 – Finding Drupal Jobs and Projects

In episode #195 we talk about finding Drupal work with Shane Thomas. www.talkingdrupal.com/195.

Topics

  • Stories
  • DrupalCon North America
  • Linux on Dell
  • Finding Drupal Jobs and Projects
  • Tactical Steps
  • Strategic Steps
  • Job Services

Resources

Guests

Hosts

Stephen Cross – www.ParallaxInfoTech.com @stephencross

John Picozzi – www.oomphinc.com @johnpicozzi

Nic Laflin – www.nLighteneddevelopment.com @nicxvan


Jan 26, 2019

Has the fight over privacy changed at all in 2019?

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks as France’s privacy regulator imposed a record fine on Google under Europe’s General Data Protection Regulation (GDPR), a fine the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy and has the outlook changed?”

This week’s participants include:

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: Arman.Tabatabai@techcrunch.com.


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top-ranking in privacy law by Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.””

For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies is a good thing, but it doesn’t solve the privacy paradox that those same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), or that the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to be tracked less online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”

Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of the rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data”, and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases, and defines practical standards. Cutting edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C.-based advocacy group for the high tech sector. Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches, like the framework ITI released last fall, must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”

We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and reading and parsing as many different policies and settings. No individual has the time nor capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, but not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked about what they’re learning. They’re increasingly paying attention, and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”

2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

With the EU GDPR in effect since last May (setting a standard other nations are emulating),

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”

with the adoption of a highly regulatory and broadly applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come when Congress will adopt a comprehensive federal privacy law. Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with the highly developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.

Jan 25, 2019

Pentagon stands by finding of no conflict of interest in JEDI RFP process

A line in a new court filing by the Department of Defense suggests that it might reopen the investigation into a possible conflict of interest in the JEDI contract RFP process involving a former AWS employee. The story has attracted a great deal of attention in major news publications, including The Washington Post and The Wall Street Journal, but a Pentagon spokesperson has told TechCrunch that nothing has changed.

In the document, filed with the court on Wednesday, the government’s legal representatives sought to outline its legal arguments in the case. The line that attracted so much attention stated, “Now that Amazon has submitted a proposal, the contracting officer is considering whether Amazon’s re-hiring Mr. Ubhi creates an OCI that cannot be avoided, mitigated, or neutralized.” OCI stands for Organizational Conflict of Interest in DoD lingo.

When asked about this specific passage, Pentagon spokesperson Heather Babb made clear the conflict had been investigated earlier and that Ubhi had recused himself from the process. “During his employment with DDS, Mr. Deap Ubhi recused himself from work related to the JEDI contract. DOD has investigated this issue, and we have determined that Mr. Ubhi complied with all necessary laws and regulations,” Babb told TechCrunch.

She repeated that statement when asked specifically about the language in the DoD’s filing. Ubhi worked at Amazon prior to joining the DoD and returned to the company after he left.

The Department of Defense’s decade-long, $10 billion JEDI cloud contract process has attracted a lot of attention, and not just for the size of the deal. The Pentagon has said this will be a winner-take-all affair. Oracle and IBM have filed formal complaints, and Oracle filed a lawsuit in December alleging, among other things, that there was a conflict of interest involving Ubhi, and that it believed the single-vendor approach was designed to favor AWS. The Pentagon has denied these allegations.

The DoD completed the RFP process at the end of October and is expected to choose the winning vendor in April.

Jan 25, 2019

Vodafone pauses Huawei network supply purchases in Europe

Huawei had a very good 2018, and it’s likely to have a very good 2019, as well. But there’s one little thing that keeps putting a damper on the hardware maker’s global expansion plans. The U.S. and Canada have already taken action over the company’s perceived link to the Chinese government, and now Vodafone is following suit over concerns that other countries may join. 

The U.K.-based telecom giant announced this week that it’s enacting a temporary halt on purchases from the Chinese hardware maker. The move arrives out of concern that additional countries may ban Huawei products, putting the world’s second largest carrier in a tricky spot as it works to roll out 5G networks across the globe.

For now, the move is focused on European markets. As The Wall Street Journal notes, there remains some possibility that Vodafone could go forward with Huawei networking gear in other markets, including India, Turkey and parts of Africa. In Europe, however, these delays could ultimately work to raise the price and/or delay its planned 5G push.

“We have decided to pause further Huawei in our core whilst we engage with the various agencies and governments and Huawei just to finalize the situation, of which I feel Huawei is really open and working hard,” Vodafone CEO Nick Read said in a statement.

Huawei has continued to deny all allegations related to Chinese government spying.

Jan 25, 2019

Open Source Database Conference CFP Deadline Sunday January 27

This year at our Open Source Database Conference we’re celebrating open source database technologies that don’t fit into the MySQL®, MongoDB®, MariaDB®, or PostgreSQL realms by featuring them in their very own track: the glamorously named Other Open Source Databases track! As unbiased champions of open source database solutions, we embrace all flavors of open source database, and pride ourselves on presenting one of the biggest events dedicated to any and all OSDBs.

Another innovation this year is the introduction of a Java programming for open source databases track. Maybe that would be of interest?

The conference takes place at the end of May in Austin, a fantastic place to visit, and state capital of Texas.

As mentioned in a recent blog post, the Track Steering Committee, featuring some very talented technologists, is in place, and we are ready to start reviewing submissions. We already have great content across all topics, but in order to make it even better, we would like to keep receiving new submissions until the very last day.

The call for papers closes this Sunday, January 27, so there is still time for you to send a talk or two for our review. In the open source databases track, we have the following list in mind for potential good topics:

  • Apache Hive
  • Cassandra
  • ClickHouse
  • CockroachDB
  • Consul
  • Elasticsearch
  • FoundationDB
  • InfluxDB
  • Kafka
  • Neo4j
  • Prometheus
  • Redis
  • ScyllaDB
  • Solr
  • SQLite
  • Teradata
  • TiDB
  • Timescale

This list is not exhaustive, so if you can think of any others let me know—and go ahead and submit away, please!

Of course, MySQL, MariaDB, MongoDB and PostgreSQL all have their own tracks, too, so if you have any interesting talks for those, please don’t hesitate to send them in for review by their respective track steering committees.

Did we not mention? Yes, it’s Percona Live!

Jan 24, 2019

Apple finally brings Microsoft Office to the Mac App Store, and there is much rejoicing

That slow clap you hear spreading around the internet today could be due to the fact that Apple has finally added Microsoft Office to the Mac App Store. The package will include Word, Excel, PowerPoint, Outlook and OneNote.

Shaan Pruden, senior director of worldwide developer relations at Apple, says that when the company overhauled the App Store last year, it added the ability to roll several apps into a subscription package with the idea of bringing Microsoft Office into the fold. That lack of bundling had been a stumbling block to an earlier partnership.

“One of the features that we brought specifically in working with Microsoft was the ability to subscribe to bundles, which is obviously something that they would need in order to bring Office 365 to the Mac App Store.”

That’s because Microsoft sells Office 365 subscriptions as a package of applications, and it didn’t want to alter the experience by forcing customers to download each one individually, Jared Spataro, corporate vice president for Microsoft 365 explained.

PowerPoint on the Mac. Photo: Apple

Spataro said that until now, customers could of course go directly to Microsoft or another retail outlet to subscribe to the same bundle, but what today’s announcement does is wrap the subscription process into an integrated Mac experience where installation and updates all happen in a way you expect with macOS.

“The apps themselves are updated through the App Store, and we’ve done a lot of great work between the two companies to make sure that the experience really feels good and feels like it’s fully integrated,” he said. That includes support for dark mode, photo continuity to easily insert photos into Office apps from Apple devices and app-specific toolbars for the Touch Bar.

A subscription will run you $69 for an individual or $99 for a household. The latter allows up to six household members to piggyback on the subscription, and each person gets one terabyte of storage, to boot. What’s more, you can access your subscription across all of your Apple, Android and Windows devices and your files, settings and preferences will follow wherever you go.

Businesses can order Microsoft Office bundles through the App Store and then distribute them using the Apple Business Manager, a tool Apple developed last year to help IT manage the application distribution process. Once installed, users have the same ability to access their subscriptions, complete with settings across devices.

Microsoft OneNote on the Mac. Photo: Apple

While Apple and Microsoft have always had a complicated relationship, the two companies have been working together in one capacity or another for nearly three decades now. Neither company was willing to discuss the timeline it took to get to this point, or the financial arrangements between the two, but under the standard split for subscriptions, the developer, in this case Microsoft, gets 70 percent of the price the first year, with Apple getting 30 percent for hosting fees. That changes to an 85/15 split in subsequent years.

Apple noted that worldwide availability could take up to 24 hours depending on your location, but you’ve waited this long; you can wait one more day, right?

Jan
24
2019
--

Microsoft acquires Citus Data

Microsoft today announced that it has acquired Citus Data, a company that focused on making PostgreSQL databases faster and more scalable. Citus’ open-source PostgreSQL extension essentially turns the application into a distributed database and, while there has been a lot of hype around the NoSQL movement and document stores, relational databases — and especially PostgreSQL — are still a growing market, in part because of tools from companies like Citus that overcome some of their earlier limitations.

Unsurprisingly, Microsoft plans to work with the Citus Data team to “accelerate the delivery of key, enterprise-ready features from Azure to PostgreSQL and enable critical PostgreSQL workloads to run on Azure with confidence.” The Citus co-founders echo this in their own statement, noting that “as part of Microsoft, we will stay focused on building an amazing database on top of PostgreSQL that gives our users the game-changing scale, performance, and resilience they need. We will continue to drive innovation in this space.”

PostgreSQL is obviously an open-source tool, and while the fact that Microsoft is now a major open-source contributor doesn’t come as a surprise anymore, it’s worth noting that the company stresses that it will continue to work with the PostgreSQL community. In an email, a Microsoft spokesperson also noted that “the acquisition is a proof point in the company’s commitment to open source and accelerating Azure PostgreSQL performance and scale.”

Current Citus customers include the likes of real-time analytics service Chartbeat, email security service Agari and PushOwl, though the company notes that it also counts a number of Fortune 100 companies among its users (they tend to stay anonymous). The company offers a database as a service, an on-premises enterprise version and a free open-source edition. For the time being, it seems like that’s not changing, though over time I would suspect that Microsoft will transition users of the hosted service to Azure.

The price of the acquisition was not disclosed. Citus Data, which was founded in 2010 and graduated from the Y Combinator program, previously raised more than $13 million from the likes of Khosla Ventures, SV Angel and Data Collective.

Jan
24
2019
--

A Quick Look into TiDB Performance on a Single Server

(Figure: TiDB vs MySQL performance plot)

TiDB is an open-source distributed database developed by PingCAP. This is a very interesting project as it can be used as a MySQL drop-in replacement: it implements the MySQL protocol and basically emulates MySQL. PingCAP defines TiDB as a “one-stop data warehouse for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads”. In this blog post I decided to see how TiDB performs on a single server compared to MySQL, for both OLTP and OLAP workloads. Please note, this benchmark is very limited in scope: we are only testing TiDB and MySQL on a single server – TiDB is a distributed database out of the box.

Short version: TiDB supports parallel query execution for selects and can utilize many more CPU cores – MySQL is limited to a single CPU core for a single select query. On higher-end hardware – ec2 instances in my case – TiDB can be 3-4 times faster for complex select queries (OLAP workload) which do not use, or benefit from, indexes. At the same time, point selects and writes, especially inserts, can be 5x-10x slower. Again, please note that this test was on a single server, with a single TiKV process.

Installation

Please note: the following setup is only intended for testing and not for production. 

I installed the latest version of TiDB to take advantage of the latest performance improvements, at the time of writing:

cat make-full-tidb-server
#!/bin/bash
set -x
cd /tidb
# Fetch the released PD and TiKV binaries
wget http://download.pingcap.org/tidb-v2.1.2-linux-amd64.tar.gz
tar -xzf tidb-*.tar.gz
cd tidb-*-linux-amd64/
# Start the placement driver (PD), then a single TiKV storage node
./bin/pd-server --data-dir=pd --log-file=pd.log &
sleep 5
./bin/tikv-server --pd="127.0.0.1:2379" --data-dir=tikv -A 127.0.0.1:20165 --log-file=tikv.log &
sleep 5
# Build tidb-server from source and point it at TiKV through PD
cd ~/go/src/github.com/pingcap/tidb
make server
./bin/tidb-server --store=tikv --path="127.0.0.1:2379"

The normal installation process is described here (different methods are available).
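As a quick sanity check once tidb-server is up: TiDB speaks the MySQL wire protocol on port 4000 (TiDB’s default), so any stock mysql client can talk to it. A minimal sketch, with the host and port matching the single-server setup above:

```shell
# TiDB listens on the MySQL protocol; defaults below match the setup above.
TIDB_HOST=127.0.0.1
TIDB_PORT=4000
CONNECT_CMD="mysql -h $TIDB_HOST -P $TIDB_PORT -u root"
echo "$CONNECT_CMD"
# For example: $CONNECT_CMD -e 'SELECT tidb_version();'
```

If the connection succeeds, `SELECT tidb_version()` reports the running TiDB build.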

Benchmarks

The main purpose of this test is to compare MySQL to TiDB. As with any distributed database, it is hard to design an “apples to apples” comparison: we may be comparing a distributed workload spanning many servers/nodes (in this case TiDB) to a single-server workload (in this case MySQL). To overcome this challenge, I decided to focus on “efficiency”. If the distributed database is not efficient – i.e. it requires tens or hundreds of nodes to do the same job as the non-distributed database – it may be cost prohibitive to use such a database for a small or medium size DB.

The preliminary results are: TiDB is much more efficient for SELECT (OLAP workload) but much less efficient for WRITES and typical OLTP workload. To overcome these limitations it is possible to use more servers.
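One way to put a number on “efficiency” is throughput normalized to MySQL on the same hardware. A minimal sketch; the throughput figures below are made up purely for illustration, not measurements from this benchmark:

```shell
# Hypothetical point-select throughput on the same instance (illustrative only)
mysql_tps=50000
tidb_tps=9000
# Ratio below 1.0 means TiDB needs proportionally more nodes to match MySQL
eff=$(awk -v t="$tidb_tps" -v m="$mysql_tps" 'BEGIN { printf "%.2f", t / m }')
echo "TiDB/MySQL point-select throughput ratio: $eff"
```

With these illustrative numbers the ratio is 0.18, i.e. roughly five TiDB nodes would be needed to match one MySQL server on this workload.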

For this test I was using two types of benchmarks:

  1. OLAP: a set of complex queries on top of an “ontime” database (a historical airline flight information database). For this benchmark I used different AWS ec2 instances with CPU cores ranging from 2 to 96. This is a response-time test (not a throughput test).
  2. OLTP: sysbench (as always), with the standard point-select and write-only workloads. This is a throughput test, increasing the number of threads.
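The OLTP runs can be sketched roughly as below. The parameters (table count, table size, duration) are assumptions for illustration, not the exact ones used in this post; `oltp_point_select` and `oltp_write_only` are the standard workloads shipped with sysbench 1.0+. The commands are printed rather than executed so the sweep is easy to inspect:

```shell
# Sweep thread counts across both standard sysbench OLTP workloads.
# Port 4000 targets TiDB; point --mysql-port at 3306 for the MySQL runs.
for threads in 1 8 32 128; do
  for workload in oltp_point_select oltp_write_only; do
    echo sysbench "$workload" \
      --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root \
      --tables=10 --table-size=1000000 \
      --threads="$threads" --time=300 run
  done
done
```

Each workload would first need a matching `prepare` step to load the test tables.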

OLAP / analytical queries test

Database size is 70GB in MySQL and 30GB in TiDB (compressed). The table has no secondary indexes (except the primary key).

I used the following four queries:

  1. Simple count(*): select count(*) from ontime;
  2. Simple group by: select count(*), year from ontime group by year order by year;
  3. Complex filter for a full table scan: select * from ontime where UniqueCarrier = 'DL' and TailNum = 'N317NB' and FlightNum = '2' and Origin = 'JFK' and Dest = 'FLL' limit 10;
  4. Complex group by and order by query:
    select SQL_CALC_FOUND_ROWS
    FlightDate, UniqueCarrier as carrier,
    FlightNum,
    Origin,
    Dest
    FROM ontime
    WHERE
    DestState not in ('AK', 'HI', 'PR', 'VI')
    and OriginState not in ('AK', 'HI', 'PR', 'VI')
    and flightdate > '2015-01-01'
    and ArrDelay < 15
    and cancelled = 0 and Diverted = 0
    and DivAirportLandings = '0'
    ORDER by DepDelay DESC
    LIMIT 10;

I used five ec2 instances:

  • t2.medium: 2 CPU cores
  • x1e.xlarge: 4 CPU cores
  • r4.4xlarge: 16 CPU cores
  • m4.16xlarge: 64 CPU cores
  • m5.24xlarge: 96 CPU cores

The following graph represents the results (bars represent the query response time; the smaller the better):

As we can see, TiDB scales very well with the number of CPU cores as we go from lower- to higher-end instances. t2.medium and x1e.xlarge are interesting here, though:

  1. t2.medium has 2 CPU cores and not enough RAM (2GB) to store the database in memory. Both MySQL/InnoDB and TiDB/TiKV perform a lot of disk reads – this is a disk-bound workload.
  2. x1e.xlarge is an example of the opposite instance type: 4 CPU cores and 122GB RAM. I’m using a memory-bound workload here (where both MySQL and TiDB data is cached).

All other instances have enough RAM to cache the database in memory, and with more CPU cores TiDB can take advantage of query parallelism and provide better response times.

Sysbench test

Select test

I used point select (meaning select one row by primary key; thread counts range from 1 to 128) with sysbench on an m4.16xlarge instance (memory bound: no disk reads). The results are here. The bars represent the number of transactions per second; the more the better:

This workload actually gives a great advantage to MySQL/InnoDB, as it retrieves a single row based on the primary key. MySQL is significantly faster here: 5x to 10x faster. Unlike the previous workload – a single slow query – for “point select” queries MySQL scales much better than TiDB with more CPU cores.

Write only test

I used a write-only sysbench workload as well, with threads ranging from 1 to 128. The instance has enough memory to cache the full dataset. Here are the results:

Here we can see that TiDB is also significantly slower than MySQL (for an in-memory workload).

Limitations of the write only test
Running TiDB as a single server is not a recommended (or documented) configuration, so some optimizations for this case may be missing. To create a production-level test, we would need to compare TiDB to MySQL with the binlog enabled plus some sort of synchronous, virtually synchronous or semi-sync replication (e.g. Percona XtraDB Cluster, group replication or semi-sync replication). Both of these changes are known to decrease the write throughput of MySQL considerably. Some tuning may be done to reduce the effects of that.
Some of the performance characteristics here are also derived from TiDB using RocksDB. The performance of InnoDB should be higher for an in-memory insert, with an LSM tree performing better for data sets that no longer fit in memory.

Conclusion

TiDB scales very well for OLAP / analytical queries (typically complex queries not able to take advantage of indexes) – this is the area where MySQL performs much worse, as it does not take advantage of multiple CPU cores. At the same time, there is always a price to pay: TiDB has worse “efficiency” for fast queries (i.e. selects by primary key) and writes. TiDB can scale across multiple servers (nodes). However, if we need to achieve the same level of write efficiency as MySQL, we would have to set up tens of nodes. In my opinion, TiDB can be a great fit for an analytical workload when you need almost full compatibility with MySQL: syntax compatibility, inserts/updates, etc.

Jan
24
2019
--

Blue Prism to issue $130M in stock to raise new funds

Just this morning, robotic process automation (RPA) firm Blue Prism announced enhancements to its platform. A little later, the company, which went public on the London Stock Exchange in 2016, announced it was raising £100 million (approximately $130 million) by issuing new stock. The announcement comes after reporting significant losses in its most recent fiscal year, which ended in October.

The company indicated it plans to sell the new shares on the public market, and that they will be made available to new and existing shareholders, including company managers and directors.

CEO Alastair Bathgate attempted to put the announcement in the best possible light. “The outcome of this placing, which builds on another year of significant progress for the company, highlights the meteoric growth opportunity with RPA and intelligent automation,” he said in a statement.

While the company’s revenue more than doubled last fiscal year, from £24.5 million (approximately $32 million) in 2017 to £55.2 million (approximately $72 million) in 2018, losses also increased dramatically, from £10.1 million (approximately $13 million) in 2017 to £26.0 million (approximately $34 million), according to reports.

The move, which requires shareholder approval, will be used to push the company’s plans, outlined in a TechCrunch article earlier this morning, to begin enhancing the platform with help from partners, a move the company hopes will propel it into the future.

Today’s announcement included a new AI engine, an updated marketplace where companies can share Blue Prism extensions and a new lab, where the company plans to work on AI innovation in-house.

Bathgate isn’t wrong about the market opportunity. Investors have been pouring big bucks into this market for the last couple of years. As we noted, in this morning’s article, “UIPath, a NYC RPA company has raised almost $450 million. Its most recent round in September was for $225 million on a $3 billion valuation. Automation Anywhere, a San Jose RPA startup, has raised $550 million including an enormous $300 million investment from SoftBank in November on a valuation of $2.6 billion.”
