Oct 17, 2019

Pendo scores $100M Series E investment on $1 billion valuation

Pendo, the late-stage startup that helps companies understand how customers interact with their apps, announced a $100 million Series E investment today at a valuation of $1 billion.

The round was led by Sapphire Ventures. Also participating were new investors General Atlantic and Tiger Global, and existing investors Battery Ventures, Meritech Capital, FirstMark, Geodesic Capital and Cross Creek. Pendo has now raised $206 million, according to the company.

Company CEO and co-founder Todd Olson says that one of the reasons they need so much money is that they are defining a market, and the potential is quite large. “Honestly, we need to help realize the total market opportunity. I think what’s exciting about what we’ve seen in six years is that this problem of improving digital experiences is something that’s becoming top of mind for all businesses,” Olson said.

The company integrates with customer apps, capturing user behavior and feeding data back to product teams to help prioritize features and improve the user experience. In addition, the product provides ways to help those users, whether by walking them through different features, pointing out updates and new features, or providing other notes. Developers can also ask for feedback to get direct input from users.

Olson says that early on its customers were mostly other technology companies, but over time the company has expanded into many other verticals, including insurance, financial services and retail, and these companies see digital experience as increasingly important. “A lot of this money is going to help grow our go-to-market teams and our product teams to make sure we’re getting our message out there, and we’re helping companies deal with this transformation,” he says. Today, the company has more than 1,200 customers.

While he wouldn’t commit to going public, he did say it’s something the executive team certainly thinks about, and it has started to put the structure in place to prepare should that time ever come. “This is certainly an option that we are considering, and we’re looking at ways in which to put us in a position to be able to do so, if and when the markets are good and we decide that’s the course we want to take.”

Oct 16, 2019

Zoho launches Catalyst, a new developer platform with a focus on microservices

Zoho may be one of the most underrated tech companies. The 23-year-old company, which at this point offers more than 45 products, has never taken outside funding and has no ambition to go public, yet it’s highly profitable and runs its own data centers around the world. And today, it’s launching Catalyst, a cloud-based developer platform with a focus on microservices that it hopes can challenge those of many of its larger competitors.

The company already offered a low-code tool for building business apps. But Catalyst is different. Zoho isn’t following in the footsteps of Google or Amazon here and offering a relatively unopinionated platform for running virtual machines and containers. Indeed, it does nothing of the sort. The company is 100% betting on serverless as the next major technology for building enterprise apps and the whole platform has been tuned for this purpose.

Image: Catalyst Zia AI

“Historically, when you look at cloud computing, when you look at any public clouds, they pretty much range from virtualizing your servers and renting out virtual servers all the way up the stack,” Raju Vegesna, Zoho’s chief evangelist, said when I asked him about this decision to bet on serverless. “But when you look at it from a developer’s point of view, you still have to deal with a lot of baggage. You still have to figure out the operating system, you still have to figure out the database. And then you have to scale and manage the updates. All of that has to be done at the application infrastructure level.” In recent years, though, said Vegesna, the focus has shifted to the app logic side, with databases and file servers being abstracted away. And that’s the trend Zoho is hoping to capitalize on with Catalyst.

What Catalyst does do is give advanced developers a platform to build, run and manage event-driven microservice-based applications that can, among other things, also tap into many of the tools that Zoho built for running its own applications, like a grammar checker for Zoho Writer, document previews for Zoho Drive or access to its Zia AI tools for OCR, sentiment analysis and predictions. The platform gives developers tools to orchestrate the various microservices, which obviously means it’ll make it easy to scale applications as needed, too. It integrates with existing CI/CD pipelines and IDEs.
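
Zoho didn’t share sample code in this context, but to illustrate the general shape of an event-driven, serverless function of the kind such a platform runs, here is a purely hypothetical Python sketch. This is not Catalyst’s actual API; the event shape, field names and routing logic are invented for illustration, and in practice the platform, not the function, handles provisioning, scaling and persistence.

import json

def handle_document_uploaded(event):
    """React to a hypothetical 'document uploaded' event and return a routing decision.

    On a serverless platform the provider runs the servers and scales them;
    the function contains only application logic.
    """
    doc = event.get("document", {})
    text = doc.get("extracted_text", "")  # e.g. produced upstream by an OCR step
    sentiment = "negative" if "refund" in text.lower() else "neutral"
    return {
        "document_id": doc.get("id"),
        "sentiment": sentiment,
        "route_to": "support" if sentiment == "negative" else "archive",
    }

if __name__ == "__main__":
    # Local smoke test with a fabricated event payload.
    sample = {"document": {"id": "doc-42", "extracted_text": "Please refund my order"}}
    print(json.dumps(handle_document_uploaded(sample), indent=2))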

Image: Catalyst Functions

Catalyst also complies with SOC 2 Type II and ISO 27001 certification requirements, as well as GDPR. Developers can also access data from Zoho’s own applications, as well as third-party tools, all backed by Zoho’s Unified Data Model, a relational datastore for server-side and client deployment.

“The infrastructure that we built over the last several years is now being exposed,” said Vegesna. He also stressed that Zoho is launching the complete platform in one go (though it will obviously add to it over time). “We are bringing everything together so that you can develop a mobile or web app from a single interface,” he said. “We are not just throwing 50 different disparate services out there.” At the same time, though, the company is also opting for a very deliberate approach here with its focus on serverless. That, Vegesna believes, will allow Zoho Catalyst to compete with its larger competitors.

It’s also worth noting that Zoho knows it’s playing the long game here, something it is familiar with, given that it launched its first product, Zoho Writer, back in 2005, before Google had launched its productivity suite.

Image: Catalyst Homepage

 

Oct 16, 2019

Edge computing startup Pensando comes out of stealth mode with a total of $278 million in funding

Pensando, an edge computing startup founded by former Cisco engineers, came out of stealth mode today with an announcement that it has raised a $145 million Series C. The company’s software and hardware technology, created to give data centers more of the flexibility of cloud computing servers, is being positioned as a competitor to Amazon Web Services Nitro.

The round was led by Hewlett Packard Enterprise and Lightspeed Venture Partners and brings Pensando’s total raised so far to $278 million. HPE chief technology officer Mark Potter and Lightspeed Venture partner Barry Eggers will join Pensando’s board of directors. The company’s chairman is former Cisco CEO John Chambers, who is also one of Pensando’s investors through JC2 Ventures.

Pensando was founded in 2017 by Mario Mazzola, Prem Jain, Luca Cafiero and Soni Jiandani, a team of engineers who spearheaded the development of several of Cisco’s key technologies, and founded four startups that were acquired by Cisco, including Insieme Networks. (In an interview with Reuters, Pensando chief financial officer Randy Pond, a former Cisco executive vice president, said it isn’t clear if Cisco is interested in acquiring the startup, adding “our aspirations at this point would be to IPO. But, you know, there’s always other possibilities for monetization events.”)

The startup claims its edge computing platform performs five to nine times better than AWS Nitro in terms of productivity and scale. Pensando prepares data center infrastructure for edge computing, better equipping it to handle data from 5G, artificial intelligence and Internet of Things applications. While in stealth mode, Pensando acquired customers including HPE, Goldman Sachs, NetApp and Equinix.

In a press statement, Potter said “Today’s rapidly transforming, hyper-connected world requires enterprises to operate with even greater flexibility and choices than ever before. HPE’s expanding relationship with Pensando Systems stems from our shared understanding of enterprises and the cloud. We are proud to announce our investment and solution partnership with Pensando and will continue to drive solutions that anticipate our customers’ needs together.”

Oct 16, 2019

&Open helps businesses distribute gifts to reward customer loyalty

&Open is a startup with an unusual name, and one that fills an unusual niche in the business world. It has built a gift-giving platform, so that businesses can reward loyalty with a small token of appreciation. The gift depends on the business and the circumstances, but it could be something like a book or a tea towel and a recipe.

Co-founder and CEO Jonathan Legge says the Dublin-based startup fits most easily in the corporate gift-giving category, but he sees the company handling much more than that. “We are more about gifting for loyalty and customer retention. We grew out of a B2C operation in which we got visibility on this market, and then quickly evolved &Open to fulfill this market,” Legge explained.

In fact, the company developed out of a business Legge had prior to launching &Open, producing high-end gifts. As part of that business, he found that he would get requests from CMOs of big companies like Google, Airbnb and Jameson’s to develop gifts for their events. From that, Legge saw the potential for a full-fledged business and launched &Open.

He sees a world in which transactions increasingly take place in the digital realm, yet consumers still crave physical interactions with businesses beyond an email or a text thanking them. That’s where &Open can help.

“We’re filling the space of helping businesses connect with their customers and showing they care, and not by kind of devaluing their own product and putting on sales. It’s more working with the customer support team, the loyalty team or the marketing team to watch the life cycle of the customer and make sure they’re being gifted at key moments in the life cycle and within their journey with a brand,” he said.

He says this definitely is not swag like you would get at a conference, but something more personal that shows the brand cares about the customer. Nor is it a set of generic gifts that every &Open customer can select from. Instead, it’s a catalog the company creates with each client to reflect that brand’s values.

Image: &Open welcome screen

“We will design a catalog of gifts for our clients, and then they will be grouped into subsets of situations based on price. For Airbnb, the gift set could depend on whether it’s for a host or guest, and there’s different gifts within those situations. So for a host, it will be more stuff for the home such as a recipe book, a tea towel with a recipe or a guest book,” Legge said.

The company has been around since 2017 and is already in 52 countries. To make this all work, it has developed a three-part system. In addition to building a custom catalog for each brand, it has a logistics component to distribute the gift and make sure it has been delivered, and finally a technology platform that brings these different systems together.

The way it works for most customers is that the customer service team or the social media team will see situations where they think a gift is warranted, and they will log into the &Open system and choose a gift based on whatever the circumstances are — such as an apology for bad service or a reward for loyalty.

Today, the company has 25 employees, most of whom are in Dublin. The company is self-funded so far and has not sought outside investment.

Oct 16, 2019

Autify raises $2.5M seed round for its no-code software testing platform

Autify, a platform that makes testing web applications as easy as clicking a few buttons, has raised a $2.5 million seed round from Global Brain, Salesforce Ventures, Archetype Ventures and several angels. The company, which recently graduated from the Alchemist accelerator program for enterprise startups, splits its base between the U.S., where it keeps an office, and Japan, where co-founders Ryo Chikazawa (CEO) and Sam Yamashita got their start as software engineers.

The main idea here is that Autify, which was founded in 2016, allows teams to create tests by simply recording their interactions with the app through a Chrome extension and then having Autify run those tests automatically on a variety of other browsers and mobile devices. Typically, these kinds of tests are very brittle and quickly start to fail whenever a developer makes changes to the design of the application.
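
For context, the kind of hand-written browser test Autify wants to make unnecessary might look like the hedged Selenium sketch below; the URL and CSS selectors are invented, and it is precisely such hard-coded selectors that break when a page’s design changes even though the underlying flow still works.

from selenium import webdriver

# A deliberately conventional, hand-written UI test (illustrative URL and selectors).
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")

    # Hard-coded selectors: if a designer renames a class or restructures the
    # page, these lookups fail even though the login flow itself still works.
    driver.find_element_by_css_selector("input.username").send_keys("demo")
    driver.find_element_by_css_selector("input.password").send_keys("secret")
    driver.find_element_by_css_selector("button.login-submit").click()

    assert "Dashboard" in driver.title
finally:
    driver.quit()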

Autify gets around this by using some machine learning smarts that give it the ability to know that a given button or form is still the same, no matter where it is on the page. Users can currently test their applications using IE, Edge, Chrome and Firefox on macOS and Windows, as well as a range of iOS and Android devices.

Image: Scenario Editor

Chikazawa tells me that the main idea of Autify is based on his own experience as a developer. He also noted that many enterprises are struggling to hire automation engineers who can write tests for them, using Selenium and similar frameworks. With Autify, any developer (and even non-developer) can create a test without having to know the specifics of the underlying testing framework. “You don’t really need technical knowledge,” explained Chikazawa. “You can just out of the box use Autify.”

There are obviously some other startups that are also tackling this space, including SpotQA, for example. Chikazawa, however, argues that Autify is different, given its focus on enterprises. “The audience is really different. We have competitors that are targeting engineers, but because we are saying that no coding [is required], we are selling to the companies that have been struggling with hiring automating engineers,” he told me. He also stressed that Autify is able to do cross-browser testing, something that’s also not a given among its competitors.

The company introduced its closed beta version in March and is currently testing the service with about a hundred companies. It integrates with development platforms like TestRail, Jenkins and CircleCI, as well as Slack.


Oct 16, 2019

Canva, now valued at $3.2 billion, launches an enterprise product

Canva, the Australia-based design tool maker, today announced that it has raised an additional $85 million, bringing its valuation to $3.2 billion, up from $2.5 billion in May.

Investors in the company include Mary Meeker’s Bond, General Catalyst, Bessemer Venture Partners, Blackbird and Sequoia China.

Alongside the new funding and valuation, Canva is also making its foray into enterprise with the launch of Canva for Enterprise.

Thus far, Canva has offered users a lightweight tool set for creating marketing and sales decks, social media materials, and other design products mostly unrelated to product design. The idea here is that, outside of product designers, the rest of the organization is often left behind when it comes to keeping brand parity in the materials they use.

Canva is available for free for individual users, but the company has addressed the growing need within professional organizations to keep brand parity through Canva Pro, a premium version of the product available for $12.95/month.

The company is now extending service to organizations with the launch of Canva for Enterprise. The new product will not only offer a brand kit (Canva’s parlance for Design System), but will also offer marketing and sales templates, locked approval-based workflows, and even hide Canva’s massive design library within the organization so employees only have access to their approved brand assets, fonts, colors, etc.

Canva for Enterprise also adds another layer of organization, allowing collaboration across comments, a dashboard to manage teams and assign roles, and team folders.

“We’re in a fortunate place because the market has been disaggregated,” said Canva CEO and founder Melanie Perkins. “The way we think about the pain point consumers have is that people are being inconsistent with the brand, and there are huge inefficiencies within the organization, which is why people have been literally asking us to build this exact product.”

More than 20 million users sign into Canva each month across 190 countries, with 85 percent of Fortune 500 companies using the product, according to the company.

Perkins says that the ultimate goal is to have every person in the world with access to the internet and a design need to be on the platform.

Oct 15, 2019

Databricks brings its Delta Lake project to the Linux Foundation

Databricks, the big data analytics service founded by the original developers of Apache Spark, today announced that it is bringing its Delta Lake open-source project for building data lakes to the Linux Foundation under an open governance model. The company announced the launch of Delta Lake earlier this year, and, even though it’s still a relatively new project, it has already been adopted by many organizations and has found backing from companies like Intel, Alibaba and Booz Allen Hamilton.

“In 2013, we had a small project where we added SQL to Spark at Databricks […] and donated it to the Apache Foundation,” Databricks CEO and co-founder Ali Ghodsi told me. “Over the years, slowly people have changed how they actually leverage Spark and only in the last year or so it really started to dawn upon us that there’s a new pattern that’s emerging and Spark is being used in a completely different way than maybe we had planned initially.”

This pattern, he said, is that companies are taking all of their data and putting it into data lakes and then doing a couple of things with this data, machine learning and data science being the obvious ones. But they are also doing things that are more traditionally associated with data warehouses, like business intelligence and reporting. The term Ghodsi uses for this kind of usage is “Lake House.” More and more, Databricks is seeing that Spark is being used for this purpose and not just to replace Hadoop and do ETL (extract, transform, load). “This kind of Lake House patterns we’ve seen emerge more and more and we wanted to double down on it.”

Spark 3.0, which is launching soon, enables more of these use cases and speeds them up significantly, and it also adds a new feature that lets you plug a data catalog into Spark.

Delta Lake, Ghodsi said, is essentially the data layer of the Lake House pattern. It brings support for ACID transactions to data lakes, scalable metadata handling and data versioning, for example. All the data is stored in the Apache Parquet format and users can enforce schemas (and change them with relative ease if necessary).
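
To make that concrete, here is a minimal PySpark sketch of the behaviors described above, assuming a Spark session that already has the open-source delta-core package on its classpath; the table path, column names and package version are illustrative.

from pyspark.sql import SparkSession

# Assumes Spark was started with the Delta Lake package available, e.g. via
# --packages io.delta:delta-core_2.11:0.4.0 (version shown is illustrative).
spark = SparkSession.builder.appName("delta-lake-sketch").getOrCreate()

events = spark.createDataFrame([(1, "signup"), (2, "purchase")],
                               ["user_id", "event"])

# Writes are ACID transactions: readers never see a half-written table.
events.write.format("delta").mode("overwrite").save("/tmp/events")

# Appending with a different schema fails by default (schema enforcement),
# unless you explicitly opt in to schema evolution:
more = spark.createDataFrame([(3, "refund", "US")],
                             ["user_id", "event", "country"])
more.write.format("delta").mode("append") \
    .option("mergeSchema", "true").save("/tmp/events")

# Data versioning ("time travel"): read the table as it was at version 0.
spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events").show()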

It’s interesting to see Databricks choose the Linux Foundation for this project, given that its roots are in the Apache Foundation. “We’re super excited to partner with them,” Ghodsi said about why the company chose the Linux Foundation. “They run the biggest projects on the planet, including the Linux project but also a lot of cloud projects. The cloud-native stuff is all in the Linux Foundation.”

“Bringing Delta Lake under the neutral home of the Linux Foundation will help the open-source community dependent on the project develop the technology addressing how big data is stored and processed, both on-prem and in the cloud,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation. “The Linux Foundation helps open-source communities leverage an open governance model to enable broad industry contribution and consensus building, which will improve the state of the art for data storage and reliability.”

Oct 15, 2019

Amazon migrates more than 100 consumer services from Oracle to AWS databases

AWS and Oracle love to take shots at each other, but as much as Amazon has knocked Oracle over the years, it has had to admit that it was in fact an Oracle customer. Today, in a blog post, the company announced that it has finished shedding Oracle for AWS databases and has effectively turned off its final Oracle database.

The move involved 75 petabytes of internal data stored in nearly 7,500 Oracle databases, according to the company. “I am happy to report that this database migration effort is now complete. Amazon’s Consumer business just turned off its final Oracle database (some third-party applications are tightly bound to Oracle and were not migrated),” AWS’s Jeff Barr wrote in the company blog post announcing the migration.

Over the last several years, the company has been working to move off of Oracle databases, but it’s not an easy task to move projects at Amazon’s scale. Barr wrote that there were lots of reasons the company wanted to make the move. “Over the years we realized that we were spending too much time managing and scaling thousands of legacy Oracle databases. Instead of focusing on high-value differentiated work, our database administrators (DBAs) spent a lot of time simply keeping the lights on while transaction rates climbed and the overall amount of stored data mounted,” he wrote.

More than 100 consumer services have been moved to AWS databases, including customer-facing tools like Alexa, Amazon Prime and Twitch, among others. It also moved internal tools like AdTech, its fulfillment system, external payments and ordering. These are not minor matters. They are the heart and soul of Amazon’s operations.

Each team moved its Oracle database to an AWS database service such as Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (RDS) or Amazon Redshift. Each group was allowed to choose the service it wanted, based on its individual needs and requirements.

Oracle declined to comment on this story.

 

Oct 15, 2019

How to Start a 3-Node Percona XtraDB Cluster with the Binary Tarball Package

Image: 3-Node Percona XtraDB Cluster

This blog post will help you configure a 3-node Percona XtraDB Cluster using a binary tarball on your local machine. Configuration files are auto-generated with mostly default settings, except for port and IP address details. A handy script creates the configuration files and starts multiple Percona XtraDB Cluster nodes on the fly, helping you start PXC quickly without spending time on startup configuration or setting up any virtual environments. The script is available in the percona-qa GitHub project. Currently, this script supports PXC binary tarball distributions only.

You can download the appropriate tarball package from the Percona-XtraDB-Cluster-8.0 downloads page. Once you have the packages available on your local machine, unpack the tarball package.

Note: You can use the DBDeployer tool to deploy PXC-5.7 servers easily. The pxc-startup.sh script also works with PXC-5.7 packages.

Now we need to run the pxc-startup.sh script from the Percona XtraDB Cluster base directory. It will create the PXC startup script called start_pxc.

The following steps will help you start a 3-node PXC on CentOS 7.

1. Download the pxc-startup.sh script:

wget https://raw.githubusercontent.com/Percona-QA/percona-qa/master/pxc-tests/pxc-startup.sh

2. Download the PXC binary tarball package for CentOS 7 (in this blog, we will be using the PXC-8.0 experimental package):

wget https://www.percona.com/redir/downloads/TESTING/Percona-XtraDB-Cluster-8.0/centos7/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102.tar.gz

3. Unpack the tarball package and run the pxc-startup.sh script from the Percona XtraDB Cluster base directory:

$ tar -xzf Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102.tar.gz

$ cd Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/

$ bash ../pxc-startup.sh
Added scripts: ./start_pxc
./start_pxc will create ./stop_pxc | ./*node_cli | ./wipe scripts
$

4. To start a 3-node cluster, pass 3 as the parameter to ./start_pxc:

$ ./start_pxc 3
Starting PXC nodes…
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1 started
  Configuration file : /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1.cnf
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2 started
  Configuration file : /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2.cnf
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3 started
  Configuration file : /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3.cnf
$

The start_pxc script will also create stop (shutdown), wipe and per-node CLI sanity scripts.

$ ls -1 *_node_cli wipe *_pxc
1_node_cli
2_node_cli
3_node_cli
start_pxc
stop_pxc
wipe
$

The ./stop_pxc script will stop all cluster nodes.

$ ./stop_pxc
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node3 halted
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node2 halted
Server on socket /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1/socket.sock with datadir /home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102/node1 halted
$

The ./[1-3]_node_cli scripts will help you log in to the respective node using the MySQL client.

$ ./1_node_cli
[..]
node1:root@localhost> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

1 row in set (0.03 sec)
node1:root@localhost>
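
If you would rather script this sanity check than use the interactive client, a minimal sketch with mysql-connector-python might look like the following. It assumes the socket paths shown above and a passwordless root@localhost account on these freshly initialized test nodes, so adjust both for your environment.

import mysql.connector

# Base directory from the example output above; change to match your setup.
BASEDIR = "/home/vagrant/Percona-XtraDB-Cluster_8.0.15.5-27dev.4.2_Linux.x86_64.ssl102"

for node in ("node1", "node2", "node3"):
    conn = mysql.connector.connect(user="root",
                                   unix_socket=f"{BASEDIR}/{node}/socket.sock")
    cur = conn.cursor()
    cur.execute("SHOW STATUS LIKE 'wsrep_cluster_size'")
    name, value = cur.fetchone()
    print(f"{node}: {name} = {value}")  # expect 3 on every node
    cur.close()
    conn.close()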

The ./wipe script will trigger the stop_pxc script and move each node’s data directory to a .PREV copy.

$ ls  -d1 *.PREV
node1.PREV
node2.PREV
node3.PREV
$

The configuration files will be created in the base directory. You can also add custom configurations in the start_pxc script.

$ ls -1 *.cnf
node1.cnf
node2.cnf
node3.cnf
$

Oct 14, 2019

Webinar 10/16: What’s New in Percona Monitoring and Management 2?

Image: Percona Monitoring and Management 2

How can you ensure you are properly managing and optimizing the performance of your database environment?

Join Percona Product Manager Michael Coburn as he presents “What’s New in Percona Monitoring and Management 2?” and walks you through a practical demonstration. The webinar takes place on Wednesday, October 16, 2019, at 11:00 AM EDT.

Register Now

Percona Monitoring and Management (PMM) is a free, open source platform that supports MySQL, MariaDB, MongoDB, and PostgreSQL environments, providing detailed time-based analysis of your data. PMM allows you to embrace multiple database options and can be used on-premises and in the cloud.

Our recent major upgrade to PMM2 gives you far greater Query Analytics performance and usability and enables you to monitor much larger environments. Key features of PMM2 include:

•    New query performance and usability improvements.
•    New query analytics for PostgreSQL.
•    New ability to tag queries.
•    New administrative API.
•    New service-level dashboards.
•    Enhanced security protocols to ensure your data is safe.

Michael Coburn, Product Manager at Percona, will provide an overview of these new features and a working demonstration of PMM2. You will also have the opportunity to ask questions in the chat window.

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.
