Nov
27
2019
--

Box looks to balance growth and profitability as it matures

Prevailing wisdom states that as an enterprise SaaS company evolves, there’s a tendency to sacrifice profitability for growth — understandably so, especially in the early days of the company. At some point, however, a company needs to become profitable.

Box has struggled to reach that goal since going public in 2015, but yesterday, it delivered a mostly positive earnings report. Wall Street seemed to approve, with the stock up 6.75% as we published this article.

Box CEO Aaron Levie says the goal moving forward is to find a better balance between growth and profitability. In his post-report call with analysts, Levie pointed to some positive numbers.

“As we shared in October [at BoxWorks], we are focused on driving a balance of long-term growth and improved profitability as measured by the combination of revenue growth plus free cash flow margin. On this combined metric, we expect to deliver a significant increase in FY ’21 to at least 25% and eventually reaching at least 35% in FY ’23,” Levie said.

Growing the platform

Part of the maturation and drive to profitability is spurred by the fact that Box now has a more complete product platform. While many struggle to understand the company’s business model, it provides content management in the cloud, modernizing that aspect of enterprise software. As a result, few pure-play content management vendors can do what Box does in a cloud context.

Nov
27
2019
--

Running PMM1 and PMM2 Clients on the Same Host

Want to try out Percona Monitoring and Management 2 (PMM 2) but you’re not ready to turn off your PMM 1 environment? This blog is for you! Keep in mind that the methods described here are not intended to be a long-term migration strategy, but rather a way to deploy a few clients so you can sample PMM 2 before you commit to the upgrade.

Here are step-by-step instructions for deploying PMM 1 & 2 client functionality (i.e., pmm-client and pmm2-client) on the same host.

  1. Deploy PMM 1 on Server1 (you’ve probably already done this)
  2. Install and set up pmm-client for connectivity to Server1
  3. Deploy PMM 2 on Server2
  4. Install and set up pmm2-client for connectivity to Server2
  5. Remove pmm-client and switch completely to pmm2-client

The first few steps are already described in our PMM 1 documentation, so we are simply providing links to those documents. Here we’ll focus on steps 4 and 5.

Install and Setup pmm2-client Connectivity to Server2

It’s not possible to install both clients from a repository at the same time. So you’ll need to download a tarball of pmm2-client. Here’s a link to the latest version directly from our site.

Download pmm2-client Tarball

* Note: depending on when you’re reading this, the version below may not be the latest; update the commands for the version you downloaded.

$ wget https://www.percona.com/downloads/pmm2/2.1.0/binary/tarball/pmm2-client-2.1.0.tar.gz

Extract Files From pmm2-client Tarball

$ tar -zxvf pmm2-client-2.1.0.tar.gz 
$ cd pmm2-client-2.1.0

Register and Generate Configuration File

Now it’s time to set up a PMM 2 client. In our example, the PMM2 server IP is 172.17.0.2 and the monitored host IP is 172.17.0.1.

# run from inside the extracted pmm2-client-2.1.0 directory (see the cd above)
$ ./bin/pmm-agent setup --config-file=config/pmm-agent.yaml \
--paths-node_exporter="$PWD/bin/node_exporter" \
--paths-mysqld_exporter="$PWD/bin/mysqld_exporter" \
--paths-mongodb_exporter="$PWD/bin/mongodb_exporter" \
--paths-postgres_exporter="$PWD/bin/postgres_exporter" \
--paths-proxysql_exporter="$PWD/bin/proxysql_exporter" \
--server-insecure-tls --server-address=172.17.0.2:443 \
--server-username=admin --server-password="admin" 172.17.0.1 generic node8.ca

Start pmm-agent

Let’s run pmm-agent inside a screen session. There’s no service manager integration when deploying alongside pmm-client, so if your server restarts, pmm-agent won’t automatically resume.

# screen -S pmm-agent

$ ./bin/pmm-agent --config-file="$PWD/config/pmm-agent.yaml"

Check the Current State of the Agent

$ ./bin/pmm-admin list
Service type  Service name         Address and port  Service ID

Agent type                  Status     Agent ID                                        Service ID
pmm-agent                   connected  /agent_id/805db700-3607-40a9-a1fa-be61c76fe755  
node_exporter               running    /agent_id/805eb8f6-3514-4c9b-a05e-c5705755a4be

Add MySQL Service

Detach the screen session (Ctrl-A, then D), then add the MySQL service:

$ ./bin/pmm-admin add mysql --use-perfschema --username=root mysqltest
MySQL Service added.
Service ID  : /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b
Service name: mysqltest

Here is the state of pmm-agent:

$ ./bin/pmm-admin list
Service type  Service name         Address and port  Service ID
MySQL         mysqltest            127.0.0.1:3306    /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b

Agent type                  Status     Agent ID                                        Service ID
pmm-agent                   connected  /agent_id/805db700-3607-40a9-a1fa-be61c76fe755   
node_exporter               running    /agent_id/805eb8f6-3514-4c9b-a05e-c5705755a4be   
mysqld_exporter             running    /agent_id/efb01d86-58a3-401e-ae65-fa8417f9feb2  /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b
qan-mysql-perfschema-agent  running    /agent_id/26836ca9-0fc7-4991-af23-730e6d282d8d  /service_id/28c4a4cd-7f4a-4abd-a999-86528e38992b

Confirm you can see activity in each of the two PMM Servers:

Screenshots: PMM 1 and PMM 2

Remove pmm-client and Switch Completely to pmm2-client

Once you’ve decided to move over completely to PMM 2, it’s better to switch from the tarball version to an installation from the repository. This makes client updates much easier and registers the agent as a service that starts automatically with the server. We will also show you how to make the switch without re-adding monitored instances.

Configure Percona Repositories

$ sudo yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm 
$ sudo percona-release disable all 
$ sudo percona-release enable original release 
$ yum list | grep pmm 
pmm-client.x86_64                    1.17.2-1.el6                  percona-release-x86_64
pmm2-client.x86_64                   2.1.0-1.el6                   percona-release-x86_64

Here is a link to the apt variant.

Remove pmm-client

$ sudo yum remove pmm-client

Install pmm2-client

$ sudo yum install pmm2-client
Loaded plugins: priorities, update-motd, upgrade-helper
4 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package pmm2-client.x86_64 0:2.1.0-5.el6 will be installed
...
Installed:
  pmm2-client.x86_64 0:2.1.0-5.el6                                                                                                                                                           

Complete!

Configure pmm2-client

Let’s copy the pmm2-client configuration file currently in use, so that we don’t have to re-add the monitored instances.

$ cp pmm2-client-2.1.0/config/pmm-agent.yaml /tmp

Next, set the new location of the exporters (/usr/local/percona/pmm2/exporters/) in the file:

$ sed -i 's|node_exporter:.*|node_exporter: /usr/local/percona/pmm2/exporters/node_exporter|g' /tmp/pmm-agent.yaml
$ sed -i 's|mysqld_exporter:.*|mysqld_exporter: /usr/local/percona/pmm2/exporters/mysqld_exporter|g' /tmp/pmm-agent.yaml
$ sed -i 's|mongodb_exporter:.*|mongodb_exporter: /usr/local/percona/pmm2/exporters/mongodb_exporter|g' /tmp/pmm-agent.yaml 
$ sed -i 's|postgres_exporter:.*|postgres_exporter: /usr/local/percona/pmm2/exporters/postgres_exporter|g' /tmp/pmm-agent.yaml
$ sed -i 's|proxysql_exporter:.*|proxysql_exporter: /usr/local/percona/pmm2/exporters/proxysql_exporter|g' /tmp/pmm-agent.yaml
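
A quick way to confirm the substitutions took effect:

$ grep -E 'exporter:' /tmp/pmm-agent.yaml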

Replace the default configuration file with our edited copy, then restart the pmm-agent service.

$ cp /tmp/pmm-agent.yaml /usr/local/percona/pmm2/config/
$ systemctl restart pmm-agent
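
To confirm the agent restarted cleanly (assuming a systemd-based host; adjust for your init system), a quick check:

$ systemctl status pmm-agent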

Check Monitored Services

Now we can verify the current state of the monitored instances.

$ pmm-admin list

You can also check this on the PMM server side.

Nov
26
2019
--

Xerox tells HP it will bring takeover bid directly to shareholders

Xerox fired the latest volley in the Xerox HP merger letter wars today. Xerox CEO John Visentin wrote to the HP board that his company planned to take its $33.5 billion offer directly to HP shareholders.

He began his letter with a tone befitting a hostile takeover attempt, stating that their refusal to negotiate defied logic. “We have put forth a compelling proposal – one that would allow HP shareholders to both realize immediate cash value and enjoy equal participation in the substantial upside expected to result from a combination. Our offer is neither ‘highly conditional’ nor ‘uncertain’ as you claim,” Visentin wrote in his letter.

He added, “We plan to engage directly with HP shareholders to solicit their support in urging the HP Board to do the right thing and pursue this compelling opportunity.”

The letter was in response to one yesterday from HP in which it turned down Xerox’s latest overture, stating that the deal seemed beyond Xerox’s ability to afford it. It called into question Xerox’s current financial situation, citing Xerox’s own financial reports, and took exception to the way in which Xerox was courting the company.

“It is clear in your aggressive words and actions that Xerox is intent on forcing a potential combination on opportunistic terms and without providing adequate information,” the company wrote.

Visentin fired back in his letter, “While you may not appreciate our ‘aggressive’ tactics, we will not apologize for them. The most efficient way to prove out the scope of this opportunity with certainty is through mutual due diligence, which you continue to refuse, and we are obligated to require.”

He further pulled no punches, writing that he believes the deal is good for both companies and good for the shareholders. “The potential benefits of a combination between HP and Xerox are self-evident. Together, we could create an industry leader – with enhanced scale and best-in-class offerings across a complete product portfolio – that will be positioned to invest more in innovation and generate greater returns for shareholders.”

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, thinks HP ultimately has the upper hand in this situation. “I feel like we have seen this movie before when Carl Icahn meddled with Dell in a similar way. Xerox is a third of the size of HP Inc., has been steadily declining in revenue, is running out of options, and needs HP more than HP needs it.”

It would seem Xerox has chosen a no-holds-barred approach to the situation. The pen is now in HP’s hands as we await the next letter and see how the printing giant intends to respond to the latest missive from Xerox.

Nov
26
2019
--

New Amazon capabilities put machine learning in reach of more developers

Today, Amazon announced a new approach that it says will put machine learning technology in reach of more developers and line of business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas.

While the company offers plenty of tools for data scientists to build machine learning models and to process, store and visualize data, it wants to put that capability directly in the hands of developers with the help of the popular database query language, SQL.

By taking advantage of tools like Amazon QuickSight, Aurora and Athena in combination with SQL queries, developers can have much more direct access to machine learning models and underlying data without any additional coding, says Matt Wood, VP of artificial intelligence at AWS.

“This announcement is all about making it easier for developers to add machine learning predictions to their products and their processes by integrating those predictions directly with their databases,” Wood told TechCrunch.

For starters, Wood says developers can take advantage of Aurora, the company’s MySQL- and Postgres-compatible database, to build a simple SQL query into an application, which will automatically pull the data into the application and run whatever machine learning model the developer associates with it.
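
As a sketch of what that could look like (the database, table, and lead_score() function here are hypothetical; such a function must first be wired to a SageMaker endpoint per AWS’s Aurora machine learning documentation), a developer could score leads with a single familiar query:

# lead_score() is a hypothetical SQL function backed by a SageMaker model
$ mysql -h <aurora-endpoint> -u admin -p salesdb -e \
  "SELECT lead_id, lead_score(age, region, visits) AS score
   FROM leads ORDER BY score DESC LIMIT 10;"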

The second piece involves Athena, the company’s serverless query service. As with Aurora, developers can write a SQL query — in this case, against any data store — and based on a machine learning model they choose, return a set of data for use in an application.
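
The same idea applies in Athena. As an illustrative sketch (endpoint, function, and table names are placeholders, and the ML query syntax was brand new at the time, so check current AWS docs), a query submitted via the AWS CLI might look like:

# calls a hypothetical SageMaker-backed function from an Athena query
$ aws athena start-query-execution \
    --query-string "USING EXTERNAL FUNCTION lead_score(age INT) RETURNS DOUBLE \
        SAGEMAKER 'lead-scoring-endpoint' \
        SELECT id, lead_score(age) FROM leads" \
    --result-configuration OutputLocation=s3://my-athena-results/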

The final piece is QuickSight, which is Amazon’s data visualization tool. Using one of the other tools to return some set of data, developers can use that data to create visualizations based on it inside whatever application they are creating.

“By making sophisticated ML predictions more easily available through SQL queries and dashboards, the changes we’re announcing today help to make ML more usable and accessible to database developers and business analysts. Now anyone who can write SQL can make — and importantly use — predictions in their applications without any custom code,” Amazon’s Matt Asay wrote in a blog post announcing these new capabilities.

Asay added that this approach is far easier than what developers had to do in the past to achieve this. “There is often a large amount of fiddly, manual work required to take these predictions and make them part of a broader application, process or analytics dashboard,” he wrote.

As an example, Wood offers a lead-scoring model you might use to pick the most likely sales targets to convert. “Today, in order to do lead scoring you have to go off and wire up all these pieces together in order to be able to get the predictions into the application,” he said. With this new capability, you can get there much faster.

“Now, as a developer I can just say that I have this lead scoring model which is deployed in SageMaker, and all I have to do is write literally one SQL statement that I do all day long into Aurora, and I can start getting back that lead scoring information. And then I just display it in my application and away I go,” Wood explained.

As for the machine learning models, these can come pre-built from Amazon, be developed by an in-house data science team or purchased in a machine learning model marketplace on Amazon, says Wood.

Today’s announcements from Amazon are designed to simplify machine learning and data access, and reduce the amount of coding to get from query to answer faster.

Nov
26
2019
--

Instagram founders join $30M raise for Loom work video messenger

Why are we all trapped in enterprise chat apps if we talk 6X faster than we type, and our brain processes visual info 60,000X faster than text? Thanks to Instagram, we’re not as camera-shy anymore. And everyone’s trying to remain in flow instead of being distracted by multi-tasking.

That’s why now is the time for Loom. It’s an enterprise collaboration video messaging service that lets you send quick clips of yourself so you can get your point across and get back to work. Talk through a problem, explain your solution, or narrate a screenshare. Some engineering hocus pocus sees videos start uploading before you finish recording so you can share instantly viewable links as soon as you’re done.

Loom video messaging on mobile

“What we felt was that more visual communication could be translated into the workplace and deliver disproportionate value,” co-founder and CEO Joe Thomas tells me. He actually conducted our whole interview over Loom, responding to emailed questions with video clips.

Launched in 2016, Loom is finally hitting its growth spurt. It’s up from 1.1 million users and 18,000 companies in February to 1.8 million people at 50,000 businesses sharing 15 million minutes of Loom videos per month. Remote workers are especially keen on Loom, since it gives them face-to-face time with colleagues without the annoyance of scheduling synchronous video calls. “80% of our professional power users had primarily said that they were communicating with people that they didn’t share office space with,” Thomas notes.

A smart product, swift traction, and a shot at riding the consumerization-of-enterprise trend have secured Loom a $30 million Series B. The round, which is being announced later today, was led by prestigious SaaS investor Sequoia and joined by Kleiner Perkins, Figma CEO Dylan Field, Front CEO Mathilde Collin, and Instagram co-founders Kevin Systrom and Mike Krieger.

“At Instagram, one of the biggest things we did was focus on extreme performance and extreme ease of use, and that meant optimizing every screen, doing really creative things about when we started uploading, optimizing everything from video codec to networking,” Krieger says. “Since then I feel like some products have managed to try to capture some of that, but few as much as Loom did. When I first used Loom I turned to Kevin, who was my Instagram co-founder, and said, ‘oh my god, how did they do that? This feels impossibly fast.’”


Systrom concurs about the similarities, saying, “I’m most excited because I see how they’re tackling the problem of visual communication in the same way that we tried to tackle that at Instagram.” Loom is looking to double down there, potentially adding the ability to Like and follow videos from your favorite productivity gurus or sharpest co-workers.

Loom is also prepping some of its most requested features. The startup is launching an iOS app next month, with Android coming in the first half of 2020, and improving its video editor with blurring (for hiding your bad hair day) and stitching to connect multiple takes. New branding options will help external sales pitches and presentations look right. What I’m most excited for is transcription, which is also slated for the first half of next year through a partnership with another provider, so you can skim or search a Loom. Sometimes even watching at 2X speed is too slow.

But the point of raising a massive $30 million Series B just a year after Loom’s $11 million Kleiner-led Series A is to nail the enterprise product and sales process. To date, Loom has focused on a bottom-up distribution strategy similar to Dropbox. It tries to get so many individual employees to use Loom that it becomes a team’s default collaboration software. Now it needs to grow up so it can offer the security and permissions features IT managers demand. Loom for teams is rolling out in beta access this year before officially launching in early 2020.

Loom’s bid to become essential to the enterprise, though, is its team video library. This will let employees organize their Looms into folders of a knowledge base so they can explain something once on camera, and everyone else can watch whenever they need to learn that skill. No more redundant one-off messages begging for a team’s best employees to stop and re-teach something. The Loom dashboard offers analytics on who’s actually watching your videos. And integration directly into popular enterprise software suites will let recipients watch without stopping what they’re doing.

To build out these features Loom has already grown to a headcount of 45, though co-founder Shahed Khan is stepping back from the company. For new leadership, it’s hired away Nicole Obst, former head of web growth at Dropbox; Joshua Goldenberg, head of design for Slack; and Matt Hodges, VP of commercial product strategy for Intercom.


Still, the elephants in the room remain Slack and Microsoft Teams. Right now, they’re mainly focused on text messaging with some additional screensharing and video chat integrations. They’re not building Loom-style asynchronous video messaging…yet. “We want to be clear about the fact that we don’t think we’re in competition with Slack or Microsoft Teams at all. We are a complementary tool to chat,” Thomas insists. But given the similar productivity and communication ethos, those incumbents could certainly opt to compete. Slack already has 12 million daily users it could provide with video tools.

Loom co-founder and CEO Joe Thomas

Hodges, Loom’s head of marketing, tells me, “I agree Slack and Microsoft could choose to get into this territory, but what’s the opportunity cost for them in doing so? It’s the classic build vs. buy vs. integrate argument.” Slack bought screensharing tool Screenhero, but partners with Zoom and Google for video chat. Loom will focus on being easy to integrate so it can plug into would-be competitors. And Hodges notes that “Delivering asynchronous video recording and sharing at scale is non-trivial. Loom holds a patent on its streaming, transcoding, and storage technology, which has proven to provide a competitive advantage to this day.”

The tea leaves point to video invading more and more of our communication, so I expect rivals to Loom, both startups and copycat features, will crop up. Vidyard and Wistia’s Soapbox are already pushing into the space. While it has the head start, Loom needs to move as fast as it can. “It’s really hard to maintain focus to deliver on the core product experience that we set out to deliver versus spreading ourselves too thin. And this is absolutely critical,” Thomas tells me.

One thing that could set Loom apart? A commitment to financial fundamentals. “When you grow really fast, you can sometimes lose sight of what is the core reason for a business entity to exist, which is to become profitable. . . Even in a really bold market where cash can be cheap, we’re trying to keep profitability at the top of our minds.”

Nov
26
2019
--

Comparing S3 Streaming Tools with Percona XtraBackup

Making backups over the network can be done in two ways: either save them to disk first and then transfer, or transfer directly without saving. Both approaches have their strong and weak points. The second, in particular, depends heavily on upload speed, which can shorten or lengthen the backup time. Chunk size and the number of upload threads also influence it.

Percona XtraBackup 2.4.14 gained S3 streaming: the capability to upload backups directly to S3-compatible storage without saving them locally first. This feature was developed because we wanted to improve backup upload speeds in Percona Operator for XtraDB Cluster.

There are many implementations of S3-compatible storage: AWS S3, Google Cloud Storage, Digital Ocean Spaces, Alibaba Cloud OSS, MinIO, and Wasabi.

We’ve measured the speed of AWS CLI, gsutil, MinIO client, rclone, gof3r and the xbcloud tool (part of Percona XtraBackup) on AWS (in single- and multi-region setups) and on Google Cloud. XtraBackup was tested in two variants: a default configuration and one with a tuned chunk size and number of upload threads.
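
For context, here is a minimal sketch of the streaming invocation being measured (bucket and credentials are placeholders; --parallel sets the number of upload threads, while the chunk-size option depends on your XtraBackup version, so check the xbcloud documentation):

# stream the backup straight to S3-compatible storage, no local copy
$ xtrabackup --backup --stream=xbstream --target-dir=/tmp \
  | xbcloud put --storage=s3 \
      --s3-endpoint='s3.amazonaws.com' \
      --s3-access-key='<key>' --s3-secret-key='<secret>' \
      --s3-bucket='backup-bucket' \
      --parallel=16 \
      "full-backup-$(date +%F)"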

Here are the results.

AWS (Same Region)

The backup data was streamed from an AWS EC2 instance to AWS S3, both in the us-east-1 region.

tool          settings                CPU   max mem  speed     vs. baseline
AWS CLI       default settings        66%   149Mb    130MiB/s  baseline
AWS CLI       10Mb block, 16 threads  68%   169Mb    141MiB/s  +8%
MinIO client  not changeable          10%   679Mb    59MiB/s   -55%
rclone rcat   not changeable          102%  7138Mb   139MiB/s  +7%
gof3r         default settings        69%   252Mb    97MiB/s   -25%
gof3r         10Mb block, 16 threads  77%   520Mb    108MiB/s  -17%
xbcloud       default settings        10%   96Mb     25MiB/s   -81%
xbcloud       10Mb block, 16 threads  60%   185Mb    134MiB/s  +3%

Tip: If you run MySQL on an EC2 instance and your backups stay inside one region, use snapshots instead.

AWS (From US to EU)

The backup data was streamed from an AWS EC2 instance in us-east-1 to AWS S3 in eu-central-1.

tool          settings                CPU   max mem  speed     vs. baseline
AWS CLI       default settings        31%   149Mb    61MiB/s   baseline
AWS CLI       10Mb block, 16 threads  33%   169Mb    66MiB/s   +8%
MinIO client  not changeable          3%    679Mb    20MiB/s   -67%
rclone rcat   not changeable          55%   9307Mb   77MiB/s   +26%
gof3r         default settings        69%   252Mb    97MiB/s   +59%
gof3r         10Mb block, 16 threads  77%   520Mb    108MiB/s  +77%
xbcloud       default settings        4%    96Mb     10MiB/s   -84%
xbcloud       10Mb block, 16 threads  59%   417Mb    123MiB/s  +101%

Tip: Think about disaster recovery and what you will do if a whole region becomes unavailable. It makes no sense to back up only to the same region; always transfer backups to another region.

Google Cloud (From US to EU)

The backup data was streamed from a Compute Engine instance in us-east1 to Cloud Storage in europe-west3. Interestingly, Google Cloud Storage supports both its native protocol and an S3 (interoperability) API, so Percona XtraBackup can transfer data to Google Cloud Storage directly via the S3 (interoperability) API.

tool         settings                             CPU  max mem  speed     vs. baseline
gsutil       not changeable, native protocol      8%   246Mb    23MiB/s   baseline
rclone rcat  not changeable, native protocol      6%   61Mb     16MiB/s   -30%
xbcloud      default settings, s3 protocol        3%   97Mb     9MiB/s    -61%
xbcloud      10Mb block, 16 threads, s3 protocol  50%  417Mb    133MiB/s  +478%

Tip: A cloud provider can block your account for many reasons: human or robot mistakes, content abuse after a hack, an expired credit card, sanctions, etc. Think about disaster recovery and what you will do if a cloud provider blocks your account; it may make sense to back up to another cloud provider or on-premises.

Conclusion

The xbcloud tool (part of Percona XtraBackup) with tuned settings is 2-5 times faster than native cloud vendor tools over long distances, and it is 14% faster while requiring 20% less memory than comparable tools at the same settings. xbcloud is also the most reliable tool for transferring backups to S3-compatible storage, for two reasons:

  • It calculates md5 sums during upload, stores them in a .md5/filename.md5 file, and verifies the sums on download (gof3r does the same).
  • It sends data in 10MB chunks and resends any chunk if a network failure happens.

PS: Please find instructions on GitHub if you would like to reproduce this article’s results.

Nov
26
2019
--

Coralogix announces $10M Series A to bring more intelligence to logging

Coralogix, a startup that wants to bring automation and intelligence to logging, announced a $10 million Series A investment today.

The round was led by Aleph with participation from StageOne Ventures, Janvest Capital Partners and 2B Angels. Today’s investment brings the total raised to $16.2 million, according to the company.

CEO and co-founder Ariel Assaraf says his company focuses on two main areas: logging and analysis. The startup has been doing traditional application performance monitoring up until now, but today it also announced it is getting into security logging, where it tracks logs for anomalies and shares this information with security information and event management (SIEM) tools.

“We do standard log analytics in terms of ingesting, parsing, visualizing, alerting and searching for log data at scale using scaled, secure infrastructure,” Assaraf said. In addition, the company has developed a set of algorithms that analyze the data, learn patterns of expected behavior, and use that knowledge to recognize and solve problems in an automated fashion.

“So the idea is to generally monitor a system automatically for customers plus giving them the tools to quickly drill down into data, understand how it behaves and get context to the issues that they see,” he said.

For instance, the tool could learn that a certain sequence of events, such as a user logging in, being authenticated and being redirected to the application or website, happens the same way every time. If something deviates, the system will recognize that and alert the DevOps team that something is amiss.

The company, which has offices in Tel Aviv, San Francisco and Kiev, was founded in 2015. It already has 1,500 customers, including Postman, Fiverr, KFC and Caesars Palace. The team has gotten this far with just 30 people, but wants to expand sales and marketing to build out the customer base further. The new money should help in that regard.

Nov
26
2019
--

Vivun snags $3M seed round to bring order to pre-sales

Vivun, a startup that wants to help companies keep better track of pre-sales data, announced a $3 million seed round today led by Unusual Ventures, the venture firm run by Harness CEO Jyoti Bansal.

Vivun founder and CEO Matt Darrow says that the pre-sales team works more closely with the customer than anyone else, delivering demos and proofs of concept, and generally helping sales get over the finish line. While sales has CRM software to store knowledge about the customer, pre-sales has lacked a tool to track info about its interactions with customers, and that’s what his company built.

“The main problem that we solve is we give technology to those pre-sales leaders to run and operate their teams, but then take those insights from the group that knows more about the technology and the customer than anybody else, and we deliver that across the organization to the product team, sales team and executive staff,” Darrow explained.

Darrow is a Zuora alumnus, and his story is similar to that of the company’s founder, Tien Tzuo, who built the first billing system for Salesforce, then founded Zuora to build a subscription billing system for everyone else. Similarly, Darrow built a pre-sales tool for Zuora after finding there wasn’t anything else out there devoted specifically to tracking that kind of information.

“At Zuora, I had to build everything from scratch. After the IPO, I realized that this is something that every tech company can take advantage of because every technology company will really need this role to be of high value and impact,” he said.

The company not only tracks information via a mobile app and browser tool, it also has a reporting dashboard to help companies understand and share the information the pre-sales team is hearing from the customer. For example, they might know that X number of customers have been asking for a certain feature, and this information can be organized and passed on to other parts of the company.

Screenshot: Vivun

Bansal, who was previously CEO and co-founder at AppDynamics, a company he sold to Cisco for $3.7 billion just before its IPO in 2017, saw a company filling a big hole in the enterprise software ecosystem. He is not just an investor; he’s also a customer.

“To be successful, a technology company needs to understand three things: where it will be in five years, what its customers need right now, and what the market wants that it’s not currently providing. Pre-sales has answers to all three questions and is a strategically important department that needs management, analytics, and tools for accelerating deals. Yet, no one was making software for this critical department until Vivun,” he said in a statement.

The company was founded in 2018 and has been bootstrapped until now. It spent the first year building out the product. Today, the company has 20 customers including SignalFx (acquired by Splunk in August for $1.05 billion) and Harness.

Nov
25
2019
--

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an ARM Cortex-A-class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB of RAM and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All the device needs to be able to do is detect the wake word to start the Alexa functionality, which is a comparably simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you’re saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.

Also new for AWS IoT Greengrass are fleet provisioning, which makes it easier for businesses to quickly set up lots of new devices automatically, and secure tunneling for AWS IoT Device Management, which makes it easier for developers to remotely access a device and troubleshoot it. In addition, AWS IoT Core now features configurable endpoints.

Nov
22
2019
--

Making sense of a multi-cloud, hybrid world at KubeCon

More than 12,000 attendees gathered this week in San Diego to discuss all things containers, Kubernetes and cloud-native at KubeCon.

Kubernetes, the container orchestration tool, turned five this year, and the technology appears to be reaching a maturity phase where it accelerates beyond early adopters to reach a more mainstream group of larger business users.

That’s not to say that there isn’t plenty of work to be done, or that most enterprise companies have completely bought in, but it’s clearly reached a point where containerization is on the table. If you think about it, the whole cloud-native ethos makes sense for the current state of computing and how large companies tend to operate.

If this week’s conference showed us anything, it’s an acknowledgment that it’s a multi-cloud, hybrid world. That means most companies are working with multiple public cloud vendors, while managing a hybrid environment that includes those vendors — as well as existing legacy tools that are probably still on-premises — and they want a single way to manage all of this.

The promise of Kubernetes and cloud-native technologies, in general, is that it gives these companies a way to thread this particular needle, or at least that’s the theory.

Kubernetes to the rescue

Photo: Ron Miller/TechCrunch

If you were to look at the Kubernetes hype cycle, we are probably right about at the peak where many think Kubernetes can solve every computing problem they might have. That’s probably asking too much, but cloud-native approaches have a lot of promise.

Craig McLuckie, VP of R&D for cloud-native apps at VMware, was one of the original developers of Kubernetes at Google in 2014. VMware thought enough of the importance of cloud-native technologies that it bought his former company, Heptio, for $550 million last year.

As we head into this phase of pushing Kubernetes and related tech into larger companies, McLuckie acknowledges it creates a set of new challenges. “We are at this crossing the chasm moment where you look at the way the world is — and you look at the opportunity of what the world might become — and a big part of what motivated me to join VMware is that it’s successfully proven its ability to help enterprise organizations navigate their way through these disruptive changes,” McLuckie told TechCrunch.

He says that Kubernetes does actually solve this fundamental management problem companies face in this multi-cloud, hybrid world. “At the end of the day, Kubernetes is an abstraction. It’s just a way of organizing your infrastructure and making it accessible to the people that need to consume it.

“And I think it’s a fundamentally better abstraction than we have access to today. It has some very nice properties. It is pretty consistent in every environment that you might want to operate, so it really makes your on-prem software feel like it’s operating in the public cloud,” he explained.

Simplifying a complex world

One of the reasons Kubernetes and cloud-native technologies are gaining in popularity is that they allow companies to think about hardware differently. There is a big difference between virtual machines and containers, says Joe Fernandes, VP of product for Red Hat’s cloud platform.

“Sometimes people conflate containers as another form of virtualization, but with virtualization, you’re virtualizing hardware, and the virtual machines that you’re creating are like an actual machine with its own operating system. With containers, you’re virtualizing the process,” he said.

He said that this means it’s not coupled with the hardware. The only thing it needs to worry about is making sure it can run Linux, and Linux runs everywhere, which explains how containers make it easier to manage across different types of infrastructure. “It’s more efficient, more affordable, and ultimately, cloud-native allows folks to drive more automation,” he said.
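
A quick way to see the difference Fernandes describes (assuming a Linux host with Docker installed): a container reports the host’s kernel, because only the process is isolated, not the hardware underneath.

$ uname -r                         # kernel version on the host
$ docker run --rm alpine uname -r  # the same kernel, seen from inside a container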

Bringing it into the enterprise

Photo: Ron Miller/TechCrunch

It’s one thing to convince early adopters to change the way they work, but it’s another thing entirely as this technology enters the mainstream. Gabe Monroy, partner program manager at Microsoft, says that to carry this technology to the next level, we have to change the way we talk about it.
