Aug
27
2020
--

COVID-19 is driving demand for low-code apps

Now that the great Y Combinator rush is behind us, we’re returning to a topic many of you really seem to care about: no-code and low-code apps and their development.

We’ve explored the theme a few times recently, once from a venture-capital perspective, and another time building from a chat with the CEO of Claris, an Apple subsidiary and an early proponent of low-code work.

Today we’re adding notes from a call with Appian CEO Matt Calkins that took place yesterday shortly after the company released its most recent earnings report.


The Exchange explores startups, markets and money. You can read it every morning on Extra Crunch, or get The Exchange newsletter every Saturday.


Appian is built on low-code development. Having gone public back in 2017, it is the first low-code IPO we can think of. With its Q2 results reported on August 6, we wanted to dig a bit more into what Calkins is seeing in today’s market so we can better understand what is driving demand for low- and no-code development, specifically, and demand for business apps more generally in 2020.

As you can imagine, COVID-19 and the accelerating digital transformation are going to come up in our notes. But, first, let’s take a look at Appian’s quarter quickly before digging into how its low-code-focused CEO sees the world.

Results, expectations

Appian had a pretty good Q2. The company reported $66.8 million in revenue for the three-month period, ahead of market expectations that it would report around $61 million, though collected analyst estimates varied. The low-code platform also beat on per-share profit, reporting a $0.12 per-share loss after adjustments. Analysts had expected a far worse $0.25 per-share deficit.

The period was better than expected, certainly, but it was not a quarter that showed sharp year-over-year growth. There’s a reason for that: Appian is currently shedding professional services revenue (lower-margin, human-powered stuff) in favor of subscription income (higher-margin, software-powered stuff). So, as it exchanges one type of revenue for another, with total subscription revenue rising a little over 12% in Q2 2020 compared to the year-ago quarter and professional services revenue falling around 10%, the company’s overall growth will be slow, but the resulting improvement in its revenue mix is material.

Most importantly, inside of its larger subscription result for the quarter ($41.4 million) were its cloud subscription revenues, worth $29.6 million for the quarter and up 30% compared to the year-ago period. Summing, the company’s least lucrative revenues are falling as its most lucrative accelerate at the fastest clip of any of its cohorts. That’s what you’d want to see if you are an Appian bull.

Shares in the technology company are up around 45% this year. With that, we can get started.

Aug
27
2020
--

Salesforce confirms it’s laying off around 1,000 people in spite of monster quarter

In what felt like strange timing, Salesforce has confirmed a report in yesterday’s Wall Street Journal that it was laying off around 1,000 people, or approximately 1.9% of the company’s 54,000-strong workforce. The news came in spite of the company reporting a monster quarter on Tuesday, in which it passed $5 billion in quarterly revenue for the first time.

In fact, Wall Street was so thrilled with Salesforce’s results, the company’s stock closed up an astonishing 26% yesterday, adding great wealth to the company’s coffers. It seemed hard to reconcile such amazing financial success with this news.

Yet it was actually something that president and chief financial officer Mark Hawkins telegraphed in Tuesday’s earnings call with industry analysts, although he didn’t come right out and use the L (layoff) word. Instead he couched the impending change as a reallocation of resources.

And he talked about strategically shifting investments over the next 12-24 months. “This means we’ll be redirecting some of our resources to fuel growth in areas that are no longer as aligned with the business priority will be now deemphasized,” Hawkins said in the call.

This is precisely how a Salesforce spokesperson put it when asked by TechCrunch to confirm the story. “We’re reallocating resources to position the company for continued growth. This includes continuing to hire and redirecting some employees to fuel our strategic areas, and eliminating some positions that no longer map to our business priorities. For affected employees, we are helping them find the next step in their careers, whether within our company or a new opportunity,” the spokesperson said.

It’s worth noting that earlier this year, Salesforce CEO Marc Benioff pledged there would be no significant layoffs for 90 days.

The 90-day period has long since passed and the company has decided the time is right to make some adjustments to the workforce.

It’s worth contrasting this with the pledge that ServiceNow CEO Bill McDermott made a few weeks after the Benioff tweet, promising not to lay off a single employee for the rest of this year, while also committing to hire 1,000 people worldwide over the remainder of the year and to bring in 360 summer interns.

Aug
27
2020
--

Talking Drupal #261 – Jeff Geerling

As you can tell by the show title, today we are talking about Jeff Geerling, and who better to do that than Jeff himself?

www.talkingdrupal.com/261

Topics

  • Ansible 101 Series
  • Books
  • Drupal VM
  • Jeff’s health
  • YouTube – growing channel
  • Drupal 9 and the future of Drupal

Resources

Jeff Geerling

Ansible 101 Series

Ansible for DevOps

Ansible for Kubernetes

The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary

Did breaking backwards compatibility kill Drupal?

YouTube Channel

Livestream Setup


Hosts

Stephen Cross – www.stephencross.com @stephencross

John Picozzi – www.oomphinc.com @johnpicozzi

Nic Laflin – www.nLighteneddevelopment.com @nicxvan

Guest

Jeff Geerling  @geerlingguy   www.jeffgeerling.com

Aug
26
2020
--

Percona Announces Updates to MongoDB Solutions

Percona MongoDB Updates

On August 26, 2020, Percona announced the latest release of Percona Distribution for MongoDB, which includes new releases of Percona Backup for MongoDB 1.3 (now with Point-in-Time Recovery) and Percona Server for MongoDB 4.4. These new releases include several key features:

Percona Backup for MongoDB

Point in Time Recovery

Percona Backup for MongoDB now provides Point-in-Time Recovery (PITR). With PITR, an administrator can recover the entire replica set to a specific timestamp. Along with taking backup snapshots, users can enable Incremental Backup. By doing this, you capture the oplog 24/7 for each replica set (including those in clusters) to the same storage you use for backup snapshots.

This is especially important in cases of data corruption. PITR enables you to reset the system to a time before the offending incident occurred, thus restoring the environment to a healthy state. For example, if you drop an important collection at 2020-08-24T11:05:30, you can restore the environment to 2020-08-24T11:05:29 to recover as much data as possible before the damage occurred.
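As a minimal sketch of that flow, assuming the pbm CLI and GNU date (the incident timestamp mirrors the example above), you would enable PITR ahead of time and then restore to one second before the damage:

```shell
# Sketch: PITR with Percona Backup for MongoDB. The incident timestamp is
# the hypothetical collection drop from the example; GNU date is assumed
# for the one-second-earlier arithmetic.
INCIDENT="2020-08-24T11:05:30Z"
TARGET=$(date -u -d "${INCIDENT} - 1 second" +"%Y-%m-%dT%H:%M:%S")

# The commands we would run against the replica set, printed as a sketch:
echo "pbm config --set pitr.enabled=true"   # start capturing oplog slices 24/7
echo "pbm restore --time=${TARGET}"         # roll back to just before the drop
```

The restore target lands at 2020-08-24T11:05:29, recovering as much data as possible before the offending statement.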

Percona Server for MongoDB

  • Refine Shard Keys
    With the ability to extend your shard keys, you are no longer stuck with a potentially bad decision made early in your implementation. By extending the shard key, you enable data restructuring with no downtime.
  • Hashed Compound Shard Keys
    Hash-provided simple distribution across shards is now available for compound shard keys as well as single-field ones.
  • Mirrored Reads
    This option pre-warms the cache of secondary replicas to reduce the impact of primary elections following an outage or after planned maintenance.
  • Hedged Reads
    To improve tail latency guarantees during times of heavy load, you can send reads to two replica set members at once and use the faster response. This is only available in a sharded cluster, with a read preference other than the default of “primary”.
  • New Aggregation Operators
    These provide a better way to do MapReduce-style processing.
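To illustrate the first of those features, here is a sketch of shard key refinement using the MongoDB 4.4 refineCollectionShardKey admin command; the namespace and field names are hypothetical, and the generated script would be run against a mongos with `mongo refine_key.js`:

```shell
# Sketch: write out the MongoDB 4.4 admin command that extends an existing
# shard key. "test.orders" and its fields are made-up example names.
cat > refine_key.js <<'EOF'
// Extend the shard key of test.orders from { customer_id: 1 } to the
// compound key { customer_id: 1, order_id: 1 }, with no downtime.
db.adminCommand({
  refineCollectionShardKey: "test.orders",
  key: { customer_id: 1, order_id: 1 }
});
EOF
```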

For more details on what’s new in the community MongoDB 4.4 release, please review our expert’s blog.

To learn how Percona Backup for MongoDB can benefit your business, please visit our website.

To learn more about the latest release of Percona Distribution for MongoDB, check out the release notes.

Download Percona Distribution for MongoDB

Aug
26
2020
--

Creating an External Replica of AWS Aurora MySQL with Mydumper

Oftentimes, we need to replicate between Amazon Aurora and an external MySQL server. The idea is to start by taking a point-in-time copy of the dataset. Next, we can configure MySQL replication to roll it forward and keep the data up-to-date.

This process is documented by Amazon; however, it relies on the mysqldump method to create the initial copy of the data. If the dataset is in the high GB/TB range, this single-threaded method can take a very long time, and the import phase (which can easily take 2x the time of the export) leaves similar room for improvement.

Let’s explore some tricks to significantly improve the speed of this process.

Preparation Steps

The first step is to enable binary logs in Aurora. Go to the Cluster-level parameter group and make sure binlog_format is set to ROW. There is no log_bin option in Aurora (in case you are wondering), simply setting binlog_format is enough. The change requires a restart of the writer instance, so it, unfortunately, means a few minutes of downtime.

We can check if a server is generating binary logs as follows:

mysql> SHOW MASTER LOGS;

+----------------------------+-----------+
| Log_name                   | File_size |
+----------------------------+-----------+
| mysql-bin-changelog.034148 | 134219307 |
| mysql-bin-changelog.034149 | 134218251 |
...

Otherwise, you will get an error:

ERROR 1381 (HY000): You are not using binary logging

We also need to ensure a proper binary log retention period. For example, if we expect the initial data export/import to take one day, we can set the retention period to something like three days to be on the safe side. This will help ensure we can roll forward the restored data.

mysql> call mysql.rds_set_configuration('binlog retention hours', 72);
Query OK, 0 rows affected (0.27 sec)

mysql> CALL mysql.rds_show_configuration;
+------------------------+-------+------------------------------------------------------------------------------------------------------+
| name                   | value | description                                                                                          |
+------------------------+-------+------------------------------------------------------------------------------------------------------+
| binlog retention hours | 72    | binlog retention hours specifies the duration in hours before binary logs are automatically deleted. |
+------------------------+-------+------------------------------------------------------------------------------------------------------+
1 row in set (0.25 sec)

The next step is creating a temporary cluster to take the export. We need to do this for a couple of reasons: first, to avoid overloading the actual production cluster with our export process; and second, because mydumper relies on FLUSH TABLES WITH READ LOCK to get a consistent backup, which is not possible in Aurora (due to the lack of the SUPER privilege).

Go to the RDS console and restore a snapshot that was created AFTER the date/time where you enabled the binary logs. The restored cluster should also have binlog_format set, so select the correct Cluster parameter group.

Next, capture the binary log position for replication. This is done by inspecting the Recent events section in the console. After highlighting your new temporary writer instance in the console, you should see something like this:

Binlog position from crash recovery is mysql-bin-changelog.034259 32068147

So now we have the information to prepare the CHANGE MASTER command to use at the end of the process.
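As a sketch, that console line can be parsed straight into the binlog coordinates for the eventual CHANGE MASTER statement (the coordinates below are the ones from the event above):

```shell
# Parse the "Recent events" line from the RDS console into binlog
# coordinates for the CHANGE MASTER statement we run at the end.
EVENT="Binlog position from crash recovery is mysql-bin-changelog.034259 32068147"
BINLOG_FILE=$(echo "$EVENT" | awk '{print $(NF-1)}')   # second-to-last field
BINLOG_POS=$(echo "$EVENT" | awk '{print $NF}')        # last field

echo "CHANGE MASTER TO MASTER_LOG_FILE='${BINLOG_FILE}', MASTER_LOG_POS=${BINLOG_POS};"
```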

Exporting the Data

To get the data out of the temporary instance, follow these steps:

  1. Backup the schema
  2. Save the user privileges
  3. Backup the data

This gives us added flexibility; we can do some schema changes, add indexes, or extract only a subset of the data.

Let’s create a configuration file with the login details, for example:

tee /backup/aurora.cnf <<EOF
[client]
user=percona
password=percona
host=percona-tmp.cgutr97lnli6.us-west-1.rds.amazonaws.com
EOF

For the schema backup, use mydumper to do a no-rows export:

mydumper --no-data \
--triggers \
--routines \
--events \
-v 3 \
--no-locks \
--outputdir /backup/schema \
--logfile /backup/mydumper.log \
--regex '^(?!(mysql|test|performance_schema|information_schema|sys))' \
--defaults-file /backup/aurora.cnf

To get the user privileges, I normally like to use pt-show-grants. Aurora, however, hides the password hashes when you run a SHOW GRANTS statement, so pt-show-grants will print incomplete statements, e.g.:

mysql> SHOW GRANTS FOR 'user'@'%';
+---------------------------------------------------------+
| Grants for user@%                                       |
+---------------------------------------------------------+
| GRANT USAGE ON *.* TO 'user'@'%' IDENTIFIED BY PASSWORD |
| GRANT SELECT ON `db`.* TO 'user'@'%'                    |
+---------------------------------------------------------+

We can still gather the hashes and replace them manually in the pt-show-grants output if there is a small-ish number of users.

pt-show-grants --user=percona -ppercona -hpercona-tmp.cgutr97lnli6.us-west-1.rds.amazonaws.com  > grants.sql

mysql> select user, password from mysql.user;
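One possible way to splice the hashes back in, sketched here against made-up sample data standing in for the real grants.sql and the SELECT output (the user name and hash are invented; GNU sed is assumed):

```shell
# Sample stand-ins for the real pt-show-grants output and the user/hash
# pairs gathered from mysql.user; both values below are made up.
cat > grants.sql <<'EOF'
GRANT USAGE ON *.* TO 'user'@'%' IDENTIFIED BY PASSWORD;
EOF
cat > hashes.txt <<'EOF'
user *2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19
EOF

# Append each user's hash after its IDENTIFIED BY PASSWORD clause.
while read -r u h; do
  sed -i "s|TO '${u}'@\([^ ]*\) IDENTIFIED BY PASSWORD|TO '${u}'@\1 IDENTIFIED BY PASSWORD '${h}'|" grants.sql
done < hashes.txt
```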

Finally, run mydumper to export the data:

mydumper -t 8 \
--compress \
--triggers \
--routines \
--events \
--rows=10000000 \
-v 3 \
--long-query-guard 999999 \
--no-locks \
--outputdir /backup/export \
--logfile /backup/mydumper.log \
--regex '^(?!(mysql|test|performance_schema|information_schema|sys))' \
-O skip.txt \
--defaults-file /backup/aurora.cnf

The number of threads should match the number of CPUs of the instance running mydumper. In the skip.txt file, you can include any tables that you don’t want to copy. The --rows argument gives you the ability to split tables into chunks of X number of rows. Each chunk can run in parallel, so it is a huge speed boost for big tables.
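For reference, skip.txt is just a plain list of database.table entries, one per line; the table names in this sketch are hypothetical:

```shell
# Sketch: a skip.txt excluding two (made-up) tables from the export.
cat > skip.txt <<'EOF'
myapp.audit_log
myapp.sessions_archive
EOF
```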

Importing the Data

We need to stand up a MySQL instance to do the data import. In order to speed up the process as much as possible, I suggest doing a number of optimizations to my.cnf as follows:

[mysqld]
pid-file=/var/run/mysqld/mysqld.pid
log-error=/var/log/mysqld.log
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log_slave_updates
innodb_buffer_pool_size=16G
binlog_format=ROW
innodb_log_file_size=1G
innodb_flush_method=O_DIRECT
innodb_flush_log_at_trx_commit=0
server-id=1000
log-bin=/log/mysql-bin
sync_binlog=0
master_info_repository=TABLE
relay_log_info_repository=TABLE
query_cache_type=0
query_cache_size=0
innodb_flush_neighbors=0
innodb_io_capacity_max=10000
innodb_stats_on_metadata=off
max_allowed_packet=1G
net_read_timeout=60
performance_schema=off
innodb_adaptive_hash_index=off
expire_logs_days=3
sql_mode=NO_ENGINE_SUBSTITUTION
innodb_doublewrite=off

Note that mydumper is smart enough to turn off the binary log for the importer threads.

After the import is complete, it is important to revert these settings to “safer” values: innodb_doublewrite, innodb_flush_log_at_trx_commit, sync_binlog, and also enable performance_schema again.

The next step is to create an empty schema by running myloader:

myloader \
-d /backup/schema \
-v 3 \
-h localhost \
-u root \
-p percona

At this point, we can easily introduce modifications like adding indexes, since the tables are empty. We can also restore the users at this time:

(echo "SET SQL_LOG_BIN=0;" ; cat grants.sql ) | mysql -uroot -ppercona -f

Now we are ready to restore the actual data using myloader. It is recommended to run this inside a screen session:

myloader -t 4 \
-d /backup/export \
-q 100 \
-v 3 \
-h localhost \
-u root \
-p percona

The rule of thumb here is to use half the number of vCPU threads. I also normally like to reduce mydumper default transaction size (1000) to avoid long transactions, but your mileage may vary.
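That rule of thumb can be scripted; this sketch derives the thread count from the machine’s vCPUs (nproc is assumed available) and prints the resulting myloader invocation:

```shell
# Half the vCPUs, but never fewer than one thread.
THREADS=$(( $(nproc) / 2 ))
if [ "$THREADS" -lt 1 ]; then THREADS=1; fi

# -q 100 lowers the default queries-per-transaction to avoid long transactions.
echo "myloader -t ${THREADS} -d /backup/export -q 100 -h localhost -u root -p percona"
```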

After the import process is done, we can leverage faster methods (like snapshots or Percona XtraBackup) to seed any remaining external replicas.

Setting Up Replication

The final step is setting up replication from the actual production cluster (not the temporary one!) to your external instance.

It is a good idea to create a dedicated user for this process in the source instance, as follows:

CREATE USER 'repl'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

Now we can start replication, using the binary log coordinates that we captured before:

CHANGE MASTER TO MASTER_HOST='aurora-cluster-gh5s6lnli6.us-west-1.rds.amazonaws.com', MASTER_USER='repl', MASTER_PASSWORD='percona', MASTER_LOG_FILE='mysql-bin-changelog.034259', MASTER_LOG_POS=32068147;
START SLAVE;

Final Words

Unfortunately, there is no quick and easy method to get a large dataset out of an Aurora cluster. We have seen how mydumper and myloader can save a lot of time when creating external replicas, by introducing parallel operations. We also reviewed some good practices and configuration tricks for speeding up the data loading phase as much as possible.


Optimize your database performance with Percona Monitoring and Management, a free, open source database monitoring tool. It is designed to work with Amazon RDS MySQL and Amazon Aurora MySQL, with a specific dashboard for monitoring Amazon Aurora MySQL using CloudWatch and direct sampling of MySQL metrics.

Visit the Demo

Aug
26
2020
--

LaunchNotes raises a $1.8M seed round to help companies communicate their software updates

LaunchNotes, a startup founded by the team behind Statuspage (which Atlassian later acquired) and the former head of marketing for Jira, today announced that it has raised a $1.8 million seed round co-led by Cowboy Ventures and Bull City Ventures. In addition, Tim Chen (general partner, Essence Ventures), Eric Wittman (chief growth officer, JLL Technologies), Kamakshi Sivaramakrishnan (VP Product, LinkedIn), Scot Wingo (co-founder and CEO, Spiffy), Lin-Hua Wu (chief communications officer, Dropbox) and Steve Klein (co-founder, Statuspage) are participating in this round.

The general idea behind LaunchNotes is to help businesses communicate their software updates to internal and external customers, something that has become increasingly important as the speed of software developments — and launches — has increased.

In addition to announcing the new funding round, LaunchNotes also today said that it will revamp its free tier to include the ability to communicate updates externally through public embeds as well. Previously, users needed to be on a paid plan to do so. The team also now allows businesses to customize the look and feel of these public streams more and it did away with subscriber limits.

“The reason we’re doing this is largely because [ … ] our long-term goal is to drive this shift in how release communications is done,” LaunchNotes co-founder Jake Brereton told me. “And the easiest way we can do that and get as many teams on board as possible is to lower the barrier to entry. Right now, that barrier to entry is asking users to pay for it.”

As Brereton told me, the company has gained about 100 active users since it launched three months ago.

Image Credits: LaunchNotes

“I think, more than anything, our original thesis has been validated much more than I expected,” co-founder and CEO Tyler Davis added. “This problem really does scale with team size and in a very linear way and the interest that we’ve had has largely been on the much larger, enterprise team side. It’s just become very clear that that specific problem — while it is an issue for smaller teams — is much more of a critical problem as you grow and as you scale out into multiple teams and multiple business units.”

It’s maybe no surprise then that many of the next items on the team’s roadmap include features that large companies would want from a tool like this, including integrations with issue trackers, starting with Jira, single sign-on solutions and better team management tools.

“With that initial cohort being on the larger team size and more toward enterprise, issue tracker integration is a natural first step into our integrations platform, because a lot of change status currently lives in all these different tools and all these different processes and LaunchNotes is kind of the layer on top of that,” explained co-founder Tony Ramirez. “There are other integrations with things like feature flagging systems or git tools, where we want LaunchNotes to be the one place where people can go. And for these larger teams, that pain is more acute.”

The fact that LaunchNotes is essentially trying to create a system of record for product teams was also part of what attracted Cowboy Ventures founder Aileen Lee to the company.

Image Credits: LaunchNotes

“One of the things that I thought was kind of exciting is that this is potentially a new system of record for product people to use that kind of lives in different places right now — you might have some of it in Jira and some in Trello, or Asana, and some of that in Sheets and some of it in Airtable or Slack,” she said. She also believes that LaunchNotes will make a useful tool when bringing on new team members or handing off a product to another developer.

She also noted that the founding team, which she believes has the ideal background for building this product, was quite upfront about the fact that it needs to bring more diversity to the company. “They recognized, even in the first meeting, ‘Hey, we understand we’re three guys, and it’s really important to us to actually build out [diversity] on our cap table and in our investing team, but then also in all of our future hires so that we are setting our company up to be able to attract all kinds of people,’” she said.

Aug
26
2020
--

Cisco acquiring BabbleLabs to filter out the lawn mower screeching during your video conference

We’ve all been in a video conference, especially this year, when the neighbor started mowing the lawn or kids were playing outside your window — and it can get pretty loud. Cisco, which owns the WebEx video conferencing service, wants to do something about that, and late yesterday it announced it was going to acquire BabbleLabs, a startup that can help filter out background noise.

BabbleLabs has a very particular set of skills. It uses artificial intelligence to enhance the speaking voice, while filtering out those unwanted background noises that seem to occur whenever you happen to be in a meeting.

Interestingly enough, Cisco also sees this as a kind of privacy play by removing background conversation. Jeetu Patel, senior vice president and general manager in the Cisco Security and Applications Business Unit, says that this should go a long way toward improving the meeting experience for Cisco users.

“Their technology is going to provide our customers with yet another important innovation — automatically removing unwanted noise — to continue enabling exceptional Webex meeting experiences,” Patel, who was at Box for many years before joining Cisco, recently said in a statement.

In a blog post, BabbleLabs CEO and co-founder Chris Rowen wrote that conversations about being acquired by Cisco began just recently, and the deal came together pretty quickly. “We quickly reached a common view that merging BabbleLabs into the Cisco Collaboration team could accelerate our common vision dramatically,” he wrote.

BabbleLabs, which launched three years ago and raised $18 million, according to Crunchbase, had an interesting but highly technical idea. That can sometimes be difficult to translate into a viable commercial product, but it makes for a highly attractive acquisition target for a company like Cisco.

Brent Leary, founder and principal analyst at CRM Essentials, says this acquisition could be seen as part of a broader industry consolidation. “We’re seeing consolidation taking place as the big web conferencing players are snapping up smaller players to round out their platforms,” he said.

He added, “WebEx may not be getting the attention that Zoom is, but it still has a significant presence in the enterprise, and this acquisition will allow them to keep improving their offering.”

The deal is expected to close in the current quarter after regulatory approval. Upon closing, BabbleLabs employees will become part of Cisco’s Collaboration Group.

Aug
25
2020
--

Industry experts say it’s full speed ahead as Snowflake files S-1

When Snowflake filed its S-1 ahead of an upcoming IPO yesterday, it wasn’t exactly a shock. The company, which has raised $1.4 billion, had been valued at $12.4 billion in its last private raise in February. CEO Frank Slootman, who took over from Bob Muglia in May last year, didn’t hide the fact that going public was the end game.

When we spoke to him in February at the time of his mega $479 million raise, he was candid about the fact that he wanted to take his company to the next level, and predicted it could happen as soon as this summer. In spite of the pandemic and the economic fallout from it, the company decided now was the time to go, as did four other companies yesterday, including JFrog, Sumo Logic, Unity and Asana.

If you haven’t been following this company through its massive private fundraising process, investors see in Snowflake a way to take the storage of massive amounts of data and move it to the cloud. This concept is known as a cloud data warehouse, as it stores immense amounts of data.

While the Big 3 cloud companies all offer something similar, Snowflake has the advantage of working on any cloud, and at a time where data portability is highly valued, enables customers to shift data between clouds.

We spoke to several industry experts to get their thoughts on what this filing means for Snowflake, which after taking a blizzard of cash, has to now take a great idea and shift it into the public markets.

Pandemic? What pandemic?

Big market opportunities usually require big investments to build companies that last, that typically go public, and that’s why investors were willing to pile up the dollars to help Snowflake grow. Blake Murray, a research analyst at Canalys says the pandemic is actually working in the startup’s favor as more companies are shifting workloads to the cloud.

“We know that demand for cloud services is higher than ever during this pandemic, which is an obvious positive for Snowflake. Snowflake also services multi-cloud environments, which we see in increasing adoption. Considering the speed it is growing at and the demand for its services, an IPO should help Snowflake continue its momentum,” Murray told TechCrunch.

Leyla Seka, a partner at Operator Collective who spent many years at Salesforce, agrees that the pandemic is forcing many companies to move to the cloud faster than they might have previously. “COVID is a strange motivator for enterprise SaaS. It is speeding up adoption in a way I have never seen before,” she said.

It’s clear to Seka that we’ve moved quickly past the early cloud adopters, and it’s in the mainstream now where a company like Snowflake is primed to take advantage. “Keep in mind, I was at Salesforce for years telling businesses their data was safe in the cloud. So we certainly have crossed the chasm, so to speak and are now in a rapid adoption phase,” she said.

So much coopetition

The fact is Snowflake is in an odd position when it comes to the big cloud infrastructure vendors. It both competes with them on a product level, and as a company that stores massive amounts of data, it is also an excellent customer for all of them. It’s kind of a strange position to be in says Canalys’ Murray.

“Snowflake both relies on the infrastructure of cloud giants — AWS, Microsoft and Google — and competes with them. It will be important to keep an eye on the competitive dynamic even though Snowflake is a large customer for the giants,” he explained.

Forrester analyst Noel Yuhanna agrees, but says the IPO should help Snowflake take on these companies as they expand their own cloud data warehouse offerings. He added that in spite of that competition, Snowflake is holding its own against the big companies. In fact, he says that it’s the number one cloud data warehouse clients inquire about, other than Amazon Redshift. As he points out, Snowflake has some key advantages over the cloud vendors’ solutions.

“Based on Forrester Wave research that compared over a dozen vendors, Snowflake has been positioned as a Leader. Enterprises like Snowflake’s ease of use, low cost, scalability and performance capabilities. Unlike many cloud data warehouses, Snowflake can run on multiple clouds such as Amazon, Google or Azure, giving enterprises choices to choose their preferred provider.”

Show them more money

In spite of the vast sums of money the company has raised in the private market, it had decided to go public to get one final chunk of capital. Patrick Moorhead, founder and principal analyst at Moor Insight & Strategy says that if the company is going to succeed in the broader market, it needs to expand beyond pure cloud data warehousing, in spite of the huge opportunity there.

“Snowflake needs the funding as it needs to expand its product footprint to encompass more than just data warehousing. It should be focused less on niches and more on the entire data lifecycle including data ingest, engineering, database and AI,” Moorhead said.

Forrester’s Yuhanna agrees that Snowflake needs to look at new markets, and the IPO will give it the money to do that. “The IPO will help Snowflake expand its innovation path, especially to support new and emerging business use cases, and possibly look at new market opportunities, such as expanding to on-premises to deliver hybrid-cloud capabilities,” he said.

It would make sense for the company to expand beyond its core offerings as it heads into the public markets, but the cloud data warehouse market is quite lucrative on its own. It’s a space that has required a considerable amount of investment to build a company, but as it heads toward its IPO, Snowflake should be well positioned to be a successful company for years to come.

Aug
25
2020
--

New Zendesk dashboard delivers customer service data in real time

Zendesk has been offering customers the ability to track customer service statistics for some time, but it has always been a look back. Today, the company announced a new product called Explore Enterprise that lets customers capture that valuable info in real time, and share it with anyone in the organization, whether they have a Zendesk license or not.

While Zendesk has had Explore in place for a couple of years now, Jon Aniano, senior VP of product at Zendesk, says the new enterprise product is in response to growing customer data requirements. “We now have a way to deliver what we call Live Team Dashboards, which delivers real-time analytics directly to Zendesk users,” Aniano told TechCrunch.

In the days before COVID that meant displaying these on big monitors throughout the customer service center. Today, as we deal with the pandemic, and customer service reps are just as likely to be working from home, it means giving management the tools they need to understand what’s happening in real time, a growing requirement for Zendesk customers as they scale, regardless of the pandemic.

“What we’ve found over the last few years is that our customers’ appetite for operational analytics is insatiable, and as customers grow, as customer service needs get more complex, the demands on a contact center operator or customer service team are higher and higher, and teams really need new sets of tools and new types of capabilities to meet what they’re trying to do in delivering customer service at scale in the world,” Aniano told TechCrunch.

One of the reasons for this is the shift from phone and email as the primary ways of accessing customer service to messaging tools like WhatsApp. “With the shift to messaging, there are new demands on contact centers to be able to handle real-time interactions at scale with their customers,” he said.

Meeting that kind of demand requires the real-time analytics Zendesk is providing with this announcement. It arms managers with the data they need to put customer service resources where they are needed most in the moment.

But Zendesk is also giving customers the ability to share these statistics with anyone in the company. “Users can share a dashboard or historical report with anybody in the company regardless of whether they have access to Zendesk. They can share it in Slack, or they can embed a dashboard anywhere where other people in the company would like to have access to those metrics,” Aniano explained.

The new service will be available starting on August 31 for $29 per user per month.

Aug
25
2020
--

Eden intros SaaS tools in a bid to become a more comprehensive office management platform

Eden, the office management platform founded by Joe du Bey and Kyle Wilkinson, is today announcing the launch of several new enterprise software features. The company, which offers a marketplace for office managers to procure services like office cleaning, repairs, etc., is looking to offer a more comprehensive platform.

The software features include a COVID team safety tool that tracks who is coming into the office and lets employees reserve a specific desk to help ensure social distancing precautions are being taken.

“For us, the pandemic really accelerated our plans around enterprise tools,” said Joe du Bey. “We realized by talking to our clients that what they need right now isn’t services. Services are important, but what they really want in this moment is to have software so they can get back into the office.”

Eden is also introducing a service desk ticketing tool to allow workers to make requests or file a ticket for a broken piece of equipment from their own desktop, as well as a visitor management tool and a room booking tool.

The company’s acquisition of Managed By Q, its biggest competitor in the services space, also greatly accelerated its ability to deliver software. Managed By Q, which was acquired by WeWork in 2019 for $220 million, was already on the trajectory of building out software well before its acquisition by Eden, and had itself acquired companies like Hivy to offer SaaS-based tools to customers.

As Eden grows its product portfolio, competition still abounds. Envoy (with just under $60 million in funding) has been in the visitor management space since its inception and is looking to broaden its product portfolio beyond office visitors. UpKeep is charging into the service ticket space with a mobile app to make it easier for service workers within an office to do their job and move seamlessly from task to task. Meanwhile, Robin is in the mix with its own room booking platform.

The point? There is clearly a rush to build out a platform that helps folks manage the physical space of an office and the people within it. Eden, with $40 million in total funding, is well positioned to duke it out for the top spot among a variety of competitors who are angling to ‘do it all.’

“This is a board meeting question: are we fighting too many battles or is comprehensiveness our most important asset?” said du Bey. “We have a completeness to our vision. A lot of our customers are saying they want a few tools from one place versus the very fragmented experience they have today. But there are trade-offs in comprehensiveness. It means that someone can spend all day building a hundred integrations for their app that for us might not be possible. So, there are some really interesting trade-offs.”

That's not to say it has been without hardship, however. Eden had to lay off about 40 percent of its workforce amid the coronavirus pandemic. And though COVID has slowed growth, du Bey says that revenue in April 2020 was still higher than it was the year prior.

Alongside trying to support marketplace partners and customers through the pandemic, Eden has also introduced new ways to search for service providers, including a way to solicit a bid from black-owned businesses in the wake of the Black Lives Matter movement.

The Eden team is 52 percent female. Black employees represent 12 percent of the workforce, and Latinx employees represent 8 percent of the workforce.
