Oct 11, 2019
--

Why it might have been time for new leadership at SAP

SAP CEO Bill McDermott announced last night that he was stepping down after a decade at the helm, a move that shocked many. It’s always tough to measure the performance of an enterprise leader when he or she leaves. Some people look at the stock price over their tenure, some at culture, some at the acquisitions made. Whatever the measure, it will be up to the new co-CEOs, Jennifer Morgan and Christian Klein, to put their own mark on the company.

What form that will take remains to be seen. McDermott’s tenure ended without much warning, but it also happened against a wider backdrop that includes other top executives and board members leaving the company over the last year, an activist investor coming on board and some controversial licensing changes in recent years.

Why now?

The timing certainly felt sudden. McDermott, who was interviewed at TechCrunch Sessions: Enterprise last month, sounded more like a man who was fully engaged in the job, not one ready to leave, but a month later he’s gone.

But as McDermott told our own Frederic Lardinois last night, after 10 years, it seemed like the right time to leave. “The consensus was 10 years is about the right amount of time for a CEO because you’ve accomplished a lot of things if you did the job well, but you certainly didn’t stay too long. And if you did really well, you had a fantastic success plan,” he said in the interview.

There is no reason to doubt that, but you should at least look at the context and get a sense of what has been going on in the company. As the new co-CEOs take over for McDermott, several other executives, including SAP SuccessFactors COO Brigette McInnis-Day; Robert Enslin, president of its cloud business and a board member; CTO Björn Goerke; and Bernd Leukert, a member of the executive board, have all left this year.

Oct 11, 2019
--

How to Set Up Streaming Replication in PostgreSQL 12


PostgreSQL 12 can be considered revolutionary considering the performance boost we observe with partitioning enhancements, planner improvements, several SQL features, indexing improvements, etc. You may see some of these features discussed in future blog posts. But let me start this blog with something interesting. You might have already seen some news that there is no recovery.conf file in a standby anymore, and that the replication setup (streaming replication) has changed slightly in PostgreSQL 12. We have earlier blogged about the steps involved in setting up simple Streaming Replication until PostgreSQL 11 and also about using replication slots for the same. Let’s see how different it is to set up the same Streaming Replication in PostgreSQL 12.

Installing PostgreSQL 12 on Master and Standby

On CentOS/RedHat, you may use the rpms available in the PGDG repo (the following link may change depending on your OS release).

# as root:
yum install -y https://yum.postgresql.org/12/redhat/rhel-7.4-x86_64/pgdg-redhat-repo-latest.noarch.rpm
yum install -y postgresql12-server

Steps to set up Streaming Replication in PostgreSQL 12

In the following steps, the Master server is: 192.168.0.108 and the Standby server is: 192.168.0.107

Step 1:
Initialize and start PostgreSQL on the Master, if not done already.

## Preparing the environment
$ sudo su - postgres
$ echo "export PATH=/usr/pgsql-12/bin:$PATH PAGER=less" >> ~/.pgsql_profile
$ source ~/.pgsql_profile

## As root, initialize and start PostgreSQL 12 on the Master
$ /usr/pgsql-12/bin/postgresql-12-setup initdb
$ systemctl start postgresql-12

 

Step 2:
Modify the parameter listen_addresses to allow a specific IP interface or all interfaces (using *). Modifying this parameter requires a restart of the PostgreSQL instance for the change to take effect.

# as postgres
$ psql -c "ALTER SYSTEM SET listen_addresses TO '*'";
ALTER SYSTEM

# as root, restart the service
$ systemctl restart postgresql-12

You may not have to set any other parameters on the Master for simple replication setup, because the defaults hold good.
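If you want to double-check that, a quick look at the relevant parameters on the Master is harmless. The following is only a verification sketch; the values shown in the comments are the PostgreSQL 12 defaults.

# as postgres on the Master
$ psql -c "SHOW wal_level"        # 'replica' by default, sufficient for streaming replication
$ psql -c "SHOW max_wal_senders"  # 10 by default, plenty for a single standby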

 

Step 3:
Create a user for replication on the Master. It is discouraged to use the superuser postgres to set up replication, though it works.

postgres=# CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'secret';
CREATE ROLE

 

Step 4:
Allow replication connections from the Standby to the Master by appending a line similar to the following to the pg_hba.conf file of the Master. If you are enabling automatic failover using any external tool, you must also allow replication connections from the Master to the Standby. In the event of a failover, the Standby may be promoted to Master, and the old Master then needs to replicate changes from the new Master (previously the standby). You may use any of the authentication methods supported by PostgreSQL today.

$ echo "host replication replicator 192.168.0.107/32 md5" >> $PGDATA/pg_hba.conf

## Get the changes into effect through a reload.

$ psql -c "select pg_reload_conf()"

 

Step 5:
You may use pg_basebackup to back up the data directory of the Master from the Standby. While creating the backup, you may also tell pg_basebackup to create the replication-specific files and entries in the data directory using the "-R" option.

## This command must be executed on the standby server.
$ pg_basebackup -h 192.168.0.108 -U replicator -p 5432 -D $PGDATA -Fp -Xs -P -R
Password:
25314/25314 kB (100%), 1/1 tablespace

You may use multiple approaches, such as rsync or any other disk backup method, to copy the Master’s data directory to the Standby. But there is an important file (standby.signal) that must exist in a standby’s data directory to help postgres determine its state as a standby. It is automatically created when you use the "-R" option while taking pg_basebackup. If not, you may simply use touch to create this empty file.

$ touch $PGDATA/standby.signal

$ ls -l $PGDATA
total 60
-rw-------. 1 postgres postgres 224 Oct 8 16:41 backup_label
drwx------. 5 postgres postgres 41 Oct 8 16:41 base
-rw-------. 1 postgres postgres 30 Oct 8 16:41 current_logfiles
drwx------. 2 postgres postgres 4096 Oct 8 16:41 global
drwx------. 2 postgres postgres 32 Oct 8 16:41 log
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_commit_ts
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_dynshmem
-rw-------. 1 postgres postgres 4581 Oct 8 16:41 pg_hba.conf
-rw-------. 1 postgres postgres 1636 Oct 8 16:41 pg_ident.conf
drwx------. 4 postgres postgres 68 Oct 8 16:41 pg_logical
drwx------. 4 postgres postgres 36 Oct 8 16:41 pg_multixact
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_notify
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_replslot
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_serial
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_snapshots
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_stat
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_stat_tmp
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_subtrans
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_tblspc
drwx------. 2 postgres postgres 6 Oct 8 16:41 pg_twophase
-rw-------. 1 postgres postgres 3 Oct 8 16:41 PG_VERSION
drwx------. 3 postgres postgres 60 Oct 8 16:41 pg_wal
drwx------. 2 postgres postgres 18 Oct 8 16:41 pg_xact
-rw-------. 1 postgres postgres 288 Oct 8 16:41 postgresql.auto.conf
-rw-------. 1 postgres postgres 26638 Oct 8 16:41 postgresql.conf
-rw-------. 1 postgres postgres 0 Oct 8 16:41 standby.signal

One of the most important observations should be the contents of the postgresql.auto.conf file on the standby server. As you see in the following log, an additional parameter, primary_conninfo, has been added to this file. This parameter tells the standby about its Master. If you haven’t used pg_basebackup with the -R option, you would not see this entry in the file on the standby server, which means you have to add it manually.

$ cat $PGDATA/postgresql.auto.conf
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
listen_addresses = '*'
primary_conninfo = 'user=replicator password=secret host=192.168.0.108 port=5432 sslmode=prefer sslcompression=0 gssencmode=prefer krbsrvname=postgres target_session_attrs=any'
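If you copied the data directory without the -R option, you can add this entry yourself on the Standby before starting it. This is a minimal sketch that simply mirrors the connection string shown above.

## On the Standby, only needed if pg_basebackup was not run with -R
$ echo "primary_conninfo = 'user=replicator password=secret host=192.168.0.108 port=5432'" >> $PGDATA/postgresql.auto.conf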

postgresql.auto.conf is the configuration file that is read last when you start Postgres. So, if a parameter has different values in the postgresql.conf and postgresql.auto.conf files, the value set in postgresql.auto.conf is the one PostgreSQL uses. Also, any parameter that has been modified using ALTER SYSTEM is automatically written to the postgresql.auto.conf file by postgres.
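As a small illustration of that behaviour, any ALTER SYSTEM call ends up as a line in postgresql.auto.conf. The following sketch uses work_mem only as a harmless example parameter and resets it afterwards.

$ psql -c "ALTER SYSTEM SET work_mem TO '64MB'"
ALTER SYSTEM
$ grep work_mem $PGDATA/postgresql.auto.conf
work_mem = '64MB'
$ psql -c "ALTER SYSTEM RESET work_mem"
ALTER SYSTEM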

How was the replication configuration handled until PostgreSQL 11?

Until PostgreSQL 11, we had to create a file named recovery.conf containing the following minimal parameters. If standby_mode is on, the instance is considered to be a standby.

$ cat $PGDATA/recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.108 port=5432 user=replicator password=secret'

So the first difference between PostgreSQL 12 and earlier versions (up to PostgreSQL 11) is that the standby_mode parameter no longer exists in PostgreSQL 12; it has been replaced by an empty file, standby.signal, in the standby’s data directory. The second difference is the parameter primary_conninfo, which can now be added to the postgresql.conf or postgresql.auto.conf file in the standby’s data directory.

 

Step 6:
Start PostgreSQL using pg_ctl on the Standby.

$ pg_ctl -D $PGDATA start
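Once it starts, you can quickly confirm that the instance is indeed running as a standby; pg_is_in_recovery() returns true (t) on a standby.

## On the Standby
$ psql -c "select pg_is_in_recovery()"
 pg_is_in_recovery
-------------------
 t
(1 row)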

 

Step 7:
Verify the replication between the Master and the Standby. To do so, run the following command on the Master. The output shows a lot of details about the standby and the lag between the Master and the Standby.

$ psql -x -c "select * from pg_stat_replication"
-[ RECORD 1 ]----+------------------------------
pid | 2522
usesysid | 16384
usename | replicator
application_name | walreceiver
client_addr | 192.168.0.107
client_hostname |
client_port | 36382
backend_start | 2019-10-08 17:15:19.658917-04
backend_xmin |
state | streaming
sent_lsn | 0/CB02A90
write_lsn | 0/CB02A90
flush_lsn | 0/CB02A90
replay_lsn | 0/CB02A90
write_lag | 00:00:00.095746
flush_lag | 00:00:00.096522
replay_lag | 00:00:00.096839
sync_priority | 0
sync_state | async
reply_time | 2019-10-08 17:18:04.783975-04
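The same verification can be done from the Standby’s side as well: the pg_stat_wal_receiver view shows the WAL receiver process and the Master it is connected to. The values below are only illustrative.

## On the Standby
$ psql -x -c "select status, sender_host, sender_port, received_lsn from pg_stat_wal_receiver"
-[ RECORD 1 ]-+--------------
status        | streaming
sender_host   | 192.168.0.108
sender_port   | 5432
received_lsn  | 0/CB02A90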

Enabling archiving on the Master and standby recovery from archives

Most of the time, the default or modified retention settings for WAL segments on the Master may not be enough to maintain healthy replication between the Master and its standby. So we need the WALs to be safely archived to another disk or a remote backup server. The standby can replay these archived WAL segments once they are gone from the Master.

To enable archiving on the Master, we can still use the same approach of setting the following two parameters.

archive_mode = ON
archive_command = 'cp %p /archives/%f' ## Modify this with an appropriate shell command.
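A minimal sketch of putting those two parameters in place on the Master could look like the following; it assumes /archives already exists and is writable by the postgres user. The pg_stat_archiver view can then be used to confirm that WAL segments are being archived.

## On the Master, as postgres
$ psql -c "ALTER SYSTEM SET archive_mode TO 'on'"
$ psql -c "ALTER SYSTEM SET archive_command TO 'cp %p /archives/%f'"

## As root, restart to apply archive_mode
$ systemctl restart postgresql-12

## Back as postgres, verify archiving
$ psql -c "select archived_count, last_archived_wal from pg_stat_archiver"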

But to enable recovery from archives on a standby, we used to add a parameter named restore_command to the recovery.conf file until PostgreSQL 11. Starting from PostgreSQL 12, we can add the same parameter to the postgresql.conf or postgresql.auto.conf file of the standby. Please note that a restart of PostgreSQL is required to apply changes made to the archive_mode and restore_command parameters.

echo "restore_command = 'cp /archives/%f %p'" >> $PGDATA/postgresql.auto.conf
pg_ctl -D $PGDATA restart -mf

In my next blog post, I shall talk about point-in-time recovery in PostgreSQL 12, where I will discuss a few more recovery-related parameters in detail. Meanwhile, have you tried Percona Distribution for PostgreSQL? It is a collection of finely tested and implemented open source tools and extensions along with PostgreSQL 11, maintained by Percona. Please subscribe to our blog posts to learn about more interesting features in PostgreSQL.


Oct 11, 2019
--

Descartes Labs snaps up $20M more for its AI-based geospatial imagery analytics platform

Satellite imagery holds a wealth of information that could be useful for industries, science and humanitarian causes, but one big and persistent challenge with it has been a lack of effective ways to tap that disparate data for specific ends.

That’s created a demand for better analytics, and now, one of the startups that has been building solutions to do just that is announcing a round of funding as it gears up for expansion. Descartes Labs, a geospatial imagery analytics startup out of Santa Fe, New Mexico, is today announcing that it has closed a $20 million round of funding, money that CEO and founder Mark Johnson described to me as a bridge round ahead of the startup closing and announcing a larger growth round.

The funding is being led by Union Grove Venture Partners, with Ajax Strategies, Crosslink Capital, and March Capital Partners (which led its previous round) also participating. It brings the total raised by Descartes Labs to $60 million, and while Johnson said the startup would not be disclosing its valuation, PitchBook notes that it is $220 million ($200 million pre-money in this round).

As a point of comparison, another startup in the area of geospatial analytics, Orbital Insight, is reportedly now raising money at a $430 million valuation (that data is from January of this year, and we’ve contacted the company to see if it ever closed).

Santa Fe — a city popular with retirees that counts tourism as its biggest industry — is an unlikely place to find a tech startup. Descartes Labs’ presence there is a result of the fact that it is a spinoff from the Los Alamos National Laboratory near the city.

Johnson — who had lived in San Francisco before coming to Santa Fe to help create Descartes Labs (his previous experience building Zite for media, he said, led the Los Alamos scientists to first conceive of the Descartes Labs IP as the basis of a kind of search engine) — admitted that he never thought the company would stay headquartered there beyond a short initial phase of growth of six months.

However, it turned out that the trends around more distributed workforces (and cloud computing to enable that), engineers looking for employment alternatives to living in pricey San Francisco, plus the heated competition for talent you get in the Valley all came together in a perfect storm that helped Descartes Labs establish and thrive on its home turf.

Descartes Labs — named after the seminal philosopher/mathematician René Descartes — describes itself as a “data refinery”. By this, it means it ingests a lot of imagery and unstructured data related to the earth that is picked up primarily by satellites but also by other sensors (Johnson notes that its sources include data from publicly available satellites, from NASA and the European Space Agency, and from the companies themselves); applies AI-based techniques, including computer vision analysis and machine learning, to make sense of the sometimes-grainy imagery; and distills and orders it to create insights into what is going on down below, and how that is likely to evolve.

[Image: methane detection map from Descartes Labs]

This includes not just what is happening on the surface of the earth, but also in the air above it: Descartes Labs has worked on projects to detect levels of methane gas in oil fields, the spread of wildfires, how crops might grow in a particular area, and the impact of weather patterns on it all.

It has produced work for a range of clients that have included governments (the methane detection, pictured above, was commissioned as part of New Mexico’s effort to reduce greenhouse gas emissions), energy giants and industrial agribusiness, and traders.

“The idea is to help them take advantage of all the new data going online,” Johnson said, noting that this can help, for example, bankers forecast how much a commodity will trade for, or the effect of a change in soil composition on a crop.

The fact that Descartes Labs’ work has connected it with the energy industry gives an interesting twist to the use of the phrase “data refinery”. But in case you were wondering, Johnson said that the company goes through a process of vetting potential customers to determine if the data Descartes Labs provides to them is for a positive end, or not.

“We have a deep belief that we can help them become more efficient,” he said. “Those looking at earth data are doing so because they care about the planet and are working to try to become more sustainable.”

Johnson also said (in answer to my question about it) that so far, there haven’t been any instances where the startup has been prohibited to work with any customers or countries, but you could imagine how — in this day of data being ‘the new oil’ and the fulcrum of power — that could potentially be an issue. (Related to this: Orbital Insight counts In-Q-Tel, the CIA’s venture arm, as one of its backers.)

Looking ahead, the company is building what it describes as a “digital twin” of the earth, the idea being that in doing so it can better model the imagery that it ingests and link up data from different regions more seamlessly (since, after all, a climatic event in one part of the world inevitably impacts another). Notably, “digital twinning” is a common concept that we see applied in other AI-based enterprises to better predict activity: this is the approach that, for example, Forward Networks takes when building models of an enterprise’s network to determine how apps will behave and identify the reasons behind an outage.

In addition to the funding round, Descartes Labs named Phil Fraher its new CFO, and is announcing Veery Maxwell, Director for Energy Innovation, and Patrick Cairns, who co-founded UGVP, as new board observers.

Oct 10, 2019
--

SAP’s Bill McDermott on stepping down as CEO

SAP’s CEO Bill McDermott today announced that he wouldn’t seek to renew his contract for the next year and would step down immediately after nine years at the helm of the German enterprise giant.

Shortly after the announcement, I talked to McDermott, as well as SAP’s new co-CEOs Jennifer Morgan and Christian Klein. During the call, McDermott stressed that his decision to step down was very much a personal one, and that while he’s not ready to retire just yet, he simply believes that now is the right time for him to pass on the reins of the company.

To say that today’s news came as a surprise is a bit of an understatement, but it seems like it’s something McDermott has been thinking about for a while. But after talking to McDermott, Morgan and Klein, I can’t help but think that the actual decision came rather recently.

I last spoke to McDermott about a month ago, during a fireside chat at our TechCrunch Sessions: Enterprise event. At the time, I didn’t come away with the impression that this was a CEO on his way out (though McDermott reminded me that if he had already made his decision a month ago, he probably wouldn’t have given it away).

[Photo: Keeping an Enterprise Behemoth on Course with Bill McDermott]

“I’m not afraid to make decisions. That’s one of the things I’m known for,” he told me when I asked him about how the process unfolded. “This one, I did a lot of deep soul searching. I really did think about it very heavily — and I know that it’s the right time and that’s why I’m so happy. When you can make decisions from a position of strength, you’re always happy.”

He also noted that he has been with SAP for 17 years, with almost 10 years as CEO, and that he recently spent some time talking to fellow high-level CEOs.

“The consensus was 10 years is about the right amount of time for a CEO because you’ve accomplished a lot of things if you did the job well, but you certainly didn’t stay too long. And if you did really well, you had a fantastic success plan,” he said.

In “the recent past,” McDermott met with SAP chairman and co-founder Hasso Plattner to explain to him that he wouldn’t renew his contract. According to McDermott, both of them agreed that the company is currently at “maximum strength” and that this would be the best time to put the succession plan into action.

SAP co-CEO Jennifer Morgan

“With the continuity of Jennifer and Christian obviously already serving on the board and doing an unbelievable job, we said let’s control our destiny. I’m not going to renew, and these are the two best people for the job without question. Then they’ll get a chance to go to Capital Markets Day [in November]. Set that next phase of our growth story. Kick off the New Year — and do so with a clean slate and a clean run to the finish line.

“Very rarely do CEOs get the joy of handing over a company at maximum strength. And today is a great day for SAP. It’s a great day for me personally and Hasso Plattner, the chairman and [co-]founder of SAP. And also — and most importantly — a great day for Jennifer Morgan and Christian Klein.”

Don’t expect McDermott to just fade into the background, though, now that he is leaving SAP. If you’ve ever met or seen McDermott speak, you know that he’s unlikely to simply retire. “I’m busy. I’m passionate and I’m just getting warmed up,” he said.

As for the new leadership, Morgan and Klein noted that they hadn’t had a lot of time to think about the strategy going forward. Both previously held executive positions in the company and served on SAP’s board together for the last few years. For now, it seems, they are planning to continue on a similar path as McDermott.

“We’re excited about creating a renewed focus on the engineering DNA of SAP, combining the amazing strength and heritage of SAP — and many of the folks who have built the products that so many customers around the world run today — with a new DNA that’s come in from many of the cloud acquisitions that we’ve made,” Morgan said, noting that both she and Klein spent a lot of time over the last few months bringing their teams together in new ways. “So I think for us, that tapestry of talent and that real sense of urgency and support of our customers and innovation is top of mind for us.”

SAP co-CEO Christian Klein

Klein also stressed that he believes SAP’s current strategy is the right one. “We had unbelievable deals again in Q3 where we actually combined our latest innovations — where we combined Qualtrics with SuccessFactors with S/4 [Hana] to drive unbelievable business value for our customers. This is the way to go. The business case is there. I see a huge shift now towards S/4, and the core and business case is there, supporting new business models, driving automation, steering the company in real time. All of these assets are now coming together with our great cloud assets, so for me, the strategy works.”

Having co-CEOs can be a recipe for conflict, but McDermott’s time as CEO also started out as co-CEO, so the company does have some experience there. Morgan and Klein noted that they worked together on the SAP board before and know each other quite well.

What’s next for the new CEOs? “There has to be a huge focus on Q4,” Klein said. “And then, of course, we will continue like we did in the past. I’ve known Jen now for quite a while — there was a lot of trust there in the past and I’m really now excited to really move forward together with her and driving huge business outcomes for our customers. And let’s not forget our employees. Our employee morale is at an all-time high. And we know how important that is to our employees. We definitely want that to continue.”

It’s hard to imagine SAP without McDermott, but we’ve clearly not seen the last of him yet. I wouldn’t be surprised if we saw him pop up as the CEO of another company soon.

Below is my interview with McDermott from TechCrunch Sessions: Enterprise.

Oct 10, 2019
--

Bill McDermott steps down as SAP’s CEO

SAP today announced that Bill McDermott, its CEO for the last nine years, is stepping down immediately. The company says he decided not to renew his contract. SAP Executive Board members Jennifer Morgan and Christian Klein have been appointed co-CEOs.

McDermott, who started his business career as a deli owner in Amityville, Long Island, and recently spoke at our TechCrunch Sessions: Enterprise event, joined SAP in 2002 as the head of SAP North America. He became co-CEO in 2008 and the company’s sole CEO in 2014, making him the first American to take on this role at the German enterprise giant. Under his guidance, SAP’s annual revenue and stock price continued to increase. He’ll remain with the company in an advisory role for the next two months.

It’s unclear why McDermott decided to step down at this point. Activist investor Elliott Management recently disclosed a $1.35 billion stake in SAP, but when asked for a comment about today’s news, an Elliott spokesperson told us that it didn’t have any “immediate comment.”

It’s also worth noting that the company saw a number of defections among its executive ranks in recent months, with both SAP SuccessFactors COO Brigette McInnis-Day and Robert Enslin, the president of its cloud business and a board member, leaving the company for Google Cloud.

[Photo: Keeping an Enterprise Behemoth on Course with Bill McDermott]

“SAP would not be what it is today without Bill McDermott,” said chairman and co-founder Hasso Plattner in today’s announcement. “Bill made invaluable contributions to this company and he was a main driver of SAP’s transition to the cloud, which will fuel our growth for many years to come. We thank him for everything he has done for SAP. We also congratulate Jennifer and Christian for this opportunity to build on the strong foundation we have for the future of SAP. Bill and I made the decision over a year ago to expand Jennifer and Christian’s roles as part of a long-term process to develop them as our next generation of leaders. We are confident in their vision and capabilities as we take SAP to its next phase of growth and innovation.”

McDermott’s biggest bet in recent years came with the acquisition of Qualtrics for $8 billion. At our event last month, McDermott compared this acquisition to Apple’s acquisition of NeXT and Facebook’s acquisition of Instagram. “Qualtrics is to SAP what those M&A moves were to those wonderful companies,” he said. Under his leadership, SAP also acquired corporate expense and travel management company Concur for $8.3 billion and SuccessFactors for $3.4 billion.

“Now is the moment for everyone to begin an exciting new chapter, and I am confident that Jennifer and Christian will do an outstanding job,” McDermott said in today’s announcement. “I look forward to supporting them as they finish 2019 and lay the foundation for 2020 and beyond. To every customer, partner, shareholder and colleague who invested their trust in SAP, I can only relay my heartfelt gratitude and enduring respect.”


Oct 10, 2019
--

Top VCs, founders share how to build a successful SaaS company

Last week at TechCrunch Disrupt in San Francisco, we hosted a panel on the Extra Crunch stage on “How to build a billion-dollar SaaS company.” A better title probably would have been “How to build a successful SaaS company.”

We spoke to Whitney Bouck, COO at HelloSign; Jyoti Bansal, CEO and founder at Harness; and Neeraj Agrawal, a partner at Battery Ventures, to get their views on how to move through the various stages to build that successful SaaS company.

While there is no magic formula, we covered a lot of ground, including finding a product-market fit, generating early revenue, the importance of building a team, what to do when growth slows and finally, how to resolve the tension between growth and profitability.

Finding product-market fit

Neeraj Agrawal: When we’re talking to the market, what we’re really looking for is a repeatable pattern of use cases. So when we’re talking to prospects, are the words they use, the pain points they describe, very similar from call to call to call? Once we see that pattern, we know we have product-market fit, and then we can replicate that.

Jyoti Bansal: Revenue is one measure of product-market fit. Are customers adopting it and getting value out of it and renewing? Until you start getting a first set of renewals and a first set of expansions and happy successful customers, you don’t really have product-market fit. So that’s the only way you can know if the product is really working or not.

Whitney Bouck: It isn’t just about revenue — the measures of success at all phases have to somewhat morph. You’ve got to be looking at usage, at adoption, value renewals, expansion, and of course, the corollary, churn, to give you good health indicators about how you’re doing with product-market fit.

Generating early revenue

Jyoti Bansal: As founders we’ve realized, getting from idea to early revenue is one of the hardest things to do. The first million in revenue is all about street fighting. Founders have to go out there and win business and do whatever it takes to get to revenue.

As your revenue grows, what you focus on as a company changes. Zero to $1 million, your goal is to find the product-market fit, do whatever it takes to get early customers. One million to $10 million, you start scaling it. Ten million to $75 million is all about sales, execution, and [at] $75 million plus, the story changes to how do you go into new markets and things like that.

Whitney Bouck: You really do have to get that pull from the market to be able to really start the momentum and growth. The freemium model is one of the ways that we start to engage people — getting visibility into the product, getting exposure to the product, really getting people thinking about, and frankly, spreading the word about how this product can provide value.

Photo: Kimberly White/Getty Images for TechCrunch

 

Oct 10, 2019
--

Flaw in Cyberoam firewalls exposed corporate networks to hackers

Sophos said it is fixing a vulnerability in its Cyberoam firewall appliances, which a security researcher says can allow an attacker to gain access to a company’s internal network without needing a password.

The vulnerability allows an attacker to remotely gain “root” permissions on a vulnerable device, giving them the highest level of access, by sending malicious commands across the internet. The attack takes advantage of the web-based operating system that sits on top of the Cyberoam firewall.

Once a vulnerable device is accessed, an attacker can jump onto a company’s network, according to the researcher who shared their findings exclusively with TechCrunch.

Cyberoam devices are typically used in large enterprises, sitting on the edge of a network and acting as a gateway to allow employees in while keeping hackers out. These devices filter out bad traffic, and prevent denial-of-service attacks and other network-based attacks. They also include virtual private networking (VPN), allowing remote employees to log on to their company’s network when they are not in the office.

It’s a similar vulnerability to recently disclosed flaws in corporate VPN providers, notably Palo Alto Networks, Pulse Secure and Fortinet, which allowed attackers to gain access to a corporate network without needing a user’s password. Many large tech companies, including Twitter and Uber, were affected by the vulnerable technology, prompting Homeland Security to issue an advisory to warn of the risks.

Sophos, which bought Cyberoam in 2014, issued a short advisory this week, noting that the company rolled out fixes on September 30.

The researcher, who asked to remain anonymous, said an attacker would only need the IP address of a vulnerable device. Finding vulnerable devices was easy, they said, using search engines like Shodan, which lists around 96,000 devices accessible to the internet. Other search engines put the figure far higher.

A Sophos spokesperson disputed the number of devices affected, but would not provide a clearer figure.

“Sophos issued an automatic hotfix to all supported versions in September, and we know that 99% of devices have already been automatically patched,” said the spokesperson. “There are a small amount of devices that have not as of yet been patched because the customer has turned off auto-update and/or are not internet-facing devices.”

Customers still affected can update their devices manually, the spokesperson said. Sophos said the fix will be included in the next update of its CyberoamOS operating system, but the spokesperson did not say when that software would be released.

The researcher said they expect to release the proof-of-concept code in the coming months.

Oct 10, 2019
--

Percona Server for MongoDB 3.6.14-3.4 Now Available


Percona announces the release of Percona Server for MongoDB 3.6.14-3.4 on October 10, 2019. Download the latest version from the Percona website or the Percona software repositories.

Percona Server for MongoDB is an enhanced, open source, and highly-scalable database that is a fully-compatible, drop-in replacement for MongoDB 3.6 Community Edition. It supports MongoDB 3.6 protocols and drivers.

Percona Server for MongoDB extends Community Edition functionality by including the Percona Memory Engine storage engine, as well as several enterprise-grade features. It also includes the MongoRocks storage engine, which is now deprecated. Percona Server for MongoDB requires no changes to MongoDB applications or code.

Percona Server for MongoDB 3.6.14-3.4 is based on MongoDB 3.6.14. In this release, the license of RPM and DEB packages has been changed from AGPLv3 to SSPL.

Bugs Fixed

  • PSMDB-447: The license for RPM and DEB packages has been changed from AGPLv3 to SSPL.

The Percona Server for MongoDB 3.6.14-3.4 release notes are available in the official documentation.

Oct 10, 2019
--

Xage now supports hierarchical blockchains for complex implementations

Xage is working with utilities, energy companies and manufacturers to secure their massive systems, and today it announced some significant updates to deal with the scale and complexity of these customers’ requirements, including a new hierarchical blockchain.

Xage enables customers to set security policy, then enforce that policy on the blockchain. Company CEO Duncan Greatwood says that as customers deploy his company’s solutions more widely, it has created a set of scaling problems that Xage had to address inside the product, including in its use of blockchain.

With multiple sites involved in a system, there needed to be a way for these individual entities to operate whether they are connected to the main system or not. The answer was to provide each site with its own local blockchain, then have a global blockchain that acts as the ultimate enforcer of the rules once the systems reconnect.

“What we’ve done is by creating independent blockchains for each location, you can continue to write even if you are separated or the latency is too high for a global write. But when the reconnect happens with the global system, we replay the writes into the global blockchain,” Greatwood explained.

While a classical blockchain doesn’t allow these kinds of separations, Xage felt it was necessary for its particular use case. When there is a separation, a resynchronization happens in which the global blockchain checks the local chains for any changes, and if they are not consistent with the global rules, it will overwrite those entries.

Greatwood says these changes can be malicious if someone managed to take over a node or they could be non-malicious, such as a password change that wasn’t communicated to the global chain until it reconnected. Whatever the reason, the global blockchain has this power to fix the record when it’s required.

Another issue that has come up for Xage customers is the idea that majority rules on a blockchain, but that’s not always a good idea when you have multiple entities working together. As Greatwood explains, if one entity has 600 nodes and the other has 400, the larger entity can always enforce its rules on the smaller one. To fix that, they have created what they are calling a supermajority.

“The supermajority allows us to impose rules such as, after you have the majority of 600 nodes, you also have to have the majority of the 400 nodes. Obviously, that will give you an overall majority. But the important point is that the company with 400 nodes is protected now because the write to the ledger account can’t happen unless a majority of the 400 node customers also agrees and participates in the write,” Greatwood explained.

Finally, the company also announced scaling improvements, which reduce computing requirements to run Xage by 10x, according to the company.

Oct 10, 2019
--

Salesforce adds integrated order management system to its arsenal

Salesforce certainly has a lot of tools crossing the sales, service and marketing categories, but until today when it announced Lightning Order Management, it lacked an integration layer that allowed companies to work across these systems to manage orders in a seamless way.

“This is a new product built from the ground up on the Salesforce Lightning Platform to allow our customers to fulfill, manage and service their orders at scale,” Luke Ball, VP of product management at Salesforce told TechCrunch.

He says that order management is an often-overlooked part of the sales process, but it’s one that’s really key to the whole experience you’re trying to provide for your customers. “We think about advertising and acquisition and awareness. We think about creating amazing, compelling commerce experiences on the storefront or on your website or in your app. But I think a lot of brands don’t necessarily think about the delivery experience as part of that customer experience,” he said.

The problem is that order management involves so many different systems along with internal and external stakeholders. Trying to pull them together into a coherent system is harder than it looks, especially when it could also involve older legacy technology. As Ball pointed out, the process includes shipping carriers, warehouse management systems, ERP systems and payment and tax and fraud tools.

The Salesforce solution involves a few key pieces. For starters there is order life cycle management, what Ball calls the brains of the operation. “This is the core logic of an order management system. Everything that extends commerce beyond the Buy button — supply chain management, order fulfillment, payment capture, invoice creation, inventory availability and custom business logic. This is the bread and butter of an order management system,” he said.

Salesforce Lightning Order Management App Picker (Image: Salesforce)

Customers start by building visual order workflows. They can move between systems in an App Picker, and the information is shared between Commerce Cloud and Service Cloud, so that as customers move from sales to service, the information moves with them, making it easier to process customer inquiries about an order, including returns.

Ball says that Salesforce recognizes that not every customer will be an all-Salesforce shop and the system is designed to work with tools from other vendors, although these external tools won’t show up in the App Picker. It also knows that this process involves external vendors like shipping companies, so they will be offering specific integration apps for Lightning Order Management in the Salesforce AppExchange.

The company is announcing the product today and will be making it generally available in February.
