Sep 18, 2020

SaaS Ventures takes the investment road less traveled

Most venture capital firms are based in hubs like Silicon Valley, New York City and Boston. These firms nurture those ecosystems and they’ve done well, but SaaS Ventures decided to go a different route: it went to cities like Chicago; Green Bay, Wisconsin; and Lincoln, Nebraska.

The firm looks for enterprise-focused entrepreneurs who are trying to solve a different set of problems than you might find in these other centers of capital, issues that require digital solutions but might fall outside a typical computer science graduate’s experience.

SaaS Ventures looks at four main investment areas: trucking and logistics, manufacturing, e-commerce enablement for industries that have not typically gone online, and cybersecurity, the latter being the most mainstream of the areas SaaS Ventures covers.

The company’s first fund, launched in 2017, was worth $20 million, and SaaS Ventures launched a second fund of the same size earlier this month. It tends to stick to small investments, partnering with larger firms when it contributes funds to a deal.

We talked to Collin Gutman, founder and managing partner at SaaS Ventures, to learn about his investment philosophy, and why he decided to take the road less traveled for his investment thesis.

A different investment approach

Gutman’s journey to find enterprise startups in out-of-the-way places began in 2012 when he worked at an early enterprise startup accelerator called Acceleprise. “We were really the first ones who said enterprise tech companies are wired differently, and need a different set of early-stage resources,” Gutman told TechCrunch.

Through that experience, he decided to launch SaaS Ventures in 2017, with several key ideas underpinning the firm’s investment thesis. The first was to concentrate on the enterprise from a slightly different angle than most early-stage VC firms.

Collin Gutman from SaaS Ventures

Collin Gutman, founder and managing partner at SaaS Ventures (Image Credits: SaaS Ventures)

The second part of his thesis was to concentrate on secondary markets, which meant looking beyond the popular startup ecosystem centers and investing in areas that didn’t typically get much attention. To date, SaaS Ventures has made investments in 23 states and Toronto, seeking startups that others might have overlooked.

“We have really phenomenal coverage in terms of not just geography, but in terms of what’s happening with the underlying businesses, as well as their customers,” Gutman said. He believes that broad second-tier market data gives his firm an upper hand when selecting startups to invest in. More on that later.

Sep 18, 2020

Salesforce announces 12,000 new jobs in the next year just weeks after laying off 1,000

In a case of bizarre timing, Salesforce announced it was laying off 1,000 employees at the end of last month just a day after announcing a monster quarter with over $5 billion in revenue, putting the company on a $20 billion revenue run rate for the first time. The juxtaposition was hard to miss.

Earlier today, Salesforce CEO and co-founder Marc Benioff announced in a tweet that the company would be hiring 4,000 new employees in the next six months, and 12,000 in the next year. While it seems like a mixed message, it’s probably more about reallocating resources to areas where they are needed more.

While Salesforce wouldn’t comment further on the new hires, the company has obviously been doing well in spite of the pandemic, which has had an impact on its customers. In the prior quarter, the company forecast slower revenue growth because it was giving customers hit hard by the economic downturn more time to pay their bills.

That’s why it was surprising when the CRM giant reported such strong earnings in August in spite of all that. When the company laid off those 1,000 people, it did indicate it would give those employees 60 days to find other positions within the company. With these new jobs, assuming they are positions the laid-off employees are qualified for, they could have a variety of roles from which to choose.

The company had 54,000 employees when it announced the layoffs, which accounted for about 1.9% of the workforce. If it ends up adding the 12,000 new jobs in the next year, that would put the company at approximately 65,000 employees by this time next year.

Sep 18, 2020

MongoDB Backup Best Practices

In this blog, we will be discussing different backup strategies for MongoDB and their use cases, along with the pros and cons of each.

Why Take Backups?

Regular database backups are a crucial step in guarding against unintended data loss. Whether you lose your data to mechanical failure, a natural disaster, or criminal malice, it is gone. However, the data doesn’t need to stay lost if you have backed it up.

Generally, there are two types of backups used with database technologies like MongoDB:

  • Logical Backups
  • Physical Backups

Additionally, we have the option of incremental backups (a form of logical backup), where we capture the deltas, or incremental data changes, made between full backups to minimize data loss in case of a disaster. We will discuss these two backup options, how to proceed with them, and which one suits you better depending on your requirements and environment setup.

Logical Backups

These are backups where data is dumped from the databases into backup files. A logical backup with MongoDB means dumping the data into BSON-formatted files.

During a logical backup using a client API, the data is read from the server and serialized by that API into “.bson”, “.json”, or “.csv” backup files on disk, depending on the backup utility used.

MongoDB offers the following utility to take logical backups:

Mongodump: Takes a dump/backup of the databases in “.bson” format, which can later be restored by replaying the documents captured in the dump files back into the databases.

mongodump --host=mongodb1.example.net --port=27017 --username=user --authenticationDatabase=admin --db=demo --collection=events --out=/opt/backup/mongodump-2011-10-24

Note: If we don’t specify a database name or collection name explicitly in the above “mongodump” syntax, the backup will include all databases or all collections, respectively. If “authorization” is enabled, we must specify the “authenticationDatabase”.

Also, you should use “--oplog” to capture the incremental data written while the backup is still running. Keep in mind that it won’t work with --db and --collection, since it only works for full-instance backups.

mongodump --host=mongodb1.example.net --port=27017 --username=user --authenticationDatabase=admin --oplog --out=/opt/backup/mongodump-2011-10-24

Pros:

  1. It can take backups at a more granular level, such as a specific database or collection, which is helpful during restoration.
  2. It does not require you to halt writes on the node where you run the backup, so the node remains available for other operations.

Cons:

  1. As it reads all the data, it can be slow and will require disk reads for databases that are larger than the RAM available to the WiredTiger (WT) cache. The added WT cache pressure slows down performance.
  2. It doesn’t capture the index data itself in the metadata backup file, only the index definitions, so during a restore all indexes have to be rebuilt after the collection data is reinserted. The rebuild happens in one pass through the collection after the inserts have finished, which can add a lot of time to big collection restores.
  3. The backup speed also depends on the allocated IOPS and type of storage, since lots of reads and writes happen during this process.

Note: It is always advisable to run backups on secondary servers to avoid unnecessary performance degradation on the primary node.

As environment setups differ, each one should be approached as below.

  1. Replica set: always preferred to run on a secondary (see the sketch just after this list).
  2. Sharded clusters: take a backup of the config server replica set and of each shard individually, using their secondary nodes.
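
As a hedged sketch of such a replica set backup (the replica set name “rs0” and the second hostname are placeholders), --readPreference directs the dump to a secondary:

mongodump --host="rs0/mongodb1.example.net:27017,mongodb2.example.net:27017" --readPreference=secondary --username=user --authenticationDatabase=admin --oplog --out=/opt/backup/mongodump-2011-10-24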

Since we are discussing a distributed database system like a sharded cluster, we should also keep in mind the need for point-in-time consistency across our backups (replica set backups taken with mongodump are generally consistent when “--oplog” is used).

Consider a scenario where the application is still writing data and cannot be stopped for business reasons. Even if we take backups of the config server and each shard separately, the backups will finish at different times because of differences in data volume, load, etc. Hence, a restore might introduce inconsistencies for the same reason.

For that, Percona Backup for MongoDB is very useful (it uses the mongodump libraries internally), since it tails the oplog on each shard separately while the backup is still running, until completion; a short usage sketch follows. More references can be found in the release notes.
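
As a minimal usage sketch (it assumes pbm-agent is already running on every node and remote storage has been described in a config file; the connection URI and backup name are placeholders), a cluster-wide consistent backup and restore with PBM would look roughly like this:

pbm config --file /etc/pbm-storage.yaml --mongodb-uri "mongodb://pbmuser:secret@mongodb1.example.net:27017/"
pbm backup --mongodb-uri "mongodb://pbmuser:secret@mongodb1.example.net:27017/"
pbm restore 2020-08-26T12:26:32Z --mongodb-uri "mongodb://pbmuser:secret@mongodb1.example.net:27017/"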

Now comes the restoration part when dealing with logical backups. As with backups, MongoDB provides the following utility for restoration purposes.

Mongorestore: Restores dump files created by “mongodump”. Indexes are recreated once the data is restored, which consumes additional memory and time.

mongorestore --host=mongodb1.example.net --port=27017 --username=user  --password --authenticationDatabase=admin --db=demo --collection=events /opt/backup/mongodump-2011-10-24/events.bson

For the restore of an incremental dump, we can add --oplogReplay to the above syntax to replay the oplog entries as well.

Note: “--oplogReplay” can’t be used with the --db and --collection flags, as it only works when restoring all the databases.
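
For example, a full restore of the oplog-enabled dump taken above might look like this (a sketch that simply mirrors the earlier host and path):

mongorestore --host=mongodb1.example.net --port=27017 --username=user --password --authenticationDatabase=admin --oplogReplay /opt/backup/mongodump-2011-10-24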

Physical/Filesystem Backups

These backups involve snapshotting or copying the underlying MongoDB data files (--dbPath) at a point in time, and allowing the database to cleanly recover using the state captured in the snapshotted files. They are instrumental in backing up large databases quickly, especially when used with filesystem snapshots such as LVM snapshots or block storage volume snapshots.

There are several methods to take a filesystem-level backup, also known as a physical backup, as below.

  1. Manually copying the entire set of data files (e.g., using rsync; speed depends on network bandwidth)
  2. LVM-based snapshots
  3. Cloud-based disk snapshots (AWS/GCP/Azure or any other cloud provider)
  4. Percona hot backup

We’ll be discussing all of these options, but first, let’s see their pros and cons compared to logical backups.

Pros:

  1. They are at least as fast as, and usually faster than, logical backups.
  2. Can be easily copied over or shared with remote servers or attached NAS.
  3. Recommended for large datasets because of speed and reliability.
  4. Can be convenient while building new nodes within the same cluster or a new cluster.

Cons:

  1. Restores at a more granular level, such as a specific database or collection, are impossible.
  2. Incremental backups cannot be achieved yet.
  3. A dedicated node (possibly a hidden one) is recommended for backups, as achieving consistency requires halting writes or shutting down “mongod” cleanly on that node prior to the snapshot.

Below is a comparison of backup times for the same dataset:

DB Size: 267.6GB

Index Size: <1MB (since it was only on _id for testing)

demo:PRIMARY> db.runCommand({dbStats: 1, scale: 1024*1024*1024})
{
        "db" : "test",
        "collections" : 1,
        "views" : 0,
        "objects" : 137029,
        "avgObjSize" : 2097192,
        "dataSize" : 267.6398703530431,
        "storageSize" : 13.073314666748047,
        "numExtents" : 0,
        "indexes" : 1,
        "indexSize" : 0.0011749267578125,
        "scaleFactor" : 1073741824,
        "fsUsedSize" : 16.939781188964844,
        "fsTotalSize" : 49.98826217651367,
        "ok" : 1,
        ...
}
demo:PRIMARY>

1. Hot backup

Syntax:

> use admin

switched to db admin

> db.runCommand({createBackup: 1, backupDir: "/my/backup/data/path"})

{ "ok" : 1 }

 

Note: The backup path “backupDir” should be an absolute path. Percona hot backup also supports storing backups on the local filesystem and in AWS S3 buckets.
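
As a hedged illustration of the S3 option (the bucket, path, and region values are placeholders, and the exact set of supported fields depends on your Percona Server for MongoDB version), a hot backup can be streamed to S3 roughly like this:

> db.runCommand({createBackup: 1, s3: {bucket: "my-backup-bucket", path: "psmdb/2020-08-26", region: "us-east-1"}})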

[root@ip-172-31-37-92 tmp]# time mongo  < hot.js
Percona Server for MongoDB shell version v4.2.8-8
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("c9860482-7bae-4aae-b0e7-5d61f8547559") }
Percona Server for MongoDB server version: v4.2.8-8
switched to db admin
{
        "ok" : 1,
        ...
}
bye

real    3m51.773s
user    0m0.067s
sys     0m0.026s
[root@ip-172-31-37-92 tmp]# ls
hot  hot.js  mongodb-27017.sock  nohup.out  systemd-private-b8f44077314a49899d0a31f99b31ed7a-chronyd.service-Qh7dpD  tmux-0
[root@ip-172-31-37-92 tmp]# du -sch hot
15G     hot
15G     total

Notice that the “Percona hot backup” took only about 4 minutes. It is also very helpful when rebuilding a node or spinning up new instances/clusters with the same dataset. The best part is that it doesn’t require locking writes and doesn’t cause any noticeable performance hit. However, it is still recommended to run it against secondaries.

2. Filesystem Snapshot

The approximate time taken for the snapshot to be completed was only 4 minutes.

[root@ip-172-31-37-92 ~]# aws ec2 describe-snapshots  --query "sort_by(Snapshots, &StartTime)[-1].{SnapshotId:SnapshotId,StartTime:StartTime}"
{
    "SnapshotId": "snap-0f4403bc0fa0f2e9c",
    "StartTime": "2020-08-26T12:26:32.783Z"
}

[root@ip-172-31-37-92 ~]# aws ec2 describe-snapshots \
> --snapshot-ids snap-0f4403bc0fa0f2e9c
{
    "Snapshots": [
        {
            "Description": "This is my snapshot backup",
            "Encrypted": false,
            "OwnerId": "021086068589",
            "Progress": "100%",
            "SnapshotId": "snap-0f4403bc0fa0f2e9c",
            "StartTime": "2020-08-26T12:26:32.783Z",
            "State": "completed",
            "VolumeId": "vol-0def857c44080a556",
            "VolumeSize": 50
        }
    ]
}
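
For on-premises deployments without cloud disk snapshots, an LVM snapshot serves the same purpose. Below is a minimal, hedged sketch that assumes the data directory lives on a hypothetical logical volume /dev/vg0/mongodata: db.fsyncLock() flushes pending writes and blocks new ones so the snapshot is consistent, and the lock is released as soon as the snapshot exists.

mongo --eval 'db.fsyncLock()'
lvcreate --size 10G --snapshot --name mdb-backup-snap /dev/vg0/mongodata
mongo --eval 'db.fsyncUnlock()'

The snapshot can then be mounted and archived to a backup location; as with any physical backup, it is best taken from a secondary (or hidden) node.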

3. Mongodump

[root@ip-172-31-37-92 ~]# time nohup mongodump -d test -c collG -o /mongodump/ &
[1] 44298

[root@ip-172-31-37-92 ~]# sed -n '1p;$p' nohup.out
2020-08-26T12:36:20.842+0000    writing test.collG to /mongodump/test/collG.bson
2020-08-26T12:51:08.832+0000    [####....................]  test.collG  27353/137029  (20.0%)

Note: Just to give an idea, we can clearly see that for the same dataset where the snapshot and hot backup took only 3-5 minutes, “mongodump” took almost 15 minutes for just 20% of the dump. The backup speed is thus much slower than the other two options. On top of that, the only way to restore such a backup is “mongorestore”, which makes the whole process even slower.

Conclusion

So, which backup method would be the best? It completely depends on factors like the type of infrastructure, environment, dataset size, load, etc. Generally, if the dataset is around 100GB or less, logical backups are the best option, along with scheduled incremental backups, depending on your RTO (Recovery Time Objective) and RPO (Recovery Point Objective) needs. However, if the dataset is larger than that, we should always go for physical backups, complemented by incremental (oplog) backups.

Interested in trying Percona Backup for MongoDB? Download it for free! 

Sep 17, 2020

Perigee infrastructure security solution from former NSA employee moves into public beta

Perigee founder Mollie Breen used to work for the NSA, where she built a security solution to help protect the agency’s critical infrastructure. She spent the last two years at Harvard Business School talking to Chief Information Security Officers (CISOs) and fine-tuning the idea she started at the NSA into a commercial product.

Today, the solution that she built moves into public beta and will compete at TechCrunch Disrupt Battlefield with other startups for $100,000 and the Disrupt Cup.

Perigee helps protect things like heating and cooling systems or elevators that may lack patches or true security, yet are connected to the network in a very real way. It learns what normal behavior looks like for an operational system when it interacts with the network, such as which systems it talks to and which individual employees tend to access it. It can then determine when something seems awry and stop anomalous activity before it reaches the network. Without a solution like the one Breen has built, these systems would be vulnerable to attack.

“Perigee is a cloud-based platform that creates a custom firewall for every device on your network,” Breen told TechCrunch. “It learns each device’s unique behavior, the quirks of its operational environment and how it interacts with other devices to prevent malicious and abnormal usage while providing analytics to boost performance.”

Perigee HVAC fan dashboard view

Image Credits: Perigee

One of the key aspects of her solution is that it doesn’t require an agent, a small piece of software on the device, to make it work. Breen says this is especially important since that approach doesn’t scale across thousands of devices and can also introduce bugs from the agent itself. What’s more, it can use up precious resources on these devices if they can even support a software agent.

“Our sweet spot is that we can protect those thousands of devices by learning those nuances and we can do that really quickly, scaling up to thousands of devices with our generalized model because we take this agentless-based approach,” she said.

By creating these custom firewalls, her company is able to place security in front of the device, preventing a hacker from using it as a vehicle to get onto the network.

“One thing that makes us fundamentally different from other companies out there is that we sit in front of all of these devices as a shield,” she said. That essentially stops an attack before it reaches the device.

While Breen acknowledges that her approach can add a small bit of latency, it’s a tradeoff that CISOs have told her they are willing to make to protect these kinds of operational systems from possible attacks. Her system is also providing real-time status updates on how these devices are operating, giving them centralized device visibility. If there are issues found, the software recommends corrective action.

It’s still very early for her company, which Breen founded last year. She has raised an undisclosed amount of pre-seed capital. While Perigee is pre-revenue with just one employee, she is looking to add paying customers and begin growing the company as she moves into a wider public beta.

Sep 17, 2020

APAC cloud infrastructure revenue reaches $9B in Q2 with Amazon leading the way

When you look at the Asia-Pacific (APAC) regional cloud infrastructure numbers, it would be easy to think that one of the Chinese cloud giants, particularly Alibaba, would be the leader in that geography, but new numbers from Synergy Research show Amazon leading across the region overall, a market that generated $9 billion in revenue in Q2.

The only exception to Amazon’s dominance was China, where Alibaba leads the way with Tencent and Baidu coming in second and third, respectively. As Synergy’s John Dinsdale points out, China has its own unique market dynamics, and while Amazon leads in the other APAC sub-regions, the market remains competitive.

“China is a unique market and remains dominated by local companies, but beyond China there is strong competition between a range of global and local companies. Amazon is the leader in four of the five sub-regions, but it is not the market leader in every country,” he explained in a statement.

APAC Cloud Infrastructure leaders chart from Synergy Research

Image Credits: Synergy Research

The $9 billion in revenue across the region in Q2 represents less than a third of the more than $30 billion generated in the worldwide market in the quarter, but the APAC cloud market is still growing at more than 40% per year. It’s also worth pointing out as a means of comparison that Amazon alone generated more than the entire APAC region, with $10.81 billion in cloud infrastructure revenue in Q2.

While Dinsdale sees room for local vendors to grow, he says that the global nature of the cloud market in general makes it difficult for these players to compete with the largest companies, especially as they try to expand outside their markets.

“The challenge for local players is that in most ways cloud is a truly global market, requiring global presence, leading edge technology, strong brand name and credibility, extremely deep pockets and a long-term focus. For any local cloud companies looking to expand significantly beyond their home market, that is an extremely challenging proposition,” Dinsdale said in a statement.

Sep 16, 2020

Narrator raises $6.2M for a new approach to data modelling that replaces star schema

Snowflake went public this week, and in a mark of the wider ecosystem that is evolving around data warehousing, a startup that has built a completely new concept for modelling warehoused data is announcing funding. Narrator — which uses an 11-column ordering model rather than standard star schema to organise data for modelling and analysis — has picked up a Series A round of $6.2 million, money that it plans to use to help it launch and build up users for a self-serve version of its product.

The funding is being led by Initialized Capital along with continued investment from Flybridge Capital Partners and Y Combinator — where the startup was in a 2019 cohort — as well as new investors, including Paul Buchheit.

Narrator has been around for three years, but its first phase was based around providing modelling and analytics directly to companies as a consultancy, helping companies bring together disparate, structured data sources from marketing, CRM, support desks and internal databases to work as a unified whole. As consultants, using an earlier build of the tool that it’s now launching, the company’s CEO Ahmed Elsamadisi said he and others each juggled queries “for eight big companies single-handedly,” while deep-dive analyses were done by another single person.

Having validated that it works, the new self-serve version aims to give data scientists and analysts a simplified way of ordering data so that queries, described as actionable analyses in a story-like format — or “Narratives,” as the company calls them — can be made across that data quickly — hours rather than weeks — and consistently. (You can see a demo of how it works below provided by the company’s head of data, Brittany Davis.)

The new data-as-a-service is also priced in SaaS tiers, with a free tier for the first 5 million rows of data, and a sliding scale of pricing after that based on data rows, user numbers and Narratives in use.

Image Credits: Narrator

Elsamadisi, who co-founded the startup with Matt Star, Cedric Dussud and Michael Nason, said that data analysts have long lived with the problems with star schema modelling (and by extension the related format of snowflake schema), which can be summed up as “layers of dependencies, lack of source of truth, numbers not matching and endless maintenance,” he said.

“At its core, when you have lots of tables built from lots of complex SQL, you end up with a growing house of cards requiring the need to constantly hire more people to help make sure it doesn’t collapse.”

(We)Work Experience

It was while he was working as lead data scientist at WeWork — yes, he told me, maybe it wasn’t actually a tech company, but it had “tech at its core” — that he had a breakthrough moment of realising how to restructure data to get around these issues.

Before that, things were tough on the data front. WeWork had 700 tables that his team was managing using a star schema approach, covering 85 systems and 13,000 objects. The data ranged from information on acquiring buildings to the flow of customers through those buildings, how things would change and customers might churn, along with marketing and social network activity, and so on, all growing in line with the company’s own rapidly scaling empire. All of that meant a mess at the data end.

“Data analysts wouldn’t be able to do their jobs,” he said. “It turns out we could barely even answer basic questions about sales numbers. Nothing matched up, and everything took too long.”

The team had 45 people on it, but even so it ended up having to implement a hierarchy for answering questions, as there were so many and not enough time to dig through and answer them all. “And we had every data tool there was,” he added. “My team hated everything they did.”

The single-table column model that Narrator uses, he said, “had been theorised” in the past but hadn’t been figured out.

The spark, he said, was to think of data structured in the same way that we ask questions, where — as he described it — each piece of data can be bridged together and then also used to answer multiple questions.

“The main difference is we’re using a time-series table to replace all your data modelling,” Elsamadisi explained. “This is not a new idea, but it was always considered impossible. In short, we tackle the same problem as most data companies to make it easier to get the data you want but we are the only company that solves it by innovating on the lowest-level data modelling approach. Honestly, that is why our solution works so well. We rebuilt the foundation of data instead of trying to make a faulty foundation better.”

Narrator calls the composite table, which includes all of your data reformatted to fit in its 11-column structure, the Activity Stream.

Elsamadisi said using Narrator for the first time takes about 30 minutes, and about a month to learn to use it thoroughly. “But you’re not going back to SQL after that, it’s so much faster,” he added.

Narrator’s initial market has been providing services to other tech companies, and specifically startups, but the plan is to open it up to a much wider set of verticals. And in a move that might help with that, longer term, it also plans to open source some of its core components so that third parties can build data products on top of the framework more quickly.

As for competitors, he says that it’s essentially the tools that he and other data scientists have always used, although “we’re going against a ‘best practice’ approach (star schema), not a company.” Airflow, DBT, Looker’s LookML, Chartio’s Visual SQL, Tableau Prep are all ways to create and enable the use of a traditional star schema, he added. “We’re similar to these companies — trying to make it as easy and efficient as possible to generate the tables you need for BI, reporting and analysis — but those companies are limited by the traditional star schema approach.”

So far the proof has been in the data. Narrator says that companies average around 20 transformations (the unit used to answer questions) compared to hundreds in a star schema, and that those transformations average 22 lines compared to 1,000+ lines in traditional modelling. For those that learn how to use it, the average time for generating a report or running some analysis is four minutes, compared to weeks in traditional data modelling. 

“Narrator has the potential to set a new standard in data,” said Jen Wolf, Initialized Capital COO and partner and new Narrator board member, in a statement. “We were amazed to see the quality and speed with which Narrator delivered analyses using their product. We’re confident once the world experiences Narrator this will be how data analysis is taught moving forward.”

Sep 16, 2020

Luther.AI is a new AI tool that acts like Google for personal conversations

When it comes to questions about pop culture, a company executive or history, most of us use Google as a memory crutch to recall information we can’t always keep in our heads, but Google can’t help you remember the name of your client’s spouse or the great idea you came up with at a meeting the other day.

Enter Luther.AI, which purports to be Google for your memory by capturing and transcribing audio recordings, while using AI to deliver the right information from your virtual memory bank in the moment of another online conversation or via search.

The company is releasing an initial browser-based version of their product this week at TechCrunch Disrupt where it’s competing for the $100,000 prize at TechCrunch Disrupt Battlefield.

Luther.AI’s founders say the company is built on the premise that human memory is fallible, and that weakness limits our individual intelligence. The idea behind Luther.AI is to provide a tool to retain, recall and even augment our own brains.

It’s a tall order, but the company’s founders believe it’s possible through the growing power of artificial intelligence and other technologies.

“It’s made possible through a convergence of neuroscience, NLP and blockchain to deliver seamless in-the-moment recall. GPT-3 is built on the memories of the public internet, while Luther is built on the memories of your private self,” company founder and CEO Suman Kanuganti told TechCrunch.

It starts by recording your interactions throughout the day. For starters, that will be online meetings in a browser, as we find ourselves in a time when that is how we interact most often. Over time, though, the founders envision a high-quality 5G recording device you wear throughout your day at work to capture your interactions.

If that is worrisome to you from a privacy perspective, Luther is building in a few safeguards starting with high-end encryption. Further, you can only save other parties’ parts of a conversation with their explicit permission. “Technologically, we make users the owner of what they are speaking. So for example, if you and I are having a conversation in the physical world unless you provide explicit permission, your memories are not shared from this particular conversation with me,” Kanuganti explained.

Finally, each person owns their own data in Luther and nobody else can access or use these conversations either from Luther or any other individual. They will eventually enforce this ownership using blockchain technology, although Kanuganti says that will be added in a future version of the product.

Luther.ai search results recalling what a person said at a meeting the other day about customer feedback.

Image Credits: Luther.ai

Kanuganti says the true power of the product won’t be realized with just a few individuals using it inside a company, but through the network effect of having dozens or hundreds of people using it, though it will still have utility for an individual who wants help with memory recall.

While they are releasing the browser-based product this week, they will eventually have a stand-alone app, and can also envision other applications taking advantage of the technology in the future via an API where developers can build Luther functionality into other apps.

The company was founded at the beginning of this year by Kanuganti and three co-founders: CTO Sharon Zhang, design director Kristie Kaiser and scientist Marc Ettlinger. It has raised $500,000 and currently has 14 employees, including the founders.

Sep 16, 2020

ServiceNow updates its workflow automation platform

ServiceNow today announced the latest release of its workflow automation platform. With this, the company is emphasizing a number of new solutions for specific verticals, including for telcos and financial services organizations. This focus on verticals extends the company’s previous efforts to branch out beyond the core IT management capabilities that defined its business during its early years. The company is also adding new features for making companies more resilient in the face of crises, as well as new machine learning-based tools.

Dubbed the “Paris” release, this update also marks one of the first major releases for the company since former SAP CEO Bill McDermott became its president and CEO last November.

“We are in the business of operating on purpose,” McDermott said. “And that purpose is to make the world of work work better for people. And frankly, it’s all about people. That’s all CEOs talk about all around the world. This COVID environment has put the focus on people. In today’s world, how do you get people to achieve missions across the enterprise? […] Businesses are changing how they run to drive customer loyalty and employee engagement.”

He argues that at this point, “technology is no longer supporting the business, technology is the business,” but at the same time, the majority of companies aren’t prepared to meet whatever digital disruption comes their way. ServiceNow, of course, wants to position itself as the platform that can help these businesses.

“We are very fortunate at ServiceNow,” CJ Desai, ServiceNow’s chief product officer, said. “We are the critical platform for digital transformation, as our customers are thinking about transforming their companies.”

As far as the actual product updates, ServiceNow is launching a total of six new products. These include new business continuity management features with automated business impact analysis and tools for continuity plan development, as well as new hardware asset management for IT teams and legal service delivery for legal operations teams.

Image Credits: ServiceNow

With specialized solutions for financial services and telco users, the company is also now bringing together some of its existing solutions with more specialized services for these customers. As ServiceNow’s Dave Wright noted, this goes well beyond just putting together existing blocks.

“The first element is actually getting familiar with the business,” he explained. “So the technology, actually building the product, isn’t that hard. That’s relatively quick. But the uniqueness when you look at all of these workflows, it’s the connection of the operations to the customer service side. Telco is a great example. You’ve got the telco network operations side, making sure that all the operational equipment is active. And then you’ve got the business service side with customer service management, looking at how the customers are getting service. Now, the interesting thing is, because we’ve got both things sitting on one platform, we can link those together really easily.”

Image Credits: ServiceNow

On the machine learning side, ServiceNow made six acquisitions in the area in the last four years, Wright noted — and that is now starting to pay off. Specifically, the company is launching its new predictive intelligence workbench with this release. This new service makes it easier for process owners to detect issues, while also suggesting relevant tasks and content to agents, for example, and prioritizing incoming requests automatically. Using unsupervised learning, the system can also identify other kinds of patterns and with a number of pre-built templates, users can build their own solutions, too.

“The ServiceNow advantage has always been one architecture, one data model and one born-in-the-cloud platform that delivers workflows companies need and great experiences employees and customers expect,” said Desai. “The Now Platform Paris release provides smart experiences powered by AI, resilient operations, and the ability to optimize spend. Together, they will provide businesses with the agility they need to help them thrive in the COVID economy.”

Sep 16, 2020

Grafana 7 Arrives with Percona Monitoring and Management 2.10.0

As you may know, Percona Monitoring and Management (PMM) is based on Grafana, and all our frontend applications are either Grafana plugins or dashboards.

We’ve just released PMM version 2.10.0, and it includes a lot of improvements and new functionality. In this article, I would like to highlight some of them; they are quite important for users and developers, even though this is just a version update.

The biggest change is the upgrade to a major new release of Grafana. Why is this important? Because we have a lot of interaction with the underlying mechanisms of Grafana, and consequently we’ve made a lot of changes related to styling, the platform core, and infrastructure. A downside of this is that some changes are incompatible with previously used functionality, and we need to track and mitigate these incompatibilities.

Consistent UI

Some time ago, Grafana switched to developing its own library of frontend components, and in the 7th version they shipped an updated UI kit. This brought a more complete look and feel; overall, the interface is much neater, better looking, and more universal. Along with the transition to the 7th version, we also started using the new UI kit elements in many places. The interface has become noticeably more consistent and holistic and presents a lighter and cleaner style.

Better React Support

Grafana has been developed with both Angular and, more recently, ReactJS, and for some time it has supported both. Version 7 marks a turning point, with ReactJS development coming to the fore. We had long since switched to ReactJS ourselves, so these changes were very useful for us.

What this means for PMM: With better support for ReactJS plugins, we can develop faster and remove some workarounds.

API Changes

Significant changes were made to how plugins interact with Grafana. Many of the methods that allowed us to interact with low-level functions seem to be gone. For example, there is no longer the ability to redefine or add your own listeners to Grafana core events, such as time range changes.

As we used such functionality for different purposes, we were faced with some complications:

  • In QAN we have filters that need to be stored as Grafana variables and be synchronized with panel states.
  • We have complex interactions with the time range and refresh controls.
  • Many other things, such as open tabs in details views and added columns, are part of the URL; this data must be stored somewhere because it is occasionally needed in other dashboards.

Use Grafana Themes

In conjunction with this update, we converted almost every part of PMM to using the Grafana themes factory. Of course, there is still some old CSS code, but the conversion work is ongoing and will be finished soon.

The Grafana themes factory approach gives us an easier way to manage styles and make interfaces more consistent, which corresponds more closely to how Grafana itself works.

Stability Improvements

Grafana runs a mix of Angular and ReactJS, and in recent years there has been a migration to pure ReactJS. Now, the majority of Grafana system components are implemented in ReactJS. A lot of legacy code has been cleaned up, reducing the number of errors and making the work easier.

Infrastructure Enhancements

We use the grafana-toolkit to build the front-end of PMM. As we use it within a CI process, it is crucially important that it works as quickly and smoothly as possible; a brief sketch of typical toolkit commands follows the list below. Grafana version 7 also brought these advantages:

  1. Improved typescript support
  2. Faster build time
  3. Fewer bugs
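
For reference, building and testing a plugin front-end with the toolkit looks roughly like this (a hedged sketch; the exact tasks available depend on the @grafana/toolkit version in use):

npx @grafana/toolkit plugin:test
npx @grafana/toolkit plugin:build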

Download PMM 2.10.0 Today!

Also, learn more about the new Percona Customer Portal rolling out starting with the 2.10.0 release of Percona Monitoring and Management.

Sep 16, 2020

Percona Customer Portal and Your New, Free User Account

Google, Microsoft, and Apple have it. So do open source organizations like MongoDB, Red Hat, and SUSE. What do all these companies provide? A customer portal that enables anyone to create an identity with the company. To date, Percona hasn’t had a customer portal with a free user account, but that’s about to change.

Starting with the 2.10.0 release of Percona Monitoring and Management (PMM) and rolling out in stages, Percona is introducing the ability for anyone to create a free Percona customer portal user account. In the immediate term, users will access the page to create their accounts from either within PMM or at https://platform.percona.com.

I’m sure you’re asking… “why would I want to create a free user account with Percona?” I’m going to answer that question from the end-state, long-term goal perspective, and then work backward to the short-term.

Percona envisions a customer portal that any registered user would access after login. This customer portal would be personalized for each account user and contain elements relevant to them as an individual as well as part of a larger organization. Some personalized information on this portal would include:

  • Content: based on the preferences you set, personalized content would be accessible via this portal. If you were most interested in content related to a database technology (MySQL, MongoDB, MariaDB, or PostgreSQL), new blog posts would be pushed to the content section of your customer portal related to that technology. Or perhaps you want to subscribe to content relating to particular functionality (e.g., backup strategies, best practices in alert notifications, etc.).
  • Training: like the content topic above, Percona’s database experts produce training content from webinars to videos and other educational materials relating to database technologies, database best practices, tuning tips and tricks, etc. Within your customer portal account, you would be able to subscribe to training content personalized to your preferences.
  • Subscriptions: if your organization has purchased a support contract or services from Percona, the portal would provide you with details about that subscription as appropriate for each user and defined in permissions (i.e., duration, end date, entitlements, scope, renewal information, etc.).
  • Support: via your account login, you would have access to the Percona support tickets you (and, based on permissions, others within your organization) opened and the status of those tickets.
  • Single Sign-On: this single account/password combination will allow you to access all Percona properties without the need to register/maintain multiple accounts to ask a question on forums, comment on a blog post, suggest a correction to documentation, register for a webinar, and much more!
  • Intelligence: with decades of expertise and best practices, Percona is building additional tools that leverage this knowledge to help users and customers more quickly and easily optimize and secure their database environments. This may include security threat tool checks to examine the security vulnerabilities of your database environment or advisors to provide recommendations on optimizing current configurations, diagnostics, issue remediation, and other capabilities.

As you can see, Percona has big plans for the future customer portal and your new user account to access that portal. We hope you agree that it makes sense to create a new user account to take advantage of these offerings.

As stated above, we are introducing the user account in version 2.10.0 of PMM. In the beginning, the capabilities of this account and its associated customer portal will be limited, but the functionality will grow over time until the vision of the capabilities above is realized within the customer portal. We welcome your ideas and suggestions to make the customer portal even more useful and dynamic. Once you create your free user account, we will be able to provide you with a more custom-tailored experience, beginning with more relevant and enhanced security checks for your currently running environment.

Version 2.10.0 of PMM goes well beyond just the introduction of the customer portal and free user account, though. The following highlights are also included in this new PMM release:

New MongoDB Exporter

When PMM warns about an impending issue or detects one requiring immediate attention in a MongoDB database, you, as the person who needs to get to the bottom of it, need access to data about the issue and how to remediate and resolve it. PMM’s new MongoDB exporter provides a much-improved set of exported data to enable that richer exploration for quick resolution, without requiring user-driven modification of the exporter.

Upgrade of Grafana to v7.1 (from v6.4)

A new user interface look and components improve overall usability and enable Percona to brand PMM more prominently while de-emphasizing Grafana branding. Time zone support was added for better personalization, and search functionality was introduced in query history, enabling search across queries and your comments. Check out Grafana 7 Arrives with Percona Monitoring and Management 2.10.0 for more information.

Group Replication Dashboard

Dashboard support for presenting MySQL Group Replication metrics and statuses. Group Replication enables the creation of elastic, highly available, fault-tolerant replication topologies, guaranteeing that the database service remains continuously available.

pt-summary on Node Summary dashboard

This brings back PMM 1 functionality that let users see pt-summary output in the PMM user interface. The tool provides a lot of useful information about the system and its parameters. Within PMM 2, the newly added pt-summary output is conveniently placed on the Node Summary dashboard to provide a more well-rounded view of the user’s system.

Percona Monitoring and Management 2.10.0 is available now, so grab the latest version and update today. Let us know what additional functionality we could enable with your new account!
