Nov 25, 2020
--

As IBM shifts to hybrid cloud, reports have them laying off 10,000 in EU

As IBM makes a broad shift in strategy, Bloomberg reported this morning that the company would be cutting around 10,000 jobs in Europe. This comes on the heels of last month’s announcement that the organization will be spinning out its infrastructure services business next year. While IBM wouldn’t confirm the layoffs, a spokesperson suggested there were broad structural changes ahead for the company as it concentrates fully on a hybrid cloud approach.

IBM had this to say in response to a request for comment on the Bloomberg report: “Our staffing decisions are made to provide the best support to our customers in adopting an open hybrid cloud platform and AI capabilities. We also continue to make significant investments in training and skills development for IBMers to best meet the needs of our customers.”

Unfortunately, that means if you don’t have the skill set IBM now requires, chances are you won’t fit with the new version of the company. IBM CEO Arvind Krishna alluded to the changing environment in an interview with Jon Fortt at the CNBC Evolve Summit earlier this month, when he said:

The Red Hat acquisition gave us the technology base on which to build a hybrid cloud technology platform based on open-source, and based on giving choice to our clients as they embark on this journey. With the success of that acquisition now giving us the fuel, we can then take the next step, and the larger step, of taking the managed infrastructure services out. So the rest of the company can be absolutely focused on hybrid cloud and artificial intelligence.

The story around IBM layoffs has always been the same: as the company transitions to a new model, it eliminates positions that don’t fit into the new vision. Today’s report is apparently no different, says Holger Mueller, an analyst at Constellation Research.

“IBM is in the biggest transformation of the company’s history as it moves from services to software and specialized hardware with Quantum. That requires a different mix of skills in its employee base and the repercussions of that manifest itself in the layoffs that IBM has been doing, mostly quietly, for the last 5+ years,” he said.

None of this is easy for the people involved. It’s never a good time to lose your job, but the timing of this one feels worse. In the middle of a recession brought on by COVID, and as a second wave of the virus sweeps over Europe, it’s particularly difficult.

We have reported on a number of IBM layoffs over the last five years. In May, the company confirmed layoffs but wouldn’t confirm numbers. In 2015, we reported on a 12,000-employee layoff.

Nov 25, 2020
--

Cast.ai nabs $7.7M seed to remove barriers between public clouds

When you launch an application in the public cloud, you usually put everything on one provider, but what if you could choose the components based on cost and technology, and have your database in one place and your storage in another?

That’s what Cast.ai says it can provide, and today it announced a healthy $7.7 million seed round from TA Ventures, DNX, Florida Funders and other unnamed angels to keep building on that idea. The round closed in June.

Company CEO and co-founder Yuri Frayman says that they started the company with the idea that developers should be able to get the best of each of the public clouds without being locked in. They do this by creating Kubernetes clusters that are able to span multiple clouds.

“Cast does not require you to do anything except for launching your application. You don’t need to know  […] what cloud you are using [at any given time]. You don’t need to know anything except to identify the application, identify which [public] cloud providers you would like to use, the percentage of each [cloud provider’s] use and launch the application,” Frayman explained.

This means that you could use Amazon’s RDS database and Google’s ML engine, and the solution decides how to make that work based on your requirements and price. You set the policies when you are ready to launch, and Cast will take care of distributing it for you across the locations and providers that you desire, or that make the most sense for your application.

The company takes advantage of cloud-native technologies, containerization and Kubernetes to break the proprietary barriers that exist between clouds, says company co-founder Laurent Gil. “We break these barriers of cloud providers so that an application does not need to sit in one place anymore. It can sit in several [providers] at the same time. And this is great for the Kubernetes application because they’re kind of designed with this [flexibility] in mind,” Gil said.

Developers use the policy engine to decide how much they want to control this process. They can simply set location and let Cast optimize the application across clouds automatically, or they can select at a granular level exactly the resources they want to use on which cloud. Regardless of how they do it, Cast will continually monitor the installation and optimize based on cost to give them the cheapest options available for their configuration.

The company currently has 25 employees, with four new hires in the pipeline, and plans to double to 50 by the end of 2021. As it grows, the company is trying to keep diversity and inclusion front and center in its hiring approach; it currently has women in charge of HR, marketing and sales.

“We have very robust processes on the continuous education inside of our organization on diversity training. And a lot of us came from organizations where this was very visible and we took a lot of those processes [and lessons] and brought them here,” Frayman said.

Frayman has been involved with multiple startups, including Cujo.ai, a consumer firewall startup that participated in TechCrunch Disrupt Battlefield in New York in 2016.

Nov 24, 2020
--

Proxyclick visitor management system adapts to COVID as employee check-in platform

Proxyclick began life by providing an easy way to manage visitors in your building with an iPad-based check-in system. As the pandemic has taken hold, however, customer requirements have changed, and Proxyclick is changing with them. Today the company announced Proxyclick Flow, a new system designed to check in employees during the time of COVID.

“Basically when COVID hit, our customers told us that actually our employees are the new visitors. So what you used to ask your visitors, you are now asking your employees — the usual probing questions, but also when are you coming and so forth. So we evolved the offering into a wider platform,” Proxyclick co-founder and CEO Gregory Blondeau explained.

That means instead of managing a steady flow of visitors — although it can still do that — the company is focusing on the needs of customers who want to open their offices on a limited basis during the pandemic, based on local regulations. To help adapt the platform for this purpose, the company developed the Proovr smartphone app, which employees can use to check in prior to going to the office, complete a health checklist, see who else will be in the office and make sure the building isn’t over capacity.

When the employee arrives at the office, they get a temperature check, and then can use the QR code issued by the Proovr app to enter the building via Proxyclick’s check-in system or whatever system they have in place. Beyond the mobile app, the company has designed the system to work with a number of adjacent building management and security systems so that customers can use it in conjunction with existing tooling.

They also beefed up the workflow engine that companies can adapt based on their own unique entrance and exit requirements. The COVID workflow is simply one of those workflows, but Blondeau recognizes not everyone will want to use the exact one they have provided out of the box, so they designed a flexible system.

“So the challenge was technical on one side to integrate all the systems, and afterwards to group workflows on the employee’s smartphone, so that each organization can define its own workflow and present it on the smartphone,” Blondeau said.

Once in the building, the system registers your presence, and the information remains in the system for two weeks for contact-tracing purposes should there be an exposure to COVID. You check out when you leave the building, but if you forget, it automatically checks you out at midnight.

The company was founded in 2010 and has raised $18.5 million. The most recent raise was a $15 million Series B in January.

Nov 24, 2020
--

Adobe expands customer data platform to include B2B sales

The concept of the customer data platform (CDP) is a relatively new one. Up until now, it has focused primarily on pulling data about an individual consumer from a variety of channels into a super record, where in theory you can serve more meaningful content and deliver more customized experiences based on all this detailed knowledge. Adobe announced its intention today to create such a product for business to business (B2B) customers, a key market where this kind of data consolidation had been missing.

Indeed, Brian Glover, Adobe’s director of product marketing for Marketo Engage, who has been put in charge of this product, says that these kinds of sales are much more complex and B2B sales and marketing teams are clamoring for a CDP.

“We have spent the last couple of years integrating Marketo Engage across Adobe Experience Cloud, and now what we’re doing is building out the next generation of new and complementary B2B offerings on the Experience platform, the first of which is the B2B CDP offering,” Glover told me.

He says that they face unique challenges adapting CDP for B2B sales because they typically involve buying groups, meaning you need to customize your messages for different people depending on their role in the process.

An individual consumer usually knows what they want and you can prod them to make a decision and complete the purchase, but a B2B sale is usually longer and more complex, involving different levels of procurement. For example, in a technology sale, it may involve the CIO, a group, division or department who will be using the tech, the finance department, legal and others. There may be an RFP and the sales cycle may span months or even years.

Adobe believes this kind of sale should still be able to use the same customized messaging approach you use in an individual sale, perhaps even more so because of the inherent complexity in the process. Yet B2B marketers face the same issues as their B2C counterparts when it comes to having data spread across an organization.

“In B2B that complexity of buying groups and accounts just adds another level to the data management problem because ultimately you need to be able to connect to your customer people data, but you also need to be able to connect the account data too and be able to [bring] the two together,” Glover explained.

By building a more complete picture of each individual in the buying cycle, you can, as Glover puts it, begin to put the bread crumbs together for the entire account. He believes that a CRM isn’t built for this kind of complexity and it requires a specialty tool like a CDP built to support B2B sales and marketing.

Adobe is working with early customers on the product and expects to go into beta before the end of next month with GA some time in the first half of next year.

Nov 24, 2020
--

Industrial drone maker Percepto raises $45M and integrates with Boston Dynamics’ Spot

Consumer drones have over the years struggled with an image of being no more than expensive and delicate toys. But applications in industrial, military and enterprise scenarios have shown that there is indeed a market for unmanned aerial vehicles, and today, a startup that makes drones for some of those latter purposes is announcing a large round of funding and a partnership that provides a picture of how the drone industry will look in years to come.

Percepto, which makes drones — both the hardware and software — to monitor and analyze industrial sites and other physical work areas largely unattended by people, has raised $45 million in a Series B round of funding.

Alongside this, it is now working with Boston Dynamics and has integrated its Spot robots with Percepto’s Sparrow drones, with the aim being better infrastructure assessments, and potentially more as Spot’s agility improves.

The funding is being led by a strategic backer, Koch Disruptive Technologies, the investment arm of industrial giant Koch Industries (which has interests in energy, minerals, chemicals and related areas), with participation also from new investors State of Mind Ventures, Atento Capital, Summit Peak Investments and Delek-US. Previous investors U.S. Venture Partners, Spider Capital and Arkin Holdings also participated. (It appears that Boston Dynamics and SoftBank are not part of this investment.)

Israel-based Percepto has now raised $72.5 million since it was founded in 2014. It’s not disclosing its valuation, but CEO and founder Dor Abuhasira described it as “a very good round.”

“It gives us the ability to create a category leader,” Abuhasira said in an interview. It has customers in around 10 countries, with the list including ENEL, Florida Power and Light and Verizon.

While some drone makers have focused on building hardware, and others are working specifically on the analytics, computer vision and other critical technology that needs to be in place on the software side for drones to work correctly and safely, Percepto has taken what I referred to, and Abuhasira confirmed, as the “Apple approach”: vertical integration as far as Percepto can take it on its own.

That has included hiring teams with specializations in AI, computer vision, navigation and analytics, as well as those strong in industrial hardware — all strong areas in the Israeli tech landscape, by virtue of it being so closely tied to military investment. (Note: Percepto does not make its own chips; these are currently acquired from Nvidia, he confirmed to me.)

“The Apple approach is the only one that works in drones,” he said. “That’s because it is all still too complicated. For those offering an Android-style approach, there are cracks in the complete flow.”

It presents the product as a “drone-in-a-box”, which means in part that buyers have little work to do to set it up, but also describes how it works: the drones leave the box to fly and collect data, then return to the box to recharge and transfer more information, alongside the data that is picked up in real time.

The drones themselves operate on an on-demand basis: they fly in part for regular monitoring, to detect changes that could point to issues; and they can also be launched to collect data as a result of engineers requesting information. The product is marketed by Percepto as “AIM”, short for autonomous site inspection and monitoring.

News broke last week that Amazon has been reorganising its Prime Air efforts, one sign that some more consumer-facing drone applications, despite many developments, may still have some turbulence ahead before they are commercially viable. Businesses like Percepto’s stand in contrast to that, with their focus specifically on flying over, and collecting data in, areas where there are precisely no people present.

It has dovetailed with a bigger focus from industries on the efficiencies (and cost savings) you can get with automation, which in turn has become the centerpiece of how industry is investing in the buzz phrase of the moment, “digital transformation.”

“We believe Percepto AIM addresses a multi-billion-dollar issue for numerous industries and will change the way manufacturing sites are managed in the IoT, Industry 4.0 era,” said Chase Koch, president of Koch Disruptive Technologies, in a statement. “Percepto’s track record in autonomous technology and data analytics is impressive, and we believe it is uniquely positioned to deliver the remote operations center of the future. We look forward to partnering with the Percepto team to make this happen.”

The partnership with Boston Dynamics is notable for a couple of reasons: it speaks to how various robotics hardware will work together in tandem in an automated, unmanned world, and it speaks to how Boston Dynamics is pulling up its socks.

On the latter front, the company has been making waves in the world of robotics for years, specifically with its agile and strong dog-like robots (with names like “Spot” and “Big Dog”) that can cover rugged terrain and handle tussles without falling apart.

That led it into the arms of Google, which acquired it in 2013 as part of its own secretive moonshot efforts. That never panned out into a business, and probably gave Google more complicated optics at a time when it was already being seen as too powerful. Then SoftBank stepped in to pick it up, along with other robotics assets, in 2017. That hasn’t really gone anywhere either, it seems, and just this month it was reported that Boston Dynamics is facing yet another suitor, Hyundai.

All of this is to say that partnerships with third parties that are going places (quite literally) become strong signs of how Boston Dynamics’ extensive R&D investments might finally pay off with enterprising dividends.

Indeed, while Percepto has focused on its own vertical integration, longer term and more generally there is an argument to be made for more interoperability and collaboration between the various companies building “connected” and smart hardware for industrial, physical applications.

It means that specific industries can focus on the special equipment and expertise they require, while at the same time complementing that with hardware and software that are recognised as best-in-class. Abuhasira said that he expects the Boston Dynamics partnership to be the first of many.

That makes this first one an interesting template. The partnership will see Spot carrying Percepto’s payloads for high-resolution imaging and thermal vision “to detect issues including hot spots on machines or electrical conductors, water and steam leaks around plants and equipment with degraded performance, with the data relayed via AIM.” It will also mean a more thorough picture, beyond what you get from the air. And, potentially, you might imagine a time in the future when the data that the combined devices source results even in Spot (or perhaps a third piece of autonomous hardware) carrying out repairs or other assistance.

“Combining Percepto’s Sparrow drone with Spot creates a unique solution for remote inspection,” said Michael Perry, VP of Business Development at Boston Dynamics, in a statement. “This partnership demonstrates the value of harnessing robotic collaborations and the insurmountable benefits to worker safety and cost savings that robotics can bring to industries that involve hazardous or remote work.”

Nov 23, 2020
--

Uncommon Sense MySQL – When EXPLAIN Can Trash Your Database

If I ask you whether running EXPLAIN on a query can change your database, you will probably tell me no; it is common sense. EXPLAIN should show us how the query would be executed, not execute the query, hence it can’t change any data.

Unfortunately, this is a case where common sense does not apply to MySQL (as of this writing, MySQL 8.0.21 and previous versions). There are edge cases where EXPLAIN can actually change your database, as this bug report illustrates:

DELIMITER $$
CREATE FUNCTION `cleanup`() RETURNS char(50) CHARSET latin1
    DETERMINISTIC
BEGIN 
delete from test.t1;
RETURN 'OK'; 
END $$

Query OK, 0 rows affected (0.01 sec)

DELIMITER ;

mysql> create table t1(i int);
mysql> insert into t1 values(1); 
Query OK, 1 row affected (0.00 sec)

mysql> select * from t1; 
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.00 sec)


mysql> explain select * from (select cleanup()) as t1clean; 
+----+-------------+------------+------------+--------+---------------+------+---------+------+------+----------+----------------+
| id | select_type | table      | partitions | type   | possible_keys | key  | key_len | ref  | rows | filtered | Extra          |
+----+-------------+------------+------------+--------+---------------+------+---------+------+------+----------+----------------+
|  1 | PRIMARY     | <derived2> | NULL       | system | NULL          | NULL | NULL    | NULL |    1 |   100.00 | NULL           |
|  2 | DERIVED     | NULL       | NULL       | NULL   | NULL          | NULL | NULL    | NULL | NULL |     NULL | No tables used |
+----+-------------+------------+------------+--------+---------------+------+---------+------+------+----------+----------------+
2 rows in set, 1 warning (0.00 sec)


mysql> select * from t1;
Empty set (0.00 sec)

The problem is EXPLAIN executes the cleanup() stored function… which is permitted to modify data. This is different from the more sane PostgreSQL behavior which will NOT execute stored functions while running EXPLAIN (it will if you run EXPLAIN ANALYZE).

This decision in the MySQL case comes from trying to do the right thing and provide the most reliable EXPLAIN (the query execution plan may well depend on what a stored function returns), but it looks like the security tradeoff was not considered.

While this consequence of the current MySQL EXPLAIN design is one of the most severe, there is also the problem that EXPLAIN, which a rational user would expect to be a fast way to check the performance of a query, can take unbounded time to complete, for example:

mysql> explain select * from (select sleep(5000) as a) b;

This will run for more than an hour, creating an additional accidental (or not) denial-of-service attack vector.

Going Deeper Down the Rabbit Hole

While this behavior is unfortunate, it will happen only if you have unrestricted privileges.  If you have a more complicated setup, the behavior may vary.

If the user lacks EXECUTE privilege, the EXPLAIN statement will fail.

mysql> explain select * from (select cleanup()) as t1clean;
ERROR 1370 (42000): execute command denied to user 'msandbox_ro'@'localhost' for routine 'test.cleanup'

If the user has EXECUTE privilege but the user executing the stored function lacks DELETE privilege, it will fail too:

mysql> explain select * from (select cleanup()) as t1clean;
ERROR 1142 (42000): DELETE command denied to user 'msandbox_ro'@'localhost' for table 't2'

Note: I say “the user executing the stored function” rather than “the current user” because, depending on the SECURITY clause in the stored function’s definition, it may run either as the definer or as the invoker.

So what can you do if you want to improve EXPLAIN safety, for example, if you’re developing a tool like Percona Monitoring and Management which, among other features, allows users to run EXPLAIN on their queries?

  • Advise users to set up privileges for monitoring correctly. This should be the first line of defense against this (and many other) issues; however, it is hard to rely on, as many users will choose the path of simplicity and use the “root” user with full privileges for monitoring.
  • Wrap your EXPLAIN statement in BEGIN … ROLLBACK, which will undo any damage EXPLAIN may have caused. The downside, of course, is that the “work” of deleting the data, and then of undoing that deletion, will still be done. (Note: this only works for transactional tables; if you still run MyISAM… well, in that case you have worse problems to worry about.)
  • Use SET TRANSACTION READ ONLY to signal you’re not expecting any writes; an EXPLAIN which tries to write data will then fail without doing any work. (Both approaches are sketched below.)
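For illustration, here is a minimal sketch of the last two workarounds, reusing the cleanup() example from above (the read-only failure shown is MySQL’s standard error for writes inside a read-only transaction; treat the exact message as indicative):

mysql> BEGIN;
mysql> EXPLAIN SELECT * FROM (SELECT cleanup()) AS t1clean;
mysql> ROLLBACK;  -- undoes the DELETE the function performed (transactional tables only)

mysql> SET TRANSACTION READ ONLY;
mysql> EXPLAIN SELECT * FROM (SELECT cleanup()) AS t1clean;
ERROR 1792 (25006): Cannot execute statement in a READ ONLY transaction.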

While these workarounds can make tools that run EXPLAIN safer, they do not help users running EXPLAIN directly, and I really hope this issue will be fixed by redesigning EXPLAIN so that it does not try to run stored functions, as PostgreSQL already does.

For those who want to know how the query is executed EXACTLY, there is now EXPLAIN ANALYZE.
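For example (available since MySQL 8.0.18; note that, by design, it actually executes the query):

mysql> EXPLAIN ANALYZE SELECT * FROM t1;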

Nov 23, 2020
--

Recover Percona XtraDB Cluster in Kubernetes From Wrong MySQL Config

Kubernetes operators are meant to simplify the deployment and management of applications. Our Percona Kubernetes Operator for Percona XtraDB Cluster serves that purpose, but also gives users the flexibility to fine-tune their MySQL and proxy services configuration.

The document Changing MySQL Options describes how to provide a custom my.cnf configuration to the Operator. But what would happen if you made a mistake and specified a wrong parameter in the configuration?

Apply Configuration

I already deployed my Percona XtraDB Cluster and deliberately submitted a wrong my.cnf configuration in cr.yaml:

spec:
...
  pxc:
    configuration: |
      [mysqld]
      wrong_param=123
…

Apply the configuration:

$ kubectl apply -f deploy/cr.yaml

Once you do this, the Operator will apply a new MySQL configuration to one of the Pods. In a few minutes you will see that the Pod is stuck in CrashLoopBackOff status:

$ kubectl get pods
NAME                                               READY   STATUS             RESTARTS   AGE
percona-xtradb-cluster-operator-79d786dcfb-lzv4b   1/1     Running            0          5h
test-haproxy-0                                     2/2     Running            0          5m27s
test-haproxy-1                                     2/2     Running            0          4m40s
test-haproxy-2                                     2/2     Running            0          4m24s
test-pxc-0                                         1/1     Running            0          5m27s
test-pxc-1                                         1/1     Running            0          4m41s
test-pxc-2                                         0/1     CrashLoopBackOff   1          59s
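To see why, inspect the failing Pod’s logs (a standard kubectl command; the Pod name comes from the listing above):

$ kubectl logs test-pxc-2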

The logs clearly state that this parameter is not supported and the mysqld process cannot start:

       2020-11-19T13:30:30.141829Z 0 [ERROR] [MY-000067] [Server] unknown variable 'wrong_param=123'.
        2020-11-19T13:30:30.142355Z 0 [ERROR] [MY-010119] [Server] Aborting
        2020-11-19T13:30:31.835199Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.20-11.1)  Percona XtraDB Cluster (GPL), Release rel11, Revision 683b26a, WSREP version 26.4.3.

It is worth noting that your Percona XtraDB Cluster is still operational and serving requests.

Recovery

Let’s try to comment out the configuration section and reapply cr.yaml:

spec:
...
  pxc:
#    configuration: |
#      [mysqld]
#      wrong_param=123
…


$ kubectl apply -f deploy/cr.yaml

And it won’t work (in v1.6). The Pod is still in CrashLoopBackOff state as the operator does not apply any changes when not all Pods are up and running. We are doing that to ensure data safety.

Fortunately, there is an easy way to recover from such a mistake: you can either delete or modify the corresponding ConfigMap resource in Kubernetes. Usually its name is {your_cluster_name}-pxc:

$ kubectl delete configmap test-pxc

And delete the Pod which is failing:

$ kubectl delete pod test-pxc-2

Kubernetes will restart all Percona XtraDB Cluster pods one by one after some time:

test-pxc-0                                         1/1     Running   0          2m28s
test-pxc-1                                         1/1     Running   0          3m23s
test-pxc-2                                         1/1     Running   0          4m36s

You can now apply the correct MySQL configuration again, through the ConfigMap or cr.yaml (a sketch follows). We are assessing other recovery options for such cases, as well as configuration validation, so stay tuned for upcoming releases.
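For example, a corrected cr.yaml might carry a valid option instead (max_connections here is only an illustrative placeholder; substitute the option you actually intended):

spec:
...
  pxc:
    configuration: |
      [mysqld]
      max_connections=250

$ kubectl apply -f deploy/cr.yaml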

Nov 23, 2020
--

AvePoint to go public via SPAC valued at $2B

AvePoint, a company that gives enterprises using Microsoft Office 365, SharePoint and Teams a control layer on top of these tools, announced today that it would be going public via a SPAC merger with Apex Technology Acquisition Corporation in a deal that values AvePoint at around $2 billion.

The acquisition brings together some powerful technology executives, with Apex run by former Oracle CFO Jeff Epstein and former Goldman Sachs head of technology investment banking Brad Koenig, who will now be working closely with AvePoint’s CEO Tianyi Jiang. Apex filed for a $305 million SPAC in September 2019.

Under the terms of the transaction, Apex’s balance of $352 million plus a $140 million additional private investment will be handed over to AvePoint. Once transaction fees and other considerations are paid for, AvePoint is expected to have $252 million on its balance sheet. Existing AvePoint shareholders will own approximately 72% of the combined entity, with the balance held by the Apex SPAC and the private investment owners.

Jiang sees this as a way to keep growing the company. “Going public now gives us the ability to meet this demand and scale up faster across product innovation, channel marketing, international markets and customer success initiatives,” he said in a statement.

AvePoint was founded in 2001 as a company to help ease the complexity of SharePoint installations, which at the time were all on-premise. Today, it has adapted to the shift to the cloud as a SaaS tool and primarily acts as a policy layer enabling companies to make sure employees are using these tools in a compliant way.

The company raised $200 million in January this year led by Sixth Street Partners (formerly TPG Sixth Street Partners), with additional participation from prior investor Goldman Sachs, meaning that Koenig was probably familiar with the company based on his previous role.

The company had raised a total of $294 million in capital before today’s announcement. It expects to generate almost $150 million in revenue by the end of this year, with ARR growing at over 30%. It’s worth noting that the company’s ARR and revenue have been growing steadily since Q1 2019. The company is projecting significant growth for the next two years, with revenue estimates of $257 million and ARR of $220 million by the end of 2022.

Graph of revenue and projected revenue

Image Credits: AvePoint

The deal is expected to close in the first quarter of next year. Upon close the company will continue to be known as AvePoint and be publicly traded on Nasdaq under the new ticker symbol AVPT.

Nov 23, 2020
--

Impact of Percona Monitoring and Management “Get Latest” Command Change

In the first quarter of 2021 (expected late January), Percona is slated to release a version of Percona Monitoring and Management (PMM) v2 that will include all of the critical functionality users of PMM v1 have come to know and love over the years. While PMM v2 has some major improvements over its v1 sibling, it has long carried the stigma that there wasn’t parity between the versions on features like external services, annotations, MongoDB Explain, and custom collectors per service, to name a few.

By early 2021, we feel confident that users of PMM v1 will recognize that all the functionality they have come to rely upon in v1 is now in v2, and so we encourage you to come try it for yourself. While many of the missing features have since been added, one item to note is that external services will only arrive in that early 2021 release; as with all external exporters, you’ll still need to create your own graphs, but getting the remainder of this functionality will make just about anything you can squeeze data out of “monitorable”.

So What’s the Big Deal?

We will be modifying our “latest” tag, which currently points to v1.x, so that it points to v2.x instead. PMM v1 users have historically just rerun their ‘docker run pmm-server’ command to update to the next PMM v1.x version. They could pin a specific version of pmm-server by running

docker run -d --name pmm-server percona/pmm-server:1.17.3

or they could replace that with

docker run -d --name pmm-server percona/pmm-server:latest

and get whichever v1.x version is the latest released by Percona (as of this blog posting date, the latest is 1.17.4). But when we make PMM v2 “latest” early in 2021, those of you who run the latter command are going to be impacted (both positively and negatively), so we wanted to give you a heads-up now so you can plan accordingly and make the appropriate modifications to your deployment code.

First the positive news… PMM v2 has some very exciting and useful improvements over PMM v1 and we can’t wait for you to leverage this new functionality including:

  • A complete rewrite of the Query Analytics (QAN) tool, including improved speed, global sparkline hover, filtering, new dimensions to collect data, and rich searching capabilities
  • The Security Threat Tool (STT) so that you not only can monitor database performance but also database security
  • A robust expansion of MongoDB and PostgreSQL support (along with continued improvements for MySQL)
  • Integration with an external Alertmanager to create and deploy alerting, with “integrated alerting” (native alerting inside PMM itself) expected by the end of December 2020
  • Global and local annotations across nodes and services to highlight key events for correlation

As has been stated in the past, there is no direct upgrade/migration path from PMM v1 to PMM v2 because of the complete re-architecting in PMM v2. In fact, these are basically two separate and distinct applications. So you will need to stand up and install PMM v2 as a brand new system with new clients on your endpoints. Additionally, we do not provide a data migration path to move your historical data to PMM v2. You can, however, choose to run both PMM v1 and PMM v2 on the same host using this approach to ease the transition. 
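As a rough sketch of that parallel setup, you could publish the two server containers on different host ports (the container names and host ports below are arbitrary; adjust them to your environment):

docker run -d --name pmm-server-v1 -p 8080:80 percona/pmm-server:1.17.4
docker run -d --name pmm-server-v2 -p 8443:443 percona/pmm-server:2.11.1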

So, if you are one of those users that leverages the “:latest” tag to upgrade to the latest PMM version (note: this is not the recommended approach to upgrading your PMM implementation; the recommended Percona approach is to use a specific version number such as “pmm-server:2.11.1”), you need to start planning now to ensure a smooth transition to PMM v2. Here’s our recommendation for how to plan for this change now:

  1. Determine if you currently upgrade PMM via
    docker run -d --name pmm-server percona/pmm-server:latest

     

    1. If “no”, you will NOT be impacted by the early 2021 change. We would recommend you develop a plan for moving to PMM v2 in 2021 at your convenience, and then proceed to step #2 below.
    2. If “yes”, you WILL be impacted by the early 2021 change and thus need to create a plan on how to minimize your impact. 
      1. If you are planning to keep the docker run command and move to PMM v2 by early 2021, please continue to bullet #2 below. 
      2. If you will not be ready to move to PMM v2 by early 2021, please disable the above docker run command and implement a temporary, manual approach to upgrading to future PMM v1.x releases. When you are ready to migrate to PMM v2, please proceed to step #2 below.
  2.  Will you require access to historical PMM v1 data after deploying PMM v2?
    1. If “yes”, you will need to run both PMM v1 and PMM v2 in parallel, and keep both instances running until you no longer require access to PMM v1 data, as defined by your organization’s data retention policy.
    2. If “no”, you can install a clean deployment of PMM v2, accessible from the main Percona Monitoring and Management page. From then forward, we recommend you upgrade using the
      docker run.../pmm-server:2

      command, and upgrades will be performed from the v2.x branch of PMM.

After you upgrade in early 2021, enjoy the move to PMM v2 and please let us know your thoughts on its new features as well as any ideas you have for improvement.

Please note that this does NOT mean that we are “sunsetting” PMM v1 and will no longer support that application. While we are not creating new features for PMM v1, we do continue to maintain it with critical bug fixes as needed as well as support for the product for those customers on a support contract. This maintenance and support will continue until PMM moves to version 3.x at a date to be determined in the future.

 

Download and Try Percona Monitoring and Management Today!

Nov 23, 2020
--

Friday app, a remote work tool, raises $2.1 million led by Bessemer

Friday, an app looking to make remote work more efficient, has announced the close of a $2.1 million seed round led by Bessemer Venture Partners. Active Capital, Underscore, El Cap Holdings, TLC Collective and New York Venture Partners also participated in the round, among others.

Founded by Luke Thomas, Friday sits on top of the tools that teams already use — GitHub, Trello, Asana, Slack, etc. — to surface information that workers need when they need it and keep them on top of what others in the organization are doing.

The platform offers a Daily Planner feature, so users can roadmap their day and share it with others, as well as a Work Routines feature, giving users the ability to customize and even automate routine updates. For example, weekly updates or daily standups done via Slack or Google Hangouts can be done via the Friday app, eliminating the time managers, or others, spend jotting down these updates or copying that info over from Slack.

Friday also lets users set goals across the organization or team so that users’ daily and weekly work aligns with the broader OKRs of the company.

Plus, Friday users can track their time spent in meetings, as well as team morale and productivity, using the Analytics dashboard of the platform.

Friday has a free-forever model, which allows individual users or even organizations to use the app for free for as long as they want. More advanced features like Goals, Analytics and the ability to see more than three weeks of history within the app are paywalled at $6/seat/month.

Thomas says that one of the biggest challenges for Friday is that people automatically assume it’s competing with an Asana or Trello, as opposed to being a layer on top of these products that brings all that information into one place.

“The number one problem is that we’re in a noisy space,” said Thomas. “There are a lot of tools that are saying they’re a remote work tool when they’re really just a layer on top of Zoom or a video conferencing tool. There is certainly increased amount of interest in the space in a good and positive way, but it also means that we have to work harder to cut through the noise.”

The Friday team is small for now — four full-time staff members — and Thomas says that he plans to double the size of the team following the seed round. Thomas declined to share any information around the diversity breakdown of the team.

Following a beta launch at the beginning of 2020, Friday says it is used by employees at organizations such as Twitter, LinkedIn, Quizlet, Red Hat and EA, among others.

This latest round brings the company’s total funding to $2.5 million.
