Oct 21, 2020

Wrike launches new AI tools to keep your projects on track

Project management service Wrike today announced at its user conference a major update to its platform that includes a lot of new AI smarts for keeping individual projects on track and on time, as well as new solutions for marketers and project management offices in large corporations. In addition, the company launched a new budgeting feature and tweaks to the overall user experience.

The highlight of the launch, though, is without doubt the new AI and machine learning capabilities in Wrike. With more than 20,000 customers and over 2 million users on the platform, Wrike has collected a trove of data about projects that it can use to power these machine learning models.

Image Credits: Wrike

The way Wrike is now using AI falls into three categories: project risk prediction, task prioritization and tools for speeding up the overall project management workflow.

Figuring out the status of a project and knowing where delays could impact the overall project is often half the job. Wrike can now predict potential delays and alert project and team leaders when it sees events that signal potential issues. To do this, it uses basic information like start and end dates, but more importantly, it looks at the prior outcomes of similar projects to assess risks. Those predictions can then be fed into Wrike’s automation engine to trigger actions that could mitigate the risk to the project.
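
Wrike hasn’t published the details of its models, but the general shape (train on the outcomes of past projects, then score in-flight ones) is easy to sketch. The toy example below is purely illustrative; the feature names, data and model choice are assumptions, not Wrike’s actual system.

```python
# Hypothetical sketch of a project-delay risk predictor; features,
# data and model are illustrative, not Wrike's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [planned_days, pct_complete, overdue_tasks, reassignments]
past_projects = np.array([
    [30, 0.9, 0, 1],
    [90, 0.4, 7, 5],
    [45, 0.7, 2, 2],
    [120, 0.3, 12, 8],
])
was_delayed = np.array([0, 1, 0, 1])  # outcomes of similar past projects

model = LogisticRegression().fit(past_projects, was_delayed)

# Score an in-flight project; a high probability could feed an
# automation rule (e.g., alert the project lead).
risk = model.predict_proba([[60, 0.5, 5, 3]])[0, 1]
print(f"Predicted delay risk: {risk:.0%}")
if risk > 0.7:
    print("High risk: trigger mitigation workflow")
```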

Task prioritization does what you would expect and helps you figure out what you should focus on right now to help a project move forward. No surprises there.

What is maybe more surprising is that the team is also launching voice commands (through Siri on iOS) and Gmail-like smart replies (in English for iOS and Android). Those aren’t exactly core features of a project management tool, but as the company notes, they help reduce overall friction and latency. Another new feature that falls into this category is support for optical character recognition, which lets you scan printed and handwritten notes from your phone and attach them to tasks (iOS only).

“With more employees working from home, work and personal life are becoming intertwined,” the company argues. “As workers use AI in their personal lives, team managers and everyday users expect the smarts they’re accustomed to in consumer devices and apps to help them manage their work as well. Wrike Work Intelligence is the most comprehensive machine learning foundation that taps into tens of millions of work-related user engagements to power cross-functional collaboration to help organizations achieve operational efficiency, create new opportunities and accelerate digital transformation. Teams can focus on the work that matters most, predict and minimize delays, and cut communication latencies.”

Image Credits: Wrike

The other major new feature — at least if you’re in digital marketing — is Wrike’s new ability to pull in data about your campaigns from about 50 advertising, marketing automation and social media tools, which is then displayed inside the Wrike experience. In a fast-moving field, having all that data at your fingertips and right inside the tool where you think about how to manage these projects seems like a smart idea.

Image Credits: Wrike

Somewhat related, Wrike’s new budgeting feature also now makes it easier for teams to keep their projects within budget, using a new built-in rate card to manage project pricing and update their financials.
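
Wrike hasn’t detailed the budgeting feature beyond the rate-card concept, but the underlying arithmetic is simple. A hypothetical sketch, with made-up roles, rates and hours:

```python
# Toy rate-card budgeting; roles, rates and hours are all hypothetical.
RATE_CARD = {"designer": 95, "developer": 140, "pm": 110}  # $/hour

logged_hours = [("designer", 12), ("developer", 30), ("pm", 8)]
spend = sum(RATE_CARD[role] * hours for role, hours in logged_hours)

budget = 10_000
print(f"Spent ${spend:,} of ${budget:,}; ${budget - spend:,} remaining")
```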

“We use Wrike for an extensive project management and performance metrics system,” said Shannon Buerk, the CEO of engage2learn, which tested this new budgeting tool. “We have tried other PM systems and have found Wrike to be the best of all worlds: easy to use for everyone and savvy enough to provide valuable reporting to inform our work. Converting all inefficiencies into productive time that moves your mission forward is one of the keys to a culture of engagement and ownership within an organization, even remotely. Wrike has helped us get there.”

Oct 20, 2020

Egnyte introduces new features to help deal with security/governance during pandemic

The pandemic has put stress on companies dealing with a workforce that is mostly — and sometimes suddenly — working from home. That has led to rising needs for security and governance tooling, something that Egnyte is looking to meet with new features aimed at helping companies cope with file management during the pandemic.

Egnyte is an enterprise file sync-and-share (EFSS) company, though it has added security services and other tools over the years.

“It’s no surprise that there’s been a rapid shift to remote work, which has I believe led to mass adoption of multiple applications running on multiple clouds, and tied to that has been a nonlinear reaction of exponential growth in data security and governance concerns,” Vineet Jain, co-founder and CEO at Egnyte, explained.

There’s a lot of data at stake.

Egnyte’s announcements today are in part a reaction to the changes that COVID has brought, a mix of net-new features and capabilities that were on its road map, but accelerated to meet the needs of the changing technology landscape.

What’s new?

The company is introducing a new feature called Smart Cache to make sure that the content an individual user accesses most, wherever it lives, is ready whenever they need it.

“Smart Cache uses machine learning to predict the content most likely to be accessed at any given site, so administrators don’t have to anticipate usage patterns. The elegance of the solution lies in that it is invisible to the end users,” Jain said. The end result of this capability could be lower storage and bandwidth costs, because the system can make this content available in an automated way only when it’s needed.
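
Egnyte hasn’t said how Smart Cache’s predictions work under the hood. Purely as an illustration, here is a minimal sketch in which per-site access history is reduced to a recency-weighted frequency score and the top-scoring files are kept warm; the scoring rule and all names are assumptions:

```python
# Illustrative only; not Egnyte's model. Score = access frequency
# decayed exponentially by age, then cache the top-scoring files.
import time
from collections import defaultdict

def predicted_scores(access_log, now=None, half_life=86_400):
    """access_log: list of (site, file_id, timestamp) events."""
    now = now or time.time()
    scores = defaultdict(float)
    for site, file_id, ts in access_log:
        age = now - ts
        scores[(site, file_id)] += 0.5 ** (age / half_life)
    return scores

def files_to_cache(access_log, site, capacity=100):
    scores = predicted_scores(access_log)
    ranked = sorted((key for key in scores if key[0] == site),
                    key=lambda k: scores[k], reverse=True)
    return [file_id for _, file_id in ranked[:capacity]]

log = [("berlin", "specs.pdf", time.time() - 3_600),
       ("berlin", "specs.pdf", time.time() - 7_200),
       ("berlin", "old-deck.ppt", time.time() - 30 * 86_400)]
print(files_to_cache(log, "berlin", capacity=1))  # ['specs.pdf']
```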

Another new feature is email scanning and governance. As Jain points out, email is often a company’s largest data store, but it’s also a conduit for phishing attacks and malware. So Egnyte is introducing an email governance tool that keeps an eye on this content, scanning it for known malware and ransomware and blocking files from being put into distribution when it identifies something that could be harmful.
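
Real malware scanning combines signature engines, sandboxing and ML classifiers; the simplest recognizable piece is a hash blocklist. The sketch below is illustrative only and is not Egnyte’s engine (the sample entry is just the SHA-256 of an empty file):

```python
# Blocklist-style attachment screening, for illustration only.
import hashlib

KNOWN_BAD_SHA256 = {
    # Example entry: the SHA-256 of an empty file, used as a stand-in.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def should_quarantine(attachment: bytes) -> bool:
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_SHA256

if should_quarantine(b""):
    print("Blocked: file matches a known-bad signature")
```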

As companies move more files around it’s important that security and governance policies travel with the document, so that policies can be enforced on the file wherever it goes. This was true before COVID-19, but has only become more true as more folks work from home.

Finally, Egnyte is using machine learning for auto-classification of documents to apply policies to documents without humans having to touch them. By identifying the document type automatically, whether it has personally identifying information or it’s a budget or planning document, Egnyte can help customers auto-classify and apply policies about viewing and sharing to protect sensitive materials.
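
Egnyte’s classifiers are ML-based; the rule-driven toy below only shows the overall shape of auto-classification feeding a sharing policy. The patterns, labels and policies are invented for the example:

```python
# Toy auto-classification; Egnyte's actual models are ML-based and
# far richer. Patterns, labels and policies are invented here.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                     # US SSN-like
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),   # card-like
]

def classify(text: str) -> str:
    if any(p.search(text) for p in PII_PATTERNS):
        return "pii"
    if re.search(r"\b(budget|forecast|Q[1-4] plan)\b", text, re.I):
        return "financial-planning"
    return "general"

POLICY = {"pii": "internal-only, no external links",
          "financial-planning": "finance group only",
          "general": "default sharing rules"}

doc = "FY21 budget draft. Contact: 123-45-6789"
label = classify(doc)
print(label, "->", POLICY[label])
```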

Egnyte is reacting to the market needs as it makes changes to the platform. While the pandemic has pushed this along, these are features that companies with documents spread out across various locations can benefit from regardless of the times.

The company is above $100 million in ARR today and grew 22% in the first half of 2020. Whether it can accelerate that growth rate in H2 2020 is not yet clear. Regardless, Egnyte is a budding IPO candidate for 2021 if market conditions hold.

Oct 14, 2020

Atlassian Smarts adds machine learning layer across the company’s platform of services

Atlassian has long offered collaboration tools favored by developers and IT, with such stalwarts as Jira for help desk tickets, Confluence for organizing your work and Bitbucket for organizing your development deliverables. What it lacked was a machine learning layer across the platform to help users work smarter within and across the applications in the Atlassian family.

That changed today, when Atlassian announced it has been building that machine learning layer, called Atlassian Smarts, and is releasing several tools that take advantage of it. It’s worth noting that unlike Salesforce, which calls its intelligence layer Einstein, or Adobe, which calls its version Sensei, Atlassian chose to forgo the cutesy marketing terms and just let the technology stand on its own.

Shihab Hamid, the founder of the Smarts and Machine Learning Team at Atlassian, who has been with the company 14 years, says they avoided a marketing name by design. “I think one of the things that we’re trying to focus on is actually the user experience and so rather than packaging or branding the technology, we’re really about optimizing teamwork,” Hamid told TechCrunch.

Hamid says that the goal of the machine learning layer is to remove the complexity involved with organizing people and information across the platform.

“Simple tasks like finding the right person or the right document become a challenge, or at least they slow down productivity and take time away from the creative high-value work that everyone wants to be doing, and teamwork itself is super messy and collaboration is complicated. These are human challenges that don’t really have one right solution,” he said.

He says that Atlassian has decided to solve these problems using machine learning with the goal of speeding up repetitive, time-intensive tasks. Much like Adobe or Salesforce, Atlassian has built this underlying layer of machine smarts, for lack of a better term, that can be distributed across their platform to deliver this kind of machine learning-based functionality wherever it makes sense for the particular product or service.

“We’ve invested in building this functionality directly into the Atlassian platform to bring together IT and development teams to unify work, so the Atlassian flagship products like JIRA and Confluence sit on top of this common platform and benefit from that common functionality across products. And so the idea is if we can build that common predictive capability at the platform layer we can actually proliferate smarts and benefit from the data that we gather across our products,” Hamid said.

The first pieces fit into this vision. For starters, Atlassian is offering a smart search tool that helps users find content across Atlassian tools faster by understanding who you are and how you work. “So by knowing where users work and what they work on, we’re able to proactively provide access to the right documents and accelerate work,” he said.
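
Atlassian hasn’t disclosed its ranking model. One plausible, purely hypothetical shape is a blend of text relevance with a per-user affinity signal, sketched here with invented names and weights:

```python
# Hypothetical ranking sketch; not Atlassian's algorithm. It blends a
# text-relevance score with a personal affinity signal (the spaces a
# user actually works in).
def rank(results, user_spaces, alpha=0.7):
    """results: list of (doc_id, space, text_score) with scores in [0, 1]."""
    def score(result):
        _doc_id, space, text_score = result
        affinity = 1.0 if space in user_spaces else 0.0
        return alpha * text_score + (1 - alpha) * affinity
    return sorted(results, key=score, reverse=True)

hits = [("DOC-1", "platform", 0.80), ("DOC-2", "marketing", 0.85)]
print(rank(hits, user_spaces={"platform"}))  # DOC-1 now outranks DOC-2
```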

The second piece is more about collaboration and building teams with the best personnel for a given task. A new tool called predictive user mentions helps Jira and Confluence users find the right people for the job.

“What we’ve done with the Atlassian platform is actually baked in that intelligence, because we know what you work on and who you collaborate with, so we can predict who should be involved and brought into the conversation,” Hamid explained.

Finally, the company announced a tool specifically for Jira users that bundles together similar help requests, which should lead to faster resolution than handling them manually one at a time.

“We’re soon launching a feature in JIRA Service Desk that allows users to cluster similar tickets together, and operate on them to accelerate IT workflows, and this is done in the background using ML techniques to calculate the similarity of tickets, based on the summary and description, and so on.”
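
The exact ML techniques aren’t specified beyond text similarity on summary and description. As a rough illustration of the idea (not Atlassian’s implementation), TF-IDF vectors plus cosine similarity are enough to group near-duplicate tickets:

```python
# Illustrative only: cluster tickets by TF-IDF cosine similarity of
# their text. The threshold is an arbitrary choice for this demo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tickets = [
    "VPN connection drops every hour",
    "VPN connection drops when working from home",
    "Printer on floor three is offline",
    "Floor three printer not responding",
]
sim = cosine_similarity(TfidfVectorizer().fit_transform(tickets))

# Greedy grouping: each ticket joins the first earlier ticket it
# resembles closely enough.
cluster = {}
for i in range(len(tickets)):
    match = next((j for j in range(i) if sim[i, j] > 0.25), i)
    cluster[i] = cluster.get(match, match)

for i, text in enumerate(tickets):
    print(cluster[i], text)  # similar tickets share a cluster id
```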

All of this was made possible by the company’s previous shift from mostly on-premises to the cloud and the flexibility that gave them to build new tooling that crosses the entire platform.

Today’s announcements are just the start of what Atlassian hopes will be a slew of new machine learning-fueled features being added to the platform in the coming months and years.

Oct 8, 2020

Grid AI raises $18.6M Series A to help AI researchers and engineers bring their models to production

Grid AI, a startup founded by William Falcon, the inventor of the popular open-source PyTorch Lightning project, aims to help machine learning engineers work more efficiently. Today the company announced that it has raised an $18.6 million Series A funding round, which closed earlier this summer. The round was led by Index Ventures, with participation from Bain Capital Ventures and firstminute.

Falcon co-founded the company with Luis Capelo, who was previously the head of machine learning at Glossier. Unsurprisingly, the idea here is to take PyTorch Lightning, which launched about a year ago, and turn that into the core of Grid’s service. The main idea behind Lightning is to decouple the data science from the engineering.
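
Lightning itself is open source, so the decoupling is easy to show: research code (model, loss, optimizer) lives in a LightningModule, while engineering concerns (devices, loops, checkpointing) belong to the Trainer. A minimal example; the architecture and the commented-out Trainer call are illustrative:

```python
# Research code goes in the LightningModule; engineering goes in the Trainer.
import torch
from torch import nn
import pytorch_lightning as pl

class TinyClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Scaling out is a Trainer argument, not a model rewrite
# (train_loader is assumed to exist):
# pl.Trainer(max_epochs=3, gpus=4).fit(TinyClassifier(), train_loader)
```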

The team argues that a few years ago, when data scientists tried to get started with deep learning, they didn’t always have the right expertise and it was hard for them to get everything right.

“Now the industry has an unhealthy aversion to deep learning because of this,” Falcon noted. “Lightning and Grid embed all those tricks into the workflow so you no longer need to be a PhD in AI nor [have] the resources of the major AI companies to get these things to work. This makes the opportunity cost of putting a simple model against a sophisticated neural network a few hours’ worth of effort instead of the months it used to take. When you use Lightning and Grid it’s hard to make mistakes. It’s like if you take a bad photo with your phone but we are the phone and make that photo look super professional AND teach you how to get there on your own.”

As Falcon noted, Grid is meant to help data scientists and other ML professionals “scale to match the workloads required for enterprise use cases.” Lightning itself can get them partially there, but Grid is meant to provide all of the services its users need to scale up their models to solve real-world problems.

What exactly that looks like isn’t quite clear yet, though. “Imagine you can find any GitHub repository out there. You get a local copy on your laptop and without making any code changes you spin up 400 GPUs on AWS — all from your laptop using either a web app or command-line-interface. That’s the Lightning “magic” applied to training and building models at scale,” Falcon said. “It is what we are already known for and has proven to be such a successful paradigm shift that all the other frameworks like Keras or TensorFlow, and companies have taken notice and have started to modify what they do to try to match what we do.”

The service is now in private beta.

With this new funding, Grid, which currently has 25 employees, plans to expand its team and strengthen its corporate offering via both Grid AI and the open-source project. Falcon tells me that he aims to build a diverse team, not least because he himself is an immigrant, born in Venezuela, and a U.S. military veteran.

“I have first-hand knowledge of the extent that unethical AI can have,” he said. “As a result, we have approached hiring our current 25 employees across many backgrounds and experiences. We might be the first AI company that is not all the same Silicon Valley prototype tech-bro.”

“Lightning’s open-source traction piqued my interest when I first learned about it a year ago,” Index Ventures’ Sarah Cannon told me. “So intrigued in fact I remember rushing into a closet in Helsinki while at a conference to have the privacy needed to hear exactly what Will and Luis had built. I promptly called my colleague Bryan Offutt who met Will and Luis in SF and was impressed by the ‘elegance’ of their code. We swiftly decided to participate in their seed round, days later. We feel very privileged to be part of Grid’s journey. After investing in seed, we spent a significant amount of time with the team, and the more time we spent with them the more conviction we developed. Less than a year later and pre-launch, we knew we wanted to lead their Series A.”

Oct 5, 2020

Strike Graph raises $3.9M to help automate security audits

Compliance automation isn’t exactly the most exciting topic, but security audits are big business and companies that aim to get a SOC 2, ISO 27001 or FedRAMP certification can often spend six figures to get through the process with the help of an auditing service. Seattle-based Strike Graph, which is launching today and announcing a $3.9 million seed funding round, wants to automate as much of this process as possible.

The company’s funding round was led by Madrona Venture Group, with participation from Amplify.LA, Revolution’s Rise of the Rest Seed Fund and Green D Ventures.

Strike Graph co-founder and CEO Justin Beals tells me that the idea for the company came to him during his time as CTO at machine learning startup Koru (which had a bit of an odd exit last year). To get enterprise adoption for that service, the company had to get a SOC 2 security certification. “It was a real challenge, especially for a small company. In talking to my colleagues, I just recognized how much of a challenge it was across the board. And so when it was time for the next startup, I was just really curious,” he told me.

Image Credits: Strike Graph

Together with his co-founder Brian Bero, he incubated the idea at Madrona Venture Labs, where he spent some time as Entrepreneur in Residence after Koru.

Beals argues that today’s process tends to be slow, inefficient and expensive. The idea behind Strike Graph, unsurprisingly, is to remove as many of these inefficiencies as is currently possible. The company itself, it is worth noting, doesn’t provide the actual audit service. Businesses will still need to hire an auditing service for that. But Beals also argues that the bulk of what companies are paying for today is pre-audit preparation.

“We do all that preparation work and preparing you and then, after your first audit, you have to go and renew every year. So there’s an important maintenance of that information.”

Image Credits: Strike Graph

When customers come to Strike Graph, they fill out a risk assessment. The company takes that and can then provide them with controls for how to improve their security posture — both to pass the audit and to secure their data. Beals also noted that soon, Strike Graph will be able to help businesses automate the collection of evidence for the audit (say your encryption settings) and can pull that in regularly. Certifications like SOC 2, after all, require companies to have ongoing security practices in place and get re-audited every 12 months. Automated evidence collection will launch in early 2021, once the team has built out the first set of its integrations to collect that data.
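
Strike Graph’s integrations aren’t public yet, but automated evidence collection for a control like encryption-at-rest is straightforward to sketch. A hypothetical example using the AWS S3 API (the evidence-record shape is invented):

```python
# Hypothetical evidence collector: pull a control's supporting evidence
# (here, S3 default encryption) on a schedule. The record format is
# invented; Strike Graph's actual integrations are not public.
import boto3
from botocore.exceptions import ClientError

def s3_encryption_evidence(bucket: str) -> dict:
    s3 = boto3.client("s3")
    try:
        enc = s3.get_bucket_encryption(Bucket=bucket)
        rules = enc["ServerSideEncryptionConfiguration"]["Rules"]
        return {"control": "encryption-at-rest", "bucket": bucket,
                "status": "pass", "evidence": rules}
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "ServerSideEncryptionConfigurationNotFoundError":
            return {"control": "encryption-at-rest", "bucket": bucket,
                    "status": "fail", "evidence": None}
        raise
```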

That’s also where the company, which mostly targets mid-size businesses, plans to spend a lot of its new funding. In addition, the company plans to focus on its marketing efforts, mostly around content marketing and educating its potential customers.

“Every company, big or small, that sells a software solution must address a broad set of compliance requirements in regards to security and privacy. Obtaining the certifications can be a burdensome, opaque and expensive process. Strike Graph is applying intelligent technology to this problem — they help the company identify the appropriate risks, enable the audit to run smoothly and then automate the compliance and testing going forward,” said Hope Cochran, managing director at Madrona Venture Group. “These audits were a necessary pain when I was a CFO, and Strike Graph’s elegant solution brings together teams across the company to move the business forward faster.”

Oct 5, 2020

As it closes in on Arm, Nvidia announces UK supercomputer dedicated to medical research

As Nvidia continues to work through its deal to acquire Arm from SoftBank for $40 billion, the computing giant is making another big move to lay out its commitment to investing in U.K. technology. Today the company announced plans to develop Cambridge-1, a new £40 million AI supercomputer that will be used for health industry research in the country. It will be the first supercomputer Nvidia has built specifically for external research access, the company said.

Nvidia said it is already working with GSK, AstraZeneca, London hospitals Guy’s and St Thomas’ NHS Foundation Trust, King’s College London and Oxford Nanopore to use the Cambridge-1. The supercomputer is due to come online by the end of the year and will be the company’s second supercomputer in the country. The first is already in development at the company’s AI Center of Excellence in Cambridge, and the plan is to add more supercomputers over time.

The growing role of AI has underscored an interesting crossroads in medical research. On one hand, leading researchers all acknowledge the role it will be playing in their work. On the other, none of them (nor their institutions) have the resources to meet that demand on their own. That’s driving them all to get involved much more deeply with big tech companies like Google, Microsoft and, in this case, Nvidia, to carry out work.

Alongside the supercomputer news, Nvidia is making a second announcement in the area of healthcare in the U.K.: it has inked a partnership with GSK, which has established an AI hub in London, to build AI-based computational processes that will be used in vaccine and drug discovery — an especially timely piece of news, given that we are in a global health pandemic and all drug makers and researchers are on the hunt to understand more about, and build vaccines for, COVID-19.

The news is coinciding with Nvidia’s industry event, the GPU Technology Conference.

“Tackling the world’s most pressing challenges in healthcare requires massively powerful computing resources to harness the capabilities of AI,” said Jensen Huang, founder and CEO of Nvidia, in his keynote at the event. “The Cambridge-1 supercomputer will serve as a hub of innovation for the U.K., and further the groundbreaking work being done by the nation’s researchers in critical healthcare and drug discovery.”

The company plans to dedicate Cambridge-1 resources in four areas, it said: industry research, in particular joint research on projects that exceed the resources of any single institution; university-granted compute time; health-focused AI startups; and education for future AI practitioners. It’s already building specific applications, like the drug discovery work it’s doing with GSK, that will run on the machine.

The Cambridge-1 will be built on Nvidia’s DGX SuperPOD system and will deliver 400 petaflops of AI performance and 8 petaflops of Linpack performance. Nvidia said this will rank it as the 29th fastest supercomputer in the world.

“Number 29” doesn’t sound very groundbreaking, but there are other reasons why the announcement is significant.

For starters, it underscores how the supercomputing market — while still not a mass-market enterprise — is increasingly developing more focus around specific areas of research and industries. In this case, it underscores how health research has become more complex, and how applications of artificial intelligence have both spurred that complexity and, in the form of stronger computing power, provided a better route — some might say one of the only viable routes in the most complex of cases — to medical breakthroughs and discoveries.

It’s also notable that the effort is being forged in the U.K. Nvidia’s deal to buy Arm has seen some resistance in the market — with one group leading a campaign to stop the sale and take Arm independent — but this latest announcement underscores that the company is already involved pretty deeply in the U.K. market, bolstering Nvidia’s case to double down even further. (Yes, chip reference designs and building supercomputers are different enterprises, but the argument for Nvidia is one of commitment and presence.)

“AI and machine learning are like a new microscope that will help scientists to see things that they couldn’t see otherwise,” said Dr. Hal Barron, chief scientific officer and president, R&D, GSK, in a statement. “NVIDIA’s investment in computing, combined with the power of deep learning, will enable solutions to some of the life sciences industry’s greatest challenges and help us continue to deliver transformational medicines and vaccines to patients. Together with GSK’s new AI lab in London, I am delighted that these advanced technologies will now be available to help the U.K.’s outstanding scientists.”

“The use of big data, supercomputing and artificial intelligence have the potential to transform research and development; from target identification through clinical research and all the way to the launch of new medicines,” added James Weatherall, PhD, head of Data Science and AI, AstraZeneca, in his statement.

“Recent advances in AI have seen increasingly powerful models being used for complex tasks such as image recognition and natural language understanding,” said Sebastien Ourselin, head, School of Biomedical Engineering & Imaging Sciences at King’s College London. “These models have achieved previously unimaginable performance by using an unprecedented scale of computational power, amassing millions of GPU hours per model. Through this partnership, for the first time, such a scale of computational power will be available to healthcare research – it will be truly transformational for patient health and treatment pathways.”

Dr. Ian Abbs, chief executive officer and chief medical director of Guy’s and St Thomas’ NHS Foundation Trust, said: “If AI is to be deployed at scale for patient care, then accuracy, robustness and safety are of paramount importance. We need to ensure AI researchers have access to the largest and most comprehensive datasets that the NHS has to offer, our clinical expertise, and the required computational infrastructure to make sense of the data. This approach is not only necessary, but also the only ethical way to deliver AI in healthcare – more advanced AI means better care for our patients.”

“Compact AI has enabled real-time sequencing in the palm of your hand, and AI supercomputers are enabling new scientific discoveries in large-scale genomic data sets,” added Gordon Sanghera, CEO, Oxford Nanopore Technologies. “These complementary innovations in data analysis support a wealth of impactful science in the U.K., and critically, support our goal of bringing genomic analysis to anyone, anywhere.”


Sep 29, 2020

Datasaur snags $3.9M investment to build intelligent machine learning labeling platform

As machine learning has grown, one of the major bottlenecks remains labeling things so the machine learning application understands the data it’s working with. Datasaur, a member of the Y Combinator Winter 2020 batch, announced a $3.9 million investment today to help solve that problem with a platform designed for machine learning labeling teams.

The funding announcement, which includes a pre-seed amount of $1.1 million from last year and $2.8 million seed right after it graduated from Y Combinator in March, included investments from Initialized Capital, Y Combinator and OpenAI CTO Greg Brockman.

Company founder Ivan Lee says that he has been working in various capacities involving AI for seven years: first at Yahoo, which acquired his mobile gaming startup Loki Studios in 2013 and eventually moved him to the AI team, and most recently at Apple. Regardless of the company, he consistently saw a problem around organizing machine learning labeling teams, one that he felt he was uniquely situated to solve because of his experience.

“I have spent millions of dollars [in budget over the years] and spent countless hours gathering labeled data for my engineers. I came to recognize that this was something that was a problem across all the companies that I’ve been at. And they were just consistently reinventing the wheel and the process. So instead of reinventing that for the third time at Apple, my most recent company, I decided to solve it once and for all for the industry. And that’s why we started Datasaur last year,” Lee told TechCrunch.

He built a platform to speed up human data labeling with a dose of AI, while keeping humans involved. The platform consists of three parts: a labeling interface; the intelligence component, which can recognize basic things so the labeler isn’t identifying the same thing over and over; and finally a team organizing component.
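
Datasaur’s models and interface are proprietary; to make the “intelligence component” concrete, here is a sketch of pre-labeling with an off-the-shelf NER model (spaCy stands in for whatever Datasaur actually uses, and the suggestion format is invented):

```python
# Sketch of pre-annotation: suggest entity labels so humans review
# instead of labeling from scratch. spaCy is a stand-in here; the
# suggestion format is invented for illustration.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

def presuggest(text: str):
    doc = nlp(text)
    return [{"start": ent.start_char, "end": ent.end_char,
             "label": ent.label_, "needs_review": True}
            for ent in doc.ents]

for suggestion in presuggest("Ivan Lee founded Datasaur after leaving Apple."):
    print(suggestion)  # a human labeler confirms or corrects each one
```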

He says the area is hot, but to this point has mostly involved labeling consulting solutions, which farm out labeling to contractors. He points to the sale of Figure Eight in March 2019 and to Scale, which snagged $100 million last year as examples of other startups trying to solve this problem in this way, but he believes his company is doing something different by building a fully software-based solution.

The company currently offers a cloud and on-prem solution, depending on the customer’s requirements. It has 10 employees, with plans to hire in the next year, although he didn’t share an exact number. As he does that, he says he has been working with a partner at investor Initialized on creating a positive and inclusive culture inside the organization, and that includes conversations about hiring a diverse workforce as he builds the company.

“I feel like this is just standard CEO speak, but that is something that we absolutely value in our top of funnel for the hiring process,” he said.

As Lee builds out his platform, he has also worried about built-in bias in AI systems and the detrimental impact that could have on society. He says that he has spoken to clients about the role of labeling in bias and ways of combatting that.

“When I speak with our clients, I talk to them about the potential for bias from their labelers and built into our product itself is the ability to assign multiple people to the same project. And I explain to my clients that this can be more costly, but from personal experience I know that it can improve results dramatically to get multiple perspectives on the exact same data,” he said.

Lee believes humans will continue to be involved in the labeling process in some way, even as parts of the process become more automated. “The very nature of our existence [as a company] will always require humans in the loop, […] and moving forward I do think it’s really important that as we get into more and more of the long tail use cases of AI, we will need humans to continue to educate and inform AI, and that’s going to be a critical part of how this technology develops.”

Sep 25, 2020

Privacy data management innovations reduce risk, create new revenue channels

Privacy data mismanagement is a lurking liability within every commercial enterprise. The very definition of privacy data is evolving over time and has been broadened to include information concerning an individual’s health, wealth, college grades, geolocation and web surfing behaviors. Regulations are proliferating at state, national and international levels that seek to define privacy data and establish controls governing its maintenance and use.

Existing regulations are relatively new and are being translated into operational business practices through a series of judicial challenges that are currently in progress, adding to the confusion regarding proper data handling procedures. In this confusing and sometimes chaotic environment, the privacy risks faced by almost every corporation are frequently ambiguous, constantly changing and continually expanding.

Conventional information security (infosec) tools are designed to prevent the inadvertent loss or intentional theft of sensitive information. They are not sufficient to prevent the mismanagement of privacy data. Privacy safeguards not only need to prevent loss or theft but they must also prevent the inappropriate exposure or unauthorized usage of such data, even when no loss or breach has occurred. A new generation of infosec tools is needed to address the unique risks associated with the management of privacy data.

The first wave of innovation

A variety of privacy-focused security tools emerged over the past few years, triggered in part by the introduction of GDPR (General Data Protection Regulation) within the European Union in 2018. New capabilities introduced by this first wave of innovation were focused in the following three areas:

Data discovery, classification and cataloging. Modern enterprises collect a wide variety of personal information from customers, business partners and employees at different times for different purposes with different IT systems. This data is frequently disseminated throughout a company’s application portfolio via APIs, collaboration tools, automation bots and wholesale replication. Maintaining an accurate catalog of the location of such data is a major challenge and a perpetual activity. BigID, DataGuise and Integris Software have gained prominence as popular solutions for data discovery. Collibra and Alation are leaders in providing complementary capabilities for data cataloging.

Consent management. Individuals are commonly presented with privacy statements describing the intended use and safeguards that will be employed in handling the personal data they supply to corporations. They consent to these statements — either explicitly or implicitly — at the time such data is initially collected. Osano, Transcend.io and DataGrail.io specialize in the management of consent agreements and the enforcement of their terms. These tools enable individuals to exercise their consensual data rights, such as the right to view, edit or delete personal information they’ve provided in the past.
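
None of these vendors publish their schemas, but the core data structure is recognizable: an append-only ledger of consent events in which the latest record for a given subject and purpose wins. A minimal, purely illustrative sketch:

```python
# Illustrative consent ledger; the named vendors' actual schemas and
# APIs are not public here.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str            # e.g., "marketing-email"
    granted: bool
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class ConsentLedger:
    def __init__(self):
        self.records = []   # append-only event log

    def grant(self, subject, purpose):
        self.records.append(ConsentRecord(subject, purpose, True))

    def revoke(self, subject, purpose):
        self.records.append(ConsentRecord(subject, purpose, False))

    def allowed(self, subject, purpose) -> bool:
        # Latest record wins; no record means no consent.
        for rec in reversed(self.records):
            if rec.subject_id == subject and rec.purpose == purpose:
                return rec.granted
        return False

ledger = ConsentLedger()
ledger.grant("user-42", "marketing-email")
ledger.revoke("user-42", "marketing-email")
print(ledger.allowed("user-42", "marketing-email"))  # False
```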

Sep 23, 2020

WhyLabs brings more transparency to ML ops

WhyLabs, a new machine learning startup that was spun out of the Allen Institute, is coming out of stealth today. Founded by a group of former Amazon machine learning engineers, Alessya Visnjic, Sam Gracie and Andy Dang, together with Madrona Venture Group principal Maria Karaivanova, WhyLabs’ focus is on ML operations after models have been trained — not on building those models from the ground up.

The team also today announced that it has raised a $4 million seed funding round from Madrona Venture Group, Bezos Expeditions, Defy Partners and Ascend VC.

Visnjic, the company’s CEO, used to work on Amazon’s demand forecasting model.

“The team was all research scientists, and I was the only engineer who had kind of tier-one operating experience,” she told me. “So I thought, ‘Okay, how bad could it be? I carried the pager for the retail website before.’ But it was one of the first AI deployments that we’d done at Amazon at scale. The pager duty was extra fun because there were no real tools. So when things would go wrong — like we’d order way too many black socks out of the blue — it was a lot of manual effort to figure out why issues were happening.”

Image Credits: WhyLabs

But while large companies like Amazon have built their own internal tools to help their data scientists and AI practitioners operate their AI systems, most enterprises continue to struggle with this — and a lot of AI projects simply fail and never make it into production. “We believe that one of the big reasons that happens is because of the operating process that remains super manual,” Visnjic said. “So at WhyLabs, we’re building the tools to address that — specifically to monitor and track data quality and alert — you can think of it as Datadog for AI applications.”

The team has broad ambitions, but to get started, it is focusing on observability. The team is building — and open-sourcing — a new tool for continuously logging what’s happening in the AI system, using a low-overhead agent. That platform-agnostic system, dubbed WhyLogs, is meant to help practitioners understand the data that moves through the AI/ML pipeline.
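
WhyLogs is open source, but rather than guess its exact API here, the sketch below shows in plain pandas the kind of compact per-batch statistical profile such an agent records in place of raw data:

```python
# Plain-pandas sketch of statistical profiling; not the WhyLogs API.
# The point: keep a small summary per batch instead of the raw data.
import pandas as pd

def profile(batch: pd.DataFrame) -> dict:
    stats = {}
    for col in batch.columns:
        series = batch[col]
        entry = {"count": int(series.count()),
                 "nulls": int(series.isna().sum())}
        if pd.api.types.is_numeric_dtype(series):
            entry.update(mean=float(series.mean()), std=float(series.std()),
                         min=float(series.min()), max=float(series.max()))
        else:
            entry["distinct"] = int(series.nunique())
        stats[col] = entry
    return stats  # small enough to retain for every batch, indefinitely

batch = pd.DataFrame({"price": [9.99, 12.5, None], "sku": ["a", "b", "b"]})
print(profile(batch))
```

Comparing these profiles batch over batch is what makes drift and data-quality anomalies (like a sudden spike in nulls) visible without storing the haystack.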

For a lot of businesses, Visnjic noted, the amount of data that flows through these systems is so large that it doesn’t make sense for them to keep “lots of big haystacks with possibly some needles in there for some investigation to come in the future.” So what they do instead is just discard all of this. With its data logging solution, WhyLabs aims to give these companies the tools to investigate their data and find issues right at the start of the pipeline.

Image Credits: WhyLabs

According to Karaivanova, the company doesn’t have paying customers yet, but it is working on a number of proofs of concepts. Among those users is Zulily, which is also a design partner for the company. The company is going after mid-size enterprises for the time being, but as Karaivanova noted, to hit the sweet spot for the company, a customer needs to have an established data science team with 10 to 15 ML practitioners. While the team is still figuring out its pricing model, it’ll likely be a volume-based approach, Karaivanova said.

“We love to invest in great founding teams who have built solutions at scale inside cutting-edge companies, who can then bring products to the broader market at the right time. The WhyLabs team are practitioners building for practitioners. They have intimate, first-hand knowledge of the challenges facing AI builders from their years at Amazon and are putting that experience and insight to work for their customers,” said Tim Porter, managing director at Madrona. “We couldn’t be more excited to invest in WhyLabs and partner with them to bring cross-platform model reliability and observability to this exploding category of MLOps.”

Sep 22, 2020

Microsoft Teams gets breakout rooms, custom layouts and virtual commutes

Teams has become a major focus for Microsoft during the COVID-19 pandemic, so it’s no surprise that the company is using its annual Ignite IT conference to announce a number of new features for the service.

Today’s announcements follow the launch of features like Together Mode and dynamic view earlier this summer.

Together Mode, which puts cutouts of meeting participants in different settings, is getting a bit of an update today with the launch of new scenes: auditoriums, coffee shops and conference rooms. Like before, the presenter chooses the scene, but what’s new now is that Microsoft is also using machine learning to ensure that participants are automatically centered in their virtual chairs, making the whole scene look just a little bit more natural (and despite what Microsoft’s research shows, I can never help but think that this all looks a bit goofy, maybe because it reminds me of the opening credits of The Muppet Show).

Image Credits: Microsoft

Also new in Teams is custom layouts, which allow presenters to customize how their presentations — and their own video feeds — appear. With this, a presenter can superimpose her own video image over the presentation, for example.

Image Credits: Microsoft

Breakout rooms, a feature that is getting a lot of use in Zoom these days, are now also coming to Teams. Microsoft calls this the most requested feature in Teams and, like in similar products, it allows meeting organizers to split participants into smaller groups — and the meeting organizer can then go from room to room. Unsurprisingly, this feature is especially popular with teachers, though companies, too, often use it to facilitate brainstorming sessions, for example.

Image Credits: Microsoft

After exhausting all your brainstorming power in those breakout rooms and finishing up your meeting, Teams can now also send you an automatic recap of a meeting that includes a recording, transcript, shared files and more. These recaps will automatically appear on your Outlook calendar. In the future, Microsoft will also enable the ability to automatically store these recordings on SharePoint.

For companies that regularly host large meetings, Microsoft will launch support for up to 1,000 participants in the near future. Attendees in these meetings will get the full Teams experience, Microsoft promises. Later, Microsoft will also enable view-only meetings for up to 20,000 participants. Both of these features will become available as part of a new “Advanced Communications” plan, which is probably no surprise, given how much bandwidth and compute power it will likely take to manage a 1,000-person meeting.

Image Credits: Microsoft

Microsoft also made two hardware announcements related to Teams today. The first is the launch of what it calls “Microsoft Teams panels,” which are essentially small tablets that businesses can put outside of their meeting rooms for wayfinding. One cool feature here — especially as businesses start planning their post-pandemic office strategy — is that these devices will be able to use information from the cameras in the room to count how many people are attending a meeting in person and then show remaining room capacity, for example.

The company also today announced that the giant Surface Hub 2S 85-inch model will be available in January 2021.

And there is more. Microsoft is also launching new Teams features for front-line workers: tools to help schedule shifts, alerts for workers who are using Teams off-shift, and praise badges that enable organizations to recognize workers (though those workers would probably prefer hard cash over a digital badge).

Also new is an integration between Teams and RealWear head-mounted devices for remote collaboration and a new Walkie Talkie app for Android.

And since digital badges aren’t usually enough to improve employee well-being, Microsoft is also adding a new set of well-being features to Teams. These provide users with personalized recommendations to help change habits and improve well-being and productivity.

Image Credits: Microsoft

That includes a new “virtual commute” feature that includes an integration with Headspace and an emotional check-in experience.

I’ve always been a fan of short and manageable commutes for putting some distance between work and home, but that’s not exactly a thing right now. Maybe a Headspace session can stand in for that, but there’s only so much Andy Puddicombe I can take. Still, I think I’ll keep my emotional check-ins to myself, though Microsoft obviously notes that it will keep all of that information private.

And while businesses now care about your emotional well-being (because it’s closely related to your productivity), managers mostly care about the work you get done. For them, Workplace Analytics is coming to Teams, giving “managers line of sight into teamwork norms like after-hours collaboration, focus time, meeting effectiveness, and cross-company connections. These will then be compared to averages among similar teams to provide managers with actionable insights.”

If that doesn’t make your manager happy, what will? Maybe a digital praise badge?
