Jun
02
2020
--

Atlassian launches new DevOps features

Atlassian today launched a slew of DevOps-centric updates to a variety of its services, ranging from Bitbucket Cloud and Pipelines to Jira and others. While it’s quite a grab-bag of announcements, the overall idea behind them is to make it easier for teams to collaborate across functions as companies adopt DevOps as their development practice of choice.

“I’ve seen a lot of these tech companies go through their agile and DevOps transformations over the years,” Tiffany To, the head of agile and DevOps solutions at Atlassian told me. “Everyone wants the benefits of DevOps, but — we know it — it gets complicated when we mix these teams together, we add all these tools. As we’ve talked with a lot of our users, for them to succeed in DevOps, they actually need a lot more than just the toolset. They have to enable the teams. And so that’s what a lot of these features are focused on.”

As To stressed, the company also worked with several ecosystem partners, for example, to extend the automation features in Jira Software Cloud, which can now also be triggered by commits and pull requests in GitHub, GitLab and other code repositories that are integrated into Jira Software Cloud. “Now you get these really nice integrations for DevOps where we are enabling these developers to not spend time updating the issues,” To noted.
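
For teams that want to wire up something similar themselves, here is a minimal sketch of the idea, not Atlassian's implementation: a small handler, called from a Git host's pull-request webhook, picks Jira issue keys out of the merged branch or title and moves those issues through a workflow transition via Jira Cloud's REST API. The site URL, credentials and transition ID are placeholders you would look up for your own instance.

# Hedged sketch: transition Jira Cloud issues when a pull request merges.
# JIRA_SITE, credentials and DONE_TRANSITION_ID are placeholders (assumptions).
import os
import re
import requests

JIRA_SITE = "https://your-domain.atlassian.net"        # placeholder
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])
DONE_TRANSITION_ID = "31"                               # varies per workflow

def issue_keys_from(text):
    """Extract issue keys such as PROJ-123 from a branch name or PR title."""
    return re.findall(r"[A-Z][A-Z0-9]+-\d+", text or "")

def transition_issue(key, transition_id=DONE_TRANSITION_ID):
    """Move one issue through the given workflow transition."""
    url = f"{JIRA_SITE}/rest/api/3/issue/{key}/transitions"
    resp = requests.post(url, json={"transition": {"id": transition_id}}, auth=AUTH)
    resp.raise_for_status()

def on_pull_request_merged(pr_title, branch):
    """Call this from the Git host's webhook handler after a merge event."""
    for key in set(issue_keys_from(pr_title) + issue_keys_from(branch)):
        transition_issue(key)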

Indeed, a lot of the announcements focus on integrations with third-party tools. This, To said, is meant to allow Atlassian to meet developers where they are. If your code editor of choice is VS Code, for example, you can now try Atlassian’s new VS Code extension, which brings your task list from Jira Software Cloud to the editor, as well as a code review experience and CI/CD tracking from Bitbucket Pipelines.

Also new is the “Your Work” dashboard in Bitbucket Cloud, which can now show you all of your assigned Jira issues, as well as Code Insights in Bitbucket Cloud. Code Insights features integrations with Mabl for test automation, Sentry for monitoring and Snyk for finding security vulnerabilities. These integrations were built on top of an open API, so teams can build their own integrations, too.
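
Because those reports sit behind a REST API, a custom integration can be as small as a single request. The sketch below publishes a hypothetical report against a commit; the endpoint shape and field names follow my recollection of the Code Insights reports API and should be treated as assumptions to verify against the current reference, and the workspace, repo, commit and credentials are placeholders.

# Hedged sketch: publish a custom Code Insights report to Bitbucket Cloud.
# Endpoint and fields are assumptions based on the public Code Insights API.
import requests

WORKSPACE, REPO, COMMIT = "my-team", "my-repo", "abc123def456"   # placeholders
REPORT_ID = "my-scanner"
URL = (f"https://api.bitbucket.org/2.0/repositories/"
       f"{WORKSPACE}/{REPO}/commit/{COMMIT}/reports/{REPORT_ID}")

report = {
    "title": "Example security scan",
    "report_type": "SECURITY",
    "details": "3 findings, 0 critical",
    "result": "PASSED",
}

resp = requests.put(URL, json=report, auth=("username", "app-password"))  # placeholders
resp.raise_for_status()
print("Report published:", resp.status_code)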

“There’s a really important trend to shift left. How do we remove the bugs and the security issues earlier in that dev cycle, because it costs more to fix it later,” said To. “You need to move that whole detection process much earlier in the software lifecycle.”

Jira Service Desk Cloud is getting a new Risk Management Engine that can score the risk of changes and auto-approve low-risk ones, as well as a new change management view to streamline the approval process.

Finally, there is a new Opsgenie and Bitbucket Cloud integration that centralizes alerts and promises to filter out the noise, as well as a nice incident investigation dashboard to help teams take a look at the last deployment that happened before the incident occurred.

“The reason why you need all these little features is that as you stitch together a very large number of tools […], there is just lots of these friction points,” said To. “And so there is this balance of, if you bought a single toolchain, all from one vendor, you would have fewer of these friction points, but then you don’t get to choose best of breed. Our mission is to enable you to pick the best tools because it’s not one-size-fits-all.”

May
28
2020
--

Mirantis releases its first major update to Docker Enterprise

In a surprise move, Mirantis acquired Docker’s Enterprise platform business at the end of last year, and while Docker itself is refocusing on developers, Mirantis kept the Docker Enterprise name and product. Today, Mirantis is rolling out its first major update to Docker Enterprise with the release of version 3.1.

For the most part, these updates are in line with what’s been happening in the container ecosystem in recent months. There’s support for Kubernetes 1.17 and improved support for Kubernetes on Windows (something the Kubernetes community has worked on quite a bit in the last year or so). Also new are Nvidia GPU integration in Docker Enterprise through a pre-installed device plugin, support for Istio Ingress for Kubernetes and a new command-line tool for deploying clusters with the Docker Engine.
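
To illustrate what GPU integration via a device plugin means in practice, here is a minimal sketch, assuming a cluster where the Nvidia device plugin is already installed: a pod requests a GPU through the "nvidia.com/gpu" extended resource using the official Kubernetes Python client. The image, pod name and namespace are placeholders, not anything specific to Docker Enterprise.

# Minimal sketch: ask the scheduler for one GPU exposed by the Nvidia device plugin.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:11.0-base",           # placeholder image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}         # one GPU from the device plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)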

In addition to the product updates, Mirantis is also launching three new support options that give customers 24×7 coverage for all support cases, for example, as well as enhanced SLAs for remote managed operations, designated customer success managers, and proactive monitoring and alerting. With this, Mirantis is clearly building on its experience as a managed service provider.

What’s maybe more interesting, though, is how this acquisition is playing out at Mirantis itself. Mirantis, after all, went through its fair share of ups and downs in recent years, from its high-flying OpenStack platform days to layoffs and everything in between.

“Why we do this in the first place and why at some point I absolutely felt that I wanted to do this is because I felt that this would be a more compelling and interesting company to build, despite maybe some of the short-term challenges along the way, and that very much turned out to be true. It’s been fantastic,” Mirantis CEO and co-founder Adrian Ionel told me. “What we’ve seen since the acquisition, first of all, is that the customer base has been dramatically more loyal than people had thought, including ourselves.”

Ionel admitted that he thought some users would defect because this is obviously a major change, at least from the customer’s point of view. “Of course we have done everything possible to have something for them that’s really compelling and we put out the new roadmap right away in December after the acquisition — and people bought into it at very large scale,” he said. With that, Mirantis retained more than 90% of the customer base and the vast majority of all of Docker Enterprise’s largest users.

Ionel, who almost seemed a bit surprised by this, noted that this helped the company turn in two “fantastic” quarters and turn a profit in the last quarter, despite COVID-19.

“We wanted to go into this acquisition with a sober assessment of risks because we wanted to make it work, we wanted to make it successful because we were well aware that a lot of acquisitions fail,” he explained. “We didn’t want to go into it with a hyper-optimistic approach in any way — and we didn’t — and maybe that’s one of the reasons why we are positively surprised.”

He argues that the reason for the current success is that enterprises are doubling down on their container journeys and that they actually love the Docker Enterprise platform for its infrastructure independence, developer focus, security features and ease of use. One thing many large customers asked for was better support for multi-cluster management at scale, which today’s update delivers.

“Where we stand today, we have one product development team. We have one product roadmap. We are shipping a very big new release of Docker Enterprise. […] The field has been completely unified and operates as one salesforce, with record results. So things have been extremely busy, but good and exciting.”

May
19
2020
--

Microsoft launches industry-specific cloud solutions, starting with healthcare

Microsoft today announced the launch of the Microsoft Cloud for Healthcare, an industry-specific cloud solution for healthcare providers. This is the first in what is likely going to be a set of cloud offerings that target specific verticals and extends a trend we’ve seen among large cloud providers (especially Google) that tailor specific offerings to the needs of individual industries.

“More than ever, being connected is critical to create an individualized patient experience,” writes Tom McGuinness, corporate vice president, Worldwide Health at Microsoft, and Dr. Greg Moore, corporate vice president, Microsoft Health, in today’s announcement. “The Microsoft Cloud for Healthcare helps healthcare organizations to engage in more proactive ways with their patients, allows caregivers to improve the efficiency of their workflows and streamline interactions with patients with more actionable results.”

Like similar Microsoft-branded offerings from the company, Cloud for Healthcare is about bringing together a set of capabilities that already exist inside of Microsoft. In this case, that includes Microsoft 365, Dynamics, Power Platform and Azure, including Azure IoT for monitoring patients. The solution sits on top of a common data model that makes it easier to share data between applications and analyze the data they gather.

“By providing the right information at the right time, the Microsoft Cloud for Healthcare will help hospitals and care providers better manage the needs of patients and staff and make resource deployments more efficient,” Microsoft says in its press materials. “This solution also improves end-to-end security compliance and accessibility of data, driving better operational outcomes.”

Since Microsoft never passes up a chance to talk up Teams, the company also notes that its communications service will allow healthcare workers to communicate with each other more efficiently, and that Teams now includes a Bookings app to help its users — including healthcare providers — schedule, manage and conduct virtual visits in Teams. Some of the healthcare systems that are already using Teams include St. Luke’s University Health Network, Stony Brook Medicine, Confluent Health and Calderdale & Huddersfield NHS Foundation Trust in the U.K.

In addition to Microsoft’s own tools, the company is also working with its large partner ecosystem to provide healthcare providers with specialized services. These include the likes of Epic, Allscripts, GE Healthcare, Adaptive Biotechnologies and Nuance.

May
19
2020
--

Microsoft launches Azure Synapse Link to help enterprises get faster insights from their data

At its Build developer conference, Microsoft today announced Azure Synapse Link, a new enterprise service that allows businesses to analyze their data faster and more efficiently, using an approach that’s generally called “hybrid transaction/analytical processing” (HTAP). That’s a mouthful; it essentially means enterprises can run analytical and transactional workloads on a single system. Traditionally, enterprises had to either build a single system for both, which was often highly over-provisioned, or maintain separate systems for transactional and analytics workloads.

Last year, at its Ignite conference, Microsoft announced Azure Synapse Analytics, an analytics service that combines analytics and data warehousing to create what the company calls “the next evolution of Azure SQL Data Warehouse.” Synapse Analytics brings together data from Microsoft’s services and those from its partners and makes it easier to analyze.

“One of the key things, as we work with our customers on their digital transformation journey, there is an aspect of being data-driven, of being insights-driven as a culture, and a key part of that really is that once you decide there is some amount of information or insights that you need, how quickly are you able to get to that? For us, time to insight and a secondary element, which is the cost it takes, the effort it takes to build these pipelines and maintain them with an end-to-end analytics solution, was a key metric we have been observing for multiple years from our largest enterprise customers,” said Rohan Kumar, Microsoft’s corporate VP for Azure Data.

Synapse Link takes the work Microsoft did on Synapse Analytics a step further by removing the barriers between Azure’s operational databases and Synapse Analytics, so enterprises can immediately get value from the data in those databases without going through a data warehouse first.

“What we are announcing with Synapse Link is the next major step in the same vision that we had around reducing the time to insight,” explained Kumar. “And in this particular case, a long-standing barrier that exists today between operational databases and analytics systems is these complex ETL (extract, transform, load) pipelines that need to be set up just so you can do basic operational reporting or where, in a very transactionally consistent way, you need to move data from your operational system to the analytics system, because you don’t want to impact the performance of the operational system in any way because that’s typically dealing with, depending on the system, millions of transactions per second.”

ETL pipelines, Kumar argued, are typically expensive and hard to build and maintain, yet enterprises are now building new apps — and maybe even line of business mobile apps — where any action that consumers take and that is registered in the operational database is immediately available for predictive analytics, for example.

From the user perspective, enabling this only takes a single click to link the two, while it removes the need for managing additional data pipelines or database resources. That, Kumar said, was always the main goal for Synapse Link. “With a single click, you should be able to enable real-time analytics on your operational data in ways that don’t have any impact on your operational systems, so you’re not using the compute part of your operational system to do the query, you actually have to transform the data into a columnar format, which is more adaptable for analytics, and that’s really what we achieved with Synapse Link.”

Because traditional HTAP systems on-premises typically share their compute resources with the operational database, those systems never quite took off, Kumar argued. In the cloud, with Synapse Link, though, that impact doesn’t exist because you’re dealing with two separate systems. Now, once a transaction gets committed to the operational database, the Synapse Link system transforms the data into a columnar format that is more optimized for the analytics system — and it does so in real time.

For now, Synapse Link is only available in conjunction with Microsoft’s Cosmos DB database. As Kumar told me, that’s because that’s where the company saw the highest demand for this kind of service, but you can expect the company to add support for Azure SQL, Azure Database for PostgreSQL and Azure Database for MySQL in the future.
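
As a rough illustration of the Cosmos DB side of this, the sketch below creates a container with the analytical (columnar) store enabled, which is the copy of the data that Synapse Link reads without touching the transactional workload. The parameter names follow the azure-cosmos Python SDK as I recall it and should be checked against current docs; the account endpoint, key and container names are placeholders.

# Hedged sketch: enable the analytical store on a Cosmos DB container.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(url="https://your-account.documents.azure.com:443/",
                      credential="your-account-key")            # placeholders
database = client.create_database_if_not_exists("retail")

orders = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    analytical_storage_ttl=-1,   # -1 = keep the columnar copy indefinitely (assumed parameter name)
)
# Synapse Analytics can then query the container's analytical store without
# consuming the request units that serve the operational traffic.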

May
14
2020
--

Google makes it easier to migrate VMware environments to its cloud

Google Cloud today announced the next step in its partnership with VMware: the Google Cloud VMware Engine. This fully managed service provides businesses with a full VMware Cloud Foundation stack on Google Cloud to help businesses easily migrate their existing VMware-based environments to Google’s infrastructure. Cloud Foundation is VMware’s stack for hybrid and private cloud deployments.

Given Google Cloud’s focus on enterprise customers, it’s no surprise that the company continues to bet on partnerships with the likes of VMware to attract more of these companies’ workloads. Less than a year ago, Google announced that VMware Cloud Foundation would come to Google Cloud and that it would start supporting VMware workloads. Then, last November, Google Cloud acquired CloudSimple, a company that specialized in running VMware environments and that Google had already partnered with for its original VMware deployments. The company describes today’s announcement as the third step in this journey.

VMware Engine provides users with all of the standard Cloud Foundation components: vSphere, vCenter, vSAN, NSX-T and HCX. With this, Google Cloud General Manager June Yang notes in today’s announcement, businesses can quickly stand up their own software-defined data center in the Google Cloud.

“Google Cloud VMware Engine is designed to minimize your operational burden, so you can focus on your business,” she notes. “We take care of the lifecycle of the VMware software stack and manage all related infrastructure and upgrades. Customers can continue to leverage IT management tools and third-party services consistent with their on-premises environment.”

Google is also working with third-party providers like NetApp, Veeam, Zerto, Cohesity and Dell Technologies to ensure that their solutions work on Google’s platform, too.

“As customers look to simplify their cloud migration journey, we’re committed to build cloud services to help customers benefit from the increased agility and efficiency of running VMware workloads on Google Cloud,” said Bob Black, Dell Technologies Global Lead Alliance Principal at Deloitte Consulting. “By combining Google Cloud’s technology and Deloitte’s business transformation experience, we can enable our joint customers to accelerate their cloud migration, unify operations, and benefit from innovative Google Cloud services as they look to modernize applications.”

May
12
2020
--

Microsoft partners with Redis Labs to improve its Azure Cache for Redis

For a few years now, Microsoft has offered Azure Cache for Redis, a fully managed caching solution built on top of the open-source Redis project. Today, it is expanding this service by adding Redis Enterprise, Redis Labs’ commercial offering, to its platform. It’s doing so in partnership with Redis Labs, and while Microsoft will offer some basic support for the service, Redis Labs will handle most of the software support itself.

Julia Liuson, Microsoft’s corporate VP of its developer tools division, told me that the company wants to be seen as a partner to open-source companies like Redis Labs, which was among the first companies to change its license to prevent cloud vendors from commercializing and repackaging its free code without contributing back to the community. Last year, Redis Labs partnered with Google Cloud to bring its own fully managed service to that platform, so maybe it’s no surprise that we are now seeing Microsoft make a similar move.

Liuson tells me that with this new tier for Azure Cache for Redis, users will get a single bill and native Azure management, as well as the option to deploy natively on SSD flash storage. The native Azure integration should also make it easier for developers on Azure to integrate Redis Enterprise into their applications.

It’s also worth noting that Microsoft will support Redis Labs’ own Redis modules, including RediSearch, a Redis-powered search engine, as well as RedisBloom and RedisTimeSeries, which provide support for new datatypes in Redis.
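
To give a sense of what those modules add, here is an illustrative sketch that sends module commands as raw Redis commands with redis-py, so no module-specific client library is required. It assumes a cache endpoint with the modules enabled; the hostname, access key, key names and values are all placeholders.

# Hedged sketch: RedisTimeSeries and RedisBloom commands sent as raw commands.
import redis

r = redis.Redis(host="your-cache.redis.cache.windows.net", port=6380,
                password="your-access-key", ssl=True)           # placeholders

# RedisTimeSeries: append a sample ("*" = server timestamp) and read the whole series back.
r.execute_command("TS.ADD", "temperature:device42", "*", 21.5)
samples = r.execute_command("TS.RANGE", "temperature:device42", "-", "+")

# RedisBloom: probabilistic membership checks without storing every member.
r.execute_command("BF.ADD", "seen:urls", "https://example.com/a")
maybe_seen = r.execute_command("BF.EXISTS", "seen:urls", "https://example.com/a")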

“For years, developers have utilized the speed and throughput of Redis to produce unbeatable responsiveness and scale in their applications,” says Liuson. “We’ve seen tremendous adoption of Azure Cache for Redis, our managed solution built on open source Redis, as Azure customers have leveraged Redis performance as a distributed cache, session store, and message broker. The incorporation of the Redis Labs Redis Enterprise technology extends the range of use cases in which developers can utilize Redis, while providing enhanced operational resiliency and security.”

May
06
2020
--

GitHub gets a built-in IDE with Codespaces, discussion forums and more

Under different circumstances, GitHub would be hosting its Satellite conference in Paris this week. Like so many other events, GitHub decided to switch Satellite to a virtual event, but that isn’t stopping the Microsoft-owned company from announcing quite a bit of news this week.

The highlight of GitHub’s announcement is surely the launch of GitHub Codespaces, which gives developers a full cloud-hosted development environment based on Microsoft’s VS Code editor. If that name sounds familiar, that’s likely because Microsoft itself rebranded Visual Studio Code Online to Visual Studio Codespaces a week ago — and GitHub is essentially taking the same concepts and technology and is now integrating it directly inside its service. If you’ve seen VS Online/Codespaces before, the GitHub environment will look very similar.

“Contributing code to a community can be hard. Every repository has its own way of configuring a dev environment, which often requires dozens of steps before you can write any code,” writes Shanku Niyogi, GitHub’s SVP of Product, in today’s announcement. “Even worse, sometimes the environment of two projects you are working on conflict with one another. GitHub Codespaces gives you a fully-featured cloud-hosted dev environment that spins up in seconds, directly within GitHub, so you can start contributing to a project right away.”

Currently, GitHub Codespaces is in beta and available for free. The company hasn’t set any pricing for the service once it goes live, but Niyogi says the pricing will look similar to that of GitHub Actions, where it charges for computationally intensive tasks like builds. Microsoft currently charges VS Codespaces users by the hour, depending on the kind of virtual machine they are using.

The other major new feature the company is announcing today is GitHub Discussions. These are essentially discussion forums for a given project. While GitHub already allowed for some degree of conversation around code through issues and pull requests, Discussions are meant to enable unstructured threaded conversations. They also lend themselves to Q&As, and GitHub notes that they can be a good place for maintaining FAQs and other documents.

Currently, Discussions are in beta for open-source communities and will be available for other projects soon.

On the security front, GitHub is also announcing two new features: code scanning and secret scanning. Code scanning checks your code for potential security vulnerabilities. It’s powered by CodeQL and free for open-source projects. Secret scanning is now available for private repositories (a similar feature has been available for public projects since 2018). Both of these features are part of GitHub Advanced Security.
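
For teams that want to pull those findings into their own tooling rather than triage them in the UI, a hedged sketch of listing open code scanning alerts over GitHub's REST API might look like the following; the endpoint follows the code scanning alerts API as I recall it, and the owner, repository and token are placeholders.

# Hedged sketch: list open code scanning alerts for a repository.
import os
import requests

OWNER, REPO = "your-org", "your-repo"                            # placeholders
headers = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

url = f"https://api.github.com/repos/{OWNER}/{REPO}/code-scanning/alerts"
alerts = requests.get(url, headers=headers, params={"state": "open"}).json()

for alert in alerts:
    rule = alert.get("rule", {})
    print(alert.get("number"), rule.get("severity"), rule.get("description"))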

As for GitHub’s enterprise customers, the company today announced the launch of Private Instances, a new fully managed service for enterprise customers that want to use GitHub in the cloud while knowing that their code is fully isolated from the rest of the company’s users. “Private Instances provides enhanced security, compliance, and policy features including bring-your-own-key encryption, backup archiving, and compliance with regional data sovereignty requirements,” GitHub explains in today’s announcement.

May
04
2020
--

IBM and Red Hat expand their telco, edge and AI enterprise offerings

At its Think Digital conference, IBM and Red Hat today announced a number of new services that all center around 5G edge and AI. The fact that the companies are focusing on these two areas doesn’t come as a surprise, given that edge and AI are two of the fastest-growing businesses in enterprise computing. Virtually every telecom company is now looking at how to best capitalize on the upcoming 5G rollouts, and most forward-looking enterprises are trying to figure out how to best plan around this for their own needs.

As IBM’s recently minted president Jim Whitehurst told me ahead of today’s announcement, he believes that IBM (in combination with Red Hat) is able to offer enterprises a very differentiated service because, unlike the large hyperscale clouds, IBM isn’t interested in locking these companies into a homogeneous cloud.

“Where IBM is competitively differentiated, is around how we think about helping clients on a journey to what we call hybrid cloud,” said Whitehurst, who hasn’t done a lot of media interviews since he took the new role, which still includes managing Red Hat. “Honestly, everybody has hybrid clouds. I wish we had a more differentiated term. One of the things that’s different is how we’re talking about how you think about an application portfolio that, by necessity, you’re going to have in multiple ways. If you’re a large enterprise, you probably have a mainframe running a set of transactional workloads that probably are going to stay there for a long time because there’s not a great alternative. And there’s going to be a set of applications you’re going to want to run in a distributed environment that need to access that data — all the way out to you running a factory floor and you want to make sure that the paint sprayer doesn’t have any defects while it’s painting a door.”

He argues that IBM, at its core, is all about helping enterprises think about how to best run their workloads from a software, hardware and services perspective. “Public clouds are phenomenal, but they are exposing a set of services in a homogeneous way to enterprises,” he noted, while arguing that IBM is trying to weave all of these different pieces together.

Later in our discussion, he argued that the large public clouds essentially force enterprises to fit their workloads to those clouds’ services. “The public clouds do extraordinary things and they’re great partners of ours, but their primary business is creating these homogeneous services, at massive volumes, and saying ‘if your workloads fit into this, we can run it better, faster, cheaper etc.’ And they have obviously expanded out. They’ve added services. They are now saying we can put a box on-premise, but you’re still fitting into their model.”

On the news side, IBM is launching new services to automate business planning, budgeting and forecasting, for example, as well as new AI-driven tools for building and running automation apps that can handle routine tasks either autonomously or with the help of a human counterpart. The company is also launching new tools for call-center automation.

The most important AI announcement is surely Watson AIOps, though, which is meant to help enterprises detect, diagnose and respond to IT anomalies in order to reduce the effects of incidents and outages for a company.

On the telco side, IBM is launching new tools like the Edge Application Manager, for example, to make it easier to enable AI, analytics and IoT workloads on the edge, powered by IBM’s open-source Open Horizon edge computing project. The company is also launching a new Telco Network Cloud manager built on top of Red Hat OpenShift, with the ability to also leverage the Red Hat OpenStack Platform (which remains an important platform for telcos and represents a growing business for IBM/Red Hat). In addition, IBM is launching a new dedicated IBM Services team for edge computing and telco cloud to help these customers build out their 5G and edge-enabled solutions.

Telcos are also betting big on a lot of different open-source technologies that often form the core of their 5G and edge deployments. Red Hat was already a major player in this space, but the acquisition has only accelerated this, Whitehurst argued. “Since the acquisition […] telcos have a lot more confidence in IBM’s capabilities to serve them long term and be able to serve them in mission-critical context. But importantly, IBM also has the capability to actually make it real now.”

A lot of the new telco edge and hybrid cloud deployments, he also noted, are built on Red Hat technologies but built by IBM, and neither IBM nor Red Hat could have really brought these to fruition in the same way. Red Hat never had the size, breadth and skills to pull off some of these projects, Whitehurst argued.

Whitehurst also argued that part of the Red Hat DNA that he’s bringing to the table now is helping IBM to think more in terms of ecosystems. “The DNA that I think matters a lot that Red Hat brings to the table with IBM — and I think IBM is adopting and we’re running with it — is the importance of ecosystems,” he said. “All of Red Hat’s software is open source. And so really, what you’re bringing to the table is ecosystems.”

It’s maybe no surprise then that the telco initiatives are backed by partners like Cisco, Dell Technologies, Juniper, Intel, Nvidia, Samsung, Packet, Equinix, Hazelcast, Sysdig, Turbonomics, Portworx, Humio, Indra Minsait, EuroTech, Arrow, ADLINK, Acromove, Geniatech, SmartCone, CloudHedge, Altiostar, Metaswitch, F5 Networks and ADVA.

In many ways, Red Hat pioneered the open-source business model and Whitehurst argued that having Red Hat as part of the IBM family means it’s now easier for the company to make the decision to invest even more in open source. “As we accelerate into this hybrid cloud world, we’re going to do our best to leverage open-source technologies to make them real,” he added.

May
04
2020
--

Nvidia acquires Cumulus Networks

Nvidia today announced its plans to acquire Cumulus Networks, an open-source-centric company that specializes in helping enterprises optimize their data center networking stack. Cumulus offers both its own Linux distribution for network switches, as well as tools for managing network operations. With Cumulus Express, the company also offers a hardware solution in the form of its own data center switch.

The two companies did not announce the price of the acquisition, but chances are we are talking about a considerable amount, given that Cumulus had raised $134 million since it was founded in 2010.

Mountain View-based Cumulus already had a previous partnership with Mellanox, which Nvidia acquired for $6.9 billion. That acquisition closed only a few days ago. As Mellanox’s Amit Katz notes in today’s announcement, the two companies first met in 2013, and they formed a first official partnership in 2016. Cumulus, it’s worth noting, was also an early player in the OpenStack ecosystem.

Having both Cumulus and Mellanox in its stable will give Nvidia virtually all the tools it needs to help enterprises and cloud providers build out their high-performance computing and AI workloads in their data centers. While you may mostly think about Nvidia because of its graphics cards, the company has a sizable data center group, which delivered close to $1 billion in revenue in the last quarter, up 43% from a year ago. In comparison, Nvidia’s revenue from gaming was just under $1.5 billion.

“With Cumulus, NVIDIA can innovate and optimize across the entire networking stack from chips and systems to software including analytics like Cumulus NetQ, delivering great performance and value to customers,” writes Katz. “This open networking platform is extensible and allows enterprise and cloud-scale data centers full control over their operations.”

May
04
2020
--

Decrypted: Chegg’s third time unlucky, Okta’s new CSO, Rapid7 beefs up cloud security

Ransomware is getting sneakier and smarter.

The latest example comes from ExecuPharm, a little-known but major outsourced pharmaceutical company that confirmed it was hit by a new type of ransomware last month. The incursion not only encrypted the company’s network and files; the hackers also exfiltrated vast amounts of data from the network. The company was handed a two-for-one threat: pay the ransom and get your files back, or don’t pay and the hackers will post the files to the internet.

This new tactic is shifting how organizations think of ransomware attacks: it’s no longer just a data-recovery mission; it’s also now a data breach. Now companies are torn between taking the FBI’s advice not to pay the ransom and the fear that their intellectual property (or other sensitive internal files) will be published online.

Because millions are now working from home, the surface area for attackers to get in is far greater than it was, making the threat of ransomware higher than ever before.

That’s just one of the stories from the week. Here’s what else you need to know.


THE BIG PICTURE

Chegg hacked for the third time in three years

Education giant Chegg confirmed its third data breach in as many years. The latest break-in affected past and present staff after a hacker made off with 700 names and Social Security numbers. It’s a drop in the ocean when compared to the 40 million records stolen in 2018 and an undisclosed number of passwords taken in a breach at Thinkful, which Chegg had just acquired in 2019.

Those 700 names account for about half of its 1,400 full-time employees, per a filing with the Securities and Exchange Commission. But Chegg’s refusal to disclose further details about the breach — beyond a state-mandated notice to the California attorney general’s office — makes it tough to know exactly what went wrong this time.
