May
08
2021
--

When the Earth is gone, at least the internet will still be working

The internet is now our nervous system. We are constantly streaming and buying and watching and liking, our brains locked into the global information matrix as one universal and coruscating emanation of thought and emotion.

What happens when the machine stops though?

It’s a question that E.M. Forster was intensely focused on more than a century ago in a short story called, rightly enough, “The Machine Stops,” about a human civilization connected entirely through machines that one day just turn off.

Those fears of downtime are not just science fiction anymore. Outages no longer mean just missing a must-watch TikTok clip. Hospitals, law enforcement, the government, every corporation — the entire spectrum of human institutions that constitutes civilization now deeply relies on connectivity to function.

So when it comes to disaster response, the world has dramatically changed. In decades past, the singular focus could be roughly summarized as rescue and mitigation — save who you can while trying to limit the scale of destruction. Today though, the highest priority is by necessity internet access, not just for citizens, but increasingly for the on-the-ground first responders who need bandwidth to protect themselves, keep abreast of their mission objectives, and have real-time ground truth on where dangers lurk and where help is needed.

While the sales cycles might be arduous, as we learned in part one, and the data trickles have finally turned to streams, as we saw in part two, the reality is that none of it matters if there isn’t connectivity to begin with. So in part three of this series on the future of technology and disaster response, we’re going to analyze the changing nature of bandwidth and connectivity and how they intersect with emergencies: how telcos are building resilience into their networks while defending against climate change, how first responders are integrating connectivity into their operations, and how new technologies like 5G and satellite internet will affect these critical activities.

Wireless resilience as the world burns

Climate change is inducing more intense weather patterns all around the world, creating second- and third-order effects for industries that rely on environmental stability for operations. Few industries have to be as dynamic to the changing context as telecom companies, whose wired and wireless infrastructure is regularly buffeted by severe storms. Resiliency of these networks isn’t just needed for consumers — it’s absolutely necessary for the very responders trying to mitigate disasters and get the network back up in the first place.

Unsurprisingly, no issue looms larger for telcos than access to power — no juice, no bars. So all three of America’s major telcos — Verizon (which owns TechCrunch’s parent company Verizon Media, although not for much longer), AT&T and T-Mobile — have had to dramatically scale up their resiliency efforts in recent years to compensate both for the demand for wireless and the growing damage wrought by weather.

Jay Naillon, senior director of national technology service operations strategy at T-Mobile, said that the company has made resilience a key part of its network buildout in recent years, with investments in generators at cell towers that can be relied upon when the grid cannot. In “areas that have been hit by hurricanes or places that have fragile grids … that is where we have invested most of our fixed assets,” he said.

Like all three telcos, T-Mobile pre-deploys equipment in anticipation of disruptions. So when a hurricane begins to swirl in the Atlantic Ocean, the company will strategically fly in portable generators and mobile cell towers ahead of potential outages. “We look at storm forecasts for the year,” Naillon explained, and do “lots of preventative planning.” The company also works with emergency managers and will “run through various drills with them and respond and collaborate effectively with them” to determine which parts of the network are most at risk of damage in an emergency. Last year, it partnered with StormGeo to more accurately predict weather events.

Predictive AI for disasters is also a critical need for AT&T. Jason Porter, who leads public sector and the company’s FirstNet first-responder network, said that AT&T teamed up with Argonne National Laboratory to create a climate-change analysis tool that evaluates the siting of its cell towers and how they will weather the next 30 years of “floods, hurricanes, droughts and wildfires.” “We redesigned our buildout … based on what our algorithms told us would come,” he said. The company has also been elevating vulnerable cell towers four to eight feet on “stilts” to improve their resilience to at least some weather events, which, he said, “gave ourselves some additional buffer.”

AT&T has also had to manage the growing complexity of creating reliability amid the chaos of a climate-change-induced world. In recent years, “we quickly realized that many of our deployments were due to weather-related events,” and the company has been “very focused on expanding our generator coverage over the past few years,” Porter said. It has also been building out its portable infrastructure. “We essentially deploy entire data centers on trucks so that we can stand up essentially a central office,” he said, emphasizing that the company’s national disaster recovery team responded to thousands of events last year.

Particularly on its FirstNet service, AT&T has pioneered two new technologies to try to get bandwidth to disaster-hit regions faster. First, it has invested in drones to offer wireless services from the sky. After Hurricane Laura hit Louisiana last year with record-setting winds, AT&T’s “cell towers were twisted up like recycled aluminum cans … so we needed to deploy a sustainable solution,” Porter said. So the company deployed what it dubs the FirstNet One — a “dirigible” that “can cover twice the cell coverage range of a cell tower on a truck, and it can stay up for literally weeks, refuel in less than an hour and go back up — so long-term, sustainable coverage,” he said.

AT&T’s FirstNet One dirigible to offer internet access from the air for first responders. Image Credits: AT&T/FirstNet

Secondly, the company has been building out what it calls FirstNet MegaRange — a set of high-powered wireless equipment that it announced earlier this year that can deploy signals from miles away, say from a ship moored off a coast, to deliver reliable connectivity to first responders in the hardest-hit disaster zones.

As the internet has absorbed more of daily life, the norms for network resilience have become ever more exacting. Small outages can disrupt not just a first responder, but a child taking virtual classes and a doctor conducting remote surgery. From fixed and portable generators to rapid-deployment mobile cell towers and dirigibles, telcos are investing major resources to keep their networks running continuously.

Yet these initiatives are ultimately costs borne by telcos confronting a world that is burning up. Across conversations with all three telcos and others in the disaster response space, there was a general sense that utilities increasingly have to insulate themselves in a climate-changed world. For instance, cell towers need their own generators because — as we saw with Texas earlier this year — even the power grid itself can’t be guaranteed to be there. Critical applications need offline capabilities, since internet outages can’t always be prevented. The machine runs, but the machine stops, too.

The trend lines on the frontlines are data lines

While we may rely on connectivity in our daily lives as consumers, disaster responders have been much more hesitant to fully transition to connected services. It is precisely in the middle of a tornado, when the cell tower is down, that you realize a printed map might have been nice to have. Paper, pens, compasses — the old staples of survival flicks remain just as important in the field today as they were decades ago.

Yet, the power of software and connectivity to improve emergency response has forced a rethinking of field communications and how deeply technology is integrated on the ground. Data from the frontlines is extremely useful, and if it can be transmitted, dramatically improves the ability of operations planners to respond safely and efficiently.

Both AT&T and Verizon have made large investments in directly servicing the unique needs of the first responder community, with AT&T in particular gaining prominence with its FirstNet network, which it exclusively operates through a public-private partnership with the Department of Commerce’s First Responder Network Authority. The government offered a special spectrum license to the FirstNet authority in Band 14 in exchange for the buildout of a responder-exclusive network, a key recommendation of the 9/11 Commission, which found that first responders couldn’t communicate with each other on the day of those deadly terrorist attacks. Now, Porter of AT&T says that the company’s buildout is “90% complete” and is approaching 3 million square miles of coverage.

Why so much attention on first responders? The telcos are investing here because, in many ways, first responders are on the frontiers of technology. They need edge computing, rapid AI/ML decision-making, the bandwidth and latency of 5G (which we will get to in a bit) and high reliability — and, in general, they are fairly profitable customers to boot. In other words, what first responders need today is what consumers in general are going to want tomorrow.

Cory Davis, director of public safety strategy and crisis response at Verizon, explained that “more than ever, first responders are relying on technology to go out there and save lives.” His counterpart, Nick Nilan, who leads product management for the public sector, said that “when we became Verizon, it was really about voice [and] what’s changed over the last five [years] is the importance of data.” He pointed to tools for situational awareness, mapping and more that are becoming standard in the field. Everything first responders do “comes back to the network — do you have the coverage where you need it, do you have the network access when something happens?”

The challenge for the telcos is that we all want access to that network when catastrophe strikes, which is precisely when network resources are most scarce. The first responder trying to communicate with their team on the ground or their operations center is inevitably competing with a citizen letting friends know they are safe — or perhaps just watching the latest episode of a TV show in their vehicle as they are fleeing the evacuation zone.

That competition is the argument for a completely segmented network like FirstNet, which has its own dedicated spectrum with devices that can only be used by first responders. “With remote learning, remote work and general congestion,” Porter said, telcos and other bandwidth providers were overwhelmed with consumer demand. “Thankfully we saw through FirstNet … clearing that 20 MHz of spectrum for first responders” helped keep the lines clear for high-priority communications.

FirstNet’s big emphasis is on its dedicated spectrum, but that’s just one component of a larger strategy to give first responders always-on, ready access to wireless services. AT&T and Verizon have made prioritization and preemption key operational components of their networks in recent years. Prioritization gives public safety users better access to the network, while preemption can go as far as actively kicking lower-priority consumers off the network to ensure first responders have immediate access.

Nilan of Verizon said, “The network is built for everybody … but once we start thinking about who absolutely needs access to the network at a period of time, we prioritize our first responders.” Verizon has prioritization, preemption, and now virtual segmentation — “we separate their traffic from consumer traffic” so that first responders don’t have to compete if bandwidth is limited in the middle of a disaster. He noted that all three approaches have been enabled since 2018, and Verizon’s suite of bandwidth and software for first responders comes under the newly christened Verizon Frontline brand that launched in March.
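
To make those three mechanisms concrete, here is a minimal sketch of priority admission with preemption on a congested cell. It illustrates only the general idea — the capacity figure and priority scheme are hypothetical assumptions, not any carrier’s actual implementation:

```python
# Hypothetical sketch of prioritization and preemption on a congested cell.
# The capacity figure and priority scheme are illustrative assumptions,
# not any carrier's actual logic.

CELL_CAPACITY = 100  # concurrent sessions this cell can serve

class Session:
    def __init__(self, user_id: str, priority: int):
        self.user_id = user_id
        self.priority = priority  # 0 = first responder, 1 = consumer

active: list[Session] = []

def admit(session: Session) -> bool:
    if len(active) < CELL_CAPACITY:
        active.append(session)  # capacity available: admit anyone
        return True
    if session.priority == 0:
        # Preemption: evict a lower-priority consumer session
        # to make immediate room for a first responder.
        consumers = [s for s in active if s.priority > 0]
        if consumers:
            active.remove(consumers[-1])
            active.append(session)
            return True
    return False  # cell is full and nothing is preemptable
```

Virtual segmentation goes one step further than this sketch: rather than having the two classes contend for a single pool at all, responder traffic is carved into its own slice of the network.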

With increased bandwidth reliability, first responders are connected in ways that would have been unfathomable even a decade ago. Tablets, sensors, connected devices and tools — equipment that would have been manual is now increasingly digital.

That opens up a wealth of possibilities now that the infrastructure is established. My interview subjects suggested applications as diverse as the decentralized coordination of response team movements through GPS and 5G; real-time maps that offer up-to-date risk analysis of how a disaster might progress; pathfinding for evacuees that updates as routes fluctuate; AI damage assessments even before the recovery process begins; and much, much more. Many of those possibilities, which in the past were only marketing-speak and technical promises, may finally be realized in the coming years.

Five, Gee

We’ve been hearing about 5G for years now — and even 6G every once in a while, just to give reporters heart attacks — but what does 5G actually mean in the context of disaster response? After years of speculation, we are finally starting to get answers.

Naillon of T-Mobile noted that the biggest benefit of 5G is that it “allows us to have greater coverage” particularly given the low-band spectrum that the standard partially uses. That said, “As far as applications — we are not really there at that point from an emergency response perspective,” he said.

Meanwhile, Porter of AT&T said that “the beauty of 5G that we have seen there is less about the speed and more about the latency.” Consumers have often seen marketing around voluminous bandwidth, but in the first-responder world, latency and edge computing tend to be the most desirable features. For instance, devices can relay video to each other on the frontlines without necessarily needing a backhaul to the main wireless network. On-board processing of image data could allow for rapid decision-making in environments where seconds can be vital to the success of a mission.
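
A rough latency budget illustrates why. The figures below are assumptions chosen for the arithmetic, not measured numbers from AT&T or anyone else:

```python
# Illustrative latency budget for analyzing one video frame in the field.
# All figures are assumptions for the sake of the arithmetic.

INFERENCE_MS = 30  # time to run the detection model, wherever it lives

# Backhaul path: device -> core network -> distant cloud -> back.
cloud_round_trip_ms = 60
cloud_total = cloud_round_trip_ms + INFERENCE_MS  # 90 ms per frame

# Edge path: device -> nearby 5G edge node -> back.
edge_round_trip_ms = 10
edge_total = edge_round_trip_ms + INFERENCE_MS    # 40 ms per frame

print(f"cloud: {cloud_total} ms/frame (~{1000 // cloud_total} decisions/sec)")
print(f"edge:  {edge_total} ms/frame (~{1000 // edge_total} decisions/sec)")
```

Cutting the per-frame budget by more than half roughly doubles how often a device can act on what it sees — the kind of margin that matters when seconds are vital.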

That flexibility is allowing for many new applications in disaster response, and “we are seeing some amazing use cases coming out of our 5G deployments [and] we have launched some of our pilots with the [Department of Defense],” Porter said. He offered an example of “robotic dogs to go and do bomb dismantling or inspecting and recovery.”

Verizon has made innovating on new applications a strategic goal, launching a 5G First Responders Lab dedicated to guiding a new generation of startups to build at this crossroads. Nilan of Verizon said that the incubator has had more than 20 companies across four different cohorts, working on everything from virtual reality training environments to AR applications that allow firefighters to “see through walls.” His colleague Davis said that “artificial intelligence is going to continue to get better and better and better.”

Blueforce is a company that went through the first cohort of the Lab. The company uses 5G to connect sensors and devices together to allow first responders to make the best decisions they can with the most up-to-date data. Michael Helfrich, founder and CEO, said that “because of these new networks … commanders are able to leave the vehicle and go into the field and get the same fidelity” of information that they normally would have to be in a command center to receive. He noted that in addition to classic user interfaces, the company is exploring other ways of presenting information to responders. “They don’t have to look at a screen anymore, and [we’re] exploring different cognitive models like audio, vibration and heads-up displays.”

5G will offer many new ways to improve emergency responses, but that doesn’t mean that our current 4G networks will just disappear. Davis said that many sensors in the field don’t need the kind of latency or bandwidth that 5G offers. “LTE is going to be around for many, many more years,” he said, pointing to the hardware and applications taking advantage of LTE-M standards for Internet of Things (IoT) devices as a key development for the future here.

Michael Martin of emergency response data platform RapidSOS said that “it does feel like there is renewed energy to solve real problems” in the disaster response market, which he dubbed the “Elon Musk effect.” And that effect definitely exists when it comes to connectivity, where SpaceX’s satellite bandwidth project Starlink comes into play.


Satellite uplinks have historically had horrific latency and bandwidth constraints, making them difficult to use in disaster contexts. Depending on the type of disaster, they can also be astonishingly challenging to set up given conditions on the ground. Starlink promises to shatter all of those barriers — easier connections, fat pipes, low latencies and a global footprint that would be the envy of any first responder. Its network is still under active development, so it is difficult to foresee precisely what its impact will be on the disaster response market, but it’s an offering to watch closely in the years ahead: if its promises pan out, it has the potential to completely upend the way we respond to disasters this century.

Yet, even if we discount Starlink, the change coming this decade in emergency response represents a complete revolution. The depth and resilience of connectivity is changing the equation for first responders from complete reliance on antiquated tools to an embrace of the future of digital computing. The machine is no longer stoppable.

Oct
19
2020
--

The OpenStack Foundation becomes the Open Infrastructure Foundation

This has been a long time coming, but the OpenStack Foundation today announced that it is changing its name to “Open Infrastructure Foundation,” starting in 2021.

The announcement, which the foundation made at its virtual developer conference, doesn’t exactly come as a surprise. Over the course of the last few years, the organization started adding new projects that went well beyond the core OpenStack project, and renamed its conference to the “Open Infrastructure Summit.” The organization actually filed for the “Open Infrastructure Foundation” trademark back in April.

Image Credits: OpenStack Foundation

After years of hype, the open-source OpenStack project hit a bit of a wall in 2016, as the market started to consolidate. The project itself, which helps enterprises run their private cloud, found its niche in the telecom space, though, and continues to thrive as one of the world’s most active open-source projects. Indeed, I regularly hear from OpenStack vendors that they are now seeing record sales numbers — despite the lack of hype. With the project being stable, though, the Foundation started casting a wider net and added additional projects like the popular Kata Containers runtime and CI/CD platform Zuul.

“We are officially transitioning and becoming the Open Infrastructure Foundation,” longtime OpenStack Foundation executive director Jonathan Bryce told me. “That is something that I think is an awesome step that’s built on the success that our community has spawned both within projects like OpenStack, but also as a movement […], which is [about] how do you give people choice and control as they build out digital infrastructure? And that is, I think, an awesome mission to have. And that’s what we are recognizing and acknowledging and setting up for another decade of doing that together with our great community.”

In many ways, it’s been more of a surprise that the organization waited as long as it did. As the foundation’s COO Mark Collier told me, the team waited because it wanted to be sure that it did this right.

“We really just wanted to make sure that all the stuff we learned when we were building the OpenStack community and with the community — that started with a simple idea of ‘open source should be part of cloud, for infrastructure.’ That idea has just spawned so much more open source than we could have imagined. Of course, OpenStack itself has gotten bigger and more diverse than we could have imagined,” Collier said.

As part of today’s announcement, the group also announced that its board approved four new members at its Platinum tier, its highest membership level: Ant Group, the Alibaba affiliate behind Alipay, embedded systems specialist Wind River, China’s FiberHome (which was previously a Gold member) and Facebook Connectivity. These companies will join the new foundation in January. To become a Platinum member, companies must contribute $350,000 per year to the foundation and have at least two full-time employees contributing to its projects.

“If you look at those companies that we have as Platinum members, it’s a pretty broad set of organizations,” Bryce noted. “AT&T, the largest carrier in the world. And then you also have a company Ant, who’s the largest payment processor in the world and a massive financial services company overall — over to Ericsson, that does telco, Wind River, that does defense and manufacturing. And I think that speaks to that everybody needs infrastructure. If we build a community — and we successfully structure these communities to write software with a goal of getting all of that software out into production, I think that creates so much value for so many people: for an ecosystem of vendors and for a great group of users and a lot of developers love working in open source because we work with smart people from all over the world.”

The OpenStack Foundation’s existing members are also on board and Bryce and Collier hinted at several new members who will join soon but didn’t quite get everything in place for today’s announcement.

We can probably expect the new foundation to start adding new projects next year, but it’s worth noting that the OpenStack project continues apace. The latest of the project’s bi-annual releases, dubbed “Victoria,” launched last week, with additional Kubernetes integrations, improved support for various accelerators and more. Nothing will really change for the project now that the foundation is changing its name — though it may end up benefitting from a reenergized and more diverse community that will build out projects at its periphery.

Mar
31
2020
--

Microsoft launches Edge Zones for Azure

Microsoft today announced the launch of Azure Edge Zones, which will allow Azure users to bring their applications to the company’s edge locations. The focus here is on enabling real-time low-latency 5G applications. The company is also launching a version of Edge Zones with carriers (starting with AT&T) in preview, which connects these zones directly to 5G networks in the carrier’s data center. And to round it all out, Azure is also getting Private Edge Zones for those who are deploying private 5G/LTE networks in combination with Azure Stack Edge.

In addition to partnering with carriers like AT&T, as well as Rogers, SK Telecom, Telstra and Vodafone, Microsoft is also launching new standalone Azure Edge Zones in more than 10 cities over the next year, starting with LA, Miami and New York later this summer.

“For the last few decades, carriers and operators have pioneered how we connect with each other, laying the foundation for telephony and cellular,” the company notes in today’s announcement. “With cloud and 5G, there are new possibilities by combining cloud services, like compute and AI, with high bandwidth and ultra-low latency. Microsoft is partnering with them [to] bring 5G to life in immersive applications built by organization[s] and developers.”

This may all sound a bit familiar, and that’s because only a few weeks ago, Google launched Anthos for Telecom and its Global Mobile Edge Cloud, which at first glance offers a similar promise of bringing applications close to that cloud’s edge locations for 5G and telco usage. Microsoft argues that its offering is more comprehensive in terms of its partner ecosystem and geographic availability. But it’s clear that 5G is a trend all of the large cloud providers are trying to tap into. Microsoft’s own acquisition of 5G cloud specialist Affirmed Networks is yet another example of how it is looking to position itself in this market.

As far as the details of the various Edge Zone versions go, the focus of Edge Zones is mostly on IoT and AI workloads, while Microsoft notes that Edge Zones with Carriers is more about low-latency online gaming, remote meetings and events, as well as smart infrastructure. Private Edge Zones, which combine private carrier networks with Azure Stack Edge, are something only a small number of large enterprises would likely look into, given the cost and complexity of rolling out a system like this.


Mar
26
2020
--

Microsoft acquires 5G specialist Affirmed Networks

Microsoft today announced that it has acquired Affirmed Networks, a company that specializes in fully virtualized, cloud-native networking solutions for telecom operators.

With its focus on 5G and edge computing, Affirmed looks like the ideal acquisition target for a large cloud provider looking to get deeper into the telco business. According to Crunchbase, Affirmed raised a total of $155 million before this acquisition, and the company’s more than 100 enterprise customers include the likes of AT&T, Orange, Vodafone, Telus, Turkcell and STC.

“As we’ve seen with other technology transformations, we believe that software can play an important role in helping advance 5G and deliver new network solutions that offer step-change advancements in speed, cost and security,” writes Yousef Khalidi, Microsoft’s corporate vice president for Azure Networking. “There is a significant opportunity for both incumbents and new players across the industry to innovate, collaborate and create new markets, serving the networking and edge computing needs of our mutual customers.”

With its customer base, Affirmed gives Microsoft another entry point into the telecom industry. Previously, telcos would often build their own data centers and stuff them with costly proprietary hardware (and the software to manage it). But thanks to today’s virtualization technologies, the large cloud platforms can now offer the same capabilities and reliability without the same costs. And unsurprisingly, a new technology like 5G, with its promise of new and expanded markets, makes for a good moment to push forward with these new technologies.

Google recently made some moves in this direction with its Anthos for Telecom and Global Mobile Edge Cloud, too. Chances are we will see all of the large cloud providers continue to go after this market in the coming months.

In a somewhat odd move, only yesterday Affirmed announced a new CEO and president, Anand Krishnamurthy. It’s not often that we see these kinds of executive moves hours before a company announces its acquisition.

The announcement doesn’t feature a single hint at today’s news and includes all of the usual cliches we’ve come to expect from a press release that announces a new CEO. “We are thankful to Hassan for his vision and commitment in guiding the company through this extraordinary journey and positioning us for tremendous success in the future,” Krishnamurthy wrote at the time. “It is my honor to lead Affirmed as we continue to drive this incredible transformation in our industry.”

We asked Affirmed for some more background about this and will update this post if we hear more. Update: An Affirmed spokesperson told us that this was “part of a succession plan that had been determined previously. So it was not related [to] any specific event.”

Mar
03
2020
--

Datastax acquires The Last Pickle

Data management company Datastax, one of the largest contributors to the Apache Cassandra project, today announced that it has acquired The Last Pickle (and no, I don’t know what’s up with that name either), a New Zealand-based Cassandra consulting and services firm that’s behind a number of popular open-source tools for the distributed NoSQL database.

As Datastax Chief Strategy Officer Sam Ramji, who you may remember from his recent tenure at Apigee, the Cloud Foundry Foundation, Google and Autodesk, told me, The Last Pickle is one of the premier Apache Cassandra consulting and services companies. The team there has been building Cassandra-based open-source solutions for the likes of Spotify, T-Mobile and AT&T since it was founded back in 2012. And while The Last Pickle is based in New Zealand, the company has engineers all over the world who do the heavy lifting and help these companies successfully implement the Cassandra database technology.

It’s worth mentioning that Last Pickle CEO Aaron Morton first discovered Cassandra when he worked for WETA Digital on the special effects for Avatar, where the team used Cassandra to allow the VFX artists to store their data.

“There’s two parts to what they do,” Ramji explained. “One is the very visible consulting, which has led them to become world experts in the operation of Cassandra. So as we automate Cassandra and as we improve the operability of the project with enterprises, their embodied wisdom about how to operate and scale Apache Cassandra is as good as it gets — the best in the world.” And The Last Pickle’s experience in building systems with tens of thousands of nodes — and the challenges that its customers face — is something Datastax can then offer to its customers as well.

And Datastax, of course, also plans to productize The Last Pickle’s open-source tools like the automated repair tool Reaper and the Medusa backup and restore system.

As both Ramji and Datastax VP of Engineering Josh McKenzie stressed, Cassandra has seen a lot of commercial development in recent years — the likes of AWS now offer a managed Cassandra service, for example — but there isn’t all that much hype around the project anymore. They argue that’s a good thing: now that it is over ten years old, Cassandra has been battle-hardened. For the last ten years, Ramji argues, the industry tried to figure out what the de facto standard for scale-out computing should be. By 2019, it became clear that Kubernetes was the answer.

“This next decade is about what is the de facto standard for scale-out data? We think that’s got certain affordances, certain structural needs, and we think that the decades that Cassandra has spent getting harden[ed] puts it in a position to be data for that wave.”

McKenzie also noted that Cassandra’s built-in features — support for multiple data centers and geo-replication, rolling updates and live scaling, as well as wide support across programming languages — give it a number of advantages over competing databases.

“It’s easy to forget how much Cassandra gives you for free just based on its architecture,” he said. “Losing the power in an entire datacenter, upgrading the version of the database, hardware failing every day? No problem. The cluster is 100 percent always still up and available. The tooling and expertise of The Last Pickle really help bring all this distributed and resilient power into the hands of the masses.”
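
As a concrete illustration of the multi-datacenter support McKenzie mentions: geo-replication in Cassandra is declared on the keyspace itself, so surviving the loss of a whole datacenter requires no application logic. A minimal sketch with the Python driver — the contact point, keyspace name and datacenter names here are hypothetical:

```python
# Minimal sketch: a Cassandra keyspace replicated across two datacenters.
# The contact point, keyspace and datacenter names are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# NetworkTopologyStrategy keeps a full set of replicas in each datacenter,
# so losing one entire DC still leaves three complete copies in the other.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS telemetry
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'us_east': 3,
        'eu_west': 3
    }
""")
```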

The two companies did not disclose the price of the acquisition.

Aug
05
2019
--

Cybereason raises $200 million for its enterprise security platform

Cybereason, which uses machine learning to increase the number of endpoints a single analyst can manage across a network of distributed resources, has raised $200 million in new financing from SoftBank Group and its affiliates. 

It’s a sign of the belief that SoftBank has in the technology, since the Japanese investment firm is basically doubling down on commitments it made to the Boston-based company four years ago.

The company first came to our attention five years ago when it raised a $25 million financing from investors, including CRV, Spark Capital and Lockheed Martin.

Cybereason’s technology processes and analyzes data in real time across an organization’s daily operations and relationships. It looks for anomalies in behavior across nodes on networks and uses those anomalies to flag suspicious activity.
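
Cybereason hasn’t published the internals of its models, but the general shape of that kind of behavioral flagging can be sketched in a few lines. A toy version, where the monitored feature (network connections opened per minute by a process) and the threshold are purely illustrative:

```python
# Toy sketch of behavioral anomaly flagging -- not Cybereason's actual model.
# Feature and threshold are illustrative: connections opened per minute.
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag `current` if it sits more than z_threshold std devs from history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs((current - mu) / sigma) > z_threshold

baseline = [4, 6, 5, 7, 5, 6, 4, 5]   # a process's normal behavior
print(is_anomalous(baseline, 5))       # False: within the usual range
print(is_anomalous(baseline, 48))      # True: possible beaconing or exfiltration
```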

The company also provides reporting tools to inform customers of the root cause, the timeline, the people involved in the breach or breaches, which tools they used and what information was being disseminated within and outside of the organization.

For co-founder Lior Div, Cybereason’s work is the continuation of the six years of training and service he spent working with the Israeli army’s 8200 Unit, the military incubator for half of the security startups pitching their wares today. After his time in the military, Div worked for the Israeli government as a private contractor reverse-engineering hacking operations.

Over the last two years, Cybereason has expanded the scope of its service to a network that spans 6 million endpoints tracked by 500 employees, with offices in Boston, Tel Aviv, Tokyo and London.

“Cybereason’s big data analytics approach to mitigating cyber risk has fueled explosive expansion at the leading edge of the EDR domain, disrupting the EPP market. We are leading the wave, becoming the world’s most reliable and effective endpoint prevention and detection solution because of our technology, our people and our partners,” said Div, in a statement. “We help all security teams prevent more attacks, sooner, in ways that enable understanding and taking decisive action faster.”

The company said it will use the new funding to accelerate its sales and marketing efforts across all geographies and push further ahead with research and development to make more of its security operations autonomous.

“Today, there is a shortage of more than three million level 1-3 analysts,” said Yonatan Striem-Amit, chief technology officer and co-founder, Cybereason, in a statement. “The new autonomous SOC enables SOC teams of the future to harness technology where manual work is being relied on today, and it will elevate L1 analysts to spend time on higher-value tasks and accelerate the advanced analysis L3 analysts do.”

Most recently the company was behind the discovery of Operation SoftCell, the largest nation-state cyber espionage attack on telecommunications companies. 

That attack, which was either conducted by Chinese-backed actors or made to look like it was conducted by Chinese-backed actors, according to Cybereason, targeted a select group of users in an effort to acquire cell phone records.

As we wrote at the time:

… hackers have systematically broken into more than 10 cell networks around the world over the past seven years to obtain massive amounts of call records — including times and dates of calls, and their cell-based locations — on at least 20 individuals.

Researchers at Boston-based Cybereason, who discovered the operation and shared their findings with TechCrunch, said the hackers could track the physical location of any customer of the hacked telcos — including spies and politicians — using the call records.

Lior Div, Cybereason’s co-founder and chief executive, told TechCrunch it’s “massive-scale” espionage.

Call detail records — or CDRs — are the crown jewels of any intelligence agency’s collection efforts. These call records are highly detailed metadata logs generated by a phone provider to connect calls and messages from one person to another. Although they don’t include the recordings of calls or the contents of messages, they can offer detailed insight into a person’s life. The National Security Agency has for years controversially collected the call records of Americans from cell providers like AT&T and Verizon (which owns TechCrunch), despite the questionable legality.

It’s not the first time that Cybereason has uncovered major security threats.

Back when it had just raised capital from CRV and Spark, Cybereason’s chief executive was touting its work with a defense contractor who’d been hacked. Again, the suspected culprit was the Chinese government.

As we reported, during one of the early product demos for a private defense contractor, Cybereason identified a full-blown attack by the Chinese — 10,000 usernames and passwords were leaked, and the attackers had access to nearly half of the organization on a daily basis.

The security breach was too sensitive to be shared with the press, but Div says that the FBI was involved and that the contractor had no indication it was being hacked until Cybereason detected the attack.

Jul
17
2019
--

AT&T signs $2 billion cloud deal with Microsoft

While AWS leads the cloud infrastructure market by a wide margin, Microsoft isn’t doing too badly, ensconced firmly in second place, the only other company with double-digit share. Today, it announced a big deal with AT&T that encompasses both Azure cloud infrastructure services and Office 365.

A person with knowledge of the contract pegged the combined deal at a tidy $2 billion, a nice feather in Microsoft’s cloud cap. According to a Microsoft blog post announcing the deal, AT&T has a goal to move most of its non-networking workloads to the public cloud by 2024, and Microsoft just got itself a big slice of that pie, surely one that rivals AWS, Google and IBM (which closed the $34 billion Red Hat deal last week) would dearly have loved to get.

As you would expect, Microsoft CEO Satya Nadella spoke of the deal in lofty terms around transformation and innovation. “Together, we will apply the power of Azure and Microsoft 365 to transform the way AT&T’s workforce collaborates and to shape the future of media and communications for people everywhere,” he said in a statement in the blog post announcement.

To that end, they are looking to collaborate on emerging technologies like 5G and believe that by combining Azure with AT&T’s 5G network, the two companies can help customers create new kinds of applications and solutions. As an example cited in the blog post, they could see using the speed of the 5G network combined with Azure AI-powered live voice translation to help first responders communicate instantaneously with someone who speaks a different language.

It’s worth noting that while this deal to bring Office 365 to AT&T’s 250,000 employees is a nice win, that part of the deal falls under the SaaS umbrella, so it won’t help with Microsoft’s cloud infrastructure market share. Still, any way you slice it, this is a big deal.

Apr
29
2019
--

With Kata Containers and Zuul, OpenStack graduates its first infrastructure projects

Over the course of the last year and a half, the OpenStack Foundation made the switch from purely focusing on the core OpenStack project to opening itself up to other infrastructure-related projects as well. The first two of these projects, Kata Containers and the Zuul project gating system, have now exited their pilot phase and have become the first top-level Open Infrastructure Projects at the OpenStack Foundation.

The Foundation made the announcement at its Open Infrastructure Summit (previously known as the OpenStack Summit) in Denver today, after the organization’s board voted to graduate them ahead of this week’s conference. “It’s an awesome milestone for the projects themselves,” OpenStack Foundation executive director Jonathan Bryce told me. “It’s a validation of the fact that in the last 18 months, they have created sustainable and productive communities.”

It’s also a milestone for the OpenStack Foundation itself, though, which is still in the process of reinventing itself in many ways. It can now point at two successful projects that are under its stewardship, which will surely help it as it goes out and tries to attract others who are looking to bring their open-source projects under the aegis of a foundation.

In addition to graduating these first two projects, Airship — a collection of open-source tools for provisioning private clouds that is currently a pilot project — hit version 1.0 today. “Airship originated within AT&T,” Bryce said. “They built it from their need to bring a bunch of open-source tools together to deliver on their use case. And that’s why, from the beginning, it’s been really well-aligned with what we would love to see more of in the open-source world and why we’ve been super excited to be able to support their efforts there.”

With Airship, developers use YAML documents to describe what the final environment should look like; the result is a production-ready Kubernetes cluster deployed via the OpenStack-Helm tool — though without any other dependencies on OpenStack.
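
The pattern is declarative: documents describe the desired end state, and the tooling computes and executes whatever steps close the gap. A schematic sketch of that loop — this is not Airship’s actual document schema (its real manifests are YAML managed through its Deckhand component), just the general shape of the idea, with hypothetical site and chart names:

```python
# Schematic of the declarative pattern Airship embodies. This is NOT
# Airship's real document schema, just the general declare-and-reconcile idea.
desired = {
    "site": "demo-site",
    "kubernetes": {"nodes": 3},
    "charts": ["openstack-helm/keystone", "openstack-helm/glance"],
}

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Compute the actions needed to move `actual` toward `desired`."""
    actions = []
    have = actual.get("kubernetes", {}).get("nodes", 0)
    want = desired["kubernetes"]["nodes"]
    if have < want:
        actions.append(f"provision {want - have} additional node(s)")
    for chart in desired["charts"]:
        if chart not in actual.get("charts", []):
            actions.append(f"deploy {chart}")
    return actions

# Starting from a single bare node with nothing deployed:
print(reconcile(desired, {"kubernetes": {"nodes": 1}, "charts": []}))
```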

AT&T’s assistant vice president, Network Cloud Software Engineering, Ryan van Wyk, told me that a lot of enterprises want to use certain open-source components, but that the interplay between them is often difficult and that while it’s relatively easy to manage the life cycle of a single tool, it’s hard to do so when you bring in multiple open-source tools, all with their own life cycles. “What we found over the last five years working in this space is that you can go and get all the different open-source solutions that you need,” he said. “But then the operator has to invest a lot of engineering time and build extensions and wrappers and perhaps some orchestration to manage the life cycle of the various pieces of software required to deliver the infrastructure.”

It’s worth noting that nothing about Airship is specific to telecom, though it’s no secret that OpenStack is quite popular in that world, and unsurprisingly, the Foundation is using this week’s event to highlight the OpenStack project’s role in the upcoming 5G rollouts of various carriers.

In addition, the event will showcase OpenStack’s bare-metal capabilities, an area the project has also focused on in recent releases. Indeed, the Foundation today announced that its bare-metal tools now manage more than a million cores of compute. To codify these efforts, the Foundation also today launched the OpenStack Ironic Bare Metal program, which brings together some of the project’s biggest users, like Verizon Media (home of TechCrunch, though we don’t run on the Verizon cloud), 99Cloud, China Mobile, China Telecom, China Unicom, Mirantis, OVH, Red Hat, SUSE, Vexxhost and ZTE.
