Mar
31
2020
--

Microsoft launches Edge Zones for Azure

Microsoft today announced the launch of Azure Edge Zones, which will allow Azure users to bring their applications to the company’s edge locations. The focus here is on enabling real-time low-latency 5G applications. The company is also launching a version of Edge Zones with carriers (starting with AT&T) in preview, which connects these zones directly to 5G networks in the carrier’s data center. And to round it all out, Azure is also getting Private Edge Zones for those who are deploying private 5G/LTE networks in combination with Azure Stack Edge.

In addition to partnering with carriers like AT&T, as well as Rogers, SK Telecom, Telstra and Vodafone, Microsoft is also launching new standalone Azure Edge Zones in more than 10 cities over the next year, starting with LA, Miami and New York later this summer.

“For the last few decades, carriers and operators have pioneered how we connect with each other, laying the foundation for telephony and cellular,” the company notes in today’s announcement. “With cloud and 5G, there are new possibilities by combining cloud services, like compute and AI with high bandwidth and ultra-low latency. Microsoft is partnering with them [to] bring 5G to life in immersive applications built by organization[s] and developers.”

This may all sound a bit familiar, and that’s because only a few weeks ago, Google launched Anthos for Telecom and its Global Mobile Edge Cloud, which at first glance offers a similar promise of bringing applications close to that cloud’s edge locations for 5G and telco usage. Microsoft argues that its offering is more comprehensive in terms of its partner ecosystem and geographic availability. But it’s clear that 5G is a trend all of the large cloud providers are trying to tap into. Microsoft’s own acquisition of 5G cloud specialist Affirmed Networks is yet another example of how it is looking to position itself in this market.

As far as the details of the various Edge Zone versions go, the focus of Edge Zones is mostly on IoT and AI workloads, while Microsoft notes that Edge Zones with Carriers is more about low-latency online gaming, remote meetings and events, as well as smart infrastructure. Private Edge Zones, which combines private carrier networks with Azure Stack Edge, is something only a small number of large enterprise companies would be likely to look into, given the cost and complexity of rolling out a system like this.

 

Mar
26
2020
--

Microsoft acquires 5G specialist Affirmed Networks

Microsoft today announced that it has acquired Affirmed Networks, a company that specializes in fully virtualized, cloud-native networking solutions for telecom operators.

With its focus on 5G and edge computing, Affirmed looks like the ideal acquisition target for a large cloud provider looking to get deeper into the telco business. According to Crunchbase, Affirmed raised a total of $155 million before this acquisition, and the company’s more than 100 enterprise customers include the likes of AT&T, Orange, Vodafone, Telus, Turkcell and STC.

“As we’ve seen with other technology transformations, we believe that software can play an important role in helping advance 5G and deliver new network solutions that offer step-change advancements in speed, cost and security,” writes Yousef Khalidi, Microsoft’s corporate vice president for Azure Networking. “There is a significant opportunity for both incumbents and new players across the industry to innovate, collaborate and create new markets, serving the networking and edge computing needs of our mutual customers.”

With its customer base, Affirmed gives Microsoft another entry point into the telecom industry. Previously, telcos would often build their own data centers and stuff them with costly proprietary hardware (and the software to manage it). But thanks to today’s virtualization technologies, the large cloud platforms are now able to offer the same capabilities and reliability at a fraction of the cost. And unsurprisingly, a new technology like 5G, with its promise of new and expanded markets, makes for a good moment to push forward with these new technologies.

Google recently made some moves in this direction with its Anthos for Telecom and Global Mobile Edge Cloud, too. Chances are we will see all of the large cloud providers continue to go after this market in the coming months.

In a somewhat odd move, only yesterday Affirmed announced a new CEO and president, Anand Krishnamurthy. It’s not often that we see this kind of executive move the day before a company announces that it is being acquired.

The announcement doesn’t feature a single hint at today’s news and includes all of the usual cliches we’ve come to expect from a press release that announces a new CEO. “We are thankful to Hassan for his vision and commitment in guiding the company through this extraordinary journey and positioning us for tremendous success in the future,” Krishnamurthy wrote at the time. “It is my honor to lead Affirmed as we continue to drive this incredible transformation in our industry.”

We asked Affirmed for some more background about this and will update this post if we hear more. Update: an Affirmed spokesperson told us that this was “part of a succession plan that had been determined previously.  So it was not related [to] any specific event.”

Nov
25
2019
--

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an Arm Cortex-A class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All the device needs to be able to do is detect the wake word that starts the Alexa functionality, which is a comparatively simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.
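
The division of labor Didascalou describes can be sketched in a few lines of Python. Everything here is a stand-in: `detect_wake_word` is a naive placeholder for the small on-device keyword-spotting model, and `send_to_cloud` stands in for the cloud session that does the heavy lifting, so treat this as an illustration of the pattern rather than anything resembling the actual Alexa SDK:

```python
# Toy sketch of the on-device/cloud split for an "Alexa built-in" device.
# The device stays "ultra dumb": it only watches for the wake word and
# then forwards audio; recognition and media handling happen server-side.

def detect_wake_word(frame: bytes) -> bool:
    """Naive placeholder for the tiny on-device keyword spotter."""
    return b"alexa" in frame.lower()

def handle_audio(frames, send_to_cloud) -> int:
    """Stream audio frames to the cloud only after the wake word fires.

    Returns the number of frames offloaded to the cloud.
    """
    streaming = False
    sent = 0
    for frame in frames:
        if not streaming and detect_wake_word(frame):
            streaming = True      # wake word detected: open the session
            continue
        if streaming:
            send_to_cloud(frame)  # all heavy lifting happens in the cloud
            sent += 1
    return sent
```

The point of the sketch is the asymmetry: the only model that has to fit in 1MB of RAM is the wake-word gate; everything after the `continue` happens off-device.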

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.
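
The pattern developers previously hand-rolled with Lambda functions (local buffering with a retention policy, flushed upstream when connectivity allows) can be illustrated with a toy model in plain Python. To be clear, this is not the Stream Manager API; it is only a sketch of the kind of logic the new feature takes off developers’ plates:

```python
from collections import deque

class EdgeStream:
    """Toy local stream buffer with a size-based retention policy,
    mimicking the pattern Stream Manager now provides out of the box."""

    def __init__(self, max_messages: int):
        # Retention policy: keep only the newest max_messages entries;
        # deque with maxlen drops the oldest entries automatically.
        self.buffer = deque(maxlen=max_messages)

    def append(self, message: bytes) -> None:
        """Accept a message from an edge device, even while offline."""
        self.buffer.append(message)

    def flush(self, export) -> int:
        """Export buffered messages (e.g. to a cloud stream) once
        connectivity is available; returns how many were exported."""
        exported = 0
        while self.buffer:
            export(self.buffer.popleft())
            exported += 1
        return exported
```

In the real service, the export side is what’s pre-integrated with Kinesis and IoT Analytics; here it’s just a callback.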

Also new for AWS IoT Greengrass are fleet provisioning, which makes it easier for businesses to quickly set up lots of new devices automatically, as well as secure tunneling for AWS IoT Device Management, which makes it easier for developers to remote access into a device and troubleshoot them. In addition, AWS IoT Core now features configurable endpoints.

Nov
04
2019
--

Microsoft Azure gets into ag tech with the preview of FarmBeats

At its annual Ignite event in Orlando, Fla., Microsoft today announced that Azure FarmBeats, a project that until now was mostly a research effort, will be available as a public preview and in the Azure Marketplace, starting today. FarmBeats is Microsoft’s project that combines IoT sensors, data analysis and machine learning.

“The goal of FarmBeats is to augment farmers’ knowledge and intuition about their own farm with data and data-driven insights,” Microsoft explained in today’s announcement. The idea behind FarmBeats is to take in data from a wide variety of sources, including sensors, satellites, drones and weather stations, and then turn that into actionable intelligence for farmers, using AI and machine learning.

In addition, FarmBeats also wants to be something of a platform for developers, who can then build their own applications on top of the data the platform aggregates and evaluates.

As Microsoft noted during the development process, having satellite imagery is one thing, but that can’t capture all of the data on a farm. For that, you need in-field sensors and other data — yet all of this heterogeneous data then has to be merged and analyzed somehow. Farms also often don’t have great internet connectivity. Because of this, the FarmBeats team was among the first to leverage Microsoft’s efforts in using TV white space for connectivity and, of course, Azure IoT Edge for collecting all of the data.

Oct
08
2019
--

Arm brings custom instructions to its embedded CPUs

At its annual TechCon event in San Jose, Arm today announced Custom Instructions, a new feature of its Armv8-M architecture for embedded CPUs that, as the name implies, enables its customers to write their own custom instructions to accelerate their specific use cases for embedded and IoT applications.

“We already have ways to add acceleration, but not as deep and down to the heart of the CPU. What we’re giving [our customers] here is the flexibility to program your own instructions, to define your own instructions — and have them executed by the CPU,” Thomas Ensergueix, Arm’s senior director for its automotive and IoT business, told me ahead of today’s announcement.

He noted that Arm has always had a continuum of options for acceleration, starting with its memory-mapped architecture for connecting GPUs and today’s neural processing units over a bus. This allows the CPU and the accelerator to run in parallel, but with the bus as the bottleneck. Customers can also opt for a co-processor that’s directly connected to the CPU, but today’s news essentially allows Arm customers to create their own accelerated algorithms that then run directly on the CPU. That means the latency is low, but it’s not running in parallel, as with the memory-mapped solution.
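
The tradeoff Ensergueix describes can be made concrete with a toy cost model: a memory-mapped accelerator pays a bus-transfer cost in each direction but overlaps its work with the CPU, while an in-CPU custom instruction has essentially no transfer overhead but serializes the work. All the numbers and function names below are made up purely for illustration:

```python
def offload_latency(transfer: int, accel_work: int, cpu_work: int) -> int:
    """Memory-mapped accelerator: pay the bus transfer both ways,
    but the CPU and the accelerator run in parallel."""
    return 2 * transfer + max(accel_work, cpu_work)

def custom_instruction_latency(accel_work: int, cpu_work: int) -> int:
    """Custom instruction: no bus overhead, but the accelerated work
    runs inline on the CPU, serialized with the rest."""
    return accel_work + cpu_work

# Small kernel, relatively expensive bus: inlining as a custom
# instruction wins, which is the embedded/IoT case Arm is targeting.
assert custom_instruction_latency(5, 10) < offload_latency(8, 5, 10)

# Large kernel, cheap bus: the parallel memory-mapped path wins.
assert offload_latency(1, 100, 100) < custom_instruction_latency(100, 100)
```

This is only a back-of-the-envelope model, but it captures why Arm positions custom instructions at the low-latency, small-workload end of its acceleration continuum.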


As Arm argues, this setup allows for the lowest-cost (and risk) path for integrating customer workload acceleration, as there are no disruptions to the existing CPU features and it still allows its customers to use the existing standard tools with which they are already familiar.

For now, custom instructions will only be available to be implemented in the Arm Cortex-M33 CPUs, starting in the first half of 2020. By default, it’ll also be available for all future Cortex-M processors. There are no additional costs or new licenses to buy for Arm’s customers.

Ensergueix noted that as we’re moving to a world with more and more connected devices, more of Arm’s customers will want to optimize their processors for their often very specific use cases — and often they’ll want to do so because by creating custom instructions, they can get a bit more battery life out of these devices, for example.

Arm has already lined up a number of partners to support Custom Instructions, including IAR Systems, NXP, Silicon Labs and STMicroelectronics.

“Arm’s new Custom Instructions capabilities allow silicon suppliers like NXP to offer their customers a new degree of application-specific instruction optimizations to improve performance, power dissipation and static code size for new and emerging embedded applications,” writes NXP’s Geoff Lees, SVP and GM of Microcontrollers. “Additionally, all these improvements are enabled within the extensive Cortex-M ecosystem, so customers’ existing software investments are maximized.”

In related embedded news, Arm also today announced that it is setting up a governance model for Mbed OS, its open-source operating system for embedded devices that run an Arm Cortex-M chip. Mbed OS has always been open source, but the Mbed OS Partner Governance model will allow Arm’s Mbed silicon partners to have more of a say in how the OS is developed, through tools like a monthly Product Working Group meeting. Partners like Analog Devices, Cypress, Nuvoton, NXP, Renesas, Realtek, Samsung and u-blox are already participating in this group.

Oct
08
2019
--

Satya Nadella looks to the future with edge computing

Speaking today at the Microsoft Government Leaders Summit in Washington, DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.”

While Amazon, Google and other competitors may have something to say about that, marketing hype aside, many companies are still in the midst of transitioning to the cloud. Nadella says the future of computing could actually be at the edge, where computing is done locally before data is then transferred to the cloud for AI and machine learning purposes. What goes around, comes around.

But as Nadella sees it, this is not going to be about either edge or cloud. It’s going to be the two technologies working in tandem. “Now, all this is being driven by this new tech paradigm that we describe as the intelligent cloud and the intelligent edge,” he said today.

He said that to truly understand the impact the edge is going to have on computing, you have to look at research, which predicts there will be 50 billion connected devices in the world by 2030, a number even he finds astonishing. “I mean this is pretty stunning. We think about a billion Windows machines or a couple of billion smartphones. This is 50 billion [devices], and that’s the scope,” he said.

The key here is that these 50 billion devices, whether you call them edge devices or the Internet of Things, will be generating tons of data. That means you will have to develop entirely new ways of thinking about how all this flows together. “The capacity at the edge, that ubiquity is going to be transformative in how we think about computation in any business process of ours,” he said. As we generate ever-increasing amounts of data, whether for public-sector use cases or any business need, it’s going to be the fuel for artificial intelligence, and he sees the sheer amount of that data driving new AI use cases.

“Of course when you have that rich computational fabric, one of the things that you can do is create this new asset, which is data and AI. There is not going to be a single application, a single experience that you are going to build, that is not going to be driven by AI, and that means you have to really have the ability to reason over large amounts of data to create that AI,” he said.

Nadella would be more than happy to have his audience take care of all that using Microsoft products, whether Azure compute, database, AI tools or edge computers like the Data Box Edge it introduced in 2018. While Nadella is probably right about the future of computing, all of this could apply to any cloud, not just Microsoft.

As computing shifts to the edge, it’s going to have a profound impact on the way we think about technology in general, but it’s probably not going to involve being tied to a single vendor, regardless of how comprehensive their offerings may be.

May
02
2019
--

Microsoft brings Plug and Play to IoT

Microsoft today announced that it wants to bring the ease of use of Plug and Play, which today allows you to plug virtually any peripheral into a Windows PC without having to worry about drivers, to IoT devices. Typically, getting an IoT device connected and up and running takes some work, even with modern deployment tools. The promise of IoT Plug and Play is that it will greatly simplify this process and do away with the hardware and software configuration steps that are still needed today.

As Azure corporate vice president Julia White writes in today’s announcement, “one of the biggest challenges in building IoT solutions is to connect millions of IoT devices to the cloud due to heterogeneous nature of devices today – such as different form factors, processing capabilities, operational system, memory and capabilities.” This, Microsoft argues, is holding back IoT adoption.

IoT Plug and Play, on the other hand, offers developers an open modeling language that will allow them to connect these devices to the cloud without having to write any code.
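
The core idea of a capability model is that a device describes what it exposes as data, so the cloud side can interoperate with it generically instead of via device-specific code. The sketch below illustrates that idea in plain Python; the schema and field names are invented for illustration and are not the actual IoT Plug and Play modeling format:

```python
# Illustrative capability model: the device declares the telemetry and
# properties it exposes, and a generic validator checks incoming
# messages against that declaration without any device-specific code.
THERMOSTAT_MODEL = {
    "telemetry": {"temperature": float, "humidity": float},
    "properties": {"firmware_version": str},
}

def validate_telemetry(model: dict, message: dict) -> list:
    """Return a list of problems; an empty list means the message
    conforms to the device's declared capability model."""
    problems = []
    for name, value in message.items():
        expected = model["telemetry"].get(name)
        if expected is None:
            problems.append(f"unknown field: {name}")
        elif not isinstance(value, expected):
            problems.append(f"{name}: expected {expected.__name__}")
    return problems
```

The promise of the real modeling language is the same shape at larger scale: once the model exists, any conforming device can be onboarded without writing new integration code.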

Microsoft can’t do this alone, though, since it needs the support of the hardware and software manufacturers in its IoT ecosystem, too. The company has already signed up a number of partners, including Askey, Brainium, Compal, Kyocera, STMicroelectronics, Thundercomm and VIA Technologies. The company says that dozens of devices are already Plug and Play-ready and potential users can find them in the Azure IoT Device Catalog.

Apr
24
2019
--

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Strategic Alliances David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to remove the dependence of applications from the infrastructure on which they run. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools — at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.

Apr
11
2019
--

Armis nabs $65M Series C as IoT security biz grows in leaps and bounds

Armis helps companies protect IoT devices on the network without using an agent, and its approach is clearly resonating with the market, as the startup reports 700 percent growth in the last year. That caught the attention of investors, who awarded the company a $65 million Series C investment to help keep accelerating that growth.

Sequoia Capital led the round with help from new investors Insight Venture Partners and Intermountain Ventures. Returning investors Bain Capital Ventures, Red Dot Capital Partners and Tenaya Capital also participated. Today’s investment brings the total raised to $112 million, according to the company.

The company is solving a hard problem around device management on a network. If you have devices where you cannot apply an agent to track them, how do you manage them? Nadir Izrael, company co-founder and CTO, says you have to do it very carefully because even scanning for ports could be too much for older devices and they could shut down. Instead, he says that Armis takes a passive approach to security, watching and learning and understanding what normal device behavior looks like — a kind of behavioral fingerprinting.

“We observe what devices do on the network. We look at their behavior, and we figure out from that everything we need to know,” Izrael told TechCrunch. He adds, “Armis in a nutshell is a giant device behavior crowdsourcing engine. Basically, every client of Armis is constantly learning how devices behave. And those statistical models, those machine learning models, they get merged into master models.”
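
Izrael is describing a baseline-then-detect approach: learn what normal behavior looks like for a device purely by watching traffic, then flag deviations, without ever probing the device itself. A toy version of that passive pattern might look like the following (simplified far beyond Armis’s crowdsourced statistical models, and with invented names throughout):

```python
from collections import defaultdict

class PassiveMonitor:
    """Toy behavioral fingerprinting: observe traffic during a learning
    phase, then flag destination ports a device has never used before.
    No packets are ever sent to the device, so even fragile legacy
    hardware that can't tolerate a port scan is safe to monitor."""

    def __init__(self):
        self.baseline = defaultdict(set)  # device -> ports seen in training
        self.learning = True

    def observe(self, device: str, dst_port: int):
        """Feed in one observed connection; returns an alert or None."""
        if self.learning:
            self.baseline[device].add(dst_port)
            return None
        if dst_port not in self.baseline[device]:
            return f"anomaly: {device} -> port {dst_port}"
        return None
```

The real system replaces the crude "set of ports" baseline with machine-learned behavior models merged across customers, but the passive observe-and-compare loop is the same shape.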

Whatever they are doing, they seem to have hit upon a security pain point. They announced a $30 million Series B almost exactly a year ago, and they went back for more because they were growing quickly and needed the capital to hire people to keep up.

That kind of growth is a challenge for any startup. The company expects to double its 125-person workforce before the end of the year, and it is working to put systems in place to incorporate those new people and serve all of those new customers.

The company plans to hire more people in sales and marketing, of course, but they will concentrate on customer support and building out partnership programs to get some help from systems integrators, ISVs and MSPs, who can do some of the customer hand-holding for them.

Mar
28
2019
--

Microsoft gives 500 patents to startups

Microsoft today announced a major expansion of its Azure IP Advantage program, which provides its Azure users with protection against patent trolls. This program now also provides customers who are building IoT solutions that connect to Azure with access to 10,000 patents to defend themselves against intellectual property lawsuits.

What’s maybe most interesting here, though, is that Microsoft is also donating 500 patents to startups in the LOT Network. This organization, which counts companies like Amazon, Facebook, Google, Microsoft, Netflix, SAP, Epic Games, Ford, GM, Lyft and Uber among its close to 400 members, is designed to protect companies against patent trolls by giving them access to a wide library of patents from its member companies and other sources.

“The LOT Network is really committed to helping address the proliferation of intellectual property lawsuits, especially ones that are brought by non-practicing entities, or so-called trolls,” Microsoft CVP and Deputy General Counsel Erich Andersen told me.

This new program goes well beyond basic protection from patent trolls, though. Qualified startups that join the LOT Network can acquire Microsoft patents as part of their free membership and, as Andersen stressed, the startups will own them outright. The LOT Network will be able to provide its startup members with up to three patents from this collection.

There’s one additional requirement here, though: To qualify for getting the patents, these startups also have to meet a $1,000 per month Azure spend. As Andersen told me, though, they don’t have to make any kind of forward pledge. The company will simply look at a startup’s last three monthly Azure bills.

“We want to help the LOT Network grow its network of startups,” Andersen said. “To provide an incentive, we are going to provide these patents to them.” He noted that startups are obviously interested in getting access to patents as a foundation of their companies, but also to raise capital and to defend themselves against trolls.

The patents we’re talking about here cover a wide range of technologies as well as geographies. Andersen noted that we’re talking about U.S. patents as well as European and Chinese patents, for example.

“The idea is that these startups come from a diverse set of industry sectors,” he said. “The hope we have is that when they approach LOT, they’ll find patents among those 500 that are going to be interesting to basically almost any company that might want a foundational set of patents for their business.”

As for the extended Azure IP Advantage program, it’s worth noting that every Azure customer who has spent more than $1,000 per month over the past three months, and hasn’t filed a patent infringement lawsuit against another Azure customer in the last two years, can automatically pick one of the patents in the program’s portfolio to defend itself against frivolous patent lawsuits from trolls. (That’s a different library of patents from the one Microsoft is donating to the LOT Network as part of the startup program.)
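
Taken together with Andersen’s note that Microsoft “will simply look at a startup’s last three monthly Azure bills,” the eligibility test reduces to a simple check. This sketch encodes one plausible reading of it (the function name and the per-bill interpretation of the $1,000 threshold are mine, not Microsoft’s published criteria):

```python
def qualifies_for_patent_pick(monthly_bills, sued_azure_customer_recently):
    """Illustrative eligibility check for the patent-pick benefit:
    each of the last three monthly Azure bills must exceed $1,000, and
    the customer must not have filed a patent infringement suit against
    another Azure customer in the last two years (passed in as a flag)."""
    last_three = monthly_bills[-3:]
    return (len(last_three) == 3
            and all(bill > 1000 for bill in last_three)
            and not sued_azure_customer_recently)
```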

As Andersen noted, the team looked at how it could enhance the IP program by focusing on a number of specific areas. Microsoft is obviously investing a lot into IoT, so extending the program to this area makes sense. “What we’re basically saying is that if the customer is using IoT technology — regardless of whether it’s Microsoft technology or not — and it’s connected to Azure, then we’re going to provide this patent pick right to help customers defend themselves against patent suits,” Andersen said.

In addition, for those who do choose to use Microsoft IoT technology across the board, Microsoft will provide indemnification, too.

Patent trolls have lately started acquiring IoT patents, so chances are they are getting ready to make use of them and that we’ll see quite a bit of patent litigation in this space in the future. “The early signs we’re seeing indicate that this is something that customers are going to care about in the future,” said Andersen.
