Feb
10
2021
--

SecuriThings snares $14M Series A to keep edge devices under control

Managing IoT devices in a large organization can be a messy proposition, especially when many of them aren’t even managed directly by IT and often involve integrating with a number of third-party systems. SecuriThings wants to help with a platform of services to bring that all under control, and today the startup announced a $14 million Series A.

Aleph led the round with participation from existing investor Firstime VC and a number of unnamed angels. The company has raised a total of $17 million, according to Crunchbase data.

Roy Dagan, company CEO and co-founder, says he sees organizations running many different kinds of connected devices on their networks, with no easy way to manage them all. “We enable organizations to manage IoT devices securely at scale in a consolidated and cost-efficient manner,” Dagan told me.

This could include devices like security cameras, along with access control systems and building management systems involving thousands — or in some instances, tens of thousands — of devices. “The technology we build, we integrate with management systems, and then we deploy our capabilities which are focused on the edge devices. So that’s how we also find the devices, and then we have these different capabilities running on the edge devices or fetching information from the edge devices,” Dagan explained.

[Screenshot: SecuriThings Horizon device view. Image Credits: SecuriThings]

The company has formed partnerships with a number of key industry players, including Microsoft, Convergint Technologies and Johnson Controls, among others. Its customers span a range of industries, including airports, casinos and large corporate campuses.

Aaron Rosenson, general partner at lead investor Aleph, says the company is solving a big problem: managing the myriad devices inside large organizations. “Until SecuriThings came along, there were these massive enterprise software categories of automation, orchestration and observability just waiting to be built for IoT,” Rosenson said in a statement. He says that SecuriThings is pulling that all together for its customers.

The company was founded in 2016, originally with the idea of being an IoT security company. While it is still involved in securing these devices, its ability to communicate with them gives IT much greater visibility and insight, along with the ability to update and manage them.

Today, the company has 30 employees, and with the new investment it will be doubling that number by the end of the year. While Dagan didn’t cite specific customer numbers, he did say they have dozens of customers, with deal sizes ranging from five to seven figures.

Nov
13
2020
--

Which emerging technologies are enterprise companies getting serious about in 2020?

Startups need to live in the future. They create roadmaps, build products and continually upgrade them with an eye on next year — or even a few years out.

Big companies, often the target customers for startups, live in a much more near-term world. They buy technologies that can solve problems they know about today, rather than those they may face a couple bends down the road. In other words, they’re driving a Dodge, and most tech entrepreneurs are driving a DeLorean equipped with a flux-capacitor.

That situation can lead to a huge waste of time for startups that want to sell to enterprise customers: a business development black hole. Startups are talking about technology shifts and customer demands that the executives inside the large company — even if they have “innovation,” “IT,” or “emerging technology” in their titles — just don’t see as an urgent priority yet, or can’t sell to their colleagues.

How do you avoid the aforementioned black hole? Some recent research that my company, Innovation Leader, conducted in collaboration with KPMG LLP, suggests a constructive approach.

Rather than asking large companies about which technologies they were experimenting with, we created four buckets, based on what you might call “commitment level.” (Our survey had 211 respondents, 62% of them in North America and 59% at companies with greater than $1 billion in annual revenue.) We asked survey respondents to assess a list of 16 technologies, from advanced analytics to quantum computing, and put each one into one of these four buckets. We conducted the survey at the tail end of Q3 2020.

Respondents in the first group were “not exploring or investing” — in other words, “we don’t care about this right now.” The top technology there was quantum computing.

Bucket #2 was the second-lowest commitment level: “learning and exploring.” At this stage, a startup gets to educate its prospective corporate customer about an emerging technology — but nabbing a purchase commitment is still quite a few exits down the highway. It can be constructive to begin building relationships when a company is at this stage, but your sales staff shouldn’t start calculating their commissions just yet.

Here are the top five things that fell into the “learning and exploring” cohort, in ranked order:

  1. Blockchain.
  2. Augmented reality/mixed reality.
  3. Virtual reality.
  4. AI/machine learning.
  5. Wearable devices.

Technologies in the third group, “investing or piloting,” may represent the sweet spot for startups. At this stage, the corporate customer has already discovered some internal problem or use case that the technology might address. They may have shaken loose some early funding. They may have departments internally, or test sites externally, where they know they can conduct pilots. Often, they’re assessing what established tech vendors like Microsoft, Oracle and Cisco can provide — and they may find their solutions wanting.

Here’s what our survey respondents put into the “investing or piloting” bucket, in ranked order:

  1. Advanced analytics.
  2. AI/machine learning.
  3. Collaboration tools and software.
  4. Cloud infrastructure and services.
  5. Internet of things/new sensors.

By the time a technology is placed into the fourth category, which we dubbed “in-market or accelerating investment,” it may be too late for a startup to find a foothold. There’s already a clear understanding of at least some of the use cases or problems that need solving, and return-on-investment metrics have been established. But some providers have already been chosen based on successful pilots, and you may need to dislodge a vendor the enterprise is already working with. It can happen, but the headwinds are strong.

Here’s what the survey respondents placed into the “in-market or accelerating investment” bucket, in ranked order:

Mar
31
2020
--

Microsoft launches Edge Zones for Azure

Microsoft today announced the launch of Azure Edge Zones, which will allow Azure users to bring their applications to the company’s edge locations. The focus here is on enabling real-time low-latency 5G applications. The company is also launching a version of Edge Zones with carriers (starting with AT&T) in preview, which connects these zones directly to 5G networks in the carrier’s data center. And to round it all out, Azure is also getting Private Edge Zones for those who are deploying private 5G/LTE networks in combination with Azure Stack Edge.

In addition to partnering with carriers like AT&T, as well as Rogers, SK Telecom, Telstra and Vodafone, Microsoft is also launching new standalone Azure Edge Zones in more than 10 cities over the next year, starting with LA, Miami and New York later this summer.

“For the last few decades, carriers and operators have pioneered how we connect with each other, laying the foundation for telephony and cellular,” the company notes in today’s announcement. “With cloud and 5G, there are new possibilities by combining cloud services, like compute and AI with high bandwidth and ultra-low latency. Microsoft is partnering with them [to] bring 5G to life in immersive applications built by organization[s] and developers.”

This may all sound a bit familiar, and that’s because only a few weeks ago, Google launched Anthos for Telecom and its Global Mobile Edge Cloud, which at first glance offers a similar promise of bringing applications close to that cloud’s edge locations for 5G and telco usage. Microsoft argues that its offering is more comprehensive in terms of its partner ecosystem and geographic availability. But it’s clear that 5G is a trend all of the large cloud providers are trying to tap into. Microsoft’s own acquisition of 5G cloud specialist Affirmed Networks is yet another example of how it is looking to position itself in this market.

As far as the details of the various Edge Zone versions go, the focus of Edge Zones is mostly on IoT and AI workloads, while Microsoft notes that Edge Zones with Carriers is more about low-latency online gaming, remote meetings and events, as well as smart infrastructure. Private Edge Zones, which combine private carrier networks with Azure Stack Edge, are something only a small number of large enterprise companies are likely to look into, given the cost and complexity of rolling out a system like this.

Mar
26
2020
--

Microsoft acquires 5G specialist Affirmed Networks

Microsoft today announced that it has acquired Affirmed Networks, a company that specializes in fully virtualized, cloud-native networking solutions for telecom operators.

With its focus on 5G and edge computing, Affirmed looks like the ideal acquisition target for a large cloud provider looking to get deeper into the telco business. According to Crunchbase, Affirmed raised a total of $155 million before this acquisition, and the company’s more than 100 enterprise customers include the likes of AT&T, Orange, Vodafone, Telus, Turkcell and STC.

“As we’ve seen with other technology transformations, we believe that software can play an important role in helping advance 5G and deliver new network solutions that offer step-change advancements in speed, cost and security,” writes Yousef Khalidi, Microsoft’s corporate vice president for Azure Networking. “There is a significant opportunity for both incumbents and new players across the industry to innovate, collaborate and create new markets, serving the networking and edge computing needs of our mutual customers.”

With its customer base, Affirmed gives Microsoft another entry point into the telecom industry. Previously, the telcos would often build their own data centers and stuff them with costly proprietary hardware (and the software to manage it). But thanks to today’s virtualization technologies, the large cloud platforms are now able to offer the same capabilities and reliability at a fraction of the cost. And unsurprisingly, a new technology like 5G, with its promise of new and expanded markets, makes for a good moment to push forward with these new technologies.

Google recently made some moves in this direction with its Anthos for Telecom and Global Mobile Edge Cloud, too. Chances are we will see all of the large cloud providers continue to go after this market in the coming months.

In a somewhat odd move, only yesterday Affirmed announced a new CEO and president, Anand Krishnamurthy. It’s not often that we see these kinds of executive moves hours before a company announces its acquisition.

The announcement doesn’t feature a single hint at today’s news and includes all of the usual cliches we’ve come to expect from a press release that announces a new CEO. “We are thankful to Hassan for his vision and commitment in guiding the company through this extraordinary journey and positioning us for tremendous success in the future,” Krishnamurthy wrote at the time. “It is my honor to lead Affirmed as we continue to drive this incredible transformation in our industry.”

We asked Affirmed for some more background about this and will update this post if we hear more. Update: an Affirmed spokesperson told us that this was “part of a succession plan that had been determined previously. So it was not related [to] any specific event.”

Nov
25
2019
--

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an ARM Cortex A-class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All the device needs to be able to do is detect the wake word to start the Alexa functionality, which is a comparably simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.
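The division of labor Didascalou describes (a tiny always-on wake-word model on the device, with everything else offloaded to the cloud) can be sketched in a few lines of Python. The detector and cloud client below are hypothetical stand-ins for illustration, not the actual Alexa Voice Service SDK:

```python
from collections import deque

WAKE_WORD = "alexa"

def tiny_wake_word_model(frame: str) -> bool:
    """Stand-in for a small on-device keyword spotter.

    On a 1MB Cortex-M device this would be a compact neural net;
    here we just match a token for illustration.
    """
    return frame == WAKE_WORD

def send_to_cloud(frames):
    """Stand-in for streaming audio to the cloud, where the heavy
    speech recognition and decision-making actually run."""
    return {"uploaded_frames": list(frames)}

def device_loop(audio_frames, buffer_size=3):
    # Keep a short rolling buffer so the cloud also receives the
    # audio immediately surrounding the wake word.
    buffer = deque(maxlen=buffer_size)
    for frame in audio_frames:
        buffer.append(frame)
        if tiny_wake_word_model(frame):
            # Everything after detection is offloaded to the cloud.
            return send_to_cloud(buffer)
    return None  # No wake word heard; nothing leaves the device.

result = device_loop(["music", "chatter", "alexa"])
```

The rolling buffer matters on real hardware because users start speaking right after the wake word; the cloud needs those first frames to recognize the request.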

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up the what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.
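The plumbing developers previously had to cobble together (local buffering, a retention policy, then forwarding to a cloud sink) can be reduced to something like the sketch below. This is an illustrative toy, not the actual Stream Manager API:

```python
from collections import deque

class EdgeStream:
    """Minimal sketch of an edge data stream with a size-based
    retention policy, of the kind Stream Manager now provides."""

    def __init__(self, max_messages=1000):
        # Oldest messages are dropped once the cap is reached,
        # so the device never runs out of local storage.
        self._buffer = deque(maxlen=max_messages)

    def append(self, message):
        self._buffer.append(message)

    def drain(self, sink):
        """Forward buffered messages to a cloud sink (in practice a
        Kinesis-like endpoint) and clear the local buffer."""
        sent = list(self._buffer)
        self._buffer.clear()
        sink.extend(sent)
        return len(sent)

# A stream that retains only the two most recent readings.
stream = EdgeStream(max_messages=2)
for reading in [1, 2, 3]:
    stream.append(reading)

cloud = []
count = stream.drain(cloud)  # oldest reading (1) was already evicted
```

The point of the managed feature is that connection handling, retries and retention policies like the one simulated here no longer have to be reimplemented per project.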

Also new for AWS IoT Greengrass are fleet provisioning, which makes it easier for businesses to quickly set up lots of new devices automatically, as well as secure tunneling for AWS IoT Device Management, which makes it easier for developers to remote access into a device and troubleshoot them. In addition, AWS IoT Core now features configurable endpoints.

Nov
04
2019
--

Microsoft Azure gets into ag tech with the preview of FarmBeats

At its annual Ignite event in Orlando, Fla., Microsoft today announced that Azure FarmBeats, a project that until now was mostly a research effort, will be available as a public preview and in the Azure Marketplace, starting today. FarmBeats is Microsoft’s project that combines IoT sensors, data analysis and machine learning.

“The goal of FarmBeats is to augment farmers’ knowledge and intuition about their own farm with data and data-driven insights,” Microsoft explained in today’s announcement. The idea behind FarmBeats is to take in data from a wide variety of sources, including sensors, satellites, drones and weather stations, and then turn that into actionable intelligence for farmers, using AI and machine learning.

In addition, FarmBeats also wants to be somewhat of a platform for developers who can then build their own applications on top of this data that the platform aggregates and evaluates.

As Microsoft noted during the development process, having satellite imagery is one thing, but that can’t capture all of the data on a farm. For that, you need in-field sensors and other data — yet all of this heterogeneous data then has to be merged and analyzed somehow. Farms also often don’t have great internet connectivity. Because of this, the FarmBeats team was among the first to leverage Microsoft’s efforts in using TV white space for connectivity and, of course, Azure IoT Edge for collecting all of the data.

Oct
08
2019
--

Arm brings custom instructions to its embedded CPUs

At its annual TechCon event in San Jose, Arm today announced Custom Instructions, a new feature of its Armv8-M architecture for embedded CPUs that, as the name implies, enables its customers to write their own custom instructions to accelerate their specific use cases for embedded and IoT applications.

“We already have ways to add acceleration, but not as deep and down to the heart of the CPU. What we’re giving [our customers] here is the flexibility to program your own instructions, to define your own instructions — and have them executed by the CPU,” Thomas Ensergueix, Arm’s senior director for its automotive and IoT business, told me ahead of today’s announcement.

He noted that Arm has always had a continuum of options for acceleration, starting with its memory-mapped architecture for connecting GPUs and today’s neural processing units over a bus. This allows the CPU and the accelerator to run in parallel, but with the bus as the bottleneck. Customers can also opt for a co-processor that’s directly connected to the CPU, but today’s news essentially allows Arm customers to create their own accelerated algorithms that then run directly on the CPU. That means the latency is low, but it’s not running in parallel, as with the memory-mapped solution.


As Arm argues, this setup allows for the lowest-cost (and risk) path for integrating customer workload acceleration, as there are no disruptions to the existing CPU features and it still allows its customers to use the existing standard tools with which they are already familiar.

For now, custom instructions will only be available in the Arm Cortex-M33 CPUs, starting in the first half of 2020. By default, it’ll also be available for all future Cortex-M processors. There are no additional costs or new licenses to buy for Arm’s customers.

Ensergueix noted that as we’re moving to a world with more and more connected devices, more of Arm’s customers will want to optimize their processors for their often very specific use cases — and often they’ll want to do so because by creating custom instructions, they can get a bit more battery life out of these devices, for example.

Arm has already lined up a number of partners to support Custom Instructions, including IAR Systems, NXP, Silicon Labs and STMicroelectronics.

“Arm’s new Custom Instructions capabilities allow silicon suppliers like NXP to offer their customers a new degree of application-specific instruction optimizations to improve performance, power dissipation and static code size for new and emerging embedded applications,” writes NXP’s Geoff Lees, SVP and GM of Microcontrollers. “Additionally, all these improvements are enabled within the extensive Cortex-M ecosystem, so customers’ existing software investments are maximized.”

In related embedded news, Arm also today announced that it is setting up a governance model for Mbed OS, its open-source operating system for embedded devices that run an Arm Cortex-M chip. Mbed OS has always been open source, but the Mbed OS Partner Governance model will allow Arm’s Mbed silicon partners to have more of a say in how the OS is developed, through tools like a monthly Product Working Group meeting. Partners like Analog Devices, Cypress, Nuvoton, NXP, Renesas, Realtek, Samsung and u-blox are already participating in this group.

Oct
08
2019
--

Satya Nadella looks to the future with edge computing

Speaking today at the Microsoft Government Leaders Summit in Washington, DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.”

While Amazon, Google and other competitors may have something to say about that, marketing hype aside, many companies are still in the midst of transitioning to the cloud. Nadella says the future of computing could actually be at the edge, where computing is done locally before data is then transferred to the cloud for AI and machine learning purposes. What goes around, comes around.

But as Nadella sees it, this is not going to be about either edge or cloud. It’s going to be the two technologies working in tandem. “Now, all this is being driven by this new tech paradigm that we describe as the intelligent cloud and the intelligent edge,” he said today.

He said that to truly understand the impact the edge is going to have on computing, you have to look at research, which predicts there will be 50 billion connected devices in the world by 2030, a number even he finds astonishing. “I mean this is pretty stunning. We think about a billion Windows machines or a couple of billion smartphones. This is 50 billion [devices], and that’s the scope,” he said.

The key here is that these 50 billion devices, whether you call them edge devices or the Internet of Things, will be generating tons of data. That means you will have to develop entirely new ways of thinking about how all this flows together. “The capacity at the edge, that ubiquity is going to be transformative in how we think about computation in any business process of ours,” he said. As we generate ever-increasing amounts of data, whether for public sector use cases or any business need, that data is going to be the fuel for artificial intelligence, and he sees the sheer amount of it driving new AI use cases.

“Of course when you have that rich computational fabric, one of the things that you can do is create this new asset, which is data and AI. There is not going to be a single application, a single experience that you are going to build, that is not going to be driven by AI, and that means you have to really have the ability to reason over large amounts of data to create that AI,” he said.

Nadella would be more than happy to have his audience take care of all that using Microsoft products, whether Azure compute, database, AI tools or edge computers like the Data Box Edge it introduced in 2018. While Nadella is probably right about the future of computing, all of this could apply to any cloud, not just Microsoft.

As computing shifts to the edge, it’s going to have a profound impact on the way we think about technology in general, but it’s probably not going to involve being tied to a single vendor, regardless of how comprehensive their offerings may be.

May
02
2019
--

Microsoft brings Plug and Play to IoT

Microsoft today announced that it wants to bring the ease of use of Plug and Play, which today allows you to plug virtually any peripheral into a Windows PC without having to worry about drivers, to IoT devices. Typically, getting an IoT device connected and up and running takes some work, even with modern deployment tools. The promise of IoT Plug and Play is that it will greatly simplify this process and do away with the hardware and software configuration steps that are still needed today.

As Azure corporate vice president Julia White writes in today’s announcement, “one of the biggest challenges in building IoT solutions is to connect millions of IoT devices to the cloud due to heterogeneous nature of devices today – such as different form factors, processing capabilities, operational system, memory and capabilities.” This, Microsoft argues, is holding back IoT adoption.

IoT Plug and Play, on the other hand, offers developers an open modeling language that will allow them to connect these devices to the cloud without having to write any code.
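The modeling language in question evolved into what Microsoft now calls the Digital Twins Definition Language (DTDL). As a rough illustration (a simplified, hypothetical model, not a complete capability model), a device's interface is declared as JSON rather than code, and the cloud can then understand the device's telemetry and properties without custom integration work:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    {
      "@type": "Telemetry",
      "name": "temperature",
      "schema": "double"
    },
    {
      "@type": "Property",
      "name": "targetTemperature",
      "schema": "double",
      "writable": true
    }
  ]
}
```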

Microsoft can’t do this alone, though, since it needs the support of the hardware and software manufacturers in its IoT ecosystem, too. The company has already signed up a number of partners, including Askey, Brainium, Compal, Kyocera, STMicroelectronics, Thundercomm and VIA Technologies. The company says that dozens of devices are already Plug and Play-ready and potential users can find them in the Azure IoT Device Catalog.

Apr
24
2019
--

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.
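In current Docker releases this workflow is exposed through the buildx plugin, which leans on QEMU to emulate the target architecture during the build. A minimal sketch (the builder and image names are placeholders):

```shell
# Create and select a builder that can target multiple platforms.
docker buildx create --name multiarch --use

# Build for 64-bit Arm, 32-bit Arm and x86 from an x86 desktop,
# pushing a single multi-architecture manifest to a registry.
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t example/myapp:latest \
  --push .
```

The Dockerfile itself is unchanged; the `--platform` flag is the only addition over a normal build, which is the "no new commands to learn" promise described above.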

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Strategic Alliances David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to remove the dependence of applications from the infrastructure on which they run. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools — at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.
