Jan
09
2019
--

Baidu Cloud launches its open-source edge computing platform

At CES, the Chinese tech giant Baidu today announced OpenEdge, its open-source edge computing platform. At its core, OpenEdge is the local package component of Baidu’s existing Intelligent Edge (BIE) commercial offering and obviously plays well with that service’s components for managing edge nodes and apps.

Given that this is clearly a developer announcement, it's not obvious why Baidu chose CES as the venue for this release, but there can be no doubt that China's major tech firms have become quite comfortable with open source. Companies like Baidu, Alibaba and Tencent are members of the Linux Foundation and its growing stable of projects, for example, and virtually every major open-source organization now looks to China as its growth market. It's no surprise, then, that we're also seeing a wider range of Chinese companies open-sourcing their own projects.

“Edge computing is a critical component of Baidu’s ABC (AI, Big Data and Cloud Computing) strategy,” says Baidu VP and GM of Baidu Cloud Watson Yin. “By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage and ultimately brings real-time and immersive experiences to end users. And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications.”

A company spokesperson tells us that the open-source platform will include features like data collection, message distribution and AI inference, as well as tools for syncing with the cloud.

Baidu also today announced that it has partnered with Intel to launch the BIE-AI-Box and with NXP Semiconductors to launch the BIE-AI-Board. The box is designed for in-vehicle video analysis while the board is small enough for cameras, drones, robots and similar applications.


Nov
26
2018
--

AWS Global Accelerator helps customers manage traffic across regions

Many AWS customers have to run in multiple regions for many reasons, including performance requirements, regulatory issues or failover management. Whatever the reason, AWS tonight announced a new tool called Global Accelerator, designed to help customers route traffic more easily across those regions.

Peter DeSantis, VP of global infrastructure and customer support at AWS, explained during a Monday night event at AWS re:Invent that much of AWS customers' traffic already flows over the company's massive network, and that customers are using AWS Direct Connect to give their applications consistent performance and low network variability as they move between AWS regions. What has been missing, he said, is a way to use the AWS global network to optimize their applications.

“Tonight I’m excited to announce AWS Global Accelerator. AWS Global Accelerator makes it easy for you to improve the performance and availability of your applications by taking advantage of the AWS global network,” he told the AWS re:Invent audience.

Graphic: AWS

“Your customer traffic is routed from your end users to the closest AWS edge location and from there traverses the congestion-free, redundant, highly available AWS global network. In addition to improving performance, AWS Global Accelerator has built-in fault isolation, which instantly reacts to changes in network health or your application's configuration,” DeSantis explained.

In fact, network administrators can route traffic based on defined policies such as health or geographic requirements, and traffic will automatically move to the designated endpoints based on those policies.
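To make the idea concrete, here is a minimal, hypothetical sketch of health- and geography-aware endpoint selection of the kind described above. All names here are illustrative; this is not the AWS Global Accelerator API, just the shape of the policy logic.

```python
# Hypothetical sketch of policy-based traffic routing: prefer healthy
# endpoints in the desired regions, then pick the lowest-latency one.
# Not the AWS Global Accelerator API -- names are illustrative only.

from dataclasses import dataclass

@dataclass
class Endpoint:
    region: str
    healthy: bool
    latency_ms: float

def pick_endpoint(endpoints, preferred_regions):
    """Prefer healthy endpoints in preferred regions, then lowest latency."""
    healthy = [e for e in endpoints if e.healthy]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    preferred = [e for e in healthy if e.region in preferred_regions]
    pool = preferred or healthy  # fall back to any healthy region
    return min(pool, key=lambda e: e.latency_ms)

endpoints = [
    Endpoint("us-east-1", True, 40.0),
    Endpoint("eu-west-1", False, 15.0),   # unhealthy: skipped despite low latency
    Endpoint("ap-northeast-1", True, 90.0),
]
print(pick_endpoint(endpoints, {"eu-west-1", "us-east-1"}).region)  # us-east-1
```

Note how the unhealthy endpoint is excluded before any latency comparison, which mirrors the fault-isolation behavior DeSantis described: a failing region drops out of the pool the moment its health check flips.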

AWS plans to charge customers based on the number of accelerators they create. “An accelerator is the resource you create to direct traffic to optimal endpoints over the AWS global network. Customers will typically set up one accelerator for each application, but more complex applications may require more than one accelerator,” AWS’s Shaun Ray wrote in a blog post announcing the new feature.

AWS Global Accelerator is available today in several regions in the U.S., Europe and Asia.


Jul
18
2018
--

Swim.ai raises $10M to bring real-time analytics to the edge

Once upon a time, it looked like cloud-based services would become the central hub for analyzing all IoT data. But it didn't quite turn out that way, because most IoT solutions simply generate too much data to do this effectively, and the round-trip to the data center doesn't work for applications that have to react in real time. Hence the advent of edge computing, which is spawning its own ecosystem of startups.

Among those is Swim.ai, which today announced that it has raised a $10 million Series B funding round led by Cambridge Innovation Capital, with participation from Silver Creek Ventures and Harris Barton Asset Management. The round also included a strategic investment from Arm, the chip design firm you may still remember as ARM (but don’t write it like that or their PR department will promptly email you). This brings the company’s total funding to about $18 million.

Swim.ai has an interesting take on edge computing. The company’s SWIM EDX product combines both local data processing and analytics with local machine learning. In a traditional approach, the edge devices collect the data, maybe perform some basic operations against the data to bring down the bandwidth cost and then ship it to the cloud where the hard work is done and where, if you are doing machine learning, the models are trained. Swim.ai argues that this doesn’t work for applications that need to respond in real time. Swim.ai, however, performs the model training on the edge device itself by pulling in data from all connected devices. It then builds a digital twin for each one of these devices and uses that to self-train its models based on this data.
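The digital-twin idea above can be sketched in a few lines. This is an illustrative toy, not Swim.ai's actual EDX code: each device gets a small model on the edge node that is updated incrementally as readings arrive, so anomalies can be flagged locally without shipping raw data to the cloud.

```python
# Illustrative digital-twin sketch (not Swim.ai's EDX implementation):
# a per-device running mean/variance, updated online with Welford's
# algorithm, used as a self-trained baseline for anomaly detection.

class DeviceTwin:
    """Tracks the running mean/variance of one device's sensor stream."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the mean

    def observe(self, x):
        # Welford's online update: no history is stored on the device.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x, k=3.0):
        # Flag readings more than k standard deviations from the baseline.
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > k * std

twin = DeviceTwin("pump-7")  # hypothetical device ID
for reading in [10.0, 10.2, 9.9, 10.1, 10.0, 9.8]:
    twin.observe(reading)
print(twin.is_anomalous(25.0))  # True: far outside the learned baseline
```

The key property, and the reason this pattern suits the edge, is that the update is constant-time and constant-memory per reading, so it runs comfortably on the device itself.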

“Demand for the EDX software is rapidly increasing, driven by our software’s unique ability to analyze and reduce data, share new insights instantly peer-to-peer – locally at the ‘edge’ on existing equipment. Efficiently processing edge data and enabling insights to be easily created and delivered with the lowest latency are critical needs for any organization,” said Rusty Cumpston, co-founder and CEO of Swim.ai. “We are thrilled to partner with our new and existing investors who share our vision and look forward to shaping the future of real-time analytics at the edge.”

The company doesn’t disclose any current customers, but it is focusing its efforts on manufacturers, service providers and smart city solutions. Update: Swim.ai did tell us about two customers after we published this story: The City of Palo Alto and Itron.

Swim.ai plans to use its new funding to launch a new R&D center in Cambridge, UK, expand its product development team and tackle new verticals and geographies with an expanded sales and marketing team.

May
07
2018
--

Microsoft brings more AI smarts to the edge

At its Build developer conference this week, Microsoft is putting a lot of emphasis on artificial intelligence and edge computing. To a large degree, that means bringing many of the existing Azure services to machines that sit at the edge, no matter whether that’s a large industrial machine in a warehouse or a remote oil-drilling platform. The service that brings all of this together is Azure IoT Edge, which is getting quite a few updates today. IoT Edge is a collection of tools that brings AI, Azure services and custom apps to IoT devices.

As Microsoft announced today, Azure IoT Edge, which sits on top of Microsoft's IoT Hub service, is now getting support for Microsoft's Cognitive Services APIs, for example, as well as support for Event Grid and Kubernetes. In addition, Microsoft is also open-sourcing the Azure IoT Edge runtime, which will allow developers to customize their edge deployments as needed.

The highlight here is support for Cognitive Services for edge deployments. Right now, this is a bit of a limited service as it actually only supports the Custom Vision service, but over time, the company plans to bring other Cognitive Services to the edge as well. The appeal of this service is pretty obvious, too, as it will allow industrial equipment or even drones to use these machine learning models without internet connectivity so they can take action even when they are offline.

As far as AI goes, Microsoft also today announced that it will bring its new Brainwave deep neural network acceleration platform for real-time AI to the edge.

The company has also teamed up with Qualcomm to launch an AI developer kit for on-device inferencing on the edge. The focus of the first version of this kit will be on camera-based solutions, which doesn’t come as a major surprise given that Qualcomm recently launched its own vision intelligence platform.

IoT Edge is also getting a number of other updates that don’t directly involve machine learning. Kubernetes support is an obvious one and a smart addition, given that it will allow developers to build Kubernetes clusters that can span both the edge and a more centralized cloud.

The appeal of running Event Grid, Microsoft’s event routing service, at the edge is also pretty obvious, given that it’ll allow developers to connect services with far lower latency than if all the data had to run through a remote data center.
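The latency argument is easy to see in miniature. The sketch below is a hypothetical in-process event router, not the Event Grid API: subscribers are invoked the moment an event is published on the edge node, with no cloud round-trip in the path.

```python
# Minimal local event-router sketch (not the Azure Event Grid API):
# topic-based publish/subscribe where handlers run in-process, which is
# why edge-local routing avoids data-center latency entirely.

from collections import defaultdict

class LocalEventRouter:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # Deliver to every subscriber of this topic, synchronously.
        for handler in self._subs[topic]:
            handler(event)

router = LocalEventRouter()
seen = []
router.subscribe("sensor/temperature", seen.append)
router.publish("sensor/temperature", {"value": 21.5})
router.publish("sensor/humidity", {"value": 40})  # no subscriber: dropped
print(seen)  # [{'value': 21.5}]
```

A real edge deployment would add durability and filtering, but the routing decision itself stays local either way, which is the point of the paragraph above.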

Other IoT Edge updates include the planned launch of a marketplace that will allow Microsoft partners and developers to share and monetize their edge modules, as well as a new certification program for hardware manufacturers to ensure that their devices are compatible with Microsoft's platform. IoT Edge, as well as Windows 10 IoT and Azure Machine Learning, will also soon support hardware-accelerated model evaluation on DirectX 12 GPUs, which are available in virtually every modern Windows PC.

Feb
28
2018
--

OpenStack gets support for virtual GPUs and new container features

OpenStack, the open-source infrastructure project that aims to give enterprises the equivalent of AWS for their private clouds, today announced the launch of its 17th release, dubbed “Queens.” After all of those releases, you'd think that there isn't all that much new the OpenStack community could add to the project, but just as the large public clouds keep adding… Read More

Feb
07
2018
--

Intel’s latest chip is designed for computing at the edge

As we develop increasingly sophisticated technologies like self-driving cars and industrial internet-of-things sensors, we are going to need to move computing to the edge. Essentially, this means that instead of sending data to the cloud for processing, it needs to be processed right on the device itself, because even a little bit of latency is too much. Intel announced a new chip… Read More

Nov
29
2017
--

Amazon FreeRTOS is a new operating system for microcontroller-based IoT devices

 Amazon FreeRTOS is, as the name implies, essentially an extension of the FreeRTOS operating system that adds libraries for local and cloud connectivity. Over time, Amazon will also add support for over-the-air updates. Read More

Aug
29
2017
--

OpenStack sees new use cases in edge computing and fast-growing interest in China

 OpenStack, the massive open-source project that aims to bring the power and ease of use of public clouds like AWS and Azure to private data centers, today launched Pike, the sixteenth major version of its software. Read More

Jul
25
2017
--

Iguazio nabs $33M to bring big data edge analytics to IoT, finance and other enterprises

 Big data analytics — where vast troves of information are structured and used to help businesses gain more insights into their operations and customers, to develop new products, and to run more efficiently — are a cornerstone of how many tech-centric enterprises run their businesses today. Now the focus is on building solutions that the rest of the enterprise world can use, even if… Read More
