Jul 13, 2018

Chad Rigetti to talk quantum computing at Disrupt SF

Even for the long-standing giants of the tech industry, quantum computing is one of the most complicated subjects to tackle. So how does a five-year-old startup compete?

Chad Rigetti, the namesake founder of Rigetti Computing, will join us at Disrupt SF 2018 to help us break it all down.

Rigetti’s approach to quantum computing is twofold: on one front, the company is working on the design and fabrication of its own quantum chips; on the other, it is opening up access to its early quantum computers for researchers and developers by way of its cloud computing platform, Forest.
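For a flavor of what that access looks like in practice, here is a minimal sketch of a program run against Forest’s simulator using Rigetti’s open source pyQuil library. This is a sketch only; it assumes pyQuil 2.x with the local quantum virtual machine and compiler servers running.

```python
# A minimal Bell-state program via pyQuil (assumes pyQuil 2.x and that the
# local simulator servers are running: `qvm -S` and `quilc -S`).
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT

program = Program(H(0), CNOT(0, 1))   # entangle qubits 0 and 1
qc = get_qc("2q-qvm")                 # a two-qubit quantum virtual machine
results = qc.run_and_measure(program, trials=10)
print(results)                        # measurement arrays keyed by qubit index
```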

Rigetti Computing has raised nearly $70 million to date, according to Crunchbase, with investment from some of the biggest names around. Meanwhile, labs around the country are already using Forest to explore the possibilities ahead.

What’s the current state of quantum computing? How do we separate hype from reality? Which fields might quantum computing impact first — and how can those interested in quantum technology make an impact? We’ll talk all this and more at Disrupt SF 2018.

Passes to Disrupt SF are available at the Early Bird rate until July 25 here.

Jul 12, 2018

Google’s Apigee teams up with Informatica to extend its API ecosystem

Google acquired API management service Apigee back in 2016, but it’s been pretty quiet around the service in recent years. Today, however, Apigee announced a number of smaller updates that introduce a few new integrations with the Google Cloud platform, as well as a major new partnership with cloud data management and integration firm Informatica that essentially makes Informatica the preferred integration partner for Google Cloud.

Like most partnerships in this space, the deal with Informatica involves some co-selling and marketing agreements, but that alone wouldn’t be all that interesting. What makes this deal stand out is that Google is actually baking some of Informatica’s tools right into the Google Cloud dashboard. This will allow Apigee users to tap Informatica’s wide range of integrations with third-party enterprise applications, while Informatica users will be able to publish their APIs through Apigee and have that service manage them.
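As a rough illustration of the “publish and manage” side, here is a hedged sketch of listing the API proxies an organization manages in Apigee, based on Apigee Edge’s classic management API and its basic-auth scheme; the organization name and credentials below are hypothetical placeholders.

```python
# Hypothetical example: list API proxies managed in Apigee Edge.
# The org name and credentials are placeholders, not real values.
import requests

ORG = "example-org"  # hypothetical Apigee organization
resp = requests.get(
    f"https://api.enterprise.apigee.com/v1/organizations/{ORG}/apis",
    auth=("admin@example.com", "password"),  # Edge management API uses basic auth
)
resp.raise_for_status()
print(resp.json())  # a JSON list of API proxy names
```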

Some of Google’s competitors, including Microsoft, have built their own integration services. As Google Cloud director of product management Ed Anuff told me, that wasn’t really on Google’s road map. “It takes a lot of know-how to build a rich catalog of connectors,” he said. “You could go and build an integration platform but if you don’t have that, you can’t address your customer’s needs.” Instead, Google went looking for a partner who already has this large catalog and plenty of credibility in the enterprise space.

Similarly, Informatica’s senior VP and GM for big data, cloud and data integration Ronen Schwartz noted that many of his company’s customers are now looking to move into the cloud and this move will make it easier for Informatica’s customers to bring their services into Apigee and open them up for external applications. “With this partnership, we are bringing the best of breed of both worlds to our customers,” he said. “And we are doing it now and we are making it available in an integrated, optimized way.”

Jul 12, 2018

GitHub Enterprise and Business Cloud users now get access to public repos, too

GitHub, the code hosting service Microsoft recently acquired, is launching a couple of new features for its business users today that’ll make it easier for them to access public repositories on the service.

Traditionally, users on the hosted Business Cloud and self-hosted Enterprise plans were not able to directly access the millions of public open-source repositories on the service. Now, with the service’s latest release, that’s changing, and business users will be able to reach beyond their firewalls to engage and collaborate with the rest of the GitHub community directly.

With this, GitHub now also offers its business and enterprise users a new unified search feature that lets them search across both their internal repositories and public open-source ones.
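GitHub’s public search API gives a flavor of what that kind of cross-repository search looks like. A minimal sketch against the public api.github.com endpoint (unauthenticated, so heavily rate-limited) rather than an Enterprise instance:

```python
# Search public repositories via GitHub's REST search API.
import requests

resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "kubernetes in:name", "sort": "stars"},
)
resp.raise_for_status()
for repo in resp.json()["items"][:5]:
    print(repo["full_name"], repo["stargazers_count"])
```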

Other new features in this latest Enterprise release include the ability to ignore whitespace when reviewing changes, the ability to require multiple reviewers for code changes, automated support tickets and more. You can find a full list of all updates here.

Microsoft’s acquisition of GitHub wasn’t fully unexpected (and it’s worth noting that the acquisition hasn’t closed yet), but it is still controversial, given that Microsoft and the open-source community, which heavily relies on GitHub, haven’t always seen eye-to-eye in the past. I’m personally not too worried about that, and it feels like the dust has settled at this point and that people are waiting to see what Microsoft will do with the service.

Jul 11, 2018

Box opens up about the company’s approach to innovation

Most of us never really stop to think about how the software and services we use on a daily basis are created. We just know they’re there when we want access, and they work most of the time. But companies don’t just appear and expand randomly; they need a well-defined process and methodology to keep innovating, or they won’t be around very long.

Box has been around since 2005 and has grown into a company with a run rate of over $500 million. Along the way, it transformed from a consumer focus to one concentrating on enterprise content management, and expanded the platform from one that mostly offered online storage and file sharing to one that offers a range of content management services in the cloud.

I recently sat down with Chief Product and Chief Strategy Officer Jeetu Patel. A big part of Patel’s job is to keep the company’s development teams on track and focused on new features that could enhance the Box platform, attract new customers and increase revenue.

Fundamental beliefs

Before you solve a problem, you need the right group of people working on it. Patel says the company follows a few primary principles to guide product and team development. It starts with rules and rubrics for developing innovative solutions, which help teams focus on where to invest their resources in terms of money and people.

Graphic: Box

When it comes to innovating, you have to structure your teams in such a way that you can react to changing requirements in the marketplace, and in today’s tech world, being agile is more important than ever. “You have to configure your innovation engine from a team, motivation and talent recruiting perspective so that you’ve actually got the right structure in place to provide enough speed and autonomy to the team so that they’re unencumbered and able to execute quickly,” Patel explained.

Finally, you need to have a good grip on the customer and the market. That involves constantly assessing market requirements and looking at building products and features that respond to a need, yet that aren’t dated when you launch them.

Start with the customer

Patel says that when all is said and done, the company wants to help its customers by filling a hole in the product set. From a central company philosophy perspective, it begins with the customer. That might sound like pandering on its face, but he says if you keep that goal in mind it really acts as an anchor to the entire process.

“From a core philosophy that we keep in mind, you have to actually make sure that you get everyone really oriented in the company to say you always start from a customer problem and work backwards. But picking the right problem to solve is 90 percent of the battle,” he said.

Solve hard problems

Patel strongly believes that the quality of the problem is directly proportional to the outcome of the project. Part of that is solving a real customer pain point, but it’s also about challenging your engineers. You can successfully solve low-hanging-fruit problems most of the time, but you won’t necessarily attract the highest-quality engineering talent that way.

“If you think about really hard problems that have a lot of mission and purpose around them, you can actually attract the best team,” he said.

That means looking for a problem where you can add a lot of value. “The problem that you choose to spend your time solving should be one where you are uniquely positioned to create a 10x value proposition compared to what might exist in the market today,” Patel explained. If it doesn’t reach that threshold, he believes there’s no motivation for the customer to change, and it’s not really worth going after.

Build small teams

Once you identify that big problem, you need to form a team to start attacking it. Patel recommends keeping teams manageable, and he believes in the Amazon approach of the two-pizza team: a group of 8-10 people who can operate on, well, two pizzas. If a team gets too large, he says, it becomes difficult to coordinate and too much time gets wasted on logistics instead of innovation.

“Having very defined local missions, having [small] teams carrying out those local missions, and making sure that those team sizes don’t get too large so that they can stay very agile, is a pretty important kind of core operating principle of how we build products,” Patel said.

That becomes even more important as the company scales. The trick is to configure the organization in such a way so that as you grow, you end up with many smaller teams instead of a few bigger ones, and in that way you can better pinpoint team missions.

Developing a Box product

Patel sees four key areas when it comes to finally building that new product at Box. First of all, it needs to be enterprise grade and all that entails — secure, reliable, scalable, fault tolerant and so forth.

That’s Job One, but what generally has differentiated Box in the content management market has been its ease of use. He sees that as removing as much friction as you can from a software-driven business process.

Next, you try to make those processes intelligent and that means understanding the purpose of the content. Patel says that could involve having better search, better surfacing of content and automated trigger events that move that content through a workflow inside a company.

Finally, they look at how it fits inside a workflow, because content doesn’t live in a vacuum inside an enterprise. It generally has a defined purpose, and the content management system should make it easy to integrate that content into the broader context of that purpose.

Measure twice

Once you have those small teams set up with their missions in place, you have to establish rules and metrics that allow them to work quickly but still meet a set of milestones that prove they are on a worthwhile project for the company. You don’t want to throw good money after bad.

For Patel and Box, that involves a set of metrics that tell you at all times whether the team is succeeding or failing. That seems simple enough, but it takes a lot of work from a management perspective to define missions and goals and then track them on a regular basis.

He says that involves three elements: “There are three things that we think about including what’s the plan for what you’re going to build, what’s the strategy around what you’re going to build, and then what’s the level of coordination that each one of us have on whether or not what we’re building is, in fact, going to be successful.”

In the end, this is an iterative process, one that keeps evolving as the company grows and develops and as they learn from each project and each team. “We’re constantly looking at the processes and saying, what are the things that need to be adjusted,” Patel said.

Jul 8, 2018

Serverless computing could unleash a new startup ecosystem

While serverless computing isn’t new, it has reached an interesting place in its development. As developers begin to see the value of serverless architecture, a whole new startup ecosystem could begin to develop around it.

Serverless isn’t exactly serverless at all, but it does enable a developer to set event triggers and leave the infrastructure requirements completely to the cloud provider. The vendor delivers exactly the right amount of compute, storage and memory and the developer doesn’t even have to think about it (or code for it).

That sounds ideal on its face, but as with every new technology, each solution brings a set of new problems, and those issues tend to represent openings for enterprising entrepreneurs. That could mean big opportunities in the coming years for companies building the security, tooling, libraries, APIs, monitoring and the whole host of other components serverless will likely require as it evolves.

Building layers of abstraction

In the beginning we had physical servers, but there was lots of wasted capacity. That led to the development of virtual machines, which enabled IT to take a single physical server and divide it into multiple virtual ones. That was a huge breakthrough for its time; it helped launch successful companies like VMware and paved the way for cloud computing. But it was only the beginning.

Then came containers, which really began to take off with the development of Docker and Kubernetes, two open source platforms. Containers enable the developer to break down a large monolithic program into discrete pieces, which helps it run more efficiently. More recently, we’ve seen the rise of serverless or event-driven computing. In this case, the whole idea of infrastructure itself is being abstracted away.

Photo: shutterjack/Getty Images

While it’s not truly serverless, since you need underlying compute, storage and memory to run a program, it is removing the need for developers to worry about servers. Today, so much coding goes into connecting the program’s components to run on whatever hardware (virtual or otherwise) you have designated. With serverless, the cloud vendor handles all of that for the developer.

All of the major vendors have launched serverless products, with AWS Lambda, Google Cloud Functions and Microsoft Azure Functions all offering a similar approach. But serverless has the potential to be more than just another way to code. It could eventually shift the way we think about programming and its relation to the underlying infrastructure altogether.

It’s important to understand that we aren’t quite there yet, and a lot of work still needs to happen for serverless to really take hold, but it has enormous potential to be a startup feeder system in coming years and it’s certainly caught the attention of investors looking for the next big thing.

Removing another barrier to entry

Tim Wagner, general manager for AWS Lambda, says the primary advantage of serverless computing is that it allows developers to strip away all of the challenges associated with managing servers. “So there is no provisioning, deploying, patching or monitoring — all those details at the server and operating system level go away,” he explained.

He says this allows developers to reduce the entire coding process to the function level. The programmer defines the event or function and the cloud provider figures out the exact amount of underlying infrastructure required to run it. Mind you, this can be as little as a single line of code.
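To make that concrete, here is a minimal sketch of a complete AWS Lambda function in Python. Everything in the handler is up to the developer; everything underneath it (servers, scaling, patching) is the cloud provider’s problem. The payload field is an invented example.

```python
# handler.py — a complete Lambda function; AWS provisions the compute.
def handler(event, context):
    # 'event' carries the trigger payload (an API Gateway request,
    # an S3 notification, etc.); 'context' holds runtime metadata.
    name = event.get("name", "world")  # hypothetical input field
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```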

Photo: Blocks of servers in a cloud data center. Colin Anderson/Getty Images

Sarah Guo, a partner at Greylock Partners who invests in early-stage companies, sees serverless computing as offering a way for developers to concentrate on just the code by leaving infrastructure management to the provider. “If you look at one of the amazing things cloud computing platforms have done, it has just taken a lot of the expertise and cost that you need to build a scalable service and shifted it to [the cloud provider],” she said. Serverless takes that concept and shifts it even further by allowing developers to concentrate solely on the user’s needs without having to worry about what it takes to actually run the program.

Survey says…

Cloud computing company DigitalOcean recently surveyed over 4,800 IT pros, of whom 55 percent identified themselves as developers. When asked about serverless, nearly half of respondents reported they didn’t fully understand the concept. On the other hand, they certainly recognized the importance of learning more about it, with 81 percent reporting that they plan to do further research this year.

When asked if they had deployed a serverless application in the last year, not surprisingly, about two-thirds reported they hadn’t. This was consistent across regions, with India reporting a slightly higher rate of serverless adoption.

Graph: DigitalOcean

Of those using serverless, DigitalOcean found that AWS was by far the most popular service, with 58 percent of respondents reporting Lambda was their chosen tool, followed by Google Cloud Functions at 23 percent and Microsoft Azure Functions further back at 10 percent.

Interestingly enough, one of the reasons respondents reported a reluctance to adopt serverless was a lack of tooling. “One of the biggest challenges developers report when it comes to serverless is monitoring and debugging,” the report stated. That lack of visibility, however, could also represent an opening for startups.

Creating ecosystems

The thing about abstraction is that it simplifies operations on one level, but it also creates a new set of requirements, some expected and some that might surprise as a new way of programming scales. A lack of tooling could potentially hinder serverless development, but more often than not, when necessity calls, it stimulates the creation of a new set of instrumentation.

This is certainly something that Guo recognizes as an investor. “I think there is a lot of promise as we improve a bunch of things around making it easier for developers to access serverless, while expanding the use cases, and concentrating on issues like visibility and security, which are all [issues] when you give more and more control of [the infrastructure] to someone else,” she said.

Photo: shylendrahoode/Getty Images

Ping Li, general partner at Accel, also sees an opportunity here for investors. “I think the reality is that anytime there’s a kind of shift from a developer application perspective, there’s an opportunity to create a new set of tools or products that help you enable those platforms,” he said.

Li says the promise is there, but it won’t happen right away because there needs to be a critical mass of developers using serverless methodologies first. “I would say that we are definitely interested in serverless in that we believe it’s going to be a big part of how applications will be built in the future, but it’s still in its early stages,” Li said.

S. Somasegar, managing director at Madrona Venture Group, says that even as serverless removes complexity, it creates a new set of issues, which in turn creates openings for startups. “It is complicated because we are trying to create this abstraction layer over the underlying infrastructure and telling the developers that you don’t need to worry about it. But that means there are a lot of tools that have to exist in place — whether it is development tools, deployment tools, debugging tools or monitoring tools — that enable the developer to know certain things are happening when you’re operating in a serverless environment.”

Beyond tooling

Having that visibility in a serverless world is a real challenge, but it is not the only opening here. There are also opportunities for trigger or function libraries, or for companies akin to Twilio or Stripe, which offer easy API access to capabilities like communications or payment gateways without requiring particular expertise. There could be analogous needs in the serverless world.

Companies are beginning to take advantage of serverless computing to find new ways of solving problems. Over time, we should begin to see more developer momentum toward this approach and more tools develop.

While it is early days, as Guo says, it’s not as though developers love running infrastructure; it’s just been a necessity. “I think [it] will be very interesting. I just think we’re still very early in the ecosystem,” she said. Yet the potential is certainly there: if the pieces fall into place and programmer momentum builds around this way of developing applications, serverless could really take off, and a startup ecosystem could follow.

Jun 19, 2018

Riskified prevents fraud on your favorite e-commerce site

Meet Riskified, an Israel-based startup that has raised $64 million in total to fight online fraud. The company has built a service that helps you reject transactions from stolen credit cards and approve more transactions from legitimate clients.

If you live in the U.S., chances are you know someone who noticed a fraudulent charge for an expensive purchase with their credit card — it’s still unclear why most restaurants and bars in the U.S. take your card away instead of bringing the card reader to you.

Online purchases, also known as card-not-present transactions, represent the primary source of fraudulent transactions. That’s why e-commerce websites need to optimize their payment system to detect fraudulent transactions and approve all the others.

Riskified uses machine learning to recognize good orders and improve your bottom line. In fact, Riskified is so confident that it guarantees that you’ll never have to pay chargebacks. As long as a transaction is approved by the product, the startup offers chargeback protection. If Riskified made the wrong call, the company reimburses fraudulent chargebacks.
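Riskified hasn’t published its models, but a toy sketch shows the general shape of the problem: train a classifier on labeled historical orders, then score new ones. The features, numbers and labels below are invented purely for illustration.

```python
# Toy fraud classifier — purely illustrative, not Riskified's actual system.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per order: [amount, account age in days,
# shipping/billing address mismatch flag, orders in the past 24 hours]
X = [
    [35.0, 400, 0, 1],
    [900.0, 2, 1, 6],
    [120.0, 90, 0, 2],
    [1500.0, 1, 1, 8],
]
y = [0, 1, 0, 1]  # 0 = legitimate, 1 = fraudulent (invented labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba([[60.0, 30, 0, 1]]))  # [P(legit), P(fraud)] for a new order
```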

On the other side of the equation, many e-commerce websites leave money on the table by rejecting legitimate transactions, known as false declines. This is hard to quantify, as some customers end up not ordering anything. Riskified should help you on this front too.

The startup targets big customers — Macy’s, Dyson, Prada, Vestiaire Collective and GOAT are all using it. You can integrate Riskified with popular e-commerce payment systems and solutions, such as Shopify, Magento and Stripe. Riskified also has an API for more sophisticated implementations.

Jun 12, 2018

Sumo Logic brings data analysis to containers

Sumo Logic has long held the goal of helping customers understand their data wherever it lives. As we move into the era of containers, that goal becomes more challenging, because containers are by their nature ephemeral. The company announced a product enhancement today designed to instrument containerized applications in spite of that.

They are debuting these new features at DockerCon, Docker’s customer conference taking place this week in San Francisco.

Sumo’s CEO Ramin Sayer says containers have begun to take hold over the last 12-18 months with Docker and Kubernetes emerging as tools of choice. Given their popularity, Sumo wants to be able to work with them. “[Docker and Kubernetes] are by far the most standard things that have developed in any new shop, or any existing shop that wants to build a brand new modern app or wants to lift and shift an app from on prem [to the cloud], or have the ability to migrate workloads from Vendor A platform to Vendor B,” he said.

He’s not wrong of course. Containers and Kubernetes have been taking off in a big way over the last 18 months and developers and operations alike have struggled to instrument these apps to understand how they behave.

“But as that standardization of adoption of that technology has come about, it makes it easier for us to understand how to instrument, collect, analyze, and more importantly, start to provide industry benchmarks,” Sayer explained.

They do this by avoiding the use of agents. Regardless of how you run your application, whether in a VM or a container, Sumo is able to capture the data and give you feedback you might otherwise have trouble retrieving.

Screen shot: Sumo Logic (cropped)

The company has built in native support for Kubernetes and Amazon Elastic Container Service for Kubernetes (Amazon EKS). It also supports the open source tool Prometheus favored by Kubernetes users to extract metrics and metadata. The goal of the Sumo tool is to help customers fix issues faster and reduce downtime.
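For a sense of what Prometheus-style instrumentation looks like from the application side, here is a minimal sketch using the official prometheus_client Python library; the metric name and port are arbitrary choices. A scraper (Prometheus itself, or a tool like Sumo’s) can then collect from the /metrics endpoint.

```python
# Expose a Prometheus-format metrics endpoint from a containerized app.
import time
from prometheus_client import Counter, start_http_server

REQUESTS = Counter("app_requests_total", "Requests handled by this container")

start_http_server(8000)  # serves metrics at http://localhost:8000/metrics
while True:
    REQUESTS.inc()       # increment on each unit of work
    time.sleep(1)
```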

As they work with this technology, they can begin to understand norms and pass that information onto customers. “We can guide them and give them best practices and tips, not just on what they’ve done, but how they compare to other users on Sumo,” he said.

Sumo Logic was founded in 2010 and has raised $230 million, according to data on Crunchbase. Its most recent round was a $70 million Series F led by Sapphire Ventures last June.

Jun 7, 2018

Google Cloud announces the Beta of single tenant instances

One of the characteristics of cloud computing is that when you launch a virtual machine, it gets distributed wherever it makes the most sense for the cloud provider. That usually means sharing servers with other customers in what is known as a multi-tenant environment. But what about times when you want a physical server dedicated just to you?

To help meet those kinds of demands, Google announced the Beta of Google Compute Engine Sole-tenant nodes, which have been designed for use cases such as regulatory or compliance requirements, where you need full control of the underlying physical machine and sharing is not desirable.

“Normally, VM instances run on physical hosts that may be shared by many customers. With sole-tenant nodes, you have the host all to yourself,” Google wrote in a blog post announcing the new offering.

Diagram: Google

Google has tried to be as flexible as possible, letting customers choose exactly what configuration they want in terms of CPU and memory. Customers can let Google pick the dedicated server that’s best at any particular moment, or they can manually select the server if they want that level of control. In either case, they will be assigned a dedicated machine.

If you want to play with this, there is a free tier and then various pricing tiers for a variety of computing requirements. Regardless of your choice, you will be charged on a per-second basis with a one-minute minimum charge, according to Google.

Since this feature is still in Beta, it’s worth noting that it is not covered under any SLA. Microsoft and Amazon have similar offerings.

Jun 6, 2018

Four years after its release, Kubernetes has come a long way

On June 6, 2014, Kubernetes was released for the first time. At the time, nobody could have predicted that four years later the project would become a de facto standard for container orchestration, or that the biggest tech companies in the world would be backing it. That would come later.

If you think back to June 2014, containerization was just beginning to take off thanks to Docker, which was popularizing the concept with developers, but it was still so early that there was no standard way to manage those containers.

Google had been using containers as a way to deliver applications for years and ran a tool called Borg to handle orchestration. It’s called an orchestrator because much like a conductor of an orchestra, it decides when a container is launched and when it shuts down once it’s completed its job.
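Four years on, asking an orchestrator what it is running takes just a few lines of code. A minimal sketch with the official Kubernetes Python client, assuming a cluster reachable through a local ~/.kube/config:

```python
# List every pod the orchestrator is currently managing.
from kubernetes import client, config

config.load_kube_config()  # assumes credentials in ~/.kube/config
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```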

At the time, two Google engineers, Craig McLuckie and Joe Beda, who would later go on to start Heptio, were looking at developing an orchestration tool like Borg for companies that might not have the depth of engineering talent of Google to make it work. They wanted to spread this idea of how they develop distributed applications to other developers.

Hello world

Before that first version hit the streets, what would become Kubernetes developed out of a need for an orchestration layer that Beda and McLuckie had been considering for a long time. They were both involved in bringing Google Compute Engine, Google’s Infrastructure as a Service offering, to market, but they felt like there was something missing in the tooling that would fill in the gaps between infrastructure and platform service offerings.

“We had long thought about trying to find a way to bring a sort of a more progressive orchestrated way of running applications in production. Just based on our own experiences with Google Compute Engine, we got to see firsthand some of the challenges that the enterprise faced in moving workloads to the cloud,” McLuckie explained.

He said that they also understood some of the limitations associated with virtual machine-based workloads, and they were thinking about tooling to help with all of that. “And so we came up with the idea to start a new project, which ultimately became Kubernetes.”

Let’s open source it

When Google began developing Kubernetes in March 2014, it wanted nothing less than to bring container orchestration to the masses. It was a big goal and McLuckie, Beda and teammate Brendan Burns believed the only way to get there was to open source the technology and build a community around it. As it turns out, they were spot on with that assessment, but couldn’t have been 100 percent certain at the time. Nobody could have.

Photo: Cloud Native Computing Foundation

“If you look at the history, we made the decision to open source Kubernetes and make it a community-oriented project much sooner than conventional wisdom would dictate and focus on really building a community in an open and engaged fashion. And that really paid dividends as Kubernetes has accelerated and effectively become the standard for container orchestration,” McLuckie said.

The next thing they did was to create the Cloud Native Computing Foundation (CNCF) as an umbrella organization for the project. If you think about it, this project could have gone in several directions, as current CNCF director Dan Kohn described in a recent interview.

Going cloud native

Kohn said Kubernetes was unique in a couple of ways. First of all, it was based on existing technology developed over many years at Google. “Even though Kubernetes code was new, the concepts and engineering and know-how behind it was based on 15 years at Google building Borg (and a Borg replacement called Omega that failed),” Kohn said. The other thing was that Kubernetes was designed from the beginning to be open sourced.

Photo: Swapnil Bhartiya on Flickr. Used under CC BY-SA 2.0 license

He pointed out that Google could have gone in a few directions with Kubernetes. It could have created a commercial product and sold it through Google Cloud. It could have open sourced it but kept a strong central lead, as it did with Go. It could have gone to the Linux Foundation and said it wanted to create a stand-alone Kubernetes foundation. But it didn’t do any of these things.

McLuckie says they decided to do something entirely different and place it under the auspices of the Linux Foundation, but not as a Kubernetes project. Instead, they wanted to create a new framework for cloud native computing itself, and the CNCF was born. “The CNCF is a really important staging ground, not just for Kubernetes, but for the technologies that needed to come together to really complete the narrative, to make Kubernetes a much more comprehensive framework,” McLuckie explained.

Getting everyone going in the same direction

Over the last few years, we have watched as Kubernetes has grown into a container orchestration standard. Last summer, in quick succession, a slew of major enterprise players joined the CNCF: AWS, Oracle, Microsoft, VMware and Pivotal. They came together with Red Hat, Intel, IBM, Cisco and others who were already members.

Cloud Native Computing Foundation Platinum members

Each of these players no doubt wanted to control the orchestration layer, but they saw Kubernetes gaining momentum so rapidly that they had little choice but to go along. Kohn jokes that having all these big-name players on board is like herding cats, but bringing them in has been the goal all along. He said it just happened much faster than he thought it would.

David Aronchick, who runs the open source Kubeflow Kubernetes machine learning project at Google, was working on Kubernetes in the early days and remains shocked by how quickly it has grown. “I couldn’t have predicted it would be like this. I joined in January 2015 and took on project management for Google Kubernetes. I was stunned at the pent-up demand for this kind of thing,” he told TechCrunch in a recent interview.

As it has grown, it has become readily apparent that McLuckie was right about building that cloud native framework instead of a stand-alone Kubernetes foundation. Today there are dozens of adjacent projects and the organization is thriving.

Nobody is more blown away by this than McLuckie himself, who says seeing Kubernetes hit these various milestones since its initial release has been amazing for him and his team to watch. “It’s just been a series of these wonderful kind of moments as Kubernetes has gained a head of steam, and it’s been so much fun to see the community really rally around it.”

Jun 5, 2018

Microsoft program provides a decade of updates for Windows IoT devices

If you have an essential Internet of Things device running Windows 10 IoT Core, you don’t want to be worrying about security and OS patches over a period of years. Microsoft wants to help customers running these kinds of devices with a new program that guarantees 10 years of updates.

The idea is that as third-party partners build applications on top of Windows 10 IoT Core Services, these OEMs, who create the apps, can pay Microsoft to guarantee updates for those devices for a decade. This can help assure customers that they won’t be vulnerable to attacks on these critical systems through unpatched applications.

The service does more than provide updates, though. It also gives OEMs the ability to manage the updates and assess device health.

“The Windows IoT Core service offering is enabling partners to commercialize secure IoT devices backed by industry-leading support. And so device makers will have the ability to manage updates for the OS, for the apps and for the settings for OEM-specific files,” Dinesh Narayanan, director of business development for emerging markets, explained.

It gives OEMs creating Windows-powered applications on machines like healthcare devices or ATMs the ability to manage them over an extended period. That’s particularly important, as these devices tend to have a longer usage period than, say, a PC or tablet. “We want to extend support and commit to that support over the long haul for these devices that have a longer life cycle,” Narayanan said.

Beyond the longevity, the service also provides customers with access to the Device Update Center, where they can control and customize how and when the devices get updated. It also includes another level of security called Device Health Attestation, which allows OEMs to evaluate the trustworthiness of devices before updating them, using a third-party service.

All of this is designed to give Microsoft a foothold in the growing IoT space and to provide an operating system for these devices as they proliferate. While predictions vary dramatically, Gartner has predicted that at least 20 billion connected devices will be online in 2020.

While not all of these devices will be powered by Windows or require advanced management capabilities, those that do can be assured, if their vendor uses this program, that the devices can be managed and kept up to date. And when it comes to the Internet of Things, chances are that’s going to be critical.
