Nov
04
2019
--

Microsoft launches Endpoint Manager to modernize device management

Ever since the days of Windows NT, Microsoft’s System Center Configuration Manager (better known as ConfigMgr) has allowed companies to manage the increasingly large number of devices they issue to their employees. In 2011, the company launched Intune, its cloud-based endpoint management system for corporate and BYOD devices. These days, most enterprises that use Microsoft’s tools rely on ConfigMgr to manage their PCs and opt for Intune for mobile devices, a complex combination to manage, even for sophisticated IT departments. So today, at its annual Ignite conference for IT professionals, Microsoft is announcing a way forward for these users to modernize their systems with the launch of the unified Microsoft Endpoint Manager.

As Brad Anderson, Microsoft’s corporate VP for Microsoft 365, told me, he takes some of the blame for this. “A lot of this falls on my shoulders because we just allowed everything to get complex. So we’re just simplifying everything,” he said. “So really, at the core, what we think modern management is: it’s management that is driven by cloud intelligence.”

The general idea here, Anderson explained, is that in earlier eras of IT management, Microsoft and its partners didn’t have the tools to collect and analyze all of the signals they received from these management tools. That’s no longer a problem today, and the company can use the telemetry it gets from a company’s PC deployments, for example, to figure out where there are problems.

“One of the things that we’re able to do is learn at cloud scale so we can help organizations improve their end-user experience,” Anderson noted. Common issues with that experience could be extremely long boot times, which slow down and frustrate employees, or problems with the delivery of important security patches. Today, all of this is often still managed with spreadsheets and complex security policies that are administered manually, and Anderson argues that these days, you always have to think about security and management together anyway.

To quantify this user experience, Microsoft is also introducing what it calls the Microsoft Productivity Score, which looks at both how employees are working and using their tools, as well as how their technology is enabling them (or not) to do so. “The Productivity Score is all about helping an organization understand the experience their users are having — and then giving them the insights and the actions on what they can do to improve that,” explained Anderson.

Over the course of the last few months, Microsoft worked with some large customers and took over the management of their Windows and Office deployments, meaning those machines ran nothing but Microsoft 365 agents, alongside a control group that was managed in a more traditional way. The devices with the modern management system saw an 85% reduction in boot time, an 85% reduction in crashes and a doubling of battery life. Unsurprisingly, the employees who used those devices were also far happier.

As far as the device management experience goes, the new Endpoint Manager and the licensing changes that come with it are meant to simplify not just the branding but also the experience. And Microsoft definitely wants people to move to this modern system, so it’s giving everybody who has ConfigMgr licenses Intune licenses, too, so that they can co-manage their PCs with both tools and get access to the cloud-based features of Intune. The Microsoft Endpoint Manager console will show a single view of all devices managed by either product. “It’s all about simplifying — and we’re taking that simplifying deep and broad from a branding, licensing and product perspective,” said Anderson.

Today, ConfigMgr and Intune manage well over 190 million Windows, iOS and Android devices. Yet Microsoft knows that not every company is ready to move to this modern device management system just yet. That’s why it’s making these licensing changes to help get people on board while leaving the existing systems in place, and it’s giving customers an on-ramp, for example, by letting them provision new machines to be cloud-managed.

Jul
03
2019
--

Capital One CTO George Brady will join us at TC Sessions: Enterprise

When you think of old mainframes that sit in the basement of a giant corporation, still doing the same work they did 30 years ago, chances are you’re thinking about a financial institution. It’s financial enterprises, though, that are often leading the charge in bringing new technologies and software development practices to their employees and customers. That’s in part because they are in a period of disruption that forces them to become more nimble. Often, this means leaving behind legacy technology and embracing the cloud.

At TC Sessions: Enterprise, which is happening on September 5 in San Francisco, George Brady, Capital One’s executive VP in charge of technology operations, will talk about the company’s journey from legacy hardware and software to embracing the cloud and open source, all while working in a highly regulated industry. Indeed, Capital One was among the first companies to embrace the Facebook-led Open Compute Project, and it’s a member of the Cloud Native Computing Foundation. It’s this transformation at Capital One that Brady is leading.

At our event, Brady will join a number of other distinguished panelists to talk specifically about his company’s journey to the cloud. Capital One, for example, uses serverless compute on AWS’s Lambda service to power its Credit Offers API, alongside a number of other cloud technologies.
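To make the serverless idea concrete, here is a minimal sketch of what a Lambda-backed API endpoint can look like. The handler shape follows AWS’s standard Python Lambda/API Gateway proxy convention, but the route, field names and offer data are invented for illustration and are not Capital One’s actual API:

```python
import json

def handler(event, context):
    """Hypothetical handler behind an API Gateway route such as
    GET /credit-offers?customerId=... (not Capital One's real API)."""
    params = event.get("queryStringParameters") or {}
    customer_id = params.get("customerId")
    if not customer_id:
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "customerId is required"}),
        }

    # A real service would query downstream systems here; a canned
    # response keeps the sketch self-contained.
    offers = [{"product": "cash-back-card", "apr": 19.99}]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"customerId": customer_id, "offers": offers}),
    }
```

Part of the appeal for a regulated enterprise is that there are no servers to patch or scale; AWS runs the handler on demand and bills per invocation.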

Before joining Capital One as its CTO in 2014, Brady ran Fidelity Investments’ global enterprise infrastructure team from 2009 to 2014 and served as Goldman Sachs’ head of global business applications infrastructure before that.

Currently, he leads cloud application and platform productization for Capital One. Part of that portfolio is Critical Stack, a secure container orchestration platform for the enterprise. Capital One’s goal with this work is to help companies across industries become more compliant, secure and cost-effective when operating in the public cloud.

Early-bird tickets are still on sale for $249; grab yours today before we sell out.

Student tickets are just $75 — grab them here.

Jun
12
2019
--

RealityEngines.AI raises $5.25M seed round to make ML easier for enterprises

RealityEngines.AI, a research startup that wants to help enterprises make better use of AI, even when they only have incomplete data, today announced that it has raised a $5.25 million seed funding round. The round was led by former Google CEO and Chairman Eric Schmidt and Google founding board member Ram Shriram. Khosla Ventures, Paul Buchheit, Deepchand Nishar, Elad Gil, Keval Desai, Don Burnette and others also participated in this round.

The fact that the company was able to raise funding from this rather prominent group of investors clearly shows that its overall thesis resonates. The company, which doesn’t have a product yet, tells me that it specifically wants to help enterprises make better use of the smaller and noisier data sets they have and provide them with state-of-the-art machine learning and AI systems that they can quickly take into production. It also aims to provide its customers with systems that can explain their predictions and are free of various forms of bias, something that’s hard to do when the system is essentially a black box.

As RealityEngines CEO Bindu Reddy, who was previously the head of products for Google Apps, told me, the company plans to use the funding to build out its research and development team. The company, after all, is tackling some of the most fundamental and hardest problems in machine learning right now, and that costs money. Some of those problems, like working with smaller data sets, already have partial solutions, such as generative adversarial networks that can augment existing data sets, and RealityEngines expects to innovate on top of them.
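To make the GAN-for-augmentation idea concrete, here is a toy sketch in PyTorch: a generator learns to produce synthetic rows that a discriminator can’t distinguish from real ones, and those rows can then pad out a small training set. Everything here (the architecture, the sizes, the random stand-in data) is illustrative and unrelated to RealityEngines’ actual system:

```python
import torch
import torch.nn as nn

N_FEATURES, NOISE_DIM, BATCH = 8, 16, 32

# Generator maps random noise to synthetic data rows.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES),
)
# Discriminator outputs a real/fake logit for each row.
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, N_FEATURES)  # stand-in for a small real data set

for step in range(1000):
    real = real_data[torch.randint(0, len(real_data), (BATCH,))]
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Train the discriminator to separate real rows from generated ones.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(BATCH, NOISE_DIM))),
                     torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Synthetic rows that could be appended to the original training set.
augmented = generator(torch.randn(100, NOISE_DIM)).detach()
```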

Reddy is also betting on reinforcement learning as one of the core machine learning techniques for the platform.

Once it has its product in place, the plan is to make it available as a pay-as-you-go managed service that will make machine learning more accessible not just to large enterprises but also to small and medium-sized businesses, which increasingly need access to these tools to remain competitive.

May
16
2019
--

Unveiling its latest cohort, Alchemist announces $4 million in funding for its enterprise accelerator

The enterprise software and services-focused accelerator Alchemist has raised $4 million in fresh financing from investors BASF and the Qatar Development Bank, just in time for its latest demo day unveiling 20 new companies.

Qatar and BASF join previous investors, including the venture firms Mayfield, Khosla Ventures, Foundation Capital, DFJ and USVP, and corporate investors like Cisco, Siemens and Juniper Networks.

While the roster of successes from Alchemist’s fund isn’t as lengthy as Y Combinator’s, the accelerator program has launched the likes of the quantum computing upstart Rigetti, the soft-launch developer tool LaunchDarkly and the drone startup Matternet.

Some (personal) highlights of the latest cohort include:

  • Bayware: Helmed by a former head of software-defined networking from Cisco, the company is pitching a tool that makes creating networks in multi-cloud environments as easy as copying and pasting.
  • MotorCortex.AI: Co-founded by a Stanford engineering professor and a Carnegie Mellon roboticist, the company is using computer vision, machine learning and robotics to create a fruit packer for packaging lines. Starting with avocados, the company is aiming to tackle the entire packaging side of pick and pack in logistics.
  • Resilio: With claims of a 96% effectiveness rate, $35,000 in annual recurring revenue and another $1 million in the pipeline, Resilio is already seeing companies embrace its mobile app, which uses a phone’s camera to track stress levels and delivers in-app prompts on how to lower them, according to Alchemist.
  • Operant Networks: It’s a long-held belief (of mine) that if computing networks are already irrevocably compromised, the best thing that companies and individuals can do is just encrypt the hell out of their data. Apparently Operant agrees with me. The company claims 50% time savings with this approach and has booked $1.9 million in 2019 as proof, according to Alchemist.
  • HPC Hub: HPC Hub wants to democratize access to supercomputers by overlaying a virtualization layer and pre-installed software on underutilized supercomputers, giving more companies and researchers easier access to these machines… and it has booked $92,000 worth of annual recurring revenue.
  • DinoPlusAI: This chip developer is designing a low-latency chip for artificial intelligence applications, reducing latency by a factor of 12 compared to a competing Nvidia chip, according to the company. DinoPlusAI sees applications for its tech in things like real-time AI markets and autonomous driving. Its team is led by a designer from Cadence and Broadcom, and the company already has $8 million in letters of intent signed, according to Alchemist.
  • Aero Systems West: Co-founders from the Air Force’s Research Labs and MIT are aiming to take humans out of drone operations and maintenance. The company contends that for every hour of flight time, drones require seven hours of maintenance and checkups. Aero Systems aims to reduce that through remote analytics, self-inspection, autonomous deployment and automated maintenance.

Watch a live stream of Alchemist’s demo day pitches, starting at 3PM, here.

May
14
2019
--

Algorithmia raises $25M Series B for its AI automation platform

Algorithmia, a Seattle-based startup that offers a cloud-agnostic AI automation platform for enterprises, today announced a $25 million Series B funding round led by Norwest Venture Partners. Madrona, Gradient Ventures, Work-Bench, Osage University Partners and Rakuten Ventures also participated in this round.

While the company started out five years ago as a marketplace for algorithms, it now mostly focuses on machine learning and helping enterprises take their models into production.

“It’s actually really hard to productionize machine learning models,” Algorithmia CEO Diego Oppenheimer told me. “It’s hard to free data scientists from dealing with data infrastructure so that they can really build out their machine learning and AI muscle.”

To help them, Algorithmia essentially built out a machine learning DevOps platform that allows data scientists to train their models with the framework of their choice, bring them to Algorithmia — a platform that has already been blessed by their IT departments — and take them into production.
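In practice, the consumption side of such a platform is a single API call. Here is a minimal sketch using the Python client Algorithmia publishes; the API key, algorithm path and input shape below are placeholders:

```python
import Algorithmia  # pip install algorithmia

# Placeholder credentials and algorithm path, for illustration only.
client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("your_org/sentiment_model/1.0.0")

# Send an input to the deployed model and read back its prediction.
response = algo.pipe({"text": "The rollout went better than expected."})
print(response.result)
```

The value of the DevOps layer is everything around that call: versioning the model (the `/1.0.0` suffix), scaling the containers it runs in and giving IT one place to audit access.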

“Every Fortune 500 CIO has an AI initiative but they are bogged down by the difficulty of managing and deploying ML models,” said Rama Sekhar, a partner at Norwest Venture Partners, who has now joined the company’s board. “Algorithmia is the clear leader in building the tools to manage the complete machine learning life cycle and helping customers unlock value from their R&D investments.”

With the new funding, the company will double down on this focus by investing in product development to solve these issues, but also by building out its team, with a plan to double its headcount over the next year. A year from now, Oppenheimer told me, he hopes that Algorithmia will be a household name for data scientists and, maybe more importantly, their platform of choice for putting their models into production.

“How does Algorithmia succeed? Algorithmia succeeds when our customers are able to deploy AI and ML applications,” Oppenheimer said. “And although there is a ton of excitement around doing this, the fact is that it’s really difficult for companies to do so.”

The company previously raised a $10.5 million Series A round led by Google’s AI fund. Its customers now include the United Nations, a number of U.S. intelligence agencies and Fortune 500 companies. In total, more than 90,000 engineers and data scientists are now on the platform.

May
13
2019
--

Market map: the 200+ innovative startups transforming affordable housing

In this section of my exploration into innovation in inclusive housing, I am digging into the 200+ companies impacting the key phases of developing and managing housing.

Innovations have reduced costs in the most expensive phases of the housing development and management process. I explore innovations across each of these cost categories: construction, land, regulatory, financing and operational costs.

Reducing Construction Costs

Construction is one of the top three challenges developers face, exacerbated by rising building material costs and labor shortages.

Apr
02
2019
--

Edgybees’s new developer platform brings situational awareness to live video feeds

San Diego-based Edgybees today announced the launch of Argus, its API-based developer platform that makes it easy to add augmented reality features to live video feeds.

The company has long used this capability to run its own drone platform for first responders and enterprise customers, allowing users to tag and track objects and people to create better situational awareness in emergency situations.

I first saw a demo of the service a year ago, when the team walked a group of journalists through a simulated emergency, with live drone footage and an overlay of a street map and the location of ambulances and other emergency personnel. It’s clear how these features could be used in other situations as well, given that few companies have the expertise to combine the video footage, GPS data and other information, including geographic information systems, for their own custom projects.

Indeed, that’s what inspired the team to open up its platform. As the Edgybees team told me during an interview at the Ourcrowd Summit last month, it’s impossible for the company to build a new solution for every vertical that could make use of it. So instead of even trying (though it’ll keep refining its existing products), it’s now opening up its platform.
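Edgybees hasn’t published Argus code samples here, so the following is only a generic sketch, using OpenCV rather than the Argus API, of the kind of overlay such a platform automates: drawing labeled markers for tracked assets onto the frames of a live feed. The video source and hard-coded positions are stand-ins; in a real integration the positions would come from GPS/GIS data projected into the camera frame:

```python
import cv2  # pip install opencv-python

def annotate_frame(frame, detections):
    # detections: list of (x, y, label) tuples in pixel coordinates.
    for x, y, label in detections:
        cv2.circle(frame, (x, y), 8, (0, 0, 255), 2)
        cv2.putText(frame, label, (x + 12, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    return frame

cap = cv2.VideoCapture("drone_feed.mp4")  # placeholder file or stream URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Hard-coded example position; a real system would project the live
    # GPS coordinates of each tracked asset into the frame here.
    cv2.imshow("overlay", annotate_frame(frame, [(320, 240, "Ambulance 7")]))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The hard part, and presumably what Argus sells, is keeping those projected positions registered to a moving drone camera in real time.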

“The potential for augmented reality beyond the entertainment sector is endless, especially as video becomes an essential medium for organizations relying on drone footage or CCTV,” said Adam Kaplan, CEO and co-founder of Edgybees. “As forward-thinking industries look to make sense of all the data at their fingertips, we’re giving developers a way to tailor our offering and set them up for success.”

In the run-up to today’s launch, the company has already worked with organizations like the PGA, which uses its software to enhance the live coverage of its golf tournaments.

Feb
20
2019
--

Arm expands its push into the cloud and edge with the Neoverse N1 and E1

For the longest time, Arm was basically synonymous with chip designs for smartphones and very low-end devices. But more recently, the company launched solutions for laptops, cars, high-powered IoT devices and even servers. Today, ahead of MWC 2019, the company is officially launching two new products for cloud and edge applications, the Neoverse N1 and E1. Arm unveiled the Neoverse brand a few months ago, but it’s only now taking concrete form with the launch of these products.

“We’ve always been anticipating that this market is going to shift as we move more towards this world of lots of really smart devices out at the endpoint — moving beyond even just what smartphones are capable of doing,” Drew Henry, Arm’s SVP and GM for infrastructure, told me in an interview ahead of today’s announcement. “And when you start anticipating that, you realize that those devices out at those endpoints are going to start creating an awful lot of data and need an awful lot of compute to support that.”

To address these two problems, Arm decided to launch two products: one that focuses on compute speed and one that is all about throughput, especially in the context of 5G.

ARM NEOVERSE N1

The Neoverse N1 platform is meant for infrastructure-class solutions that focus on raw compute speed. The chips should perform significantly better than previous Arm CPU generations meant for the data center, and the company says it saw speedups of 2.5x for Nginx and memcached, for example. Chip manufacturers can optimize the 7nm platform for their needs, with core counts that can reach up to 128 cores (or as few as four).

“This technology platform is designed for a lot of compute power that you could either put in the data center or stick out at the edge,” said Henry. “It’s very configurable for our customers so they can design how big or small they want those devices to be.”

The E1 is also a 7nm platform, but with a stronger focus on edge computing use cases where you also need some compute power to maybe filter out data as it is generated, but where the focus is on moving that data quickly and efficiently. “The E1 is very highly efficient in terms of its ability to be able to move data through it while doing the right amount of compute as you move that data through,” explained Henry, who also stressed that the company made the decision to launch these two different platforms based on customer feedback.

There’s no point in launching these platforms without software support, though. A few years ago, that would have been a challenge because few commercial vendors supported their data center products on the Arm architecture. Today, many of the biggest open-source and proprietary projects and distributions run on Arm chips, including Red Hat Enterprise Linux, Ubuntu, Suse, VMware, MySQL, OpenStack, Docker, Microsoft .Net, DOK and OPNFV. “We have lots of support across the space,” said Henry. “And then as you go down to that tier of languages and libraries and compilers, that’s a very large investment area for us at Arm. One of our largest investments in engineering is in software and working with the software communities.”

And as Henry noted, AWS also recently launched its Arm-based servers — and that surely gave the industry a lot more confidence in the platform, given that the biggest cloud provider is now backing it, too.

Feb
19
2019
--

Google acquires cloud migration platform Alooma

Google today announced its intention to acquire Alooma, a company that allows enterprises to combine all of their data sources into services like Google’s BigQuery, Amazon’s Redshift, Snowflake and Azure. The promise of Alooma is that it handles the data pipelines and manages them for its users. In addition to this data integration service, though, Alooma also helps with migrating to the cloud, cleaning up this data and then using it for AI and machine learning use cases.
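To give a sense of what feeding a warehouse like BigQuery involves at the destination end, here is a minimal sketch using Google’s public BigQuery Python client (this is Google’s documented client, not Alooma’s API; the bucket, table and autodetected schema are placeholders):

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # reads GOOGLE_APPLICATION_CREDENTIALS

table_id = "my_project.analytics.events"  # placeholder destination table
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # infer the schema from the data
)

# Load newline-delimited JSON files from Cloud Storage into the table.
load_job = client.load_table_from_uri(
    "gs://my-bucket/events/*.json", table_id, job_config=job_config
)
load_job.result()  # block until the load job finishes
print(f"Loaded {client.get_table(table_id).num_rows} rows.")
```

A service like Alooma sits upstream of this step, continuously pulling from sources like databases and SaaS tools, transforming the records and keeping loads like this one running reliably.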

“Here at Google Cloud, we’re committed to helping enterprise customers easily and securely migrate their data to our platform,” Google VP of engineering Amit Ganesh and Google Cloud Platform director of product management Dominic Preuss write today. “The addition of Alooma, subject to closing conditions, is a natural fit that allows us to offer customers a streamlined, automated migration experience to Google Cloud, and give them access to our full range of database services, from managed open source database offerings to solutions like Cloud Spanner and Cloud Bigtable.”

Before the acquisition, Alooma had raised about $15 million, including an $11.2 million Series A round led by Lightspeed Venture Partners and Sequoia Capital in early 2016. The two companies did not disclose the price of the acquisition, but chances are we are talking about a modest price, given how much Alooma had previously raised.

Neither Google nor Alooma said much about what will happen to the existing products and customers — and whether it will continue to support migrations to Google’s competitors. We’ve reached out to Google and will update this post once we hear more.

Update. Here is Google’s statement about the future of the platform:

For now, it’s business as usual for Alooma and Google Cloud as we await regulatory approvals and complete the deal. After close, the team will be joining us in our Tel Aviv and Sunnyvale offices, and we will be leveraging the Alooma technology and team to provide our Google Cloud customers with a great data migration service in the future.

Regarding supporting competitors, yes, the existing Alooma product will continue to support other cloud providers. We will only be accepting new customers that are migrating data to Google Cloud Platform, but existing customers will continue to have access to other cloud providers.

So going forward, Alooma will not accept any new customers who want to migrate data to any of Google’s competitors. That’s not necessarily surprising, and it’s good to see that Google will continue to support Alooma’s existing users. Those who use Alooma in combination with AWS, Azure and other non-Google services will likely start looking for other solutions soon, though, as this also means that Google isn’t likely to develop the service for them beyond its current state.

Alooma’s co-founders do stress, though, that “the journey is not over. Alooma has always aimed to provide the simplest and most efficient path toward standardizing enterprise data from every source and transforming it into actionable intelligence,” they write. “Joining Google Cloud will bring us one step closer to delivering a full self-service database migration experience bolstered by the power of their cloud technology, including analytics, security, AI, and machine learning.”

Oct
28
2018
--

Forget Watson, the Red Hat acquisition may be the thing that saves IBM

With its latest $34 billion acquisition of Red Hat, IBM may have found something more elementary than “Watson” to save its flagging business.

Though the acquisition of Red Hat is by no means a guaranteed victory for the Armonk, N.Y.-based computing company, which has had more downs than ups over the past five years, it seems to be a better bet for “Big Blue” than an artificial intelligence program that was always more hype than reality.

Indeed, commentators are already noting that this may be a case where IBM finally hangs up the Watson hat and returns to the enterprise software and services business that has always been its core competency (albeit one that has been weighted far more heavily on consulting services — to the detriment of the company’s business).

Watson, the business division focused on artificial intelligence whose public claims were always more marketing than actually market-driven, has not performed as well as IBM had hoped and investors were losing their patience.

Critics — including analysts at the investment bank Jefferies (as early as one year ago) — were skeptical of Watson’s ability to deliver IBM from its business woes.

As we wrote at the time:

Jefferies pulls from an audit of a partnership between IBM Watson and MD Anderson as a case study for IBM’s broader problems scaling Watson. MD Anderson cut its ties with IBM after wasting $60 million on a Watson project that was ultimately deemed “not ready for human investigational or clinical use.”

The MD Anderson nightmare doesn’t stand on its own. I regularly hear from startup founders in the AI space that their own financial services and biotech clients have had similar experiences working with IBM.

The narrative isn’t the product of any single malfunction, but rather the result of overhyped marketing, deficiencies in operating with deep learning and GPUs and intensive data preparation demands.

That’s not the only trouble IBM has had with Watson’s healthcare results. Earlier this year, the medical news site Stat reported that Watson was giving clinicians recommendations for cancer treatments that were “unsafe and incorrect” — based on the training data it had received from the company’s own engineers and doctors at Sloan-Kettering who were working with the technology.

All of these woes were reflected in the company’s latest earnings call, where it reported falling revenues primarily from the Cognitive Solutions business, which includes Watson’s artificial intelligence and supercomputing services. Though IBM’s chief financial officer pointed to “mid-to-high” single-digit growth from Watson’s health business in the quarter, the transaction processing software business fell by 8%, and the company’s suite of hosted software services is basically an afterthought for businesses gravitating to Microsoft, Alphabet and Amazon for cloud services.

To be sure, Watson is only one of the segments that IBM had been hoping to tap for its future growth; and while it was a huge investment area for the company, IBM always had its eyes partly fixed on cloud computing as it looked for new areas of growth.

It’s this area of cloud computing where IBM hopes that Red Hat can help it gain ground.

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” said Ginni Rometty, IBM Chairman, President and Chief Executive Officer, in a statement announcing the acquisition. “IBM will become the world’s number-one hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.”

The acquisition also puts an incredible amount of marketing power behind Red Hat’s various open source services business — giving all of those IBM project managers and consultants new projects to pitch and maybe juicing open source software adoption a bit more aggressively in the enterprise.

As Red Hat chief executive Jim Whitehurst told TheStreet in September, “The big secular driver of Linux is that big data workloads run on Linux. AI workloads run on Linux. DevOps and those platforms, almost exclusively Linux,” he said. “So much of the net new workloads that are being built have an affinity for Linux.”
