May 14, 2018

AWS introduces 1-click Lambda functions app for IoT

When Amazon introduced AWS Lambda in 2015, the notion of serverless computing was relatively unknown. Lambda lets developers deliver software without having to manage a server; Amazon manages it all, and the underlying infrastructure only comes into play when an event triggers a requirement. Today, the company released an app in the iOS App Store called AWS IoT 1-Click to take that notion a step further.

The 1-click part of the name may be a bit optimistic, but the app is designed to give developers even quicker access to Lambda event triggers. These are designed specifically for simple single-purpose devices like a badge reader or a button. When you press the button, you could be connected to customer service or maintenance or whatever makes sense for the given scenario.

One particularly good example from Amazon is the Dash Button. These are simple buttons that users push to reorder goods like laundry detergent or toilet paper. Pushing the button connects the device to the internet via the home or business’s WiFi and sends a signal to the vendor to order the product in the pre-configured amount. AWS IoT 1-Click extends this capability to any developer, so long as they are working with a supported device.

To use the new feature, you enter your existing account information, configure your WiFi and choose from a pre-configured list of devices and Lambda functions for the given device. Supported devices in this early release include the AWS IoT Enterprise Button, a commercialized version of the Dash Button, and the AT&T LTE-M Button.

Once you select a device, you define whether the project triggers a Lambda function or sends an SMS or email, as you prefer. Choose Lambda for an event trigger, then touch Next to move to the configuration screen where you configure the trigger action. For instance, if pushing the button triggers a call to IT from the conference room, the trigger would send a page to IT that there was a call for help in the given conference room.

Finally, choose the appropriate Lambda function, which should work correctly based on your configuration information.
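
For developers who want a feel for the Lambda side of this, here is a minimal sketch of a Python handler for a button press. It is illustrative only: the event fields shown and the notify_it_support idea are assumptions, so check the test event the console generates for your device before relying on any of them.

```python
import json

def lambda_handler(event, context):
    """Hypothetical handler for an AWS IoT 1-Click button press.

    The event layout used below (deviceInfo / placementInfo keys) is an
    approximation, not a guaranteed schema; inspect a real test event
    from the console for your device.
    """
    device_id = event.get("deviceInfo", {}).get("deviceId", "unknown-device")
    attributes = event.get("placementInfo", {}).get("attributes", {})
    room = attributes.get("room", "unknown room")  # e.g. the conference room name

    message = "Help requested from {} (button {})".format(room, device_id)
    print(json.dumps({"action": "page_it", "message": message}))

    # notify_it_support(message) would wrap whatever paging, SMS or
    # ticketing integration the scenario calls for; it is a placeholder here.
    return {"status": "ok", "message": message}
```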

All of this obviously requires more than one click and probably involves some testing and reconfiguring to make sure you’ve entered everything correctly, but the idea of having an app to create simple Lambda functions could help people from non-programming backgrounds configure buttons with simple functions, given some training on the configuration process.

It’s worth noting that the service is still in Preview, so you can download the app today, but you have to apply to participate at this time.

May 12, 2018

Adobe CTO leads company’s broad AI bet

There isn’t a software company out there worth its salt that doesn’t have some kind of artificial intelligence initiative in progress right now. These organizations understand that AI is going to be a game-changer, even if they might not have a full understanding of how that’s going to work just yet.

In March at the Adobe Summit, I sat down with Adobe executive vice president and CTO Abhay Parasnis, and talked about a range of subjects with him including the company’s goal to build a cloud platform for the next decade — and how AI is a big part of that.

Parasnis told me that he has a broad set of responsibilities starting with the typical CTO role of setting the tone for the company’s technology strategy, but it doesn’t stop there by any means. He also is in charge of operational execution for the core cloud platform and all the engineering building out the platform — including AI and Sensei. That includes managing a multi-thousand person engineering team. Finally, he’s in charge of all the digital infrastructure and the IT organization — just a bit on his plate.

Ten years down the road

The company’s transition from selling boxed software to a subscription-based cloud company began in 2013, long before Parasnis came on board. It has been a highly successful one, but Adobe knew it would take more than simply shedding boxed software to survive long-term. When Parasnis arrived, the next step was to rearchitect the base platform in a way that was flexible enough to last for at least a decade — yes, a decade.

“When we first started thinking about the next generation platform, we had to think about what do we want to build for. It’s a massive lift and we have to architect to last a decade,” he said. There’s a huge challenge because so much can change over time, especially right now when technology is shifting so rapidly.

That meant that they had to build in flexibility to allow for these kinds of changes over time, maybe even ones they can’t anticipate just yet. The company certainly sees immersive technology like AR and VR, as well as voice, as something they need to start thinking about as a future bet — and their base platform had to be adaptable enough to support that.

Making Sensei of it all

But Adobe also needed to get its ducks in a row around AI. That’s why, around 18 months ago, the company made another strategic decision to develop AI as a core part of the new platform. They saw a lot of companies looking at a more general AI for developers, but they had a different vision, one tightly focused on Adobe’s core functionality. Parasnis sees this as the key part of the company’s cloud platform strategy. “AI will be the single most transformational force in technology,” he said, adding that Sensei is by far the thing he is spending the most time on.


The company began thinking about the new cloud platform with the larger artificial intelligence goal in mind, building AI-fueled algorithms to handle core platform functionality. Once they refined them for use in-house, the next step was to open up these algorithms to third-party developers to build their own applications using Adobe’s AI tools.

It’s actually a classic software platform play, whether the service involves AI or not. Every cloud company from Box to Salesforce has been exposing their services for years, letting developers take advantage of their expertise so they can concentrate on their core knowledge. They don’t have to worry about building something like storage or security from scratch because they can grab those features from a platform that has built-in expertise and provides a way to easily incorporate it into applications.

The difference here is that it involves Adobe’s core functions, so it may be intelligent auto cropping and smart tagging in Adobe Experience Manager or AI-fueled visual stock search in Creative Cloud. These are features that are essential to the Adobe software experience, which the company is packaging as an API and delivering to developers to use in their own software.
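
To make the platform play concrete, here is a rough sketch of what consuming such a service over HTTP tends to look like from a developer’s side. The endpoint, token and response fields below are hypothetical placeholders for illustration, not Adobe’s actual API surface.

```python
import requests

# Hypothetical endpoint and token, used purely for illustration.
TAGGING_URL = "https://api.example.com/v1/smart-tag"
API_TOKEN = "YOUR_TOKEN"

def smart_tag(image_path):
    """Send an image to a vendor-hosted tagging service and return its labels."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            TAGGING_URL,
            headers={"Authorization": "Bearer " + API_TOKEN},
            files={"image": image_file},
            timeout=30,
        )
    response.raise_for_status()
    return response.json().get("tags", [])  # e.g. ["beach", "sunset", "people"]

# tags = smart_tag("photo.jpg")
```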

Whether or not Sensei can be the technology that drives the Adobe cloud platform for the next 10 years, Parasnis and the company at large are very much committed to that vision. We should see more announcements from Adobe in the coming months and years as they build more AI-powered algorithms into the platform and expose them to developers for use in their own software.

Parasnis certainly recognizes this as an ongoing process. “We still have a lot of work to do, but we are off in an extremely good architectural direction, and AI will be a crucial part,” he said.

May 11, 2018

HubSpot adds customer service tools to its marketing platform

HubSpot is expanding beyond sales and marketing with the official launch of its Service Hub for managing customer service.

The product was first announced last fall, but now it’s moved out of beta testing.

HubSpot President and COO JD Sherman said this was a logical next step for the company. He argued that the Internet has “democratized” the ability of businesses to attract customers by creating their own content (using tools like HubSpot’s, natch), and while “that opportunity still exists, frankly, it’s getting harder due to the sheer volume of what’s going on.”

“It makes sense to take care of your customer,” Sherman said — both to keep them loyal and to turn them into advocates who might help you attract new customers.

Service Hub General Manager Michael Redbord and Go To Market Leader David Barron gave me a quick tour of the Service Hub. It includes a universal inbox for all your customer communications, a bot-builder to automate some of those customer interactions, tools for building a company knowledge base (which can then be fed into the bot-builder, which Redbord described as a more “customer-centric” way to present your content), tools for creating surveys and a dashboard to track how your service team is doing.

Image: Service Hub dashboard

Redbord said he previously worked on HubSpot’s own service and support team, so every feature in Service Hub has “a one-to-one relationship” with an issue that HubSpot has faced, or that he personally has faced, while trying to support customers.

Barron added that Service Hub benefits from being integrated with HubSpot’s existing products, allowing businesses to track their interactions with a customer across sales, marketing and support.

“We’re a platform company,” he said. “When any of these conversations happens, whether it’s a chat with a human or a chat with a bot, that’s all logged on [a single record] in HubSpot, so there’s no data leakage between different teams.”

May 9, 2018

StubHub bets on Pivotal and Google Cloud as it looks to go beyond tickets

StubHub is best known as a destination for buying and selling event tickets. The company operates in 48 countries and sells a ticket every 1.3 seconds. But the company wants to go beyond that and provide its users with a far more comprehensive set of services around entertainment. To do that, it’s working on changing its development culture and infrastructure to become more nimble. As the company announced today, it’s betting on Google Cloud and Pivotal Cloud Foundry as the infrastructure for this move.

StubHub CTO Matt Swann told me that the idea behind going with Pivotal — and the twelve-factor app model that it entails — is to help the company accelerate its journey and give it an option to run new apps in both an on-premise and cloud environment.
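
One of the most visible twelve-factor habits is keeping configuration in the environment rather than baked into the build, so the same artifact can run on premise or in the cloud unchanged. A minimal sketch of the idea, with variable names that are purely illustrative rather than anything StubHub actually uses:

```python
import os

# Twelve-factor style configuration: the deployment environment supplies the
# settings, so the same code runs on an on-premise platform or a public cloud.
# These variable names are illustrative, not StubHub's real settings.
DATABASE_URL = os.environ["DATABASE_URL"]
CACHE_URL = os.environ.get("CACHE_URL", "redis://localhost:6379/0")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

def describe_config():
    print("database:", DATABASE_URL)
    print("cache:", CACHE_URL)
    print("log level:", LOG_LEVEL)
```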

“We’re coming from a place where we are largely on premise,” said Swann. “Our aim is to become increasingly agile — where we are going to focus on building balanced and focused teams with a global mindset.” To do that, Swann said, the team decided to go with the best platforms to enable that and that “remove the muck that comes with how developers work today.”

As for Google, Swann noted that this was an easy decision because the team wanted to leverage that company’s infrastructure and machine learning tools like Cloud ML. “We are aiming to build some of the most powerful AI systems focused on this space so we can be ahead of our customers,” he said. Given the number of users, StubHub sits on top of a lot of data — and that’s exactly what you need when you want to build AI-powered services. What exactly these services will look like remains to be seen, though; Swann has only been on the job for six months. We can probably expect to see more from the company in this space in the coming months.

“Digital transformation is on the mind of every technology leader, especially in industries requiring the capability to rapidly respond to changing consumer expectations,” said Bill Cook, President of Pivotal. “To adapt, enterprises need to bring together the best of modern developer environments with software-driven customer experiences designed to drive richer engagement.”

StubHub has already spun up its new development environment and plans to launch all new apps on this new infrastructure. Swann acknowledged that the company won’t be switching all of its workloads over to the new setup soon. But he does expect that the company will hit a tipping point in the next year or so.

He also noted that this overall transformation means that the company will look beyond its own walls and toward working with more third-party APIs, especially with regard to transportation services and merchants that offer services around events.

Throughout our conversation, Swann also stressed that this isn’t a technology change for the sake of it.

May 8, 2018

Intel Capital pumps $72M into AI, IoT, cloud and silicon startups, $115M invested so far in 2018

Intel Capital, the investment arm of the computer processor giant, is today announcing $72 million in funding for the 12 newest startups to enter its portfolio, bringing the total invested so far this year to $115 million. Announced at the company’s global summit currently underway in southern California, investments in this latest tranche cover artificial intelligence, Internet of Things, cloud services, and silicon. A detailed list is below.

Other notable news from the event included a new deal between the NBA and Intel Capital to work on more collaborations in delivering sports content, an area where Intel has already been working for years; and the news that Intel has now invested $125 million in startups headed by minorities, women and other under-represented groups as part of its Diversity Initiative. The mark was reached 2.5 years ahead of schedule, it said.

The range of categories of the startups that Intel is investing in is a mark of how the company continues to back ideas that it views as central to its future business — and specifically where it hopes its processors will play a central role, such as AI, IoT and cloud. Investing in silicon startups, meanwhile, is a sign of how Intel is also focusing on businesses that are working in an area that’s close to the company’s own DNA.

It hasn’t been a completely smooth road. Intel became a huge presence in the world of IT and the early rise of desktop and laptop computers many years ago with its advances in PC processors, but its fortunes changed with the shift to mobile, which saw the emergence of a new wave of chip companies and designs for smaller and faster devices. Mobile is an area that Intel itself has acknowledged it largely missed out on.

More recent years have seen still other issues hit the company, such as the Spectre security flaw (fixes for which are still being rolled out). And some of the business lines where Intel was hoping to make a mark have not panned out as it hoped they would. Just last month, Intel shut down development of its Vaunt smart glasses and, reportedly, the entirety of its new devices group.

The investments that Intel Capital makes, in contrast, are a fresher and more optimistic aspect of the company’s operations: they represent hopes and possibilities that still have everything to play for. And given that, on balance, things like AI and cloud services still have a long way to go before being truly ubiquitous, there remains a lot of opportunity for Intel.

“These innovative companies reflect Intel’s strategic focus as a data leader,” said Wendell Brooks, Intel senior vice president and president of Intel Capital, in a statement. “They’re helping shape the future of artificial intelligence, the future of the cloud and the Internet of Things, and the future of silicon. These are critical areas of technology as the world becomes increasingly connected and smart.”

Intel Capital since 1991 has put $12.3 billion into 1,530 companies covering everything from autonomous driving to virtual reality and e-commerce and says that more than 660 of these startups have gone public or been acquired. Intel has organised its investment announcements thematically before: last October, it announced $60 million in 15 big data startups.

Here’s a rundown of the investments getting announced today. Unless otherwise noted, the startups are based around Silicon Valley:

Avaamo is a deep learning startup that builds conversational interfaces based on neural networks to address problems in enterprises — part of the wave of startups that are focusing on non-consumer conversational AI solutions.

Fictiv has built a “virtual manufacturing platform” to design, develop and deliver physical products, linking companies that want to build products with manufacturers who can help them. This is a problem that has foxed many a startup (notable failures have included Factorli out of Las Vegas), and it will be interesting to see if newer advances will make the challenges here surmountable.

Gamalon, from Cambridge, MA, says it has built a machine learning platform that “teaches computers actual ideas.” Its so-called Idea Learning technology is able to order free-form data like chat transcripts and surveys into something that a computer can read, making the data more actionable. More from Ron here.

Reconova out of Xiamen, China is focusing on problems in visual perception in areas like retail, smart home and intelligent security.

Syntiant is an Irvine, CA-based AI semiconductor company that is working on ways of placing neural decision making on chips themselves to speed up processing and reduce battery consumption — a key challenge as computing devices move more information to the cloud and keep getting smaller. Target devices include mobile phones, wearable devices, smart sensors and drones.

Alauda out of China is a container-based cloud services provider focusing on enterprise platform-as-a-service solutions. “Alauda serves organizations undergoing digital transformation across a number of industries, including financial services, manufacturing, aviation, energy and automotive,” Intel said.

CloudGenix is a software-defined wide-area network startup, addressing an important area as more businesses take their networks and data into the cloud and look for cost savings. Intel says its customers use its broadband solutions to run unified communications and data center applications to remote offices, cutting costs by 70 percent and seeing big speed and reliability improvements.

Espressif Systems, also based in China, is a fabless semiconductor company, with its system-on-a-chip focused on IoT solutions.

VenueNext is a “smart venue” platform to deliver various services to visitors’ smartphones, providing analytics and more to the facility providing the services. Hospitals, sports stadiums and others are among its customers.

Lyncean Technologies is nearly 18 years old (founded in 2001) and has been working on something called Compact Light Source (CLS), which Intel describes as a miniature synchrotron X-ray source, which can be used for either extremely detailed large X-rays or very microscopic ones. This has both medical and security applications, making it a very timely business.

Movellus “develops semiconductor technologies that enable digital tools to automatically create and implement functionality previously achievable only with custom analog design.” Its main focus is creating more efficient approaches to designing analog circuits for systems on chips, needed for AI and other applications.

SiFive makes “market-ready processor core IP based on the RISC-V instruction set architecture,” founded by the inventors of RISC-V and led by a team of industry veterans.

May 7, 2018

Microsoft and DJI team up to bring smarter drones to the enterprise

At the Microsoft Build developer conference today, Microsoft and Chinese drone manufacturer DJI announced a new partnership that aims to bring more of Microsoft’s machine learning smarts to commercial drones. Given Microsoft’s current focus on bringing intelligence to the edge, this is a logical partnership; drones are essentially semi-autonomous edge computing devices.

DJI also today announced that Azure is now its preferred cloud computing partner and that it will use the platform to analyze video data, for example. The two companies also plan to offer new commercial drone solutions using Azure IoT Edge and related AI technologies for verticals like agriculture, construction and public safety. Indeed, the companies are already working together on Microsoft’s FarmBeats solution, an AI and IoT platform for farmers.

As part of this partnership, DJI is launching a software development kit (SDK) for Windows that will allow Windows developers to build native apps to control DJI drones. Using the SDK, developers can also integrate third-party tools for managing payloads or accessing sensors and robotics components on their drones. DJI already offers a Windows-based ground station.

“DJI is excited to form this unique partnership with Microsoft to bring the power of DJI aerial platforms to the Microsoft developer ecosystem,” said Roger Luo, DJI president, in today’s announcement. “Using our new SDK, Windows developers will soon be able to employ drones, AI and machine learning technologies to create intelligent flying robots that will save businesses time and money and help make drone technology a mainstay in the workplace.”

Interestingly, Microsoft also stresses that this partnership gives DJI access to its Azure IP Advantage program. “For Microsoft, the partnership is an example of the important role IP plays in ensuring a healthy and vibrant technology ecosystem and builds upon existing partnerships in emerging sectors such as connected cars and personal wearables,” the company notes in today’s announcement.

May 7, 2018

Microsoft brings more AI smarts to the edge

At its Build developer conference this week, Microsoft is putting a lot of emphasis on artificial intelligence and edge computing. To a large degree, that means bringing many of the existing Azure services to machines that sit at the edge, no matter whether that’s a large industrial machine in a warehouse or a remote oil-drilling platform. The service that brings all of this together is Azure IoT Edge, which is getting quite a few updates today. IoT Edge is a collection of tools that brings AI, Azure services and custom apps to IoT devices.

As Microsoft announced today, Azure IoT Edge, which sits on top of Microsoft’s IoT Hub service, is now getting support for Microsoft’s Cognitive Services APIs, for example, as well as support for Event Grid and Kubernetes containers. In addition, Microsoft is also open sourcing the Azure IoT Edge runtime, which will allow developers to customize their edge deployments as needed.

The highlight here is support for Cognitive Services for edge deployments. Right now, this is a bit of a limited service as it actually only supports the Custom Vision service, but over time, the company plans to bring other Cognitive Services to the edge as well. The appeal of this service is pretty obvious, too, as it will allow industrial equipment or even drones to use these machine learning models without internet connectivity so they can take action even when they are offline.
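
The pattern behind this is simple: the exported model lives on the device, inference runs locally, and results queue up until connectivity returns. Here is a rough sketch of that loop in Python; the model, camera, is_online and upload_results objects are placeholders rather than actual Azure IoT Edge SDK calls.

```python
import time
from collections import deque

def run_edge_loop(model, camera, is_online, upload_results):
    """Classify camera frames locally and sync results only when online.

    All four arguments are placeholders for real integrations: a locally
    stored vision model, a camera feed, a connectivity check and a cloud
    uploader.
    """
    pending = deque()  # results waiting to be sent to the cloud

    while True:
        frame = camera.read()
        label = model.predict(frame)  # local inference, no network required
        pending.append({"label": label, "timestamp": time.time()})

        # Only talk to the cloud when a connection is actually available.
        if pending and is_online():
            upload_results(list(pending))
            pending.clear()

        time.sleep(1.0)
```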

As far as AI goes, Microsoft also today announced that it will bring its new Brainwave deep neural network acceleration platform for real-time AI to the edge.

The company has also teamed up with Qualcomm to launch an AI developer kit for on-device inferencing on the edge. The focus of the first version of this kit will be on camera-based solutions, which doesn’t come as a major surprise given that Qualcomm recently launched its own vision intelligence platform.

IoT Edge is also getting a number of other updates that don’t directly involve machine learning. Kubernetes support is an obvious one and a smart addition, given that it will allow developers to build Kubernetes clusters that can span both the edge and a more centralized cloud.

The appeal of running Event Grid, Microsoft’s event routing service, at the edge is also pretty obvious, given that it’ll allow developers to connect services with far lower latency than if all the data had to run through a remote data center.

Other IoT Edge updates include the planned launch of a marketplace that will allow Microsoft partners and developers to share and monetize their edge modules, as well as a new certification program for hardware manufacturers to ensure that their devices are compatible with Microsoft’s platform. IoT Edge, as well as Windows 10 IoT and Azure Machine Learning, will also soon support hardware-accelerated model evaluation with DirectX 12 GPUs, which are available in virtually every modern Windows PC.

May 1, 2018

Workplace, Facebook’s enterprise version, now has 52 SaaS apps and bots, opens up for more integrations

Workplace, the more secure, closed enterprise version of Facebook that competes against the likes of Slack, Microsoft Teams and Hipchat as a platform for employees to communicate and work on things together, says that it now has tens of thousands of organizations using its platform.

Now, to pick up more, and to bring more of its customers into the paid premium tier of Workplace, Facebook is announcing a couple of new developments at F8.

First, it’s expanding the premium tier of the service with several more integrations — apps that it says have been the most requested by the “tens of thousands” of organizations using Workplace — including Jira, Sharepoint, and SurveyMonkey, bringing the total now to just over 50. And second, Facebook is now taking applications for app developers who want to integrate with the platform.

The latter is a significant shift: up to now, Facebook had been handpicking third-party integrations itself.

The new apps that are being announced today roughly fall into three categories, as outlined by Facebook: those that let users share information; those that let users get daily summaries; and those that let users speed up data entry and data queries by way of bots.

New integrations for Jira, Cornerstone OnDemand and Medallia allow users to bring in previews of content from these apps so that they can discuss them in Workplace. Users of Sharepoint from Microsoft can now also share folders from it into Workplace groups.

Meanwhile, users of SurveyMonkey, Hubspot, Marketo, Vonage and Zoom can get notifications from those apps to update on how campaigns and other work is running within those services.

Lastly, Workplace is now bringing bots into its platform to help manage queries from apps outside of it. A new integration with ADP for example will let employees start a chat with it to request a payslip, book and get updates on vacation time and more. Others that are launching bots for querying their apps include AdobeSign, Kronos, Smartsheet and Workday.

The bigger idea behind today’s app expansion, and behind opening up the platform to more developers, is to continue to expand the usefulness of Workplace.

It’s been a fairly methodical journey, the antithesis of “move fast and break things,” Facebook’s (sometimes notorious) mantra.

When the service made its official debut in closed beta back in January 2015 (when it was called Facebook at Work), it was little more than a basic version of Facebook that could be used in a more closed environment, a little like a closed Facebook Group.

It rebranded to Workplace when it officially left its closed beta in October 2016, but that was nearly two years later.

The subsequent addition of apps and features like chat (which came a year after that) have also been very gradual. Even today, there is a big gulf between the 50 or so apps that you can use with Workplace and the 1,400+ that are available on a platform like Slack.

Julien Codorniou, who leads the Workplace effort at Facebook, describes the company’s slower approach to adding apps and features as very intentional.

“We don’t need 1,000 apps on Workplace,” he said. “Our customers ask for an application like Sharepoint or Jira. We wanted to keep the integrations meaningful, and to keep them beautiful in the news feed.”

In 2017 Workplace snapped up retail giant Walmart as a customer, and in a way that deal is indicative of how Workplace has positioned itself as a product.

Facebook is targeting businesses that have a mix of employees that range from those who sit at desks to those who never sit at a desk. And as a result, it wants to keep the number of apps and IT noise low to avoid putting off those users.

“We try to connect people who have never had access to software as a service by making products like ServiceNow easy to use,” Codorniou said.

So there is a common touch, but it only goes so far.

Ultimately, the full set of app integrations is only available to users on the premium tier of the product. Pricing is $3 per active user, per month, for up to 5,000 users; larger deployments are negotiated with Facebook. Those on the standard tier get a much more limited range of apps, including Box, OneDrive, Dropbox and RSS. Codorniou would not comment on whether Facebook had plans to add more apps to the free tier.

May 1, 2018

ScienceLogic release gives IT view across entire stack

It’s tough being part of IT Ops these days. Your company could be operating across public and private clouds, and in many cases an internal datacenter too. Meanwhile, your developers are generating more code ever faster. ScienceLogic wants to help with its latest release, ScienceLogic SL1.

As company CEO Dave Link sees it, a vast confluence of technology influences is coming together very quickly. He says the goal with this release is nothing less than a comprehensive, full-stack view of how an application is behaving, and how the different pieces that make up and connect to that application could be affecting its performance.

“Every CIO wants to know the health of their mission critical business services and only way to see that is to see through the entire stack,” Link said.

Part of the problem, of course, is the sheer volume of information. As that increases, it becomes nearly impossible for humans, even the most highly skilled among us, to keep up and understand what particular element may be causing an application to misbehave. That problem is exacerbated further by the speed at which developers are generating new code.

Murali Nemani, CMO at ScienceLogic, says that’s where artificial intelligence and machine learning come into play. “Part of the problem is that if businesses are moving at machine speed in terms of their capability to innovate, the big challenge is how do you get operations to keep up with what developers are creating,” Nemani asked.

The machine learning aspect of the platform enables companies to begin automating solutions for some of the more common problems, while directing the more unusual ones to humans on the operations team. They rely on the AI tools produced by others, rather than trying to develop that part of the solution themselves. “If an application is performing poorly, we can diagnose which part is the problem child, then feed this information to AI/ML engines like Google TensorFlow or IBM Watson and see pattern recognition. That’s the way we achieve machine speed,” Nemani explained.
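
The pattern-recognition step he describes amounts to anomaly detection over streams of metrics collected from each layer. As a toy illustration of the idea, and not ScienceLogic’s implementation, a rolling z-score can flag a metric that suddenly deviates from its recent baseline:

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Flag metric samples that deviate sharply from their recent baseline."""

    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.samples) >= 10:
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True  # hand this sample to a human or an ML engine
        self.samples.append(value)
        return anomalous

# detector = RollingAnomalyDetector()
# for latency_ms in metric_stream("checkout-db"):  # hypothetical metric source
#     if detector.observe(latency_ms):
#         print("possible problem child: checkout-db")
```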

Link says they do this by looking at the problem holistically and giving operations a full view of the application to track down the problem behavior and fix it. “We look at all the layers when we think of a service view: security, systems, network, OS, infrastructure then the application layer (database and application tier). We then contextualize all of those elements into one service view, so [the customer has] the most efficient view of what’s happening in real time,” Link said.

The product being announced publicly today has been in early beta up to now and will be generally available on July 25th.

Apr 30, 2018

Red Hat’s CoreOS launches a new toolkit for managing Kubernetes applications

CoreOS, the Linux distribution and container management startup Red Hat acquired for $250 million earlier this year, today announced the Operator Framework, a new open source toolkit for managing Kubernetes clusters.

CoreOS first talked about operators in 2016. The general idea here is to encode the best practices for deploying and managing container-based applications as code. “The way we like to think of this is that the operators are basically a picture of the best employee you have,” Red Hat OpenShift product manager Rob Szumski told me. Ideally, the Operator Framework frees up the operations team from doing all the grunt work of managing applications and allows them to focus on higher-level tasks. And at the same time, it also removes the error-prone humans from the process since the operator will always follow the company rulebook.

“To make the most of Kubernetes, you need a set of cohesive APIs to extend in order to service and manage your applications that run on Kubernetes,” CoreOS CTO Brandon Philips explains in today’s announcement. “We consider Operators to be the runtime that manages this type of application on Kubernetes.”

As Szumski told me, the CoreOS team developed many of these best practices in building and managing its own Tectonic container platform (and from the community that uses it). Once written, the operators watch over the Kubernetes cluster and can handle upgrades, for example, and when things go awry, they can react to failures within milliseconds.
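
Conceptually, an operator is a control loop: it watches the cluster for changes to the resources it owns and reconciles the actual state toward the declared state. The real Operator SDK is Go-based, so purely as an illustration of that loop, here is a sketch using the Kubernetes Python client, with reconcile() standing in for the application-specific knowledge (upgrades, failover, backups) an operator would encode.

```python
from kubernetes import client, config, watch

def reconcile(deployment):
    """Placeholder for operator logic: compare desired and actual state, then act.

    A real operator would encode the 'best employee' knowledge here, such as
    triggering upgrades, replacing failed members or taking backups.
    """
    desired = deployment.spec.replicas or 0
    ready = deployment.status.ready_replicas or 0
    if ready < desired:
        print("{}: {}/{} replicas ready, reconciling".format(
            deployment.metadata.name, ready, desired))

def main():
    config.load_incluster_config()  # assumes the loop runs inside the cluster
    apps = client.AppsV1Api()
    watcher = watch.Watch()
    # Watch the resources this operator owns and react to every change.
    for event in watcher.stream(apps.list_namespaced_deployment, namespace="default"):
        reconcile(event["object"])

if __name__ == "__main__":
    main()
```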

The overall Operator Framework consists of three pieces: an SDK for building, testing and packaging the actual operator; the Operator Lifecycle Manager for deploying operators to a Kubernetes cluster and managing them; and the Operator Metering tool for metering Kubernetes usage, for enterprises that need to do chargebacks or that want to charge their customers based on usage.

The metering tool doesn’t quite seem to fit into the overall goal here, but as Szumski told me, it’s something a lot of businesses have been looking for and CoreOS actually argues that this is a first for Kubernetes.

Today’s CoreOS/Red Hat announcement only marks the start of a week that’ll likely see numerous other Kubernetes-related announcements. That’s because the Cloud Native Computing Foundation is holding its KubeCon developer conference in the next few days, and virtually every company in the container ecosystem will attend the event and have some kind of announcement.
