Nov 08, 2018

Google Cloud wants to make it easier for data scientists to share models

Today, Google Cloud announced Kubeflow pipelines and AI Hub, two tools designed to help data scientists put the models they create to work across their organizations.

Rajen Sheth, director of product management for Google Cloud’s AI and ML products, says that the company recognized that data scientists too often build models that never get used. He says that if machine learning is really a team sport, as Google believes, models must get passed from data scientists to data engineers and developers who can build applications based on them.

To help fix that, Google is announcing Kubeflow pipelines, which are an extension of Kubeflow, an open-source framework built on top of Kubernetes designed specifically for machine learning. Pipelines are essentially containerized building blocks that people in the machine learning ecosystem can string together to build and manage machine learning workflows.
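Pipelines are defined in Python with the Kubeflow Pipelines SDK. Here is a minimal sketch of a two-step workflow — the container images, paths and names are hypothetical, not from Google's announcement:

```python
from kfp import dsl, compiler


def preprocess_op():
    # Each step is a container; image and paths are hypothetical.
    return dsl.ContainerOp(
        name="preprocess",
        image="gcr.io/my-project/preprocess:latest",
        file_outputs={"output": "/out/clean.csv"},
    )


def train_op(data):
    return dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/train:latest",
        arguments=["--data", data],
    )


@dsl.pipeline(name="train-pipeline", description="Toy two-step workflow")
def my_pipeline():
    preprocess = preprocess_op()
    # Wire the preprocess step's output into the training step.
    train_op(preprocess.outputs["output"])


# Compile to an archive that can be uploaded to the Pipelines UI.
compiler.Compiler().compile(my_pipeline, "pipeline.tar.gz")
```

Because each step is just a container, swapping in a retrained model means rebuilding one image and relaunching the pipeline, which is the continuous-delivery flow described below.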

By placing the model in a container, data scientists can simply adjust the underlying model as needed and relaunch it, in a continuous delivery style of approach. Sheth says this opens up even more possibilities for model usage in a company.

“[Kubeflow pipelines] also give users a way to experiment with different pipeline variants to identify which ones produce the best outcomes in a reliable and reproducible environment,” Sheth wrote in a blog post announcing the new machine learning features.

The company is also announcing AI Hub, which, as the name implies, is a central place where data scientists can go to find different kinds of ML content, including Kubeflow pipelines, Jupyter notebooks, TensorFlow modules and so forth. This will be a public repository seeded with resources developed by Google Cloud AI, Google Research and other teams across Google, allowing data scientists to take advantage of Google’s own research and development expertise.

But Google wanted the hub to be more than a public library — it also sees it as a place where teams can share information privately inside their organizations, giving it a dual purpose. This should provide another way to extend model usage by making essential building blocks available in a central repository.

AI Hub will be available in Alpha starting today with some initial components from Google, as well as tools for sharing some internal resources, but the plan is to keep expanding the offerings and capabilities over time.

Google believes that the easier it makes it to share model building blocks across an organization, the more likely they are to be put to work. These tools are a step toward achieving that.

Oct 15, 2018

Celonis brings intelligent process automation software to the cloud

Celonis has been helping companies analyze and improve their internal processes using machine learning. Today the company announced it was providing that same solution as a cloud service with a few nifty improvements you won’t find on prem.

The new approach, called Celonis Intelligent Business Cloud, allows customers to analyze a workflow, find inefficiencies and offer improvements very quickly. Companies typically follow a workflow that has developed over time and very rarely think about why it developed the way it did, or how to fix it. If they do, it usually involves bringing in consultants to help. Celonis puts software and machine learning to bear on the problem.

Co-founder and CEO Alexander Rinke says that his company deals with massive volumes of data and moving all of that to the cloud makes sense. “With Intelligent Business Cloud, we will unlock that [on prem data], bring it to the cloud in a very efficient infrastructure and provide much more value on top of it,” he told TechCrunch.

The idea is to speed up the whole ingestion process, allowing a company to see the inefficiencies in their business processes very quickly. Rinke says it starts with ingesting data from sources such as Salesforce or SAP and then creating a visual view of the process flow. There may be hundreds of variants from the main process workflow, but you can see which ones would give you the most value to change, based on the number of times the variation occurs.
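Celonis hasn't published its algorithms, but the core of that variant analysis can be sketched in a few lines of Python: group an event log by case, treat each distinct activity sequence as a variant and rank variants by frequency. The event log below is made up:

```python
from collections import Counter

# Hypothetical event log rows: (case_id, timestamp, activity),
# e.g. extracted from an ERP system such as SAP.
events = [
    ("order-1", 1, "Create Order"), ("order-1", 2, "Approve"),
    ("order-1", 3, "Ship"), ("order-1", 4, "Invoice"),
    ("order-2", 1, "Create Order"), ("order-2", 2, "Ship"),
    ("order-2", 3, "Invoice"),
    ("order-3", 1, "Create Order"), ("order-3", 2, "Ship"),
    ("order-3", 3, "Invoice"),
]

# Group events into per-case activity sequences, ordered by timestamp.
cases = {}
for case_id, ts, activity in sorted(events):
    cases.setdefault(case_id, []).append(activity)

# Each distinct sequence is a process variant; rank variants by how
# often they occur to see which deviations are worth fixing first.
variants = Counter(tuple(seq) for seq in cases.values())
for variant, count in variants.most_common():
    print(f"{count}x  " + " -> ".join(variant))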

Screenshot: Celonis

By packaging the Celonis tools as a cloud service, the company is reducing the complexity of running and managing them. It is also introducing an app store with over 300 pre-packaged options for popular products like Salesforce and ServiceNow, and popular processes like order-to-cash. This should also help get customers up and running much more quickly.

New Celonis App Store. Screenshot: Celonis

The cloud service also includes an Action Engine, which Rinke describes as a big step toward moving Celonis from being purely analytical to operational. “Action Engine focuses on changing and improving processes. It gives workers concrete info on what to do next. For example, in process analysis, it would notice on-time delivery isn’t great because order to cash is too slow. It helps accelerate changes in system configuration,” he explained.

Celonis Action Engine. Screenshot: Celonis

The new cloud service is available today. Celonis, founded in 2011, has raised over $77 million; the most recent round was a $50 million Series B at a valuation over $1 billion.

Oct 09, 2018

After its acquisition, Magento starts integrating Adobe’s personalization and analytics tools

It’s been less than six months since Adobe acquired commerce platform Magento for $1.68 billion and today, at Magento’s annual conference, the company announced the first set of integrations that bring the analytics and personalization features of Adobe’s Experience Cloud to Magento’s Commerce Cloud.

In many ways, the acquisition of Magento helps Adobe close the loop in its marketing story by giving its customers a full spectrum of services that go from analytics, marketing and customer acquisition all the way to closing the transaction. It’s no surprise then that the Experience Cloud and Commerce Cloud are growing closer to, in Adobe’s words, “make every experience shoppable.”

“From the time that this company started to today, our focus has been pretty much exactly the same,” Adobe’s SVP of Strategic Marketing Aseem Chandra told me. “This is, how do we deliver better experiences across any channel in which our customers are interacting with a brand? If you think about the way that customers interact today, every experience is valuable and important. […] It’s no longer just about the product, it’s more about the experience that we deliver around that product that really counts.”

So with these new integrations, Magento Commerce Cloud users will get access to Adobe Target, for example, the company’s machine learning-based tool for personalizing shopping experiences. Similarly, they’ll get easy access to predictive analytics from Adobe Analytics to analyze their customers’ data and predict future churn and purchasing behavior, among other things.

These kinds of AI/ML capabilities were something Magento had long been thinking about, Magento’s former CEO and new Adobe SVP of Commerce Mark Lavelle told me, but it took the acquisition by Adobe to really be able to push ahead with this. “Where the world’s going for Magento clients — and really for all of Adobe’s clients — is you can’t do this yourself,” he said. “You need to be associated with a platform that has not just technology and feature functionality, but actually has this living and breathing data environment that learns and delivers intelligence back into the product so that your job is easier. That’s what Amazon and Google and all of the big companies that we’re all increasingly competing against or cooperating with have. They have that type of scale.” He also noted that at least part of this match-up of Adobe and Magento is to give their clients that kind of scale, even if they are small- or medium-sized merchants.

The other new Adobe-powered feature that’s now available is an integration with the Adobe Experience Manager. That’s Adobe’s content management tool that itself integrates many of these AI technologies for building personalized mobile and web content and shopping experiences.

“The goal here is really in unifying that profile, where we have a lot of behavioral information about our consumers,” said Aseem. “And what Magento allows us to do is bring in the transactional information and put those together so we get a much richer view of who the consumers are and how we personalize that experience with the next interaction that they have with a Magento-based commerce site.”

It’s worth noting that Magento is also launching a number of other new features to its Commerce Cloud that include a new drag-and-drop editing tool for site content, support for building Progressive Web Applications, a streamlined payment tool with improved risk management capabilities, as well as a new integration with the Amazon Sales Channel so Magento stores can sync their inventory with Amazon’s platform. Magento is also announcing integrations with Google’s Merchant Center and Advertising Channels for Google Smart Shopping Campaigns.

Sep 24, 2018

Adobe introduces AI assistant to help Analytics users find deeper insights

Adobe Analytics is a sophisticated product, so much so that users might focus on a set of known metrics at the cost of missing key insights. Adobe introduced an AI-fueled virtual assistant called Intelligent Alerts today to help users find deeper insights they might have otherwise missed.

John Bates, director of product management for Adobe Analytics, says that in the past the company has used artificial intelligence and machine learning under the hood of Analytics to help users understand their customers’ behavior better. This marks the first time Adobe is using this technology to understand how the user works with Analytics, in order to offer new data they might not have considered.

“Historically we’ve analyzed the data that we collect on behalf of our customers, on behalf of brands and help provide insights. Now we’re analyzing our users’ behavior within Adobe Analytics, and then mashing them up with those insights that are most relevant and personalized for that individual, based on the signals that we see and how they use our tool,” Bates explained.

Adobe Intelligent Alerts. Screenshot: Adobe

Bates says this isn’t unlike Netflix’s recommendations, which suggest content based on shows and movies you’ve watched before, but applied to the enterprise user, especially someone who really knows their way around Adobe Analytics. That’s because these power users provide the artificial intelligence engine with the strongest signals.

The way it works is that the analyst receives alerts they can dig into for additional insight. If they don’t like what they’re seeing, they can tune the system, and it should learn over time what the analyst needs in terms of data.

Intelligent Alert Settings. Screenshot: Adobe

Analysts can configure how often they see the alerts and how many they want to see. This all falls under Sensei, Adobe’s artificial intelligence platform, which the company built with the idea of injecting intelligence across its product line.

“It’s really a vision and strategy around how do we take things that data scientists do, and how we inject that into our technology such that an everyday user of Adobe Analytics can leverage the power of these advanced algorithms to help them better understand their customers and better perform in their jobs,” he said.

Sep 20, 2018

AI could help push Neo4j graph database growth

Graph databases have always been useful to help find connections across a vast data set, and it turns out that capability is quite handy in artificial intelligence and machine learning too. Today, Neo4j, the makers of the open source and commercial graph database platform, announced the release of Neo4j 3.5, which has a number of new features aimed specifically at AI and machine learning.

Neo4j founder and CEO Emil Eifrem says he recognized the connection between AI and machine learning and graph databases a while ago, but it has taken some time for the market to catch up to the idea.

“There has been a lot of momentum around AI and graphs… Graphs are very fundamental to AI. At the same time we were seeing some early use cases, but not really broad adoption, and that’s what we’re seeing right now,” he explained.

AI graph use cases. Graphic: Neo4j

To help advance AI use cases, today’s release includes a new full-text search capability, which Eifrem says has been one of the most requested features. This is important because when you are making connections between entities, you have to be able to find all of the examples regardless of how they’re worded — for example, human versus humans versus people.
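Neo4j 3.5’s full-text indexes are backed by Lucene, so queries can use fuzzy matching to catch wording variants. A minimal sketch using the Neo4j Python driver — the index name, label, property and credentials are all hypothetical:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "secret"))

with driver.session() as session:
    # Create a Lucene-backed full-text index over a hypothetical
    # label and property.
    session.run(
        'CALL db.index.fulltext.createNodeIndex('
        '"entityNames", ["Entity"], ["name"])'
    )
    # Fuzzy query: "human~" also matches close spellings like "humans".
    result = session.run(
        'CALL db.index.fulltext.queryNodes("entityNames", "human~") '
        'YIELD node, score RETURN node.name AS name, score'
    )
    for record in result:
        print(record["name"], record["score"])
```

Note that fuzzy matching covers spelling variants like “humans”; catching a synonym like “people” still requires modeling that relationship in the graph itself.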

Part of delivering that was building its own indexing engine to increase indexing speed, which becomes essential with ever more data to process. “Another really important piece of functionality is that we have improved our data ingestion very significantly. We have 5x end-to-end performance improvements when it comes to importing data. And this is really important for connected feature extraction, where obviously, you need a lot of data to be able to train the machine learning,” he said. That also means faster sorting of data.

Other features in the new release include improvements to the company’s own Cypher database query language and better visualization of the graphs, which is useful for seeing how machine learning algorithms arrive at their results — a property known as AI explainability. The company also announced support for the Go language and increased security.

Graph databases are growing increasingly important as we look to find connections between data. The most common use case is the knowledge graph, which is what lets us see connections in huge data sets. Common examples include seeing who we are connected to on a social network like Facebook, or getting recommendations for similar items on an ecommerce site after buying something.

Other use cases include connected feature extraction, a common machine learning training technique that can look at a lot of data and extract the connections, the context and the relationships for a particular piece of data — such as suspects in a criminal case and the people connected to them.

Neo4j has over 300 large enterprise customers including Adobe, Microsoft, Walmart, UBS and NASA. The company launched in 2007 and has raised $80 million. The last round was $36 million in November 2016.

Sep 19, 2018

IBM launches cloud tool to detect AI bias and explain automated decisions

IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made — a degree of transparency that may be necessary for compliance purposes, not just for a company’s own due diligence.

The new trust and transparency system runs on the IBM cloud and works with models built from what IBM bills as a wide variety of popular machine learning frameworks and AI build environments — including its own Watson tech, as well as TensorFlow, SparkML, AWS SageMaker and AzureML.

It says the service can be customized to specific organizational needs via programming to take account of the “unique decision factors of any business workflow”.

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.

Explanations of AI decisions include showing which factors weighted the decision in one direction vs another; the confidence in the recommendation; and the factors behind that confidence.

IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.

For one example on the compliance front, the EU’s GDPR privacy framework references automated decision making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios — meaning businesses may need to be able to audit their AIs.

The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.

However, IBM also intends for its own professional services staff to work with businesses using the new software service. So it will be selling AI, ‘a fix’ for AI’s imperfections and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.

So with a major push towards automation across multiple industries there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.

And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)

In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit — with the aim of encouraging “global collaboration around addressing bias in AI”.
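IBM didn’t detail the toolkit’s APIs at launch, but one standard check such tools implement is disparate impact — the ratio of favorable-outcome rates between an unprivileged and a privileged group. A minimal sketch, with hypothetical data and the common four-fifths rule as the flagging threshold:

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged vs. privileged.

    outcomes: list of 1 (favorable) / 0 (unfavorable) model decisions
    groups:   list of group labels, parallel to outcomes
    """
    def favorable_rate(g):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(members) / max(1, len(members))

    unprivileged = [favorable_rate(g) for g in set(groups) if g != privileged]
    return min(unprivileged) / favorable_rate(privileged)


# Hypothetical decisions from a loan-approval model.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups   = ["a", "a", "a", "b", "b", "b", "b", "a", "a", "b"]

ratio = disparate_impact(outcomes, groups, privileged="a")
# The four-fifths rule of thumb flags ratios below 0.8 as potential bias.
print(f"disparate impact: {ratio:.2f}", "FLAG" if ratio < 0.8 else "ok")
```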

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

Sep 12, 2018

Nvidia launches the Tesla T4, its fastest data center inferencing platform yet

Nvidia today announced its new GPU for machine learning and inferencing in the data center. The new Tesla T4 GPUs (where the ‘T’ stands for Nvidia’s new Turing architecture) are the successors to the current batch of P4 GPUs that virtually every major cloud computing provider now offers. Google, Nvidia said, will be among the first to bring the new T4 GPUs to its Cloud Platform.

Nvidia argues that the T4s are significantly faster than the P4s. For language inferencing, for example, the T4 is 34 times faster than using a CPU and more than 3.5 times faster than the P4. Peak performance for the T4 is 260 TOPS for 4-bit integer operations and 65 TFLOPS for floating point operations. The T4 sits on a standard low-profile 75-watt PCI-e card.

What’s most important, though, is that Nvidia designed these chips specifically for AI inferencing. “What makes Tesla T4 such an efficient GPU for inferencing is the new Turing tensor core,” said Ian Buck, Nvidia’s VP and GM of its Tesla data center business. “[Nvidia CEO] Jensen [Huang] already talked about the Tensor core and what it can do for gaming and rendering and for AI, but for inferencing — that’s what it’s designed for.” In total, the chip features 320 Turing Tensor cores and 2,560 CUDA cores.

In addition to the new chip, Nvidia is also launching a refresh of its TensorRT software for optimizing deep learning models. This new version also includes the TensorRT inference server, a fully containerized microservice for data center inferencing that plugs seamlessly into an existing Kubernetes infrastructure.
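Nvidia’s announcement didn’t include code, but the workflow TensorRT targets — take a trained model, build an optimized reduced-precision engine, serialize it for serving — looks roughly like this sketch against the TensorRT Python API of this era (the model path is hypothetical):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Build an optimized inference engine from a trained ONNX model.
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model.onnx", "rb") as f:
        parser.parse(f.read())
    builder.max_workspace_size = 1 << 30  # 1 GB of scratch space
    builder.fp16_mode = True              # use the T4's FP16 tensor cores
    engine = builder.build_cuda_engine(network)

# Serialize the engine for deployment, e.g. behind the TensorRT
# inference server running in a Kubernetes cluster.
with open("model.engine", "wb") as f:
    f.write(engine.serialize())
```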

Sep 06, 2018

PagerDuty raises $90M to wake up more engineers in the middle of the night

PagerDuty, the popular service that helps businesses monitor their tech stacks, manage incidents and alert engineers when things go sideways, today announced that it has raised a $90 million Series D round at a valuation of $1.3 billion. With this, PagerDuty, which was founded in 2009, has now raised well over $170 million.

The round was led by T. Rowe Price Associates and Wellington Management. Accel, Andreessen Horowitz and Bessemer Venture Partners participated. Given the leads in this round, chances are that PagerDuty is gearing up for an IPO.

“This capital infusion allows us to continue our investments in innovation that leverages artificial intelligence and machine learning, enabling us to help our customers transform their companies and delight their customers,” said Jennifer Tejada, CEO at PagerDuty in today’s announcement. “From a business standpoint, we can strengthen our investment in and development of our people, our most valuable asset, as we scale our operations globally. We’re well positioned to make the lives of digital workers better by elevating work to the outcomes that matter.”

Currently, PagerDuty’s users include the likes of GE, Capital One, IBM, Spotify and virtually every other software company you’ve ever heard of. In total, more than 10,500 enterprises now use the service. While it’s best known for its alerting capabilities, PagerDuty has expanded well beyond that over the years, though alerting is still a core part of its service. Earlier this year, for example, the company announced its new AIOps services that aim to help businesses reduce the amount of noisy and unnecessary alerts. I’m sure there are a lot of engineers who are quite happy about that (and now sleep better).

Aug 29, 2018

Storage provider Cloudian raises $94M

Cloudian, a company that specializes in helping businesses store petabytes of data, today announced that it has raised a $94 million Series E funding round. Investors in this round, which is one of the largest we have seen for a storage vendor, include Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures and WS Investments. This round includes a $25 million investment from Digital Alpha, which was first announced earlier this year.

With this, the seven-year-old company has now raised a total of $174 million.

As the company told me, it now has about 160 employees and 240 enterprise customers. Cloudian has found its sweet spot in managing the large video archives of entertainment companies, but its customers also include healthcare companies, automobile manufacturers and Formula One teams.

What’s important to stress here is that Cloudian’s focus is on on-premise storage, not cloud storage, though it does offer support for multi-cloud data management, as well. “Data tends to be most effectively used close to where it is created and close to where it’s being used,” Cloudian VP of worldwide sales Jon Ash told me. “That’s because of latency, because of network traffic. You can almost always get better performance, better control over your data if it is being stored close to where it’s being used.” He also noted that it’s often costly and complex to move that data elsewhere, especially when you’re talking about the large amounts of information that Cloudian’s customers need to manage.

Unsurprisingly, companies that have this much data now want to use it for machine learning, too, so Cloudian is starting to get into this space, as well. As Cloudian CEO and co-founder Michael Tso also told me, companies are now aware that the data they pull in, whether from IoT sensors, cameras or medical imaging devices, will only become more valuable over time as they try to train their models. If they decide to throw the data away, they run the risk of having nothing with which to train their models.

Cloudian plans to use the new funding to expand its global sales and marketing efforts and increase its engineering team. “We have to invest in engineering and our core technology, as well,” Tso noted. “We have to innovate in new areas like AI.”

As Ash also stressed, Cloudian’s business is really data management — not just storage. “Data is coming from everywhere and it’s going everywhere,” he said. “The old-school storage platforms that were siloed just don’t work anymore.”

Aug 21, 2018

Talla builds a smarter customer knowledge base

Talla is taking aim at the customer service industry with its latest release, an AI-infused knowledge base. Today, the company released version 2.0 of the Talla Intelligent Knowledge Base.

The company also announced that Paula Long, most recently CEO at DataGravity, has joined the company as SVP of engineering.

This tool combines customer content with automation, chatbots and machine learning. It’s designed to help teams who work directly with customers get at the information they need faster, and the machine learning element should allow it to improve over time.

You can deploy the product as a widget on your website to give customers direct access to the information, but Rob May, company founder and CEO, says the most common use case involves helping sales, customer service and customer success teams get access to the most relevant and current information, whether that’s maintenance or pricing.

The information can get into the knowledge base in several ways. First of all, you can enter elements like product pages and FAQs directly into the Talla product, as with any knowledge base. Secondly, if an employee asks a question and there isn’t an adequate answer, it exposes the gaps in information (a sketch of this gap-detection mechanic appears below).

Talla Knowledge Base gap list. Screenshot: Talla

“It really shows you the unknown unknowns in your business. What are the questions people are asking that you didn’t realize you don’t have content for or you don’t have answers for. And so that allows you to write new content and better content,” May explained.

Finally, the company can import information into the knowledge base from Salesforce, ServiceNow, Jira or wherever it happens to live, and that can be added to a new page or incorporated into an existing page as appropriate.
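Talla hasn’t published how its answer ranking works, so the following is only a sketch of the gap-detection idea using a naive word-overlap score: answer a question from the best-matching article, and log the question as a content gap when no article scores above a threshold. All data and the threshold are made up:

```python
def relevance(question, article):
    # Naive relevance: fraction of question words found in the article.
    q, a = set(question.lower().split()), set(article.lower().split())
    return len(q & a) / len(q)


def answer(question, articles, gaps, threshold=0.2):
    best = max(articles, key=lambda art: relevance(question, art))
    if relevance(question, best) < threshold:
        gaps.append(question)  # no adequate answer: record a content gap
        return None
    return best


articles = [
    "Pricing starts at $10 per user per month",
    "To export data, open Settings and click Export",
]
gaps = []
print(answer("what is the pricing", articles, gaps))
print(answer("how do I reset my password", articles, gaps))
print("content to write:", gaps)
```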

Employees interact with the system by asking a bot questions, and it supplies the answer if one exists. It works with Slack, Microsoft Teams or Talla Chat.

Talla bot in action in Talla Chat. Screenshot: Talla

Customer service remains a major pain point for many companies. It is the direct link to customers when they are having issues. A single bad experience can taint a person’s view of a brand, and chances are when a customer is unhappy they let their friends know on social media, making an isolated incident much bigger. Having quicker access to more accurate information could help limit negative experiences.

Today’s announcement builds on an earlier version of the product that took aim at IT help desks. Talla found customers kept asking for a solution that provided similar functionality for customer-facing information, and it has tuned the product for that.

May launched Talla in 2015 after selling his former startup Backupify to Datto in 2014. The company, which is based near Boston, has raised $12.3 million.
