Sep 19, 2018
--

IBM launches cloud tool to detect AI bias and explain automated decisions

IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made — a degree of transparency that may be necessary for compliance purposes, not just a company’s own due diligence.

The new trust and transparency system runs on the IBM cloud and works with models built from what IBM bills as a wide variety of popular machine learning frameworks and AI-building environments — including its own Watson tech, as well as TensorFlow, SparkML, AWS SageMaker and AzureML.

It says the service can be customized to specific organizational needs via programming to take account of the “unique decision factors of any business workflow”.

The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.

It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.

Explanations of AI decisions include showing which factors weighted the decision in one direction vs another; the confidence in the recommendation; and the factors behind that confidence.
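IBM hasn’t published the math behind the service, but to make the general idea concrete, here is a minimal Python sketch of one widely used fairness check, the disparate impact ratio. The function, the sample data and the four-fifths threshold are illustrative assumptions, not IBM’s API.

```python
# Hypothetical sketch of a runtime bias check, not IBM's actual service.
# "Disparate impact" compares the rate of favorable outcomes for a
# protected group against the rate for a reference group.

def disparate_impact(decisions, groups, protected, reference, favorable=1):
    """decisions: model outputs; groups: parallel list of group labels."""
    def favorable_rate(group):
        members = [d for d, g in zip(decisions, groups) if g == group]
        return sum(1 for d in members if d == favorable) / len(members)

    return favorable_rate(protected) / favorable_rate(reference)

# A ratio below ~0.8 (the "four-fifths rule") is a common red flag.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(decisions, groups, protected="B", reference="A"))  # ~0.33
```

A monitoring service of this kind would run checks like this continuously over live decisions, flagging a group whose ratio drifts below the threshold.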

IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.

For one example on the compliance front, the EU’s GDPR privacy framework references automated decision making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios — meaning businesses may need to be able to audit their AIs.

The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.

However, IBM also intends its own professional services staff to work with businesses using the new software service. So it will be selling AI, ‘a fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises are trying to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.

Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.

So with a major push towards automation across multiple industries there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.

And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)

In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit — with the aim of encouraging “global collaboration around addressing bias in AI”.

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

Sep 13, 2018
--

Hacera creates directory to make blockchain projects more searchable

In the 1990s, when the web was young, companies like Yahoo created directories of web pages to help make them more discoverable. Hacera wants to bring that same idea to blockchain, and today it announced the launch of the Hacera Network Registry.

CEO Jonathan Levi says that blockchains being established today risk being isolated because people simply can’t find them. If you have a project like the IBM-Maersk supply chain blockchain announced last month, how does an interested party like a supplier or customs authority find it and ask to participate? Until the creation of this registry, there was no easy way to search for projects.

Early participants include heavy hitters like Microsoft, Hitachi, Huawei, IBM, SAP and Oracle, who are linking to projects being created on their platforms. The registry supports projects based on major digital ledger communities including Hyperledger, Quorum, Cosmos, Ethereum and Corda. The Hacera Network Registry is built on Hyperledger Fabric, and the code is open source. (Levi was Risk Manager for Hyperledger Fabric 1.0.)

[Image: Hacera Network Registry page]

While early sponsors of the project include IBM and Hyperledger Fabric, Levi stressed the network is open to all. Blockchain projects can create information pages, not unlike a personal LinkedIn page, and Hacera verifies the data before adding it to the registry. There are currently more than 70 networks in the registry, and Hacera is hoping this is just the beginning.
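The article doesn’t describe Hacera’s actual schema, but conceptually a registry entry and lookup might look something like this minimal Python sketch; all field names, values and the URL are hypothetical.

```python
# Hypothetical illustration of a network registry entry and search;
# Hacera's real data model is not described in the article.

registry = [
    {
        "name": "TradeLens",
        "platform": "Hyperledger Fabric",
        "operators": ["IBM", "Maersk"],
        "domain": "supply chain",
        "contact": "https://example.com/tradelens",  # placeholder URL
        "verified": True,  # Hacera verifies entries before listing them
    },
]

def search(registry, **criteria):
    """Return entries whose fields match all of the given criteria."""
    return [entry for entry in registry
            if all(entry.get(k) == v for k, v in criteria.items())]

print(search(registry, platform="Hyperledger Fabric", domain="supply chain"))
```

The value of such a directory is less in the data structure than in the verification step: a would-be participant can trust that a listed network is what it claims to be.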

Jerry Cuomo, VP of blockchain technologies at IBM, says that for blockchain to grow, it will require a way to register, look up, join and transact across a variety of blockchain solutions. “As the number of blockchain consortiums, networks and applications continues to grow we need a means to list them and make them known to the world, in order to unleash the power of blockchain,” Cuomo told TechCrunch. Hacera is solving that problem.

This is exactly the kind of underlying infrastructure that the blockchain requires to expand as a technology. Cuomo certainly recognizes this. “We realized from the start that you cannot do blockchain on your own; you need a vibrant community and ecosystem of like-minded innovators who share the vision of helping to transform the way companies conduct business in the global economy,” he said.

Hacera understands that every cloud vendor wants people using their blockchain service. Yet they also see that to move the technology forward, there need to be some standard ways of conducting business, and they want to provide that layer. Levi has a broader vision for the network beyond pure discoverability. He hopes eventually to provide the means to share data through the registry.

Sep 06, 2018
--

PagerDuty raises $90M to wake up more engineers in the middle of the night

PagerDuty, the popular service that helps businesses monitor their tech stacks, manage incidents and alert engineers when things go sideways, today announced that it has raised a $90 million Series D round at a valuation of $1.3 billion. With this, PagerDuty, which was founded in 2009, has now raised well over $170 million.

The round was led by T. Rowe Price Associates and Wellington Management. Accel, Andreessen Horowitz and Bessemer Venture Partners participated. Given the leads in this round, chances are that PagerDuty is gearing up for an IPO.

“This capital infusion allows us to continue our investments in innovation that leverages artificial intelligence and machine learning, enabling us to help our customers transform their companies and delight their customers,” said Jennifer Tejada, CEO at PagerDuty in today’s announcement. “From a business standpoint, we can strengthen our investment in and development of our people, our most valuable asset, as we scale our operations globally. We’re well positioned to make the lives of digital workers better by elevating work to the outcomes that matter.”

Currently, PagerDuty users include the likes of GE, Capital One, IBM, Spotify and virtually every other software company you’ve ever heard of. In total, more than 10,500 enterprises now use the service. While it’s best known for its alerting capabilities, PagerDuty has expanded well beyond that over the years, though alerting remains a core part of its service. Earlier this year, for example, the company announced its new AIOps services that aim to help businesses reduce the amount of noisy and unnecessary alerts. I’m sure plenty of engineers are quite happy about that (and now sleep better).

Aug 29, 2018
--

Google takes a step back from running the Kubernetes development infrastructure

Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community. These credits will be split over three years and are meant to cover the infrastructure costs of building, testing and distributing the Kubernetes software.

Why does this matter? Until now, Google hosted virtually all the cloud resources that support the project, including its CI/CD testing infrastructure, container downloads and DNS services, on its own cloud. But Google is now taking a step back. With the Kubernetes community reaching a state of maturity, Google is transferring all of this to the community.

Between the testing infrastructure and hosting container downloads, the Kubernetes project regularly runs more than 150,000 containers on 5,000 virtual machines, so the cost of running these systems quickly adds up. The Kubernetes container registry has served almost 130 million downloads since the launch of the project.

It’s also worth noting that the CNCF now includes a wide range of members that typically compete with each other. We’re talking Alibaba Cloud, AWS, Microsoft Azure, Google Cloud, IBM Cloud, Oracle, SAP and VMware, for example. All of these profit from the work of the CNCF and the Kubernetes community. Google doesn’t say so outright, but it’s fair to assume that it wanted others to shoulder some of the burdens of running the Kubernetes infrastructure, too. Similarly, some of the members of the community surely didn’t want to be so closely tied to Google’s infrastructure, either.

“By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project operations,” Google Kubernetes Engine product manager William Deniss writes in today’s announcement. He also notes that a number of Google’s engineers will still be involved in running the Kubernetes infrastructure.

“Google’s significant financial donation to the Kubernetes community will help ensure that the project’s constant pace of innovation and broad adoption continue unabated,” said Dan Kohn, the executive director of the CNCF. “We’re thrilled to see Google Cloud transfer management of the Kubernetes testing and infrastructure projects into contributors’ hands — making the project not just open source, but openly managed, by an open community.”

It’s unclear whether the project plans to take some of the Google-hosted infrastructure and move it to another cloud, but it could definitely do so — and other cloud providers could step up and offer similar credits, too.

Aug 09, 2018
--

IBM teams with Maersk on new blockchain shipping solution

IBM and shipping giant Maersk have been working together for the last year developing a blockchain-based shipping solution called TradeLens. Today they moved the project from beta into limited availability.

Marie Wieck, GM for IBM Blockchain, says the product provides a way to digitize every step of the global trade workflow, transforming it into a real-time communication and visual data sharing tool.

TradeLens was developed jointly by the two companies, with IBM providing the underlying blockchain technology and Maersk bringing the worldwide shipping expertise. It involves three components: the blockchain, which provides a mechanism for tracking goods from factory or field to delivery; APIs for others to build new applications on top of the platform the two companies have built; and a set of standards to facilitate data sharing among the different entities in the workflow, such as customs, ports and shipping companies.
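TradeLens’s real API and data model aren’t public in this article, but the tracking component can be sketched conceptually: each custody event is appended to a shared ledger and chained to its predecessor by hash. Here is a purely illustrative Python version; the container ID, event names and actors are made up.

```python
# Hypothetical sketch of the kind of shipping event a TradeLens-style
# system records; the actual TradeLens schema is not documented here.
import hashlib
import json
import time

def make_event(container_id, location, status, actor, prev_hash):
    """Create a ledger entry chained to the previous one by hash."""
    event = {
        "container_id": container_id,
        "location": location,
        "status": status,          # e.g. "gate-in", "customs-cleared", "loaded"
        "actor": actor,            # port, customs authority, carrier, etc.
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()).hexdigest()
    return event

genesis = make_event("MSKU1234567", "Shanghai", "gate-in", "Port of Shanghai", "0")
loaded  = make_event("MSKU1234567", "Shanghai", "loaded", "Maersk", genesis["hash"])
```

Because each record embeds the hash of the one before it, any participant (or auditor) can recompute the chain and detect tampering, which is the real-time visibility Wieck describes below.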

Wieck says the blockchain really changes how companies have traditionally tracked shipped goods. While many of the entities in the system have digitized the process, the data they have has been trapped in silos and previous attempts at sharing like EDI have been limited. “The challenge is they tend to think of a linear flow and you really only have visibility one [level] up and one down in your value chain,” she said.

The blockchain provides a couple of obvious advantages over previous methods. For starters, she says, it’s safer: the data is distributed and digital encryption is built in. The greatest advantage, though, is the visibility it provides. Every participant can check any aspect of the flow in real time, and an auditor or other authority can easily track the entire process from start to finish by clicking on a block in the blockchain instead of requesting data from each entity manually.

While she says it won’t entirely prevent fraud, it does help reduce it by putting more eyeballs onto the process. “If you had fraudulent data at start, blockchain won’t help prevent that. What it does help with is that you have multiple people validating every data set and you get greater visibility when something doesn’t look right,” she said.

As for the APIs, she sees the system becoming a shipping information platform. Developers can build on top of that, taking advantage of the data in the system to build even greater efficiencies. The standards help pull it together and align with APIs, such as providing a standard Bill of Lading. They are starting by incorporating existing industry standards, but are also looking for gaps that slow things down to add new standard approaches that would benefit everyone in the system.

So far, the companies have 94 entities in 300 locations around the world using TradeLens including customs authorities, ports, cargo shippers and logistics companies. They are opening the program to limited availability today with the goal of a full launch by the end of this year.

Wieck ultimately sees TradeLens as a way to facilitate trade by building in trust, the end goal of any blockchain product. “By virtue of already having an early adopter program, and having coverage of 300 trading locations around the world, it is a very good basis for the global exchange of information. And I personally think visibility creates trust, and that can help in a myriad of ways,” she said.

Jul 31, 2018
--

The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open-source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open-source projects out there right now. It sits at the intersection of a number of industry trends, like containers, microservices and serverless computing, and makes it easier for enterprises to embrace them. Istio now has more than 200 contributors and the code has seen more than 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
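Istio expresses canary rules as declarative routing configuration rather than code, but the underlying idea, weighted traffic splitting between a stable version and a canary, can be sketched in a few lines of Python. The service names and weights here are illustrative, not Istio’s configuration format.

```python
# Conceptual sketch of weighted canary routing, the idea behind the
# Istio feature described above; real Istio uses YAML routing rules.
import random

ROUTES = [
    ("reviews-v1", 95),  # stable version gets 95% of traffic
    ("reviews-v2", 5),   # canary gets 5%
]

def pick_backend(routes):
    """Choose a backend version in proportion to its traffic weight."""
    total = sum(weight for _, weight in routes)
    r = random.uniform(0, total)
    for backend, weight in routes:
        r -= weight
        if r <= 0:
            return backend
    return routes[-1][0]

print(pick_backend(ROUTES))  # usually "reviews-v1", occasionally "reviews-v2"
```

In a mesh, this decision happens in the sidecar proxy on every request, so operators can dial the canary’s weight up or down without touching application code.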

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open-source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem a lot of businesses are facing today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and while we hadn’t yet declared 1.0, we heard a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open-source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open-source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

May 15, 2018
--

Veridium Labs teams with IBM and Stellar on carbon credit blockchain

Veridium Labs has been trying to solve a hard problem about how to trade carbon offset credits in an open market. The trouble is that more complex credits don’t have a simple value like a stock, and there hasn’t been a formula to determine their individual value. That has made accounting for them and selling them on open exchanges difficult or impossible. It’s a problem Veridium believes they can finally solve with tokens and the blockchain.

This week the company announced a partnership with IBM to sell carbon offset tokens on the Stellar blockchain. Each company has a role here with Veridium setting up the structure and determining the value formula. Stellar acts as the digital ledger for the transactions and IBM will handle the nuts and bolts of the trade activity of buying, selling and managing the tokens.

Todd Lemons, CEO and co-founder of Veridium Labs, which is part of a larger environmental company called EnVision Corporation, says that even companies with the best of intentions have struggled with how to account for the complex carbon credits. There are simpler offset credits that are sold on exchanges, but ones that seek to measure the impact of a product through the entire supply chain are much more difficult to determine. As one example, how does a company making a candy bar source its cocoa and sugar? It’s not always easy to determine through a web of suppliers and sellers.

Moving forward

To partly solve this problem, another EnVision company, InfiniteEARTH, developed a way to account for them called the Redd+ forest carbon accounting methodology. It is widely accepted, to the point that it has been incorporated into the Paris Climate Agreement, but it doesn’t provide a way to turn the credits into what are called fungible assets, that is, easily tradable ones. The problem is that the value of a given credit shifts according to the overall environmental impact of producing a good and getting it to market. That value can change according to the product.

Jared Klee, blockchain manager for token initiatives at IBM, says that buying and accounting for Redd+ credits on the company balance sheet has been a huge challenge for organizations. “It’s a major pain point. Today Redd+ credits are over the counter assets and there is no central exchange,” he said. That means they are essentially one-off transactions and the company is forced to hold these assets on the books with no easy way to account for their actual value. That often results in a big loss, he says, and companies are looking for ways to comply in a more cost-efficient way.

Putting it together

The three companies — Veridium, IBM and Stellar — have come together to solve this problem by creating a digital token that acts as a layer on top of the carbon credit to give it a value and make it easier to account for. In addition, the tokens can be bought and sold on the blockchain.

The blockchain provides the usual advantages of a decentralized record-keeping system: immutable records and encrypted transactions.

Veridium is working on the underlying formula for token valuation, which measures “carbon density per dollar times product group,” Lemons explained. “That can be coded into a token and carried out automatically,” he added. They are working with various world bodies like the United Nations and the World Resources Institute to help figure out the values for each product group.
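Since the real formula is still being worked out, the following Python sketch is speculative: it only illustrates the shape of a “carbon density per dollar times product group” calculation, with entirely made-up multipliers and numbers.

```python
# Highly speculative sketch of the valuation idea Lemons describes;
# the actual formula was still under development with the UN and the
# World Resources Institute at the time of writing.

# Hypothetical per-product-group multipliers (illustrative values only).
PRODUCT_GROUP_FACTOR = {
    "confectionery": 1.4,   # complex agricultural supply chain
    "electronics": 2.1,
    "apparel": 1.2,
}

def credits_required(spend_usd, carbon_density, product_group):
    """Offset tokens needed for a purchase, under the assumed formula.

    carbon_density: kg CO2-equivalent per dollar of spend for this supplier.
    """
    return spend_usd * carbon_density * PRODUCT_GROUP_FACTOR[product_group]

print(credits_required(10_000, 0.5, "confectionery"))  # 7000.0 token units
```

The point of encoding such a rule in the token itself is that valuation and settlement can then happen automatically on-chain instead of through one-off, over-the-counter negotiations.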

All of the details are still being worked out as the idea works its way through the various regulatory bodies, but the companies hope to be making the tokens available for sale some time later this year.

Ultimately this is about finding ways to help businesses comply with environmental initiatives and remove some of the complexity inherent in that process today. “We hope the tokens will provide less friction and a much higher adoption rate,” Lemons said.

Apr 26, 2018
--

IBM introduces a blockchain to verify the jewelry supply chain

Every time I talk to someone about the viability of blockchain, I get challenged to show a real project beyond the obvious bitcoin use case. IBM has been working to build large enterprise blockchain projects, and today it offered an irrefutable example that it has dubbed TrustChain, a blockchain that proves the provenance of jewelry by following the supply chain from mine to store.

As you might expect, TrustChain is built on IBM blockchain technology and includes a consortium of companies involved in every step of the supply chain: Asahi Refining, the precious metals refiner; Helzberg Diamonds, a U.S. jewelry retailer; LeachGarner, a precious metals supplier; and the Richline Group, a global jewelry manufacturer. It even includes some third-party verification with UL Labs for the skeptical among you.

“What we are announcing and bringing forward has been in the works for some time. It’s the first end-to-end industry capability on blockchain that has its core in trust,” Jason Kelley, the GM of blockchain services at IBM told TechCrunch.

While there are trust mechanisms in place to ensure the authenticity of jewelry, they tend to be piecemeal, and this one is designed to be more comprehensive. One of the primary benefits of using blockchain in this instance is efficiency: instead of shuffling paper, the process becomes much more digital, which reduces a lot (although not all) of the manual paper-pushing along the way.


Of course, just because it’s on the blockchain doesn’t mean there won’t be attempts to circumvent the system, but the TrustChain has a mechanism for participants to check the validity of each transaction, each step of the way. “If there is a dispute, instead of calling and following back through the process in a more manual way, you can click on a trusted chain, and you’re able to see what happened immediately. That reduces the number of steps in the process, and speeds up what has been a paper-laden and manual effort,” Kelley explained.
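TrustChain’s implementation isn’t detailed here, but the validity check Kelley describes amounts to walking the custody chain and verifying each hand-off. A minimal illustrative Python sketch, using the consortium members as example holders:

```python
# Hypothetical provenance check; TrustChain's real mechanism on IBM's
# blockchain platform is not detailed in the article.

def verify_provenance(chain):
    """chain: list of custody records ordered from mine to store."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["received_from"] != prev["holder"]:
            return False, f"break in custody at {curr['holder']}"
    return True, "custody chain intact"

chain = [
    {"holder": "Asahi Refining",    "received_from": "mine"},
    {"holder": "LeachGarner",       "received_from": "Asahi Refining"},
    {"holder": "Richline Group",    "received_from": "LeachGarner"},
    {"holder": "Helzberg Diamonds", "received_from": "Richline Group"},
]
print(verify_provenance(chain))  # (True, 'custody chain intact')
```

On a permissioned ledger, every consortium member can run this kind of check independently, which is why a dispute becomes a click rather than a round of phone calls.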

He fully recognizes the hype surrounding blockchain and that it’s the latest shiny tech thing, but he says if you set aside the name, the capability is really what’s important here. “Now we can share this [data] in a permissioned network and we can be sure it’s accurate,” he said.

The notion of the permissioned blockchain is an important one here. It means that you have to be allowed onto the blockchain to participate, and everyone on the blockchain has to agree to let any new members on. “That’s what’s exciting with TrustChain. Each point in the supply chain has bought into the consortium,” he said.

He acknowledges that errors could be introduced in any system, whether intentional or not, but he says the beauty of this system is that blockchain is a team sport and many, many eyeballs are acting as a check for each step along the way. If a problem is found, it can be fixed through the same level of consensus.


Kelley says this level of trust is increasingly essential because consumers are demanding transparency in the jewelry they buy. They want to be sure the diamond or precious metal in the jewelry was not mined with exploited labor and was produced in a sustainable way. Research has found consumers are willing to pay more for such proof.

By next year, you could be able to pull out your smartphone, scan a QR code on the diamond you want to buy and see a visual of the entire supply chain right on the screen. Kelley says such an interface is in the works for the consumer side.

The blockchain is clearly still in its early days, and it can’t solve every problem, but systems like this could help prove that there are actual viable, scalable use cases for it.

Mar 22, 2018
--

IBM can’t stop milking the Watson brand

More than seven years after IBM Watson beat a couple of human Jeopardy! champions, the company has continued to make hay with the brand. Watson, at its core, is simply an artificial intelligence engine, and while that’s not trivial by any means, neither is it the personified intelligence that IBM’s TV commercials would have the less technically savvy believe.

These commercials contribute to this unrealistic idea that humans can talk to machines in this natural fashion. You’ve probably seen some. They show this symbol talking to humans in a robotic voice explaining its capabilities. Some of the humans include Bob Dylan, Serena Williams and Stephen King.

In spite of devices like Alexa and Google Home, we certainly don’t have machines giving us detailed explanations, at least not yet.

IBM would probably be better served aiming its commercials at the enterprises it sells to, rather than the general public, who may be impressed by a talking box having a conversation with a star. However, those of us who have at least some understanding of the capabilities of such tech, and those who buy it, don’t need such bells and whistles. We need much more practical applications. While chatting with Serena Williams about competitiveness may be entertaining, it isn’t really driving home the actual value proposition of this tech for business.

The trouble with using Watson as a catch-all phrase is that it reduces the authenticity of the core technology behind it. It’s not as though IBM is alone in trying to personify its AI, though. We’ve seen the same thing from Salesforce with Einstein, Microsoft with Cortana and Adobe with Sensei. It seems these large companies can’t deliver artificial intelligence without hiding it behind a brand.

The thing is, though, this is not a consumer device like the Amazon Echo or Google Home. It’s a set of technologies like deep learning, computer vision and natural language processing, but that’s hard to sell, so these companies put a brand on it as though it were a single entity.

Just this week, at the IBM Think Conference in Las Vegas, we saw a slew of announcements from IBM that took on the Watson brand. That included Watson Studio, Watson Knowledge Catalog, Watson Data Kits and Watson Assistant. While they were at it, they also announced they were beefing up their partnership with Apple via — you guessed it — Watson and Apple Core ML. (Do you have anything without quite so much Watson in it?)

Marketers gonna market and there is little we can do, but when you overplay your brand, you may be doing your company more harm than good. IBM has saturated the Watson brand, and might not be reaching the intended audience as a result.

Mar 19, 2018
--

Apple, IBM add machine learning to partnership with Watson-Core ML coupling

Apple and IBM may seem like an odd couple, but the two companies have been working closely together for several years now. That has involved IBM sharing its enterprise expertise with Apple and Apple sharing its design sense with IBM. The companies have actually built hundreds of enterprise apps running on iOS devices. Today, they took that friendship a step further when they announced they were providing a way to combine IBM Watson machine learning with Apple Core ML to make the business apps running on Apple devices all the more intelligent.

The way it works is that a customer builds a machine learning model using Watson, taking advantage of data in an enterprise repository to train the model. For instance, a company may want to help field service techs point their iPhone camera at a machine and identify the make and model to order the correct parts. You could potentially train a model to recognize all the different machines using Watson’s image recognition capability.

The next step is to convert that model into Core ML and include it in your custom app. Apple introduced Core ML at the Worldwide Developers Conference last June as a way to make it easy for developers to move machine learning models from popular model building tools like TensorFlow, Caffe or IBM Watson to apps running on iOS devices.

After creating the model, you run it through the Core ML converter tools and insert it in your Apple app. The agreement with IBM makes it easier to do this using IBM Watson as the model building part of the equation. This allows the two partners to make the apps created under the partnership even smarter with machine learning.
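As a rough sketch of that conversion step, here is what the Core ML side might look like using the coremltools converter of that era, with a Keras model standing in for the Watson-trained classifier. The file names and class labels are hypothetical, and the exact Watson export path isn’t documented in the article.

```python
# Sketch of the Core ML conversion step; the Keras model here stands in
# for a Watson-trained classifier, and all names are illustrative.
import coremltools

# Assume 'machine_classifier.h5' is an image model trained (e.g., via
# Watson) to recognize equipment models from iPhone photos.
mlmodel = coremltools.converters.keras.convert(
    "machine_classifier.h5",
    input_names=["image"],
    image_input_names="image",                  # expose the input as an image to iOS
    class_labels=["model_a100", "model_b200"],  # hypothetical machine models
)
mlmodel.short_description = "Identifies field equipment from a photo"
mlmodel.save("MachineClassifier.mlmodel")       # drop into the Xcode project
```

Once the .mlmodel file is in the app bundle, inference runs entirely on the device, which is what enables the offline, edge-side classification Naghshineh describes below.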

“Apple developers need a way to quickly and easily build these apps and leverage the cloud where it’s delivered. [The partnership] lets developers take advantage of the Core ML integration,” Mahmoud Naghshineh, general manager for IBM Partnerships and Alliances explained.

To make it even easier, IBM also announced a cloud console to simplify the connection between the Watson model building process and inserting that model in the application running on the Apple device.

Over time, the app can share data back with Watson and improve the machine learning algorithm running on the edge device in a classic device-cloud partnership. “That’s the beauty of this combination. As you run the application, it’s real time and you don’t need to be connected to Watson, but as you classify different parts [on the device], that data gets collected and when you’re connected to Watson on a lower [bandwidth] interaction basis, you can feed it back to train your machine learning model and make it even better,” Naghshineh said.

The point of the partnership has always been to use data and analytics to build new business processes, by taking existing approaches and reengineering them for a touch screen.

This adds a level of machine learning to that original goal, moving it forward to take advantage of the latest tech. “We are taking this to the next level through machine learning. We are very much on that path and bringing improved accelerated capabilities and providing better insight to [give users] a much greater experience,” Naghshineh said.
