Aug 17, 2018

Incentivai launches to simulate how hackers break blockchains

Cryptocurrency projects can crash and burn if developers don’t predict how humans will abuse their blockchains. Once a decentralized digital economy is released into the wild and the coins start to fly, it’s tough to implement fixes to the smart contracts that govern them. That’s why Incentivai is coming out of stealth today with its artificial intelligence simulations that test not just for security holes, but for how greedy or illogical humans can crater a blockchain community. Crypto developers can use Incentivai’s service to fix their systems before they go live.

“There are many ways to check the code of a smart contract, but there’s no way to make sure the economy you’ve created works as expected,” says Incentivai’s solo founder Piotr Grudzień. “I came up with the idea to build a simulation with machine learning agents that behave like humans so you can look into the future and see how your system is likely to behave.”

Incentivai will graduate from Y Combinator next week and already has a few customers. They can either pay Incentivai to audit their project and produce a report, or use its hosted AI simulation tool as software-as-a-service. The first blockchains it has checked will deploy in a few months, and the startup has released some case studies to prove its worth.

“People do theoretical work or logic to prove that under certain conditions, this is the optimal strategy for the user. But users are not rational. There’s lots of unpredictable behavior that’s difficult to model,” Grudzień explains. Incentivai explores those illogical trading strategies so developers don’t have to tear out their hair trying to imagine them.

Protecting crypto from the human x-factor

There’s no rewind button in the blockchain world. The immutable and irreversible qualities of this decentralized technology prevent inventors from meddling with it once in use, for better or worse. If developers don’t foresee how users could make false claims and bribe others to approve them, or take other actions to screw over the system, they might not be able to thwart the attack. But given the right open-ended incentives (hence the startup’s name), AI agents will try everything they can to earn the most money, exposing the conceptual flaws in the project’s architecture.

“The strategy is the same as what DeepMind does with AlphaGo, testing different strategies,” Grudzień explains. He developed his AI chops earning a master’s at Cambridge before working on natural language processing research for Microsoft.

Here’s how Incentivai works. First, a developer writes the smart contracts they want to test, say for a product that sells insurance on the blockchain. Incentivai tells its AI agents what to optimize for and lays out all the possible actions they could take. The agents can have different identities, like a hacker trying to grab as much money as they can, a faker filing false claims or a speculator who cares about maximizing coin price while ignoring the product’s functionality.

Incentivai then tweaks these agents to make them more or less risk averse, or care more or less about whether they disrupt the blockchain system in its totality. The startup monitors the agents and pulls out insights about how to change the system.

For example, Incentivai might learn that uneven token distribution leads to pump and dump schemes, so the developer should more evenly divide tokens and give fewer to early users. Or it might find that an insurance product where users vote on which claims should be approved needs to raise the bond that voters stake when verifying a claim, so that it’s not profitable for voters to take bribes from fraudsters.
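That bond example boils down to expected-value arithmetic. Here is a minimal sketch of the calculation; the function name, numbers and probabilities are hypothetical illustrations, not anything from Incentivai’s actual models:

```python
def bribe_is_profitable(bribe, bond, catch_odds):
    """Expected profit for a voter offered a bribe to approve a false claim.

    The voter pockets the bribe if the fraud slips through, but forfeits
    the bond they staked if the false claim is caught.
    """
    expected_gain = bribe * (1 - catch_odds)
    expected_loss = bond * catch_odds
    return expected_gain - expected_loss > 0

# With a cheap bond, taking the bribe pays off on average...
assert bribe_is_profitable(bribe=100, bond=50, catch_odds=0.5)
# ...but raising the bond flips the incentive for the same bribe.
assert not bribe_is_profitable(bribe=100, bond=150, catch_odds=0.5)
```

Raising the bond until the second case holds for any plausible bribe is exactly the kind of parameter change such a simulation can suggest.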

Grudzień has made some predictions about his own startup too. He thinks that if the use of decentralized apps rises, there will be a lot of startups trying to copy his approach to security services. He says there are already some doing token engineering audits, incentive design and consultancy, but he hasn’t seen anyone else with a functional simulation product that’s produced case studies. “As the industry matures, I think we’ll see more and more complex economic systems that need this.”

Aug 15, 2018

Oracle open sources Graphpipe to standardize machine learning model deployment

Oracle, a company not exactly known for having the best relationship with the open source community, is releasing a new open source tool today called Graphpipe, which is designed to simplify and standardize the deployment of machine learning models.

The tool consists of a standard protocol for transmitting tensor data over the network, plus a set of libraries, clients and servers that follow that standard.

Vish Abrams, whose background includes helping develop OpenStack at NASA and later helping launch Nebula, an OpenStack startup, in 2011, is leading the project. He says as his team dug into the machine learning workflow, they found a gap. While teams spend lots of energy developing a machine learning model, it’s hard to actually deploy the model for customers to use. That’s where Graphpipe comes in.

He points out that it’s common with newer technologies like machine learning for people to get caught up in the hype. Even though the development process keeps improving, he says that people often don’t think about deployment.

“Graphpipe is what’s grown out of our attempt to really improve deployment stories for machine learning models, and to create an open standard around having a way of doing that to improve the space,” Abrams told TechCrunch.

As Oracle dug into this, they identified three main problems. For starters, there is no standard way to serve APIs, leaving you to use whatever your framework provides. Next, there is no standard deployment mechanism, which leaves developers to build custom ones every time. Finally, they found existing methods leave performance as an afterthought, which in machine learning could be a major problem.

“We created Graphpipe to solve these three challenges. It provides a standard, high-performance protocol for transmitting tensor data over the network, along with simple implementations of clients and servers that make deploying and querying machine learning models from any framework a breeze,” Abrams wrote in a blog post announcing the release of Graphpipe.
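The core idea, a single framework-neutral envelope for tensor data that any client and any model server can agree on, can be illustrated with a toy encoding. This sketch uses JSON purely for readability; it is not Graphpipe’s actual wire format, and the function names are made up:

```python
import json

def encode_tensor(shape, values):
    """Pack a tensor into a framework-neutral envelope (toy JSON stand-in)."""
    return json.dumps({"shape": shape, "data": values})

def decode_tensor(payload):
    """Unpack the envelope on the serving side, whatever framework runs there."""
    msg = json.loads(payload)
    return msg["shape"], msg["data"]

# A client and a model server that agree only on the envelope can interoperate,
# no matter which framework produced or consumes the tensor.
payload = encode_tensor([2, 2], [1.0, 2.0, 3.0, 4.0])
shape, data = decode_tensor(payload)
assert shape == [2, 2] and data == [1.0, 2.0, 3.0, 4.0]
```

Standardizing that envelope, rather than each framework inventing its own, is the gap Abrams describes Graphpipe filling.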

The company decided to make this a standard and to open source it to try and move machine learning model deployment forward. “Graphpipe sits on that intersection between solving a business problem and pushing the state of the art forward, and I think personally, the best way to do that is by having an open source approach. Often, if you’re trying to standardize something without going for the open source bits, what you end up with is a bunch of competing technologies,” he said.

Abrams acknowledged the tension that has existed between Oracle and the open source community over the years, but says they have been working to change the perception recently, pointing to contributions to Kubernetes and Oracle FN, their open source serverless functions platform, as examples. Ultimately he says, if the technology is interesting enough, people will give it a chance, regardless of who is putting it out there. And of course, once it’s out there, if a community builds around it, they will adapt and change it as open source projects tend to do. Abrams hopes that happens.

“We care more about the standard becoming quite broadly adopted than we do about our particular implementation of it, because that makes it easier for everyone. It’s really up to the community to decide that this is valuable and interesting,” he said.

Graphpipe is available starting today on the Oracle GitHub Graphpipe page.

Aug 13, 2018

New Uber feature uses machine learning to sort business and personal rides

Uber announced a new program today called Profile Recommendations that takes advantage of machine intelligence to reduce user error when switching between personal and business accounts.

It’s not unusual for a person to have both types of accounts. When you’re out and about, it’s easy to forget to switch between them when appropriate. Uber wants to help by recommending the correct one.

“Using machine learning, Uber can predict which profile and corresponding payment method an employee should be using, and make the appropriate recommendation,” Ronnie Gurion, GM and Global Head of Uber for Business wrote in a blog post announcing the new feature.

Uber has been analyzing a dizzying amount of trip data for so long, it can now (mostly) understand the purpose of a given trip based on the details of your request. While it’s certainly not perfect because it’s not always obvious what the purpose is, Uber believes it can determine the correct intention 80 percent of the time. For that remaining 20 percent, when it doesn’t get it right, Uber is hoping to simplify corrections too.
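A heavily simplified sketch of that kind of prediction is a score built from coarse trip details. The features, weights and threshold below are invented for illustration; Uber has not disclosed how its model actually works:

```python
def recommend_profile(hour, is_weekday, dropoff_is_office):
    """Guess business vs. personal from coarse trip details (toy heuristic)."""
    score = 0
    if is_weekday:
        score += 1
    if 8 <= hour <= 18:          # during typical working hours
        score += 1
    if dropoff_is_office:
        score += 2
    return "business" if score >= 3 else "personal"

# A weekday morning ride to the office looks like business travel...
assert recommend_profile(hour=9, is_weekday=True, dropoff_is_office=True) == "business"
# ...a late weekend ride elsewhere looks personal.
assert recommend_profile(hour=23, is_weekday=False, dropoff_is_office=False) == "personal"
```

A production system would learn such weights from trip history rather than hard-coding them, which is presumably where the reported 80 percent accuracy comes from.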

Photo: Uber

Business users can now also assign trip reviewers — managers or other employees who understand the employee’s usage patterns, and can flag questionable rides. Instead of starting an email thread or complicated bureaucratic process to resolve an issue, the employee can now see these flagged rides and resolve them right in the app. “This new feature not only saves the employee’s and administrator’s time, but it also cuts down on delays associated with closing out reports,” Gurion wrote in the blog post announcement.

Uber also announced that it’s supporting a slew of new expense-reporting software to simplify integration with these systems. It currently has integrations with Certify, Chrome River, Concur and Expensify, and will add support for Expensya, Happay, Rydoo, Zeno by Serko and Zoho Expense starting in September.

All of this should help business account holders deal with Uber expenses more efficiently, while integrating with many of the leading expense programs to move data smoothly from Uber to a company’s regular record-keeping systems.

Jul 30, 2018

Google Calendar makes rescheduling meetings easier

Nobody really likes meetings — and the few people who do like them are the ones with whom you probably don’t want to have meetings. So when you’ve reached your fill and decide to reschedule some of those obligations, the usual process of trying to find a new meeting time begins. Thankfully, the Google Calendar team has heard your sighs of frustration and built a new tool that makes rescheduling meetings much easier.

Starting in two weeks, on August 13th, every guest will be able to propose a new meeting time and attach to that update a message to the organizer to explain themselves. The organizer can then review and accept or deny that new time slot. If the other guests have made their calendars public, the organizer can also see the other attendees’ availability in a new side-by-side view to find a new time.

What’s a bit odd here is that this is still mostly a manual feature. To find meeting slots to begin with, Google already employs some of its machine learning smarts to find the best times. This new feature doesn’t seem to employ the same algorithms to propose dates and times for rescheduled meetings.

This new feature will work across G Suite domains and also with Microsoft Exchange. It’s worth noting, though, that this new option won’t be available for meetings with more than 200 attendees or for all-day events.

Jul 25, 2018

Google is baking machine learning into its BigQuery data warehouse

There are still a lot of obstacles to building machine learning models, and one of them is that developers often have to move a lot of data back and forth between their data warehouses and wherever they are building their models. Google is now making this part of the process a bit easier for the developers and data scientists in its ecosystem with BigQuery ML, a new feature that builds some machine learning functionality right into its BigQuery data warehouse.

Using BigQuery ML, developers can build models using linear and logistic regression right inside their data warehouse, without having to transfer data back and forth as they build and fine-tune their models. And all they have to do to build these models and get predictions is write a bit of SQL.

Moving data doesn’t sound like it should be a big issue, but developers often spend a lot of their time on this kind of grunt work — time that would be better spent on actually working on their models.

BigQuery ML also promises to make it easier to build these models, even for developers who don’t have a lot of experience with machine learning. To get started, developers can use what’s basically a variant of standard SQL to say what kind of model they are trying to build and what the input data is supposed to be. From there, BigQuery ML then builds the model and allows developers to almost immediately generate predictions based on it. And they won’t even have to write any code in R or Python.
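The two statements involved follow BigQuery ML’s documented pattern: a `CREATE MODEL` statement that names the model type and supplies training data via a `SELECT`, and an `ML.PREDICT` call to score new rows. The dataset, table, model and column names below are hypothetical placeholders:

```python
# Sketch of the two SQL statements a developer would run in BigQuery.
# All identifiers (mydataset, churn_model, customers, ...) are made up.
create_model = """
CREATE MODEL `mydataset.churn_model`
OPTIONS(model_type='logistic_reg') AS
SELECT churned AS label, tenure_days, monthly_spend
FROM `mydataset.customers`
"""

predict = """
SELECT *
FROM ML.PREDICT(MODEL `mydataset.churn_model`,
                TABLE `mydataset.new_customers`)
"""

assert "CREATE MODEL" in create_model and "ML.PREDICT" in predict
```

Note that the training data never leaves the warehouse: the `SELECT` inside `CREATE MODEL` is the entire data pipeline.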

These new features are now available in beta.

Jul 23, 2018

SessionM customer loyalty data aggregator snags $23.8M investment

SessionM announced a $23.8 million Series E investment led by Salesforce Ventures. A bushel of existing investors including Causeway Media Partners, CRV, General Atlantic, Highland Capital and Kleiner Perkins Caufield & Byers also contributed to the round. The company has now raised over $97 million.

At its core, SessionM aggregates loyalty data for brands to help them understand their customers better, says company co-founder and CEO Lars Albright. “We are a customer data and engagement platform that helps companies build more loyal and profitable relationships with their consumers,” he explained.

Essentially that means they are pulling data from a variety of sources and helping brands offer customers more targeted incentives, offers and product recommendations. “We give [our users] a holistic view of that customer and what motivates them,” he said.

Screenshot: SessionM (cropped)

To achieve this, SessionM takes advantage of machine learning to analyze the data stream and integrates with partner platforms like Salesforce, Adobe and others. This certainly fits in with Adobe’s goal to build a customer service experience system of record and Salesforce’s acquisition of Mulesoft in March to integrate data from across an organization, all in the interest of better understanding the customer.

When it comes to using data like this, especially with the advent of GDPR in the EU in May, Albright recognizes that companies need to be more careful with data, and that it has really enhanced the sensitivity around stewardship for all data-driven businesses like his.

“We’ve been at the forefront of adopting the right product requirements and features that allow our clients and businesses to give their consumers the necessary control to be sure we’re complying with all the GDPR regulations,” he explained.

The company was not discussing valuation or revenue. Its most recent round prior to today’s announcement was a Series D in 2016 for $35 million, also led by Salesforce Ventures.

SessionM, which was founded in 2011, has around 200 employees with headquarters in downtown Boston. Customers include Coca-Cola, L’Oreal and Barney’s.

Jul 18, 2018

Swim.ai raises $10M to bring real-time analytics to the edge

Once upon a time, it looked like cloud-based services would become the central hub for analyzing all IoT data. But it didn’t quite turn out that way, because most IoT solutions simply generate too much data to do this effectively and the round trip to the data center doesn’t work for applications that have to react in real time. Hence the advent of edge computing, which is spawning its own ecosystem of startups.

Among those is Swim.ai, which today announced that it has raised a $10 million Series B funding round led by Cambridge Innovation Capital, with participation from Silver Creek Ventures and Harris Barton Asset Management. The round also included a strategic investment from Arm, the chip design firm you may still remember as ARM (but don’t write it like that or their PR department will promptly email you). This brings the company’s total funding to about $18 million.

Swim.ai has an interesting take on edge computing. The company’s SWIM EDX product combines local data processing and analytics with local machine learning. In a traditional approach, the edge devices collect the data, maybe perform some basic operations against it to bring down the bandwidth cost, and then ship it to the cloud, where the hard work is done and where, if you are doing machine learning, the models are trained. Swim.ai argues that this doesn’t work for applications that need to respond in real time. Swim.ai, however, performs the model training on the edge device itself by pulling in data from all connected devices. It then builds a digital twin for each of these devices and uses that twin to self-train its models on this data.
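The digital-twin idea can be sketched as a tiny per-device model that learns from its own data stream on the edge, rather than shipping raw readings to the cloud. This is a conceptual toy, not Swim.ai’s EDX implementation; the class and its behavior are invented for illustration:

```python
class DeviceTwin:
    """Stand-in digital twin: keeps a running average of one device's
    readings on the edge and flags values that drift far from it."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def observe(self, value):
        # Incremental (online) mean update: no raw history is stored or shipped.
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def is_surprising(self, value, tolerance=10.0):
        return abs(value - self.mean) > tolerance

# The twin trains itself from the device's own stream...
twin = DeviceTwin()
for reading in [20.0, 21.0, 19.0, 20.0]:
    twin.observe(reading)
# ...and can then react locally, with no cloud round trip.
assert not twin.is_surprising(22.0)
assert twin.is_surprising(95.0)
```

The point of the online update is latency: the model is always current, and a decision about a new reading never has to wait on a data center.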

“Demand for the EDX software is rapidly increasing, driven by our software’s unique ability to analyze and reduce data, share new insights instantly peer-to-peer – locally at the ‘edge’ on existing equipment. Efficiently processing edge data and enabling insights to be easily created and delivered with the lowest latency are critical needs for any organization,” said Rusty Cumpston, co-founder and CEO of Swim.ai. “We are thrilled to partner with our new and existing investors who share our vision and look forward to shaping the future of real-time analytics at the edge.”

The company doesn’t disclose any current customers, but it is focusing its efforts on manufacturers, service providers and smart city solutions. Update: Swim.ai did tell us about two customers after we published this story: The City of Palo Alto and Itron.

Swim.ai plans to use its new funding to launch a new R&D center in Cambridge, UK, expand its product development team and tackle new verticals and geographies with an expanded sales and marketing team.

Jul 12, 2018

Datadog launches Watchdog to help you monitor your cloud apps

Your typical cloud monitoring service integrates with dozens of services and provides you a pretty dashboard and some automation to help you keep tabs on how your applications are doing. Datadog has long done that, but today it is adding a new service called Watchdog, which uses machine learning to automatically detect anomalies for you.

The company notes that a traditional monitoring setup involves defining your parameters based on how you expect the application to behave and then setting up dashboards and alerts to monitor them. Given the complexity of modern cloud applications, that approach has its limits, so an additional layer of automation becomes necessary.

That’s where Watchdog comes in. The service observes all of the performance data it can get its paws on, learns what’s normal, and then provides alerts when something unusual happens and — ideally — provides insights into where exactly the issue started.
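The “learn what’s normal, alert on the unusual” loop can be illustrated with a toy z-score detector. Datadog hasn’t published Watchdog’s algorithms, so this only sketches the general idea with made-up latency numbers:

```python
import statistics

def find_anomalies(series, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the
    series mean -- a crude stand-in for a learned baseline of 'normal'."""
    mean = statistics.mean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # a perfectly flat series has no anomalies to flag
    return [i for i, x in enumerate(series) if abs(x - mean) / stdev > threshold]

latencies = [102, 99, 101, 100, 98, 500, 101, 100]  # ms; one obvious spike
assert find_anomalies(latencies) == [5]
```

A real system would use far more robust baselines (seasonality, trends, per-service behavior), but the shape is the same: no engineer had to define a latency threshold up front.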

“Watchdog builds upon our years of research and training of algorithms on our customers’ data sets. This technology is unique in that it not only identifies an issue programmatically, but also points users to probable root causes to kick off an investigation,” Datadog’s head of data science Homin Lee notes in today’s announcement.

The service is now available to all Datadog customers in its Enterprise APM plan.

Jun 28, 2018

Facebook is using machine learning to self-tune its myriad services

Regardless of what you may think of Facebook as a platform, they run a massive operation, and when you reach their level of scale you have to get more creative in how you handle every aspect of your computing environment.

Engineers quickly reach the limits of human ability to track information, to the point that checking logs and analytics becomes impractical and unwieldy on a system running thousands of services. This is a perfect scenario to implement machine learning, and that is precisely what Facebook has done.

The company published a blog post today about a self-tuning system they have dubbed Spiral. This is pretty nifty, and what it does is essentially flip the idea of system tuning on its head. Instead of looking at some data and coding what you want the system to do, you teach the system the right way to do it and it does it for you, using the massive stream of data to continually teach the machine learning models how to push the systems to be ever better.

In the blog post, the Spiral team described it this way: “Instead of looking at charts and logs produced by the system to verify correct and efficient operation, engineers now express what it means for a system to operate correctly and efficiently in code. Today, rather than specify how to compute correct responses to requests, our engineers encode the means of providing feedback to a self-tuning system.”

They say that coding in this way is akin to declarative code, like using SQL statements to tell the database what you want it to do with the data, but the act of applying that concept to systems is not a simple matter.

“Spiral uses machine learning to create data-driven and reactive heuristics for resource-constrained real-time services. The system allows for much faster development and hands-free maintenance of those services, compared with the hand-coded alternative,” the Spiral team wrote in the blog post.

If you consider the sheer number of services running on Facebook, and the number of users trying to interact with those services at any given time, this kind of sophisticated automation is required, and that is what Spiral provides.

The system takes the log data and processes it through Spiral, which is connected with just a few lines of code. It then sends commands back to the server based on the declarative coding statements written by the team. At the same time, to keep those commands fine-tuned, data flows from the server back to a model for further adjustment, in a lovely virtuous cycle. This process can be applied locally or globally.
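The inversion the Spiral team describes, writing the feedback signal rather than the tuning logic, can be sketched as follows. Facebook hasn’t published Spiral’s internals beyond the blog post, so this is only a conceptual toy with invented names:

```python
class SelfTuningThreshold:
    """The engineer supplies only `feedback` (what 'correct' means);
    the system nudges its own parameter from the data stream."""

    def __init__(self, threshold=0.5, step=0.05):
        self.threshold = threshold
        self.step = step

    def decide(self, score):
        # e.g. "keep this item in the cache" if it scores high enough
        return score >= self.threshold

    def learn(self, score, feedback):
        # feedback: True if the right decision for this score was "keep"
        decision = self.decide(score)
        if decision and not feedback:
            self.threshold += self.step   # too permissive: tighten
        elif not decision and feedback:
            self.threshold -= self.step   # too strict: loosen

# The stream keeps reporting that items scoring 0.6 weren't worth keeping,
# so the system raises its own bar -- no engineer re-coded the logic.
tuner = SelfTuningThreshold()
for _ in range(20):
    tuner.learn(0.6, feedback=False)
assert not tuner.decide(0.6)
```

The engineer’s only job here was defining `feedback`; the tuning itself is hands-free, which is the maintenance win the Spiral team describes.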

The tool was developed by the team operating in Boston, and is only available internally inside Facebook. It took lots of engineering to make it happen, the kind of scope that only Facebook could apply to a problem like this (mostly because Facebook is one of the few companies that would actually have a problem like this).

Jun 11, 2018

Splunk nabs on-call management startup VictorOps for $120M

In a DevOps world, the operations part of the equation needs to be on call to deal with issues as they come up 24/7. We used to use pagers. Today’s solutions like PagerDuty and VictorOps have been created to place this kind of requirement in a modern digital context. Today, Splunk bought VictorOps for $120 million in cash and Splunk securities.

It’s a company that makes a lot of sense for Splunk, a log management tool that has been helping customers deal with oodles of information being generated from back-end systems for many years. With VictorOps, the company gets a system to alert the operations team when something from that muddle of data actually requires their attention.

Splunk has been making moves in recent years to use artificial intelligence and machine learning to help make sense of the data and provide a level of automation required when the sheer volume of data makes it next to impossible for humans to keep up. VictorOps fits within that approach.

“The combination of machine data analytics and artificial intelligence from Splunk with incident management from VictorOps creates a ‘Platform of Engagement’ that will help modern development teams innovate faster and deliver better customer experiences,” Doug Merritt, president and CEO at Splunk said in a statement.

In a blog post announcing the deal, VictorOps founder and CEO Todd Vernon said the two companies’ missions are aligned. “Upon close, VictorOps will join Splunk’s IT Markets group and together will provide on-call technical staff an analytics and AI-driven approach for addressing the incident lifecycle, from monitoring to response to incident management to continuous learning and improvement,” Vernon wrote.

It should come as no surprise that the two companies have been working together even before the acquisition. “Splunk has been an important technical partner of ours for some time, and through our work together, we discovered that we share a common viewpoint that Modern Incident Management is in a period of strategic change where data is king, and insights from that data are key to maintaining a market leading strategy,” Vernon wrote in the blog post.

VictorOps was founded in 2012 and has raised over $33 million, according to data on Crunchbase. The most recent investment was a $15 million Series B in December 2016.

The deal is expected to close in Splunk’s fiscal second quarter subject to customary closing conditions, according to a statement from Splunk.
