Aug 15, 2018
--

Oracle open sources Graphpipe to standardize machine learning model deployment

Oracle, a company not exactly known for having the best relationship with the open source community, is releasing a new open source tool today called Graphpipe, which is designed to simplify and standardize the deployment of machine learning models.

Graphpipe consists of a set of libraries and reference tools for implementing the standard.

Vish Abrams, whose background includes helping develop OpenStack at NASA and later helping launch Nebula, an OpenStack startup in 2011, is leading the project. He says as his team dug into the machine learning workflow, they found a gap. While teams spend lots of energy developing a machine learning model, it’s hard to actually deploy the model for customers to use. That’s where Graphpipe comes in.

He points out that it’s common with newer technologies like machine learning for people to get caught up in the hype. Even though the development process keeps improving, he says that people often don’t think about deployment.

“Graphpipe is what’s grown out of our attempt to really improve deployment stories for machine learning models, and to create an open standard around having a way of doing that to improve the space,” Abrams told TechCrunch.

As Oracle dug into this, they identified three main problems. For starters, there is no standard way to serve APIs, leaving you to use whatever your framework provides. Next, there is no standard deployment mechanism, which leaves developers to build custom ones every time. Finally, they found existing methods leave performance as an afterthought, which in machine learning could be a major problem.

“We created Graphpipe to solve these three challenges. It provides a standard, high-performance protocol for transmitting tensor data over the network, along with simple implementations of clients and servers that make deploying and querying machine learning models from any framework a breeze,” Abrams wrote in a blog post announcing the release of Graphpipe.
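At its core, the standard is a wire format for tensor data. As a rough, framework-agnostic sketch of the general idea (raw bytes plus explicit dtype and shape metadata, so either end can reconstruct the tensor), here is an illustration; Graphpipe's actual format is a binary flatbuffer schema, and the function names below are invented for this sketch:

```python
# Illustration only: a tensor serialized as explicit metadata plus raw bytes,
# so any framework on the other end can reconstruct it. Graphpipe's real wire
# format is a flatbuffer schema; the names here are invented for this sketch.
import json
import numpy as np

def pack_tensor(arr: np.ndarray) -> bytes:
    """Prefix the raw buffer with a length-delimited JSON header."""
    header = json.dumps({"dtype": str(arr.dtype), "shape": list(arr.shape)}).encode()
    return len(header).to_bytes(4, "big") + header + arr.tobytes()

def unpack_tensor(payload: bytes) -> np.ndarray:
    """Reverse pack_tensor on the receiving side."""
    n = int.from_bytes(payload[:4], "big")
    meta = json.loads(payload[4:4 + n])
    return np.frombuffer(payload[4 + n:], dtype=meta["dtype"]).reshape(meta["shape"])

original = np.arange(6, dtype=np.float32).reshape(2, 3)
assert np.array_equal(unpack_tensor(pack_tensor(original)), original)
```

The point of a shared format like this is that a TensorFlow server and a PyTorch client no longer need framework-specific glue to exchange predictions.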

The company decided to make this a standard and to open source it to try and move machine learning model deployment forward. “Graphpipe sits on that intersection between solving a business problem and pushing the state of the art forward, and I think personally, the best way to do that is by having an open source approach. Often, if you’re trying to standardize something without going for the open source bits, what you end up with is a bunch of competing technologies,” he said.

Abrams acknowledged the tension that has existed between Oracle and the open source community over the years, but says they have been working to change the perception recently with contributions to Kubernetes and Oracle FN, their open source Serverless Functions Platform as examples. Ultimately he says, if the technology is interesting enough, people will give it a chance, regardless of who is putting it out there. And of course, once it’s out there, if a community builds around it, they will adapt and change it as open source projects tend to do. Abrams hopes that happens.

“We care more about the standard becoming quite broadly adopted than we do about our particular implementation of it, because that makes it easier for everyone. It’s really up to the community to decide that this is valuable and interesting,” he said.

Graphpipe is available starting today on the Oracle GitHub Graphpipe page.

Aug 13, 2018
--

New Uber feature uses machine learning to sort business and personal rides

Uber announced a new program today called Profile Recommendations that takes advantage of machine intelligence to reduce user error when switching between personal and business accounts.

It’s not unusual for a person to have both types of accounts. When you’re out and about, it’s easy to forget to switch between them when appropriate. Uber wants to help by recommending the correct one.

“Using machine learning, Uber can predict which profile and corresponding payment method an employee should be using, and make the appropriate recommendation,” Ronnie Gurion, GM and Global Head of Uber for Business wrote in a blog post announcing the new feature.

Uber has been analyzing a dizzying amount of trip data for so long, it can now (mostly) understand the purpose of a given trip based on the details of your request. While it’s certainly not perfect because it’s not always obvious what the purpose is, Uber believes it can determine the correct intention 80 percent of the time. For that remaining 20 percent, when it doesn’t get it right, Uber is hoping to simplify corrections too.

Photo: Uber

Business users can now also assign trip reviewers — managers or other employees who understand the employee’s usage patterns, and can flag questionable rides. Instead of starting an email thread or complicated bureaucratic process to resolve an issue, the employee can now see these flagged rides and resolve them right in the app. “This new feature not only saves the employee’s and administrator’s time, but it also cuts down on delays associated with closing out reports,” Gurion wrote in the blog post announcement.

Uber also announced that it’s supporting a slew of new expense reporting software to simplify integration with these systems. They currently have integrations with Certify, Chrome River, Concur and Expensify. They will be adding support for Expensya, Happay, Rydoo, Zeno by Serko and Zoho Expense starting in September.

All of this should help business account holders deal with Uber expenses more efficiently, while integrating with many of the leading expense programs to move data smoothly from Uber to a company’s regular record-keeping systems.

Jul 30, 2018
--

Google Calendar makes rescheduling meetings easier

Nobody really likes meetings — and the few people who do like them are the ones with whom you probably don’t want to have meetings. So when you’ve reached your fill and decide to reschedule some of those obligations, the usual process of trying to find a new meeting time begins. Thankfully, the Google Calendar team has heard your sighs of frustration and built a new tool that makes rescheduling meetings much easier.

Starting in two weeks, on August 13th, every guest will be able to propose a new meeting time and attach to that update a message to the organizer to explain themselves. The organizer can then review and accept or deny that new time slot. If the other guests have made their calendars public, the organizer can also see the other attendees’ availability in a new side-by-side view to find a new time.

What’s a bit odd here is that this is still mostly a manual feature. To find meeting slots to begin with, Google already employs some of its machine learning smarts to find the best times. This new feature doesn’t seem to employ the same algorithms to propose dates and times for rescheduled meetings.

This new feature will work across G Suite domains and also with Microsoft Exchange. It’s worth noting, though, that this new option won’t be available for meetings with more than 200 attendees or for all-day events.

Jul 25, 2018
--

Google is baking machine learning into its BigQuery data warehouse

There are still a lot of obstacles to building machine learning models and one of those is that in order to build those models, developers often have to move a lot of data back and forth between their data warehouses and wherever they are building their models. Google is now making this part of the process a bit easier for the developers and data scientists in its ecosystem with BigQuery ML, a new feature of its BigQuery data warehouse, by building some machine learning functionality right into BigQuery.

Using BigQuery ML, developers can build models using linear and logistic regression right inside their data warehouse without having to transfer data back and forth as they build and fine-tune their models. And all they have to do to build these models and get predictions is to write a bit of SQL.

Moving data doesn’t sound like it should be a big issue, but developers often spend a lot of their time on this kind of grunt work — time that would be better spent on actually working on their models.

BigQuery ML also promises to make it easier to build these models, even for developers who don’t have a lot of experience with machine learning. To get started, developers can use what’s basically a variant of standard SQL to say what kind of model they are trying to build and what the input data is supposed to be. From there, BigQuery ML then builds the model and allows developers to almost immediately generate predictions based on it. And they won’t even have to write any code in R or Python.
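The workflow described above looks roughly like the following, based on the documented `CREATE MODEL` and `ML.PREDICT` statements; the dataset, table and column names here are invented for illustration:

```sql
-- Train a logistic regression model directly in the warehouse.
-- (Dataset, table and column names are invented for illustration.)
CREATE MODEL `mydataset.churn_model`
OPTIONS (model_type = 'logistic_reg') AS
SELECT churned AS label, tenure_months, monthly_spend
FROM `mydataset.customers`;

-- Generate predictions from the trained model, again in plain SQL.
SELECT *
FROM ML.PREDICT(MODEL `mydataset.churn_model`,
                (SELECT tenure_months, monthly_spend
                 FROM `mydataset.new_customers`));
```

Both statements run where the data already lives, which is the whole pitch: no export step, no separate training cluster.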

These new features are now available in beta.

Jul 23, 2018
--

SessionM customer loyalty data aggregator snags $23.8M investment

SessionM announced a $23.8 million Series E investment led by Salesforce Ventures. A bushel of existing investors including Causeway Media Partners, CRV, General Atlantic, Highland Capital and Kleiner Perkins Caufield & Byers also contributed to the round. The company has now raised over $97 million.

At its core, SessionM aggregates loyalty data for brands to help them understand their customers better, says company co-founder and CEO Lars Albright. “We are a customer data and engagement platform that helps companies build more loyal and profitable relationships with their consumers,” he explained.

Essentially, that means they are pulling data from a variety of sources and helping brands offer customers more targeted incentives, offers and product recommendations. “We give [our users] a holistic view of that customer and what motivates them,” he said.

Screenshot: SessionM (cropped)

To achieve this, SessionM takes advantage of machine learning to analyze the data stream and integrates with partner platforms like Salesforce, Adobe and others. This certainly fits in with Adobe’s goal to build a customer service experience system of record and Salesforce’s acquisition of Mulesoft in March to integrate data from across an organization, all in the interest of better understanding the customer.

When it comes to using data like this, especially with the advent of GDPR in the EU in May, Albright recognizes that companies need to be more careful with data, and says the regulation has really enhanced the sensitivity around stewardship for all data-driven businesses like his.

“We’ve been at the forefront of adopting the right product requirements and features that allow our clients and businesses to give their consumers the necessary control to be sure we’re complying with all the GDPR regulations,” he explained.

The company was not discussing valuation or revenue. Their most recent round prior to today’s announcement was a Series D in 2016 for $35 million, also led by Salesforce Ventures.

SessionM, which was founded in 2011, has around 200 employees with headquarters in downtown Boston. Customers include Coca-Cola, L’Oreal and Barneys.

Jul 18, 2018
--

Swim.ai raises $10M to bring real-time analytics to the edge

Once upon a time, it looked like cloud-based services would become the central hub for analyzing all IoT data. But it didn’t quite turn out that way, because most IoT solutions simply generate too much data to do this effectively and the round-trip to the data center doesn’t work for applications that have to react in real time. Hence the advent of edge computing, which is spawning its own ecosystem of startups.

Among those is Swim.ai, which today announced that it has raised a $10 million Series B funding round led by Cambridge Innovation Capital, with participation from Silver Creek Ventures and Harris Barton Asset Management. The round also included a strategic investment from Arm, the chip design firm you may still remember as ARM (but don’t write it like that or their PR department will promptly email you). This brings the company’s total funding to about $18 million.

Swim.ai has an interesting take on edge computing. The company’s SWIM EDX product combines both local data processing and analytics with local machine learning. In a traditional approach, the edge devices collect the data, maybe perform some basic operations against the data to bring down the bandwidth cost and then ship it to the cloud where the hard work is done and where, if you are doing machine learning, the models are trained. Swim.ai argues that this doesn’t work for applications that need to respond in real time. Swim.ai, however, performs the model training on the edge device itself by pulling in data from all connected devices. It then builds a digital twin for each one of these devices and uses that to self-train its models based on this data.
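The "self-training at the edge" idea can be illustrated with a toy example of online learning, where each new reading nudges a local model incrementally instead of shipping raw data to a data center. This is a generic sketch of the pattern, not Swim.ai's actual EDX API:

```python
# Toy illustration of on-device online learning: each new sensor reading
# updates a local model in place, so no round-trip to the cloud is needed.
# (A generic sketch of the pattern, not Swim.ai's actual EDX API.)

class OnlineMean:
    """Running estimate updated one sample at a time."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x: float) -> float:
        # Incremental mean: new_mean = old_mean + (x - old_mean) / n
        self.n += 1
        self.mean += (x - self.mean) / self.n
        return self.mean

model = OnlineMean()
for reading in [10.0, 12.0, 11.0, 13.0]:
    estimate = model.update(reading)
print(estimate)  # 11.5
```

A real edge model would be richer than a running mean, but the shape is the same: state lives on the device and every observation refines it immediately.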

“Demand for the EDX software is rapidly increasing, driven by our software’s unique ability to analyze and reduce data, share new insights instantly peer-to-peer – locally at the ‘edge’ on existing equipment. Efficiently processing edge data and enabling insights to be easily created and delivered with the lowest latency are critical needs for any organization,” said Rusty Cumpston, co-founder and CEO of Swim.ai. “We are thrilled to partner with our new and existing investors who share our vision and look forward to shaping the future of real-time analytics at the edge.”

The company doesn’t disclose any current customers, but it is focusing its efforts on manufacturers, service providers and smart city solutions. Update: Swim.ai did tell us about two customers after we published this story: The City of Palo Alto and Itron.

Swim.ai plans to use its new funding to launch a new R&D center in Cambridge, UK, expand its product development team and tackle new verticals and geographies with an expanded sales and marketing team.

Jul 12, 2018
--

Datadog launches Watchdog to help you monitor your cloud apps

Your typical cloud monitoring service integrates with dozens of services and provides you with a pretty dashboard and some automation to help you keep tabs on how your applications are doing. Datadog has long done that, but today it is adding a new service called Watchdog, which uses machine learning to automatically detect anomalies for you.

The company notes that a traditional monitoring setup involves defining your parameters based on how you expect the application to behave and then setting up dashboards and alerts to monitor them. Given the complexity of modern cloud applications, that approach has its limits, so an additional layer of automation becomes necessary.

That’s where Watchdog comes in. The service observes all of the performance data it can get its paws on, learns what’s normal, and then provides alerts when something unusual happens and — ideally — provides insights into where exactly the issue started.
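Datadog hasn't published Watchdog's models, but the classic baseline for "learn what's normal, alert on deviations" is a z-score against a trailing window, along these lines:

```python
# A classic "learn normal, flag abnormal" baseline: compare each new point
# against the mean and standard deviation of a trailing window. This is a
# generic sketch, not Datadog's actual (unpublished) Watchdog models.
import statistics

def find_anomalies(series, window=10, threshold=3.0):
    """Return indices whose z-score against the trailing window exceeds threshold."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1e-9  # guard against flat baselines
        if abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

latencies = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 500]
print(find_anomalies(latencies))  # [10]: the 500 ms spike stands out
```

A production system layers seasonality handling and root-cause correlation on top of this, but the core contract is the same: no hand-set thresholds, just deviation from a learned baseline.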

“Watchdog builds upon our years of research and training of algorithms on our customers’ data sets. This technology is unique in that it not only identifies an issue programmatically, but also points users to probable root causes to kick off an investigation,” Datadog’s head of data science Homin Lee notes in today’s announcement.

The service is now available to all Datadog customers in its Enterprise APM plan.

Jun 28, 2018
--

Facebook is using machine learning to self-tune its myriad services

Regardless of what you may think of Facebook as a platform, they run a massive operation, and when you reach their level of scale you have to get more creative in how you handle every aspect of your computing environment.

Engineers quickly reach the limits of human ability to track information, to the point that checking logs and analytics becomes impractical and unwieldy on a system running thousands of services. This is a perfect scenario to implement machine learning, and that is precisely what Facebook has done.

The company published a blog post today about a self-tuning system they have dubbed Spiral. This is pretty nifty, and what it does is essentially flip the idea of system tuning on its head. Instead of looking at some data and coding what you want the system to do, you teach the system the right way to do it and it does it for you, using the massive stream of data to continually teach the machine learning models how to push the systems to be ever better.

In the blog post, the Spiral team described it this way: “Instead of looking at charts and logs produced by the system to verify correct and efficient operation, engineers now express what it means for a system to operate correctly and efficiently in code. Today, rather than specify how to compute correct responses to requests, our engineers encode the means of providing feedback to a self-tuning system.”

They say that coding in this way is akin to declarative code, like using SQL statements to tell the database what you want it to do with the data, but the act of applying that concept to systems is not a simple matter.

“Spiral uses machine learning to create data-driven and reactive heuristics for resource-constrained real-time services. The system allows for much faster development and hands-free maintenance of those services, compared with the hand-coded alternative,” the Spiral team wrote in the blog post.

If you consider the sheer number of services running on Facebook, and the number of users trying to interact with those services at any given time, it required sophisticated automation, and that is what Spiral is providing.

The system takes the log data and processes it through Spiral, which is connected with just a few lines of code. It then sends commands back to the server based on the declarative coding statements written by the team. To ensure those commands are always being fine-tuned, at the same time, the data gets sent from the server to a model for further adjustment in a lovely virtuous cycle. This process can be applied locally or globally.
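The loop Facebook describes can be caricatured in a few lines: the engineer declares what "correct" means, and a small controller adjusts a knob from that feedback. This is a toy sketch of the pattern, not Facebook's implementation:

```python
# Toy sketch of declarative feedback: the engineer states a correctness
# predicate (here: a cached entry should still be fresh when re-requested),
# and a controller tunes the knob (the TTL) from that signal alone.
# (An illustration of the pattern Spiral describes, not Facebook's code.)

class SelfTuningTTL:
    def __init__(self, ttl=10.0, step=0.1):
        self.ttl = ttl
        self.step = step

    def feedback(self, was_fresh: bool):
        # Entry expired too early -> lengthen TTL; still fresh -> decay slightly
        # so the TTL does not grow without bound.
        if not was_fresh:
            self.ttl *= 1 + self.step
        else:
            self.ttl *= 1 - self.step / 10

cache = SelfTuningTTL()
for hit_fresh in [False, False, True, False]:
    cache.feedback(hit_fresh)
print(round(cache.ttl, 2))  # 13.18: the TTL drifted up to match the workload
```

The engineer never wrote "set the TTL to 13 seconds"; they only encoded the feedback signal, which is the inversion the Spiral team describes.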

The tool was developed by the team operating in Boston, and is only available internally inside Facebook. It took lots of engineering to make it happen, the kind of scope that only Facebook could apply to a problem like this (mostly because Facebook is one of the few companies that would actually have a problem like this).

Jun 11, 2018
--

Splunk nabs on-call management startup VictorOps for $120M

In a DevOps world, the operations part of the equation needs to be on call to deal with issues as they come up 24/7. We used to use pagers. Today’s solutions like PagerDuty and VictorOps have been created to place this kind of requirement in a modern digital context. Today, Splunk bought VictorOps for $120 million in cash and Splunk securities.

It’s a company that makes a lot of sense for Splunk, a log management tool that has been helping customers deal with oodles of information being generated from back-end systems for many years. With VictorOps, the company gets a system to alert the operations team when something from that muddle of data actually requires their attention.

Splunk has been making moves in recent years to use artificial intelligence and machine learning to help make sense of the data and provide a level of automation required when the sheer volume of data makes it next to impossible for humans to keep up. VictorOps fits within that approach.

“The combination of machine data analytics and artificial intelligence from Splunk with incident management from VictorOps creates a ‘Platform of Engagement’ that will help modern development teams innovate faster and deliver better customer experiences,” Doug Merritt, president and CEO at Splunk said in a statement.

In a blog post announcing the deal, VictorOps founder and CEO Todd Vernon said the two companies’ missions are aligned. “Upon close, VictorOps will join Splunk’s IT Markets group and together will provide on-call technical staff an analytics and AI-driven approach for addressing the incident lifecycle, from monitoring to response to incident management to continuous learning and improvement,” Vernon wrote.

It should come as no surprise that the two companies have been working together even before the acquisition. “Splunk has been an important technical partner of ours for some time, and through our work together, we discovered that we share a common viewpoint that Modern Incident Management is in a period of strategic change where data is king, and insights from that data are key to maintaining a market leading strategy,” Vernon wrote in the blog post.

VictorOps was founded in 2012 and has raised over $33 million, according to data on Crunchbase. The most recent investment was a $15 million Series B in December 2016.

The deal is expected to close in Splunk’s fiscal second quarter subject to customary closing conditions, according to a statement from Splunk.

Jun 11, 2018
--

Workday acquires financial modelling startup Adaptive Insights for $1.55B

Workday, the cloud-based platform that offers HR and other back-office apps for businesses, is making an acquisition to expand its portfolio of services: It’s buying Adaptive Insights, a provider of cloud-based business planning and financial modelling tools, for $1.55 billion. The acquisition is notable because Adaptive Insights had filed for an IPO as recently as May 17.

Workday says that the $1.55 billion price tag includes “the assumption of approximately $150 million in unvested equity issued to Adaptive Insights employees” related to that IPO. This deal is expected to close in Q3 of this year.

IPO filings are known to sometimes trigger M&A. Most recently, PayPal announced it would acquire iZettle just after the latter filed to go public. Skype was acquired by Microsoft in 2011 while it was waiting to IPO after previous owner eBay said it would spin it off.

Workday itself went public in 2012 and currently has a market cap of nearly $27 billion.

The deal will give Workday another string to its bow in its attempt to become the go-to place for back-office services for its business customers: the company plans to integrate Adaptive Insights’ tools into its existing platform.

“Adaptive Insights is an industry leader with its Business Planning Cloud platform, and together with Workday, we will help customers accelerate their finance transformation in the cloud,” said Aneel Bhusri, Co-Founder and CEO, Workday, in a statement. “I am excited to welcome the Adaptive Insights team to Workday and look forward to coming together to continue delivering industry-leading products that equip finance organizations to make even faster, better business decisions to adapt to change and to drive growth.”

The two have been working together as partners since 2015.

In the case of Adaptive Insights, which says it has ‘thousands’ of customers, its growth mirrors both that of cloud services generally and, more specifically, the way business intelligence has developed into a distinct software category of its own over the years, with not just the CFO but an army of in-house analysts relying on analytics of a business’s data to help make small and big decisions.

“The market opportunity here is huge as the CFO has become a power player in the C-Suite,” CEO Tom Bogan told TechCrunch when it raised $75 million in 2015, when it first passed the billion-dollar mark for its valuation. Bogan previously also held a role as chairman of Citrix. “As a former CFO myself, I have seen this first hand and it is accelerating.” Other examples of this force include Twitter’s Anthony Noto catapulting from CFO to COO (he is now CEO of SoFi). Around 25 percent of CEOs at Fortune 500 companies are former CFOs.

Adaptive Insights had raised $175 million prior to this.

Bogan will stay on and lead the business and report directly to Bhusri.

“Joining forces with Workday accelerates our vision to drive holistic business planning and digital transformation for our customers,” said Bogan, in a separate statement. “Most importantly, both Adaptive Insights and Workday have an employee-first and customer-centric approach to developing enterprise software that will only increase the power of the combined companies.”

More generally, while we have certainly seen a much wider opening of the door for tech IPOs this year, there is also an argument to be made for continuing consolidation in enterprise IT, in particular with regards to cloud services that might have small or potentially negative margins.

Adaptive Insights was not immune to that: the company said in its public listing filing that its previous fiscal year brought in $106.5 million in revenues, up 30 percent from the year before, but it also posted a loss of $42.7 million in the same period. That was narrower than the $59.1 million loss it posted in 2016. Combined with the bigger trend of all-in-one platforms packing a bigger punch with businesses, it might have meant that Workday’s offer was too compelling to refuse.

This looks like Workday’s biggest acquisition yet, but the company has been on a spree of sorts: just last week it announced the acquisition of RallyTeam to beef up its machine learning.
