Sep 29, 2020
--

Datasaur snags $3.9M investment to build intelligent machine learning labeling platform

As machine learning has grown, one of the major bottlenecks remains labeling things so the machine learning application understands the data it’s working with. Datasaur, a member of the Y Combinator Winter 2020 batch, announced a $3.9 million investment today to help solve that problem with a platform designed for machine learning labeling teams.

The funding, which comprises a $1.1 million pre-seed round from last year and a $2.8 million seed round closed right after the company graduated from Y Combinator in March, includes investments from Initialized Capital, Y Combinator and OpenAI CTO Greg Brockman.

Company founder Ivan Lee says he has worked in various capacities involving AI for seven years: first at Yahoo!, which acquired his mobile gaming startup Loki Studios in 2013 and eventually moved him to its AI team, and most recently at Apple. Regardless of the company, he consistently saw the same problem around organizing machine learning labeling teams, one he felt uniquely situated to solve because of that experience.

“I have spent millions of dollars [in budget over the years] and spent countless hours gathering labeled data for my engineers. I came to recognize that this was something that was a problem across all the companies that I’ve been at. And they were just consistently reinventing the wheel and the process. So instead of reinventing that for the third time at Apple, my most recent company, I decided to solve it once and for all for the industry. And that’s why we started Datasaur last year,” Lee told TechCrunch.

He built a platform to speed up human data labeling with a dose of AI, while keeping humans involved. The platform consists of three parts: a labeling interface; the intelligence component, which can recognize basic things so the labeler isn’t identifying the same thing over and over; and finally a team organizing component.

He says the area is hot, but to this point has mostly involved labeling consulting solutions, which farm out labeling to contractors. He points to the sale of Figure Eight in March 2019 and to Scale, which snagged $100 million last year as examples of other startups trying to solve this problem in this way, but he believes his company is doing something different by building a fully software-based solution.

The company currently offers a cloud and on-prem solution, depending on the customer’s requirements. It has 10 employees, with plans to hire in the next year, although he didn’t share an exact number. As he does that, he says he has been working with a partner at investor Initialized on creating a positive and inclusive culture inside the organization, and that includes conversations about hiring a diverse workforce as he builds the company.

“I feel like this is just standard CEO speak, but that is something that we absolutely value in our top of funnel for the hiring process,” he said.

As Lee builds out his platform, he has also worried about built-in bias in AI systems and the detrimental impact it could have on society. He says he has spoken to clients about the role of labeling in bias and ways of combating it.

“When I speak with our clients, I talk to them about the potential for bias from their labelers and built into our product itself is the ability to assign multiple people to the same project. And I explain to my clients that this can be more costly, but from personal experience I know that it can improve results dramatically to get multiple perspectives on the exact same data,” he said.

Lee believes humans will continue to be involved in the labeling process in some way, even as parts of the process become more automated. “The very nature of our existence [as a company] will always require humans in the loop, […] and moving forward I do think it’s really important that as we get into more and more of the long tail use cases of AI, we will need humans to continue to educate and inform AI, and that’s going to be a critical part of how this technology develops.”

Mar 16, 2018
--

With great tech success, comes even greater responsibility

As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.

That suggests these platforms have to understand how people are using them and recognize when those people, or the companies themselves, are trying to manipulate them or use them for nefarious purposes. We can apply the same responsibility filter to individual technologies like artificial intelligence, and indeed to any advanced technology, considering the impact it could have on society over time.

This was a running theme this week at the South by Southwest conference in Austin, Texas.

The AI debate rages on

While the platform plays are clearly on the front lines of this discussion, tech icon Elon Musk repeated his concerns about AI running amok in a Q&A at South by Southwest. He worries that it won’t be long before we graduate from the narrow (and not terribly smart) AI we have today to a more generalized AI. He is particularly concerned that a strong AI could develop and evolve over time to the point it eventually matches the intellectual capabilities of humans. Of course, as TechCrunch’s Jon Shieber wrote, Musk sees his stable of companies as a kind of hedge against such a possible apocalypse.

Elon Musk with Jonathan Nolan at South by Southwest 2018. Photo: Getty Images/Chris Saucedo

“Narrow AI is not a species-level risk. It will result in dislocation… lost jobs… better weaponry and that sort of thing. It is not a fundamental, species-level risk, but digital super-intelligence is,” he told the South by Southwest audience.

He went so far as to suggest it could be more of a threat than nuclear warheads in terms of the kind of impact it could have on humanity.

Taking responsibility

Whether you agree with that assessment or not, or even if you think he is being somewhat self-serving with his warnings to promote his companies, he could be touching upon something important about corporate responsibility around the technology that startups and established companies alike should heed.

It was certainly on the mind of Apple’s Eddy Cue, who was interviewed on stage at SXSW by CNN’s Dylan Byers this week. “Tech is a great thing and makes humans more capable, but in [and] of itself is not for good. People who make it, have to make it for good,” Cue said.

We can be sure that Twitter’s creators never imagined a world where bots would be launched to influence an election when they created the company more than a decade ago. Over time though, it becomes crystal clear that Twitter, and indeed all large platforms, can be used for a variety of motivations, and the platforms have to react when they think there are certain parties who are using their networks to manipulate parts of the populace.

Apple’s Eddy Cue speaking at South by Southwest 2018. Photo: Ron Miller

Cue dodged any of Byers’ questions about competing platforms, saying he could only speak to what Apple was doing because he didn’t have an inside view of companies like Facebook and Google (which he didn’t ever actually mention by name). “I think our company is different than what you’re talking about. Our customers’ privacy is of utmost importance to us,” he said. That includes, he said, limiting the amount of data they collect because they are not worrying about having enough to serve more meaningful ads. “We don’t care where you shop or what you buy,” he added.

Andy O’Connell of Facebook’s Global Policy Development team, speaking on a panel about the challenges of using AI to filter “fake news,” said that Facebook recognizes it can and should play a role when it sees people manipulating the platform. “This is a whole society issue, but there are technical things we are doing and things we can invest in [to help lessen the impact of fake news],” he said. He added that Facebook co-founder and CEO Mark Zuckerberg has framed it as a challenge to the company to make the platform more secure, which includes reducing the amount of false or misleading news that makes it onto the platform.

Recognizing tech’s limitations

As O’Connell put it, this is not just a Facebook problem or even a general technology problem. It’s a social problem, and society as a whole needs to address it. Sometimes tech can help, but we can’t always look to tech to solve every problem. The trouble is that we can never really anticipate how a given piece of technology will behave, or how people will use it, once we put it out there.

Photo: Ron Miller

All of this suggests that none of these problems, some of which we could never have even imagined, are easy to solve. For every action and reaction, there can be another set of unintended consequences, even with the best of intentions.

But it’s up to the companies developing the tech to recognize the responsibility that comes with great economic success, or simply with the impact whatever they are creating could have on society. “Everyone has a responsibility [to draw clear lines]. It is something we do and how we want to run our company. In today’s world people have to take responsibility and we intend to do that,” Cue said.

It has to be more than lip service, though. It requires thought and care, reacting when things do run amok, and continually assessing the impact of every decision.

Jun 24, 2016
--

A running tab of what tech people think about whether we’re living in a simulation

Are we living in a simulation? For whatever reason, this is a hot topic in Silicon Valley these days. It all more or less started when Tesla Motors CEO (and soon-to-be SolarCity CEO — check one off for the simulation argument there) Elon Musk made a claim at the Code Conference that there’s such a high chance we’re living in a simulation that it’s more likely we…
