Aug 17, 2018
--

Incentivai launches to simulate how hackers break blockchains

Cryptocurrency projects can crash and burn if developers don’t predict how humans will abuse their blockchains. Once a decentralized digital economy is released into the wild and the coins start to fly, it’s tough to implement fixes to the smart contracts that govern them. That’s why Incentivai is coming out of stealth today with its artificial intelligence simulations that test not just for security holes, but for how greedy or illogical humans can crater a blockchain community. Crypto developers can use Incentivai’s service to fix their systems before they go live.

“There are many ways to check the code of a smart contract, but there’s no way to make sure the economy you’ve created works as expected,” says Incentivai’s solo founder Piotr Grudzień. “I came up with the idea to build a simulation with machine learning agents that behave like humans so you can look into the future and see what your system is likely to behave like.”

Incentivai will graduate from Y Combinator next week and already has a few customers. They can either pay Incentivai to audit their project and produce a report, or use its AI simulation tool as a software-as-a-service. The first blockchains it has checked will deploy in a few months, and the startup has released some case studies to prove its worth.

“People do theoretical work or logic to prove that under certain conditions, this is the optimal strategy for the user. But users are not rational. There’s lots of unpredictable behavior that’s difficult to model,” Grudzień explains. Incentivai explores those illogical trading strategies so developers don’t have to tear out their hair trying to imagine them.

Protecting crypto from the human x-factor

There’s no rewind button in the blockchain world. The immutable and irreversible qualities of this decentralized technology prevent inventors from meddling with it once in use, for better or worse. If developers don’t foresee how users could make false claims and bribe others to approve them, or take other actions to screw over the system, they might not be able to thwart the attack. But given the right open-ended incentives (hence the startup’s name), AI agents will try everything they can to earn the most money, exposing the conceptual flaws in the project’s architecture.

“The strategy is the same as what DeepMind does with AlphaGo, testing different strategies,” Grudzień explains. He developed his AI chops earning a master’s at Cambridge before working on natural language processing research for Microsoft.

Here’s how Incentivai works. First, a developer writes the smart contracts they want to test, for a product like selling insurance on the blockchain. Incentivai tells its AI agents what to optimize for and lays out all the possible actions they could take. The agents can have different identities, like a hacker trying to grab as much money as they can, a faker filing false claims or a speculator who cares about maximizing coin price while ignoring its functionality.

Incentivai then tweaks these agents to make them more or less risk averse, or care more or less about whether they disrupt the blockchain system in its totality. The startup monitors the agents and pulls out insights about how to change the system.
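To make the tweaking step concrete, here is a minimal sketch (not Incentivai’s actual code, and the payoff numbers are invented) of how a risk-aversion knob changes which action a simulated agent picks: each agent scores its candidate actions by mean payoff minus a variance penalty.

```python
import statistics

# Illustrative sketch only: an agent picks the action whose simulated payoffs
# maximize a mean-variance utility. `risk_aversion` is the per-agent knob the
# article describes Incentivai tweaking.

def choose_action(payoff_samples_by_action, risk_aversion):
    """Return the action maximizing mean payoff minus a risk penalty."""
    def utility(samples):
        mean = statistics.mean(samples)
        variance = statistics.pvariance(samples)
        return mean - risk_aversion * variance
    return max(payoff_samples_by_action,
               key=lambda a: utility(payoff_samples_by_action[a]))

# Made-up simulated payoffs: "exploit" is lucrative but volatile,
# "honest" is modest but steady.
payoffs = {"exploit": [10, -2, 12, -4], "honest": [2, 2, 3, 2]}
print(choose_action(payoffs, risk_aversion=0.0))  # exploit (risk-neutral agent)
print(choose_action(payoffs, risk_aversion=0.5))  # honest (risk-averse agent)
```

Running a population of such agents with different knob settings and watching which strategies dominate is the flavor of insight the startup pulls out.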

For example, Incentivai might learn that uneven token distribution leads to pump and dump schemes, so the developer should more evenly divide tokens and give fewer to early users. Or it might find that an insurance product where users vote on what claims should be approved needs to increase its bond price that voters pay for verifying a false claim so that it’s not profitable for voters to take bribes from fraudsters.
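The insurance example above boils down to simple expected-value arithmetic, sketched here with invented numbers: a voter who approves a false claim pockets the bribe unless caught, in which case the bond is lost. Raising the bond flips the expected value negative.

```python
# Back-of-the-envelope sketch of the bribery economics described above.
# All figures are hypothetical.

def bribe_is_profitable(bribe, bond, p_caught):
    """Expected value of accepting a bribe to approve a false claim."""
    return bribe * (1 - p_caught) - bond * p_caught > 0

print(bribe_is_profitable(bribe=100, bond=50,  p_caught=0.5))  # True: attack pays
print(bribe_is_profitable(bribe=100, bond=200, p_caught=0.5))  # False: bond deters it
```

Incentivai’s agents discover this kind of break-even point empirically by trying the attack; the developer’s fix is to set the bond above it.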

Grudzień has made some predictions about his own startup, too. He thinks that if the use of decentralized apps rises, there will be a lot of startups trying to copy his approach to security services. He says there are already some doing token engineering audits, incentive design and consultancy, but he hasn’t seen anyone else with a functional simulation product that’s produced case studies. “As the industry matures, I think we’ll see more and more complex economic systems that need this.”

Aug 09, 2018
--

Prometheus monitoring tool joins Kubernetes as CNCF’s latest ‘graduated’ project

The Cloud Native Computing Foundation (CNCF) may not be a household name, but it houses some important open source projects including Kubernetes, the fast-growing container orchestration tool. Today, CNCF announced that the Prometheus monitoring and alerting tool had joined Kubernetes as the second “graduated” project in the organization’s history.

The announcement was made at PromCon, the project’s dedicated conference being held in Munich this week. According to Chris Aniszczyk, CTO and COO at CNCF, graduation reflects a project’s overall maturity: it has reached a tipping point in terms of diversity of contributions, community and adoption.

For Prometheus that means 20 active maintainers, more than 1,000 contributors and more than 13,000 commits. Its contributors include the likes of DigitalOcean, Weaveworks, ShowMax and Uber.

CNCF projects start in the sandbox, move on to incubation and finally to graduation. To reach graduation level, a project needs to adopt the CNCF Code of Conduct, pass an independent security audit and define a community governance structure. Finally, it needs to show an “ongoing commitment to code quality and security best practices,” according to the organization.

Aniszczyk says the tool consists of a time series database combined with a query language that lets developers search for issues or anomalies in their system and get analytics back based on their queries. Not surprisingly, it is especially well suited to containers.
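As a rough illustration of that “time series database plus query language” combination (Prometheus itself is written in Go and queried with PromQL expressions such as `rate(http_requests_total[5m])`; this toy store is not its implementation), here is a minimal sketch of storing samples and answering a rate query over a window:

```python
from collections import defaultdict

# Conceptual sketch only: a tiny in-memory time-series store that answers a
# "per-second rate over the last N seconds" question, the kind of query
# developers run against container metrics in Prometheus.

class TinyTSDB:
    def __init__(self):
        self.series = defaultdict(list)  # metric name -> [(timestamp, value)]

    def append(self, metric, ts, value):
        self.series[metric].append((ts, value))

    def rate(self, metric, now, window):
        """Average per-second increase of a counter over the window."""
        pts = [(t, v) for t, v in self.series[metric] if now - window <= t <= now]
        if len(pts) < 2:
            return 0.0
        (t0, v0), (t1, v1) = pts[0], pts[-1]
        return (v1 - v0) / (t1 - t0)

db = TinyTSDB()
for ts, total in [(0, 0), (10, 50), (20, 100)]:
    db.append("http_requests_total", ts, total)
print(db.rate("http_requests_total", now=20, window=30))  # 5.0 requests/sec
```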

Like Kubernetes, the project that became Prometheus has its roots inside Google. Google was one of the first companies to work with containers and developed Borg (the Kubernetes predecessor) and Borgmon (the Prometheus predecessor). While Borg’s job was to manage container orchestration, Borgmon’s job was to monitor the process and give engineers feedback and insight into what was happening to the containers as they moved through their lifecycle.

While its roots go back to Borgmon, Prometheus as we know it today was developed by a couple of former Google engineers at SoundCloud in 2012. It joined the CNCF as its second project in May 2016, after Kubernetes, and, appropriately, is now its second graduate.

The Cloud Native Computing Foundation’s role in all of this is to help promote cloud native computing, the notion that you can manage your infrastructure wherever it lives in a common way, greatly reducing the complexity of managing on-prem and cloud resources. It is part of the Linux Foundation and boasts some of the biggest names in tech as members.

Aug 01, 2018
--

WhatsApp finally earns money by charging businesses for slow replies

Today WhatsApp launches its first revenue-generating enterprise product, currently the only way it makes money directly from its app. The WhatsApp Business API is launching to let businesses respond to messages from users for free for up to 24 hours, but will charge them a fixed rate by country per message sent after that.

Businesses will still only be able to message people who contacted them first, but the API will help them programmatically send shipping confirmations, appointment reminders or event tickets. Clients can also use it to manually respond to customer service inquiries through their own tool or apps like Zendesk, MessageBird or Twilio. And small businesses that are among the 3 million users of the WhatsApp For Business app can still use it to send late replies one-by-one for free.

After getting acquired by Facebook for $19 billion in 2014, it’s finally time for the 1.5 billion-user WhatsApp to pull its weight and contribute some revenue. If Facebook can pitch the WhatsApp Business API as a cheaper alternative to customer service call centers, the convenience of asynchronous chat could compel users to message companies instead of phoning.

Only charging for slow replies after 24 hours since a user’s last message is a genius way to create a growth feedback loop. If users get quick answers via WhatsApp, they’ll prefer it to other channels. Once businesses and their customers get addicted to it, WhatsApp could eventually charge for all replies or any that exceed a volume threshold, or cut down the free window. Meanwhile, businesses might be too optimistic about their response times and end up paying more often than they expect, especially when messages come in on weekends or holidays.
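The billing rule described above is simple enough to sketch in a few lines. The per-country rates and function shape here are hypothetical, purely to illustrate the 24-hour free window:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the pricing rule: replies within 24 hours of the
# user's last message are free; later replies are billed at a fixed
# per-country rate. Rates below are made up.

RATES_BY_COUNTRY = {"US": 0.005, "IN": 0.003}  # hypothetical example rates

def reply_cost(last_user_message, reply_time, country):
    if reply_time - last_user_message <= timedelta(hours=24):
        return 0.0
    return RATES_BY_COUNTRY[country]

last_msg = datetime(2018, 8, 1, 9, 0)
print(reply_cost(last_msg, last_msg + timedelta(hours=3), "US"))   # 0.0 (free)
print(reply_cost(last_msg, last_msg + timedelta(hours=30), "US"))  # 0.005 (charged)
```

Note how a message arriving Friday evening and answered Monday morning falls squarely on the charged side of the rule, which is exactly the weekend risk for businesses mentioned above.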

WhatsApp first announced it would eventually charge for enterprise service last September when it launched its free WhatsApp For Business app that now has 3 million users and remains free for all replies, even late ones.

Importantly, WhatsApp stresses that all messaging between users and businesses, even through the API, will be end-to-end encrypted. That contrasts with The Washington Post’s report that Facebook’s push to weaken encryption for WhatsApp For Business messages was partly what drove former CEO Jan Koum to quit WhatsApp and Facebook’s board in April. His co-founder, Brian Acton, had ditched Facebook back in September and donated $50 million to the foundation of encrypted messaging app Signal.

Today WhatsApp is also formally launching its new display ads product worldwide. But don’t worry, they won’t be crammed into your chat inbox like with Facebook Messenger. Instead, businesses will be able to buy ads on Facebook’s News Feed that launch WhatsApp conversations with them… thereby allowing them to use the new Business API to reply. TechCrunch scooped that this was coming last September, when code in Facebook’s ad manager revealed the click-to-WhatsApp ads option and the company confirmed the ads were in testing. Facebook launched similar click-to-Messenger ads back in 2015.

Finally, WhatsApp also tells TechCrunch it’s planning to run ads in its 450 million daily user Snapchat Stories clone called Status. “WhatsApp does not currently run ads in Status though this represents a future goal for us, starting in 2019. We will move slowly and carefully and provide more details before we place any Ads in Status,” a spokesperson told us. Given WhatsApp Status is more than twice the size of Snapchat, it could earn a ton on ads between Stories, especially if it’s willing to make some unskippable.

Together, the ads and API will replace the $1 per year subscription fee WhatsApp used to charge in some countries but dropped in 2016. With Facebook’s own revenue decelerating, triggering a 20 percent, $120 billion market cap drop in its share price, it needs to show it has new ways to make money — now more than ever.

Jul 31, 2018
--

The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open-source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open-source projects out there right now. It sits at the intersection of a number of industry trends, like containers, microservices and serverless computing, and makes it easier for enterprises to embrace them. Istio now has more than 200 contributors and the code has seen more than 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
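The canary-release idea mentioned above is worth making concrete. Istio implements it declaratively with routing rules rather than application code, but the underlying mechanic, deterministically sending a small slice of users to the new version, can be sketched like this:

```python
import hashlib

# Conceptual sketch of a canary traffic split (Istio expresses this as
# declarative routing configuration, not code like this): hash each user to a
# stable bucket so the same user always lands on the same version.

def route(user_id, canary_percent):
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"

assert route("user-42", 5) == route("user-42", 5)  # stable per user
share = sum(route(f"user-{i}", 5) == "v2-canary" for i in range(1000)) / 1000
print(f"canary share: {share:.1%}")
```

Hash-based bucketing is the usual design choice here because a user who was routed to the canary stays on it, which keeps their experience consistent while the new version is evaluated.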

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open-source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem that a lot of businesses are facing today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and while we hadn’t yet declared 1.0, we heard a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open-source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open-source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

Jul 30, 2018
--

A pickaxe for the AI gold rush, Labelbox sells training data software

Every artificial intelligence startup or corporate R&D lab has to reinvent the wheel when it comes to how humans annotate training data to teach algorithms what to look for. Whether it’s doctors assessing the size of cancer from a scan or drivers circling street signs in self-driving car footage, all this labeling has to happen somewhere. Often that means wasting six months and as much as a million dollars just developing a training data system. With nearly every type of business racing to adopt AI, that spend in cash and time adds up.

Labelbox builds artificial intelligence training data labeling software so nobody else has to. What Salesforce is to a sales team, Labelbox is to an AI engineering team. The software-as-a-service acts as the interface for human experts or crowdsourced labor to instruct computers how to spot relevant signals in data by themselves and continuously improve their algorithms’ accuracy.

Today, Labelbox is emerging from six months in stealth with a $3.9 million seed round led by Kleiner Perkins and joined by First Round and Google’s Gradient Ventures.

“There haven’t been seamless tools to allow AI teams to transfer institutional knowledge from their brains to software,” says co-founder Manu Sharma. “Now we have over 5,000 customers, and many big companies have replaced their own internal tools with Labelbox.”

Kleiner’s Ilya Fushman explains that “If you have these tools, you can ramp up to the AI curve much faster, allowing companies to realize the dream of AI.”

Inventing the best wheel

Sharma knew how annoying it was to try to forge training data systems from scratch because he’d seen it done before at Planet Labs, a satellite imaging startup. “One of the things that I observed was that Planet Labs has a superb AI team, but that team had spent over six months building labeling and training tools. Is this really how teams around the world are approaching building AI?” he wondered.

Before that, he’d worked at DroneDeploy alongside Labelbox co-founder and CTO Daniel Rasmuson, who was leading the aerial data startup’s developer platform. “Many drone analytics companies that were also building AI were going through the same pain point,” Sharma tells me. In September, the two began to explore the idea and found that 20 other companies big and small were also burning talent and capital on the problem. “We thought we could make that much smarter so AI teams can focus on algorithms,” Sharma decided.

Labelbox’s team, with co-founders Ysiad Ferreiras (third from left), Manu Sharma (fourth from left), Brian Rieger (sixth from left) and Daniel Rasmuson (seventh from left)

Labelbox launched its early alpha in January and saw swift pickup from the AI community that immediately asked for additional features. With time, the tool expanded with more and more ways to manually annotate data, from gradation levels like how sick a cow is for judging its milk production to matching systems like whether a dress fits a fashion brand’s aesthetic. Rigorous data science is applied to weed out discrepancies between reviewers’ decisions and identify edge cases that don’t fit the models.
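The discrepancy-weeding step described above is, at its simplest, a consolidation problem: combine several reviewers’ labels per item and flag the items they disagree on. This sketch is illustrative only (not Labelbox’s actual pipeline, and the data is invented):

```python
from collections import Counter

# Illustrative sketch: resolve each item's label by majority vote across
# reviewers and flag low-agreement items as potential edge cases that don't
# fit the model.

def consolidate(labels_by_item, min_agreement=0.75):
    resolved, edge_cases = {}, []
    for item, labels in labels_by_item.items():
        label, count = Counter(labels).most_common(1)[0]
        resolved[item] = label
        if count / len(labels) < min_agreement:
            edge_cases.append(item)
    return resolved, edge_cases

annotations = {
    "scan-001": ["polyp", "polyp", "polyp", "polyp"],   # reviewers agree
    "scan-002": ["polyp", "healthy", "polyp", "healthy"],  # split decision
}
resolved, edge_cases = consolidate(annotations)
print(resolved["scan-001"])  # polyp
print(edge_cases)            # ['scan-002']
```

Items that land in `edge_cases` are exactly the ones worth routing back to an expert reviewer, which is the workflow the article describes.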

“There are all these research studies about how to make training data” that Labelbox analyzes and applies, says co-founder and COO Ysiad Ferreiras, who’d led all of sales and revenue at fast-rising grassroots campaign texting startup Hustle. “We can let people tweak different settings so they can run their own machine learning program the way they want to, instead of being limited by what they can build really quickly.” When Norway mandated all citizens get colon cancer screenings, it had to build AI for recognizing polyps. Instead of spending half a year creating the training tool, they just signed up all the doctors on Labelbox.

Any organization can try Labelbox for free, and Ferreiras claims hundreds have. Once they hit a usage threshold, the startup works with them on appropriate SaaS pricing related to the revenue the client’s AI will generate. One customer, Lytx, makes DriveCam, a system installed on half a million trucks with cameras that use AI to detect unsafe driver behavior so drivers can be coached to improve. Conde Nast is using Labelbox to match runway fashion to related items in its archive of content.

Eliminating redundancy, and jobs?

The big challenge is convincing companies that they’re better off leaving the training software to the experts instead of building it in-house, where they’re intimately, though perhaps inefficiently, involved in every step of development. Some turn to crowdsourcing agencies like CrowdFlower, which has its own training data interface but works only with generalist labor, not the experts required for many fields. Labelbox wants to cooperate rather than compete here, serving as the management software that treats outsourcers as just another data input.

Long-term, the risk for Labelbox is that it’s arrived too early for the AI revolution. Most potential corporate customers are still in the R&D phase around AI, not at scaled deployment into real-world products. The big business isn’t selling the labeling software. That’s just the start. Labelbox wants to continuously manage the fine-tuning data to help optimize an algorithm through its entire life cycle. That requires AI being part of the actual engineering process. Right now it’s often stuck as an experiment in the lab. “We’re not concerned about our ability to build the tool to do that. Our concern is ‘will the industry get there fast enough?’” Ferreiras declares.

Their investor agrees. Last year’s big joke in venture capital was that suddenly you couldn’t hear a startup pitch without “AI” being referenced. “There was a big wave where everything was AI. I think at this point it’s almost a bit implied,” says Fushman. But it’s corporations that already have plenty of data, and plenty of human jobs to obfuscate, that are Labelbox’s opportunity. “The bigger question is ‘when does that [AI] reality reach consumers, not just from the Googles and Amazons of the world, but the mainstream corporations?’”

Labelbox is willing to wait it out, or better yet, accelerate that arrival — even if it means eliminating jobs. That’s because the team believes the benefits to humanity will outweigh the transition troubles.

“For a colonoscopy or mammogram, you only have a certain number of people in the world who can do that. That limits how many of those can be performed. In the future, that could only be limited by the computational power provided, so it could be exponentially cheaper,” says co-founder Brian Rieger. With Labelbox, tens of thousands of radiology exams can be quickly ingested to produce cancer-spotting algorithms that he says studies show can become more accurate than humans. Employment might get tougher to find, but hopefully life will get easier and cheaper too. Meanwhile, improving underwater pipeline inspections could protect the environment from its biggest threat: us.

“AI can solve such important problems in our society,” Sharma concludes. “We want to accelerate that by helping companies tell AI what to learn.”

Jul 26, 2018
--

GitHub and Google reaffirm partnership with Cloud Build CI/CD tool integration

When Microsoft acquired GitHub for $7.5 billion smackeroos in June, it sent some shock waves through the developer community as it is a key code repository. Google certainly took notice, but the two companies continue to work closely together. Today at Google Next, they announced an expansion of their partnership around Google’s new CI/CD tool, Cloud Build, which was unveiled this week at the conference.

Politics aside, the purpose of the integration is to make life easier for developers by reducing the need to switch between tools. If GitHub recognizes a Dockerfile without a corresponding CI/CD tool, the developer will be prompted to grab one from the GitHub Marketplace, with Google Cloud Build offered prominently as one of the suggested tools.

Photo: GitHub

Should the developer choose to install Cloud Build, that’s where the tight integration comes into play. Developers can run Cloud Build against their code directly from GitHub, and the results will appear directly in the GitHub interface. They won’t have to switch applications to make this work together, and that should go a long way toward saving developer time and effort.

Google Cloud Build. Photo: Google

This is part of GitHub’s new “Smart Recommendations,” which will be rolling out to users in the coming months.

Melody Meckfessel, VP of Engineering for Google Cloud, says that the two companies have a shared history and context and have always worked extremely well together on an engineer-to-engineer level. “We have been working together from an engineering standpoint for so many years. We both believe in doing the right thing for developers. We believe that success as it relates to cloud adoption comes from collaborating in the ecosystem,” she said.

Given that close relationship, it had to be disappointing on some level when Microsoft acquired GitHub. In fact, Google Cloud head Diane Greene expressed sadness about the deal in an interview with CNBC earlier this week, but GitHub’s SVP of Technology Jason Warner believes that Microsoft will be a good steward and that the relationship with Google will remain strong.

Warner says the company’s founding principles were about not getting locked in to any particular platform, and he doesn’t see that changing after the acquisition is finalized. “One of the things that was critical in any discussion about an acquisition was that GitHub shall remain an open platform,” Warner explained.

He indicated that today’s announcement is just a starting point, and the two companies intend to build on this integration moving forward. “We worked pretty closely on this together. This announcement is a nod to some of the future-oriented partnerships that we will be announcing later in the year,” he said. And that partnership should continue unabated, even after the Microsoft acquisition is finalized later this year.

Jul 24, 2018
--

Google’s Cloud Functions serverless platform is now generally available

Cloud Functions, Google’s serverless platform that competes directly with tools like AWS Lambda and Azure Functions from Microsoft, is now generally available, the company announced at its Cloud Next conference in San Francisco today.

Google first announced Cloud Functions back in 2016, so this has been a long beta. Overall, it always seemed as if Google wasn’t quite putting the same resources behind its serverless play as its major competitors. AWS, for example, is placing a major bet on serverless, as is Microsoft. And there are plenty of startups in this space, too.

Like all Google products that come out of beta, Cloud Functions is now backed by an SLA and the company also today announced that the service now runs in more regions in the U.S. and Europe.
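For readers who haven’t used the product, a Cloud Functions HTTP handler is just a function the platform invokes with a Flask-style request object. This minimal sketch runs standalone by stubbing that request object; the function name and greeting are of course illustrative, not from Google’s docs:

```python
# A minimal HTTP-triggered function in the style of Cloud Functions' Python
# runtime: the platform passes a Flask-like request object and you return the
# response body. A stub request stands in for the real one so this sketch
# runs without the platform.

def hello_http(request):
    name = request.args.get("name", "World")
    return f"Hello, {name}!"

class StubRequest:  # stands in for the request object the platform provides
    def __init__(self, args):
        self.args = args

print(hello_http(StubRequest({"name": "Cloud Next"})))  # Hello, Cloud Next!
```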

In addition to these hosted options, Google also today announced its new Cloud Services platform for enterprises that want to run hybrid clouds. While this doesn’t include a self-hosted Cloud Functions option, Google is betting on Kubernetes as the foundation for businesses that want to run serverless applications (and yes, I hate the term ‘serverless,’ too) in their own data centers.

Jul 24, 2018
--

Google Cloud goes all-in on hybrid with its new Cloud Services Platform

The cloud isn’t right for every business, be that because of latency constraints at the edge, regulatory requirements or because it’s simply cheaper to own and operate their own data centers for their specific workloads. Given this, it’s maybe no surprise that the vast majority of enterprises today use both public and private clouds in parallel. That’s something Microsoft has long been betting on as part of its strategy for its Azure cloud, and Google, too, is now taking a number of steps in this direction.

With the open-source Kubernetes project, Google launched one of the fundamental building blocks that make running and managing applications in hybrid environments easier for large enterprises. What Google hadn’t done until today, though, is launch a comprehensive solution that includes all of the necessary parts for this kind of deployment. With its new Cloud Services Platform, though, the company is now offering businesses an integrated set of cloud services that can be deployed on both the Google Cloud Platform and in on-premise environments.

As Google Cloud engineering director Chen Goldberg noted in a press briefing ahead of today’s announcement, many businesses also simply want to be able to manage their own workloads on-premise but still be able to access new machine learning tools in the cloud, for example. “Today, to achieve this, use cases involve a compromise between cost, consistency, control and flexibility,” she said. “And this all negatively impacts the desired result.”

Goldberg stressed that the idea behind the Cloud Services Platform is to meet businesses where they are and then allow them to modernize their stack at their own pace. But she also noted that businesses want more than just the ability to move workloads between environments. “Portability isn’t enough,” she said. “Users want consistent experiences so that they can train their team once and run anywhere — and have a single playbook for all environments.”

The two services at the core of this new offering are the Kubernetes container orchestration tool and Istio, a relatively new but quickly growing tool for connecting, managing and securing microservices. Istio is about to hit its 1.0 release.

We’re not simply talking about a collection of open-source tools here. The core of the Cloud Services Platform, Goldberg noted, is “custom configured and battle-tested for enterprises by Google.” In addition, it is deeply integrated with other services in the Google Cloud, including the company’s machine learning tools.

GKE On-Prem

Among these new custom-configured tools are a number of new offerings, which are all part of the larger platform. Maybe the most interesting of these is GKE On-Prem. GKE, the Google Kubernetes Engine, is the core Google Cloud service for managing containers in the cloud. And now Google is essentially bringing this service to the enterprise data center, too.

The service includes access to all of the usual features of GKE in the cloud, including the ability to register and manage clusters and monitor them with Stackdriver, as well as identity and access management. It also includes a direct line to the GCP Marketplace, which recently launched support for Kubernetes-based applications.

Using the GCP Console, enterprises can manage both their on-premise and GKE clusters without having to switch between different environments. GKE on-prem connects seamlessly to a Google Cloud Platform environment and looks and behaves exactly like the cloud version.

Enterprise users also can get access to professional services and enterprise-grade support for help with managing the service.

“Google Cloud is the first and only major cloud vendor to deliver managed Kubernetes on-prem,” Goldberg argued.

GKE Policy Management

Related to this, Google also today announced GKE Policy Management, which is meant to provide Kubernetes administrators with a single tool for managing all of their security policies across clusters. It’s agnostic as to where the Kubernetes cluster is running, but you can use it to port your existing Google Cloud identity-based policies to these clusters. This new feature will soon launch in alpha.

Managed Istio

The other major new service Google is launching is Managed Istio (together with Apigee API Management for Istio) to help businesses manage and secure their microservices. The open source Istio service mesh gives admins and operators the tools to manage these services and, with this new managed offering, Google is taking the core of Istio and making it available as a managed service for GKE users.

With this, users get access to Istio’s service discovery mechanisms and its traffic management tools for load balancing and routing traffic to containers and VMs, as well as its tools for getting telemetry back from the workloads that run on these clusters.

In addition to these three main new services, Google is also launching a couple of auxiliary tools around GKE and the serverless computing paradigm today. The first of these is the GKE serverless add-on, which makes it easy to run serverless workloads on GKE with a single-step deploy process. This, Google says, will allow developers to go from source code to container “instantaneously.” This tool is currently available as a preview, and Google is making parts of this technology available under the umbrella of its new Knative open-source components. These are the same components that make the serverless add-on possible.

And to wrap it all up, Google also today mentioned a new fully managed continuous integration and delivery service, Google Cloud Build, though the details around this service remain under wraps.

So there you have it. By themselves, all of those announcements may seem a bit esoteric. As a whole, though, they show how Google’s bet on Kubernetes is starting to pay off. As businesses opt for containers to deploy and run their new workloads (and maybe even bring older applications into the cloud), GKE has put Google Cloud on the map to run them in a hosted environment. Now it makes sense for Google to extend this to its users’ data centers, too. With managed Kubernetes offerings from large and small companies like SUSE, Platform9 and Containership, this is starting to become a big business. It’s no surprise the company that started it all wants to get a piece of this pie, too.

Jul
18
2018
--

Swim.ai raises $10M to bring real-time analytics to the edge

Once upon a time, it looked like cloud-based services would become the central hub for analyzing all IoT data. But it didn’t quite turn out that way, because most IoT solutions simply generate too much data to do this effectively, and the round trip to the data center doesn’t work for applications that have to react in real time. Hence the advent of edge computing, which is spawning its own ecosystem of startups.

Among those is Swim.ai, which today announced that it has raised a $10 million Series B funding round led by Cambridge Innovation Capital, with participation from Silver Creek Ventures and Harris Barton Asset Management. The round also included a strategic investment from Arm, the chip design firm you may still remember as ARM (but don’t write it like that or their PR department will promptly email you). This brings the company’s total funding to about $18 million.

Swim.ai has an interesting take on edge computing. The company’s SWIM EDX product combines local data processing and analytics with local machine learning. In a traditional approach, the edge devices collect the data, maybe perform some basic operations on it to bring down bandwidth costs, and then ship it to the cloud, where the hard work is done and where, if you are doing machine learning, the models are trained. Swim.ai argues that this doesn’t work for applications that need to respond in real time. Swim.ai instead performs the model training on the edge device itself, pulling in data from all connected devices. It builds a digital twin for each of these devices and uses that twin to self-train its models on the data.
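A minimal sketch of the digital-twin pattern described above: each device gets a stateful twin that folds new readings into a tiny model incrementally, with no cloud round trip. The class, the device names and the exponentially weighted update rule here are all illustrative assumptions, not Swim.ai’s actual API.

```python
class DigitalTwin:
    """Hypothetical per-device twin that learns entirely at the edge."""

    def __init__(self, device_id, alpha=0.2):
        self.device_id = device_id
        self.alpha = alpha      # step size for the online update
        self.estimate = None    # current model state

    def observe(self, value):
        """Fold a new sensor reading into the model incrementally."""
        if self.estimate is None:
            self.estimate = value
        else:
            # Exponentially weighted moving average as a one-step predictor.
            self.estimate += self.alpha * (value - self.estimate)

    def predict(self):
        """Predict the device's next reading from local state alone."""
        return self.estimate


# One twin per connected device, trained on readings as they arrive.
twins = {dev: DigitalTwin(dev) for dev in ("sensor-1", "sensor-2")}
for reading in (10.0, 12.0, 11.0):
    twins["sensor-1"].observe(reading)
print(round(twins["sensor-1"].predict(), 2))  # 10.52
```

The key property this illustrates is that each update is O(1) in both time and memory, which is what makes training feasible on constrained edge hardware.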

“Demand for the EDX software is rapidly increasing, driven by our software’s unique ability to analyze and reduce data, share new insights instantly peer-to-peer – locally at the ‘edge’ on existing equipment. Efficiently processing edge data and enabling insights to be easily created and delivered with the lowest latency are critical needs for any organization,” said Rusty Cumpston, co-founder and CEO of Swim.ai. “We are thrilled to partner with our new and existing investors who share our vision and look forward to shaping the future of real-time analytics at the edge.”

The company doesn’t disclose any current customers, but it is focusing its efforts on manufacturers, service providers and smart city solutions. Update: Swim.ai did tell us about two customers after we published this story: The City of Palo Alto and Itron.

Swim.ai plans to use its new funding to launch a new R&D center in Cambridge, UK, expand its product development team and tackle new verticals and geographies with an expanded sales and marketing team.

Jul
16
2018
--

Fastly raises another $40 million before an IPO

Last round before the IPO. That’s how Fastly frames its new $40 million Series F round, which brings the total the company has raised over the past few years to $219 million.

The funding round was led by Deutsche Telekom Capital Partners with participation from Sozo Ventures, Swisscom Ventures, and existing investors.

Fastly operates a content delivery network to speed up web requests. Let’s say you type nytimes.com in your browser. In the early days of the internet, your computer would send a request to one of The New York Times’ servers in a data center. The server would receive the request and send back the page to the reader.

But the web has grown immensely, and this kind of architecture is no longer sustainable. The New York Times uses Fastly to cache its homepage, media and articles on Fastly’s servers. This way, when somebody types nytimes.com, Fastly already has the webpage on its servers and can send it directly. For some customers, cached content can represent as much as 90 percent of requests.
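Conceptually, an edge cache of this sort answers a request from local storage when a fresh copy exists and falls through to the origin only on a miss. In the toy Python sketch below, the `EdgeCache` class, the TTL value and the `origin_fetch` callback are all hypothetical stand-ins, not Fastly’s API; it simply shows how nine out of ten requests for the same page can be served without touching the origin, the roughly 90 percent figure mentioned above.

```python
import time

class EdgeCache:
    """Toy TTL cache standing in for a CDN point of presence."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}   # path -> (body, stored_at)
        self.hits = 0
        self.misses = 0

    def get(self, path, origin_fetch):
        entry = self.store.get(path)
        now = time.time()
        if entry and now - entry[1] < self.ttl:
            self.hits += 1      # fresh copy: no trip to the origin
            return entry[0]
        self.misses += 1        # miss or stale: fetch from the origin once
        body = origin_fetch(path)
        self.store[path] = (body, now)
        return body

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


cache = EdgeCache(ttl_seconds=60)
for _ in range(10):
    cache.get("/index.html", lambda path: "<html>homepage</html>")
print(cache.hit_ratio())  # 0.9: only the first request reached the origin
```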

Scale and availability are among the benefits of using a content delivery network, but speed is another. Even though the web is a digital platform, it’s very physical by nature. When you load a page from a server on the other side of the world, it can take hundreds of milliseconds to arrive. Over time, this latency adds up and the experience feels sluggish.

Fastly has data centers and servers all around the world so that you can load content in less than 20 or 30 milliseconds. This is particularly important for customers like Stripe or Ticketmaster, as response time can greatly influence an e-commerce purchase.
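Back-of-the-envelope arithmetic shows why that latency difference matters: a page that triggers a chain of dependent requests pays the per-request latency each time. The request count and per-request latencies below are illustrative numbers, not measurements.

```python
def total_latency_ms(dependent_requests, per_request_ms):
    """Cumulative wait for a chain of sequential, dependent requests."""
    return dependent_requests * per_request_ms

far_origin = total_latency_ms(20, 150)  # origin on the other side of the world
edge_pop = total_latency_ms(20, 25)     # nearby CDN point of presence
print(far_origin, edge_pop)             # 3000 500
```

Three seconds of accumulated waiting versus half a second is the difference the "physical by nature" paragraph above is describing.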

Fastly’s platform also provides additional benefits, such as DDoS mitigation and a web application firewall. One of the main challenges for the platform is caching content as quickly as possible: users upload photos and videos all the time, and that content needs to be on Fastly’s servers within seconds.

The company has tripled its customer base over the past three years. It had a $100 million revenue run rate in 2017. Customers now include Reddit, GitHub, Stripe, Ticketmaster and Pinterest.

There are now 400 employees working for Fastly. It’s worth noting that women represent 42 percent of the executive team, and 65 percent of the engineering leads are women, people of color or LGBTQ (or the intersection of those categories). If you’ve read the diversity reports from other tech companies, you know those are great numbers.
