Jul
18
2018
--

Swim.ai raises $10M to bring real-time analytics to the edge

Once upon a time, it looked like cloud-based services would become the central hub for analyzing all IoT data. But it didn’t quite turn out that way, because most IoT solutions simply generate too much data to do this effectively and the round-trip to the data center doesn’t work for applications that have to react in real time. Hence the advent of edge computing, which is spawning its own ecosystem of startups.

Among those is Swim.ai, which today announced that it has raised a $10 million Series B funding round led by Cambridge Innovation Capital, with participation from Silver Creek Ventures and Harris Barton Asset Management. The round also included a strategic investment from Arm, the chip design firm you may still remember as ARM (but don’t write it like that or their PR department will promptly email you). This brings the company’s total funding to about $18 million.

Swim.ai has an interesting take on edge computing. The company’s SWIM EDX product combines local data processing and analytics with local machine learning. In the traditional approach, edge devices collect the data, maybe perform some basic operations on it to bring down bandwidth costs, and then ship it to the cloud, where the hard work is done and where, if you are doing machine learning, the models are trained. Swim.ai argues that this doesn’t work for applications that need to respond in real time. Instead, it performs the model training on the edge device itself, pulling in data from all connected devices. It then builds a digital twin for each of these devices and uses that twin to self-train its models.
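
Swim.ai hasn’t published its internals, but the general pattern it describes (a lightweight, stateful model per device that trains incrementally as readings stream in, with no round-trip to the cloud) can be sketched in a few lines of Python. Everything below is a hypothetical illustration, not Swim.ai’s API:

```python
# Hypothetical sketch of a per-device "digital twin" trained at the edge.
# This is not Swim.ai's API -- just an illustration of incremental,
# on-device learning that never ships raw data to the cloud.

class DigitalTwin:
    """Maintains a tiny statistical model of one device, updated per reading."""

    def __init__(self, device_id, alpha=0.1):
        self.device_id = device_id
        self.alpha = alpha   # learning rate for the running estimates
        self.mean = None     # running mean of the sensor value
        self.var = 0.0       # running variance estimate

    def update(self, value):
        """Fold a new sensor reading into the model (training on the edge)."""
        if self.mean is None:
            self.mean = value
            return
        delta = value - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)

    def is_anomalous(self, value, threshold=3.0):
        """Flag readings that deviate sharply from this device's learned behavior."""
        if self.mean is None or self.var == 0:
            return False
        return abs(value - self.mean) > threshold * (self.var ** 0.5)


# One twin per connected device; decisions are made locally, in real time.
twins = {}

def on_reading(device_id, value):
    twin = twins.setdefault(device_id, DigitalTwin(device_id))
    alert = twin.is_anomalous(value)
    twin.update(value)
    return alert
```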

“Demand for the EDX software is rapidly increasing, driven by our software’s unique ability to analyze and reduce data, share new insights instantly peer-to-peer – locally at the ‘edge’ on existing equipment. Efficiently processing edge data and enabling insights to be easily created and delivered with the lowest latency are critical needs for any organization,” said Rusty Cumpston, co-founder and CEO of Swim.ai. “We are thrilled to partner with our new and existing investors who share our vision and look forward to shaping the future of real-time analytics at the edge.”

The company doesn’t disclose any current customers, but it is focusing its efforts on manufacturers, service providers and smart city solutions. Update: Swim.ai did tell us about two customers after we published this story: The City of Palo Alto and Itron.

Swim.ai plans to use its new funding to launch a new R&D center in Cambridge, UK, expand its product development team and tackle new verticals and geographies with an expanded sales and marketing team.

Jul
16
2018
--

Fastly raises another $40 million before an IPO

Last round before the IPO. That’s how Fastly frames its new $40 million Series F round, which brings the company’s total raised over the past few years to $219 million.

The funding round was led by Deutsche Telekom Capital Partners with participation from Sozo Ventures, Swisscom Ventures, and existing investors.

Fastly operates a content delivery network to speed up web requests. Let’s say you type nytimes.com in your browser. In the early days of the internet, your computer would send a request to one of The New York Times’ servers in a data center. The server would receive the request and send back the page to the reader.

But the web has grown immensely, and this kind of architecture is no longer sustainable. The New York Times uses Fastly to cache its homepage, media and articles on Fastly’s servers. This way, when somebody types nytimes.com, Fastly already has the page on its servers and can send it back directly. For some customers, cached content can represent as much as 90 percent of requests.
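
In practice, an origin tells a CDN like Fastly what it may cache, and for how long, through standard HTTP caching headers. Here is a minimal, purely illustrative origin sketch (not The New York Times’ actual setup) using plain Cache-Control semantics:

```python
# Minimal origin sketch: a standard Cache-Control header signals that a
# CDN edge (Fastly or otherwise) may cache the page and serve repeat
# requests without another trip to the origin. Illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

HOMEPAGE = b"<html><body><h1>Front page</h1></body></html>"

class OriginHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Cacheable at the edge for five minutes; during that window the
        # edge absorbs requests that would otherwise hit this server.
        self.send_header("Cache-Control", "public, max-age=300")
        self.end_headers()
        self.wfile.write(HOMEPAGE)

if __name__ == "__main__":
    HTTPServer(("", 8080), OriginHandler).serve_forever()
```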

Scale and availability are among the benefits of using a content delivery network, but speed is another. Even though the web is a digital platform, it’s very physical by nature. When you load a page from a server on the other side of the world, it can take hundreds of milliseconds to arrive. Over time, that latency adds up to a sluggish experience.

Fastly has data centers and servers all around the world so that you can load content in less than 20 or 30 milliseconds. This is particularly important for customers like Stripe or Ticketmaster, as response time can greatly influence an e-commerce purchase.

Fastly’s platform also provides additional benefits, such as DDoS mitigation and a web application firewall. One of the main challenges for the platform is caching content as quickly as possible. Users upload photos and videos all the time, and that new content should be on Fastly’s servers within seconds.
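
The flip side of caching aggressively is invalidating quickly. Fastly supports instant purging, including purging a single URL with an HTTP PURGE request; the sketch below assumes that method, and whether a Fastly-Key API token is needed depends on how the service is configured:

```python
# Sketch: ask the CDN to drop a cached object so freshly uploaded content
# shows up within seconds. Fastly documents purging a URL via an HTTP
# PURGE request; authentication requirements vary by service configuration.
import urllib.request

def purge(url, api_token=None):
    req = urllib.request.Request(url, method="PURGE")
    if api_token:  # only needed if the service requires authenticated purges
        req.add_header("Fastly-Key", api_token)
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (hypothetical URL):
# purge("https://www.example.com/images/new-photo.jpg")
```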

The company has tripled its customer base over the past three years. It had a $100 million revenue run rate in 2017. Customers now include Reddit, GitHub, Stripe, Ticketmaster and Pinterest.

There are now 400 employees working for Fastly. It’s worth noting that women represent 42 percent of the executive team, and 65 percent of the engineering leads are women, people of color or LGBTQ (or the intersection of those categories). And if you haven’t read all the diversity reports from tech companies, take it from us: those are great numbers.

Jul
13
2018
--

Chad Rigetti to talk quantum computing at Disrupt SF

Even for the long-standing giants of the tech industry, quantum computing is one of the most complicated subjects to tackle. So how does a five-year-old startup compete?

Chad Rigetti, the namesake founder of Rigetti Computing, will join us at Disrupt SF 2018 to help us break it all down.

Rigetti’s approach to quantum computing is two-fold: on one front, the company is working on the design and fabrication of its own quantum chips; on the other, the company is opening up access to its early quantum computers for researchers and developers by way of its cloud computing platform, Forest.

Rigetti Computing has raised nearly $70 million to date according to Crunchbase, with investment from some of the biggest names around. Meanwhile, labs around the country are already using Forest to explore the possibilities ahead.

What’s the current state of quantum computing? How do we separate hype from reality? Which fields might quantum computing impact first — and how can those interested in quantum technology make an impact? We’ll talk all this and more at Disrupt SF 2018.

Passes to Disrupt SF are available at the Early Bird rate until July 25 here.

Jul
12
2018
--

Google’s Apigee teams up with Informatica to extend its API ecosystem

Google acquired API management service Apigee back in 2016, but it’s been pretty quiet around the service in recent years. Today, however, Apigee announced a number of smaller updates that introduce a few new integrations with the Google Cloud platform, as well as a major new partnership with cloud data management and integration firm Informatica that essentially makes Informatica the preferred integration partner for Google Cloud.

Like most partnerships in this space, the deal with Informatica involves some co-selling and marketing agreements, but that alone wouldn’t be all that interesting. What makes this deal stand out is that Google is actually baking some of Informatica’s tools right into the Google Cloud dashboard. This will allow Apigee users to tap into Informatica’s wide range of integrations with third-party enterprise applications, while Informatica users will be able to publish their APIs through Apigee and have that service manage them.

Some of Google’s competitors, including Microsoft, have built their own integration services. As Google Cloud director of product management Ed Anuff told me, that wasn’t really on Google’s road map. “It takes a lot of know-how to build a rich catalog of connectors,” he said. “You could go and build an integration platform but if you don’t have that, you can’t address your customer’s needs.” Instead, Google went to look for a partner who already has this large catalog and plenty of credibility in the enterprise space.

Similarly, Informatica’s senior VP and GM for big data, cloud and data integration Ronen Schwartz noted that many of his company’s customers are now looking to move into the cloud and this move will make it easier for Informatica’s customers to bring their services into Apigee and open them up for external applications. “With this partnership, we are bringing the best of breed of both worlds to our customers,” he said. “And we are doing it now and we are making it available in an integrated, optimized way.”

Jul
12
2018
--

GitHub Enterprise and Business Cloud users now get access to public repos, too

GitHub, the code hosting service Microsoft recently acquired, is launching a couple of new features for its business users today that’ll make it easier for them to access public repositories on the service.

Traditionally, users on the hosted Business Cloud and self-hosted Enterprise were not able to directly access the millions of public open-source repositories on the service. Now, with the latest release, that’s changing, and business users will be able to reach beyond their firewalls to engage and collaborate with the rest of the GitHub community directly.

With this, GitHub now also offers its business and enterprise users a new unified search feature that lets them search not only their internal repos but also open-source ones.

Other new features in this latest Enterprise release include the ability to ignore whitespace when reviewing changes, the ability to require multiple reviewers for code changes, automated support tickets and more. You can find a full list of all updates here.

Microsoft’s acquisition of GitHub wasn’t fully unexpected (and it’s worth noting that the acquisition hasn’t closed yet), but it is still controversial, given that Microsoft and the open-source community, which heavily relies on GitHub, haven’t always seen eye-to-eye in the past. I’m personally not too worried about that, and it feels like the dust has settled at this point and that people are waiting to see what Microsoft will do with the service.

Jul
11
2018
--

Box opens up about the company’s approach to innovation

Most of us never really stop to think about how the software and services we use on a daily basis are created. We just know they’re there when we want to access them, and they work most of the time. But companies don’t just appear and expand randomly; they need a well-defined process and methodology to keep innovating, or they won’t be around very long.

Box has been around since 2005 and has grown into a company with a run rate of over $500 million. Along the way, it transformed from a consumer focus to one concentrating on enterprise content management, and expanded the platform from one that mostly offered online storage and file sharing to one that offers a range of content management services in the cloud.

I recently sat down with Chief Product and Chief Strategy Officer Jeetu Patel. A big part of Patel’s job is to keep the company’s development teams on track and focused on new features that could enhance the Box platform, attract new customers and increase revenue.

Fundamental beliefs

Before you solve a problem, you need the right group of people working on it. Patel says Box follows a few primary principles to guide product and team development. It starts with rules and rubrics for developing innovative solutions and for deciding where to invest resources in terms of money and people.

Graphic: Box

When it comes to innovating, you have to structure your teams in such a way that you can react to changing requirements in the marketplace, and in today’s tech world, being agile is more important than ever. “You have to configure your innovation engine from a team, motivation and talent recruiting perspective so that you’ve actually got the right structure in place to provide enough speed and autonomy to the team so that they’re unencumbered and able to execute quickly,” Patel explained.

Finally, you need to have a good grip on the customer and the market. That involves constantly assessing market requirements and looking at building products and features that respond to a need, yet that aren’t dated when you launch them.

Start with the customer

Patel says that when all is said and done, the company wants to help its customers by filling a hole in the product set. From a central company philosophy perspective, it begins with the customer. That might sound like pandering on its face, but he says if you keep that goal in mind it really acts as an anchor to the entire process.

“From a core philosophy that we keep in mind, you have to actually make sure that you get everyone really oriented in the company to say you always start from a customer problem and work backwards. But picking the right problem to solve is 90 percent of the battle,” he said.

Solve hard problems

Patel strongly believes that the quality of the problem is directly proportional to the outcome of the project. Part of that is solving a real customer pain point, but it’s also about challenging your engineers. You can successfully solve low-hanging-fruit problems most of the time, but then you won’t necessarily attract the highest-quality engineering talent.

“If you think about really hard problems that have a lot of mission and purpose around them, you can actually attract the best team,” he said.

That means looking for a problem where you can add a lot of value. “The problem that you choose to spend your time solving should be one where you are uniquely positioned to create a 10x value proposition compared to what might exist in the market today,” Patel explained. If it doesn’t reach that threshold, he believes that there’s no motivation for the customer to change, and it’s not really worth going after.

Build small teams

Once you identify that big problem, you need to form a team to start attacking it. Patel recommends keeping the teams manageable, and he believes in the Amazon approach of the two-pizza team: a group of 8-10 people who can operate on... well... two pizzas. If the teams get too large, he says, it becomes difficult to coordinate and too much time gets wasted on logistics instead of innovation.

“Having very defined local missions, having [small] teams carrying out those local missions, and making sure that those team sizes don’t get too large so that they can stay very agile, is a pretty important kind of core operating principle of how we build products,” Patel said.

That becomes even more important as the company scales. The trick is to configure the organization in such a way so that as you grow, you end up with many smaller teams instead of a few bigger ones, and in that way you can better pinpoint team missions.

Developing a Box product

Patel sees four key areas when it comes to finally building that new product at Box. First of all, it needs to be enterprise grade and all that entails — secure, reliable, scalable, fault tolerant and so forth.

That’s Job One, but what generally has differentiated Box in the content management market has been its ease of use. He sees that as removing as much friction as you can from a software-driven business process.

Next, you try to make those processes intelligent and that means understanding the purpose of the content. Patel says that could involve having better search, better surfacing of content and automated trigger events that move that content through a workflow inside a company.

Finally, they look at how it fits inside a workflow, because content doesn’t live in a vacuum inside an enterprise. It generally has a defined purpose, and the content management system should make it easy to integrate that content into the broader context of that purpose.

Measure twice

Once you have those small teams set up with their missions in place, you have to establish rules and metrics that allow them to work quickly, but still have a set of milestones they have to meet to prove they are on a worthwhile project for the company. You don’t want to be throwing good money after a bad project.

For Patel and Box, that involves a set of metrics that tell you at all times whether the team is succeeding or failing. It seems simple enough, but it takes a lot of work from a management perspective to define missions and goals and then track them on a regular basis.

He says that involves three elements: “There are three things that we think about including what’s the plan for what you’re going to build, what’s the strategy around what you’re going to build, and then what’s the level of coordination that each one of us have on whether or not what we’re building is, in fact, going to be successful.”

In the end, this is an iterative process, one that keeps evolving as the company grows and develops and as they learn from each project and each team. “We’re constantly looking at the processes and saying, what are the things that need to be adjusted,” Patel said.

Jul
08
2018
--

Serverless computing could unleash a new startup ecosystem

While serverless computing isn’t new, it has reached an interesting place in its development. As developers begin to see the value of serverless architecture, a whole new startup ecosystem could begin to develop around it.

Serverless isn’t exactly serverless at all, but it does enable a developer to set event triggers and leave the infrastructure requirements completely to the cloud provider. The vendor delivers exactly the right amount of compute, storage and memory and the developer doesn’t even have to think about it (or code for it).

That sounds ideal on its face, but as with every new technology, each solution brings a set of new problems, and those issues tend to represent openings for enterprising entrepreneurs. That could mean big opportunities in the coming years for companies building security, tooling, libraries, APIs, monitoring and the whole host of other tools serverless will likely require as it evolves.

Building layers of abstraction

In the beginning we had physical servers, but there was lots of wasted capacity. That led to the development of virtual machines, which enabled IT to take a single physical server and divide it into multiple virtual ones. While that was a huge breakthrough for its time, helped launch successful companies like VMware and paved the way for cloud computing, it was only the beginning.

Then came containers, which really began to take off with the development of Docker and Kubernetes, two open source platforms. Containers enable the developer to break down a large monolithic program into discrete pieces, which helps it run more efficiently. More recently, we’ve seen the rise of serverless or event-driven computing. In this case, the whole idea of infrastructure itself is being abstracted away.

Photo: shutterjack/Getty Images

While it’s not truly serverless, since you need underlying compute, storage and memory to run a program, it is removing the need for developers to worry about servers. Today, so much coding goes into connecting the program’s components to run on whatever hardware (virtual or otherwise) you have designated. With serverless, the cloud vendor handles all of that for the developer.

All of the major vendors have launched serverless products with AWS Lambda, Google Cloud Functions and Microsoft Azure Functions all offering a similar approach. But it has the potential to be more than just another way to code. It could eventually shift the way we think about programming and its relation to the underlying infrastructure altogether.

It’s important to understand that we aren’t quite there yet, and a lot of work still needs to happen for serverless to really take hold, but it has enormous potential to be a startup feeder system in coming years and it’s certainly caught the attention of investors looking for the next big thing.

Removing another barrier to entry

Tim Wagner, general manager for AWS Lambda, says the primary advantage of serverless computing is that it allows developers to strip away all of the challenges associated with managing servers. “So there is no provisioning, deploying, patching or monitoring — all those details at the server and operating system level go away,” he explained.

He says this allows developers to reduce the entire coding process to the function level. The programmer defines the event or function and the cloud provider figures out the exact amount of underlying infrastructure required to run it. Mind you, this can be as little as a single line of code.
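
In AWS Lambda’s case, for example, the deployable unit can be a single Python function; the provider supplies, scales and patches everything underneath it. A minimal sketch of such a handler:

```python
# Minimal AWS Lambda-style handler: the developer writes the function and
# the cloud provider allocates whatever infrastructure is needed to run it.
import json

def handler(event, context):
    # 'event' carries the trigger payload: an HTTP request, a queue
    # message, a file-upload notification and so on.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```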

Blocks of servers in cloud data center.

Colin Anderson/Getty Images

Sarah Guo, a partner at Greylock Partners who invests in early-stage companies, sees serverless computing as offering a way for developers to concentrate on just the code, leaving the infrastructure management to the provider. “If you look at one of the amazing things cloud computing platforms have done, it has just taken a lot of the expertise and cost that you need to build a scalable service and shifted it to [the cloud provider],” she said. Serverless takes that concept and shifts it even further by allowing developers to concentrate solely on the user’s needs without having to worry about what it takes to actually run the program.

Survey says…

Cloud computing company Digital Ocean recently surveyed over 4,800 IT pros, of whom 55 percent identified themselves as developers. When asked about serverless, nearly half of respondents reported they didn’t fully understand the concept. On the other hand, they certainly recognized the importance of learning more about it, with 81 percent reporting that they plan to do further research this year.

When asked if they had deployed a serverless application in the last year, not surprisingly about two-thirds reported they hadn’t. This was consistent across regions with India reporting a slightly higher rate of serverless adoption.

Graph: Digital Ocean

Of those using serverless, Digital Ocean found that AWS was by far the most popular service with 58 percent of respondents reporting Lambda was their chosen tool, followed by Google Cloud Functions with 23 percent and Microsoft Azure Functions further back at 10 percent.

Interestingly enough, one of the reasons respondents reported a reluctance to adopt serverless was a lack of tooling. “One of the biggest challenges developers report when it comes to serverless is monitoring and debugging,” the report stated. That lack of visibility, however, could also represent an opening for startups.

Creating ecosystems

The thing about abstraction is that it simplifies operations on one level, but it also creates a new set of requirements, some expected and some that might surprise as a new way of programming scales. The lack of tooling could hinder development, but more often than not, when necessity calls it stimulates the creation of a new set of instrumentation.

This is certainly something that Guo recognizes as an investor. “I think there is a lot of promise as we improve a bunch of things around making it easier for developers to access serverless, while expanding the use cases, and concentrating on issues like visibility and security, which are all [issues] when you give more and more control of [the infrastructure] to someone else,” she said.

Photo: shylendrahoode/Getty Images

Ping Li, general partner at Accel, also sees an opportunity here for investors. “I think the reality is that anytime there’s a kind of shift from a developer application perspective, there’s an opportunity to create a new set of tools or products that help you enable those platforms,” he said.

Li says the promise is there, but it won’t happen right away because there needs to be a critical mass of developers using serverless methodologies first. “I would say that we are definitely interested in serverless in that we believe it’s going to be a big part of how applications will be built in the future, but it’s still in its early stages,” Li said.

S. Somasegar, managing director at Madrona Ventures, says that even as serverless removes complexity, it creates a new set of issues, which in turn creates openings for startups. “It is complicated because we are trying to create this abstraction layer over the underlying infrastructure and telling the developers that you don’t need to worry about it. But that means there are a lot of tools that have to exist in place — whether it is development tools, deployment tools, debugging tools or monitoring tools — that enable the developer to know certain things are happening when you’re operating in a serverless environment.”

Beyond tooling

Having that visibility in a serverless world is a real challenge, but it is not the only opening here. There are also opportunities for trigger or function libraries, or for companies akin to Twilio or Stripe, which offer easy API access to a set of functionality without requiring deep expertise in areas like communications or payment gateways. There could be analogous needs in the serverless world.

Companies are beginning to take advantage of serverless computing to find new ways of solving problems. Over time, we should begin to see more developer momentum toward this approach and more tools develop.

While it is early days, as Guo says, it’s not as though developers love running infrastructure; it’s just been a necessity. “I think [it] will be very interesting. I just think we’re still very early in the ecosystem,” she said. Yet the potential is certainly there: if the pieces fall into place and programmer momentum builds around this way of developing applications, serverless could really take off, and a startup ecosystem could follow.

Jun
19
2018
--

Riskified prevents fraud on your favorite e-commerce site

Meet Riskified, an Israel-based startup that has raised $64 million in total to fight online fraud. The company has built a service that helps you reject transactions from stolen credit cards and approve more transactions from legitimate clients.

If you live in the U.S., chances are you know someone who noticed a fraudulent charge for an expensive purchase with their credit card — it’s still unclear why most restaurants and bars in the U.S. take your card away instead of bringing the card reader to you.

Online purchases, also known as card-not-present transactions, represent the primary source of fraudulent transactions. That’s why e-commerce websites need to optimize their payment system to detect fraudulent transactions and approve all the others.

Riskified uses machine learning to recognize good orders and improve your bottom line. In fact, Riskified is so confident that it guarantees that you’ll never have to pay chargebacks. As long as a transaction is approved by the product, the startup offers chargeback protection. If Riskified made the wrong call, the company reimburses fraudulent chargebacks.

On the other side of the equation, many e-commerce websites leave money on the table by rejecting legitimate transactions (false declines). It’s hard to quantify this, as some of those customers end up not ordering anything at all. Riskified should help you on this front too.

The startup targets big customers — Macy’s, Dyson, Prada, Vestiaire Collective and GOAT are all using it. You can integrate Riskified with popular e-commerce payment systems and solutions, such as Shopify, Magento and Stripe. Riskified also has an API for more sophisticated implementations.

Jun
12
2018
--

Sumo Logic brings data analysis to containers

Sumo Logic has long held the goal of helping customers understand their data wherever it lives. As we move into the era of containers, that goal becomes more challenging because containers are by their nature ephemeral. The company announced a product enhancement today designed to instrument containerized applications in spite of that.

They are debuting these new features at DockerCon, Docker’s customer conference taking place this week in San Francisco.

Sumo’s CEO Ramin Sayer says containers have begun to take hold over the last 12-18 months with Docker and Kubernetes emerging as tools of choice. Given their popularity, Sumo wants to be able to work with them. “[Docker and Kubernetes] are by far the most standard things that have developed in any new shop, or any existing shop that wants to build a brand new modern app or wants to lift and shift an app from on prem [to the cloud], or have the ability to migrate workloads from Vendor A platform to Vendor B,” he said.

He’s not wrong, of course. Containers and Kubernetes have been taking off in a big way over the last 18 months, and developers and operations alike have struggled to instrument these apps to understand how they behave.

“But as that standardization of adoption of that technology has come about, it makes it easier for us to understand how to instrument, collect, analyze, and more importantly, start to provide industry benchmarks,” Sayer explained.

They do this by avoiding the use of agents. Regardless of how you run your application, whether in a VM or a container, Sumo is able to capture the data and give you feedback you might otherwise have trouble retrieving.

Screen shot: Sumo Logic (cropped)

The company has built in native support for Kubernetes and Amazon Elastic Container Service for Kubernetes (Amazon EKS). It also supports the open source tool Prometheus favored by Kubernetes users to extract metrics and metadata. The goal of the Sumo tool is to help customers fix issues faster and reduce downtime.
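
Prometheus works by scraping metrics that applications expose over HTTP, so a containerized app instrumented with the open source prometheus_client library gives any compatible collector something to pull. The sketch below is generic Prometheus instrumentation, not Sumo Logic’s own integration:

```python
# Sketch: expose Prometheus-format metrics from a containerized app so a
# scraper (Prometheus itself, or any tool that reads its format) can
# collect them. Generic instrumentation, not Sumo Logic's product.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request():
    REQUESTS.inc()
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```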

As they work with this technology, they can begin to understand norms and pass that information onto customers. “We can guide them and give them best practices and tips, not just on what they’ve done, but how they compare to other users on Sumo,” he said.

Sumo Logic was founded in 2010 and has raised $230 million, according to data on Crunchbase. Its most recent round was a $70 million Series F led by Sapphire Ventures last June.

Jun
07
2018
--

Google Cloud announces the Beta of single tenant instances

One of the characteristics of cloud computing is that when you launch a virtual machine, it gets distributed wherever it makes the most sense for the cloud provider. That usually means sharing servers with other customers in what is known as a multi-tenant environment. But what about times when you want a physical server dedicated just to you?

To help meet those kinds of demands, Google announced the Beta of Google Compute Engine sole-tenant nodes, which have been designed for use cases such as regulatory or compliance requirements where you need full control of the underlying physical machine and sharing is not desirable.

“Normally, VM instances run on physical hosts that may be shared by many customers. With sole-tenant nodes, you have the host all to yourself,” Google wrote in a blog post announcing the new offering.

Diagram: Google

Google has tried to be as flexible as possible, letting customers choose exactly what configuration they want in terms of CPU and memory. Customers can let Google pick the dedicated server that’s best at any particular moment, or they can manually select a server if they want that level of control. In both cases, they will be assigned a dedicated machine.

If you want to play with this, there is a free tier and then various pricing tiers for a variety of computing requirements. Regardless of your choice, you will be charged on a per-second basis with a one-minute minimum charge, according to Google.

Since this feature is still in Beta, it’s worth noting that it is not covered under any SLA. Microsoft and Amazon have similar offerings.
