Jul 31, 2018
--

The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open-source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open-source projects out there right now. It sits at the intersection of a number of industry trends, like containers, microservices and serverless computing, and makes it easier for enterprises to embrace them. Istio now has more than 200 contributors and the code has seen more than 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
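To make that canary capability concrete, here is a minimal, hypothetical sketch (not taken from the article or any official Istio documentation) of the kind of weighted routing rule Istio evaluates. It uses the Kubernetes Python client to create an Istio VirtualService that sends 90 percent of traffic to a stable “v1” subset of a “reviews” service and 10 percent to a “v2” canary; it assumes a cluster with Istio 1.0 installed and a DestinationRule that already defines those subsets, and all names are illustrative.

```python
# Hypothetical example: a weighted Istio routing rule for a canary rollout.
# Assumes Istio 1.0 on the cluster and an existing DestinationRule defining
# the "v1" and "v2" subsets of the "reviews" service (names are illustrative).
from kubernetes import client, config

config.load_kube_config()  # local kubeconfig pointing at the Istio-enabled cluster

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews-canary", "namespace": "default"},
    "spec": {
        "hosts": ["reviews"],
        "http": [
            {
                "route": [
                    {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                    {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
                ]
            }
        ],
    },
}

# VirtualService is a custom resource, so it is created through the CRD API.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```

Gradually shifting those weights (90/10, then 75/25, and so on) is what lets a team expose an update to a small slice of users before rolling it out to everyone.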

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open-source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem that a lot of businesses are facing today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and while we hadn’t yet declared 1.0, we heard a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open-source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open-source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

Jul 24, 2018
--

Google Cloud goes all-in on hybrid with its new Cloud Services Platform

The cloud isn’t right for every business, be that because of latency constraints at the edge, regulatory requirements or because it’s simply cheaper to own and operate their own data centers for their specific workloads. Given this, it’s maybe no surprise that the vast majority of enterprises today use both public and private clouds in parallel. That’s something Microsoft has long been betting on as part of its strategy for its Azure cloud, and Google, too, is now taking a number of steps in this direction.

With the open-source Kubernetes project, Google launched one of the fundamental building blocks that make running and managing applications in hybrid environments easier for large enterprises. What Google hadn’t done until today, though, is launch a comprehensive solution that includes all of the necessary parts for this kind of deployment. With its new Cloud Services Platform, the company is now offering businesses an integrated set of cloud services that can be deployed both on the Google Cloud Platform and in on-premise environments.

As Google Cloud engineering director Chen Goldberg noted in a press briefing ahead of today’s announcement, many businesses also simply want to be able to manage their own workloads on-premise but still be able to access new machine learning tools in the cloud, for example. “Today, to achieve this, use cases involve a compromise between cost, consistency, control and flexibility,” she said. “And this all negatively impacts the desired result.”

Goldberg stressed that the idea behind the Cloud Services Platform is to meet businesses where they are and then allow them to modernize their stack at their own pace. But she also noted that businesses want more than just the ability to move workloads between environments. “Portability isn’t enough,” she said. “Users want consistent experiences so that they can train their team once and run anywhere — and have a single playbook for all environments.”

The two services at the core of this new offering are the Kubernetes container orchestration tool and Istio, a relatively new but quickly growing tool for connecting, managing and securing microservices. Istio is about to hit its 1.0 release.

We’re not simply talking about a collection of open-source tools here. The core of the Cloud Services Platform, Goldberg noted, is “custom configured and battle-tested for enterprises by Google.” In addition, it is deeply integrated with other services in the Google Cloud, including the company’s machine learning tools.

GKE On-Prem

Among these custom-configured tools are a number of new offerings, all of which are part of the larger platform. Maybe the most interesting of these is GKE On-Prem. GKE, the Google Kubernetes Engine, is the core Google Cloud service for managing containers in the cloud. And now Google is essentially bringing this service to the enterprise data center, too.

The service includes access to all of the usual features of GKE in the cloud, including the ability to register and manage clusters and monitor them with Stackdriver, as well as identity and access management. It also includes a direct line to the GCP Marketplace, which recently launched support for Kubernetes-based applications.

Using the GCP Console, enterprises can manage both their on-premise and GKE clusters without having to switch between different environments. GKE On-Prem connects seamlessly to a Google Cloud Platform environment and looks and behaves exactly like the cloud version.

Enterprise users can also get access to professional services and enterprise-grade support for help with managing the service.

“Google Cloud is the first and only major cloud vendor to deliver managed Kubernetes on-prem,” Goldberg argued.

GKE Policy Management

Related to this, Google also today announced GKE Policy Management, which is meant to provide Kubernetes administrators with a single tool for managing all of their security policies across clusters. It’s agnostic as to where the Kubernetes cluster is running, but you can use it to port your existing Google Cloud identity-based policies to these clusters. This new feature will soon launch in alpha.

Managed Istio

The other major new service Google is launching is Managed Istio (together with Apigee API Management for Istio) to help businesses manage and secure their microservices. The open source Istio service mesh gives admins and operators the tools to manage these services and, with this new managed offering, Google is taking the core of Istio and making it available as a managed service for GKE users.

With this, users get access to Istio’s service discovery mechanisms and its traffic management tools for load balancing and routing traffic to containers and VMs, as well as its tools for getting telemetry back from the workloads that run on these clusters.

In addition to these three main new services, Google is also launching a couple of auxiliary tools around GKE and the serverless computing paradigm today. The first of these is the GKE serverless add-on, which makes it easy to run serverless workloads on GKE with a single-step deploy process. This, Google says, will allow developers to go from source code to container “instantaneously.” The tool is currently available as a preview, and Google is making parts of the underlying technology available under the umbrella of its new Knative open-source components, the same components that make the serverless add-on possible.

And to wrap it all up, Google also today mentioned a new fully managed continuous integration and delivery service, Google Cloud Build, though the details around this service remain under wraps.

So there you have it. By themselves, all of these announcements may seem a bit esoteric. As a whole, though, they show how Google’s bet on Kubernetes is starting to pay off. As businesses opt for containers to deploy and run their new workloads (and maybe even bring older applications into the cloud), GKE has put Google Cloud on the map as a place to run them in a hosted environment. Now it makes sense for Google to extend this to its users’ data centers, too. With managed Kubernetes offerings from large and small companies like SUSE, Platform9 and Containership, this is starting to become a big business. It’s no surprise that the company that started it all wants to get a piece of this pie, too.

Jul 18, 2018
--

Swim.ai raises $10M to bring real-time analytics to the edge

Once upon a time, it looked like cloud-based services would become the central hub for analyzing all IoT data. But it didn’t quite turn out that way, because most IoT solutions simply generate too much data to do this effectively and the round-trip to the data center doesn’t work for applications that have to react in real time. Hence the advent of edge computing, which is spawning its own ecosystem of startups.

Among those is Swim.ai, which today announced that it has raised a $10 million Series B funding round led by Cambridge Innovation Capital, with participation from Silver Creek Ventures and Harris Barton Asset Management. The round also included a strategic investment from Arm, the chip design firm you may still remember as ARM (but don’t write it like that or their PR department will promptly email you). This brings the company’s total funding to about $18 million.

Swim.ai has an interesting take on edge computing. The company’s SWIM EDX product combines local data processing and analytics with local machine learning. In a traditional approach, the edge devices collect the data, maybe perform some basic operations against it to bring down the bandwidth cost, and then ship it to the cloud, where the hard work is done and where, if you are doing machine learning, the models are trained. Swim.ai argues that this doesn’t work for applications that need to respond in real time. Swim.ai, however, performs the model training on the edge device itself by pulling in data from all connected devices. It then builds a digital twin for each of these devices and uses that to self-train its models based on this data.
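To make the digital-twin idea concrete, here is a purely hypothetical Python sketch; none of this reflects Swim.ai’s actual API or model, and the class, field names and the simple exponentially weighted model are all illustrative. The point it shows is the architecture described above: each connected device gets its own twin that ingests local readings and updates a tiny online model in place, so predictions can be served at the edge without a cloud round-trip.

```python
# Hypothetical illustration of a per-device digital twin that learns locally.
# The model here is a simple exponentially weighted average, chosen only to
# show the "train on the edge, no cloud round-trip" pattern.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DigitalTwin:
    device_id: str
    alpha: float = 0.1                    # smoothing factor for the online model
    estimate: Optional[float] = None      # current learned state
    history: List[float] = field(default_factory=list)

    def observe(self, reading: float) -> None:
        """Fold a new local sensor reading into the twin's model."""
        self.history.append(reading)
        if self.estimate is None:
            self.estimate = reading
        else:
            # Cheap incremental update -- no batch training, no cloud trip.
            self.estimate = self.alpha * reading + (1 - self.alpha) * self.estimate

    def predict(self) -> Optional[float]:
        """Return the twin's current best guess for the next reading."""
        return self.estimate


# One twin per connected device, each trained on its own local data.
twins = {dev: DigitalTwin(dev) for dev in ("pump-1", "pump-2")}
for value in (72.5, 74.0, 73.2):
    twins["pump-1"].observe(value)
print(twins["pump-1"].predict())
```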

“Demand for the EDX software is rapidly increasing, driven by our software’s unique ability to analyze and reduce data, share new insights instantly peer-to-peer – locally at the ‘edge’ on existing equipment. Efficiently processing edge data and enabling insights to be easily created and delivered with the lowest latency are critical needs for any organization,” said Rusty Cumpston, co-founder and CEO of Swim.ai. “We are thrilled to partner with our new and existing investors who share our vision and look forward to shaping the future of real-time analytics at the edge.”

The company doesn’t disclose any current customers, but it is focusing its efforts on manufacturers, service providers and smart city solutions. Update: Swim.ai did tell us about two customers after we published this story: The City of Palo Alto and Itron.

Swim.ai plans to use its new funding to launch a new R&D center in Cambridge, UK, expand its product development team and tackle new verticals and geographies with an expanded sales and marketing team.

Mar 12, 2018
--

Dropbox sets IPO range $16-18, valuing it below $10B, as Salesforce ponies up $100M

After announcing an IPO in February, today Dropbox updated its S-1 filing with pricing. The cloud services and storage company said that it expects to price its IPO at between $16 and $18 per share, selling 36,000,000 shares to raise up to $648 million when it lists as “DBX” on the Nasdaq exchange.

In addition to that, Dropbox announced that it will be selling $100 million in stock to Salesforce — its new integration partner — right after the IPO, “at a price per share equal to the initial offering price.”
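As a quick back-of-envelope using only the figures above (so treat it as illustrative, not something from the filing), the offering size and the Salesforce purchase line up like this:

```python
# Back-of-envelope check on the figures above (illustrative only).
shares_offered = 36_000_000
price_low, price_high = 16, 18

# Gross proceeds across the stated range; the $648M headline number is the
# top of that range.
proceeds_low = shares_offered * price_low    # $576,000,000
proceeds_high = shares_offered * price_high  # $648,000,000

# Salesforce's $100M purchase "at a price per share equal to the initial
# offering price" would translate to roughly this many additional shares.
salesforce_low = 100_000_000 // price_high   # ~5.55M shares at $18
salesforce_high = 100_000_000 // price_low   # 6.25M shares at $16

print(f"proceeds: ${proceeds_low:,} to ${proceeds_high:,}")
print(f"Salesforce stake: ~{salesforce_low:,} to {salesforce_high:,} shares")
```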

A specific date has not yet been set for Dropbox’s listing later this month.

The IPO pricing values the company at between $7 billion and nearly $8 billion when you factor in restricted stock units — making it the biggest tech IPO since Snap last year, but still falling well below the $10 billion valuation that Dropbox crept up to back in 2014 when it raised $350 million in venture funding.

Many will be watching Dropbox’s IPO to see if it stands up longer term and becomes a bellwether for the fortunes and fates of other outsized “startups” that are also expected to list, including those that have already filed to go public, like Spotify, as well as those that have yet to make any official pronouncements, like Airbnb.

Some might argue that it’s illogical to compare a company whose business model is built around cloud storage with a travel and accommodation business, or a music streaming platform. Perhaps especially now: at a time when people are still wincing from Snap’s drastic drop — the company is trading more than 30 percent down from its IPO debut — Dropbox presents a challenging picture.

On the plus side, the company has helped bring the concept of cloud storage services to the masses. Riding the wave of mobile devices, lightweight apps and faster internet connections, it has changed the conversation about how many people think about handling their data and offloading it from their devices. Today, Dropbox has more than 500 million users in more than 180 countries.

On the minus side, only around 11 million of those users are paying customers. The company reported around $1.1 billion in revenue in 2017, up from $845 million in 2016 and $604 million in 2015. But it’s unprofitable, reporting a loss of $112 million in 2017.

Again, that’s a large improvement on Dropbox’s losses of $210 million in 2016 and $326 million in 2015. But it does raise more pressing questions: Does Dropbox have a big plan for how to convert more people into paying users? And will its investors have the patience to watch its business model play out?

In that regard, the Salesforce investment and integration, and its timing of being announced alongside the sober IPO range, is a notable vote of confidence in Dropbox. Salesforce has staked its whole business model around cloud services — its ticker may be “CRM,” but its logo is its name inside a cloud — and it’s passed into the pantheon of tech giants with flying colors.

Having Salesforce buy into Dropbox not only shows how it’s bolstering its new partner in this next phase, but I’d argue it also gives Dropbox one potential exit strategy. Salesforce, after all, has been interested in playing more directly in this space for years at this point.

Jan 30, 2018
--

Microsoft’s Azure Event Grid hits general availability

With Event Grid, Microsoft introduced a new Azure service last year that it hopes will become the glue that holds together modern event-driven and distributed applications. Starting today, Event Grid is generally available, with all the SLAs and other promises this entails.

Sep 19, 2017
--

Google Cloud’s Natural Language API gets content classification and more granular sentiment analysis

Google Cloud announced two updates this morning to its Natural Language API. Specifically, users will now have access to content classification and entity sentiment analysis. These features are particularly valuable for brands and media companies. For starters, GCP users will now be able to tag content as corresponding with common topics like health, entertainment and law (cc: Henry)…

Sep 14, 2017
--

AWS now offers a virtual machine with over 4TB of memory

Earlier this year, Amazon’s AWS group said that it was working on bringing instance types with between 4 and 16TB of memory to its users. It’s now starting to fulfill this promise, as the company today launched its largest EC2 machine (in terms of memory size) yet: the x1e.32xlarge instance with a whopping 4.19TB of RAM. Previously, EC2’s largest instance featured just…

Jul 10, 2017
--

Microsoft’s Azure Stack private cloud platform is ready for its first customers

Microsoft’s cloud strategy has long focused on the kind of hybrid cloud deployments that allow enterprises to run workloads in a public cloud like Azure and in their own data centers. Azure Stack, its project for bringing the core Azure services into the corporate data center, is the logical conclusion of this. If developers can target a single platform for both the public and…

May 25, 2017
--

Meta SaaS raises $1.5 million from Mark Cuban and others

Meta SaaS is a product that helps you cancel other products. Like Cardlife and Cleanshelf, Meta SaaS looks at all of your software-as-a-service subscriptions and tells you which ones you use and, more important, which ones you don’t. Founded by Arlo Gilbert and Scott Hertel, the product raised $1.5 million in seed from Mark Cuban with participation from Barracuda Networks, Capital…

May 17, 2017
--

Google is giving a cluster of 1,000 Cloud TPUs to researchers for free

At the end of Google I/O, the company unveiled a new program to give researchers access to its most advanced machine learning technologies for free. The TensorFlow Research Cloud program, as it will be called, will be application-based and open to anyone conducting research, rather than just members of academia. If accepted, researchers will get access to a cluster of 1,000…
