Dec 11, 2018

The Cloud Native Computing Foundation adds etcd to its open-source stable

The Cloud Native Computing Foundation (CNCF), the open-source home of projects like Kubernetes and Vitess, today announced that its Technical Oversight Committee has voted to bring a new project on board. That project is etcd, the distributed key-value store that was first developed by CoreOS (now owned by Red Hat, which in turn will soon be owned by IBM). Red Hat has now contributed this project to the CNCF.

Etcd, which is written in Go, is already a major component of many Kubernetes deployments, where it functions as a source of truth for coordinating clusters and managing the state of the system. Other open-source projects that use etcd include Cloud Foundry, and companies that use it in production include Alibaba, ING, Pinterest, Uber, The New York Times and Nordstrom.
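For those who haven’t worked with it, etcd exposes a simple key-value API that clients use to write and read shared state. Here is a minimal sketch in Go (etcd’s own language) of storing and reading a value with the clientv3 package; the endpoint and key are placeholders for this example, and the import path can vary between etcd releases.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3" // import path as of the 3.3/3.4 releases
)

func main() {
	// Connect to a local etcd member; the endpoint is a placeholder.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Store a piece of cluster state, then read it back.
	if _, err := cli.Put(ctx, "/config/replicas", "3"); err != nil {
		log.Fatal(err)
	}
	resp, err := cli.Get(ctx, "/config/replicas")
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```

Kubernetes builds on exactly this kind of operation (plus watches and leases) to keep every controller in a cluster working from the same state.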

“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF, in today’s announcement. “Etcd is a fantastic addition to our community of projects.”

Today, etcd has well over 450 contributors and nine maintainers from eight different companies. The fact that it ended up at the CNCF is only logical, given that the foundation is also the host of Kubernetes. With this, the CNCF now plays host to 17 projects that fall under its “incubated technologies” umbrella. In addition to etcd, these include OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor. Kubernetes, Prometheus and Envoy have already graduated from this incubation stage.

That’s a lot of projects for one foundation to manage, but the CNCF community is also extraordinarily large. This week alone about 8,000 developers are converging on Seattle for KubeCon/CloudNativeCon, the organization’s biggest event yet, to talk all things containers. It surely helps that the CNCF has managed to bring competitors like AWS, Microsoft, Google, IBM and Oracle under a single roof to collaboratively work on building these new technologies. There is a risk of losing focus here, though, something that happened to the OpenStack project when it went through a similar growth and hype phase. It’ll be interesting to see how the CNCF will manage this as it brings on more projects (with Istio, the increasingly popular service mesh, being a likely candidate for coming over to the CNCF as well).

Dec 11, 2018

InVision, valued at $1.9 billion, picks up $115 million Series F

“The screen is becoming the most important place in the world,” says InVision CEO and founder Clark Valberg. In fact, it’s hard to get through a conversation with him without hearing it. And, considering that his company has grown to $100 million in annual recurring revenue, he has reason to believe his own affirmation.

InVision, the startup looking to be the Salesforce of design, has officially achieved unicorn status with the close of a $115 million Series F round, bringing the company’s total funding to $350 million. This deal values InVision at $1.9 billion, which is nearly double its valuation as of mid-2017 on the heels of its $100 million Series E financing.

Spark Capital led the round, with participation from Goldman Sachs, as well as existing investors Battery Ventures, ICONIQ Capital, Tiger Global Management, FirstMark and Geodesic Capital. Atlassian also participated in the round. Earlier this year, Atlassian and InVision built out much deeper integrations, allowing Jira, Confluence and Trello users to instantly collaborate via InVision.

As part of the deal, Spark Capital’s Megan Quinn will be joining the board alongside existing board members and observers Amish Jani, Lee Fixel, Matthew Jacobson, Mike Kourey, Neeraj Agrawal, Vas Natarajan and Daniel Wolfson.

InVision started in 2011 as a simple prototyping tool. It let designers mock up an experience without asking the engineering/dev team to actually build it, then share it with the engineering, product, marketing and executive teams for collaboration and/or approval.

Over the years, the company has stretched its efforts both up and downstream in the process, building out a full collaboration suite called InVision Cloud, so that every member of the organization can be involved in the design process; Studio, a design platform meant to take on the likes of Adobe and Sketch; and InVision Design System Manager, where design teams can manage their assets and best practices from one place.

But perhaps more impressive than InVision’s ability to build design products for designers is its ability to attract users that aren’t designers.

“Originally, I don’t think we appreciated how much the freemium model acted as a flywheel internally within an organization,” said Quinn. “Those designers weren’t just inviting designers from their own team or other teams, but PMs and Marketing and Customer Service and executives to collaborate and approve the designs. From the outside, InVision looks like a design company. But really, they start with the designer as a core customer and spread virally within an organization to serve a multitude.”

InVision has simply dominated prototyping and collaboration, today announcing it has surpassed 5 million users. What’s more, InVision has a wide variety of customers. The startup has a long and impressive list of digital-first customers — including Netflix, Uber, Airbnb and Twitter — but also serves 97 percent of the Fortune 100, with customers like Adidas, General Electric, NASA, IKEA, Starbucks and Toyota.

Part of that can be attributed to the quality of the products, but the fundamental shift to digital (as predicted by Valberg) is most certainly under way. Whether brands like it or not, customers are interacting with them more and more from behind a screen, and digital customer experience is becoming more and more important to all companies.

In fact, a McKinsey study showed that companies in the top quartile of McKinsey Design Index scores outperformed their counterparts in both revenues and total returns to shareholders by as much as a factor of two.

But as with any transition, some folks are averse to change. Valberg identifies industry education and evangelism as two big challenges for InVision.

“Organizations are not quick to change on things like design, which is why we’ve built out a Design Transformation Team,” said Valberg. “The team goes in and gets hands on with brands to help them with new practices and to achieve design maturity within the organization.”

With a fresh $115 million and 5 million users, InVision has just about everything it needs to step into a new tier of competition. Even amongst behemoths like Adobe, which pulled in $2.29 billion in revenue in Q3 alone, InVision has provided products that can both complement and compete.

But Quinn believes the future of InVision rests on execution.

“As with most companies, the biggest challenge will be continued excellence in execution,” said Quinn. “InVision has all the right tail winds with the right team, a great product and excellent customers. It’s all about building and executing ahead of where the pack is going.”

Dec 10, 2018

Kong launches its fully managed API platform

API platform Kong, which you may remember under its previous name of Mashape, is launching its new Kong Cloud service today. Kong Cloud is the company’s fully managed platform for securing, connecting and orchestrating APIs. Enterprises can deploy it to virtually any major cloud platform, including AWS, Azure and Google Cloud, and Kong will handle all of the daily drudgery of managing it for them.

At the core of Kong Cloud is Kong, the company’s open source microservices gateway. The company already offers an enterprise version of Kong under the Kong Enterprise brand, but it’s up to enterprises to manage this version by themselves.
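To give a sense of what that gateway does, the open-source version of Kong is driven by a REST Admin API: you register an upstream service and then attach routes that expose it to consumers. Below is a minimal sketch in Go against a local Kong node; the Admin API address, service name and upstream URL are placeholders for this example.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/url"
)

// postForm sends form-encoded data to the Kong Admin API and prints the status.
func postForm(endpoint string, data url.Values) {
	resp, err := http.PostForm(endpoint, data)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(endpoint, "->", resp.Status)
}

func main() {
	// Assumes a Kong node with its Admin API listening on localhost:8001.
	admin := "http://localhost:8001"

	// Register an upstream service behind the gateway.
	postForm(admin+"/services", url.Values{
		"name": {"orders"},
		"url":  {"http://orders.internal:8080"},
	})

	// Expose it to API consumers on the /orders path.
	postForm(admin+"/services/orders/routes", url.Values{
		"paths[]": {"/orders"},
	})
}
```

Kong Cloud’s pitch is that the company runs and monitors the gateway nodes behind this kind of configuration for you, wherever they happen to be deployed.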

“Customers running Kong Enterprise on-prem and self-managed are often running it multi-cloud. They are running it from AWS, to Azure, Google Cloud, Pivotal Cloud Foundry or bare metal. It’s all over the place,” Kong co-founder, president and CEO Augusto Marietti told me. “But not all of them have massive engineering organizations, so Kong multi-cloud is our managed version of Kong as a service that can run on any cloud.”

With Kong Cloud, the company monitors and manages the service, giving enterprises an end-to-end API platform and developer portal. The company handles updates and all the other operational tasks. In terms of the overall functionality (think governance, security features etc.), this is essentially Kong Enterprise. Indeed, Marietti stressed that the two are meant to be one-to-one compatible, in part because he expects that some companies will use both versions, depending on their teams’ needs.

Marietti told me that Kong now has more than 85 employees and more than 100 enterprise customers. These include the likes of Zillow, Soulcycle and Expedia. Year-over-year, the company tells me, its bookings have grown 9x and the Kong open-source tool has now been downloaded more than 54 million times.

The company rebranded as Kong in October 2017, in part to signify that its ongoing focus would be on microservices in the enterprise and the Kong tool, which it open sourced in 2015. Ahead of its rebranding exercise, Mashape/Kong sold off its API marketplace to RapidAPI. The marketplace was the company’s first product — and Kong was in part developed to support it — but in the end, the company decided that its focus was going to be on Kong itself. That move seems to be paying off now, as enterprises are moving to adopt microservices and often need partners to do so.

Dec 7, 2018

Pivotal announces new serverless framework

Pivotal has always been about making open-source tools for enterprise developers, but surprisingly, up until now, the arsenal has lacked a serverless component. That changed today with the alpha launch of Pivotal Function Service.

“Pivotal Function Service is a Kubernetes-based, multi-cloud function service. It’s part of the broader Pivotal vision of offering you a single platform for all your workloads on any cloud,” the company wrote in a blog post announcing the new service.

What’s interesting about Pivotal’s flavor of serverless, besides the fact that it’s based on open source, is that it has been designed to work both on-prem and in the cloud in a cloud native fashion, hence the Kubernetes-based aspect of it. This is unusual to say the least.

The idea up until now has been that the large-scale cloud providers like Amazon, Google and Microsoft could dial up whatever infrastructure your functions require, then dial them down when you’re finished without you ever having to think about the underlying infrastructure. The cloud provider deals with whatever compute, storage and memory you need to run the function, and no more.

Pivotal wants to take that same idea and make it available in the cloud across any cloud service. It also wants to make it available on-prem, which may seem curious at first, but Pivotal’s Onsi Fakhouri says customers want those same abilities both on-prem and in the cloud. “One of the key values that you often hear about serverless is that it will run down to zero and there is less utilization, but at the same time there are customers who want to explore and embrace the serverless programming paradigm on-prem,” Fakhouri said. Of course, then it is up to IT to ensure that there are sufficient resources to meet the demands of the serverless programs.

The new package includes several key components for developers: an environment for building, deploying and managing your functions; a native eventing capability that provides a way to build rich event triggers to call whatever functionality you require; and the ability to do all of this within a Kubernetes-based environment. This is particularly important as companies embrace hybrid use cases and need to manage events across on-prem and cloud environments in a seamless way.
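Pivotal hasn’t published detailed developer documentation alongside the alpha, but functions on platforms like this tend to be small handlers that the platform packages, deploys and scales for you. Here is a rough sketch in Go, assuming a Knative-style contract in which the platform injects the listening port; the handler and route are illustrative placeholders, not PFS’s actual API.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// handler is the only piece of business logic the developer writes: it
// receives an HTTP-delivered event and returns a response.
func handler(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "world"
	}
	fmt.Fprintf(w, "hello, %s\n", name)
}

func main() {
	// Knative-style platforms tell the container where to listen via $PORT.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Everything else (building the container, wiring the event trigger, scaling to zero when idle) is the platform’s job, which is the point of the product.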

One of the advantages of Pivotal’s approach is that it can work on any cloud as an open product. This is in contrast to cloud providers like Amazon, Google and Microsoft, which provide similar services that run exclusively on their own clouds. Pivotal is not the first to build an open-source Function as a Service, but it is attempting to package it in a way that makes it easier to use.

Serverless doesn’t actually mean there are no underlying servers. Instead, it means that developers don’t have to point to any servers because the cloud provider takes care of whatever infrastructure is required. In an on-prem scenario, IT has to make those resources available.

Dec 6, 2018

Looker snags $103 million investment on $1.6 billion valuation

Looker has been helping customers visualize and understand their data for seven years, and today it got a big reward, a $103 million Series E investment on a $1.6 billion valuation.

The round was led by Premji Invest, with new investment from Cross Creek Advisors and participation from the company’s existing investors. With today’s investment, Looker has raised $280.5 million, according to the company.

In spite of the large valuation, Looker CEO Frank Bien really wasn’t in the mood to focus on that particular number, which he said was arbitrary, based on the economic conditions at the time of the funding round. He said having an executive team old enough to remember the dot-com bubble from the late 1990s and the crash of 2008 keeps them grounded when it comes to those kinds of figures.

Instead, he preferred to concentrate on other numbers. He reported that the company has 1,600 customers now and just crossed the $100 million revenue run rate, a significant milestone for any enterprise SaaS company. What’s more, Bien reports revenue is still growing 70 percent year over year, so there’s plenty of room to keep this going.

He said he took such a large round because there was interest and he believed that it was prudent to take the investment as they move deeper into enterprise markets. “To grow effectively into enterprise customers, you have to build more product, and you have to hire sales teams that take longer to activate. So you look to grow into that, and that’s what we’re going to use this financing for,” Bien told TechCrunch.

He said it’s highly likely that this is the last private fundraising the company will undertake as it heads toward an IPO at some point in the future. “We would absolutely view this as our last round unless something drastic changed,” Bien said.

For now, he’s looking to build a mature company that is ready for the public markets whenever the time is right. That involves building internal processes of a public company even if they’re not there yet. “You create that maturity either way, and I think that’s what we’re doing. So when those markets look okay, you could look at that as another funding source,” he explained.

The company currently has around 600 employees. Bien indicated that they added 200 this year alone and expect to add additional headcount in 2019 as the business continues to grow and they can take advantage of this substantial cash infusion.

Dec 5, 2018

Salesforce wants to deliver more automated field service using IoT data

Salesforce has been talking about the Internet of Things for some time as a way to empower field service workers. Today, the company announced a new Field Service Lightning component designed to deliver automated IoT data to service technicians in the field on their mobile devices.

Once you connect sensors in the field to Service Cloud, you can make this information available in an automated fashion to human customer service agents and pull in other data about the customer from Salesforce’s CRM system to give the CSR a more complete picture of the customer.

“Drawing on IoT signals surfaced in the Service Cloud console, agents can gauge whether device failure is imminent, quickly determine the source of the problem (often before the customer is even aware a problem exists) and dispatch the right mobile worker with the right skill set,” Salesforce’s SVP and GM for Salesforce Field Service Lightning Paolo Bergamo wrote in a blog post introducing the new feature.

The field service industry has been talking for years about using IoT data from the field to deliver more proactive service and automate the customer service and repair process. That’s precisely what this new feature is designed to do. Let’s say you have a “smart home” with a heating and cooling system that can transmit data to the company that installed your equipment. With a system like this in place, the sensors could tell your HVAC dealer that a part is ready to break down and automatically start a repair process (that would presumably include calling the customer to tell them about it). When a CSR determines a repair visit is required, the repair technician would receive all the details on their smartphone.
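Salesforce handles that orchestration inside Service Cloud, but the underlying pattern is simple: evaluate incoming telemetry and open a work order when a reading looks like trouble. Here is a rough sketch of the idea in Go, with a made-up telemetry payload, threshold and work-order endpoint standing in for whatever the platform actually exposes.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Reading is a made-up telemetry payload from an HVAC unit.
type Reading struct {
	DeviceID    string  `json:"device_id"`
	CompressorC float64 `json:"compressor_temp_c"`
}

// needsService stands in for the failure-prediction logic the article describes.
func needsService(r Reading) bool {
	return r.CompressorC > 95.0 // hypothetical threshold
}

func main() {
	r := Reading{DeviceID: "hvac-1742", CompressorC: 97.3}
	if !needsService(r) {
		return
	}

	// Open a work order via a hypothetical field-service endpoint; in the real
	// product this dispatch happens inside Service Cloud, not through this URL.
	body, _ := json.Marshal(map[string]string{
		"device_id": r.DeviceID,
		"reason":    "compressor temperature trending toward failure",
	})
	resp, err := http.Post("https://example.com/api/work-orders", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("work order request:", resp.Status)
}
```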

Customer Service Console view. GIF: Salesforce

It also could provide a smoother experience, because the repair technician can set out with the right equipment and parts for the job and a better understanding of what needs to be done before arriving at the customer location. This should theoretically lead to more efficient service calls.

All of this is in line with a vision the field service industry has been talking about for some time: that you could sell a subscription to a device like an air conditioning system instead of the device itself. This would mean the dealer would be responsible for keeping it up and running, and having access to data like this could help bring that vision closer to reality.

In reality, most companies are probably not ready to implement a system like this, and most equipment in the field has not been fitted with sensors to deliver this information to the Service Cloud. Still, companies like Salesforce, ServiceNow and ServiceMax (owned by GE) want to release products like this for early adopters, and to have something in place as more companies look to deploy smarter systems in the field.

Dec 4, 2018

Fivetran announces $15M Series A to build automated data pipelines

Fivetran, a startup that builds automated data pipelines between data repositories and cloud data warehouses and analytics tools, announced a $15 million Series A investment led by Matrix Partners.

Fivetran helps move data from source repositories like Salesforce and NetSuite to data warehouses like Snowflake or analytics tools like Looker. Company CEO and co-founder George Fraser says the automation is the key differentiator here between his company and competitors like Informatica and SnapLogic.

“What makes Fivetran different is that it’s an automated data pipeline to basically connect all your sources. You can access your data warehouse, and all of the data just appears and gets kept updated automatically,” Fraser explained. While he acknowledges that there is a great deal of complexity behind the scenes to drive that automation, he stresses that his company is hiding that complexity from the customer.

The company launched out of Y Combinator in 2012, and other than $4 million in seed funding along the way, it has relied solely on revenue up until now. That’s a rather refreshing approach to running an enterprise startup, which typically requires piles of cash to build out sales and marketing organizations to compete with the big guys they are trying to unseat.

One of the key reasons they’ve been able to take this approach has been the company’s partner strategy. Having the ability to get data into another company’s solution with a minimum of fuss and expense has attracted data-hungry applications. In addition to the previously mentioned Snowflake and Looker, the company counts Google BigQuery, Microsoft Azure, Amazon Redshift, Tableau, Periscope Data, Salesforce, NetSuite and PostgreSQL as partners.

Ilya Sukhar, general partner at Matrix Partners, who will be joining the Fivetran board under the terms of the deal, sees a lot of potential here. “We’ve gone from companies talking about the move to the cloud to preparing to execute their plans, and the most sophisticated are making Fivetran, along with cloud data warehouses and modern analysis tools, the backbone of their analytical infrastructure,” Sukhar said in a statement.

They currently have 100 employees spread out across four offices in Oakland, Denver, Bangalore and Dublin. They boast 500 customers using their product, including Square, WeWork, Vice Media and Lime Scooters, among others.

Dec 2, 2018

AWS wants to rule the world

AWS, once a nice little side hustle for Amazon’s e-commerce business, has grown over the years into a behemoth that’s on a $27 billion run rate, one that’s still growing at around 45 percent a year. That’s a highly successful business by any measure, but as I listened to AWS executives last week at their AWS re:Invent conference in Las Vegas, I didn’t hear a group that was content to sit still and let the growth speak for itself. Instead, I heard one that wants to dominate every area of enterprise computing.

Whether it was hardware like the new Inferentia chip and Outposts, the new on-prem servers, or blockchain and a base station service for satellites, if AWS saw an opportunity, they were not ceding an inch to anyone.

Last year, AWS announced an astonishing 1,400 new features, and word was that they are on pace to exceed that this year. They get a lot of credit for not resting on their laurels and continuing to innovate like a much smaller company, even as they own gobs of market share.

The feature inflation probably can’t go on forever, but for now at least they show no signs of slowing down, as the announcements came at a furious pace once again. While they will tell you that every decision they make is about meeting customer needs, it’s clear that some of these announcements were also about answering competitive pressure.

Going after competitors harder

In the past, AWS kept criticism of competitors to a minimum, maybe giving a little jab to Oracle, but this year they seemed to ratchet it up. In their keynotes, AWS CEO Andy Jassy and Amazon CTO Werner Vogels continually flogged Oracle, a competitor in the database market, but hardly a major threat as a cloud company right now.

They went right for Oracle’s market, though, with a new on-prem system called Outposts, which allows AWS customers to operate on-prem and in the cloud using a single AWS control panel, or one from VMware if customers prefer. That is the kind of cloud vision that Larry Ellison might have put forth, but Jassy didn’t necessarily see it as going after Oracle or anyone else. “I don’t see Outposts as a shot across the bow of anyone. If you look at what we are doing, it’s very much informed by customers,” he told reporters at a press conference last week.

AWS CEO Andy Jassy at a press conference at AWS re:Invent last week.

Yet AWS didn’t reserve its criticism just for Oracle. It also took aim at Microsoft, taking jabs at Microsoft SQL Server, and also announcing Amazon FSx for Windows File Server, a tool specifically designed to move Microsoft files to the AWS cloud.

Google wasn’t spared either: by launching Inferentia and Elastic Inference, AWS put Google on notice that it wasn’t going to yield the AI market to Google’s TPU infrastructure. All of these tools and much more were about more than answering customer demand; they were about putting the competition on notice in every aspect of enterprise computing.

Upward growth trajectory

The cloud market is continuing to grow at a dramatic pace, and as market leader, AWS has been able to take advantage of its market dominance to this point. Jassy, echoing Google’s Diane Greene and Oracle’s Larry Ellison, says the industry as a whole is still really early in terms of cloud adoption, which means there is still plenty of marketshare left to capture.

“I think we’re just in the early stages of enterprise and public sector adoption in the US. Outside the US I would say we are 12-36 months behind. So there are a lot of mainstream enterprises that are just now starting to plan their approach to the cloud,” Jassy said.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy says that AWS has been using its market position to keep expanding into different areas. “AWS has the scale right now to do many things others cannot, particularly lesser players like Google Cloud Platform and Oracle Cloud. They are trying to make a point with the thousands of new products and features they bring out. This serves as a disincentive longer-term for other players, and I believe will result in a shakeout,” he told TechCrunch.

As for the frenetic pace of innovation, Moorhead believes it can’t go on forever. “To me, the question is, when do we reach a point where 95% of the needs are met, and the innovation rate isn’t required. Every market, literally every market, reaches a point where this happens, so it’s not a matter of if but when,” he said.

Certainly, areas like the AWS Ground Station announcement showed that AWS was willing to expand beyond the conventional confines of enterprise computing and into outer space to help companies process satellite data. This ability to think beyond traditional uses of cloud computing resources shows a level of creativity that suggests there could be other untapped markets for AWS that we haven’t yet imagined.

As AWS moves into more areas of the enterprise computing stack, whether on premises or in the cloud, they are showing their desire to dominate every aspect of the enterprise computing world. Last week they demonstrated that there is no area that they are willing to surrender to anyone.


Nov 29, 2018

New AWS tool helps customers understand best cloud practices

Since 2015, AWS has had a team of solution architects working with customers to make sure they are using AWS services in a way that meets best practices around a set of defined criteria. Today, the company announced a new Well-Architected Tool that helps customers do this themselves in an automated way, without the help of a human consultant.

As Amazon CTO Werner Vogels said in his keynote address at AWS re:Invent in Las Vegas, it’s hard to scale a human team inside the company to meet the needs of thousands of customers, especially when so many want to be sure they are complying with these best practices. He indicated that they even brought on a network of certified partners to help, but it still has not been enough to meet demand.

In typical AWS fashion, they decided to create a service to help customers measure how well they are doing in terms of operational excellence, security, reliability, cost optimization and performance efficiency. Customers can run this tool against the AWS services they are using and get a full report of how they measure up against these five factors.

“I think of it as a way to make sure that you are using the cloud right, and that you are using it well,” Jeff Barr wrote in a blog post introducing the new service.

Instead of working with a human to analyze your systems, you answer a series of questions and the tool generates a report based on those answers. When the process is complete, you can download a PDF report with all the recommendations for your particular situation.


While it’s doubtful that such an approach can be as comprehensive as a conversation between client and consultant, it is a starting point to at least get you on the road to thinking about such things, and as a free service, you have little to lose by at least trying the tool and seeing what it tells you.


Nov 29, 2018

AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code for your event triggers, and AWS deals with whatever compute, memory and storage you need to make it work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make Lambda more developer-friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all that for you, serving whatever resources you need to run your event and no more. It means you no longer have to worry about coding for all your infrastructure and you only pay for the computing you need at any given moment to make the application work.
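Concretely, the only code a developer writes is the handler itself. Here is a minimal sketch using the aws-lambda-go SDK (Go being one of the languages Lambda already supported before today’s additions); the event type is a placeholder, since real functions are usually triggered by sources like S3, API Gateway or SQS.

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// OrderEvent is a placeholder event payload for this sketch.
type OrderEvent struct {
	OrderID string `json:"order_id"`
}

// handler contains the only code you write; AWS provisions and scales
// everything needed to run it when an event arrives.
func handler(ctx context.Context, e OrderEvent) (string, error) {
	return fmt.Sprintf("processed order %s", e.OrderID), nil
}

func main() {
	// lambda.Start hands control to the Lambda runtime, which invokes the
	// handler once per event.
	lambda.Start(handler)
}
```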

The way AWS works is that it tends to release something, then builds more functionality on top of a base service as it sees increasing requirements as customers use it. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate about tools and everyone has their own idea of what tools they bring to the task every day.

For starters, they decided to please the language folks by introducing support for new languages. Those developers who use Ruby can now use Ruby Support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced C++ Lambda Runtime. If neither of those match your programming language tastes, AWS opened it up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”

AWS didn’t want to stop with languages though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean that all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.

As Lambda matures, developer requirements grow and these announcements and others are part of trying to meet those needs.

