Nov
06
2018
--

VMware acquires Heptio, the startup founded by 2 co-founders of Kubernetes

During its big customer event in Europe, VMware announced another acquisition to step up its game in helping enterprises build and run containerised, Kubernetes-based architectures: it has acquired Heptio, a startup out of Seattle that was co-founded by Joe Beda and Craig McLuckie, who were two of the three people who co-created Kubernetes back at Google in 2014 (it has since been open sourced).

Beda and McLuckie and their team will all be joining VMware as part of the transaction.

Terms of the deal are not being disclosed — VMware said in a release that they are not material to the company — but as a point of reference, when Heptio last raised money — a $25 million Series B in 2017, with investors including Lightspeed, Accel and Madrona — it was valued at $117 million post-money, according to data from PitchBook.

Given the pedigree of Heptio’s founders, this is a signal of the big bet that VMware is taking on Kubernetes, and of its belief that Kubernetes will become an increasing cornerstone in how enterprises run their businesses. The larger company already works with more than 500,000 customers and 75,000 partners globally. It’s not clear how many customers Heptio worked with, but they included large, tech-forward businesses like Yahoo Japan.

It’s also another endorsement of the ongoing rise of open source and its role in cloud architectures, a paradigm that got its biggest boost at the end of October with IBM’s acquisition of Red Hat, one of the biggest tech acquisitions of all time at $34 billion.

Heptio provides professional services for enterprises that are adopting or already use Kubernetes — training, support and open-source projects for managing specific aspects of Kubernetes and related container clusters. The deal is about VMware using that expertise to expand the business funnel and margins for Kubernetes within its wider cloud, on-premises and hybrid storage and computing services.

“Kubernetes is emerging as an open framework for multi-cloud infrastructure that enables enterprise organizations to run modern applications,” said Paul Fazzone, senior vice president and general manager, Cloud Native Apps Business Unit, VMware, in a statement. “Heptio products and services will reinforce and extend VMware’s efforts with PKS to establish Kubernetes as the de facto standard for infrastructure across clouds upon closing. We are thrilled that the Heptio team led by Craig and Joe will be joining VMware to help us guide customers as they move to a multi-cloud world.”

VMware and its Pivotal business already offer Kubernetes-related services by way of PKS, which lets organizations run cloud-agnostic apps. Heptio will become a part of that wider portfolio.

“The team at Heptio has been focused on Kubernetes, creating products that make it easier to manage multiple clusters across multiple clouds,” said Craig McLuckie, CEO and co-founder of Heptio. “And now we will be tapping into VMware’s cloud native resources and proven ability to execute, amplifying our impact. VMware’s interest in Heptio is a recognition that there is so much innovation happening in open source. We are jointly committed to contribute even more to the community—resources, ideas and support.”

VMware has made some 33 acquisitions overall, according to Crunchbase, but this appears to have been the first specifically to boost its position in Kubernetes.

The deal is expected to close by fiscal Q4 2019, VMware said.

Oct
28
2018
--

Forget Watson, the Red Hat acquisition may be the thing that saves IBM

With its latest $34 billion acquisition of Red Hat, IBM may have found something more elementary than “Watson” to save its flagging business.

Though the acquisition of Red Hat is by no means a guaranteed victory for the Armonk, N.Y.-based computing company that has had more downs than ups over the past five years, it seems to be a better bet for “Big Blue” than an artificial intelligence program that was always more hype than reality.

Indeed, commentators are already noting that this may be a case where IBM finally hangs up the Watson hat and returns to the enterprise software and services business that has always been its core competency (albeit one that has been weighted far more heavily on consulting services — to the detriment of the company’s business).

Watson, the business division focused on artificial intelligence whose public claims were always more marketing than market reality, has not performed as well as IBM had hoped, and investors were losing their patience.

Critics — including analysts at the investment bank Jefferies (as early as one year ago) — were skeptical of Watson’s ability to deliver IBM from its business woes.

As we wrote at the time:

Jefferies pulls from an audit of a partnership between IBM Watson and MD Anderson as a case study for IBM’s broader problems scaling Watson. MD Anderson cut its ties with IBM after wasting $60 million on a Watson project that was ultimately deemed “not ready for human investigational or clinical use.”

The MD Anderson nightmare doesn’t stand on its own. I regularly hear from startup founders in the AI space that their own financial services and biotech clients have had similar experiences working with IBM.

The narrative isn’t the product of any single malfunction, but rather the result of overhyped marketing, deficiencies in operating with deep learning and GPUs and intensive data preparation demands.

That’s not the only trouble IBM has had with Watson’s healthcare results. Earlier this year, the online health news site Stat reported that Watson was giving clinicians recommendations for cancer treatments that were “unsafe and incorrect” — based on the training data it had received from the company’s own engineers and doctors at Sloan-Kettering who were working with the technology.

All of these woes were reflected in the company’s latest earnings call, where it reported falling revenues primarily from the Cognitive Solutions business, which includes Watson’s artificial intelligence and supercomputing services. Though IBM’s chief financial officer pointed to “mid-to-high” single-digit growth from Watson’s health business in the quarter, the transaction processing software business fell by 8% and the company’s suite of hosted software services is basically an afterthought for businesses gravitating to Microsoft, Alphabet, and Amazon for cloud services.

To be sure, Watson is only one of the segments that IBM had been hoping to tap for its future growth; and while it was a huge investment area for the company, the company always had its eyes partly fixed on the cloud computing environment as it looked for areas of growth.

It’s this area of cloud computing where IBM hopes that Red Hat can help it gain ground.

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” said Ginni Rometty, IBM Chairman, President and Chief Executive Officer, in a statement announcing the acquisition. “IBM will become the world’s number-one hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.”

The acquisition also puts an incredible amount of marketing power behind Red Hat’s various open source services business — giving all of those IBM project managers and consultants new projects to pitch and maybe juicing open source software adoption a bit more aggressively in the enterprise.

As Red Hat chief executive Jim Whitehurst told TheStreet in September, “The big secular driver of Linux is that big data workloads run on Linux. AI workloads run on Linux. DevOps and those platforms, almost exclusively Linux,” he said. “So much of the net new workloads that are being built have an affinity for Linux.”

Oct
22
2018
--

Pulumi raises $15M for its infrastructure as code platform

Pulumi, a Seattle-based startup that lets developers specify and manage their cloud infrastructure using the programming language they already know, today announced that it has raised a $15 million Series A funding round led by Madrona Venture Group. Tola Capital also participated in this round and Tola managing director Sheila Gulati will get a seat on the Pulumi board, where she’ll join former Microsoft exec and Madrona managing director S. Somasegar.
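
To make that pitch concrete, here is a minimal sketch of what a Pulumi program looks like — written in Python, one of the languages Pulumi supports, with a made-up resource name:

```python
import pulumi
import pulumi_aws as aws

# Infrastructure is declared as ordinary objects in a general-purpose
# language; running `pulumi up` diffs this program against the stack's
# current cloud state and applies only the changes.
bucket = aws.s3.Bucket("app-assets")

# Export the generated name so other stacks and tools can consume it.
pulumi.export("bucket_name", bucket.id)
```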

In addition to announcing its raise, the company also today launched its commercial platform, which builds upon Pulumi’s open-source work.

“Since launch, we’ve had a lot of inbound interest, both on the community side — so you’re seeing a lot of open source contributions, and they’re really impactful contributions, including, for example, community-led support for VMware and OpenStack,” Pulumi co-founder and CEO Eric Rudder told me. “So we’re actually seeing a lot of vibrancy in the open-source community. And at the same time, we have a lot of inbound interest on the commercial side of things. That is, teams wanting to operationalize Pulumi and put it into production and wanting to purchase the product.”

So to meet that opportunity, the team decided to raise a new round to scale out both its team and product. And now, that product includes a commercial offering of Pulumi with the company’s new ‘team edition.’ This new enterprise version includes support for unlimited users, integrations with third-party tools like GitHub and Slack, role-based access controls, onboarding and 12×5 support. Like the free, single-user community edition, the team edition is delivered as a SaaS product and supports deployments to all of the major public and private cloud platforms.

“We’re all seeing the same things — the cloud is a foregone conclusion,” Tola’s Gulati told me when I asked her why she was investing in Pulumi. “Enterprises have a lot of complexity as they come over to the cloud. And so dealing with VMs, containers and serverless is a reality for these enterprises. And the ability to do that in a way that there’s a single toolset, letting developers use real programming languages, letting them exist where they have skills today, but then allows them to bring the best of cloud into their organization. Frankly, Pulumi really has thought through the existing complexity, the developer reality, the IT and developer relationship from both a runtime and deployment perspective. And they are the best that we’ve seen.”

Pulumi will, of course, continue to develop its open source tools, too. Indeed, the company noted that it would invest heavily in building out the community around its tools. The team told me that it is already seeing a lot of momentum, but with the new funding it’ll redouble its efforts.

With the new funding, the company will also work on making the onboarding process much easier, up to the point where it will become a full self-serve experience. But that doesn’t work for most large organizations, so Pulumi will also invest heavily in its pre- and post-sales organization. Right now, like most companies at this stage, the team is mostly composed of engineers.

Oct
10
2018
--

Google expands its identity management portfolio for businesses and developers

Over the course of the last year, Google has launched a number of services that bring to other companies the same BeyondCorp model it uses internally for managing access to a company’s apps and data without a VPN. Google’s flagship product for this is Cloud Identity, which is essentially Google’s BeyondCorp, but packaged for other businesses.

Today, at its Cloud Next event in London, it’s expanding this portfolio of Cloud Identity services with three new products and features that enable developers to adopt this way of thinking about identity and access for their own apps and that make it easier for enterprises to adopt Cloud Identity and make it work with their existing solutions.

The highlight of today’s announcements, though, is Cloud Identity for Customers and Partners, which is now in beta. While Cloud Identity is very much meant for employees at a larger company, this new product allows developers to build into their own applications the same kind of identity and access management services.

“Cloud Identity is how we protect our employees and you protect your workforce,” Karthik Lakshminarayanan, Google’s product management director for Cloud Identity, said in a press briefing ahead of the announcement. “But what we’re increasingly finding is that developers are building applications and are also having to deal with identity and access management. So if you’re building an application, you might be thinking about accepting usernames and passwords, or you might be thinking about accepting social media as an authentication mechanism.”

This new service allows developers to build in multiple ways of authenticating the user, including through email and password, Twitter, Facebook, their phones, SAML, OIDC and others. Google then handles all of that authentication work. Google will offer both client-side (web, iOS and Android) and server-side SDKs (with support for Node.js, Java, Python and other languages).

“They no longer have to worry about getting hacked and their passwords and their user credentials getting compromised,” added Lakshminarayanan. “They can now leave that to Google and the exact same scale that we have, the security that we have, the reliability that we have — that we are using to protect employees in the cloud — can now be used to protect that developer’s applications.”
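
Google didn’t publish the full SDK surface alongside the announcement, but Cloud Identity for Customers and Partners is closely related to Firebase Authentication, so a plausible server-side sketch — using the Firebase Admin SDK, with hypothetical names — looks something like this:

```python
import firebase_admin
from firebase_admin import auth

# Initialize with application-default credentials for the project in
# which the identity service is enabled.
firebase_admin.initialize_app()

def authenticated_user(id_token: str) -> str:
    # The client SDK signs the user in (email/password, SAML, social
    # providers, etc.) and attaches an ID token to each request; the
    # backend just verifies the token instead of storing passwords.
    decoded = auth.verify_id_token(id_token)
    return decoded["uid"]
```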

In addition to Cloud Identity for Customers and Partners, Google is also launching a new feature for the existing Cloud Identity service, which brings support for traditional LDAP-based applications and IT services like VPNs to Cloud Identity. This feature is, in many ways, an acknowledgment that most enterprises can’t simply turn on a new security paradigm like BeyondCorp/Cloud Identity. With support for secure LDAP, these companies can still make it easy for their employees to connect to these legacy applications while still using Cloud Identity.

“As much as Google loves the cloud, a mantra that Google has is ‘let’s meet customers where they are.’ We know that customers are embracing the cloud, but we also know that they have a massive, massive footprint of traditional applications,” Lakshminarayanan explained. He noted that most enterprises today run two solutions: one that provides access to their on-premise applications and another that provides the same services for their cloud applications. Cloud Identity now natively supports access to many of these legacy applications, including Aruba Networks (HPE), Itopia, JAMF, Jenkins (Cloudbees), OpenVPN, Papercut, pfSense (Netgate), Puppet, Sophos and Splunk. Indeed, as Google notes, virtually any application that supports LDAP over SSL can work with this new service.
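
Because the service speaks standard LDAP over SSL, any stock LDAP client should work. As a rough illustration — the endpoint, certificate paths and directory layout here are assumptions — a legacy app could run its user lookups with Python’s ldap3 library:

```python
import ssl
from ldap3 import Server, Connection, Tls

# Client-certificate TLS, as secure LDAP deployments typically require.
tls = Tls(
    local_certificate_file="ldap-client.crt",  # hypothetical cert paths
    local_private_key_file="ldap-client.key",
    validate=ssl.CERT_REQUIRED,
)
server = Server("ldap.google.com", port=636, use_ssl=True, tls=tls)

with Connection(server, auto_bind=True) as conn:
    # Query users exactly as the app did against its old directory.
    conn.search("dc=example,dc=com", "(uid=jdoe)", attributes=["cn", "mail"])
    print(conn.entries)
```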

Finally, the third new feature Google is launching today is context-aware access for those enterprises that already use its Cloud Identity-Aware Proxy (yes, those names are all a mouthful). The idea here is to help enterprises provide access to cloud resources based on the identity of the user and the context of the request — all without using a VPN. That’s pretty much the promise of BeyondCorp in a nutshell, and this implementation, which is now in beta, allows businesses to manage access based on the user’s identity and a device’s location and its security status, for example. Using this new service, IT managers could restrict access to one of their apps to users in a specific country, for example.

Oct
10
2018
--

Google Cloud expands its networking feature with Cloud NAT

It’s a busy week for news from Google Cloud, which is hosting its Next event in London. Today, the company used the event to launch a number of new networking features. The marquee launch today is Cloud NAT, a new service that makes it easier for developers to build cloud-based services that don’t have public IP addresses and can only be accessed from applications within a company’s virtual private cloud.

As Google notes, building this kind of setup was already possible, but it wasn’t easy. Obviously, this is a pretty common use case, though, so with Cloud NAT, Google now offers a fully managed service that handles all the network address translation (hence the NAT) and provides access to these private instances behind the Cloud NAT gateway.

Cloud NAT supports Google Compute Engine virtual machines as well as Google Kubernetes Engine containers, and offers both a manual mode where developers can specify their IPs and an automatic mode where IPs are automatically allocated.
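
For a sense of the moving parts: a NAT config attaches to a Cloud Router and picks one of those two allocation modes. Here is a sketch using the pulumi_gcp provider (fitting, given the Pulumi story above) — assuming its resource fields mirror the underlying API, with illustrative names:

```python
import pulumi_gcp as gcp

# Cloud NAT hangs off a Cloud Router in a given region.
router = gcp.compute.Router("nat-router", network="default", region="us-central1")

nat = gcp.compute.RouterNat(
    "nat-config",
    router=router.name,
    region="us-central1",
    # Automatic mode; "MANUAL_ONLY" plus a list of reserved IPs pins addresses.
    nat_ip_allocate_option="AUTO_ONLY",
    source_subnetwork_ip_ranges_to_nat="ALL_SUBNETWORKS_ALL_IP_RANGES",
)
```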

Also new in today’s release is Firewall Rules Logging, which is now in beta. Using this feature, admins can audit, verify and analyze the effects of their firewall rules. That means when there are repeated connection attempts that the firewall blocked, you can now analyze those and see whether somebody was up to no good or whether somebody misconfigured the firewall. Because the data is only delayed by about five seconds, the service provides near real-time access to this data — and you can obviously tie this in with other services like Stackdriver Logging, Cloud Pub/Sub and BigQuery to create alerts and further analyze the data.

Also new today are managed TLS certificates for HTTPS load balancers. The idea here is to take the hassle out of managing TLS certificates (the kind of certificates that ensure that your user’s browser creates a secure connection to your app) when there is a load balancer in play. This feature, too, is now in beta.

Oct
10
2018
--

Shasta Ventures is doubling down on security startups with 3 new hires

Early-stage venture capital firm Shasta Ventures has brought on three new faces to beef up its enterprise software and security portfolio amid a big push to “go deeper” into cybersecurity, per Shasta’s managing director Doug Pepper.

Balaji Yelamanchili (above left), the former general manager and executive vice president of Symantec’s enterprise security business unit, joins as a venture partner on the firm’s enterprise software team. He was previously a senior vice president at Oracle and Dell EMC. Pepper says Yelamanchili will be sourcing investments and may take board seats in “certain cases.”

The firm has also tapped Salesforce’s former chief information security officer Izak Mutlu (above center) as an executive-in-residence, a role in which he’ll advise Shasta portfolio companies. Mutlu spent 11 years at the cloud computing company managing IT security and compliance.

InterWest board partner Drew Harman, the final new hire, has joined as a board partner and will work closely with the chief executive officers of Shasta’s startups. Harman has worked in enterprise software for 25 years across a number of roles. He is currently on the boards of the cloud-based monetization platform Aria, enterprise content marketing startup NewsCred, customer retention software provider Totango and others.

“There’s no area today that’s more important than cybersecurity,” Pepper told TechCrunch. “The business of venture has gotten increasingly competitive and it demands more focus than ever before. We aren’t looking for generalists, we are looking for domain experts.”

Shasta’s security investments include email authentication service Valimail, which raised a $25 million Series B in May. Airspace Systems, a startup that built “kinetic capture” technologies that can identify offending unmanned aircraft and take them down, raised a $20 million round with participation from Shasta in March. And four-year-old Stealth Security, a startup that defends companies from automated bot attacks, secured an $8 million investment from Shasta in February.

The Menlo Park-based firm filed to raise $300 million for its fifth flagship VC fund in 2016. A year later, it announced a specialty vehicle geared toward augmented and virtual reality app development. With more than $1 billion under management, the firm also backs consumer, IoT, robotics and space-tech companies across the U.S.

In the last year, Shasta has promoted Nikhil Basu Trivedi, Nitin Chopra and Jacob Mullins from associate to partner, as well as added two new associates, Natalie Sandman and Rachel Star.

Sep
24
2018
--

Microsoft wants to put your data in a box

AWS has its Snowball (and Snowmobile truck), Google Cloud has its data transfer appliance and Microsoft has its Azure Data Box. All of these are physical appliances that allow enterprises to ship lots of data to the cloud by uploading it into these machines and then shipping them to the cloud. Microsoft’s Azure Data Box launched into preview about a year ago and today, the company is announcing a number of updates and adding a few new boxes, too.

First of all, the standard 50-pound, 100-terabyte Data Box is now generally available. If you’ve got a lot of data to transfer to the cloud — or maybe collect a lot of offline data — then FedEx will happily pick this one up and Microsoft will upload the data to Azure and charge you for your storage allotment.

If you’ve got a lot more data, though, then Microsoft now also offers the Azure Data Box Heavy. This new box, which is now in preview, can hold up to one petabyte of data. Microsoft did not say how heavy the Data Box Heavy is, though.

Also new is the Azure Data Box Edge, which is now also in preview. In many ways, this is the most interesting of the additions since it goes well beyond transporting data. As the name implies, Data Box Edge is meant for edge deployments where a company collects data. What makes this version stand out is that it’s basically a small data center rack that lets you process data as it comes in. It even includes an FPGA to run AI algorithms at the edge.

Using this box, enterprises can collect the data, transform and analyze it on the box, and then send it to Azure over the network (and not in a truck). Using this, users can cut back on bandwidth cost and don’t have to send all of their data to the cloud for processing.

Also part of the same Data Box family is the Data Box Gateway. This one, however, is a virtual appliance that runs on Hyper-V and VMware and lets users create a data transfer gateway for importing data into Azure. That’s not quite as interesting as a hardware appliance, but useful nonetheless.

Sep
12
2018
--

Nvidia launches the Tesla T4, its fastest data center inferencing platform yet

Nvidia today announced its new GPU for machine learning and inferencing in the data center. The new Tesla T4 GPUs (where the ‘T’ stands for Nvidia’s new Turing architecture) are the successors to the current batch of P4 GPUs that virtually every major cloud computing provider now offers. Google, Nvidia said, will be among the first to bring the new T4 GPUs to its Cloud Platform.

Nvidia argues that the T4s are significantly faster than the P4s. For language inferencing, for example, the T4 is 34 times faster than using a CPU and more than 3.5 times faster than the P4. Peak performance for the T4 is 260 TOPS for 4-bit integer operations and 65 TFLOPS for floating point operations. The T4 sits on a standard low-profile 75-watt PCI-e card.

What’s most important, though, is that Nvidia designed these chips specifically for AI inferencing. “What makes Tesla T4 such an efficient GPU for inferencing is the new Turing tensor core,” said Ian Buck, Nvidia’s VP and GM of its Tesla data center business. “[Nvidia CEO] Jensen [Huang] already talked about the Tensor core and what it can do for gaming and rendering and for AI, but for inferencing — that’s what it’s designed for.” In total, the chip features 320 Turing Tensor cores and 2,560 CUDA cores.

In addition to the new chip, Nvidia is also launching a refresh of its TensorRT software for optimizing deep learning models. This new version also includes the TensorRT inference server, a fully containerized microservice for data center inferencing that plugs seamlessly into an existing Kubernetes infrastructure.
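
To show where TensorRT sits in the workflow: you import a trained model, let the optimizer build a reduced-precision engine and then serve that engine. A rough sketch with the TensorRT Python API — the model file and settings are illustrative:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Import a trained network (here via ONNX) and build an optimized engine.
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:  # hypothetical model file
    parser.parse(f.read())

builder.max_batch_size = 8
builder.fp16_mode = True  # use the GPU's reduced-precision tensor cores
engine = builder.build_cuda_engine(network)
```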

Sep
05
2018
--

Elastic’s IPO filing is here

Elastic, the provider of subscription-based data search software used by Dell, Netflix, The New York Times and others, has unveiled its IPO filing after confidentially submitting paperwork to the SEC in June. The company will be the latest in a line of enterprise SaaS businesses to hit the public markets in 2018.

Headquartered in Mountain View, Elastic plans to raise $100 million in its NYSE listing, though that’s likely a placeholder amount. The timing of the filing suggests the company will transition to the public markets this fall; we’ve reached out to the company for more details. 

Elastic will trade under the symbol ESTC.

The business is known for its core product, an open-source search tool called Elasticsearch. It also offers a range of analytics and visualization tools meant to help businesses organize large data sets, competing directly with companies like Splunk and even Amazon — a name it mentions 14 times in the filing.

“Amazon offers some of our open source features as part of its Amazon Web Services offering. As such, Amazon competes with us for potential customers, and while Amazon cannot provide our proprietary software, the pricing of Amazon’s offerings may limit our ability to adjust,” the company wrote in the filing, which also lists Endeca, FAST, Autonomy and several others as key competitors.
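
For readers who haven’t touched the core product: Elasticsearch exposes a JSON/REST API for indexing and querying documents, with official clients for most languages. A minimal sketch with the Python client — the index name and document are made up:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Index a document, then run a full-text match query against it.
es.index(index="articles", doc_type="_doc",
         body={"title": "Elastic files to go public"})
es.indices.refresh(index="articles")

hits = es.search(index="articles",
                 body={"query": {"match": {"title": "public"}}})
print(hits["hits"]["total"])
```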

This is our first look at Elastic’s financials. The company brought in $159.9 million in revenue in the 12 months ended July 30, 2018, up roughly 100 percent from $88.1 million the year prior. Losses are growing at about the same rate. Elastic reported a net loss of $18.5 million in the second quarter of 2018. That’s an increase from $9.9 million in the same period in 2017.

Founded in 2012, the company has raised about $100 million in venture capital funding, garnering a $700 million valuation the last time it raised VC, which was all the way back in 2014. Its investors include Benchmark, NEA and Future Fund, which retain 17.8 percent, 10.2 percent and 8.2 percent pre-IPO stakes, respectively.

A flurry of business software companies have opted to go public this year. Domo, a business analytics company based in Utah, went public in June raising $193 million in the process. On top of that, subscription biller Zuora had a positive debut in April in what was a “clear sign post on the road to SaaS maturation,” according to TechCrunch’s Ron Miller. DocuSign and Smartsheet are also recent examples of both high-profile and successful SaaS IPOs.

Aug
30
2018
--

OpenStack’s latest release focuses on bare metal clouds and easier upgrades

The OpenStack Foundation today released the 18th version of its namesake open-source cloud infrastructure software. The project has had its ups and downs, but it remains the de facto standard for running and managing large private clouds.

What’s been interesting to watch over the years is how the project’s releases have mirrored what’s been happening in the wider world of enterprise software. The core features of the platform (compute, storage, networking) are very much in place at this point, allowing the project to look forward and to add new features that enterprises are now requesting.

The new release, dubbed Rocky, puts an emphasis on bare metal clouds, for example. While the majority of enterprises still run their workloads in virtual machines, a lot of them are now looking at containers as an alternative with less overhead and the promise of faster development cycles. Many of these enterprises want to run those containers on bare metal clouds, and the project is reacting to this with its “Ironic” project, which offers all of the management and automation features necessary to run these kinds of deployments.

“There’s a couple of big features that landed in Ironic in the Rocky release cycle that we think really set it up well for OpenStack bare metal clouds to be the foundation for both running VMs and containers,” OpenStack Foundation VP of marketing and community Lauren Sell told me. 

Ironic itself isn’t new, but in today’s update, Ironic gets user-managed BIOS settings (to configure power management, for example) and RAM disk support for high-performance computing workloads. Magnum, OpenStack’s service for using container engines like Docker Swarm, Apache Mesos and Kubernetes, is now also a Kubernetes certified installer, meaning that users can be confident that OpenStack and Kubernetes work together just like a user would expect.
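
As a small illustration of how operators drive this programmatically: the openstacksdk client exposes Ironic through a bare metal proxy. A sketch — the cloud name comes from whatever is configured in clouds.yaml:

```python
import openstack

# Credentials are read from a named entry in clouds.yaml.
conn = openstack.connect(cloud="my-private-cloud")

# Ironic tracks each physical machine as a node with a provision state
# (available, active and so on) — the lifecycle the Rocky work extends.
for node in conn.baremetal.nodes():
    print(node.name, node.provision_state, node.power_state)
```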

Another trend that’s becoming quite apparent is that many enterprises that build their own private clouds do so because they have very specific hardware needs. Often, that includes GPUs and FPGAs, for example, for machine learning workloads. To make it easier for these businesses to use OpenStack, the project now includes a lifecycle management service for these kinds of accelerators.

“Specialized hardware is getting a lot of traction right now,” OpenStack CTO Mark Collier noted. “And what’s interesting is that FPGAs have been around for a long time but people are finding out that they are really useful for certain types of AI, because they’re really good at doing the relatively simple math that you need to repeat over and over again millions of times. It’s kind of interesting to see this kind of resurgence of certain types of hardware that maybe was seen as going to be disrupted by cloud and now it’s making a roaring comeback.”

With this update, the OpenStack project is also enabling easier upgrades, something that was long a daunting process for enterprises. Because it was so hard, many chose to simply not update to the latest releases and often stayed a few releases behind. Now, the so-called Fast Forward Upgrade feature allows these users to get on new releases faster, even if they are well behind the project’s own cycle. Oath, which owns TechCrunch, runs a massive OpenStack cloud, for example, and the team recently upgraded a 20,000-core deployment from Juno (the 10th OpenStack release) to Ocata (the 15th release).

The fact that Vexxhost, a Canadian cloud provider, is already offering support for the Rocky release in its new Silicon Valley cloud today is yet another sign that updates are getting a bit easier (and the public cloud side of OpenStack, which often gets overlooked, continues to grow, too).
