Apr 30, 2021
--

The health data transparency movement is birthing a new generation of startups

In the early 2000s, Jeff Bezos gave a seminal TED Talk titled “The Electricity Metaphor for the Web’s Future.” In it, he argued that the internet would enable innovation on the same scale that electricity did.

We are at a similar inflection point in healthcare, with the recent movement toward data transparency birthing a new generation of innovation and startups.

Those who follow the space closely may have noticed that there are twin struggles taking place: a push for more transparency on provider and payer data, including anonymous patient data, and another for strict privacy protection for personal patient data. What’s the main difference?

This sector is still somewhat nascent — we are in the first wave of innovation, with much more to come.

Anonymized data is much more freely available, while personal data is being locked even tighter (as it should be) due to regulations like GDPR, CCPA and their equivalents around the world.

The former trend is enabling a host of new vendors and services that will ultimately make healthcare better and more transparent for all of us.

These new companies could not have existed five years ago. The Affordable Care Act was the first step toward making anonymized data more available. It required healthcare institutions (such as hospitals and healthcare systems) to publish data on costs and outcomes. This included the release of detailed data on providers.

Later legislation required biotech and pharma companies to disclose monies paid to research partners. And every physician in the U.S. is now required to have a National Provider Identifier (NPI), which is listed in a comprehensive public database of providers.

All of this allowed the creation of new types of companies that give both patients and providers more control over their data. Here are some key examples.

Allowing patients to access all their own health data in one place

This is a key capability of patients’ newly found access to health data. Think of how often, as a patient, you’ve seen a provider who wasn’t aware of a treatment or test you’d had elsewhere. Too often, you end up repeating a test simply because the provider has no record of the one conducted elsewhere.

Apr 28, 2021
--

DigitalOcean says customer billing data accessed in data breach

DigitalOcean has emailed customers warning of a data breach involving customers’ billing data, TechCrunch has learned.

The cloud infrastructure giant told customers in an email on Wednesday, obtained by TechCrunch, that it has “confirmed an unauthorized exposure of details associated with the billing profile on your DigitalOcean account.” The company said the person “gained access to some of your billing account details through a flaw that has been fixed” over a two-week window between April 9 and April 22.

The email said customer billing names and addresses were accessed, as well as the last four digits of the payment card, its expiry date and the name of the card-issuing bank. The company said that customers’ DigitalOcean accounts were “not accessed,” and passwords and account tokens were “not involved” in this breach.

“To be extra careful, we have implemented additional security monitoring on your account. We are expanding our security measures to reduce the likelihood of this kind of flaw occuring [sic] in the future,” the email said.

DigitalOcean said it fixed the flaw and notified data protection authorities, but it’s not clear what the flaw was that put customer billing information at risk.

In a statement, DigitalOcean’s security chief Tyler Healy said 1% of billing profiles were affected by the breach, but declined to address our specific questions, including how the vulnerability was discovered and which authorities have been informed.

Companies with customers in Europe are subject to GDPR and can face fines of up to 4% of their global annual revenue.

Last year, the cloud company raised $100 million in new debt, followed by another $50 million round, months after laying off dozens of staff amid concerns about the company’s financial health. In March, the company went public, raising about $775 million in its initial public offering. 

Apr 22, 2021
--

Google’s Anthos multicloud platform gets improved logging, Windows container support and more

Google today announced a sizable update to its Anthos multicloud platform that lets you build, deploy and manage containerized applications anywhere, including on Amazon’s AWS and (in preview) on Microsoft Azure.

Version 1.7 includes new features like improved metrics and logging for Anthos on AWS, a new Connect gateway to interact with any cluster right from Google Cloud and a preview of Google’s managed control plane for Anthos Service Mesh. Other new features include Windows container support for environments that use VMware’s vSphere platform and new tools for developers to make it easier for them to deploy their applications to any Anthos cluster.

Today’s update comes almost exactly two years after Google CEO Sundar Pichai originally announced Anthos at its Cloud Next event in 2019 (before that, Google called this project the “Google Cloud Services Platform,” which launched three years ago). Hybrid and multicloud, it’s fair to say, play a key role in the Google Cloud roadmap — and maybe more so for Google than for any of its competitors. Recently, Google brought on industry veteran Jeff Reed to become the VP of Product Management in charge of Anthos.

Reed told me that he believes that there are a lot of factors right now that are putting Anthos in a good position. “The wind is at our back. We bet on Kubernetes, bet on containers — those were good decisions,” he said. Increasingly, customers are also now scaling out their use of Kubernetes and have to figure out how to best scale out their clusters and deploy them in different environments — and to do so, they need a consistent platform across these environments. He also noted that when it comes to bringing on new Anthos customers, it’s really those factors that determine whether a company will look into Anthos or not.

He acknowledged that there are other players in this market, but he argues that Google Cloud’s take on this is also quite different. “I think we’re pretty unique in the sense that we’re from the cloud, cloud-native is our core approach,” he said. “A lot of what we talk about in [Anthos] 1.7 is about how we leverage the power of the cloud and use what we call ‘an anchor in the cloud’ to make your life much easier. We’re more like a cloud vendor there, but because we support on-prem, we see some of those other folks.” Those other folks being IBM/Red Hat’s OpenShift and VMware’s Tanzu, for example.

The addition of support for Windows containers in vSphere environments also points to the fact that a lot of Anthos customers are classical enterprises that are trying to modernize their infrastructure, yet still rely on a lot of legacy applications that they are now trying to bring to the cloud.

Looking ahead, one thing we’ll likely see is more integrations with a wider range of Google Cloud products into Anthos. And indeed, as Reed noted, inside of Google Cloud, more teams are now building their products on top of Anthos themselves. In turn, that then makes it easier to bring those services to an Anthos-managed environment anywhere. One of the first of these internal services that run on top of Anthos is Apigee. “Your Apigee deployment essentially has Anthos underneath the covers. So Apigee gets all the benefits of a container environment, scalability and all those pieces — and we’ve made it really simple for that whole environment to run kind of as a stack,” he said.

I guess we can expect to hear more about this in the near future — or at Google Cloud Next 2021.

Apr 6, 2021
--

Google Cloud joins the FinOps Foundation

Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.

The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the “cloud financial management” space to establish best practices and standards. As the term implies, “cloud financial management” is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that there are a number of successful startups that do nothing else but help businesses optimize their cloud spend (and ideally lower it).

Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.

“FinOps best practices are essential for companies to monitor, analyze and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, vice president of Engineering and Product at Google Cloud. “More visibility, efficiency and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”

Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open-source standards for cloud financial management.”

“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, executive director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the second of three dedicated Premier Member Technical Advisory Council seats.”

Mar 11, 2021
--

Google Cloud launches a new support option for mission critical workloads

Google Cloud today announced the launch of a new support option for its Premium Support customers that run mission-critical services on its platform. The new service, imaginatively dubbed Mission Critical Services (MCS), brings Google’s own experience with Site Reliability Engineering to its customers. This is not Google completely taking over the management of these services, though. Instead, the company describes it as a “consultative offering in which we partner with you on a journey toward readiness.”

Initially, Google will work with its customers to improve — or develop — the architecture of their apps and help them instrument the right monitoring systems and controls, as well as help them set and raise their service-level objectives (a key feature in the Site Reliability Engineering philosophy).

Later, Google will also provide ongoing check-ins with its engineers and walk customers through tune-ups and architecture reviews. “Our highest tier of engineers will have deep familiarity with your workloads, allowing us to monitor, prevent, and mitigate impacts quickly, delivering the fastest response in the industry. For example, if you have any issues–24-hours-a-day, seven-days-a-week–we’ll spin up a live war room with our experts within five minutes,” Google Cloud’s VP for Customer Experience, John Jester, explains in today’s announcement.

This new offering is another example of how Google Cloud is trying to differentiate itself from the rest of the large cloud providers. Its emphasis today is on providing the high-touch service experiences that were long missing from its platform, with a clear emphasis on the needs of large enterprise customers. That’s what Thomas Kurian promised to do when he became the organization’s CEO and he’s clearly following through.

 



Mar 10, 2021
--

Aqua Security raises $135M at a $1B valuation for its cloud native security platform

Aqua Security, a Boston- and Tel Aviv-based security startup that focuses squarely on securing cloud-native services, today announced that it has raised a $135 million Series E funding round at a $1 billion valuation. The round was led by ION Crossover Partners. Existing investors M12 Ventures, Lightspeed Venture Partners, Insight Partners, TLV Partners, Greenspring Associates and Acrew Capital also participated. In total, Aqua Security has now raised $265 million since it was founded in 2015.

The company was one of the earliest to focus on securing container deployments. And while many of its competitors were acquired over the years, Aqua remains independent and is now likely on a path to an IPO. When it launched, the industry focus was still very much on Docker and Docker containers. To the detriment of Docker, that quickly shifted to Kubernetes, which is now the de facto standard. But enterprises are also now looking at serverless and other new technologies on top of this new stack.

“Enterprises that five years ago were experimenting with different types of technologies are now facing a completely different technology stack, a completely different ecosystem and a completely new set of security requirements,” Aqua CEO Dror Davidoff told me. And with these new security requirements came a plethora of startups, all focusing on specific parts of the stack.


What set Aqua apart, Davidoff argues, is that it managed to 1) become the best solution for container security and 2) realize that, to succeed in the long run, it had to become a platform that would secure the entire cloud-native environment. About two years ago, the company made this switch from a product to a platform, as Davidoff describes it.

“There was a spree of acquisitions by Check Point and Palo Alto [Networks] and Trend [Micro],” Davidoff said. “They all started to acquire pieces and tried to build a more complete offering. The big advantage for Aqua was that we had everything natively built on one platform. […] Five years later, everyone is talking about cloud-native security. No one says ‘container security’ or ‘serverless security’ anymore. And Aqua is practically the broadest cloud-native security [platform].”

One interesting aspect of Aqua’s strategy is that it continues to bet on open source, too. Trivy, its open-source vulnerability scanner, is the default scanner for GitLab’s container scanning feature and for the CNCF’s Harbor registry and Artifact Hub, for example.

“We are probably the best security open-source player there is because not only do we secure from vulnerable open source, we are also very active in the open-source community,” Davidoff said (with maybe a bit of hyperbole). “We provide tools to the community that are open source. To keep evolving, we have a whole open-source team. It’s part of the philosophy here that we want to be part of the community and it really helps us to understand it better and provide the right tools.”

In 2020, Aqua, which mostly focuses on mid-size and larger companies, doubled the number of paying customers and it now has more than half a dozen customers with an ARR of over $1 million each.

Davidoff tells me the company wasn’t actively looking for new funding. Its last funding round came together only a year ago, after all. But the team decided that it wanted to be able to double down on its current strategy and raise sooner than originally planned. ION had been interested in working with Aqua for a while, Davidoff told me, and while the company received other offers, the team decided to go ahead with ION as the lead investor (with all of Aqua’s existing investors also participating in this round).

“We want to grow from a product perspective, we want to grow from a go-to-market [perspective] and expand our geographical coverage — and we also want to be a little more acquisitive. That’s another direction we’re looking at because now we have the platform that allows us to do that. […] I feel we can take the company to great heights. That’s the plan. The market opportunity allows us to dream big.”

 

Mar 2, 2021
--

Microsoft launches Azure Percept, its new hardware and software platform to bring AI to the edge

Microsoft today announced Azure Percept, its new hardware and software platform for bringing more of its Azure AI services to the edge. Percept combines Microsoft’s Azure cloud tools for managing devices and creating AI models with hardware from Microsoft’s device partners. The general idea here is to make it far easier for all kinds of businesses to build and implement AI for things like object detection, anomaly detection, shelf analytics and keyword spotting at the edge by providing them with an end-to-end solution that takes them from building AI models to deploying them on compatible hardware.

To kickstart this, Microsoft is also launching a hardware development kit today with an intelligent camera for vision use cases (dubbed Azure Percept Vision). The kit features hardware-enabled AI modules for running models at the edge, but it can also be connected to the cloud. Users will also be able to trial their proofs-of-concept in the real world because the development kit conforms to the widely used 80/20 T-slot framing architecture.

In addition to Percept Vision, Microsoft is also launching Azure Percept Audio for audio-centric use cases.

Azure Percept devices, including the Trusted Platform Module, Azure Percept Vision and Azure Percept Audio

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” said Roanne Sones, the corporate vice president of Microsoft’s edge and platform group. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Percept customers will have access to Azure’s cognitive services and machine learning models, and Percept devices will automatically connect to Azure’s IoT Hub.
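For context, this is roughly what that device-to-hub hookup looks like with Microsoft’s azure-iot-device Python SDK; a minimal sketch with a placeholder connection string, not Percept-specific code, since Percept devices are meant to handle this plumbing automatically.

```python
# Minimal sketch: a device sending one telemetry message to Azure IoT Hub
# using the azure-iot-device SDK (pip install azure-iot-device).
# The connection string below is a placeholder, not a real credential.
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
client.send_message(Message('{"event": "keyword_spotted"}'))
client.disconnect()
```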

Microsoft says it is working with silicon and equipment manufacturers to build an ecosystem of “intelligent edge devices that are certified to run on the Azure Percept platform.” Over the course of the next few months, Microsoft plans to certify third-party devices for inclusion in this program, which will ideally allow its customers to take their proofs-of-concept and easily deploy them to any certified devices.

“Anybody who builds a prototype using one of our development kits, if they buy a certified device, they don’t have to do any additional work,” said Christa St. Pierre, a product manager in Microsoft’s Azure edge and platform group.

St. Pierre also noted that all of the components of the platform will have to conform to Microsoft’s responsible AI principles — and go through extensive security testing.




Feb 24, 2021
--

Google Cloud puts its Kubernetes Engine on autopilot

Google Cloud today announced a new operating mode for its Kubernetes Engine (GKE) that turns over the management of much of the day-to-day operations of a container cluster to Google’s own engineers and automated tools. With Autopilot, as the new mode is called, Google manages all of the Day 2 operations of managing these clusters and their nodes, all while implementing best practices for operating and securing them.

This new mode augments the existing GKE experience, which already manages much of the infrastructure involved in standing up a cluster. This ‘standard’ experience, as Google Cloud now calls it, is still available and allows users to customize their configurations to their heart’s content and manually provision and manage their node infrastructure.

Drew Bradstock, the Group Product Manager for GKE, told me that the idea behind Autopilot was to combine all of the tools that Google already had for GKE with the expertise of its SRE teams, who know how to run these clusters in production — and have long done so inside of the company.

“Autopilot stitches together auto-scaling, auto-upgrades, maintenance, Day 2 operations and — just as importantly — does it in a hardened fashion,” Bradstock noted. “[…] What this has allowed our initial customers to do is very quickly offer a better environment for developers or dev and test, as well as production, because they can go from Day Zero and the end of that five-minute cluster creation time, and actually have Day 2 done as well.”


From a developer’s perspective, nothing really changes here, but this new mode does free up teams to focus on their actual workloads rather than on managing Kubernetes clusters. With Autopilot, businesses still get the benefits of Kubernetes, but without all of the routine management and maintenance work that comes with it. And that’s definitely a trend we’ve been seeing as the Kubernetes ecosystem has evolved. Few companies, after all, see their ability to effectively manage Kubernetes as their real competitive differentiator.

All of that comes at a price, of course, in addition to the standard GKE flat fee of $0.10 per cluster per hour (there’s also a free GKE tier that provides $74.40 in monthly billing credits), plus additional fees for the resources that your clusters and pods consume. Google offers a 99.95% SLA for the control plane of its Autopilot clusters and a 99.9% SLA for Autopilot pods in multiple zones.
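As a quick sanity check on those numbers (the fee and credit are Google’s figures; the 31-day month is our assumption), the free-tier credit works out to exactly one cluster’s flat fee for a full month:

```python
# Back-of-the-envelope: $0.10 per cluster per hour over a 31-day month equals
# the $74.40 monthly free-tier credit, so the free tier covers the management
# fee for one cluster (resource consumption is still billed separately).
hours_per_31_day_month = 24 * 31                  # 744 hours
monthly_flat_fee = 0.10 * hours_per_31_day_month
print(f"${monthly_flat_fee:.2f}")                 # $74.40
```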


Autopilot for GKE joins a set of container-centric products in the Google Cloud portfolio that also include Anthos for running in multi-cloud environments and Cloud Run, Google’s serverless offering. “[Autopilot] is really [about] bringing the automation aspects in GKE we have for running on Google Cloud, and bringing it all together in an easy-to-use package, so that if you’re newer to Kubernetes, or you’ve got a very large fleet, it drastically reduces the amount of time, operations and even compute you need to use,” Bradstock explained.

And while GKE is a key part of Anthos, that service is more about bringing Google’s config management, service mesh and other tools to an enterprise’s own data center. Autopilot for GKE is, at least for now, only available on Google Cloud.

“On the serverless side, Cloud Run is really, really great for an opinionated development experience,” Bradstock added. “So you can get going really fast if you want an app to be able to go from zero to 1000 and back to zero — and not worry about anything at all and have it managed entirely by Google. That’s highly valuable and ideal for a lot of development. Autopilot is more about simplifying the entire platform people work on when they want to leverage the Kubernetes ecosystem, be a lot more in control and have a whole bunch of apps running within one environment.”

 

Feb 17, 2021
--

Microsoft’s Dapr open-source project to help developers build cloud-native apps hits 1.0

Dapr, the Microsoft-incubated open-source project that aims to make it easier for developers to build event-driven, distributed cloud-native applications, hit its 1.0 milestone today, signifying the project’s readiness for production use cases. Microsoft launched the Distributed Application Runtime (that’s what “Dapr” stands for) back in October 2019. Since then, the project has released 14 updates and the community has launched integrations with virtually all major cloud providers, including Azure, AWS, Alibaba and Google Cloud.

The goal for Dapr, Microsoft Azure CTO Mark Russinovich told me, was to democratize cloud-native development for enterprise developers.

“When we go look at what enterprise developers are being asked to do — they’ve traditionally been doing client, server, web plus database-type applications,” he noted. “But now, we’re asking them to containerize and to create microservices that scale out and have no-downtime updates — and they’ve got to integrate with all these cloud services. And many enterprises are, on top of that, asking them to make apps that are portable across on-premises environments as well as cloud environments or even be able to move between clouds. So just tons of complexity has been thrown at them that’s not specific to or not relevant to the business problems they’re trying to solve.”

And a lot of that development involves reinventing the wheel to make applications reliably talk to various other services. The idea behind Dapr is to give developers a single runtime that, out of the box, provides the tools they need to build event-driven microservices. Among other things, Dapr provides building blocks for service-to-service communication, state management, pub/sub and secrets management.
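To make those building blocks concrete, here is a minimal sketch against Dapr’s HTTP API, assuming a sidecar running on its default port 3500; the component names (“statestore”, “pubsub”) and the “checkout” app ID are illustrative placeholders, not details from the announcement.

```python
# Minimal sketch of three Dapr building blocks via the sidecar's HTTP API.
# Assumes `dapr run` started a sidecar on the default port 3500 and that
# components named "statestore" and "pubsub" are configured (placeholders).
import requests

DAPR = "http://localhost:3500/v1.0"

# State management: save and read a key through the sidecar, no database SDK.
requests.post(f"{DAPR}/state/statestore",
              json=[{"key": "order-1", "value": {"status": "shipped"}}])
order = requests.get(f"{DAPR}/state/statestore/order-1").json()

# Pub/sub: publish an event to a topic; subscribers register their own routes.
requests.post(f"{DAPR}/publish/pubsub/orders", json={"orderId": "order-1"})

# Service invocation: call another Dapr app ("checkout") by its app ID.
resp = requests.get(f"{DAPR}/invoke/checkout/method/health")
```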


“The goal with Dapr was: let’s take care of all of the mundane work of writing one of these cloud-native distributed, highly available, scalable, secure cloud services, away from the developers so they can focus on their code. And actually, we took lessons from serverless, from Functions-as-a-Service where with, for example Azure Functions, it’s event-driven, they focus on their business logic and then things like the bindings that come with Azure Functions take care of connecting with other services,” Russinovich said.

He also noted that another goal here was to do away with language-specific models and to create a programming model that can be leveraged from any language. Enterprises, after all, tend to use multiple languages in their existing code, and a lot of them are now looking at how to best modernize their existing applications — without throwing out all of their current code.

As Russinovich noted, the project now has more than 700 contributors outside of Microsoft (though the core committers are largely from Microsoft), and a number of businesses started using it in production before the 1.0 release. One of the larger cloud providers that is already using it is Alibaba. “Alibaba Cloud has really fallen in love with Dapr and is leveraging it heavily,” he said. Other organizations that have contributed to Dapr include HashiCorp and early users like ZEISS, Ignition Group and New Relic.

And while it may seem a bit odd for a cloud provider to be happy that its competitors are using its innovations already, Russinovich noted that this was exactly the plan and that the team hopes to bring Dapr into a foundation soon.

“We’ve been on a path to open governance for several months and the goal is to get this into a foundation. […] The goal is opening this up. It’s not a Microsoft thing. It’s an industry thing,” he said — but he wasn’t quite ready to say to which foundation the team is talking.

 

Feb 4, 2021
--

Google Cloud launches Apigee X, the next generation of its API management platform

Google today announced the launch of Apigee X, the next major release of the Apigee API management platform it acquired back in 2016.

“If you look at what’s happening — especially after the pandemic started in March last year — the volume of digital activities has gone up in every kind of industry, all kinds of use cases are coming up. And one of the things we see is the need for a really high-performance, reliable, global digital transformation platform,” Amit Zavery, Google Cloud’s head of platform, told me.

He noted that the number of API calls has gone up 47 percent from last year and that the platform now handles about 2.2 trillion API calls per year.
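For a sense of scale, converting that annual figure to a sustained rate (our arithmetic, not Google’s) puts it at roughly 70,000 calls per second:

```python
# Rough rate implied by ~2.2 trillion API calls per year, averaged evenly.
calls_per_year = 2.2e12
seconds_per_year = 365 * 24 * 3600
print(f"{calls_per_year / seconds_per_year:,.0f} calls/sec")  # ~69,761 calls/sec
```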

At the core of the updates are deeper integrations with Google Cloud’s AI, security and networking tools. In practice, this means Apigee users can now deploy their APIs across 24 Google Cloud regions, for example, and use Google’s caching services in more than 100 edge locations.

Image Credits: Google

In addition, Apigee X now integrates with Google’s Cloud Armor firewall and its Cloud Identity Access Management platform. This also means that Apigee users won’t have to use third-party tools for their firewall and identity management needs.

“We do a lot of AI/ML-based anomaly detection and operations management,” Zavery explained. “We can predict any kind of malicious intent or any other things which might happen to those API calls or your traffic by embedding a lot of those insights into our API platform. I think [that] is a big improvement, as well as new features, especially in operations management, security management, vulnerability management and making those a core capability so that as a business, you don’t have to worry about all these things. It comes with the core capabilities and that is really where the front doors of digital front-ends can shine and customers can focus on that.”

The platform now also makes better use of Google’s AI capabilities to help users identify anomalies or predict traffic for peak seasons. The idea here is to help customers automate a lot of these standard tasks and, of course, improve security at the same time.

As Zavery stressed, API management is now about more than just managing traffic between applications. Beyond helping customers manage their digital transformation projects, the Apigee team is now thinking about what it calls ‘digital excellence.’ “That’s how we’re thinking of the journey for customers moving from not just ‘hey, I can have a front end,’ but what about all the excellent things you want to do and how we can do that,” Zavery said.

“During these uncertain times, organizations worldwide are doubling-down on their API strategies to operate anywhere, automate processes, and deliver new digital experiences quickly and securely,” said James Fairweather, Chief Innovation Officer at Pitney Bowes. “By powering APIs with new capabilities like reCAPTCHA Enterprise, Cloud Armor (WAF), and Cloud CDN, Apigee X makes it easy for enterprises like us to scale digital initiatives, and deliver innovative experiences to our customers, employees and partners.”
