Aug
12
2020
--

Adaptive Shield raises $4M for its SaaS security platform

Adaptive Shield, a Tel Aviv-based security startup, is coming out of stealth today and announcing its $4 million seed round led by Vertex Ventures Israel. The company’s platform helps businesses protect their SaaS applications by regularly scanning their various settings for security issues.

The company’s co-founders met in the Israeli Defense Forces, where they were trained on cybersecurity, and then worked at a number of other security companies before starting their own venture. Adaptive Shield CEO Maor Bin, who previously led cloud research at Proofpoint, told me the team decided to look at SaaS security because they believe this is an urgent problem few other companies are addressing.

Pictured is a representative sample of nine apps being monitored by the Adaptive Shield platform, including the total score of each application, affected categories and affected security frameworks and standards. (Image Credits: Adaptive Shield)

“When you look at the problems that are out there — you want to solve something that is critical, that is urgent,” he said. “And what’s more critical than business applications? All the information is out there and every day, we see people moving their on-prem infrastructure into the cloud.”

Bin argues that as companies adopt a large variety of SaaS applications, all with their own security settings and user privileges, security teams are often either overwhelmed or simply not focused on these SaaS tools because they aren’t the system owners and may not even have access to them.
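
To make the scoring idea from the screenshot above concrete, here is a rough sketch of how a per-app configuration score could be computed. All check names, weights and settings are invented for illustration; they are not Adaptive Shield’s actual checks.

```python
# Hypothetical sketch: scoring one SaaS app's settings against a weighted
# security baseline. Check names and weights are invented for illustration.

BASELINE = {
    "mfa_enforced": 30,             # weight of each check toward the total
    "public_sharing_disabled": 25,
    "session_timeout_set": 20,
    "audit_logging_enabled": 25,
}

def score_app(settings: dict) -> int:
    """Return 0-100: the weighted share of baseline checks the app passes."""
    earned = sum(w for check, w in BASELINE.items() if settings.get(check))
    return round(100 * earned / sum(BASELINE.values()))

# Example: an app passing three of the four checks scores 75.
example_settings = {
    "mfa_enforced": True,
    "public_sharing_disabled": False,
    "session_timeout_set": True,
    "audit_logging_enabled": True,
}
print(score_app(example_settings))  # 75
```

A real platform would pull these settings through each SaaS vendor’s admin API and map failed checks onto the affected categories and security frameworks shown in the screenshot; the scoring step itself reduces to a weighted checklist like this.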

“Every enterprise today is heavily using SaaS services without addressing the associated and ever-changing security risks,” says Emanuel Timor, general partner at Vertex Ventures Israel. “We are impressed by the vision Adaptive Shield has to elegantly solve this complex problem and by the level of interest and fast adoption of its solution by customers.”

Onboarding is pretty easy, as Bin showed me, and typically involves setting up a user in the SaaS app and then logging into a given service through Adaptive Shield. Currently, the company supports most of the standard SaaS enterprise applications you would expect, including GitHub, Office 365, Salesforce, Slack, SuccessFactors and Zoom.

“I think that one of the most important differentiators for us is the amount of applications that we support,” Bin noted.

The company already has paying customers, including some Fortune 500 companies across a number of verticals, and it has already invested some of the new funding round, which closed before the global COVID-19 pandemic hit, into building out more integrations for these customers. Bin tells me that Adaptive Shield immediately started hiring once the round closed and is now also in the process of hiring its first employee in the U.S. to help with sales.

Aug
05
2020
--

Microsoft launches Open Service Mesh

Microsoft today announced the launch of a new open-source service mesh based on the Envoy proxy. The Open Service Mesh is meant to be a reference implementation of the Service Mesh Interface (SMI) spec, a standard interface for service meshes on Kubernetes that has the backing of most of the players in this ecosystem.

The company plans to donate Open Service Mesh to the Cloud Native Computing Foundation (CNCF) to ensure that it is community-led and has open governance.

“SMI is really resonating with folks and so we really thought that there was room in the ecosystem for a reference implementation of SMI where the mesh technology was first and foremost implementing those SMI APIs and making it the best possible SMI experience for customers,” Microsoft director of partner management for Azure Compute (and CNCF board member) Gabe Monroy told me.

Image Credits: Microsoft

He also added that, because SMI provides the lowest common denominator API design, Open Service Mesh gives users the ability to “bail out” to raw Envoy if they need some more advanced features. This “no cliffs” design, Monroy noted, is core to the philosophy behind Open Service Mesh.

As for its feature set, Open Service Mesh handles all of the standard service mesh features you’d expect, including securing communication between services with mTLS, managing access control policies, service monitoring and more.
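
For a sense of what the access control side of SMI looks like, here is a minimal SMI TrafficTarget policy, expressed as a Python dict for illustration. The service account and route names are hypothetical, and the exact `apiVersion` depends on the SMI release in use.

```python
# A minimal SMI TrafficTarget: allow the "frontend" service account to call
# the "backend" service account on the routes named in an HTTPRouteGroup.
# Names are hypothetical; apiVersion varies by SMI release.
traffic_target = {
    "apiVersion": "access.smi-spec.io/v1alpha3",
    "kind": "TrafficTarget",
    "metadata": {"name": "frontend-to-backend", "namespace": "demo"},
    "spec": {
        "destination": {"kind": "ServiceAccount",
                        "name": "backend", "namespace": "demo"},
        "sources": [{"kind": "ServiceAccount",
                     "name": "frontend", "namespace": "demo"}],
        "rules": [{"kind": "HTTPRouteGroup",
                   "name": "backend-routes",
                   "matches": ["api"]}],
    },
}
```

A mesh implementing SMI, such as Open Service Mesh, reads resources like this and enforces them at the Envoy proxies, while mTLS between the services is handled by the mesh itself.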

Image Credits: Microsoft

There are plenty of other service mesh technologies in the market today, though. So why would Microsoft launch this?

“What our customers have been telling us is that solutions that are out there today, Istio being a good example, are extremely complex,” he said. “It’s not just me saying this. We see the data in the AKS support queue of customers who are trying to use this stuff — and they’re struggling right here. This is just hard technology to use, hard technology to build at scale. And so the solutions that were out there all had something that wasn’t quite right and we really felt like something lighter weight and something with more of an SMI focus was what was going to hit the sweet spot for the customers that are dabbling in this technology today.”

Monroy also noted that Open Service Mesh can sit alongside other solutions like Linkerd, for example.

A lot of pundits expected Google to also donate its Istio service mesh to the CNCF. That move didn’t materialize. “It’s funny. A lot of people are very focused on the governance aspect of this,” he said. “I think when people over-focus on that, you lose sight of how are customers doing with this technology. And the truth is that customers are not having a great time with Istio in the wild today. I think even folks who are deep in that community will acknowledge that and that’s really the reason why we’re not interested in contributing to that ecosystem at the moment.”

Jul
30
2020
--

Atlassian acquires asset management company Mindville

Atlassian today announced that it has acquired Mindville, a Jira-centric enterprise asset management firm based in Sweden. Mindville’s more than 1,700 customers include the likes of NASA, Spotify and Samsung.

Image Credits: Atlassian

With this acquisition, Atlassian is getting into a new market, too, by adding asset management tools to its lineup of services. The company’s flagship product is Mindville Insight, which helps IT, HR, sales, legal and facilities teams track assets across a company. It’s completely agnostic as to which assets you are tracking, though, given Atlassian’s user base, most companies will likely use it to track IT assets like servers and laptops. But in addition to physical assets, you can also use the service to automatically import cloud-based servers from AWS, Azure and GCP, for example, and the team has built connectors to services like ServiceNow and Snow Software, too.

Image Credits: Mindville

“Mindville Insight provides enterprises with full visibility into their assets and services, critical to delivering great customer and employee service experiences. These capabilities are a cornerstone of IT Service Management (ITSM), a market where Atlassian continues to see strong momentum and growth,” Atlassian’s head of tech teams Noah Wasmer writes in today’s announcement.

Co-founded by Tommy Nordahl and Mathias Edblom, Mindville never raised any institutional funding, according to Crunchbase. The two companies also didn’t disclose the acquisition price.

Like some of Atlassian’s other recent acquisitions, including Code Barrel, Mindville was already an Atlassian partner and was successfully selling its service in the Atlassian Marketplace.

“This acquisition builds on Atlassian’s investment in [IT Service Management], including recent acquisitions like Opsgenie for incident management, Automation for Jira for code-free automation, and Halp for conversational ticketing,” Atlassian’s Wasmer writes.

The Mindville team says it will continue to support existing customers and that Atlassian will continue to build on Insight’s tools while it works to integrate them with Jira Service Desk. That integration, Atlassian argues, will give its users more visibility into their assets and allow them to deliver better customer and employee service experiences.

Image Credits: Mindville

“We’ve watched the Insight product line be used heavily in many industries and for various disciplines, including some we never expected! One of the most popular areas is IT Service Management where Insight plays an important role connecting all relevant asset data to incidents, changes, problems, and requests,” write Mindville’s founders in today’s announcement. “Combining our solutions with the products from Atlassian enables tighter integration for more sophisticated service management, empowered by the underlying asset data.”

Jul
15
2020
--

Gmail for G Suite gets deep integrations with Chat, Meet, Rooms and more

Google is launching a major update to its G Suite productivity tools today that will see a deep integration of Gmail, Chat, Meet and Rooms on the web and on mobile, as well as other tools like Calendar, Docs, Sheets and Slides. This integration will become available in the G Suite early adopter program, with a wider roll-out coming at a later time.

The G Suite team has been working on this project for about a year, though it fast-tracked the Gmail/Meet integration, which was originally scheduled to be part of today’s release, as part of its response to the COVID-19 pandemic.

At the core of today’s update is the idea that we’re all constantly switching between different modes of communication, be that email, chat, voice or video. So with this update, the company is bringing all of this together, with Gmail being the focal point for the time being, given that this is where most users already find themselves for hours on end anyway.

Google is branding this initiative as a “better home for work” and in practice, it means that you’ll not just see deeper integrations between products, like a full calendaring and file management experience in Gmail, but also the ability to have a video chat open on one side of the window while collaboratively editing a document in real time on the other.

Image Credits: Google

According to G Suite VP and GM Javier Soltero, the overall idea here is about more than bringing all of these tools closer together to reduce the task-switching that users have to do.

Image Credits: Google

“We’re announcing something we’ve been working on since a little bit before I even joined Google last year: a new integrated workspace designed to bring together all the core components of communication and collaboration into a single surface that is not just about bringing these ingredients into the same pane of glass, but also realizes something that’s greater than the sum of its parts,” he told me ahead of today’s announcement. “The degree of integration across the different modes of communication, specifically email, chat, and voice and video calling, along with our existing strength in collaboration.”

Google already revealed some of its mobile plans in May, when it announced its latest major update to Gmail for mobile, with a Meet integration in the form of a new bar at the bottom of the screen for moving between Mail and Meet. With today’s update, that bar is expanding to include native Chat and Rooms support as well. Soltero noted that Google thinks of these four products as the “four pillars of the integrated workspace.” Having them all integrated into a single app means you can manage the notification behavior of all of them in a single place, for example, without the often cumbersome task-switching experience on mobile.

Image Credits: Google

For now, these updates are specific to G Suite, though similar to Google’s work around bringing Meet to consumers, the company plans to bring this workspace experience to consumers as well, but what exactly that will look like still remains to be seen. “Right now we’re really focused. The people who urgently need this are those involved in productivity scenarios. This idea of ‘the new home for work’ is much more about collaboration that is specific to professional settings, productivity and workplace settings,” Soltero said.

Image Credits: Google

But there is more…

Google is also announcing a few other feature updates to its G Suite line today. Chat rooms, for example, are now getting shared files and tasks, with the ability to assign tasks and to invite users from outside your company into rooms. These rooms now also let you have chats open on one side and edit a document on the other, all without switching to a completely different web app.

Also new is the ability in Gmail to search not just for emails but also chats, as well as new tools to pin important rooms and new “do not disturb” and “out of office” settings.

One nifty new feature of these new integrated workspaces is that Google is also working with some of its partners to bring their apps into the experience. The company specifically mentions DocuSign, Salesforce and Trello. These companies already offer some deep Gmail integrations, including integrations with the Gmail sidebar, so we’ll likely see this list expand over time.

Meet itself, too, is getting some updates in the coming weeks with “knocking controls” to make sure that once you throw somebody out of a meeting, that person can’t come back, and safety locks that help meeting hosts decide who can chat or present in a meeting.

Image Credits: Google

Jul
14
2020
--

Google Cloud launches Confidential VMs

At its virtual Cloud Next ’20 event, Google Cloud today announced Confidential VMs, a new type of virtual machine that makes use of the company’s work around confidential computing to ensure that data isn’t just encrypted at rest but also while it is in memory.

“We already employ a variety of isolation and sandboxing techniques as part of our cloud infrastructure to help make our multi-tenant architecture secure,” the company notes in today’s announcement. “Confidential VMs take this to the next level by offering memory encryption so that you can further isolate your workloads in the cloud. Confidential VMs can help all our customers protect sensitive data, but we think it will be especially interesting to those in regulated industries.”

On the backend, Confidential VMs make use of AMD’s Secure Encrypted Virtualization feature, available in its second-generation EPYC CPUs. With that, the data stays encrypted while in use, and the encryption keys that make this happen are generated in hardware and can’t be exported, so even Google doesn’t have access to the keys.

Image Credits: Google

Developers who want to shift their existing VMs to a Confidential VM can do so with just a few clicks. Google notes that it built Confidential VMs on top of its Shielded VMs, which already provide protection against rootkits and other exploits.
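
For illustration, here is a hedged sketch of what requesting such a VM through the Compute Engine API might look like, expressed as the instance body of the request. The instance name and zone are placeholders; the `confidentialInstanceConfig` and `shieldedInstanceConfig` field names follow the GCE API, but this is a sketch rather than Google’s documented quickstart.

```python
# Sketch of a Compute Engine instance body requesting a Confidential VM:
# an N2D machine type plus the confidential-compute flag, layered on top of
# the Shielded VM protections mentioned above. Names/zone are placeholders.
instance_body = {
    "name": "confidential-demo",
    "machineType": "zones/us-central1-a/machineTypes/n2d-standard-2",
    "confidentialInstanceConfig": {
        "enableConfidentialCompute": True,   # turns on SEV memory encryption
    },
    "shieldedInstanceConfig": {              # Confidential VMs build on these
        "enableSecureBoot": True,
        "enableVtpm": True,
        "enableIntegrityMonitoring": True,
    },
}
```

The key constraint is the machine type: because the feature relies on AMD SEV, it is limited to the EPYC-based N2D series.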

“With built-in secure encrypted virtualization, 2nd Gen AMD EPYC processors provide an innovative hardware-based security feature that helps secure data in a virtualized environment,” said Raghu Nambiar, corporate vice president, Data Center Ecosystem, AMD. “For the new Google Compute Engine Confidential VMs in the N2D series, we worked with Google to help customers both secure their data and achieve performance of their workloads.”

That last part is obviously important, given that the extra encryption and decryption steps do incur at least a minor performance penalty. Google says it worked with AMD and developed new open-source drivers to ensure that “the performance metrics of Confidential VMs are close to those of non-confidential VMs.” At least according to the benchmarks Google itself has disclosed so far, both startup times and memory read and throughput performance are virtually the same for regular VMs and Confidential VMs.

Jul
14
2020
--

Google Cloud’s new BigQuery Omni will let developers query data in GCP, AWS and Azure

At its virtual Cloud Next ’20 event, Google today announced a number of updates to its cloud portfolio, but the private alpha launch of BigQuery Omni is probably the highlight of this year’s event. Powered by Google Cloud’s Anthos hybrid-cloud platform, BigQuery Omni allows developers to use the BigQuery engine to analyze data that sits in multiple clouds, including those of Google Cloud competitors like AWS and Microsoft Azure — though for now, the service only supports AWS, with Azure support coming later.

Using a unified interface, developers can analyze this data locally without having to move data sets between platforms.

“Our customers store petabytes of information in BigQuery, with the knowledge that it is safe and that it’s protected,” said Debanjan Saha, the GM and VP of Engineering for Data Analytics at Google Cloud, in a press conference ahead of today’s announcement. “A lot of our customers do many different types of analytics in BigQuery. For example, they use the built-in machine learning capabilities to run real-time analytics and predictive analytics. […] A lot of our customers who are very excited about using BigQuery in GCP are also asking, ‘how can they extend the use of BigQuery to other clouds?’ ”

Image Credits: Google

Google has long said that it believes that multi-cloud is the future — something that most of its competitors would probably agree with, though they all would obviously like you to use their tools, even if the data sits in other clouds or is generated off-platform. It’s the tools and services that help businesses to make use of all of this data, after all, where the different vendors can differentiate themselves from each other. Maybe it’s no surprise then, given Google Cloud’s expertise in data analytics, that BigQuery is now joining the multi-cloud fray.

“With BigQuery Omni customers get what they wanted,” Saha said. “They wanted to analyze their data no matter where the data sits and they get it today with BigQuery Omni.”

Image Credits: Google

He noted that Google Cloud believes that this will help enterprises break down their data silos and gain new insights into their data, all while allowing developers and analysts to use a standard SQL interface.
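
That point is easy to illustrate: the query text an analyst writes doesn’t change when the data lives in another cloud; only the dataset it targets does. A minimal sketch, with project, dataset and column names all hypothetical:

```python
# Build a standard-SQL query of the kind BigQuery runs unchanged whether the
# dataset is native to GCP or, via Omni, sits in AWS. Names are hypothetical.
def build_query(project: str, dataset: str) -> str:
    return (
        f"SELECT product, SUM(revenue) AS total\n"
        f"FROM `{project}.{dataset}.orders`\n"
        f"GROUP BY product\n"
        f"ORDER BY total DESC\n"
        f"LIMIT 10"
    )

# The same query text works for a GCP-resident or an AWS-resident dataset:
print(build_query("my-project", "aws_sales"))
```

With Omni, the BigQuery engine running on Anthos in the other cloud executes this query next to the data, so nothing has to be moved between platforms first.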

Today’s announcement is also a good example of how Google’s bet on Anthos is paying off by making it easier for the company to not just allow its customers to manage their multi-cloud deployments but also to extend the reach of its own products across clouds. This also explains why BigQuery Omni isn’t available for Azure yet, given that Anthos for Azure is still in preview, while AWS support became generally available in April.

Jul
09
2020
--

Docker partners with AWS to improve container workflows

Docker and AWS today announced a new collaboration that introduces a deep integration between Docker’s Compose and Desktop developer tools and AWS’s Elastic Container Service (ECS) and ECS on AWS Fargate. Previously, the two companies note, the workflow to take Compose files and run them on ECS was often challenging for developers. Now, the two companies simplified this process to make switching between running containers locally and on ECS far easier.
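
For a sense of what “running Compose files on ECS” means in practice, here is a minimal Compose definition of the kind the integration can deploy, held in a Python string for illustration. The service and image names are placeholders; the workflow, roughly, is to create an ECS-type Docker context and then run `docker compose up` against it.

```python
# A minimal docker-compose.yml of the kind the Docker/ECS integration can
# take from a local run to Fargate unchanged. Service/image are placeholders.
COMPOSE_YML = """\
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
"""
print(COMPOSE_YML)
```

The point of the partnership is that this same file now describes both the local development environment and the ECS deployment, rather than requiring a separate ECS task definition.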

“With a large number of containers being built using Docker, we’re very excited to work with Docker to simplify the developer’s experience of building and deploying containerized applications to AWS,” said Deepak Singh, the VP for compute services at AWS. “Now customers can easily deploy their containerized applications from their local Docker environment straight to Amazon ECS. This accelerated path to modern application development and deployment allows customers to focus more effort on the unique value of their applications, and less time on figuring out how to deploy to the cloud.”

In a bit of a surprise move, Docker last year sold off its enterprise business to Mirantis to solely focus on cloud-native developer experiences.

“In November, we separated the enterprise business, which was very much focused on operations, CXOs and a direct sales model, and we sold that business to Mirantis,” Docker CEO Scott Johnston told TechCrunch’s Ron Miller earlier this year. “At that point, we decided to focus the remaining business back on developers, which was really Docker’s purpose back in 2013 and 2014.”

Today’s move is an example of this new focus, given that the workflow issues this partnership addresses had been around for quite a while already.

It’s worth noting that Docker also recently engaged in a strategic partnership with Microsoft to integrate the Docker developer experience with Azure’s Container Instances.

Jul
08
2020
--

Google launches the Open Usage Commons, a new organization for managing open-source trademarks

Google, in collaboration with a number of academic leaders and its consulting partner SADA Systems, today announced the launch of the Open Usage Commons, a new organization that aims to help open-source projects manage their trademarks.

To be fair, at first glance, open-source trademarks may not sound like a major problem (or even a really interesting topic), but there’s more here than meets the eye. As Google’s director of open source Chris DiBona told me, trademarks have increasingly become an issue for open-source projects, not necessarily because there have been legal issues around them, but because commercial entities that want to use the logo or name of an open-source project on their websites, for example, don’t have the reassurance that they are free to use those trademarks.

“One of the things that’s been rearing its ugly head over the last couple years has been trademarks,” he told me. “There’s not a lot of trademarks in open-source software in general, but particularly at Google, and frankly the higher tier, the more popular open-source projects, you see them more and more over the last five years. If you look at open-source licensing, they don’t treat trademarks at all the way they do copyright and patents, even Apache, which is my favorite license, they basically say, nope, not touching it, not our problem, you go talk.”

Traditionally, open-source licenses didn’t cover trademarks because there simply weren’t a lot of trademarks in the ecosystem to worry about. One of the exceptions here was Linux, a trademark that is now managed by the Linux Mark Institute on behalf of Linus Torvalds.

With that, commercial companies aren’t sure how to handle this situation and developers also don’t know how to respond to these companies when they ask them questions about their trademarks.

“What we wanted to do is give guidance around how you can share trademarks in the same way that you would share patents and copyright in an open-source license […],” DiBona explained. “And the idea is to basically provide that guidance, you know, provide that trademarks file, if you will, that you include in your source code.”

Google itself is putting three of its own open-source trademarks into this new organization: the Angular web application framework for mobile, the Gerrit code review tool and the Istio service mesh. “All three of them are kind of perfect for this sort of experiment because they’re under active development at Google, they have a trademark associated with them, they have logos and, in some cases, a mascot.”

One of those mascots is Diffi, the Kung Fu Code Review Cuckoo, because, as DiBona noted, “we were trying to come up with literally the worst mascot we could possibly come up with.” It’s now up to the Open Usage Commons to manage that trademark.

DiBona also noted that all three projects have third parties shipping products based on these projects (think Gerrit as a service).

Another thing DiBona stressed is that this is an independent organization. Besides himself, Jen Phillips, a senior engineering manager for open source at Google, is also on the board. But the team also brought in SADA’s CTO Miles Ward (who was previously at Google); Allison Randal, the architect of the Parrot virtual machine and a member of the boards of the Perl Foundation and OpenStack Foundation, among others; Charles Lee Isbell Jr., the dean of the Georgia Institute of Technology College of Computing; and Cliff Lampe, a professor at the School of Information at the University of Michigan and a “rising star,” as DiBona pointed out.

“These are people who really have the best interests of computer science at heart, which is why we’re doing this,” DiBona noted. “Because the thing about open source — people talk about it all the time in the context of business and all the rest. The reason I got into it is because through open source we could work with other people in this sort of fertile middle space and sort of know what the deal was.”

Update: even though Google argues that the Open Usage Commons is complementary to other open-source organizations, the Cloud Native Computing Foundation (CNCF) released the following statement by Chris Aniszczyk, the CNCF’s CTO: “Our community members are perplexed that Google has chosen to not contribute the Istio project to the Cloud Native Computing Foundation (CNCF), but we are happy to help guide them to resubmit their old project proposal from 2017 at any time. In the end, our community remains focused on building and supporting our service mesh projects like Envoy, linkerd and interoperability efforts like the Service Mesh Interface (SMI). The CNCF will continue to be the center of gravity of cloud native and service mesh collaboration and innovation.”

 

Jul
08
2020
--

Suse acquires Kubernetes management platform Rancher Labs

Suse, which describes itself as “the world’s largest independent open source company,” today announced that it has acquired Rancher Labs, a company that has long focused on making it easier for enterprises to manage their container clusters.

The two companies did not disclose the price of the acquisition, but Rancher was well funded, with a total of $95 million in investments. It’s also worth mentioning that it has only been a few months since the company announced its $40 million Series D round led by Telstra Ventures. Other investors include the likes of Mayfield and Nexus Venture Partners, GRC SinoGreen and F&G Ventures.

Like similar companies, Rancher focused first on Docker infrastructure before pivoting to Kubernetes once that became the de facto standard for container orchestration. Unsurprisingly, this is also why Suse is now acquiring the company. After a number of ups and downs — and various ownership changes — Suse has now found its footing again, and today’s acquisition shows that it’s aiming to capitalize on its current strengths.

Just last month, the company reported that the annual contract value of its bookings increased 30% year over year and that it saw a 63% increase in customer deals worth more than $1 million in the last quarter, with its cloud revenue growing 70%. While it is still in the Linux distribution business the company was founded on, today’s Suse is a very different company, offering various enterprise platforms (including its Cloud Foundry-based Cloud Application Platform), solutions and services. And while it already offered a Kubernetes-based container platform, Rancher’s expertise will only help it build out this business.

“This is an incredible moment for our industry, as two open source leaders are joining forces. The merger of a leader in Enterprise Linux, Edge Computing and AI with a leader in Enterprise Kubernetes Management will disrupt the market to help customers accelerate their digital transformation journeys,” said Suse CEO Melissa Di Donato in today’s announcement. “Only the combination of SUSE and Rancher will have the depth of a globally supported and 100% true open source portfolio, including cloud native technologies, to help our customers seamlessly innovate across their business from the edge to the core to the cloud.”

The company describes today’s acquisition as the first step in its “inorganic growth strategy” and Di Donato notes that this acquisition will allow the company to “play an even more strategic role with cloud service providers, independent hardware vendors, systems integrators and value-added resellers who are eager to provide greater customer experiences.”

Jul
07
2020
--

Nvidia’s Ampere GPUs come to Google Cloud

Nvidia today announced that its new Ampere-based data center GPUs, the A100 Tensor Core GPUs, are now available in alpha on Google Cloud. As the name implies, these GPUs were designed for AI workloads, as well as data analytics and high-performance computing solutions.

The A100 promises a significant performance improvement over previous generations. Nvidia says the A100 can boost training and inference performance by over 20x compared to its predecessors (though you’ll see 6x or 7x improvements in most benchmarks) and tops out at about 19.5 TFLOPs in single-precision performance and 156 TFLOPs for Tensor Float 32 workloads.

Image Credits: Nvidia

“Google Cloud customers often look to us to provide the latest hardware and software services to help them drive innovation on AI and scientific computing workloads,” said Manish Sainani, Director of Product Management at Google Cloud, in today’s announcement. “With our new A2 VM family, we are proud to be the first major cloud provider to market Nvidia A100 GPUs, just as we were with Nvidia’s T4 GPUs. We are excited to see what our customers will do with these new capabilities.”

Google Cloud users can get access to instances with up to 16 of these A100 GPUs, for a total of 640GB of GPU memory and 1.3TB of system memory.
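
The aggregate memory figure follows directly from the per-GPU spec, and the quoted throughput numbers imply the TF32 speedup. A quick sanity check, assuming the 40 GB A100 variant:

```python
# Sanity-checking the numbers above: 16 A100s at 40 GB of HBM each, and the
# ratio between the quoted TF32 and single-precision throughput figures.
gpus = 16
hbm_per_gpu_gb = 40                      # 40 GB A100 variant assumed
total_gpu_memory_gb = gpus * hbm_per_gpu_gb

tf32_tflops, fp32_tflops = 156, 19.5     # figures quoted in the article
tf32_speedup = tf32_tflops / fp32_tflops

print(total_gpu_memory_gb, tf32_speedup)  # 640 8.0
```

So the 640GB figure checks out, and the TF32 path is an 8x throughput jump over plain single precision, which is where much of the headline training speedup comes from.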
