Dec 03, 2019

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to run analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. To handle this, AQUA is essentially a hardware-accelerated cache that promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs, 384 GiB of memory and up to 64 TB of storage. You can build clusters of these with up to 128 instances.

Nov 13, 2019

Messaging app Wire confirms $8.2M raise, responds to privacy concerns after moving holding company to the US

Big changes are afoot for Wire, an enterprise-focused end-to-end encrypted messaging app and service that advertises itself as “the most secure collaboration platform”. In February, Wire quietly raised $8.2 million from Morpheus Ventures and others, we’ve confirmed — the first funding amount it has ever disclosed — and alongside that external financing, it moved its holding company from Luxembourg to the US in the same month, a switch that Wire CEO Morten Brogger described in an interview as “simple and pragmatic.”

He also said that Wire is planning to introduce a freemium tier to its existing consumer service — which itself has half a million users — while working on a larger round of funding to fuel more growth of its enterprise business — a key reason for moving to the US, he added: There is more money to be raised there.

“We knew we needed this funding and additional [capital] to support continued growth. We made the decision that at some point in time it will be easier to get funding in North America, where there’s six times the amount of venture capital,” he said.

While Wire has moved its holding company to the US, it is keeping the rest of its operations as is. Customers are licensed and serviced from Wire Switzerland; the software development team is in Berlin, Germany; and hosting remains in Europe.

The news of Wire’s US move and the basics of its February funding — sans value, date or backers — came out this week via a blog post that raises questions about whether a company that trades on the idea of data privacy should itself be more transparent about its activities.

Specifically, the changes to Wire’s financing and legal structure were only communicated to users when news started to leak out, which raises questions not just about transparency, but also about the state of Wire’s privacy policy, given that its holding company is now on US soil.

It was an issue picked up and amplified by NSA whistleblower Edward Snowden. Via Twitter, he described the move to the US as “not appropriate for a company claiming to provide a secure messenger — claims a large number of human rights defenders relied on.”

“There was no change in control and [the move was] very tactical [because of fundraising],” Brogger said about the company’s decision not to communicate the move, adding that the company had never talked about funding in the past, either. “Our evaluation was that this was not necessary. Was it right or wrong? I don’t know.”

The other key question is whether Wire’s shift to the US puts users’ data at risk — a question that Brogger claims is straightforward to answer: “We are in Switzerland, which has the best privacy laws in the world” — it’s subject to Europe’s General Data Protection Regulation framework (GDPR) on top of its own local laws — “and Wire now belongs to a new group holding, but there [is] no change in control.”

In its blog post published in the wake of blowback from privacy advocates, Wire also claims it “stands by its mission to best protect communication data with state-of-the-art technology and practice” — listing several items in its defence:

  • All source code has been and will be available for inspection on GitHub (github.com/wireapp).
  • All communication through Wire is secured with end-to-end encryption — messages, conference calls, files. The decryption keys are only stored on user devices, not on our servers. Wire also gives companies the option to deploy their own instances of Wire in their own data centers.
  • Wire has started working on a federated protocol to connect on-premise installations and make messaging and collaboration more ubiquitous.
  • Wire believes that data protection is best achieved through state-of-the-art encryption and continues to innovate in that space with Messaging Layer Security (MLS).

But where data privacy and US law are concerned, it’s complicated. Snowden famously leaked scores of classified documents disclosing the extent of US government mass surveillance programs in 2013, including how data-harvesting was embedded in US-based messaging and technology platforms.

Six years on, the political and legal ramifications of that disclosure are still playing out — with a key judgement pending from Europe’s top court which could yet unseat the current data transfer arrangement between the EU and the US.

Privacy versus security

Wire launched at a time when interest in messaging apps was at a high watermark. The company made its debut in the middle of February 2014, and it was only one week later that Facebook acquired WhatsApp for the princely sum of $19 billion.

We described Wire’s primary selling point at the time as a “reimagining of how a communications tool like Skype should operate had it been built today” rather than in 2003. That meant encryption and privacy protection, but also better audio tools, file compression and more.

It was a pitch that seemed especially compelling considering the background of the company. Skype co-founder Janus Friis and funds connected to him were the startup’s first backers (and they remain the largest shareholders); Wire was co-founded by Skype alums Jonathan Christensen and Alan Duric (the former is no longer with the company; the latter is its CTO); and even new investor Morpheus has Skype roots.

Yet even with that Skype pedigree, the strategy faced a big challenge.

“The consumer messaging market is lost to the Facebooks of the world, which dominate it,” Brogger said today. “However, we made a clear insight, which is the core strength of Wire: security and privacy.”

That, combined with the trend around the consumerization of IT that’s brought new tools to business users, is what led Wire to the enterprise market in 2017 — a shift that’s seen it pick up a number of big names among its 700 enterprise customers, including Fortum, Aon, EY and SoftBank Robotics.

But fast forward to today, and even if security and privacy are two sides of the same coin, deciding which to optimise for in features and future development may not be so simple. That is part of the question now, and it is what critics are concerned with.

“Wire was always for profit and planned to follow the typical venture backed route of raising rounds to accelerate growth,” one source familiar with the company told us. “However, it took time to find its niche (B2B, enterprise secure comms).

“It needed money to keep the operations going and growing. [But] the new CEO, who joined late 2017, didn’t really care about the free users, and the way I read it now, the transformation is complete: ‘If Wire works for you, fine, but we don’t really care about what you think about our ownership or funding structure as our corporate clients care about security, not about privacy.’”

And that is the message you get from Brogger, too, who describes individual consumers as “not part of our strategy”, but also not entirely removed from it, either, as the focus shifts to enterprises and their security needs.

Brogger said there are still half a million individuals on the platform, and Wire will come up with ways to continue to serve them under the same privacy policies and with the same kind of service as the enterprise users. “We want to give them all the same features with no limits,” he added. “We are looking to switch it into a freemium model.”

On the other side, “We are having a lot of inbound requests on how Wire can replace Skype for Business,” he said. “We are the only one who can do that with our level of security. It’s become a very interesting journey and we are super excited.”

Part of the company’s push into enterprise has also seen it make a number of hires. This has included bringing in two former Huddle C-suite execs, Brogger as CEO and Rasmus Holst as chief revenue officer — a bench that Wire expanded this week with three new hires from three other B2B businesses: a VP of EMEA sales from New Relic, a VP of finance from Contentful and a VP of Americas sales from Xeebi.

Such growth clearly comes with a price tag attached, which is why Wire is opening itself up to more funding and more exposure in the US, but also to more scrutiny and questions from those who counted on its services before the change.

Brogger said inbound interest has been strong and he expects the startup’s next round to close in the next two to three months.

Sep 26, 2019

Battlefield vet StrongSalt (formerly OverNest) announces $3M seed round

StrongSalt, then known as OverNest, appeared at the TechCrunch Disrupt NYC Battlefield in 2016, and announced a product for searching encrypted code, which remains unusual to this day. Today, the company announced a $3 million seed round led by Valley Capital Partners.

StrongSalt founder and CEO Ed Yu says encryption remains a difficult proposition, and that when you look at the majority of breaches, encryption wasn’t used. He said that his company wants to simplify adding encryption to applications, and came up with a new service to let developers add encryption in the form of an API. “We decided to come up with what we call an API platform. It’s like infrastructure that allows you to integrate our solution into any existing or any new applications,” he said.

The company’s original idea was to create a product to search encrypted code, but Yu says the tech has much more utility as an API that’s applicable across applications, and that’s why they decided to package it as a service. It’s not unlike Twilio for communications or Stripe for payments, except in this case you can build in searchable encryption.

The searchable part is actually a pretty big deal because, as Yu points out, when you encrypt data it is no longer searchable. “If you encrypt all your data, you cannot search within it, and if you cannot search within it, you cannot find the data you’re looking for, and obviously you can’t really use the data. So we actually solved that problem,” he said.

Developers can add searchable encryption as part of their applications. For customers already using a commercial product, the company’s API actually integrates with popular services, enabling customers to encrypt the data stored there, while keeping it searchable.

“We will offer a storage API on top of Box, AWS S3, Google Cloud, Azure — depending on what the customer has or wants. If the customer already has AWS S3 storage, for example, then when they use our API, and after encrypting the data, it will be stored in their AWS repository,” Yu explained.

For those companies that don’t have a storage service, the company is offering one. What’s more, they are using the blockchain to provide a mechanism for sharing, auditing and managing encrypted data. “We also use the blockchain for sharing data by recording the authorization by the sender, so the receiver can retrieve the information needed to reconstruct the keys in order to retrieve the data. This simplifies key management in the case of sharing and ensures auditability and revocability of the sharing by the sender,” Yu said.

If you’re wondering how the company has been surviving since 2016 while only getting its seed round today, it had a couple of small seed rounds prior to this, along with a contract with the U.S. Department of Defense, which replaced the need for substantial earlier funding.

“The DOD was looking for a solution to have secure communication between computers, and they needed to have a way to securely store data, and so we were providing a solution for them,” he said. In fact, this work was what led them to build the commercial API platform they are offering today.

The company, which was founded in 2015, currently has 12 employees spread across the globe.

Sep 12, 2019

The mainframe business is alive and well, as IBM announces new z15

It’s easy to think of mainframes as a technology dinosaur, but the fact is these machines remain a key component of many large organizations’ computing strategies. Today, IBM announced the latest in its line of mainframe computers, the z15.

For starters, as you would probably expect, these are big and powerful machines capable of handling enormous workloads. For example, this baby can process up to 1 trillion web transactions a day and handle 2.4 million Docker containers, while offering unparalleled security to go with that performance. This includes the ability to encrypt data once and have it stay encrypted even when it leaves the system, a huge advantage for companies with a hybrid strategy.

Speaking of which, you may recall that IBM bought Red Hat last year for $34 billion. That deal closed in July and the companies have been working to incorporate Red Hat technology across the IBM business including the z line of mainframes.

IBM announced last month that it was making OpenShift, Red Hat’s Kubernetes-based cloud-native tools, available on the mainframe running Linux. This should enable developers who have been working with OpenShift on other systems to move seamlessly to the mainframe without special training.

IBM sees the mainframe as a bridge for hybrid computing environments, offering a highly secure place for data that, when combined with Red Hat’s tools, can give companies a single control plane for applications and data wherever they live.

While it could be tough to justify the cost of these machines in the age of cloud computing, Ray Wang, founder and principal analyst at Constellation Research, says it could be more cost-effective than the cloud for certain customers. “If you are a new customer, and currently in the cloud and develop on Linux, then in the long run the economics are there to be cheaper than public cloud if you have a lot of IO, and need to get to a high degree of encryption and security,” he said.

He added, “The main point is that if you are worried about being held hostage by public cloud vendors on pricing, in the long run the z is a cost-effective and secure option for owning compute power and working in a multi-cloud, hybrid cloud world.”

Companies like airlines and financial services firms continue to use mainframes, and while they need the power these massive machines provide, they also need to use them in a more modern context. The z15 is designed to provide that link to the future, while giving these companies the power they need.

Aug 26, 2019

IBM’s quantum-resistant magnetic tape storage is not actually snake oil

Usually when someone in tech says the word “quantum,” I put my hands on my ears and sing until they go away. But while IBM’s “quantum computing safe tape drive” nearly drove me to song, when I thought about it, it actually made a lot of sense.

First of all, it’s a bit of a misleading lede. The tape is not resistant to quantum computing at all. The problem isn’t that qubits are going to escape their cryogenic prisons and go interfere with tape drives in the basement of some data center or HQ. The problem is what these quantum computers may be able to accomplish when they’re finally put to use.

Without going too deep down the quantum rabbit hole, it’s generally acknowledged that quantum computers and classical computers (like the one you’re using) are good at different things — to the point where in some cases, a problem that might take incalculable time on a traditional supercomputer could be done in a flash on quantum. Don’t ask me how — I said we’re not going down the hole!

One of the things quantum is potentially very good at is certain types of cryptography: It’s theorized that quantum computers could absolutely smash through many currently used encryption techniques. In the worst-case scenario, that means that if someone got hold of a large cache of encrypted data that today would be useless without the key, a future adversary may be able to force the lock. Considering how many breaches there have been where the only reason your entire life wasn’t stolen was because it was encrypted, this is a serious threat.

IBM and others are thinking ahead. Quantum computing isn’t a threat right now, right? It isn’t being seriously used by anyone, let alone hackers. But what if you buy a tape drive for long-term data storage today, and then a decade from now a hack hits and everything is exposed because it was using “industry standard” encryption?

To prevent that from happening, IBM is migrating its tape storage over to encryption algorithms that are resistant to state-of-the-art quantum decryption techniques — specifically lattice cryptography (another rabbit hole — go ahead). These devices are meant to be used for decades if possible, during which time the entire computing landscape can change. It will be hard to predict exactly what quantum methods will emerge in the future, but at the very least you can try not to be among the low-hanging fruit favored by hackers.
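For the curious, the learning-with-errors (LWE) problem that underpins much of lattice cryptography can be sketched in a few lines of Python. The parameters below are tiny and utterly insecure, and this is not what IBM ships in its firmware; it only shows why recovering a message means solving a noisy linear system, a problem believed to be hard even for quantum computers.

# Toy learning-with-errors (LWE) encryption of a single bit, in the spirit of
# Regev's scheme. Parameters are tiny and insecure; this only illustrates the
# idea behind lattice-based cryptography, not IBM's actual tape firmware.
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 3329, 16, 64          # modulus, secret dimension, number of samples

# Key generation: secret s, public key (A, b = A s + e mod q) with small noise e.
s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-2, 3, size=m)                 # small error terms
b = (A @ s + e) % q

def encrypt(bit):
    # Pick a random subset of the public samples and add bit * q/2.
    r = rng.integers(0, 2, size=m)              # 0/1 selection vector
    c1 = (r @ A) % q
    c2 = (r @ b + bit * (q // 2)) % q
    return c1, c2

def decrypt(c1, c2):
    # Without s, recovering the bit means solving a noisy linear system (hard).
    v = (c2 - c1 @ s) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
print("toy LWE round-trips both bit values")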

The tape itself is just regular tape. In fact, the whole system is pretty much the same as one you’d have bought a week ago. All the changes are in the firmware, meaning earlier drives can be retrofitted with this quantum-resistant tech.

Quantum computing may not be relevant to many applications today, but next year who knows? And in 10 years, it might be commonplace. So it behooves companies like IBM that plan to be part of the enterprise world for decades to come to plan for it today.

Aug 06, 2019

Quantum computing is coming to TC Sessions: Enterprise on Sept. 5

Here at TechCrunch, we like to think about what’s next, and there are few technologies quite as exotic and futuristic as quantum computing. After what felt like decades of being “almost there,” we now have working quantum computers that are able to run basic algorithms, even if only for a very short time. As those times increase, we’ll slowly but surely get to the point where we can realize the full potential of quantum computing.

For our TechCrunch Sessions: Enterprise event in San Francisco on September 5, we’re bringing together some of the sharpest minds from some of the leading companies in quantum computing to talk about what this technology will mean for enterprises (p.s. early-bird ticket sales end this Friday). This could, after all, be one of those technologies where early movers will gain a massive advantage over their competitors. But how do you prepare yourself for this future today, while many aspects of quantum computing are still in development?

IBM’s quantum computer demonstrated at Disrupt SF 2018

Joining us onstage will be Microsoft’s Krysta Svore, who leads the company’s quantum efforts; IBM’s Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort; and Jim Clarke, the director of quantum hardware at Intel Labs.

That’s pretty much a Who’s Who of the current state of quantum computing, even though all of these companies are at different stages of their quantum journey. IBM already has working quantum computers, Intel has built a quantum processor and is investing heavily in the technology, and Microsoft is trying a very different approach that may lead to a breakthrough in the long run but that currently keeps it from having a working machine. In the meantime, though, Microsoft has invested heavily in building the software tools for writing quantum applications.

During the panel, we’ll discuss the current state of the industry, where quantum computing can already help enterprises today and what they can do to prepare for the future. The implications of this new technology also go well beyond faster computing (for some use cases); there are also the security issues that will arise once quantum computers become widely available and current encryption methodologies become easily breakable.

The early-bird ticket discount ends this Friday, August 9. Be sure to grab your tickets to get the max $100 savings before prices go up. If you’re a startup in the enterprise space, we still have some startup demo tables available! Each demo table comes with four tickets to the show and a high-visibility exhibit space to showcase your company to attendees — learn more here.

May 21, 2019

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud-native computing world. At KubeCon, the bi-annual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.
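For a flavor of what such a common API can look like in practice, here is a hedged sketch that applies an SMI-style TrafficSplit through the official Kubernetes Python client. The field names follow the early TrafficSplit draft and may differ from the published specification, and the checkout services are made up for illustration.

# A sketch of creating an SMI-style TrafficSplit with the official Kubernetes
# Python client. Field names follow the early SMI TrafficSplit draft and may
# differ from the published spec; the service names are made up for illustration.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

traffic_split = {
    "apiVersion": "split.smi-spec.io/v1alpha1",
    "kind": "TrafficSplit",
    "metadata": {"name": "checkout-rollout", "namespace": "default"},
    "spec": {
        "service": "checkout",             # the root service clients talk to
        "backends": [                      # weighted backing services
            {"service": "checkout-v1", "weight": 90},
            {"service": "checkout-v2", "weight": 10},
        ],
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="split.smi-spec.io",
    version="v1alpha1",
    namespace="default",
    plural="trafficsplits",
    body=traffic_split,
)

Whatever mesh implements the spec underneath is then responsible for actually shifting the traffic between the two backends.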

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

Mar 18, 2019

Slack hands over control of encryption keys to regulated customers

Slack announced today that it is launching Enterprise Key Management (EKM) for Slack, a new tool that enables customers to control their encryption keys in the enterprise version of the communications app. The keys are managed in the AWS KMS key management tool.

Geoff Belknap, chief security officer (CSO) at Slack, says the new tool should appeal to customers in regulated industries who might need tighter control over security. “Markets like financial services, healthcare and government are typically underserved in terms of which collaboration tools they can use, so we wanted to design an experience that catered to their particular security needs,” Belknap told TechCrunch.

Slack currently encrypts data in transit and at rest, but the new tool augments this by giving customers greater control over the encryption keys that Slack uses to encrypt messages and files being shared inside the app.

He said that regulated industries in particular have been requesting the ability to control their own encryption keys, including the ability to revoke them if it was required for security reasons. “EKM is a key requirement for growing enterprise companies of all sizes, and was a requested feature from many of our Enterprise Grid customers. We wanted to give these customers full control over their encryption keys, and when or if they want to revoke them,” he said.
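Slack has not published code for EKM, but the customer-held key it builds on is an ordinary AWS KMS key, so the customer side of the workflow looks roughly like the boto3 sketch below. The alias name is made up, and the revocation step shown is simply KMS’s own key-disable call.

# A rough sketch of the AWS KMS side of a bring-your-own-key setup using boto3.
# Slack hasn't published code for EKM; this only shows the kind of customer-held
# key that the feature builds on. The alias name is made up for illustration.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a customer-managed key and give it a friendly alias.
key = kms.create_key(Description="Key for Slack EKM (illustration)")
key_id = key["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/slack-ekm-demo", TargetKeyId=key_id)

# Keep the key healthy: rotate it automatically once a year.
kms.enable_key_rotation(KeyId=key_id)

# "Revoking" access in a hurry: disabling the key makes anything encrypted
# under it unreadable until the key is re-enabled.
kms.disable_key(KeyId=key_id)
# kms.enable_key(KeyId=key_id)  # restore access later if appropriate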


Belknap says this is especially important when customers involve people outside the organization, such as contractors, partners or vendors in Slack communications. “A big benefit of EKM is that in the event of a security threat or if you ever experience suspicious activity, your security team can cut off access to the content at any time if necessary,” Belknap explained.

In addition to controlling the encryption keys, customers can gain greater visibility into activity inside of Slack via the Audit Logs API. “Detailed activity logs tell customers exactly when and where their data is being accessed, so they can be alerted of risks and anomalies immediately,” he said. If a customer finds suspicious activity, it can cut off access.

EKM for Slack is generally available today for Enterprise Grid customers for an additional fee. Slack, which announced plans to go public last month, has raised more than $1 billion on a $7 billion valuation.

Feb 22, 2019

Measuring Percona Server for MySQL On-Disk Decryption Overhead


Percona Server for MySQL 8.0 comes with enterprise-grade total data encryption features. However, there is always the question of how much overhead – or performance penalty – comes with the data decryption. As we saw in my networking performance post, SSL under high concurrency might be problematic. Is this the case for data decryption?

To measure any overhead, I will start with a simplified read-only workload, where data gets decrypted during read IO.

MySQL decryption schematic

During query execution, the data in memory is already decrypted so there is no additional processing time. The decryption happens only for blocks that require a read from storage.

For the benchmark I will use the following workload:

sysbench oltp_read_only --mysql-ssl=off --tables=20 --table-size=10000000 --threads=$i --time=300 --report-interval=1 --rand-type=uniform run

The data size for this workload is about 50GB, so I will use

innodb_buffer_pool_size = 5GB

to emulate heavy disk read IO during the benchmark. In the second run, I will use

innodb_buffer_pool_size = 60GB

so all data is kept in memory and there are NO disk read IO operations.

I will only use table-level encryption at this time (i.e., no encryption for the binary log, system tablespace, or redo and undo logs).
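For reference, this is roughly what enabling table-level encryption looks like from a client, sketched here with mysql-connector-python. It assumes a keyring component is already configured on the server; the table and credentials are placeholders rather than part of the benchmark setup.

# A small sketch of what "table-level encryption" means in practice, using
# mysql-connector-python. It assumes a Percona/MySQL 8.0 server with a keyring
# component already configured; table and credential values are placeholders.
import mysql.connector

cnx = mysql.connector.connect(
    host="127.0.0.1", user="sbtest", password="secret", database="sbtest"
)
cur = cnx.cursor()

# Encrypt an existing sysbench table in place...
cur.execute("ALTER TABLE sbtest1 ENCRYPTION='Y'")

# ...or create a new table that is encrypted at rest from the start.
cur.execute(
    "CREATE TABLE IF NOT EXISTS notes ("
    "  id INT PRIMARY KEY AUTO_INCREMENT,"
    "  body TEXT"
    ") ENCRYPTION='Y'"
)

cur.close()
cnx.close()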

The server I am using has AES hardware CPU acceleration. Read more at https://en.wikipedia.org/wiki/AES_instruction_set

Benchmark N1, heavy read IO


Threads | Encrypted storage | No encryption | Encryption overhead
1       | 389.11            | 423.47        | 1.09
4       | 1531.48           | 1673.2        | 1.09
16      | 5583.04           | 6055          | 1.08
32      | 8250.61           | 8479.61       | 1.03
64      | 8558.6            | 8574.43       | 1.00
96      | 8571.55           | 8577.9        | 1.00
128     | 8570.5            | 8580.68       | 1.00
256     | 8576.34           | 8585          | 1.00
512     | 8573.15           | 8573.73       | 1.00
1024    | 8570.2            | 8562.82       | 1.00
2048    | 8422.24           | 8286.65       | 0.98

Benchmark N2, data in memory, no read IO


Threads | Encryption | No encryption
1       | 578.91     | 567.65
4       | 2289.13    | 2275.12
16      | 8304.1     | 8584.06
32      | 13324.02   | 13513.39
64      | 20007.22   | 19821.48
96      | 19613.82   | 19587.56
128     | 19254.68   | 19307.82
256     | 18694.05   | 18693.93
512     | 18431.97   | 18372.13
1024    | 18571.43   | 18453.69
2048    | 18509.73   | 18332.59

Observations

For a high number of threads, there is no measurable difference between encrypted and unencrypted storage. This is because a lot of CPU resources are spent in contention and waits, so the relative time spent in decryption is negligible.

However, we can see some performance penalty for a low number of threads: up to a 9% penalty, even with hardware-accelerated decryption. When the data fully fits into memory, there is no measurable difference between encrypted and unencrypted storage.

So if you have hardware support, then you should see little impact when using storage encryption with MySQL. The easiest way to check whether you have support for this is to look at the CPU flags and search for the ‘aes’ string:

> lscpu | grep aes
Flags: ... tsc_deadline_timer aes xsave avx f16c ...

Feb 20, 2019

Why Daimler moved its big data platform to the cloud

Like virtually every big enterprise company, the German auto giant Daimler decided a few years ago to invest in its own on-premises data centers. And while those aren’t going away anytime soon, the company today announced that it has successfully moved its on-premises big data platform to Microsoft’s Azure cloud. This new platform, which the company calls eXtollo, is Daimler’s first major service to run outside of its own data centers, though it’ll probably not be the last.

As Guido Vetter, the head of Daimler’s corporate center of excellence for advanced analytics and big data, told me, the company started getting interested in big data about five years ago. “We invested in technology — the classical way, on-premise — and got a couple of people on it. And we were investigating what we could do with data because data is transforming our whole business as well,” he said.

By 2016, the size of the organization had grown to the point where a more formal structure was needed to enable the company to handle its data at a global scale. At the time, the buzz phrase was “data lakes” and the company started building its own in order to build out its analytics capacities.

Electric lineup, Daimler AG

“Sooner or later, we hit the limits as it’s not our core business to run these big environments,” Vetter said. “Flexibility and scalability are what you need for AI and advanced analytics and our whole operations are not set up for that. Our backend operations are set up for keeping a plant running and keeping everything safe and secure.” But in this new world of enterprise IT, companies need to be able to be flexible and experiment — and, if necessary, throw out failed experiments quickly.

So about a year and a half ago, Vetter’s team started the eXtollo project to bring all the company’s activities around advanced analytics, big data and artificial intelligence into the Azure Cloud, and just over two weeks ago, the team shut down its last on-premises servers after slowly turning on its solutions in Microsoft’s data centers in Europe, the U.S. and Asia. All in all, the actual transition between the on-premises data centers and the Azure cloud took about nine months. That may not seem fast, but for an enterprise project like this, that’s about as fast as it gets (and for a while, it fed all new data into both its on-premises data lake and Azure).

If you work for a startup, then all of this probably doesn’t seem like a big deal, but for a more traditional enterprise like Daimler, even just giving up control over the physical hardware where your data resides was a major culture change and something that took quite a bit of convincing. In the end, the solution came down to encryption.

“We needed the means to secure the data in the Microsoft data center with our own means that ensure that only we have access to the raw data and work with the data,” explained Vetter. In the end, the company decided to use the Azure Key Vault to manage and rotate its encryption keys. Indeed, Vetter noted that knowing that the company had full control over its own data was what allowed this project to move forward.
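Daimler has not shared its implementation, but a minimal sketch of the kind of key handling Key Vault enables, using the azure-keyvault-keys SDK, might look like the following; the vault URL and key name are placeholders.

# A minimal sketch of managing an encryption key in Azure Key Vault with the
# azure-keyvault-keys SDK. Daimler hasn't shared its implementation; the vault
# URL and key name here are placeholders for illustration.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
key_client = KeyClient(
    vault_url="https://example-vault.vault.azure.net",  # placeholder vault
    credential=credential,
)

# Create an RSA key that only the vault's access policies expose.
key = key_client.create_rsa_key("extollo-data-key", size=2048)
print(key.name, key.key_type)

# Rotation can be as simple as creating a new version of the same key;
# callers always fetch the latest version by name.
new_version = key_client.create_rsa_key("extollo-data-key", size=2048)
latest = key_client.get_key("extollo-data-key")
assert latest.properties.version == new_version.properties.version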

Vetter tells me the company obviously looked at Microsoft’s competitors as well, but he noted that his team didn’t find a compelling offer from other vendors in terms of functionality and the security features that it needed.

Today, Daimler’s big data unit uses tools like HD Insights and Azure Databricks, which cover more than 90 percent of the company’s current use cases. In the future, Vetter also wants to make it easier for less experienced users to use self-service tools to launch AI and analytics services.

While cost is often a factor that counts against the cloud, because renting server capacity isn’t cheap, Vetter argues that this move will actually save the company money and that storage costs, especially, are going to be cheaper in the cloud than in its on-premises data center (and chances are that Daimler, given its size and prestige as a customer, isn’t exactly paying the same rack rate that others are paying for the Azure services).

As with so many big data AI projects, predictions are the focus of much of what Daimler is doing. That may mean looking at a car’s data and error code and helping the technician diagnose an issue or doing predictive maintenance on a commercial vehicle. Interestingly, the company isn’t currently bringing to the cloud any of its own IoT data from its plants. That’s all managed in the company’s on-premises data centers because it wants to avoid the risk of having to shut down a plant because its tools lost the connection to a data center, for example.
