Aug
15
2019
--

Microsoft Azure CTO Mark Russinovich will join us for TC Sessions: Enterprise on September 5

Being the CTO of one of the three major hypercloud providers may seem like enough of a job for most people, but Mark Russinovich, the CTO of Microsoft Azure, has a few other talents in his back pocket. Russinovich, who will join us for a fireside chat at our TechCrunch Sessions: Enterprise event in San Francisco on September 5 (p.s. early-bird sale ends Friday), is also an accomplished novelist who has published four novels, all of which center on tech and cybersecurity.

At our event, though, we won’t focus on his literary accomplishments (except for maybe his books about Windows Server) as much as on the trends he’s seeing in enterprise cloud adoption. Microsoft, maybe more so than its competitors, has made enterprise customers and their needs the focus of its cloud initiatives from the outset. Today, as the majority of enterprises are looking to move at least some of their legacy workloads into the cloud, they are often stumped by the sheer complexity of that undertaking.

In our fireside chat, we’ll talk about what Microsoft is doing to reduce this complexity and how enterprises can maximize their current investments into the cloud, both for running new cloud-native applications and for bringing legacy applications into the future. We’ll also talk about new technologies that can make the move to the cloud more attractive to enterprises, including the current buzz around edge computing, IoT, AI and more.

Before joining Microsoft, Russinovich, who has a Ph.D. in computer engineering from Carnegie Mellon, was the co-founder and chief architect of Winternals Software, which Microsoft acquired in 2006. During his time at Winternals, Russinovich discovered the infamous Sony rootkit. Over his 13 years at Microsoft, he moved from Technical Fellow up to the CTO position for Azure, which continues to grow at a rapid clip as it looks to challenge AWS’s leadership in total cloud revenue.

Tomorrow, Friday, August 16 is your last day to save $100 on tickets before prices go up. Book your early-bird tickets now and keep that Benjamin in your pocket.

If you’re an early-stage startup, we only have three demo table packages left! Each demo package comes with four tickets and a great location for your company to get in front of attendees. Book your demo package today before we sell out!

Aug
01
2019
--

Microsoft Azure now lets you have a server all to yourself

Microsoft today announced the preview launch of Azure Dedicated Host, a new cloud service that will allow you to run your virtual machines on single-tenant physical servers. That means you’re not sharing any resources on that server with anybody else, and you get full control over everything that’s running on that machine.

Until now, Azure only offered isolated virtual machine sizes for two very large virtual machine types. Those are still available, but their use cases are comparatively limited next to these new hosts, which offer far more flexibility.

With this move, Microsoft is following in the footsteps of AWS, which also offers Dedicated Hosts with very similar capabilities. Google Cloud, too, offers what it calls “sole-tenant nodes.”

Azure Dedicated Host will support Windows, Linux and SQL Server virtual machines and pricing is per host, independent of the number of virtual machines you end up running on them. You can currently opt for machines with up to 48 physical cores and prices start at $4.039 per hour.

To power these machines, Microsoft is offering two different processors. Type 1 is based on the 2.3 GHz Intel Xeon E5-2673 v4, with clock speeds of up to 3.5 GHz, while Type 2 features the Intel Xeon Platinum 8168, with single-core clock speeds of up to 3.7 GHz. The available memory ranges from 144 GiB to 448 GiB. You can find more details here.

As Microsoft notes, these new dedicated hosts can help companies reach their compliance requirements for physical security, data integrity and monitoring. The dedicated hosts still share the same underlying infrastructure as any other host in the Azure data centers, but users have full control over any maintenance window that could impact their servers.

These dedicated hosts can also be grouped into larger host groups in a given Azure region, allowing you to build clusters of your own physical servers inside the Azure data center. Because you’re actually renting a physical machine, any hardware issue on that machine will impact the virtual machines you are running on it, so chances are you’ll want multiple dedicated hosts for your failover strategy anyway.
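
For anyone who wants to kick the tires, here’s a minimal sketch of provisioning a host group and a dedicated host with the azure-mgmt-compute Python SDK; the resource names, region and subscription ID are placeholders, and exact method names can differ between SDK versions.

```python
# Hypothetical names throughout; a sketch, not Microsoft's reference code.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A host group pins your dedicated hosts to fault domains within a region,
# which is the building block for the failover strategy described above.
client.dedicated_host_groups.create_or_update(
    "demo-rg", "demo-host-group",
    {"location": "eastus", "platform_fault_domain_count": 2},
)

# Each host is a whole physical server; VMs created later can target it.
poller = client.dedicated_hosts.begin_create_or_update(
    "demo-rg", "demo-host-group", "host-1",
    {
        "location": "eastus",
        "platform_fault_domain": 0,
        "sku": {"name": "DSv3-Type1"},  # Type 1 hardware (Xeon E5-2673 v4)
    },
)
print(poller.result().provisioning_state)
```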

May
09
2019
--

AWS remains in firm control of the cloud infrastructure market

It has to be a bit depressing to be in the cloud infrastructure business if your name isn’t Amazon. Sure, there’s a huge, growing market, and the companies chasing Amazon are growing even faster. Yet it seems that no matter how fast they grow, Amazon remains a dot on the horizon.

It seems inconceivable that AWS can continue to hold sway over such a large market for so long, but as we’ve pointed out before, it has been able to maintain its position through a true first-mover advantage. The other players didn’t even show up until several years after Amazon launched its first service in 2006, and they are paying the price for failing to see, as Amazon did, the way computing would change.

They certainly see it now, whether it’s IBM, Microsoft or Google, or Tencent and Alibaba, both of which are growing fast in the China/Asia markets. All of these companies are trying to find the formula to help differentiate themselves from AWS and give them some additional market traction.

Cloud market growth

Interestingly, even though companies have begun to move to the cloud with increasing urgency, the pace of growth slowed a bit in the first quarter, to a 42 percent rate, according to data from Synergy Research. That doesn’t mean the end of this growth cycle is anywhere close, though.

May
02
2019
--

Microsoft launches a drag-and-drop machine learning tool

Microsoft today announced three new services that all aim to simplify the process of machine learning. These range from a new interface for a tool that completely automates the process of creating models, to a new no-code visual interface for building, training and deploying models, all the way to hosted Jupyter-style notebooks for advanced users.

Getting started with machine learning is hard. Even running the most basic of experiments takes a good amount of expertise. All of these new tools greatly simplify the process by hiding away the code or by giving those who want to write their own code a pre-configured platform for doing so.

The new interface for Azure’s automated machine learning tool makes creating a model as easy as importing a data set and telling the service which value to predict. Users don’t need to write a single line of code, while on the back end, this updated version now supports a number of new algorithms and optimizations that should result in more accurate models. While most of this is automated, Microsoft stresses that the service provides “complete transparency into algorithms, so developers and data scientists can manually override and control the process.”
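
For a sense of what that looks like in code rather than in the new interface, here’s a hedged sketch driving the underlying automated ML service from the v1 azureml-sdk; the workspace config, dataset name and label column are hypothetical.

```python
# A sketch of Azure automated ML: hand over a data set, name the value to
# predict, and let the service search for a model. Names are hypothetical.
from azureml.core import Workspace, Dataset, Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()  # reads config.json exported from the portal
train = Dataset.get_by_name(ws, "customer-churn")

automl_config = AutoMLConfig(
    task="classification",
    training_data=train,
    label_column_name="churned",     # the value to predict
    primary_metric="AUC_weighted",
    iterations=20,                   # try 20 algorithm/feature combinations
)

run = Experiment(ws, "automl-demo").submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()  # inspect or override the winner
```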

For those who want a bit more control from the get-go, Microsoft also today launched a visual interface for its Azure Machine Learning service into preview that will allow developers to build, train and deploy machine learning models without having to touch any code.

This tool, the Azure Machine Learning visual interface, looks suspiciously like the existing Azure ML Studio, Microsoft’s first stab at building a visual machine learning tool. Indeed, the two services look identical. The company never really pushed this service, though, and almost seemed to have forgotten about it, despite the fact that it always seemed like a really useful tool for getting started with machine learning.

Microsoft says that this new version combines the best of Azure ML Studio with the Azure Machine Learning service. In practice, this means that while the interface is almost identical, the Azure Machine Learning visual interface extends what was possible with ML Studio by running on top of the Azure Machine Learning service and adding that service’s security, deployment and lifecycle management capabilities.

The service provides an easy interface for cleaning up your data, training models with the help of different algorithms, evaluating them and, finally, putting them into production.

While these first two services clearly target novices, the new hosted notebooks in Azure Machine Learning are geared toward the more experienced machine learning practitioner. The notebooks come pre-packaged with support for the Azure Machine Learning Python SDK and run in what the company describes as a “secure, enterprise-ready environment.” While using these notebooks isn’t trivial either, this new feature allows developers to quickly get started without the hassle of setting up a new development environment with all the necessary cloud resources.
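
As a taste of what the pre-installed SDK enables from one of those notebooks, here’s a hedged sketch that submits a user’s own training script to compute attached to the workspace; the script name and compute target are hypothetical.

```python
# From a hosted notebook, the v1 SDK can ship your own code to cloud compute
# without any local environment setup. Names here are hypothetical.
from azureml.core import Workspace, Experiment, ScriptRunConfig

ws = Workspace.from_config()  # pre-wired in the hosted notebook environment
src = ScriptRunConfig(
    source_directory=".",
    script="train.py",             # your own code, your own framework
    compute_target="gpu-cluster",  # a compute cluster defined in the workspace
)

run = Experiment(ws, "notebook-demo").submit(src)
run.wait_for_completion(show_output=True)  # stream logs back into the notebook
```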

May
02
2019
--

Microsoft brings Azure SQL Database to the edge (and Arm)

Microsoft today announced an interesting update to its database lineup with the preview of Azure SQL Database Edge, a new tool that brings the same database engine that powers Azure SQL Database in the cloud to edge computing devices, including, for the first time, Arm-based machines.

Azure SQL Database Edge, Azure corporate vice president Julia White writes in today’s announcement, “brings to the edge the same performant, secure and easy to manage SQL engine that our customers love in Azure SQL Database and SQL Server.”

The new service, which will also run on x64-based devices and edge gateways, promises to bring low-latency analytics to edge devices as it allows users to work with streaming data and time-series data, combined with the built-in machine learning capabilities of Azure SQL Database. Like its larger brethren, Azure SQL Database Edge will also support graph data and comes with the same security and encryption features that can, for example, protect the data at rest and in motion, something that’s especially important for an edge device.
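
To make the “same engine” point concrete, here is a hedged sketch of the kind of time-series query an application on the device could run over the standard SQL Server protocol via pyodbc; the server address, credentials and table are invented for illustration.

```python
# A sketch only: query minute-by-minute sensor aggregates from a SQL engine
# running on an edge device. Connection details and schema are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=edge-gateway.local,1433;DATABASE=telemetry;"
    "UID=sa;PWD=<password>"
)

rows = conn.cursor().execute("""
    SELECT DATEADD(minute, DATEDIFF(minute, 0, reading_time), 0) AS minute,
           AVG(temperature) AS avg_temp
    FROM sensor_readings
    WHERE reading_time > DATEADD(hour, -1, SYSUTCDATETIME())
    GROUP BY DATEADD(minute, DATEDIFF(minute, 0, reading_time), 0)
    ORDER BY minute
""").fetchall()

for minute, avg_temp in rows:
    print(minute, round(avg_temp, 2))
```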

As White rightly notes, this also ensures that developers only have to write an application once and then deploy it to platforms that feature Azure SQL Database, good old SQL Server on premises and this new edge version.

SQL Database Edge can run both connected and fully disconnected, something that’s important for the many use cases where connectivity isn’t a given, yet where users need data analytics capabilities to keep their businesses (or drilling platforms, or cruise ships) running.

Apr
11
2019
--

Much to Oracle’s chagrin, Pentagon names Microsoft and Amazon as $10B JEDI cloud contract finalists

Yesterday, the Pentagon announced two finalists in the $10 billion, decade-long JEDI cloud contract process — and Oracle was not one of them. In spite of lawsuits, official protests and even back-channel complaining to the president, the two finalists are Microsoft and Amazon.

“After evaluating all of the proposals received, the Department of Defense has made a competitive range determination for the Joint Enterprise Defense Infrastructure Cloud request for proposals, in accordance with all applicable laws and regulations. The two companies within the competitive range will participate further in the procurement process,” Elissa Smith, DoD spokesperson for Public Affairs Operations told TechCrunch. She added that those two finalists were in fact Microsoft and Amazon Web Services (AWS, the cloud computing arm of Amazon).

This contract procurement process has caught the attention of the cloud computing market for a number of reasons. For starters, it’s a large amount of money, but perhaps the biggest reason it had cloud companies going nuts was that it is a winner-take-all proposition.

It is important to keep in mind that whether it’s Microsoft or Amazon that is ultimately chosen for this contract, the winner may never see $10 billion, and it may not last 10 years, because there are a number of points where the DoD could back out —  but the idea of a single winner has been irksome for participants in the process from the start.

Over the course of the last year, Google dropped out of the running, while IBM and Oracle have been complaining to anyone who will listen that the contract unfairly favored Amazon. Others have questioned the wisdom of even going with a single-vendor approach. Even at $10 billion, an astronomical sum to be sure, we have pointed out that in the scheme of the cloud business, it’s not all that much money — but there is more at stake here than money.

There is a belief here that the winner could have an upper hand in other government contracts, that this is an entrée into a much bigger pot of money. After all, if you are building the cloud for the Department of Defense and preparing it for a modern approach to computing in a highly secure way, you would be in a pretty good position to argue for other contracts with similar requirements.

In the end, in spite of the protests of the other companies involved, the Pentagon probably got this right. The two finalists are the most qualified to carry out the contract’s requirements. They are the top two cloud infrastructure vendors on the market, although Microsoft is far behind with around 13 or 14 percent market share. Amazon is far ahead, with around 33 percent, according to several companies that track such things.

Microsoft in particular has tools and resources that would be very appealing, especially Azure Stack — a mini private version of Azure that you can stand up anywhere, an approach with obvious appeal for the military — but both companies have experience with government contracts, and both bring strengths and weaknesses to the table. It will undoubtedly be a tough decision.

In February, the contract drama took yet another turn when the department reported it was investigating new evidence of conflict of interest by a former Amazon employee who was involved in the RFP process for a time before returning to the company. Smith reports that the department found no such conflict, though it is still looking into some potential ethical violations.

“The department’s investigation has determined that there is no adverse impact on the integrity of the acquisition process. However, the investigation also uncovered potential ethical violations, which have been further referred to DOD IG,” Smith explained.

The DoD is supposed to announce the winner this month, but the drama has continued non-stop.

Feb
20
2019
--

Why Daimler moved its big data platform to the cloud

Like virtually every big enterprise company, a few years ago, the German auto giant Daimler decided to invest in its own on-premises data centers. And while those aren’t going away anytime soon, the company today announced that it has successfully moved its on-premises big data platform to Microsoft’s Azure cloud. This new platform, which the company calls eXtollo, is Daimler’s first major service to run outside of its own data centers, though it’ll probably not be the last.

As Guido Vetter, the head of Daimler’s corporate center of excellence for advanced analytics and big data, told me, the company started getting interested in big data about five years ago. “We invested in technology — the classical way, on-premise — and got a couple of people on it. And we were investigating what we could do with data because data is transforming our whole business as well,” he said.

By 2016, the size of the organization had grown to the point where a more formal structure was needed to enable the company to handle its data at a global scale. At the time, the buzz phrase was “data lakes” and the company started building its own in order to build out its analytics capacities.

Electric lineup, Daimler AG

“Sooner or later, we hit the limits as it’s not our core business to run these big environments,” Vetter said. “Flexibility and scalability are what you need for AI and advanced analytics and our whole operations are not set up for that. Our backend operations are set up for keeping a plant running and keeping everything safe and secure.” But in this new world of enterprise IT, companies need to be able to be flexible and experiment — and, if necessary, throw out failed experiments quickly.

So about a year and a half ago, Vetter’s team started the eXtollo project to bring all the company’s activities around advanced analytics, big data and artificial intelligence into the Azure Cloud, and just over two weeks ago, the team shut down its last on-premises servers after slowly turning on its solutions in Microsoft’s data centers in Europe, the U.S. and Asia. All in all, the actual transition between the on-premises data centers and the Azure cloud took about nine months. That may not seem fast, but for an enterprise project like this, that’s about as fast as it gets (and for a while, it fed all new data into both its on-premises data lake and Azure).

If you work for a startup, then all of this probably doesn’t seem like a big deal, but for a more traditional enterprise like Daimler, even just giving up control over the physical hardware where your data resides was a major culture change and something that took quite a bit of convincing. In the end, the solution came down to encryption.

“We needed the means to secure the data in the Microsoft data center with our own means that ensure that only we have access to the raw data and work with the data,” explained Vetter. In the end, the company decided to use the Azure Key Vault to manage and rotate its encryption keys. Indeed, Vetter noted that knowing that the company had full control over its own data was what allowed this project to move forward.
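
As an illustration of what that key control can look like in practice, here’s a minimal sketch using the azure-keyvault-keys Python SDK; the vault URL and key name are hypothetical, and on-demand rotation via rotate_key needs a sufficiently recent SDK and service version.

```python
# A sketch of customer-managed keys: create an RSA key in Key Vault and
# rotate it on demand. Vault URL and key name are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://extollo-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

key = client.create_rsa_key("data-lake-cmk", size=2048)
print(key.id)  # referenced from the storage account's encryption settings

rotated = client.rotate_key("data-lake-cmk")  # new key version on demand
print(rotated.properties.version)
```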

Vetter tells me the company obviously looked at Microsoft’s competitors as well, but he noted that his team didn’t find a compelling offer from other vendors in terms of functionality and the security features that it needed.

Today, Daimler’s big data unit uses tools like HDInsight and Azure Databricks, which cover more than 90 percent of the company’s current use cases. In the future, Vetter also wants to make it easier for less experienced users to use self-service tools to launch AI and analytics services.

While cost is often a factor that counts against the cloud, because renting server capacity isn’t cheap, Vetter argues that this move will actually save the company money and that storage costs, especially, are going to be cheaper in the cloud than in its on-premises data center (and chances are that Daimler, given its size and prestige as a customer, isn’t exactly paying the same rack rate that others are paying for the Azure services).

As with so many big data AI projects, predictions are the focus of much of what Daimler is doing. That may mean looking at a car’s data and error code and helping the technician diagnose an issue or doing predictive maintenance on a commercial vehicle. Interestingly, the company isn’t currently bringing to the cloud any of its own IoT data from its plants. That’s all managed in the company’s on-premises data centers because it wants to avoid the risk of having to shut down a plant because its tools lost the connection to a data center, for example.

Feb
07
2019
--

Microsoft Azure sets its sights on more analytics workloads

Enterprises now amass huge amounts of data, both from their own tools and applications, as well as from the SaaS applications they use. For a long time, that data was basically exhaust. Maybe it was stored for a while to fulfill some legal requirements, but then it was discarded. Now, data is what drives machine learning models, and the more data you have, the better. It’s maybe no surprise, then, that the big cloud vendors started investing in data warehouses and lakes early on. But that’s just a first step. After that, you also need the analytics tools to make all of this data useful.

Today, it’s Microsoft’s turn to shine the spotlight on its data analytics services. The actual news here is pretty straightforward. Two services are moving into general availability: the second generation of Azure Data Lake Storage, for big data analytics workloads, and Azure Data Explorer, a managed service that makes ad-hoc analysis of massive data volumes easier. Microsoft is also previewing a new feature in Azure Data Factory, its graphical no-code service for building data transformations: the ability to map data flows.
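
For a flavor of the kind of ad-hoc analysis Azure Data Explorer is built for, here’s a hedged sketch using the azure-kusto-data Python client; the cluster URL, database and table are hypothetical.

```python
# A sketch of an ad-hoc Kusto (KQL) query: count errors per hour over the
# last day. Cluster, database and table names are hypothetical.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westus.kusto.windows.net"
)
client = KustoClient(kcsb)

response = client.execute("telemetry", """
    Events
    | where Timestamp > ago(1d) and Level == "Error"
    | summarize errors = count() by bin(Timestamp, 1h)
    | order by Timestamp asc
""")

for row in response.primary_results[0]:
    print(row["Timestamp"], row["errors"])
```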

Those individual news pieces are interesting if you are a user or are considering Azure for your big data workloads, but what’s maybe more important here is that Microsoft is trying to offer a comprehensive set of tools for managing and storing this data — and then using it for building analytics and AI services.

(Photo credit: Josh Edelson/AFP/Getty Images)

“AI is a top priority for every company around the globe,” Julia White, Microsoft’s corporate VP for Azure, told me. “And as we are working with our customers on AI, it becomes clear that their analytics often aren’t good enough for building an AI platform.” These companies are generating plenty of data, which then has to be pulled into analytics systems. She stressed that she couldn’t remember a customer conversation in recent months that didn’t focus on AI. “There is urgency to get to the AI dream,” White said, but the growth and variety of data presents a major challenge for many enterprises. “They thought this was a technology that was separate from their core systems. Now it’s expected for both customer-facing and line-of-business applications.”

Data Lake Storage helps with managing this variety of data since it can handle both structured and unstructured data (and is optimized for the Spark and Hadoop analytics engines). The service can ingest any kind of data — yet Microsoft still promises that it will be very fast. “The world of analytics tended to be defined by having to decide upfront and then building rigid structures around it to get the performance you wanted,” explained White. Data Lake Storage, on the other hand, wants to offer the best of both worlds.
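
To make that concrete, here’s a hedged sketch with the azure-storage-file-datalake Python SDK, in which one account holds raw unstructured files next to structured, Spark-friendly output; the account, filesystem and paths are hypothetical.

```python
# A sketch only: land a raw log file and list curated Parquet output in the
# same Data Lake Storage Gen2 account. All names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("analytics")

# Unstructured: drop a raw application log into the lake.
raw = fs.get_file_client("raw/2019-02-07/app.log")
raw.upload_data(b"...log lines...", overwrite=True)

# Structured: the same hierarchy holds Spark-friendly Parquet output.
for item in fs.get_paths(path="curated/parquet"):
    print(item.name, item.content_length)
```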

Likewise, White argued that while many enterprises used to keep these services on their on-premises servers, many of those systems are still appliance-based. But she believes the cloud has now reached the point where the price/performance calculations are in its favor. It took a while to get to this point, though, and to convince enterprises. White noted that for the longest time, enterprises saw analytics as $300 million projects that took forever, tied up lots of people and were frankly a bit scary. “But also, what we had to offer in the cloud hasn’t been amazing until some of the recent work,” she said. “We’ve been on a journey — as well as the other cloud vendors — and the price performance is now compelling.” And it sure helps that if enterprises want to meet their AI goals, they’ll now have to tackle these workloads, too.

Jan
17
2019
--

Former Facebook engineer picks up $15M for AI platform Spell

In 2016, Serkan Piantino packed up his desk at Facebook with hopes of moving on to something new. The former director of engineering for Facebook AI Research had every intention of continuing to work on AI, but he quickly ran into a huge problem.

Unless you’re under the umbrella of one of these big tech companies like Facebook, it can be very difficult and incredibly expensive to get your hands on the hardware necessary to run machine learning experiments.

So he built Spell, which today received $15 million in Series A funding led by Eclipse Ventures and Two Sigma Ventures.

Spell is a collaborative platform that lets anyone run machine learning experiments. The company connects clients with the best, newest hardware hosted by Google, AWS and Microsoft Azure and gives them the software interface they need to run, collaborate and build with AI.

“We spent decades getting to a laptop powerful enough to develop a mobile app or a website, but we’re struggling with things we develop in AI that we haven’t struggled with since the ’70s,” said Piantino. “Before PCs existed, the computers filled the whole room at a university or NASA and people used terminals to log into a single mainframe. It’s why Unix was invented, and that’s kind of what AI needs right now.”

In a meeting with Piantino this week, TechCrunch got a peek at the product. First, Piantino pulled out his MacBook and opened up Terminal. He began to run his own code against MNIST, which is a database of handwritten digits commonly used to train image detection algorithms.

He started the program and then moved over to the Spell platform. While the original program was just getting started, Spell’s cloud computing platform had completed the test in less than a minute.
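
TechCrunch didn’t see the code itself, so the snippet below is only a plausible stand-in for that kind of MNIST training job, sketched with Keras; on a laptop CPU it takes minutes, which is exactly the gap a cloud GPU platform closes.

```python
# An illustrative MNIST classifier, not the demo's actual code.
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),  # one class per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))  # [loss, accuracy] on held-out digits
```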

The advantage here is obvious. Engineers who want to work on AI, either on their own or for a company, have a huge task in front of them. They essentially have to build their own computer, complete with the high-powered GPUs necessary to run their tests.

With Spell, the newest GPUs from Nvidia and Google are available for virtually anyone to run their tests on.

Individual users can get on for free, specify the type of GPU they need to compute their experiment and simply let it run. Corporate users, on the other hand, are able to view the runs taking place on Spell and compare experiments, allowing users to collaborate on their projects from within the platform.

Enterprise clients can set up their own cluster, and keep all of their programs private on the Spell platform, rather than running tests on the public cluster.

Spell also offers enterprise customers a “spell hyper” command that offers built-in support for hyperparameter optimization. Folks can track their models and results and deploy them to Kubernetes/Kubeflow in a single click.
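
The article doesn’t document the “spell hyper” syntax itself, so as a stand-in, this scikit-learn snippet shows what hyperparameter optimization does under the hood: train one model per parameter combination and keep the best.

```python
# Illustrative only; this is scikit-learn's grid search, not Spell's command.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.001, 0.0001]},
    cv=3,  # 3-fold cross-validation per combination
)
search.fit(X, y)  # fits one model per parameter combination
print(search.best_params_, search.best_score_)
```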

But perhaps most importantly, Spell allows an organization to instantly transform their model into an API that can be used more broadly throughout the organization, or used directly within an app or website.

The implications here are huge. Small companies and startups looking to get into AI now have a much lower barrier to entry, whereas large traditional companies can build out their own proprietary machine learning algorithms for use within the organization without an outrageous upfront investment.

Individual users can get on the platform for free, whereas enterprise clients can get started at $99 per month, per concurrent host. Piantino explains that Spell charges based on concurrent usage, so if a customer has 10 concurrent things running, the company considers that the “size” of the Spell cluster and charges based on that.
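
To make the concurrent-usage idea concrete, here’s a small, purely illustrative sketch of computing peak concurrency from run start/end times; Spell’s actual billing logic isn’t public in this piece.

```python
# Illustrative only: the "size" of a cluster as the peak number of runs
# active at the same moment, found with a simple sweep over events.
def peak_concurrency(runs):
    """runs: list of (start, end) timestamps; returns max simultaneous runs."""
    events = [(s, 1) for s, _ in runs] + [(e, -1) for _, e in runs]
    active = peak = 0
    for _, delta in sorted(events):  # ends sort before starts at equal times
        active += delta
        peak = max(peak, active)
    return peak

runs = [(0, 5), (1, 3), (2, 8), (9, 10)]  # three runs overlap at t=2
print(peak_concurrency(runs))  # 3 -> billed as a three-host cluster
```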

Piantino sees Spell’s model as the key to defensibility. Whereas many cloud platforms try to lock customers into their entire suite of products, Spell works with any language framework and lets users plug and play on the platforms of their choice by simply commodifying the hardware. In fact, Spell doesn’t even share with clients which cloud cluster (Microsoft Azure, Google or AWS) they’re on.

So on the one hand, the speed of the tests themselves goes up thanks to access to new hardware; on the other, because Spell is an agnostic platform, there is also a huge advantage in how quickly one can get set up and start working.

The company plans to use the funding to further grow the team and the product, and Piantino says he has his eye out for top-tier engineering talent, as well as a designer.

Sep
29
2018
--

What each cloud company could bring to the Pentagon’s $10B JEDI cloud contract

The Pentagon is going to make one cloud vendor exceedingly happy when it chooses the winner of the $10 billion, ten-year enterprise cloud project dubbed the Joint Enterprise Defense Infrastructure (or JEDI for short). The contract is designed to establish the cloud technology strategy for the military over the next 10 years as it begins to take advantage of current trends like Internet of Things, artificial intelligence and big data.

Ten billion dollars spread out over ten years may not entirely alter a market that’s expected to reach $100 billion a year very soon, but it is substantial enough to give a lesser vendor much greater visibility, and possibly a deeper entrée into other government and private sector business. The cloud companies certainly recognize that.

Photo: Glowimages/Getty Images

That could explain why they are tripping over themselves to change the contract dynamics, insisting, maybe rightly, that a multi-vendor approach would make more sense.

One look at the Request for Proposal (RFP) itself, which has dozens of documents outlining various criteria from security to training to the specification of the single award itself, shows the sheer complexity of this proposal. At the heart of it is a package of classified and unclassified infrastructure, platform and support services with other components around portability. Each of the main cloud vendors we’ll explore here offers these services. They are not unusual in themselves, but they do each bring a different set of skills and experiences to bear on a project like this.

It’s worth noting that the DOD isn’t just interested in technical chops; it is also looking closely at pricing and has explicitly asked for specific discounts to be applied to each component. The RFP process closes on October 12th and the winner is expected to be chosen next April.

Amazon

What can you say about Amazon? They are by far the dominant cloud infrastructure vendor. They have the advantage of having scored a large government contract in the past, when they built the CIA’s private cloud in 2013, earning $600 million for their troubles. Amazon also offers GovCloud, the product that came out of that project, designed to host sensitive data.

Jeff Bezos, Chairman and founder of Amazon.com. Photo: Drew Angerer/Getty Images

Many of the other vendors worry that this gives Amazon a leg up on this deal. While five years is a long time, especially in technology terms, if anything, Amazon has tightened its control of the market. Heck, most of the other players were just beginning to establish their cloud businesses in 2013. Amazon, which launched in 2006, has a maturity the others lack, and they are still innovating, introducing dozens of new features every year. That makes them difficult to compete with, but even the biggest player can be taken down with the right game plan.

Microsoft

If anyone can take Amazon on, it’s Microsoft. While they were somewhat late to the cloud, they have more than made up for it over the last several years. They are growing fast, yet are still far behind Amazon in terms of pure market share. Still, they have a lot to offer the Pentagon, including a combination of Azure, their cloud platform, and Office 365, the popular business suite that includes Word, PowerPoint, Excel and Outlook email. What’s more, they have a fat $900 million contract with the DOD, signed in 2016, for Windows and related hardware.

Microsoft CEO, Satya Nadella Photo: David Paul Morris/Bloomberg via Getty Images

Azure Stack is particularly well suited to a military scenario. It’s a private cloud you can stand up anywhere to run a mini version of the Azure public cloud, fully compatible with the public cloud’s APIs and tools. The company also has Azure Government Cloud, which is certified for use by many of the U.S. government’s branches, including DOD Level 5. Microsoft brings a lot of experience working with large enterprises and government clients over the years, meaning it knows how to manage a large contract like this.

Google

When we talk about the cloud, we tend to think of the Big Three. The third member of that group is Google. They have been working hard to establish their enterprise cloud business since 2015 when they brought in Diane Greene to reorganize the cloud unit and give them some enterprise cred. They still have a relatively small share of the market, but they are taking the long view, knowing that there is plenty of market left to conquer.

Head of Google Cloud, Diane Greene Photo: TechCrunch

They have taken an approach of open sourcing a lot of the tools they used in-house, then offering cloud versions of those same services, arguing that who knows better how to manage large-scale operations than they do. They have a point, and that could play well in a bid for this contract, but they also stepped away from an artificial intelligence contract with DOD called Project Maven when a group of their employees objected. It’s not clear if that would be held against them or not in the bidding process here.

IBM

IBM has been using its checkbook to build a broad platform of cloud services since 2013, when it bought SoftLayer to give it infrastructure services, while adding software and development tools over the years and emphasizing AI, big data, security, blockchain and other services. All the while, it has been trying to take full advantage of its artificial intelligence engine, Watson.

IBM Chairman, President and CEO Ginni Rometty Photo: Ethan Miller/Getty Images

As one of the primary technology brands of the 20th century, the company has vast experience working with contracts of this scope and with large enterprise clients and governments. It’s not clear if this translates to its more recently developed cloud services, or if it has the cloud maturity of the others, especially Microsoft and Amazon. In that light, it would have its work cut out for it to win a contract like this.

Oracle

Oracle has been complaining since last spring to anyone who will listen, reportedly including the president, that the JEDI RFP is unfairly written to favor Amazon, a charge that the DOD firmly denies. They have even filed a formal protest against the process itself.

That could be a smokescreen, because the company was late to the cloud, took years to take the concept seriously and barely registers today in terms of market share. What it does bring to the table is broad enterprise experience over decades and one of the most popular enterprise databases of the last 40 years.

Larry Ellison, chairman of Oracle. Photo: David Paul Morris/Bloomberg via Getty Images

It recently began offering a self-repairing database in the cloud that could prove attractive to the DOD, but whether its other offerings are enough to help it win this contract remains to be seen.
