Sep
29
2018
--

What each cloud company could bring to the Pentagon’s $10 B JEDI cloud contract

The Pentagon is going to make one cloud vendor exceedingly happy when it chooses the winner of the $10 billion, ten-year enterprise cloud project dubbed the Joint Enterprise Defense Infrastructure (or JEDI for short). The contract is designed to establish the cloud technology strategy for the military over the next 10 years as it begins to take advantage of current trends like Internet of Things, artificial intelligence and big data.

Ten billion dollars spread out over ten years may not entirely alter a market that’s expected to reach $100 billion a year very soon, but it is substantial enough to give a lesser vendor much greater visibility, and possibly a deeper entrée into other government and private sector business. The cloud companies certainly recognize that.

Photo: Glowimages/Getty Images

That could explain why they are tripping over themselves to change the contract dynamics, insisting, maybe rightly, that a multi-vendor approach would make more sense.

One look at the Request for Proposal (RFP) itself, which has dozens of documents outlining various criteria from security to training to the specification of the single award itself, shows the sheer complexity of this proposal. At the heart of it is a package of classified and unclassified infrastructure, platform and support services with other components around portability. Each of the main cloud vendors we’ll explore here offers these services. They are not unusual in themselves, but they do each bring a different set of skills and experiences to bear on a project like this.

It’s worth noting that the DOD isn’t just interested in technical chops; it is also looking closely at pricing and has explicitly asked for specific discounts that would be applied to each component. The RFP process closes on October 12th and the winner is expected to be chosen next April.

Amazon

What can you say about Amazon? They are by far the dominant cloud infrastructure vendor. They have the advantage of having scored a large government contract in the past when they built the CIA’s private cloud in 2013, earning $600 million for their troubles. Amazon also offers GovCloud, the product that came out of that project, designed to host sensitive data.

Jeff Bezos, Chairman and founder of Amazon.com. Photo: Drew Angerer/Getty Images

Many of the other vendors worry that history gives Amazon a leg up on this deal. While five years is a long time, especially in technology terms, if anything, Amazon has tightened its control of the market. Heck, most of the other players were just beginning to establish their cloud businesses in 2013. Amazon, which launched in 2006, has a maturity the others lack, and it is still innovating, introducing dozens of new features every year. That makes Amazon difficult to compete with, but even the biggest player can be taken down with the right game plan.

Microsoft

If anyone can take Amazon on, it’s Microsoft. While they were somewhat late to the cloud, they have more than made up for it over the last several years. They are growing fast, yet are still far behind Amazon in terms of pure market share. Still, they have a lot to offer the Pentagon, including a combination of Azure, their cloud platform, and Office 365, the popular business suite that includes Word, PowerPoint, Excel and Outlook email. What’s more, they have a fat contract with the DOD for $900 million, signed in 2016 for Windows and related hardware.

Microsoft CEO, Satya Nadella Photo: David Paul Morris/Bloomberg via Getty Images

Azure Stack is particularly well suited to a military scenario. It’s a private cloud you can stand up in your own data center to run a mini version of the Azure public cloud, fully compatible with the public cloud in terms of APIs and tools. The company also has Azure Government Cloud, which is certified for use by many of the U.S. government’s branches, including at DOD Impact Level 5. Microsoft also brings years of experience working with large enterprises and government clients, meaning it knows how to manage a large contract like this.

Google

When we talk about the cloud, we tend to think of the Big Three. The third member of that group is Google. They have been working hard to establish their enterprise cloud business since 2015 when they brought in Diane Greene to reorganize the cloud unit and give them some enterprise cred. They still have a relatively small share of the market, but they are taking the long view, knowing that there is plenty of market left to conquer.

Head of Google Cloud, Diane Greene Photo: TechCrunch

They have taken the approach of open sourcing many of the tools they use in-house, then offering cloud versions of those same services, arguing that nobody knows how to manage large-scale operations better than they do. They have a point, and that could play well in a bid for this contract, but they also stepped away from an artificial intelligence contract with the DOD called Project Maven when a group of their employees objected. It’s not clear whether that would be held against them in the bidding process here.

IBM

IBM has been using its checkbook to build a broad platform of cloud services since 2013, when it bought SoftLayer for infrastructure services, adding software and development tools over the years and emphasizing AI, big data, security, blockchain and other services. All the while, it has been trying to take full advantage of its artificial intelligence engine, Watson.

IBM Chairman, President and CEO Ginni Rometty Photo: Ethan Miller/Getty Images

As one of the primary technology brands of the 20th century, the company has vast experience working with contracts of this scope and with large enterprise clients and governments. It’s not clear if this translates to its more recently developed cloud services, or if it has the cloud maturity of the others, especially Microsoft and Amazon. In that light, it would have its work cut out for it to win a contract like this.

Oracle

Oracle has been complaining since last spring to anyone who will listen, including reportedly the president, that the JEDI RFP is unfairly written to favor Amazon, a charge that DOD firmly denies. They have even filed a formal protest against the process itself.

That could be a smoke screen because the company was late to the cloud, took years to take it seriously as a concept, and barely registers today in terms of market share. What it does bring to the table is broad enterprise experience over decades and one of the most popular enterprise databases in the last 40 years.

Larry Ellison, chairman of Oracle. Photo: David Paul Morris/Bloomberg via Getty Images

It recently began offering a self-repairing database in the cloud that could prove attractive to the DOD, but whether its other offerings are enough to help it win this contract remains to be seen.

Sep
25
2018
--

Salesforce, AWS expand partnership with secure data sharing between platforms

Salesforce and Amazon’s cloud arm, AWS, have had a pretty close relationship for some time, signing a $400 million deal for infrastructure cloud services in 2016, but today at Dreamforce, Salesforce’s massive customer conference taking place this week in San Francisco, they took it to another level. The two companies announced they were offering a new set of data integration services between the two cloud platforms for common customers.

Matt Garman, vice president of Amazon Elastic Compute Cloud, says customers looking to transform digitally are still primarily concerned about security when moving data between cloud vendors. More specifically, they were asking for a way to move data more securely between the Salesforce and Amazon platforms. “Customers talked to us about sensitive data in Salesforce and using deep analytics and data processing on AWS and moving them back and forth in a secure way,” he said. Today’s announcements let them do that.

In practice, Salesforce customers can set up a direct connection using AWS PrivateLink to connect directly to private Salesforce APIs and move data from Salesforce to an Amazon service such as Redshift, the company’s data warehouse product, without ever exposing the data to the open internet.
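For a concrete sense of what that plumbing can look like, here is a minimal Python sketch of creating an interface endpoint with AWS PrivateLink via boto3. The VPC, subnet, security group and endpoint service names are placeholders; the real Salesforce service name would come from the integration setup, not from this code.

```python
import boto3

# Sketch: create an interface VPC endpoint (AWS PrivateLink) so traffic to
# a partner service stays on the AWS network instead of the open internet.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC
    # Hypothetical endpoint service name; the actual Salesforce service
    # name is provided as part of the integration.
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-EXAMPLE",
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```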

Further, Salesforce customers can set up Lambda functions so that when certain conditions are met in Salesforce, an action is triggered, such as moving data (or vice versa). This is commonly known as serverless computing, and developers are increasingly using event triggers to drive business processes.
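As a rough illustration of that event-trigger pattern, the Python sketch below shows a Lambda handler that copies a record to S3 when a Salesforce-side condition is met. The event shape, field names and bucket are all assumptions made for the sake of the example, not details of the announced integration.

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical event shape: a Salesforce record delivered in the
    # event's "detail" field by whatever bridge forwards the trigger.
    record = event.get("detail", {})

    # Example condition: archive closed-won opportunities for analytics.
    if record.get("StageName") == "Closed Won":
        s3.put_object(
            Bucket="example-analytics-bucket",  # placeholder bucket
            Key="opportunities/{}.json".format(record.get("Id", "unknown")),
            Body=json.dumps(record).encode("utf-8"),
        )
    return {"status": "ok"}
```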

Finally, the two companies are integrating more directly with Amazon Connect, the Amazon contact center software it launched in 2017. This is where it gets more interesting because of course Salesforce offers its own contact center services with Salesforce Service Cloud. The two companies found a way to help common customers work together here to build what they are calling AI-driven self-service applications using Amazon Connect on the Salesforce mobile Lightning development platform.

This could involve among other things, building mobile applications that take advantage of Amazon Lex, AWS’s bot building application and Salesforce Einstein, Salesforce’s artificial intelligence platform. Common customers can download the Amazon Connect CTI Adapter on the Salesforce AppExchange.
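For flavor, here is a hedged Python sketch of the Lex side of such an application: posting a user utterance to a Lex bot with boto3 and reading back the bot’s reply. The bot name and alias are hypothetical; in the actual integration, the Amazon Connect CTI Adapter handles this plumbing inside Salesforce.

```python
import boto3

# Sketch: send a user's text to an Amazon Lex (v1) bot and print the reply.
lex = boto3.client("lex-runtime", region_name="us-east-1")

reply = lex.post_text(
    botName="SupportBot",       # hypothetical bot
    botAlias="prod",            # hypothetical alias
    userId="user-1234",         # any stable per-conversation ID
    inputText="Where is my order?",
)
print(reply["message"])
```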

Make no mistake, this is a significant announcement in that it involves two of the most successful cloud companies on the planet working directly together to offer products and services that benefit their common customers. This was not lost on Bret Taylor, president and chief product officer at Salesforce. “We’re enabling something that wouldn’t have been possible. It’s really exciting because it’s something unique in the marketplace,” he said.

What’s more, it comes on the heels of yesterday’s partnership news with Apple, giving Salesforce two powerful partners to work with moving forward.

While the level of today’s news is unprecedented between the two companies, they have been working together for some time. As Garman points out, Heroku, which Salesforce bought in 2010, and Quip, which it bought in 2016, were both built on AWS from the get-go. Salesforce, which mostly runs its own data centers in the U.S., runs much of its public cloud workload on AWS, especially outside the U.S. Conversely, Amazon uses Salesforce tools internally.

Sep
05
2018
--

Elastic’s IPO filing is here

Elastic, the provider of subscription-based data search software used by Dell, Netflix, The New York Times and others, has unveiled its IPO filing after confidentially submitting paperwork to the SEC in June. The company will be the latest in a line of enterprise SaaS businesses to hit the public markets in 2018.

Headquartered in Mountain View, Elastic plans to raise $100 million in its NYSE listing, though that’s likely a placeholder amount. The timing of the filing suggests the company will transition to the public markets this fall; we’ve reached out to the company for more details. 

Elastic will trade under the symbol ESTC.

The business is known for its core product, an open-source search tool called Elasticsearch. It also offers a range of analytics and visualization tools meant to help businesses organize large data sets, competing directly with companies like Splunk and even Amazon — a name it mentions 14 times in the filing.
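For readers who haven’t used it, the gist of Elasticsearch fits in a few lines of Python with the official client. This is a minimal sketch assuming a local node on the default port; the index name and document are invented for illustration.

```python
from elasticsearch import Elasticsearch

# Sketch: index a document, then run a full-text query against it.
es = Elasticsearch(["http://localhost:9200"])

# doc_type is required by older client versions; newer ones default to _doc.
es.index(index="articles", doc_type="_doc", id=1, body={
    "title": "Elastic's IPO filing is here",
    "body": "Elastic provides subscription-based data search software.",
})
es.indices.refresh(index="articles")  # make the new document searchable

hits = es.search(index="articles", body={
    "query": {"match": {"body": "search software"}},
})
print(hits["hits"]["total"])
```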

“Amazon offers some of our open source features as part of its Amazon Web Services offering. As such, Amazon competes with us for potential customers, and while Amazon cannot provide our proprietary software, the pricing of Amazon’s offerings may limit our ability to adjust,” the company wrote in the filing, which also lists Endeca, FAST, Autonomy and several others as key competitors.

This is our first look at Elastic’s financials. The company brought in $159.9 million in revenue in the 12 months ended July 30, 2018, up roughly 81 percent from $88.1 million the year prior. Losses are growing at about the same rate. Elastic reported a net loss of $18.5 million in the second quarter of 2018. That’s an increase from $9.9 million in the same period in 2017.

Founded in 2012, the company has raised about $100 million in venture capital funding, garnering a $700 million valuation the last time it raised VC, all the way back in 2014. Its investors include Benchmark, NEA and Future Fund, which retain pre-IPO stakes of 17.8 percent, 10.2 percent and 8.2 percent, respectively.

A flurry of business software companies have opted to go public this year. Domo, a business analytics company based in Utah, went public in June raising $193 million in the process. On top of that, subscription biller Zuora had a positive debut in April in what was a “clear sign post on the road to SaaS maturation,” according to TechCrunch’s Ron Miller. DocuSign and Smartsheet are also recent examples of both high-profile and successful SaaS IPOs.

Jun
04
2018
--

How Yelp (mostly) shut down its own data centers and moved to AWS

Back in 2013, Yelp was a nine-year-old company built on a set of internal systems. It was coming to the realization that running its own data centers might not be the most efficient way to run a business that was continuing to scale rapidly. At the same time, the company understood that the tech world had changed dramatically from 2004, when it launched, and that it needed to transform its underlying technology to a more modern approach.

That’s a lot to take on in one bite, but it wasn’t something that happened willy-nilly or overnight, says Jason Fennell, SVP of engineering at Yelp. The vast majority of the company’s data was being processed in a massive Python repository that was getting bigger all the time. The conversation about shifting to a microservices architecture began in 2012.

The company was also running the massive Yelp application inside its own data centers, and as it grew it was increasingly limited by the long lead times required to procure new hardware and bring it online. It saw that this was an unsustainable situation over the long term and began a process of transforming from running a huge monolithic application on-premises to one built on microservices running in the cloud. It was quite a journey.

The data center conundrum

Fennell described the classic scenario of a company that could benefit from a shift to the cloud. Yelp had a small operations team dedicated to setting up new machines. When engineering anticipated a new resource requirement, they had to give the operations team sufficient lead time to order new servers and get them up and running, certainly not the most efficient way to deal with a resource problem, and one that would have been easily solved by the cloud.

“We kept running into a bottleneck, I was running a chunk of the search team [at the time] and I had to project capacity out to 6-9 months. Then it would take a few months to order machines and another few months to set them up,” Fennell explained. He emphasized that the team charged with getting these machines going was working hard, but there were too few people and too many demands and something had to give.

“We were on this cusp. We could have scaled up that team dramatically and gotten [better] at building data centers and buying servers and doing that really fast, but we were hearing a lot about AWS and the advantages there,” Fennell explained.

To the cloud!

They looked at the cloud market landscape in 2013 and AWS was the clear leader technologically. That meant moving some part of their operations to EC2. Unfortunately, that exposed a new problem: how to manage this new infrastructure in the cloud. This was before the notion of cloud-native computing even existed. There was no Kubernetes. Sure, Google was operating in a cloud-native fashion in-house, but it was not really an option for most companies without a huge team of engineers.

Yelp needed to explore new ways of managing operations in a hybrid cloud environment where some of the applications and data lived in the cloud and some lived in their data center. It was not an easy problem to solve in 2013 and Yelp had to be creative to make it work.

That meant remaining with one foot in the public cloud and the other in a private data center. One tool that helped ease the transition was AWS Direct Connect, which AWS had launched in 2011 and which enabled Yelp to connect directly from its data center to the cloud.

Laying the groundwork

About this time, as they were figuring out how AWS works, another revolutionary technological change was occurring when Docker emerged and began mainstreaming the notion of containerization. “That’s another thing that’s been revolutionary. We could suddenly decouple the context of the running program from the machine it’s running on. Docker gives you this container, and is much lighter weight than virtualization and running full operating systems on a machine,” Fennell explained.
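To make that decoupling concrete, here is a minimal Python sketch using the Docker SDK (docker-py): the same pinned image runs identically on whatever machine the client points at, which is the property Fennell is describing. The image and command are arbitrary examples.

```python
import docker

# Sketch: run a throwaway container; the program's environment comes from
# the image, not from whatever happens to be installed on the host.
client = docker.from_env()

output = client.containers.run(
    "python:3.6",  # example image pinned independently of the host machine
    'python -c "print(\'hello from a container\')"',
    remove=True,   # delete the container once it exits
)
print(output.decode("utf-8"))
```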

Another thing happening at the time was the emergence of Mesos, an open-source data center operating system that offered a way to treat the data center as a single pool of resources. Yelp could apply this notion wherever the data and applications lived. Mesos also came with a container orchestration tool called Marathon, in the days before Kubernetes emerged as the popular way of dealing with this same issue.

“We liked Mesos as a resource allocation framework. It abstracted away the fleet of machines. Mesos abstracts many machines and controls programs across them. Marathon holds guarantees about what containers are running where. We could stitch it all together into this clear opinionated interface,” he said.
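A hedged sketch of that interface: Marathon exposes a REST API, and asking it to keep several copies of a container running looks roughly like the Python below. The Marathon URL and image are placeholders, and this is the generic Marathon v2 API rather than Yelp’s internal PaaSTA tooling.

```python
import requests

# Sketch: ask Marathon to keep three instances of a Docker image running
# somewhere on the Mesos cluster; Marathon restarts them if they die.
app = {
    "id": "/example-service",
    "cpus": 0.5,
    "mem": 256,
    "instances": 3,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/service:1.0"},  # placeholder image
    },
}

resp = requests.post("http://marathon.example.com:8080/v2/apps", json=app)
resp.raise_for_status()
```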

Pulling it all together

While all this was happening, Yelp began exploring how to move to the cloud and use a Platform as a Service approach for the software layer. The problem was that at the time, there wasn’t really any viable way to do this. In the buy-versus-build decision-making that goes on in large transformations like this one, the company felt it had little choice but to build that platform layer itself.

In late 2013 they began to pull together the idea of building this platform on top of Mesos and Docker, giving it the name PaaSTA, an internal joke that stood for Platform as a Service, Totally Awesome. It became simply known as Pasta.

Photo: David Silverman/Getty Images

The project had the ambitious goal of making their infrastructure work as a single fabric, in a cloud-native fashion, before most anyone outside of Google was using that term. Pasta developed slowly, with the first developer piece coming online in August 2014 and the first production service later that year in December. The company open sourced the technology the following year.

“Pasta gave us the interface between the applications and development teams. Operations had to make sure Pasta is up and running, while Development was responsible for implementing containers that implemented the interface,” Fennell said.

Moving deeper into the public cloud

While Yelp was busy building these internal systems, AWS wasn’t sitting still. It was also improving its offerings with new instance types, new functionality and better APIs and tooling. Fennell reports this helped immensely as Yelp began a more complete move to the cloud.

He says there were a couple of tipping points as they moved more and more of the application to AWS — including eventually, the master database. This all happened in more recent years as they understood better how to use Pasta to control the processes wherever they lived. What’s more, he said that adoption of other AWS services was now possible due to tighter integration between the in-house data centers and AWS.

Photo: erhui1979/Getty Images

The first tipping point came around 2016 as all new services were configured for the cloud. He said they began to get much better at managing applications and infrastructure in AWS and their thinking shifted from how to migrate to AWS to how to operate and manage it.

Perhaps the biggest step in this years-long transformation came last summer, when Yelp moved its master database from its own data center to AWS. “This was the last thing we needed to move over. Otherwise it’s clean up. As of 2018, we are serving zero production traffic through physical data centers,” he said. While they still have two data centers, they are getting to the point where they have only the minimum hardware required to run the network backbone.

Fennell said that before all of this was in place, it took anywhere from two weeks to a month to get a service up and running; now it takes just a couple of minutes. He says any loss of control from moving to the cloud has been easily offset by the convenience of using cloud infrastructure. “We get to focus on the things where we add value,” he said — and that’s the goal of every company.

Apr
04
2018
--

AWS adds automated point-in-time recovery to DynamoDB

One of the joys of cloud computing is handing over your data to the cloud vendor and letting them handle the heavy lifting. Up until now that has meant they updated the software or scaled the hardware for you. Today, AWS took that to another level when it announced Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR).

With this new service, the company lets you simply enable the new backup tool, and the backup happens automatically. Amazon takes care of the rest, providing a continuous backup of all the data in your DynamoDB database.

But it doesn’t stop there; the backup system acts as a recording of sorts. You can rewind your data set to any point in time in the backup, with “per second granularity,” up to 35 days in the past. What’s more, you can access the tool from the AWS Management Console, an API call or via the AWS Command Line Interface (CLI).
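From the API side, enabling PITR and rolling a table back are each a single call. Here is a minimal boto3 sketch; the table names are placeholders, and note that a restore creates a new table rather than overwriting the original.

```python
from datetime import datetime, timedelta
import boto3

dynamodb = boto3.client("dynamodb")

# Turn on continuous backups / point-in-time recovery for a table.
dynamodb.update_continuous_backups(
    TableName="Orders",  # placeholder table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore the table as it looked ten minutes ago, into a new table.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="Orders",
    TargetTableName="Orders-restored",
    RestoreDateTime=datetime.utcnow() - timedelta(minutes=10),
)
```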

Screenshot: Amazon

“We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict,” Amazon’s Randall Hunt wrote in the blog post announcing the new feature.

If you’re concerned about the 35-day limit, you needn’t be, as the system is an adjunct to your regular on-demand backups, which you can keep for as long as you need.

Amazon’s Chief Technology Officer, Werner Vogels, who introduced the new service at the Amazon Summit in San Francisco today, said it doesn’t matter how much data you have. Even with a terabyte of data, you can make use of this service. “This is a truly powerful mechanism here,” Vogels said.

The new service is available in various regions today. You can learn about regional availability and pricing options here.

Mar
06
2017
--

Amazon’s AWS buys Thinkbox Software, maker of tools for creative professionals

It looks like AWS, Amazon’s cloud computing arm, has made another acquisition to add more productivity tools for its customers beyond basic cloud-computing services. It has picked up Thinkbox Software, which develops and sells solutions for media design and content creation aimed at people in the video and wider visual media industries. Examples of services that Thinkbox already… Read More

Jan
30
2017
--

MXNet accepted to the Apache Incubator

MXNet, Amazon Web Services’ preferred deep learning framework, was accepted to the Apache Incubator today. Admission to the incubator is the first step necessary for the open-source initiative to officially become part of the Apache Software Foundation. The Apache Software Foundation supports the efforts of thousands of developers maintaining open-source projects around the world.… Read More

Dec
01
2016
--

AWS lets developers execute Lambda functions on edge locations with Lambda@Edge

Today, Amazon said it was rolling out a tool called Lambda@Edge that allows developers to execute Lambda functions at edge locations on a content delivery network, cutting down the time it takes to process a request. With Lambda@Edge, developers can do processing at the edge locations without having to go back to the original source. These functions are able to inspect HTTP requests and take… Read More
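Lambda@Edge launched with Node.js as its runtime, so take the Python below purely as pseudocode for the idea: a viewer-request handler that inspects the incoming CloudFront request and rewrites it at the edge, with no round trip to the origin.

```python
def handler(event, context):
    # CloudFront hands the function the HTTP request being processed.
    request = event["Records"][0]["cf"]["request"]

    # Example edge logic: map directory URLs to index.html before the
    # request ever leaves the edge location.
    if request["uri"].endswith("/"):
        request["uri"] += "index.html"

    return request
```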

Dec
01
2016
--

AWS Batch simplifies batch computing in the cloud

Amazon’s new AWS Batch allows engineers to execute a series of jobs automatically, in the cloud. The tool lets you run apps and container images on whatever EC2 instances are required to accomplish a given task.
Amazon recognized that many of its customers were bootstrapping their own batch computing systems. Users were stringing together EC2 instances, containers, notifications, and… Read More
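Once a job queue and job definition exist, kicking off work is one call. A hedged boto3 sketch, with placeholder names for the queue and job definition:

```python
import boto3

batch = boto3.client("batch")

# Sketch: submit a containerized job; AWS Batch picks the EC2 capacity.
job = batch.submit_job(
    jobName="nightly-report",
    jobQueue="example-queue",           # hypothetical, created beforehand
    jobDefinition="example-job-def:1",  # hypothetical, created beforehand
    containerOverrides={"command": ["python", "report.py"]},
)
print(job["jobId"])
```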

Dec
01
2016
--

AWS Personal Health Dashboard helps developers monitor the state of their cloud apps

DevOps teams will be happy to hear that Amazon is launching its own dashboard for Amazon Web Services. Personal Health Dashboard, as the company calls it, is its latest release from the stage of re:Invent 2016 to support more advanced cloud apps monitoring. The tool puts critical infrastructure data in one place. The dashboard will automatically notify teams of failures and allow them… Read More
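The dashboard is backed by the AWS Health API, so the same data can be pulled programmatically. A minimal sketch with boto3 (note the API requires a Business or Enterprise support plan):

```python
import boto3

# Sketch: list open and upcoming AWS Health events affecting the account.
health = boto3.client("health", region_name="us-east-1")

events = health.describe_events(
    filter={"eventStatusCodes": ["open", "upcoming"]},
)
for item in events["events"]:
    print(item["service"], item["eventTypeCode"], item["statusCode"])
```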
