Dec 12, 2018

Oracle is suing the US government over $10B Pentagon JEDI cloud contract process

Oracle filed suit in federal court last week, alleging yet again that the decade-long, $10 billion Pentagon JEDI contract, with its single-vendor award, is unfair and illegal. The complaint was filed under seal at Oracle's request; a redacted version is available in the public record.

If all of this sounds familiar, it’s because it’s the same argument the company used when it filed a similar complaint with the Government Accountability Office (GAO) last August. The GAO ruled against Oracle last month stating, “…the Defense Department’s decision to pursue a single-award approach to obtain these cloud services is consistent with applicable statutes (and regulations) because the agency reasonably determined that a single-award approach is in the government’s best interests for various reasons, including national security concerns, as the statute allows.”

That hasn’t stopped Oracle from trying one more time, filing suit in the United States Court of Federal Claims and alleging much the same thing it did with the GAO: that the process was unfair and violated federal procurement law.

Oracle Senior Vice President Ken Glueck reiterated this point in a statement to TechCrunch. “The technology industry is innovating around next generation cloud at an unprecedented pace and JEDI as currently envisioned virtually assures DoD will be locked into legacy cloud for a decade or more. The single-award approach is contrary to well established procurement requirements and is out of sync with industry’s multi-cloud strategy, which promotes constant competition, fosters rapid innovation and lowers prices,” he said, echoing the language in the complaint.

The JEDI contract process is about determining the cloud strategy for the Department of Defense for the next decade. It's important to point out, though, that even though it is framed as a 10-year contract, it has been designed with several opt-out points for the DOD: an initial two-year option, two three-year options and a final two-year option, leaving open the possibility that it might never run the full 10 years.

Oracle has complained for months that it believes the contract has been written to favor the industry leader, Amazon Web Services. Company co-CEO Safra Catz even complained directly to the president in April, before the RFP process had even started. IBM filed a similar protest in October, citing many of the same arguments. Oracle's federal court complaint cites the IBM protest, as well as language from other bidders, including Google (which has since withdrawn from the process) and Microsoft, that supports its point that a multi-vendor solution would make more sense.

The Department of Justice, which represents the U.S. government in the complaint, declined to comment.

The DOD also indicated it wouldn’t comment on pending litigation, but in September spokesperson Heather Babb told TechCrunch that the contract RFP was not written to favor any vendor in advance. “The JEDI Cloud final RFP reflects the unique and critical needs of DOD, employing the best practices of competitive pricing and security. No vendors have been pre-selected,” she said at the time.

That hasn’t stopped Oracle from continually complaining about the process to whoever would listen. This time it has literally made a federal case out of it. The lawsuit is only the latest move by the company, and it's worth pointing out that while the RFP process closed in October, a winner won't be chosen until April. In other words, Oracle appears to be assuming it will lose before the vendor selection process is even completed.

Nov 29, 2018

New AWS tool helps customers understand best cloud practices

Since 2015, AWS has had a team of solution architects working with customers to make sure they are using AWS services in a way that meets best practices around a set of defined criteria. Today, the company announced a new Well-Architected Tool that helps customers do this themselves in an automated way, without the help of a human consultant.

As Amazon CTO Werner Vogels said in his keynote address at AWS re:Invent in Las Vegas, it’s hard to scale a human team inside the company to meet the needs of thousands of customers, especially when so many want to be sure they are complying with these best practices. He indicated that they even brought on a network of certified partners to help, but it still has not been enough to meet demand.

In typical AWS fashion, they decided to create a service to help customers measure how well they are doing in terms of operations, security, reliability, cost optimization and performance efficiency. Customers can run this tool against the AWS services they are using and get a full report of how they measure up against these five factors.

“I think of it as a way to make sure that you are using the cloud right, and that you are using it well,” Jeff Barr wrote in a blog post introducing the new service.

Instead of working with a human to analyze your systems, you answer a series of questions, and the tool generates a report based on those answers. When the process is complete, you get a PDF report with all the recommendations for your particular situation.
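At launch the review is driven through the console, but for a sense of how a programmatic review might look, here is a minimal sketch assuming the Well-Architected Tool API that AWS has since exposed in boto3 as "wellarchitected"; the workload and field names are illustrative, not drawn from the article.

```python
# Minimal sketch: list existing Well-Architected workloads and print how many
# risks were flagged per pillar (operations, security, reliability, etc.).
# Assumes the boto3 "wellarchitected" client; field names are illustrative.
import boto3

wa = boto3.client("wellarchitected")

for workload in wa.list_workloads()["WorkloadSummaries"]:
    review = wa.get_lens_review(
        WorkloadId=workload["WorkloadId"],
        LensAlias="wellarchitected",   # the standard Well-Architected lens
    )
    for pillar in review["LensReview"]["PillarReviewSummaries"]:
        print(workload["WorkloadName"], pillar["PillarName"], pillar["RiskCounts"])
```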

Image: AWS

While it’s doubtful that such an approach can be as comprehensive as a conversation between client and consultant, it is a starting point to at least get you thinking about these things, and as a free service, you have little to lose by trying the tool and seeing what it tells you.


Nov 29, 2018

AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code with event triggers, and AWS handles whatever compute, memory and storage you need to make it work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make Lambda more developer friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all that for you, serving whatever resources you need to run your event and no more. It means you no longer have to worry about coding for all your infrastructure and you only pay for the computing you need at any given moment to make the application work.
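To make the event-trigger model concrete, here is a generic sketch (not specific to any of today's announcements) of a Python Lambda handler wired to an S3 upload event; the function only runs, and you only pay, when the trigger fires.

```python
# Minimal sketch of a Lambda event handler. The configured trigger (here, an
# S3 object upload) invokes the function; there are no servers to manage.
def handler(event, context):
    # An S3 event delivers one or more records describing the uploaded objects.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"status": "ok"}
```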

The way AWS tends to work is to release a base service, then build more functionality on top of it as customers use it and requirements grow. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate endlessly about tools, and everyone has their own idea of which tools to bring to the task each day.

For starters, they decided to please the language folks by introducing support for new languages. Developers who use Ruby can now take advantage of Ruby Support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced C++ Lambda Runtime. If neither of those match your programming language tastes, AWS opened it up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”
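The Runtime API itself is just an HTTP interface the runtime polls for work. The sketch below shows the basic loop in Python purely for illustration; a real custom runtime would be a bootstrap written in whatever language you are bringing, and the "business logic" step here is a placeholder.

```python
# Sketch of the event loop a custom runtime implements against the Lambda
# Runtime API: fetch the next invocation, run the function, post the result.
import os
import urllib.request

RUNTIME_API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{RUNTIME_API}/2018-06-01/runtime/invocation"

while True:
    # 1. Long-poll for the next invocation event.
    with urllib.request.urlopen(f"{BASE}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = resp.read()

    # 2. Run the function's logic (placeholder: echo the event back).
    result = event

    # 3. Post the result so Lambda can return it to the caller.
    urllib.request.urlopen(
        urllib.request.Request(f"{BASE}/{request_id}/response", data=result, method="POST")
    )
```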

AWS didn’t want to stop with languages though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean that all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.
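As a rough illustration of that workflow, the sketch below publishes a zip of shared code as a layer and attaches it to an existing function using boto3; the layer, zip file and function names are hypothetical.

```python
# Minimal sketch of sharing common code through a Lambda Layer.
import boto3

lam = boto3.client("lambda")

# Publish the shared code (a zip you have built locally) as a layer version.
with open("shared_libs.zip", "rb") as f:
    layer = lam.publish_layer_version(
        LayerName="shared-business-logic",
        Description="Code shared across several functions",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

# Attach the layer to an existing function so it can import the shared code.
lam.update_function_configuration(
    FunctionName="order-processor",          # hypothetical function name
    Layers=[layer["LayerVersionArn"]],
)
```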

As Lambda matures, developer requirements grow and these announcements and others are part of trying to meet those needs.


Nov 26, 2018

AWS Transit Gateway helps customers understand their entire network

Tonight at AWS re:Invent, the company announced a new tool called AWS Transit Gateway, designed to help customers build a network topology inside of AWS that lets them share resources across accounts and bring on-premises and cloud resources together in a single network.

Amazon already has a popular product called Amazon Virtual Private Cloud (VPC), which helps customers build private instances of their applications. The Transit Gateway is designed to help build connections between VPCs, which, up until now, has been tricky to do.

As Peter DeSantis, VP of global infrastructure and customer support at AWS, explained at an event Monday night at re:Invent, AWS Transit Gateway gives you a single set of controls that lets you connect to a centrally managed gateway to grow your network easily and quickly.

Diagram: AWS

DeSantis said that this tool also gives you the ability to traverse your AWS and on-premises networks. “A gateway is another way that we’re innovating to enable customers to have secure, easy-to-manage networking across both on premise and their AWS cloud environment,” he explained.

AWS Transit Gateway lets you build connections across a network wherever the resources live, in a standard hub-and-spoke network topology. “Today we are giving you the ability to use the new AWS Transit Gateway to build a hub-and-spoke network topology. You can connect your existing VPCs, data centers, remote offices, and remote gateways to a managed Transit Gateway, with full control over network routing and security, even if your VPCs, Active Directories, shared services, and other resources span multiple AWS accounts,” Amazon’s Jeff Barr wrote in a blog post announcing the new feature.
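A rough sketch of that hub-and-spoke setup with boto3: create one Transit Gateway as the hub, then attach existing VPCs as spokes. The VPC and subnet IDs below are placeholders.

```python
# Minimal sketch: one Transit Gateway (the hub) with two VPC attachments (spokes).
import boto3

ec2 = boto3.client("ec2")

# The Transit Gateway acts as the centrally managed hub.
tgw = ec2.create_transit_gateway(Description="Shared network hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Each VPC becomes a spoke by attaching it to the gateway.
for vpc_id, subnet_ids in [
    ("vpc-11111111", ["subnet-aaaaaaaa"]),   # placeholder IDs
    ("vpc-22222222", ["subnet-bbbbbbbb"]),
]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )
```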

For much of its existence, AWS was about getting you to the cloud and managing your cloud resources. This makes sense for a pure cloud company like AWS, but customers tend to have complex configurations with some infrastructure and software still living on premises and some in the cloud. This could help bridge the two worlds.


Nov 26, 2018

AWS Global Accelerator helps customers manage traffic across zones

Many AWS customers have to run in multiple zones for a variety of reasons, including performance requirements, regulatory issues or failover management. Whatever the reason, AWS announced a new tool tonight called AWS Global Accelerator, designed to help customers route traffic more easily across multiple regions.

Peter DeSantis, VP of global infrastructure and customer support at AWS, explained at an event Monday night at re:Invent that much of AWS customer traffic already flows over the company’s massive network, and that customers use AWS Direct Connect to get consistent performance and low network variability as they move between AWS regions. What has been missing, he said, is a way to use the AWS global network to optimize their applications.

“Tonight I’m excited to announce AWS Global Accelerator. AWS Global Accelerator makes it easy for you to improve the performance and availability of your applications by taking advantage of the AWS global network,” he told the AWS re:Invent audience.

Graphic: AWS

“Your customer traffic is routed from your end users to the closest AWS edge location and from there traverses congestion-free redundant, highly available AWS global network. In addition to improving performance AWS Global Accelerator has built-in fault isolation, which instantly reacts to changes in the network health or your applications configuration,” DeSantis explained.

In fact, network administrators can route traffic based on defined policies such as health or geographic requirements and the traffic will move to the designated zone automatically based on those policies.

AWS plans to charge customers based on the number of accelerators they create. “An accelerator is the resource you create to direct traffic to optimal endpoints over the AWS global network. Customers will typically set up one accelerator for each application, but more complex applications may require more than one accelerator,” AWS’s Shaun Ray wrote in a blog post announcing the new feature.
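To illustrate the "one accelerator per application" model, here is a hedged boto3 sketch using the globalaccelerator client: one accelerator, one listener for its ports, and an endpoint group pointing at resources in a region. The load balancer ARN and IDs are placeholders.

```python
# Minimal sketch: accelerator -> listener -> endpoint group.
import uuid
import boto3

# The Global Accelerator control-plane API is served out of us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# One accelerator fronts the application.
accelerator = ga.create_accelerator(
    Name="my-web-app",
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)

# A listener defines which ports and protocol the accelerator accepts.
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
    IdempotencyToken=str(uuid.uuid4()),
)

# Endpoint groups map traffic to resources in a specific region; health checks
# on the group drive the policy-based routing described above.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {"EndpointId": "arn:aws:elasticloadbalancing:...", "Weight": 128}  # placeholder ARN
    ],
    IdempotencyToken=str(uuid.uuid4()),
)
```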

AWS Global Accelerator is available today in several regions in the U.S., Europe and Asia.


Sep 29, 2018

What each cloud company could bring to the Pentagon’s $10B JEDI cloud contract

The Pentagon is going to make one cloud vendor exceedingly happy when it chooses the winner of the $10 billion, ten-year enterprise cloud project dubbed the Joint Enterprise Defense Infrastructure (or JEDI for short). The contract is designed to establish the cloud technology strategy for the military over the next 10 years as it begins to take advantage of current trends like Internet of Things, artificial intelligence and big data.

Ten billion dollars spread out over ten years may not entirely alter a market that’s expected to reach $100 billion a year very soon, but it is substantial enough to give a lesser vendor much greater visibility, and possibly a deeper entree into other government and private sector business. The cloud companies certainly recognize that.

Photo: Glowimages/Getty Images

That could explain why they are tripping over themselves to change the contract dynamics, insisting, maybe rightly, that a multi-vendor approach would make more sense.

One look at the Request for Proposal (RFP) itself, which has dozens of documents outlining various criteria from security to training to the specification of the single award itself, shows the sheer complexity of this proposal. At the heart of it is a package of classified and unclassified infrastructure, platform and support services with other components around portability. Each of the main cloud vendors we’ll explore here offers these services. They are not unusual in themselves, but they do each bring a different set of skills and experiences to bear on a project like this.

It’s worth noting that the DOD isn’t just interested in technical chops; it is also looking closely at pricing and has explicitly asked for specific discounts to be applied to each component. The RFP process closes on October 12th and the winner is expected to be chosen next April.

Amazon

What can you say about Amazon? They are by far the dominant cloud infrastructure vendor, and they have the advantage of having already scored a large government contract when they built the CIA’s private cloud in 2013, earning $600 million for their troubles. Amazon also offers GovCloud, the product that came out of that project, designed to host sensitive data.

Jeff Bezos, Chairman and founder of Amazon.com. Photo: Drew Angerer/Getty Images

Many of the other vendors worry that this gives Amazon a leg up on this deal. While five years is a long time, especially in technology terms, Amazon has, if anything, tightened its control of the market. Heck, most of the other players were just beginning to establish their cloud businesses in 2013. Amazon, whose cloud launched in 2006, has a maturity the others lack, and they are still innovating, introducing dozens of new features every year. That makes them difficult to compete with, but even the biggest player can be taken down with the right game plan.

Microsoft

If anyone can take Amazon on, it’s Microsoft. While they were somewhat late to the cloud, they have more than made up for it over the last several years. They are growing fast, yet are still far behind Amazon in terms of pure market share. Still, they have a lot to offer the Pentagon, including the combination of Azure, their cloud platform, and Office 365, the popular business suite that includes Word, PowerPoint, Excel and Outlook email. What’s more, they have a fat $900 million contract with the DOD, signed in 2016, for Windows and related hardware.

Microsoft CEO, Satya Nadella Photo: David Paul Morris/Bloomberg via Getty Images

Azure Stack is particularly well suited to a military scenario. It’s a private cloud you can stand up to run a mini version of the Azure public cloud, fully compatible with public Azure in terms of APIs and tools. The company also has Azure Government Cloud, which is certified for use by many of the U.S. government’s branches, including at DOD Level 5. Microsoft also brings years of experience working with large enterprises and government clients, meaning it knows how to manage a large contract like this.

Google

When we talk about the cloud, we tend to think of the Big Three. The third member of that group is Google. They have been working hard to establish their enterprise cloud business since 2015 when they brought in Diane Greene to reorganize the cloud unit and give them some enterprise cred. They still have a relatively small share of the market, but they are taking the long view, knowing that there is plenty of market left to conquer.

Head of Google Cloud, Diane Greene Photo: TechCrunch

They have taken the approach of open sourcing many of the tools they use in-house, then offering cloud versions of those same services, arguing that nobody knows better how to manage large-scale operations than they do. They have a point, and that could play well in a bid for this contract, but they also stepped away from an artificial intelligence contract with the DOD called Project Maven when a group of their employees objected. It’s not clear whether that would be held against them in the bidding process here.

IBM

IBM has been using its checkbook to build a broad platform of cloud services since 2013, when it bought SoftLayer to gain infrastructure services, adding software and development tools over the years and emphasizing AI, big data, security, blockchain and other services. All the while, it has been trying to take full advantage of its artificial intelligence engine, Watson.

IBM Chairman, President and CEO Ginni Rometty Photo: Ethan Miller/Getty Images

As one of the primary technology brands of the 20th century, the company has vast experience working with contracts of this scope and with large enterprise clients and governments. It’s not clear if this translates to its more recently developed cloud services, or if it has the cloud maturity of the others, especially Microsoft and Amazon. In that light, it would have its work cut out for it to win a contract like this.

Oracle

Oracle has been complaining since last spring to anyone who will listen, including reportedly the president, that the JEDI RFP is unfairly written to favor Amazon, a charge that DOD firmly denies. They have even filed a formal protest against the process itself.

That could be a smoke screen because the company was late to the cloud, took years to take it seriously as a concept, and barely registers today in terms of market share. What it does bring to the table is broad enterprise experience over decades and one of the most popular enterprise databases in the last 40 years.


Larry Ellison, chairman of Oracle. Photo: David Paul Morris/Bloomberg via Getty Images

It recently began offering a self-repairing database in the cloud that could prove attractive to DOD, but whether its other offerings are enough to help it win this contract remains to be seen.

Sep 25, 2018

Salesforce, AWS expand partnership with secure data sharing between platforms

Salesforce and Amazon’s cloud arm, AWS, have had a pretty close relationship for some time, signing a $400 million deal for infrastructure cloud services in 2016, but today at Dreamforce, Salesforce’s massive customer conference taking place this week in San Francisco, they took it to another level. The two companies announced they were offering a new set of data integration services between the two cloud platforms for common customers.

Matt Garman, vice president of Amazon Elastic Compute Cloud, says customers looking to transform digitally are still primarily concerned about security when moving data between cloud vendors. More specifically, they were asking for a way to move data more securely between the Salesforce and Amazon platforms. “Customers talked to us about sensitive data in Salesforce and using deep analytics and data processing on AWS and moving them back and forth in a secure way,” he said. Today’s announcements let them do that.

In practice, Salesforce customers can set up a direct connection using AWS PrivateLink to connect to private Salesforce APIs and move data from Salesforce to an Amazon service such as Redshift, the company’s data warehouse product, without ever exposing the data to the open internet.
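On the AWS side, the consumer end of a PrivateLink connection is just an interface VPC endpoint pointed at an endpoint service. The sketch below shows that setup with boto3; the Salesforce-provided service name, VPC, subnet and security group IDs are all placeholders.

```python
# Minimal sketch of creating the interface VPC endpoint that backs a
# PrivateLink connection. All identifiers here are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-11111111",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-salesforce-example",  # placeholder
    SubnetIds=["subnet-aaaaaaaa"],
    SecurityGroupIds=["sg-00000000"],
    PrivateDnsEnabled=True,
)
```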

Further, Salesforce customers can set up Lambda functions so that when certain conditions are met in Salesforce, it triggers an action such as moving data (or vice versa). This is commonly known as serverless computing and developers are increasingly using event triggers to drive business processes.
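To show the shape of that pattern, here is a hedged sketch of a Lambda handler that fires when a Salesforce condition is met and stages the record in S3 for downstream analytics (for example, a Redshift load). The event structure and bucket name are hypothetical, not taken from the announcement.

```python
# Sketch of an event-triggered function: stage a changed Salesforce record in
# S3 so an analytics job can pick it up. Event shape and bucket are hypothetical.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Assume the integration delivers the changed Salesforce record in the event.
    record = event.get("salesforceRecord", {})
    record_id = record.get("Id", "unknown")

    # Stage the record in S3, where a downstream job can load it into Redshift.
    s3.put_object(
        Bucket="analytics-staging-bucket",     # hypothetical bucket
        Key=f"salesforce/{record_id}.json",
        Body=json.dumps(record).encode("utf-8"),
    )
    return {"staged": record_id}
```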

Finally, the two companies are integrating more directly with Amazon Connect, the Amazon contact center software it launched in 2017. This is where it gets more interesting because of course Salesforce offers its own contact center services with Salesforce Service Cloud. The two companies found a way to help common customers work together here to build what they are calling AI-driven self-service applications using Amazon Connect on the Salesforce mobile Lightning development platform.

This could involve, among other things, building mobile applications that take advantage of Amazon Lex, AWS’s bot-building service, and Salesforce Einstein, Salesforce’s artificial intelligence platform. Common customers can download the Amazon Connect CTI Adapter on the Salesforce AppExchange.

Make no mistake, this is a significant announcement in that it involves two of the most successful cloud companies on the planet working directly together to offer products and services that benefit their common customers. This was not lost on Bret Taylor, president and chief product officer at Salesforce. “We’re enabling something that wouldn’t have been possible. It’s really exciting because it’s something unique in the marketplace,” he said.

What’s more, it comes on the heels of yesterday’s partnership news with Apple, giving Salesforce two powerful partners to work with moving forward.

While the level of today’s news is unprecedented between the two companies, they have been working together for some time. As Garman points out, Heroku, which Salesforce bought in 2010, and Quip, which it bought in 2016, were both built on AWS from the get-go. Salesforce, which mostly runs its own data centers in the U.S., runs much of its public cloud infrastructure on AWS, especially outside the U.S. Conversely, Amazon uses Salesforce tools internally.

Sep 5, 2018

Elastic’s IPO filing is here

Elastic, the provider of subscription-based data search software used by Dell, Netflix, The New York Times and others, has unveiled its IPO filing after confidentially submitting paperwork to the SEC in June. The company will be the latest in a line of enterprise SaaS businesses to hit the public markets in 2018.

Headquartered in Mountain View, Elastic plans to raise $100 million in its NYSE listing, though that’s likely a placeholder amount. The timing of the filing suggests the company will transition to the public markets this fall; we’ve reached out to the company for more details. 

Elastic will trade under the symbol ESTC.

The business is known for its core product, an open-source search tool called Elasticsearch. It also offers a range of analytics and visualization tools meant to help businesses organize large data sets, competing directly with companies like Splunk and even Amazon, a name it mentions 14 times in the filing.

“Amazon offers some of our open source features as part of its Amazon Web Services offering. As such, Amazon competes with us for potential customers, and while Amazon cannot provide our proprietary software, the pricing of Amazon’s offerings may limit our ability to adjust,” the company wrote in the filing, which also lists Endeca, FAST, Autonomy and several others as key competitors.

This is our first look at Elastic’s financials. The company brought in $159.9 million in revenue in the 12 months ended July 30, 2018, up roughly 100 percent from $88.1 million the year prior. Losses are growing at about the same rate. Elastic reported a net loss of $18.5 million in the second quarter of 2018. That’s an increase from $9.9 million in the same period in 2017.

Founded in 2012, the company has raised about $100 million in venture capital funding, garnering a $700 million valuation the last time it raised VC, all the way back in 2014. Its investors include Benchmark, NEA and Future Fund, which retain 17.8 percent, 10.2 percent and 8.2 percent pre-IPO stakes, respectively.

A flurry of business software companies have opted to go public this year. Domo, a business analytics company based in Utah, went public in June, raising $193 million in the process. On top of that, subscription biller Zuora had a positive debut in April in what was a “clear sign post on the road to SaaS maturation,” according to TechCrunch’s Ron Miller. DocuSign and Smartsheet are other recent examples of high-profile and successful SaaS IPOs.

Jun 4, 2018

How Yelp (mostly) shut down its own data centers and moved to AWS

Back in 2013, Yelp was a 9-year-old company built on a set of internal systems. It was coming to the realization that running its own data centers might not be the most efficient way to run a business that was continuing to scale rapidly. At the same time, the company understood that the tech world had changed dramatically since 2004, when it launched, and that it needed to transform its underlying technology to a more modern approach.

That’s a lot to take on in one bite, but it wasn’t something that happened willy-nilly or overnight, says Jason Fennell, SVP of engineering at Yelp. The vast majority of the company’s data was being processed in a massive Python repository that was getting bigger all the time. The conversation about shifting to a microservices architecture began in 2012.

The company was also running the massive Yelp application inside its own data centers, and as it grew it was increasingly limited by the long lead times required to procure new hardware and get it online. It saw this was an unsustainable situation over the long term and began a process of transforming from running a huge monolithic application on-premises to one built on microservices running in the cloud. It was quite a journey.

The data center conundrum

Fennell described the classic scenario of a company that could benefit from a shift to the cloud. Yelp had a small operations team dedicated to setting up new machines. When engineering anticipated a new resource requirement, they had to give the operations team sufficient lead time to order new servers and get them up and running, certainly not the most efficient way to deal with a resource problem, and one that would have been easily solved by the cloud.

“We kept running into a bottleneck, I was running a chunk of the search team [at the time] and I had to project capacity out to 6-9 months. Then it would take a few months to order machines and another few months to set them up,” Fennell explained. He emphasized that the team charged with getting these machines going was working hard, but there were too few people and too many demands and something had to give.

“We were on this cusp. We could have scaled up that team dramatically and gotten [better] at building data centers and buying servers and doing that really fast, but we were hearing a lot about AWS and the advantages there,” Fennell explained.

To the cloud!

They looked at the cloud market landscape in 2013 and AWS was the clear leader technologically. That meant moving some part of their operations to EC2. Unfortunately, that exposed a new problem: how to manage this new infrastructure in the cloud. This was before the notion of cloud-native computing even existed. There was no Kubernetes. Sure, Google was operating in a cloud-native fashion in-house, but it was not really an option for most companies without a huge team of engineers.

Yelp needed to explore new ways of managing operations in a hybrid cloud environment where some of the applications and data lived in the cloud and some lived in their data center. It was not an easy problem to solve in 2013 and Yelp had to be creative to make it work.

That meant remaining with one foot in the public cloud and the other in a private data center. One tool that helped ease the transition was AWS Direct Connect, which AWS had launched in 2011 and which enabled Yelp to connect directly from its data center to the cloud.

Laying the groundwork

About this time, as they were figuring out how AWS works, another revolutionary technological change was occurring when Docker emerged and began mainstreaming the notion of containerization. “That’s another thing that’s been revolutionary. We could suddenly decouple the context of the running program from the machine it’s running on. Docker gives you this container, and is much lighter weight than virtualization and running full operating systems on a machine,” Fennell explained.

Another thing that was happening was the emergence of the open source data center operating system called Mesos, which offered a way to treat the data center as a single pool of resources. They could apply this notion to wherever the data and applications lived. Mesos also offered a container orchestration tool called Marathon in the days before Kubernetes emerged as a popular way of dealing with this same issue.

“We liked Mesos as a resource allocation framework. It abstracted away the fleet of machines. Mesos abstracts many machines and controls programs across them. Marathon holds guarantees about what containers are running where. We could stitch it all together into this clear opinionated interface,” he said.
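For a rough sense of what that interface looked like in practice, here is a hedged sketch of launching a containerized service through Marathon's v2 REST API; the Marathon host and image name are placeholders, and this is a generic illustration rather than Yelp's actual configuration.

```python
# Rough sketch: POST an app definition to Marathon's /v2/apps endpoint and it
# keeps the requested number of Docker containers running across the Mesos cluster.
import json
import urllib.request

app_definition = {
    "id": "/search-service",
    "cpus": 0.5,
    "mem": 512,
    "instances": 3,                       # Marathon keeps three copies running
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/search-service:latest", "network": "BRIDGE"},
    },
}

request = urllib.request.Request(
    "http://marathon.example.internal:8080/v2/apps",   # placeholder host
    data=json.dumps(app_definition).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(request)
```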

Pulling it all together

While all this was happening, Yelp began exploring how to move to the cloud and use a Platform as a Service approach for its software layer. The problem was that at the time there wasn’t really any viable way to do this. In the buy-versus-build decision-making that goes on in large transformations like this one, Yelp felt it had little choice but to build that platform layer itself.

In late 2013 they began to pull together the idea of building this platform on top of Mesos and Docker, giving it the name PaaSTA, an internal joke that stood for Platform as a Service, Totally Awesome. It became simply known as Pasta.

Photo: David Silverman/Getty Images

The project had the ambitious goal of making Yelp’s infrastructure work as a single fabric, in a cloud-native fashion, before almost anyone outside of Google was using that term. Pasta developed slowly, with the first developer piece coming online in August 2014 and the first production service following that December. The company open sourced the technology the following year.

“Pasta gave us the interface between the applications and development teams. Operations had to make sure Pasta is up and running, while Development was responsible for implementing containers that implemented the interface,” Fennell said.

Moving deeper into the public cloud

While Yelp was busy building these internal systems, AWS wasn’t sitting still. It was also improving its offerings with new instance types, new functionality and better APIs and tooling. Fennell reports this helped immensely as Yelp began a more complete move to the cloud.

He says there were a couple of tipping points as they moved more and more of the application to AWS — including eventually, the master database. This all happened in more recent years as they understood better how to use Pasta to control the processes wherever they lived. What’s more, he said that adoption of other AWS services was now possible due to tighter integration between the in-house data centers and AWS.

Photo: erhui1979/Getty Images

The first tipping point came around 2016 as all new services were configured for the cloud. He said they began to get much better at managing applications and infrastructure in AWS and their thinking shifted from how to migrate to AWS to how to operate and manage it.

Perhaps the biggest step in this years-long transformation came last summer, when Yelp moved its master database from its own data center to AWS. “This was the last thing we needed to move over. Otherwise it’s clean up. As of 2018, we are serving zero production traffic through physical data centers,” he said. While Yelp still has two data centers, it is getting to the point where it has only the minimum hardware required to run the network backbone.

Fennell said that before all of this was in place, it took two weeks to a month to get a service up and running; now it takes just a couple of minutes. He says any loss of control from moving to the cloud has been easily offset by the convenience of using cloud infrastructure. “We get to focus on the things where we add value,” he said, and that’s the goal of every company.

Apr 4, 2018

AWS adds automated point-in-time recovery to DynamoDB

One of the joys of cloud computing is handing over your data to the cloud vendor and letting them handle the heavy lifting. Up until now that has meant they updated the software or scaled the hardware for you. Today, AWS took that to another level when it announced Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR).

With this new service, the company lets you simply enable the new backup tool, and the backup happens automatically. Amazon takes care of the rest, providing a continuous backup of all the data in your DynamoDB database.

But it doesn’t stop there; the backup acts as a recording of sorts. You can rewind your data set to any point in the backup with “per second granularity” up to 35 days in the past. What’s more, you can access the tool from the AWS Management Console, an API call or the AWS Command Line Interface (CLI).
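Via the API, enabling PITR and rewinding a table is a short sequence of calls. Here is a minimal boto3 sketch; the table names are placeholders.

```python
# Minimal sketch: enable point-in-time recovery on a table, then restore it
# as it existed ten minutes ago into a new table.
from datetime import datetime, timedelta
import boto3

ddb = boto3.client("dynamodb")

# Turn on continuous backups / point-in-time recovery for the table.
ddb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore to an earlier moment; the result lands in a new table.
ddb.restore_table_to_point_in_time(
    SourceTableName="orders",
    TargetTableName="orders-recovered",
    RestoreDateTime=datetime.utcnow() - timedelta(minutes=10),
)
```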

Screenshot: Amazon

“We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict,” Amazon’s Randall Hunt wrote in the blog post announcing the new feature.

If you’re concerned about the 35-day limit, you needn’t be, as the system is an adjunct to your regular on-demand backups, which you can keep for as long as you need.

Amazon’s Chief Technology Officer, Werner Vogels, who introduced the new service at the Amazon Summit in San Francisco today, said it doesn’t matter how much data you have. Even with a terabyte of data, you can make use of this service. “This is a truly powerful mechanism here,” Vogels said.

The new service is available in various regions today. You can learn about regional availability and pricing options here.
