Nov 29, 2018

New AWS tool helps customers understand best cloud practices

Since 2015, AWS has had a team of solution architects working with customers to make sure they are using AWS services in a way that meets best practices around a set of defined criteria. Today, the company announced the AWS Well-Architected Tool, which helps customers run these reviews themselves in an automated way, without the help of a human consultant.

As Amazon CTO Werner Vogels said in his keynote address at AWS re:Invent in Las Vegas, it’s hard to scale a human team inside the company to meet the needs of thousands of customers, especially when so many want to be sure they are complying with these best practices. He indicated that they even brought on a network of certified partners to help, but it still has not been enough to meet demand.

In typical AWS fashion, they decided to create a service to help customers measure how well they are doing in terms of operational excellence, security, reliability, cost optimization and performance efficiency. Customers can run this tool against the AWS services they are using and get a full report of how they measure up against these five pillars.

“I think of it as a way to make sure that you are using the cloud right, and that you are using it well,” Jeff Barr wrote in a blog post introducing the new service.

Instead of working with a human consultant to analyze your systems, you answer a series of questions about your workload. When the process is complete, the tool generates a PDF report with all the recommendations for your particular situation.
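The flow is essentially questions in, flagged risks out. Here is a minimal Python sketch of that shape; only the pillar names come from the framework itself, while the function and report fields are hypothetical, since the real tool runs in the AWS console.

```python
# Hypothetical sketch of the Well-Architected Tool's question-driven review.
# Only the pillar names are real; the function and report fields are
# illustrative, not the tool's actual API.

PILLARS = [
    "operational excellence", "security", "reliability",
    "performance efficiency", "cost optimization",
]

def run_review(answers: dict) -> dict:
    """Collect yes/no answers per pillar and flag unmet best practices."""
    report = {}
    for pillar in PILLARS:
        pillar_answers = answers.get(pillar, {})
        risks = [question for question, ok in pillar_answers.items() if not ok]
        report[pillar] = {"answered": len(pillar_answers), "risks": risks}
    return report

# A workload that skips backup testing gets flagged under reliability.
report = run_review({"reliability": {"Do you test your backups regularly?": False}})
print(report["reliability"]["risks"])  # ['Do you test your backups regularly?']
```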


While it’s doubtful that such an approach can be as comprehensive as a conversation between client and consultant, it is a starting point that gets you thinking about these practices. And since the service is free, you have little to lose by trying the tool and seeing what it tells you.


Nov 29, 2018

AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code that responds to event triggers, and AWS deals with whatever compute, memory and storage you need to make it work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make it more developer-friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all that for you, serving up whatever resources you need to run your event and no more. It means you no longer have to worry about provisioning infrastructure, and you only pay for the compute you need at any given moment to make the application work.

The way AWS works is that it tends to release a base service, then build more functionality on top of it as requirements grow with customer use. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate endlessly about tools, and everyone has their own idea of which tools to bring to the task each day.

For starters, they decided to please the language folks by introducing support for new languages. Developers who use Ruby can now take advantage of Ruby support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced a C++ Lambda runtime. If neither of those matches your programming language tastes, AWS opened it up to just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”
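Under the hood, the Runtime API is a simple HTTP loop: poll for the next invocation, run your code, post the result back. Here is a minimal sketch of that loop in Python; the endpoint paths are the documented Runtime API, while the handler is a stand-in for your own logic, and the snippet only runs inside a Lambda execution environment, where AWS_LAMBDA_RUNTIME_API is set.

```python
# Minimal sketch of a custom runtime's processing loop. The endpoint paths
# are the documented Lambda Runtime API; handler() is a stand-in for your
# code. This only runs inside Lambda, where AWS_LAMBDA_RUNTIME_API is set.
import json
import os
import urllib.request

API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{API}/2018-06-01/runtime/invocation"

def handler(event):
    # Your function's business logic goes here.
    return {"echo": event}

while True:
    # Long-poll for the next invocation event.
    with urllib.request.urlopen(f"{BASE}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.load(resp)
    # Report the result for this specific invocation.
    body = json.dumps(handler(event)).encode()
    urllib.request.urlopen(urllib.request.Request(
        f"{BASE}/{request_id}/response", data=body, method="POST"))
```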

AWS didn’t want to stop with languages, though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.
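As a rough illustration, shared code packaged as a zip file can be published as a layer with a single call. This sketch uses boto3’s publish_layer_version; the layer name, S3 location and runtime list are placeholders, not values from the announcement.

```python
# Hedged sketch of publishing shared code as a layer via boto3; the layer
# name, S3 location and runtime list are placeholders.
import boto3

lam = boto3.client("lambda")

layer = lam.publish_layer_version(
    LayerName="shared-business-logic",
    Description="Helpers shared across multiple functions",
    Content={"S3Bucket": "my-artifacts-bucket", "S3Key": "shared-layer.zip"},
    CompatibleRuntimes=["python3.7", "ruby2.5"],
)

# Functions then attach the layer by its version ARN instead of bundling
# the same code into every deployment package.
print(layer["LayerVersionArn"])
```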

As Lambda matures, developer requirements grow, and these announcements and others are part of trying to meet those needs.


Nov 29, 2018

AWS launches a managed Kafka service

Kafka is an open-source tool for handling incoming streams of data. Like virtually all powerful tools, it’s somewhat hard to set up and manage. Today, AWS is making this all a bit easier for its users with the launch of Amazon Managed Streaming for Kafka (MSK). That’s a mouthful, but it’s essentially Kafka as a fully managed, highly available service on AWS, available now in public preview.

As AWS CTO Werner Vogels noted in his AWS re:Invent keynote, Kafka users traditionally had to do a lot of heavy lifting to set up a cluster on AWS and to ensure that it could scale and handle failures. “It’s a nightmare having to restart all the cluster and the main nodes,” he said. “This is what I would call the traditional heavy lifting that AWS is really good at solving for you.”
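With the managed service, that lifting shrinks to a single cluster-creation call. Below is a hedged sketch using boto3’s MSK client (the service was still in preview at announcement time, so treat the shapes as indicative); the subnet IDs, instance type, Kafka version and volume size are placeholders.

```python
# Hedged sketch of creating a managed Kafka cluster with boto3's MSK client.
# Subnet IDs, instance type, Kafka version and volume size are placeholders.
import boto3

msk = boto3.client("kafka")

cluster = msk.create_cluster(
    ClusterName="demo-cluster",
    KafkaVersion="1.1.1",
    NumberOfBrokerNodes=3,  # spread across the client subnets' AZs
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
        "StorageInfo": {"EbsStorageInfo": {"VolumeSize": 100}},  # GiB per broker
    },
)

print(cluster["ClusterArn"])  # brokers, failover and restarts are now AWS's problem
```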

It’s interesting to see AWS launch this service, given that it already offers Kinesis, a very similar tool that also focuses on ingesting streaming data. There are plenty of applications on the market today that already use Kafka, though, and AWS is clearly interested in giving those users a pathway to move either to a managed Kafka service or to AWS in general.

As with all things AWS, the pricing is a bit complicated, but a basic Kafka instance will start at $0.21 per hour. You’re not likely to just use one instance, so for a somewhat useful setup with three brokers and a good amount of storage and some other fees, you’ll quickly pay well over $500 per month.
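The back-of-the-envelope math behind that figure looks roughly like this; the $0.21 hourly broker rate comes from the article, while the storage rate and volume are assumed for illustration.

```python
# Rough monthly estimate for a three-broker MSK setup. The broker rate is
# from the article; the storage figures below are assumptions.
broker_rate = 0.21           # USD per broker-hour (from the article)
brokers = 3
hours_per_month = 730        # average hours in a month

broker_cost = broker_rate * brokers * hours_per_month  # ~$459.90
storage_cost = 0.10 * 1000   # assumed ~$0.10 per GiB-month on 1 TB total

print(round(broker_cost + storage_cost, 2))  # ~559.9, well over $500
```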


Nov 28, 2018

AWS is bringing the cloud on prem with Outposts

AWS has always been the pure cloud vendor, and even though it has given a nod to hybrid in the past, it is now fully embracing it. Today, in conjunction with VMware, the company announced a pair of options to bring AWS into the data center.

Yes, you read that correctly. You can now put AWS into your data center with AWS hardware, the same design they use in their own data centers. The two new products are part of AWS Outposts.

There are two Outposts variations: VMware Cloud on AWS Outposts and AWS Outposts. The first uses the VMware control plane. The second allows customers to run compute and storage on premises using the same AWS APIs that are used in the AWS cloud.

In fact, VMware CEO Pat Gelsinger joined AWS CEO Andy Jassy onstage at AWS re:Invent for a joint announcement. The two companies have been working together for some time to bring VMware to the AWS cloud. Part of this announcement flips that on its head, bringing the AWS cloud on prem to work with VMware. In both cases, AWS sells you their hardware, installs it if you wish, and will even maintain it for you.

This is an area in which AWS has lagged, preferring the vision of a pure cloud rather than moving back into the data center, but the move is a tacit acknowledgment that customers want to operate in both places for the foreseeable future.

The announcement also extends the company’s vision of a consistent, cloud-native-style experience wherever workloads run. On Monday, the company announced Transit Gateways, which is designed to provide a single way to manage network resources, whether they live in the cloud or on prem.

Now AWS is bringing its cloud on prem, something that Microsoft, Canonical, Oracle and others have had for some time. It’s worth noting that today’s announcement is a public preview. The actual release is expected in the second half of next year.


Nov 28, 2018

Amazon Textract brings intelligence to OCR

One of the challenges just about every business faces is converting forms into a useful digital format, a job that has typically fallen to human data entry clerks. The state of the art has been using OCR to read forms automatically, but as AWS CEO Andy Jassy explained, OCR is basically just a dumb text reader: it doesn’t recognize text types. Amazon wanted to change that, and today it announced Amazon Textract, an intelligent OCR tool that moves data from forms into a more usable digital format.

In one example, Jassy showed a form with tables. Regular OCR didn’t recognize the table and interpreted it as a string of text. Textract is designed to recognize common page elements like tables and pull out the data in a sensible way.

Jassy said that forms also often change, and if you are using a template as a workaround for OCR’s lack of intelligence, the template breaks if you move anything. To fix that, Textract is smart enough to understand common data types like Social Security numbers, dates of birth and addresses, and to interpret them correctly no matter where they fall on the page.

“We have taught Textract to recognize this set of characters is a date of birth and this is a Social Security number. If forms change, Textract won’t miss it,” Jassy explained.
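For a sense of what that looks like from a developer’s perspective, here is a hedged sketch using boto3’s Textract client as the API later shipped; the bucket and document names are placeholders.

```python
# Hedged sketch of table and form extraction with boto3's Textract client;
# bucket and document names are placeholders.
import boto3

textract = boto3.client("textract")

result = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-forms-bucket", "Name": "tax-form.png"}},
    FeatureTypes=["TABLES", "FORMS"],  # ask for page elements, not just raw text
)

# Unlike plain OCR, results come back as typed blocks: tables, cells and
# key-value pairs rather than one undifferentiated string.
for block in result["Blocks"]:
    if block["BlockType"] in ("TABLE", "CELL", "KEY_VALUE_SET"):
        print(block["BlockType"], block.get("Confidence"))
```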


Nov 28, 2018

AWS launches new time series database

AWS announced a new time series database today at AWS re:Invent in Las Vegas. The new product, called Amazon Timestream, is a fully managed database designed to track items over time, which can be particularly useful for Internet of Things scenarios.

“With time series data each data point consists of a timestamp and one or more attributes and it really measures how things change over time and helps drive real time decisions,” AWS CEO Andy Jassy explained.
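As a concrete illustration of that definition, a single reading from an IoT sensor might be modeled like the sketch below; the field names are hypothetical, not Timestream’s actual schema.

```python
# Hypothetical shape of one time series data point: a timestamp plus
# one or more measured attributes. Field names are illustrative only.
from datetime import datetime, timezone

data_point = {
    "timestamp": datetime(2018, 11, 28, 10, 30, tzinfo=timezone.utc),
    "attributes": {
        "device_id": "sensor-42",   # which thing produced the reading
        "temperature_c": 21.7,      # the measurements that change over time
        "humidity": 0.44,
    },
}
```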

He sees a problem, though, with existing open-source and commercial solutions, which he says don’t scale well and are hard to manage. This is, of course, the kind of problem a cloud service like AWS often helps solve.

Not surprisingly, as customers were looking for a good time series database solution, AWS decided to create one themselves, and Timestream is the result.

Jassy said that they built Timestream from the ground up with an architecture that organizes data by time intervals and enables time-series-specific data compression, which leads to less scanning and faster performance.

He claims it will be a thousand times faster at a tenth of the cost, and of course it scales up and down as required and includes the analytics capabilities you need to understand the data you are tracking.

This new service is available across the world starting today.


Nov 28, 2018

AWS Lake Formation makes setting up data lakes easier

The concept of data lakes has been around for a long time, but being able to set up one of these systems, which store vast amounts of raw data in its native format, was never easy. AWS wants to change this with the launch of AWS Lake Formation. At its core, this new service, which is available today, allows developers to create a secure data lake within a few days.

While “a few days” may still sound like a long time in this age of instant gratification, it’s nothing in the world of enterprise software.

“Everybody is excited about data lakes,” said AWS CEO Andy Jassy in today’s AWS re:Invent keynote. “People realize that there is significant value in moving all that disparate data that lives in your company in different silos and make it much easier by consolidating it in a data lake.”

Setting up a data lake today means you have to, among other things, configure your storage and (on AWS) S3 buckets, move your data, add metadata and add that to a catalog. And then you have to clean up that data and set up the right security policies for the data lake. “This is a lot of work and for most companies, it takes them several months to set up a data lake. It’s frustrating,” said Jassy.
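To make that concrete, here is a hedged sketch of just two of those manual steps, registering storage and cataloging metadata, using boto3’s S3 and Glue clients; the bucket, database, crawler and IAM role names are placeholders.

```python
# Hedged sketch of two of the manual data lake steps with boto3: setting up
# the storage and cataloging the raw data. Names and the role are
# placeholders; Lake Formation is designed to automate this whole chain.
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# Step 1: configure the storage that will back the lake (region config omitted).
s3.create_bucket(Bucket="my-data-lake-bucket")

# Step 2: add metadata to a catalog so the raw data becomes queryable.
glue.create_database(DatabaseInput={"Name": "lake_db"})
glue.create_crawler(
    Name="lake-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder role
    DatabaseName="lake_db",
    Targets={"S3Targets": [{"Path": "s3://my-data-lake-bucket/raw/"}]},
)
glue.start_crawler(Name="lake-crawler")
# Cleaning, deduping and security policies would still be on you from here.
```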

Lake Formation is meant to handle all of these complications with just a few clicks. It sets up the right tags and cleans up and dedupes the data automatically. And it provides admins with a list of security policies to help secure that data.

“This is a step-level change for how easy it is to set up data lakes,” said Jassy.


Nov 28, 2018

AWS tries to lure Windows users with Amazon FSx for Windows File Server

Amazon has had storage options for Linux file servers for some time, but it recognizes that a number of companies still use Windows file servers, and it is not content to cede that market to Microsoft. Today the company announced Amazon FSx for Windows File Server to provide a fully compatible Windows option.

“You get a native Windows file system backed by fully-managed Windows file servers, accessible via the widely adopted SMB (Server Message Block) protocol. Built on SSD storage, Amazon FSx for Windows File Server delivers the throughput, IOPS, and consistent sub-millisecond performance that you (and your Windows applications) expect,” AWS’s Jeff Barr wrote in a blog post introducing the new feature.

That means if you use this service, you have a first-class Windows system with all of the compatibility with Windows services that you would expect, such as Active Directory and Windows Explorer.
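Here is a hedged sketch of creating one of these file systems with boto3’s FSx client; the subnet, Active Directory ID and capacity values are placeholders rather than values from the announcement.

```python
# Hedged sketch of creating a Windows file system with boto3's FSx client.
# The subnet, Active Directory ID and capacity values are placeholders.
import boto3

fsx = boto3.client("fsx")

fs = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,              # GiB of SSD-backed storage
    SubnetIds=["subnet-0abc1234"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # joins your managed directory
        "ThroughputCapacity": 8,              # MB/s
    },
)

# Windows clients then map the share over SMB, e.g.:
#   net use Z: \\<filesystem-dns-name>\share
print(fs["FileSystem"]["FileSystemId"])
```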

AWS CEO Andy Jassy introduced the new feature today at AWS re:Invent, the company’s customer conference going on in Las Vegas this week. He said that even though Windows File Server usage is diminishing as more IT pros turn to Linux, there are still a fair number of customers who want a Windows-compatible system, and AWS wanted to provide a service for them to move their Windows files to the cloud.

Of course, it doesn’t hurt that it provides a path for Microsoft customers to use AWS instead of turning to Azure for these workloads. Companies undertaking a multi-cloud strategy should like having a fully compatible option.


Nov 27, 2018

AWS launches a base station for satellites as a service

Today at AWS re:Invent in Las Vegas, AWS announced a new service for satellite providers with the launch of AWS Ground Station, the first fully managed ground station as a service.

With this new service, AWS will provide ground antennas through its existing network of worldwide availability zones, as well as data processing services to simplify the entire data retrieval and processing workflow for satellite companies and for others who consume the satellite data.

Satellite operators need to get data down from the satellite, process it and then make it available for developers to use in applications. In that regard, it’s not that much different from any IoT device. It just so happens that these are flying around in space.

AWS CEO Andy Jassy pointed out that they hadn’t really considered a service like this until they had customers asking for it. “Customers said that we have so much data in space with so many applications that want to use that data. Why don’t you make it easier,” Jassy said. He said they thought about that and figured they could bring their vast worldwide network to bear on the problem.

Prior to this service, companies had to build base stations themselves to get data down from satellites as they passed over those stations on earth, wherever the stations happened to be. That required providers to buy land, build the hardware and then deal with the data themselves. Offering this as a managed service greatly simplifies every aspect of the workflow.

Holger Mueller, an analyst at Constellation Research, says that the service will help put satellite data into the hands of developers faster. “To rule real world application use cases you need to make maps and real-time spatial data available in an easy-to-consume, real time and affordable way,” Mueller told TechCrunch. This is precisely the type of data you can get from satellites.

The value proposition of any cloud service has always been about reducing the resource allocation required by a company to achieve a goal. With AWS Ground Station, AWS handles every aspect of the satellite data retrieval and processing operation for the company, greatly reducing the cost and complexity associated with it.

AWS claims customers can save up to 80 percent by using an on-demand model rather than owning their own ground stations. The company is starting with two ground stations today as it launches the service, but plans to expand to 12 by the middle of next year.

Customers and partners involved in the Ground Station preview included Lockheed Martin, Open Cosmos, HawkEye360 and DigitalGlobe, among others.


Nov 27, 2018

Red Hat acquires hybrid cloud data management service NooBaa

Red Hat is in the process of being acquired by IBM for a massive $34 billion, but that deal hasn’t closed yet and, in the meantime, Red Hat is still running independently and making its own acquisitions, too. As the company today announced, it has acquired Tel Aviv-based NooBaa, an early-stage startup that helps enterprises manage their data more easily and access their various data providers through a single API.

NooBaa’s technology makes it a good fit for Red Hat, which has recently emphasized its ability to help enterprises more effectively manage their hybrid and multicloud deployments. At its core, NooBaa is all about bringing together various data silos, which should make it a good fit in Red Hat’s portfolio. With OpenShift and the OpenShift Container Platform, as well as its Ceph Storage service, Red Hat already offers a range of hybrid cloud tools, after all.

“NooBaa’s technologies will augment our portfolio and strengthen our ability to meet the needs of developers in today’s hybrid and multicloud world,” writes Ranga Rangachari, the VP and general manager for storage and hyperconverged infrastructure at Red Hat, in today’s announcement. “We are thrilled to welcome a technical team of nine to the Red Hat family as we work together to further solidify Red Hat as a leading provider of open hybrid cloud technologies.”

While virtually all of Red Hat’s technology is open source, NooBaa’s code is not. The company says that it plans to open source NooBaa’s technology in due time, though the exact timeline has yet to be determined.

NooBaa was founded in 2013. The company has raised some venture funding from the likes of Jerusalem Venture Partners and OurCrowd, with a strategic investment from Akamai Capital thrown in for good measure. The company never disclosed the size of that round, though, and neither Red Hat nor NooBaa is disclosing the financial terms of the acquisition.
