Dec 03, 2019

Verizon and AWS announce 5G Edge computing partnership

Just as Qualcomm was starting to highlight its 5G plans for the coming years, Verizon CEO Hans Vestberg hit the stage at AWS re:Invent to discuss the carrier’s partnership with the cloud computing giant.

As part of Verizon’s (TechCrunch’s parent company, disclosure, disclosure, disclosure) upcoming focus on 5G edge computing, the carrier will be the first to use the newly announced AWS Wavelength. The platform is designed to let developers build super-low-latency apps for 5G devices.

Currently, it’s being piloted in Chicago with a handful of high-profile partners, including the NFL and Bethesda, the game developer behind Fallout and Elder Scrolls. No details yet on those specific applications (though remote gaming and live streaming seem like the obvious ones), but potential future uses include things like smart cars, IoT devices, AR/VR — you know, the sorts of things people cite when discussing 5G’s life beyond the smartphone.

“AWS Wavelength provides the same AWS environment — APIs, management console and tools — that they’re using today at the edge of the 5G network,” AWS CEO Andy Jassy said onstage. “Starting with Verizon’s 5G network locations in the U.S., customers will be able to deploy the latency-sensitive portions of an application at the edge to provide single-digit millisecond latency to mobile and connected devices.”

While Vestberg was onstage with AWS, Verizon CNO Nicki Palmer joined Qualcomm in Hawaii to discuss the carrier’s mmWave approach to next-gen wireless. The technology has raised some questions around its coverage area, which Verizon has addressed to some degree through partnerships with third parties like Boingo.

The company plans to have coverage in 30 U.S. cities by the end of the year; that number currently stands at 18.

Dec 03, 2019

AWS announces new enterprise search tool powered by machine learning

Today at AWS re:Invent in Las Vegas, the company announced a new search tool called Kendra, which provides natural language search across a variety of content repositories using machine learning.

Matt Wood, AWS VP of artificial intelligence, said the new search tool uses machine learning, but doesn’t actually require machine learning expertise of any kind. Amazon is taking care of that for customers under the hood.

You start by identifying your content repositories. This could be anything from an S3 storage repository to OneDrive to Salesforce — anywhere you store content. You can use pre-built connectors from AWS, provide your credentials and connect to all of these different tools.

Kendra then builds an index based on the content it finds in the connected repositories, and users can begin to interact with the search tool using natural language queries. The tool understands concepts like time, so if the question is something like “When is the IT Help Desk open?,” the search engine understands that this is about time, checks the index and delivers the right information to the user.

The beauty of this search tool is not only that it uses machine learning, but that, based on simple feedback from a user, like a smiley face or sad face emoji, it can learn which answers are good and which ones require improvement, and it does this automatically for the search team.
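To make that flow concrete, here is a minimal sketch using boto3’s kendra client, assuming an index has already been created and populated through the connectors described above; the index ID and the result handling are placeholders, not values from the announcement:

```python
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# Ask a natural language question against an existing index.
# "my-index-id" is a hypothetical placeholder.
response = kendra.query(
    IndexId="my-index-id",
    QueryText="When is the IT Help Desk open?",
)

for item in response["ResultItems"]:
    print(item["Type"], item.get("DocumentTitle", {}).get("Text"))

# Report the smiley/sad-face style feedback so Kendra can learn which
# answers were good; here we mark the top result as relevant.
kendra.submit_feedback(
    IndexId="my-index-id",
    QueryId=response["QueryId"],
    RelevanceFeedbackItems=[{
        "ResultId": response["ResultItems"][0]["Id"],
        "RelevanceValue": "RELEVANT",
    }],
)
```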

Once you have it set up, you can drop the search onto your company intranet or use it inside an internal application, and it behaves as you would expect a search tool to, with features like type-ahead.

Dec 03, 2019

AWS’ CodeGuru uses machine learning to automate code reviews

AWS today announced CodeGuru, a new machine learning-based service that automates code reviews based on the data the company has gathered from doing code reviews internally.

Developers write their code and simply add CodeGuru to the pull request. For now, it supports GitHub and CodeCommit. CodeGuru uses its knowledge of reviews from Amazon and about 10,000 open-source projects to find issues, then comments on the pull request as needed. It not only identifies the issues, but also suggests remediations and offers links to the relevant documentation.
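For CodeCommit repositories, wiring this up comes down to a single association call. A minimal sketch with boto3’s codeguru-reviewer client, with the repository name as a hypothetical placeholder (GitHub repositories are typically connected through the console instead):

```python
import boto3

reviewer = boto3.client("codeguru-reviewer", region_name="us-east-1")

# Associate a CodeCommit repository with CodeGuru Reviewer. Once the
# association is active, new pull requests get automated review comments.
association = reviewer.associate_repository(
    Repository={"CodeCommit": {"Name": "my-service-repo"}}
)
print(association["RepositoryAssociation"]["State"])
```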

Encoded in CodeGuru are AWS’s own best practices. Among other things, it also finds concurrency issues, incorrect handling of resources and issues with input validation.

AWS and Amazon’s consumer side have used the profiler part of CodeGuru for the last few years to find the “most expensive line of code.” Over that time, even as some of the company’s applications grew, some teams were able to increase their CPU utilization by more than 325% at 36% lower cost.

Dec 03, 2019

AWS AutoPilot gives you more visible AutoML in SageMaker Studio

Today at AWS re:Invent in Las Vegas, the company announced AutoPilot, a new tool that gives you greater visibility into automated machine learning model creation, known as AutoML. It is part of SageMaker Studio, which was also announced today.

As AWS CEO Andy Jassy pointed out onstage today, one of the problems with AutoML is that it’s basically a black box. If you want to improve a mediocre model, or just evolve it for your business, you have no idea how it was built.

The idea behind AutoPilot is to give you the ease of model creation you get from an AutoML-generated model, but also give you much deeper insight into how the system built the model. “AutoPilot is a way to create a model automatically, but give you full visibility and control,” Jassy said.

“Using a single API call, or a few clicks in Amazon SageMaker Studio, SageMaker Autopilot first inspects your data set, and runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms and hyperparameters. Then, it uses this combination to train an Inference Pipeline, which you can easily deploy either on a real-time endpoint or for batch processing. As usual with Amazon SageMaker, all of this takes place on fully-managed infrastructure,” the company explained in a blog post announcing the new feature.
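As a rough illustration of that single API call, here is a sketch using boto3’s sagemaker client; the job name, S3 paths, target column and role ARN are all hypothetical placeholders:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Kick off an Autopilot job: point it at a data prefix in S3 and name
# the column to predict. Preprocessing, algorithm selection and
# hyperparameter tuning are handled automatically.
sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/churn/train/",
        }},
        "TargetAttributeName": "churned",
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/churn/output/"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

# The candidates behind the leaderboard can be listed once the job runs.
candidates = sm.list_candidates_for_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo"
)["Candidates"]
```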

You can look at the model’s parameters, see the 50 automated models it generates, and consult a leaderboard of which models performed best. What’s more, you can open each model’s underlying notebook and see what trade-offs were made to generate the best model. For instance, it may be the most accurate, but sacrifice speed to get there.

Your company may have its own set of unique requirements and you can choose the best model based on whatever parameters you consider to be most important, even though it was generated in an automated fashion.

Once you have the model you like best, you can go into SageMaker Studio, select it and launch it with a single click. The tool is available now.

Dec 03, 2019

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to do analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. To handle this, AQUA is essentially a hardware-accelerated cache that promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs, 384 GiB of memory and up to 64 TB of storage. You can build clusters of these with up to 128 instances.
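For reference, a minimal sketch of creating a cluster on the new node type with boto3’s redshift client; the identifiers and credentials are placeholders:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Create a two-node cluster on the RA3 node type described above.
redshift.create_cluster(
    ClusterIdentifier="analytics-ra3",
    NodeType="ra3.16xlarge",
    ClusterType="multi-node",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",
)
```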

Dec 02, 2019

CircleCI launches improved AWS support

For about a year now, continuous integration and delivery service CircleCI has offered Orbs, a way to easily reuse commands and integrations with third-party services. Unsurprisingly, some of the most popular Orbs focus on AWS, as that’s where most of the company’s users either test their code or deploy it. Today, just in time for AWS’s annual re:Invent developer conference in Las Vegas, the company announced that it has added Orb support for the AWS Serverless Application Model (SAM), which makes setting up automated CI/CD pipelines for testing and deploying to AWS Lambda significantly easier.

In total, the company says, more than 11,000 organizations have started using Orbs since the feature launched a year ago. Among the AWS-centric Orbs are those for building and updating images for the Amazon Elastic Container Service (ECS) and the Elastic Container Service for Kubernetes (EKS), for example, as well as AWS CodeDeploy support, an Orb for installing and configuring the AWS command-line interface, an Orb for working with the S3 storage service and more.

“We’re just seeing a momentum of more and more companies being ready to adopt [managed services like Lambda, ECS and EKS], so this became really the ideal time to do most of the work with the product team at AWS that manages their serverless ecosystem and to add in this capability to leverage that serverless application model and really have this out-of-the-box CI/CD flow ready for users who wanted to start adding these into Lambda,” CircleCI VP of business development Tom Trahan told me. “I think when Lambda was in its earlier days, a lot of people would use it and not necessarily follow the same software patterns and delivery flow that they might have with their traditional software. As they put more and more into Lambda and are really putting a lot more of what I would call ‘production quality code’ out there, they realize they do want to have that same software delivery capability and discipline for Lambda as well.”

Trahan stressed that he’s still talking about early adopters and companies that started out as cloud-native companies, but these days, this group includes a lot of traditional companies, as well, that are now rapidly going through their own digital transformations.

Dec 02, 2019

New Amazon tool simplifies delivery of containerized machine learning models

As part of the flurry of announcements coming this week out of AWS re:Invent, Amazon announced the release of Amazon SageMaker Operators for Kubernetes, a way for data scientists and developers to simplify training, tuning and deploying containerized machine learning models.

Packaging machine learning models in containers can help put them to work inside organizations faster, but getting there often requires a lot of extra management to make it all work. Amazon SageMaker Operators for Kubernetes is supposed to make it easier to run and manage those containers, the underlying infrastructure needed to run the models and the workflows associated with all of it.

“While Kubernetes gives customers control and portability, running ML workloads on a Kubernetes cluster brings unique challenges. For example, the underlying infrastructure requires additional management such as optimizing for utilization, cost and performance; complying with appropriate security and regulatory requirements; and ensuring high availability and reliability,” AWS’ Aditya Bindal wrote in a blog post introducing the new feature.

When you combine that with the workflows associated with delivering a machine learning model inside an organization at scale, it becomes part of a much bigger delivery pipeline, one that is challenging to manage across departments and a variety of resource requirements.

This is precisely what Amazon SageMaker Operators for Kubernetes has been designed to help DevOps teams do. “Amazon SageMaker Operators for Kubernetes bridges this gap, and customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale,” Bindal wrote.

The promise of Kubernetes is that it can orchestrate the delivery of containers at the right moment, but if you haven’t automated delivery of the underlying infrastructure, you can over- (or under-) provision and fail to provide the correct amount of resources required to run the job. That’s where this new tool, combined with SageMaker, can help.

“With workflows in Amazon SageMaker, compute resources are pre-configured and optimized, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete, offering near 100% utilization,” Bindal wrote.
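In practice, that “simple call” usually means applying a custom resource to the cluster; kubectl with a YAML manifest is the common route, but the equivalent through the Kubernetes Python client looks roughly like the sketch below. The spec fields mirror SageMaker’s CreateTrainingJob API, and the job name, image, bucket and role are placeholders rather than values from Bindal’s post:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# A TrainingJob custom resource handled by the SageMaker operator.
training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "demo-training-job"},
    "spec": {
        "region": "us-east-1",
        "roleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
        "algorithmSpecification": {
            "trainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
            "trainingInputMode": "File",
        },
        "outputDataConfig": {"s3OutputPath": "s3://my-bucket/output/"},
        "resourceConfig": {
            "instanceType": "ml.m5.xlarge",
            "instanceCount": 1,
            "volumeSizeInGB": 10,
        },
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

# Submitting the resource hands the training run off to SageMaker's
# managed infrastructure while keeping the Kubernetes workflow.
api.create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com",
    version="v1",
    namespace="default",
    plural="trainingjobs",
    body=training_job,
)
```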

Amazon SageMaker Operators for Kubernetes are available today in select AWS regions.

Nov 25, 2019

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an ARM Cortex-A-class processor. Now, the requirement for Alexa Voice Service integration with AWS IoT Core has come down to 1MB and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All the device needs to be able to do is detect the wake word to start the Alexa functionality, which is a comparably simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you’re saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.
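As a sketch of what that looks like in code, assuming the Greengrass Stream Manager SDK for Python (the stream_manager package), a function running on the Greengrass core might define a local stream that exports to Kinesis; the stream and Kinesis names here are hypothetical:

```python
from stream_manager import (
    ExportDefinition,
    KinesisConfig,
    MessageStreamDefinition,
    StrategyOnFull,
    StreamManagerClient,
)

client = StreamManagerClient()

# Define a local stream that overwrites its oldest data when full and
# exports automatically to a Kinesis data stream.
client.create_message_stream(MessageStreamDefinition(
    name="SensorReadings",
    strategy_on_full=StrategyOnFull.OverwriteOldestData,
    export_definition=ExportDefinition(
        kinesis=[KinesisConfig(
            identifier="KinesisExport",
            kinesis_stream_name="my-kinesis-stream",
        )]
    ),
))

# Local code then just appends messages; connection management and
# retention are handled by Stream Manager.
client.append_message(stream_name="SensorReadings", data=b'{"temp": 21.5}')
```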

Also new are fleet provisioning for AWS IoT Greengrass, which makes it easier for businesses to quickly set up lots of new devices automatically, and secure tunneling for AWS IoT Device Management, which makes it easier for developers to remotely access a device and troubleshoot it. In addition, AWS IoT Core now features configurable endpoints.

Nov 19, 2019

Salesforce, AWS expand partnership to bring Amazon Connect to Service Cloud

Salesforce and AWS announced an expansion of their ongoing partnership, which goes back to a $400 million infrastructure services agreement in 2016 and was expanded last year to include data integration between the two companies. This year, Salesforce announced it will be offering AWS telephony and call transcription services with Amazon Connect as part of its Service Cloud call center solution.

“We have a strategic partnership with Amazon Web Services, which will allow customers to purchase Amazon Connect from us, and then it will be pre-integrated and out of the box to provide a full transcription of the call, and of course that’s alongside of an actual call recording of the call,” explained Patrick Beyries, VP of product management for Service Cloud.

It’s worth noting that the company will be partnering with other telephony vendors as well, so that customers can choose the Amazon solution or another from Cisco, Avaya or Genesys, Beyries said.

These telephony partnerships fill in a gap in the Service Cloud call center offering, and give Salesforce direct access to the call itself. The telephony vendors will handle call transcription and hand that off to Salesforce, which can then use its intelligence layer called Einstein to “read” the transcript and offer the CSR next best actions in real time, something the company has been able to do with interactions from chat and other channels, but couldn’t do with voice.

“As this conversation evolves, the consumer is explaining what their problem is, and Einstein is [monitoring] that conversation. As the conversation gets to a critical mass, Einstein begins to understand what the content is about and suggests a specific solution to the agent,” Beyries said.

Salesforce will begin piloting this new Service Cloud capability in the spring with general availability expected next summer.

Only last week, Salesforce announced a major partnership with Microsoft to move Salesforce Marketing Cloud to Azure. These announcements show Salesforce will continue to use multiple cloud partners when it makes sense for the business. Today, it’s Amazon’s turn.

Nov 14, 2019

AWS confirms reports it will challenge JEDI contract award to Microsoft

Surely just about everyone was surprised when the Department of Defense last month named Microsoft the winner of the decade-long, $10 billion JEDI cloud contract — none more so than Amazon, the company everyone had assumed all along would win. Today the company confirmed earlier reports that it is challenging the contract award in the Court of Federal Claims.

The Federal Times broke this story.

In a statement, an Amazon spokesperson suggested possible bias and other issues in the selection process. “AWS is uniquely experienced and qualified to provide the critical technology the U.S. military needs, and remains committed to supporting the DoD’s modernization efforts. We also believe it’s critical for our country that the government and its elected leaders administer procurements objectively and in a manner that is free from political influence.

“Numerous aspects of the JEDI evaluation process contained clear deficiencies, errors, and unmistakable bias — and it’s important that these matters be examined and rectified,” an Amazon spokesperson told TechCrunch.

It’s certainly worth noting that the president has not hidden his disdain for Amazon CEO and founder Jeff Bezos, who also owns The Washington Post. As I wrote in “Even after Microsoft wins, JEDI saga could drag on”:

Amazon, for instance, could point to Jim Mattis’ book where he wrote that the president told the then Defense Secretary to “screw Bezos out of that $10 billion contract.” Mattis says he refused, saying he would go by the book, but it certainly leaves the door open to a conflict question.

Oracle also filed a number of protests throughout the process, including one with the Government Accountability Office that was later rejected. It also went to court and the case was dismissed. All of the protests claimed that the process favored Amazon. The end result proved it didn’t.

The president inserted himself into the decision process in August, asking the defense secretary, Mark T. Esper, to investigate once again whether the procurement process somehow favored Amazon, and the week the contract was awarded, the White House canceled its subscription to The Washington Post.

In October, the decision finally came and the DoD chose Microsoft. Now Amazon is filing a challenge in federal court, and the JEDI saga really ain’t over until it’s over.
