Jul 31, 2018
--

The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open-source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open-source projects out there right now. It sits at the intersection of a number of industry trends, like containers, microservices and serverless computing, and makes it easier for enterprises to embrace them. Istio now has more than 200 contributors and the code has seen more than 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
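Canary releasing is, at its heart, weighted traffic splitting: send a small, stable slice of users to the new version and watch what happens. A minimal sketch of the idea in Python (illustrative only; Istio itself expresses this declaratively in its routing rules, and the names below are made up for the sketch):

```python
import zlib

def pick_version(user_id: str, canary_percent: int) -> str:
    """Deterministically route a stable slice of users to the canary.

    Hashing the user ID (rather than rolling a die per request) keeps
    each user pinned to the same version for the whole rollout.
    """
    bucket = zlib.crc32(user_id.encode()) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"

# Send roughly 5% of users to the new version.
versions = [pick_version(f"user-{i}", 5) for i in range(1000)]
canary_share = versions.count("v2-canary") / len(versions)
```

Widening the rollout is then just a matter of raising `canary_percent`; in Istio the equivalent knob is the weight on a route rule.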

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open-source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem many businesses face today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and while we hadn’t yet declared 1.0, we had a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open-source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open-source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

Jul 26, 2018
--

AWS continues to be the highlight of Amazon’s earnings

Amazon’s AWS division continues to be the highlight of the company’s balance sheet, once again showing in the second quarter the kind of growth Amazon wants from a newer business — especially one with dramatically better margins than its core retail operation.

Despite now running a grocery chain, the company’s AWS division — which has an operating margin over 25 percent, compared to the tiny margins on retail — grew 49 percent year-over-year in the quarter. It’s also up 49 percent when comparing the most recent six months to the same period last year. AWS is now on a run rate well north of $10 billion annually, generating more than $6 billion in revenue in the second quarter this year. Meanwhile, Amazon’s retail operations generated nearly $47 billion in revenue with a net income of just over $1.3 billion (unaudited). AWS generated $1.6 billion in operating income on its $6.1 billion in revenue.
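The margin gap is easy to verify from the figures above (amounts in billions of dollars; a back-of-the-envelope sketch):

```python
aws_revenue = 6.1           # Q2 2018 AWS revenue, $B
aws_operating_income = 1.6  # Q2 2018 AWS operating income, $B

aws_margin = aws_operating_income / aws_revenue * 100
print(f"AWS operating margin: {aws_margin:.1f}%")  # 26.2%, i.e. "over 25 percent"

# Annualizing one quarter gives the run rate:
print(f"Implied run rate: ${aws_revenue * 4:.1f}B")  # $24.4B, well north of $10B
```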

So, in short, Amazon’s dramatically more efficient AWS business is the biggest contributor to its actual net income. The company reported earnings of $5.07 per share, compared to analyst estimates of around $2.50 per share, on revenue of $52.9 billion. That revenue number fell short of what investors were looking for, so the stock isn’t doing much in after-hours trading, and Amazon remains in the race to become a company with a market cap of $1 trillion alongside Google, Apple and Microsoft.

This isn’t extremely surprising, as Amazon was one of the original harbingers of the move to a cloud computing-focused world, and, as a result, Microsoft and Google are now chasing it to capture as much share as possible. While Microsoft doesn’t break out Azure, the company says it’s one of its fastest-growing businesses, and Google’s “other revenue” segment, which includes Google Cloud Platform, also continues to be one of its fastest-growing divisions. Running a bunch of servers with access to on-demand compute, it turns out, is a pretty efficient business, one that helps offset the very slim margins Amazon has on the rest of its core business.

Jul 17, 2018
--

Walmart enlists Microsoft cloud in battle against Amazon

Once a seemingly unstoppable retail juggernaut, Walmart has been scrambling to define itself digitally in this Amazon-defined era. This morning, the company announced that it has struck a five-year deal with Microsoft, Amazon’s chief cloud competitor.

These sorts of partnerships are a regular occurrence for AWS — in fact, it announced one with Fortnite maker Epic Games just this morning. The companies involved tend to put on a big show in return for a discount on services, but Walmart and Microsoft are happily playing into the idea of teaming up to take on Amazon.

Microsoft’s certainly not making any bones about the competition. In an interview, Satya Nadella told The Wall Street Journal that the fight against Amazon “is absolutely core to this,” adding, “How do we get more leverage as two organizations that have depth and breadth and investment to be able to outrun our respective competition?”

Of course, neither Walmart nor Microsoft can be framed as an underdog in any respect, but Amazon’s stranglehold on online retail also can’t be overstated. Not even a massive outage at the height of Prime Day could do much to ruffle the company’s feathers.

Included in the deal are AI/ML technologies designed to help optimize the in-store experience — one of the key advantages Walmart brings to a battle against Amazon, which has mostly just dabbled in brick and mortar. For its part, Walmart has been testing cashier-less stores along the lines of Amazon’s, but the company has yet to officially unveil its plans in that space.

Jul 17, 2018
--

Standard Cognition raises another $5.5M to create a cashier-less checkout experience

As Amazon looks to expand its cashier-less grocery stores — called Amazon Go — across different regions, there’s at least one startup hoping to end up everywhere else beyond Amazon’s empire.

Standard Cognition aims to help businesses create that kind of checkout experience with machine vision, using image recognition to figure out that a specific person is picking up a bag of Cheetos and walking out the door with it. The company said it has raised an additional $5.5 million from CRV in what it is calling a seed round extension. The play, as with many startups, is to take a technology a massive company is going after — image recognition for cashier-less checkout — and offer it to the long tail of businesses rather than locking them into a single ecosystem.

Standard Cognition works with security cameras that have a bit more power than typical cameras to identify people who walk into a store. Customers check in with an app, and the cameras identify everything they are carrying and bill them as they exit the store. The company says it anonymizes that data, so there isn’t the kind of product tracking that might chase you around the internet, as you might find on other platforms.

“The platform is built at this point – we are now focused on releasing the platform to each retail partner that signs on with us,” said Michael Suswal, co-founder and COO. “Most of the surprises coming our way come from learning about how each retailer prefers to run their operations and store experiences. They are all a little different and require us to be flexible with how we deploy.”

It’s a toolkit that makes sense for larger and smaller retailers alike, especially as the cost of cameras and other devices that capture high-quality video or pack more processing power falls over time. Bringing the technology to smaller retailers and mom-and-pop stores could help them draw more foot traffic, or make it easier to keep tabs on which inventory is most popular or selling out fastest. It offers an added layer of data about how a store works, which could grow more important as Amazon looks to take over the grocery experience with stores like Amazon Go and its massive acquisition of Whole Foods.

“While we save no personal data in the cloud, and the system is built for privacy (no facial recognition among other safety features that come with being a non-cloud solution), we do use the internet for a couple of things,” Suswal said. “One of those things is to update our models and push them fleet wide. This is not a data push. It is light and allows us to make updates to models and add new features. We refer to it as the Tesla model, inspired by the way a driver can have a new feature when they wake up in the morning. We are also able to offer cross-store analytics to the retailer using the cloud, but no personal data is ever stored there.”

It’s advances in machine learning — and the frameworks and hardware that support it — that have made this kind of technology easier for smaller companies to build. There are already companies looking to be third-party providers for popular applications like voice recognition (think SoundHound) or machine vision (think Clarifai), all aiming to be an option outside of whatever the larger companies offer, like Alexa. It also means there is probably going to be a land grab, and that there will be other interpretations of what the cashier-less checkout experience looks like, but Standard Cognition is hoping it’ll get into enough stores to be an actual challenger to Amazon Go.

May 31, 2018
--

AWS launches pay-per-session pricing for its QuickSight BI tool

Amazon QuickSight, the company’s business intelligence tool for AWS, launched back in 2015, but it’s hard to say how much impact the service has made in the highly competitive BI market. The company is far from giving up on this project, though, and today it’s introducing a new pay-per-session pricing plan for access to QuickSight dashboards — one surely meant to give the service a bit of a lift in a market where Tableau and Microsoft’s Power BI have captured much of the mindshare.

Under the new pricing plan, creating and publishing dashboards will still cost $18 per user per month. Readers who only need access to these dashboards, though, now get a very simple option: they pay $0.30 per session, up to a maximum of $5 per user per month. Under this scheme, a session is defined as the first 30 minutes from login.
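In other words, a reader’s monthly bill is per-session billing with a cap; the arithmetic described above, sketched out:

```python
def reader_monthly_cost(sessions: int, per_session: float = 0.30,
                        cap: float = 5.00) -> float:
    """$0.30 per session (a session being 30 minutes from login),
    capped at $5 per reader per month."""
    return round(min(sessions * per_session, cap), 2)

print(reader_monthly_cost(3))   # 0.9  -- occasional readers pay cents
print(reader_monthly_cost(40))  # 5.0  -- heavy readers hit the cap
```

Past 17 sessions in a month the cap takes over, so even a reader who logs in daily tops out at $5.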

Previously, AWS offered two tiers of QuickSight plans: a $9 per user/month standard plan and a $24/user/month enterprise edition with support for Active Directory and encryption at rest.

That $9/user/month standard plan is still available and probably still makes sense for smaller companies, where those who build dashboards and those who consume them are often the same people. The new pricing plan replaces the existing enterprise edition.

QuickSight already significantly undercuts the pricing of services like Tableau and others, though we’re also talking about a somewhat more limited feature set. This new pay-per-session offering only widens the pricing gap.

“With highly scalable object storage in Amazon Simple Storage Service (Amazon S3), data warehousing at one-tenth the cost of traditional solutions in Amazon Redshift, and serverless analytics offered by Amazon Athena, customers are moving data into AWS at an unprecedented pace,” said Dorothy Nicholls, Vice President of Amazon QuickSight at AWS, in a canned comment. “What’s changed is that virtually all knowledge workers want easy access to that data and the insights that can be derived. It’s been cost-prohibitive to enable that access for entire companies until the Amazon QuickSight pay-per-session pricing — this is a game-changer in terms of information and analytics access.”

Current QuickSight users include the NFL, Siemens, Volvo and AutoTrader.

May 14, 2018
--

AWS introduces 1-click Lambda functions app for IoT

When Amazon introduced AWS Lambda in 2015, the notion of serverless computing was relatively unknown. It enables developers to deliver software without having to manage a server to do it. Instead, Amazon manages it all and the underlying infrastructure only comes into play when an event triggers a requirement. Today, the company released an app in the iOS App Store called AWS IoT 1-Click to bring that notion a step further.

The 1-click part of the name may be a bit optimistic, but the app is designed to give developers even quicker access to Lambda event triggers. These are designed specifically for simple single-purpose devices like a badge reader or a button. When you press the button, you could be connected to customer service or maintenance or whatever makes sense for the given scenario.

One particularly good example from Amazon is the Dash Button. These are simple buttons that users push to reorder goods like laundry detergent or toilet paper. Pushing the button connects the device to the internet via the home or business’s WiFi and sends a signal to the vendor to order the product in the pre-configured amount. AWS IoT 1-Click extends this capability to any developer, so long as the code runs on a supported device.

To use the new feature, you enter your existing account information, configure your WiFi and choose from a pre-configured list of devices and Lambda functions for the given device. Supported devices in this early release include the AWS IoT Enterprise Button, a commercialized version of the Dash Button, and the AT&T LTE-M Button.

Once you select a device, you define the project to trigger a Lambda function, or to send an SMS or email, as you prefer. Choose Lambda for an event trigger, then touch Next to move to the configuration screen, where you configure the trigger action. For instance, if pushing the button summons IT to a conference room, the trigger would send a page to IT saying there is a call for help in the given conference room.

Finally, choose the appropriate Lambda function, which should work correctly based on your configuration information.
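On the Lambda side, the function the button triggers is an ordinary handler. A minimal sketch in Python for the conference-room example (the event field names here are assumptions for illustration; check the payload your device actually delivers):

```python
def lambda_handler(event, context):
    """Page IT when a conference-room help button is pressed.

    AWS IoT 1-Click invokes the function with details about the device
    and the click; the exact fields read below are assumed for the sketch.
    """
    click_type = (
        event.get("deviceEvent", {})
             .get("buttonClicked", {})
             .get("clickType", "UNKNOWN")
    )
    room = (
        event.get("placementInfo", {})
             .get("attributes", {})
             .get("room", "unknown room")
    )
    message = f"Help requested ({click_type} press) in {room}"
    # A real function would publish `message` via SNS, email or a paging service.
    return {"statusCode": 200, "body": message}
```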

All of this obviously requires more than one click, and probably some testing and reconfiguring to make sure you’ve entered everything correctly, but the idea of an app for creating simple Lambda functions could help people with non-programming backgrounds configure buttons with simple functions, given some training on the configuration process.

It’s worth noting that the service is still in Preview, so you can download the app today, but you have to apply to participate at this time.

Apr 11, 2018
--

With Fargate, AWS wants to make containers more cloud native

At its re:Invent developer conference, AWS made so many announcements that even some of the company’s biggest launches only got a small amount of attention. While the company’s long-awaited Elastic Container Service for Kubernetes got quite a bit of press, the launch of the far more novel Fargate container service stayed under the radar.

When I talked to him earlier this week, AWS VP and Amazon CTO (and EDM enthusiast) Werner Vogels admitted as much. “I think some of the Fargate stuff got a bit lost in all the other announcements that there were,” he told me. “I think it is a major step forward in making containers more cloud native and we see quite a few of our customers jumping on board with Fargate.”

Fargate, if you haven’t followed along, is a technology for AWS’ Elastic Container Service (ECS) and Elastic Container Service for Kubernetes (EKS) that abstracts away all of the underlying infrastructure for running containers. You pick your container orchestration engine and the service does the rest. There’s no need to manage individual servers or clusters. Instead, you simply tell ECS or EKS that you want to launch a container with Fargate, define the CPU and memory requirements of your application and let the service handle the rest.
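With boto3, for example, “launch a container with Fargate” comes down to a launch type plus a task definition that carries the CPU and memory settings. A sketch that only builds the RunTask request (cluster, task and subnet names are placeholders):

```python
def fargate_run_task_params(cluster: str, task_def: str, subnets: list) -> dict:
    """Build an ECS RunTask request for a Fargate launch: no instances or
    cluster capacity to manage, just the task and its network settings."""
    return {
        "cluster": cluster,
        "taskDefinition": task_def,  # CPU/memory live in the task definition
        "launchType": "FARGATE",
        "count": 1,
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                "assignPublicIp": "ENABLED",
            }
        },
    }

params = fargate_run_task_params("demo-cluster", "web-app:3", ["subnet-abc123"])
# With credentials configured: boto3.client("ecs").run_task(**params)
```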

To Vogels, who also published a longer blog post on Fargate today, the service is part of the company’s mission to help developers focus on their applications — and not the infrastructure. “I always compare it a bit to the early days of cloud,” said Vogels. “Before we had AWS, there were only virtual machines. And many companies build successful businesses around it. But when you run virtual machines, you still have to manage the hardware. […] One of the things that happened when we introduced EC2 [the core AWS cloud computing service] in the early days, was sort of that it decoupled things from the hardware. […] I think that tremendously improved developer productivity.”

But even with the early containers tools, if you wanted to run them directly on AWS or even in ECS, you still had to do a lot of work that had little to do with actually running the containers. “Basically, it’s the same story,” Vogels said. “VMs became the hardware for the containers. And a significant amount of work for developers went into that orchestration piece.”

What Amazon’s customers wanted, however, was being able to focus on running their containers — not what Vogels called the “hands-on hardware-type of management.” “That was so pre-cloud,” he added and in his blog post today, he also notes that “container orchestration has always seemed to me to be very not cloud native.”

In Vogels’ view, it seems, if you are still worried about infrastructure, you’re not really cloud native. He also noted that the original promise of AWS was that AWS would worry about running the infrastructure while developers got to focus on what mattered for their businesses. It’s services like Fargate and maybe also Lambda that take this overall philosophy the furthest.

Even with a container service like ECS or EKS, though, the clusters still don’t run completely automatically and you still end up provisioning capacity that you don’t need all the time. The promise of Fargate is that it will auto-scale for you and that you only pay for the capacity you actually need.

“Our customers, they just want to build software, they just want to build their applications. They don’t want to be bothered with how to exactly map this container down to that particular virtual machine — which is what they had to do,” Vogels said. “With Fargate, you select the type of CPUs you want to use for a particular task and it will autoscale this for you. Meaning that you actually only have to pay for the capacity you use.”

When it comes to abstracting away infrastructure, though, Fargate does this for containers, but it’s worth noting that a serverless product like AWS Lambda takes it even further. For Vogels, this is a continuum, driven by customer demand. While AWS is clearly placing big bets on containers, he is also quite realistic about the fact that many companies will continue to use VMs for the foreseeable future. “VMs won’t go away,” he said.

With a serverless product like Lambda, you don’t even think about the infrastructure at all anymore, not even containers — you get to fully focus on the code and only pay for the execution of that code. And while Vogels sees the landscape of VMs, containers and serverless as a continuum, where customers move from one to the next, he also noted that AWS is seeing enterprises that are skipping over the container step and going all in on serverless right away.

Apr 4, 2018
--

AWS launches a cheaper single-zone version of its S3 storage service

AWS’ S3 storage service today launched a cheaper option for keeping data in the cloud — as long as developers are willing to give up a few 9s of availability in return for saving up to 20 percent compared to the standard S3 price for applications that need infrequent access. The name for this new S3 tier: S3 One Zone-Infrequent Access.

S3 was among the first services AWS offered. Over the years, the company added a few additional tiers to the standard storage service. There’s the S3 Standard tier with the promise of 99.999999999 percent durability and 99.99 percent availability and S3 Standard-Infrequent Access with the same durability promise and 99.9 percent availability. There’s also Glacier for cold storage.

Data stored in the Standard and Standard-Infrequent access tiers is replicated across three or more availability zones. As the name implies, the main difference between those and the One Zone-Infrequent Access tier is that with this cheaper option, all the data sits in only one availability zone. It’s still replicated across different machines, but if that zone goes down (or is destroyed), you can’t access your data.

Because of this, AWS only promises 99.5 percent availability and only offers a 99 percent SLA. In terms of features and durability, though, there’s no difference between this tier and the other S3 tiers.
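Opting into the tier is just a storage class on the object. A sketch building the S3 PutObject request with boto3’s parameter names (bucket and key are placeholders):

```python
def one_zone_put_params(bucket: str, key: str, body: bytes) -> dict:
    """Write an object to the cheaper single-AZ tier via its storage class."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "StorageClass": "ONEZONE_IA",  # vs. "STANDARD" or "STANDARD_IA"
    }

params = one_zone_put_params("my-backups", "2018/04/db.dump", b"...")
# With credentials configured: boto3.client("s3").put_object(**params)
```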

As Amazon CTO Werner Vogels noted in a keynote at the AWS Summit in San Francisco today, it’s the replication across availability zones that defines the storage cost. In his view, this new service should be used for data that is infrequently accessed but can be replicated.

An availability of 99.5 percent does mean that you should expect to experience a day or two per year where you can’t access your data, though. For some applications, that’s perfectly acceptable, and Vogels noted that he expects AWS customers to use this for secondary backup copies or for storing media files that can be replicated.
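The “day or two per year” follows directly from the availability figure:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def expected_downtime_hours(availability_percent: float) -> float:
    """Hours per year a service at the given availability may be unreachable."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

print(expected_downtime_hours(99.5))   # ~43.8 hours: just under two days
print(expected_downtime_hours(99.99))  # under an hour for S3 Standard
```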

Apr 4, 2018
--

AWS adds automated point-in-time recovery to DynamoDB

One of the joys of cloud computing is handing over your data to the cloud vendor and letting them handle the heavy lifting. Up until now that has meant they updated the software or scaled the hardware for you. Today, AWS took that to another level when it announced Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR).

With this new service, the company lets you simply enable the new backup tool, and the backup happens automatically. Amazon takes care of the rest, providing a continuous backup of all the data in your DynamoDB database.

But it doesn’t stop there: the backup system also acts as a recording of sorts. You can rewind your data set to any point in time in the backup, with “per second granularity,” up to 35 days in the past. What’s more, you can access the tool from the AWS Management Console, via an API call or via the AWS Command Line Interface (CLI).
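In boto3 terms, turning the feature on and rewinding are two separate calls. A sketch that only builds the requests (table names and the timestamp are placeholders):

```python
from datetime import datetime

def enable_pitr_params(table: str) -> dict:
    """Request for UpdateContinuousBackups: switch PITR on for a table."""
    return {
        "TableName": table,
        "PointInTimeRecoverySpecification": {"PointInTimeRecoveryEnabled": True},
    }

def restore_params(source: str, target: str, when: datetime) -> dict:
    """Request for RestoreTableToPointInTime: rewind into a new table."""
    return {
        "SourceTableName": source,
        "TargetTableName": target,
        "RestoreDateTime": when,  # any second within the 35-day window
    }

# With credentials configured:
#   client = boto3.client("dynamodb")
#   client.update_continuous_backups(**enable_pitr_params("orders"))
#   client.restore_table_to_point_in_time(
#       **restore_params("orders", "orders-restored", datetime(2018, 4, 3, 12, 0)))
```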

“We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict,” Amazon’s Randall Hunt wrote in the blog post announcing the new feature.

If you’re concerned about the 35-day limit, you needn’t be: the system is an adjunct to your regular on-demand backups, which you can keep for as long as you need.

Amazon’s Chief Technology Officer, Werner Vogels, who introduced the new service at the Amazon Summit in San Francisco today, said it doesn’t matter how much data you have. Even with a terabyte of data, you can make use of this service. “This is a truly powerful mechanism here,” Vogels said.

The new service is available in various regions today. You can learn about regional availability and pricing options here.

Apr 2, 2018
--

Apple, in a very Apple move, is reportedly working on its own Mac chips

Apple is planning to use its own chips for its Mac devices, which could replace the Intel chips currently running on its desktop and laptop hardware, according to a report from Bloomberg.

Apple already designs a lot of custom silicon, including chipsets like the W-series for its Bluetooth headphones, the S-series in its watches and its A-series iPhone chips, as well as customized GPUs for the new iPhones. In that sense, Apple has in a lot of ways built its own internal fabless chip firm, which makes sense as it looks for its devices to tackle more and more specific use cases and to reduce its reliance on third parties for components. Apple is already in the middle of a very public spat with Qualcomm over royalties, and while the Mac is something of a tertiary product in its lineup, it still contributes a significant portion of revenue to the company.

Creating an entire suite of custom silicon could do a lot for Apple, not least bringing the Mac into a system where its devices can talk to each other more efficiently. Apple already has a lot of tools to shift user activities between its devices, but making that more seamless makes it easier to keep users in the Apple ecosystem. If you’ve ever compared connecting headphones with a W1 chip to an iPhone against typical Bluetooth headphones, you’ve probably seen the difference, and that could be even more robust with Apple’s own chipset. Bloomberg reports that Apple may implement the chips as soon as 2020.

Intel may be the clear loser here, and the market is reflecting that: Intel’s stock is down nearly 8 percent after the report came out, as the move would be a clear shift away from the company’s architecture, where it has long held its ground, toward Apple’s own custom designs. Apple is also not the only company looking to design its own silicon; Amazon is reportedly exploring its own AI chips for Alexa, another move to create lock-in around its ecosystem. And while the biggest players are looking at their own architectures, there’s an entire crop of startups getting a lot of funding to build custom silicon geared toward AI.

Apple declined to comment.
