Oct 09, 2020

How Roblox completely transformed its tech stack

Picture yourself in the role of CIO at Roblox in 2017.

At that point, the gaming platform and publishing system, which launched in 2005, was growing fast, but its underlying technology was aging: a single data center in Chicago plus a handful of third-party partners, including AWS, all running bare-metal (nonvirtualized) servers. At a time when users have precious little patience for outages, your uptime was just two nines, or roughly 99% (five nines, or 99.999%, is considered the gold standard).
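To put those availability figures in concrete terms, here is a quick back-of-the-envelope calculation (ours, not from the original article) of how much downtime each level of "nines" allows per year:

```python
# Rough downtime budgets implied by common availability targets.
# Illustrative only; assumes a 365-day year.
HOURS_PER_YEAR = 365 * 24

for label, availability in [("two nines", 0.99), ("five nines", 0.99999)]:
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{label} ({availability:.3%} uptime): "
          f"~{downtime_hours:.2f} hours of downtime per year "
          f"(~{downtime_hours * 60:.0f} minutes)")
```

At two nines, that works out to roughly 88 hours of downtime a year; at five nines, about five minutes.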

Remarkably, Roblox was popular in spite of this, but the company's leadership knew it couldn't keep delivering that kind of performance, especially as the platform was rapidly gaining users. The company needed to call in the technology cavalry, which is essentially what it did when it hired Dan Williams in 2017.

Williams has a history of solving these kinds of intractable infrastructure issues, with a background that includes a gig at Facebook between 2007 and 2011, where he worked on the technology to help the young social network scale to millions of users. Later, he worked at Dropbox, where he helped build a new internal network, leading the company’s move away from AWS, a major undertaking involving moving more than 500 petabytes of data.

When Roblox approached him in mid-2017, he jumped at the chance to take on another major infrastructure challenge. While the company is still in the midst of the transition to a new, modern tech stack today, we sat down with Williams to learn how he put it on the road to a cloud-native, microservices-focused system backed by its own worldwide network of edge data centers.

Scoping the problem

Sep 24, 2020

Webinar October 14: Percona, AWS, & ScienceLogic – Converting DBaaS to a Fully Managed Solution

Database-as-a-service (DBaaS) can be thought of as a platform for managing an organization's database environment(s). One of the most well-known DBaaS platforms is Amazon Aurora, powered by AWS.

In this webinar, Ananias Tsalouchidis, Senior MySQL DBA at Percona, will discuss how Percona can convert your DBaaS to a fully managed solution: ensuring Aurora is properly optimized for better application performance, getting the architecture and design right, improving monitoring and troubleshooting, and handling other mission-critical platform operations.

We’ll be joined by Richard Chart, Chief Scientist at ScienceLogic, who will discuss the hands-on implications of working with Percona and using our technology to support ScienceLogic’s growing needs. In addition, we’ll be joined by technical experts Vijay Karumajji and Aditya Samant, MySQL Specialist Solutions Architects at AWS, who will touch on the many benefits of Aurora.

Please join Ananias Tsalouchidis, Richard Chart, Vijay Karumajji, and Aditya Samant on Wednesday, October 14, 2020, at 1:00 pm EDT for the webinar “Percona, AWS, & ScienceLogic – Converting DBaaS to a Fully Managed Solution”.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

Aug 05, 2020

Amazon inks cloud deal with Airtel in India

Amazon has found a new partner to expand the reach of its cloud services business — AWS — in India, the world’s second largest internet market.

On Wednesday, the e-commerce giant announced it has partnered with Bharti Airtel, the third-largest telecom operator in India with more than 300 million subscribers, to sell a wide range of AWS offerings under the Airtel Cloud brand to small, medium, and large businesses in the country.

The deal could help AWS, which leads the cloud market in India, further expand its dominance in the country. The move follows a similar deal that Reliance Jio — India’s largest telecom operator, which has raised more than $20 billion in recent months from Google, Facebook and a roster of other high-profile investors — struck with Microsoft last year to sell cloud services to small businesses. The two announced a 10-year partnership to “serve millions of customers.”

Airtel, which serves over 2,500 large enterprises and more than a million emerging businesses, itself signed a similar cloud deal with Google in January this year. That partnership is still in place, Airtel said.

“AWS brings over 175 services to the table. We pretty much support any workload on the cloud. We have the largest and the most vibrant community of customers,” said Puneet Chandok, President of AWS in India and South Asia, on a call with reporters Wednesday noon.

The two companies, which signed a similar agreement in 2015, will also collaborate on building new services and help existing customers migrate to Airtel Cloud, they said.

Today’s deal illustrates Airtel’s push to build businesses beyond its telecom venture, said Harmeen Mehta, Global CIO and Head of Cloud and Security Business at Airtel, on the call. Last month, Airtel partnered with Verizon — TechCrunch’s parent company — to sell BlueJeans video conferencing service to business customers in India.

Deals with carriers were very common in India a decade ago, as tech giants rushed to amass users in the country. That the same playbook is being used again now illustrates how early a phase cloud adoption is still in within the nation.

Nearly half a billion people in India came online in the last decade. And slowly, small businesses and merchants are beginning to use digital tools and storage services, and to accept online payments.

India has emerged as one of the leading growth markets for cloud services. The country’s public cloud services market is estimated to reach $7.1 billion by 2024, according to research firm IDC.

Jul 31, 2020

Even as cloud infrastructure growth slows, revenue rises over $30B for quarter

The cloud market is coming into its own during the pandemic as the novel coronavirus forced many companies to accelerate plans to move to the cloud, even while the market was beginning to mature on its own.

This week, the big three cloud infrastructure vendors — Amazon, Microsoft and Google — all reported their earnings, and while the numbers showed that growth was beginning to slow down, revenue continued to increase at an impressive rate, surpassing $30 billion for a quarter for the first time, according to Synergy Research Group numbers.

Jul 09, 2020

Docker partners with AWS to improve container workflows

Docker and AWS today announced a new collaboration that introduces a deep integration between Docker’s Compose and Desktop developer tools and AWS’s Elastic Container Service (ECS) and ECS on AWS Fargate. Previously, the two companies note, the workflow to take Compose files and run them on ECS was often challenging for developers. Now, the two companies have simplified this process to make switching between running containers locally and on ECS far easier.
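To give a sense of the plumbing this integration hides, here is a minimal, hypothetical boto3 sketch of registering a task definition and launching it on Fargate by hand; the cluster, role, subnet and image names are placeholders, and this manual boilerplate is roughly what the new Compose workflow is meant to replace:

```python
import boto3

# Hypothetical sketch of manual ECS-on-Fargate deployment, the kind of
# boilerplate the Docker Compose integration abstracts away.
# All names, ARNs and IDs below are placeholders.
ecs = boto3.client("ecs")

task_def = ecs.register_task_definition(
    family="demo-web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:alpine",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
)

ecs.run_task(
    cluster="demo-cluster",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```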

Image: Docker/AWS architecture overview

“With a large number of containers being built using Docker, we’re very excited to work with Docker to simplify the developer’s experience of building and deploying containerized applications to AWS,” said Deepak Singh, the VP for compute services at AWS. “Now customers can easily deploy their containerized applications from their local Docker environment straight to Amazon ECS. This accelerated path to modern application development and deployment allows customers to focus more effort on the unique value of their applications, and less time on figuring out how to deploy to the cloud.”

In a bit of a surprise move, Docker last year sold off its enterprise business to Mirantis to solely focus on cloud-native developer experiences.

“In November, we separated the enterprise business, which was very much focused on operations, CXOs and a direct sales model, and we sold that business to Mirantis,” Docker CEO Scott Johnston told TechCrunch’s Ron Miller earlier this year. “At that point, we decided to focus the remaining business back on developers, which was really Docker’s purpose back in 2013 and 2014.”

Today’s move is an example of this new focus, given that the workflow issues this partnership addresses had been around for quite a while already.

It’s worth noting that Docker also recently engaged in a strategic partnership with Microsoft to integrate the Docker developer experience with Azure’s Container Instances.

Jun 24, 2020

AWS launches Amazon Honeycode, a no-code mobile and web app builder

AWS today announced the beta launch of Amazon Honeycode, a new, fully managed low-code/no-code development tool that aims to make it easy for anybody in a company to build their own applications. All of this, of course, is backed by a database in AWS and a web-based, drag-and-drop interface builder.

Developers can build applications for up to 20 users for free. After that, they pay per user and for the storage their applications take up.

“Customers have told us that the need for custom applications far outstrips the capacity of developers to create them,” said AWS VP Larry Augustin in the announcement. “Now with Amazon Honeycode, almost anyone can create powerful custom mobile and web applications without the need to write code.”

Like similar tools, Honeycode provides users with a set of templates for common use cases like to-do list applications, customer trackers, surveys, schedules and inventory management. Traditionally, AWS argues, a lot of businesses have relied on shared spreadsheets to do these things.

“Customers try to solve for the static nature of spreadsheets by emailing them back and forth, but all of the emailing just compounds the inefficiency because email is slow, doesn’t scale, and introduces versioning and data syncing errors,” the company notes in today’s announcement. “As a result, people often prefer having custom applications built, but the demand for custom programming often outstrips developer capacity, creating a situation where teams either need to wait for developers to free up or have to hire expensive consultants to build applications.”

It’s no surprise, then, that Honeycode uses a spreadsheet view as its core data interface, given how familiar virtually every potential user is with that concept. To manipulate data, users can work with standard spreadsheet-style formulas, which seems to be about the closest the service gets to actual programming. “Builders,” as AWS calls Honeycode users, can also set up notifications, reminders and approval workflows within the service.

AWS says these databases can easily scale up to 100,000 rows per workbook. With this, AWS argues, users can then focus on building their applications without having to worry about the underlying infrastructure.

As of now, it doesn’t look like users will be able to bring in any outside data sources, though that may still be on the company’s roadmap. On the other hand, these kinds of integrations would also complicate the process of building an app and it looks like AWS is trying to keep things simple for now.

Honeycode currently only runs in the AWS US West region in Oregon but is coming to other regions soon.

Among Honeycode’s first customers are SmugMug and Slack.

“We’re excited about the opportunity that Amazon Honeycode creates for teams to build apps to drive and adapt to today’s ever-changing business landscape,” said Brad Armstrong, VP of Business and Corporate Development at Slack in today’s release. “We see Amazon Honeycode as a great complement and extension to Slack and are excited about the opportunity to work together to create ways for our joint customers to work more efficiently and to do more with their data than ever before.”

Jun 05, 2020

Slack’s new integration deal with AWS could also be about tweaking Microsoft

Slack and Amazon announced a big integration late yesterday afternoon. As part of the deal, Slack will use Amazon Chime for its call feature, while reiterating its commitment to use AWS as its preferred cloud provider to run its infrastructure. At the same time, Amazon has agreed to offer Slack as an option for all internal communications.

“Some parts of Amazon had licensed Slack before, but this is the first time it will be offered as an option to all employees,” an Amazon spokesperson told TechCrunch.

Make no mistake, this is a big deal as the SaaS communications tool deepens its ties with AWS, but this agreement could also be about slighting Microsoft and its rival Teams product by making a deal with a cloud rival. In the past, Slack CEO Stewart Butterfield has had choice words for Microsoft, saying the Redmond technology giant sees his company as an “existential threat.”

Whether or not that’s true — Teams is but one piece of a huge technology company — it’s impossible not to look at the deal in this context. Aligning more deeply with AWS sends a message to Microsoft, whose Azure infrastructure services compete with AWS.

Butterfield didn’t say that of course. He talked about how synergistic the deal was. “Strategically partnering with AWS allows both companies to scale to meet demand and deliver enterprise-grade offerings to our customers. By integrating AWS services with Slack’s channel-based messaging platform, we’re helping teams easily and seamlessly manage their cloud infrastructure projects and launch cloud-based services without ever leaving Slack,” he said in a statement.

The deal also includes several other elements including integrating AWS Key Management Service with Slack Enterprise Key Management (EKM) for encryption key management, deeper alignment with AWS’s chatbot service and direct integration with AWS AppFlow to enable secure transfer of data between Slack and Amazon S3 storage and the Amazon Redshift data warehouse.

AWS CEO Andy Jassy saw it as a pure integration play. “Together, AWS and Slack are giving developer teams the ability to collaborate and innovate faster on the front end with applications, while giving them the ability to efficiently manage their backend cloud infrastructure,” Jassy said in a statement.

Like any good deal, it’s good for both sides. Slack gets a big customer in AWS and AWS now has Slack directly integrating more of its services. One of the reasons enterprise users are so enamored with Slack is the ability to get work done in a single place without constantly having to change focus and move between interfaces.

This deal will provide more of that for common customers, while tweaking a common rival. That’s what you call win-win.

May 01, 2020

In spite of pandemic (or maybe because of it), cloud infrastructure revenue soars

It’s fair to say that even before the impact of COVID-19, companies had begun a steady march to the cloud. Maybe it wasn’t fast enough for AWS, as Andy Jassy made clear in his 2019 re:Invent keynote, but it was happening all the same, and the steady revenue increases across the cloud infrastructure market bore that out.

As we look at the most recent quarter’s earnings reports from the main players in the market, it seems the pandemic and the resulting economic fallout have done little to slow that down. In fact, they may be contributing to the market’s growth.

According to numbers supplied by Synergy Research, the cloud infrastructure market totaled $29 billion in revenue for Q1 2020.

Synergy’s John Dinsdale, who has been watching this market for a long time, says the pandemic could be contributing modestly to some of that growth. He doesn’t see these companies getting through this entirely unscathed either, but as companies shift operations out of their offices, that shift could be part of the reason for the increased demand we saw in the first quarter.

“For sure, the pandemic is causing some issues for cloud providers, but in uncertain times, the public cloud is providing flexibility and a safe haven for enterprises that are struggling to maintain normal operations. Cloud provider revenues continue to grow at truly impressive rates, with AWS and Azure in aggregate now having an annual revenue run rate of well over $60 billion,” Dinsdale said in a statement.

AWS led the way with about a third of the market, or more than $10 billion in quarterly revenue, as it continues to hold a substantial lead in market share. Microsoft came in second, growing at a brisker 59% and capturing 18% of the market. While Microsoft doesn’t break out its Azure numbers, using Synergy’s figures that works out to around $5.2 billion in Azure revenue. Meanwhile, Google came in third with $2.78 billion.
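That Azure estimate is easy to reproduce from the figures quoted here; a quick sanity check (the published shares are rounded, so the other vendors' numbers won't reconcile exactly):

```python
# Back-of-the-envelope check of the Azure estimate quoted above:
# Synergy pegs the Q1 2020 cloud infrastructure market at about $29B,
# and Microsoft's share at roughly 18%.
market_total_billion = 29.0
azure_share = 0.18

print(f"Implied Azure revenue: ~${market_total_billion * azure_share:.2f}B")
# -> Implied Azure revenue: ~$5.22B, i.e. "around $5.2 billion"
```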

If you’re keeping track of market share at home, that comes out to 32% for AWS, 18% for Microsoft and 8% for Google. This split has remained fairly steady, although Microsoft has managed to gain a few percentage points over the last several quarters as its overall growth rate outpaces Amazon’s.

Apr 22, 2020

AWS launches Amazon AppFlow, its new SaaS integration service

AWS today launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. As with similar services, such as Microsoft’s Power Automate, developers can trigger these flows based on specific events, at pre-set times or on demand.

Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than as a way to automate workflows, and while the data flow can be bidirectional, AWS’s announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming the data as it moves through the service.
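For developers who prefer the SDK to the console, AppFlow is also exposed through the AWS APIs; as a minimal, hypothetical boto3 sketch, triggering an existing on-demand flow might look like this (the flow name is a placeholder, and the flow itself would have been configured beforehand):

```python
import boto3

# Hypothetical sketch: start an existing, on-demand AppFlow flow via the SDK.
# "salesforce-to-s3" is a placeholder; the flow would be defined beforehand,
# e.g. in the AppFlow console.
appflow = boto3.client("appflow")

execution = appflow.start_flow(flowName="salesforce-to-s3")
print("Started execution:", execution.get("executionId"))

# Inspect the flow's configuration and current status.
details = appflow.describe_flow(flowName="salesforce-to-s3")
print("Flow status:", details["flowStatus"])
```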

“Developers spend huge amounts of time writing custom integrations so they can pass data between SaaS applications and AWS services so that it can be analysed; these can be expensive and can often take months to complete,” said AWS principal advocate Martin Beeby in today’s announcement. “If data requirements change, then costly and complicated modifications have to be made to the integrations. Companies that don’t have the luxury of engineering resources might find themselves manually importing and exporting data from applications, which is time-consuming, risks data leakage, and has the potential to introduce human error.”

Every flow (which AWS defines as a call to a source application to transfer data to a destination) costs $0.001 per run, though, in typical AWS fashion, there is also a charge for data processing (starting at $0.02 per GB).
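Using those list prices, a rough monthly estimate is simple arithmetic (illustrative only; actual AWS billing may differ):

```python
# Rough AppFlow cost estimate from the list prices quoted above:
# $0.001 per flow run plus $0.02 per GB of data processed.
PRICE_PER_RUN = 0.001
PRICE_PER_GB = 0.02

def monthly_cost(runs_per_day: int, gb_per_run: float, days: int = 30) -> float:
    runs = runs_per_day * days
    return runs * PRICE_PER_RUN + runs * gb_per_run * PRICE_PER_GB

# Example: an hourly flow moving about 0.5 GB per run.
print(f"${monthly_cost(runs_per_day=24, gb_per_run=0.5):.2f} per month")  # ~$7.92
```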

“Our customers tell us that they love having the ability to store, process, and analyze their data in AWS. They also use a variety of third-party SaaS applications, and they tell us that it can be difficult to manage the flow of data between AWS and these applications,” said Kurt Kufeld, Vice President, AWS. “Amazon AppFlow provides an intuitive and easy way for customers to combine data from AWS and SaaS applications without moving it across the public Internet. With Amazon AppFlow, our customers bring together and manage petabytes, even exabytes, of data spread across all of their applications – all without having to develop custom connectors or manage underlying API and network connectivity.”

At this point, the number of supported services remains comparatively low, with only 14 possible sources and four destinations (Amazon Redshift and S3, as well as Salesforce and Snowflake). Sometimes, depending on the source you select, the only possible destination is Amazon’s S3 storage service.

Over time, the number of integrations will surely increase, but for now, it feels like there’s still quite a bit more work to do for the AppFlow team to expand the list of supported services.

AWS has long left this market to competitors, even though it has tools like AWS Step Functions for building serverless workflows across AWS services and EventBridge for connecting applications. Interestingly, EventBridge currently supports a far wider range of third-party sources, but as the name implies, its focus is more on triggering events in AWS than on moving data between applications.
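To illustrate that contrast, here is a minimal, hypothetical boto3 sketch of pushing a custom event onto the default EventBridge bus; the source and detail type are made up, and the point is simply that EventBridge routes individual events rather than transferring bulk records the way AppFlow does:

```python
import json
import boto3

# Hypothetical sketch: publish a custom application event to the default
# EventBridge event bus. Source and detail-type values are illustrative.
events = boto3.client("events")

response = events.put_events(
    Entries=[
        {
            "Source": "com.example.orders",
            "DetailType": "OrderPlaced",
            "Detail": json.dumps({"orderId": "1234", "total": 42.50}),
        }
    ]
)
print("Failed entries:", response["FailedEntryCount"])
```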

Apr 21, 2020

AWS and Facebook launch an open-source model server for PyTorch

AWS and Facebook today announced two new open-source projects around PyTorch, the popular open-source machine learning framework. The first of these is TorchServe, a model-serving framework for PyTorch that will make it easier for developers to put their models into production. The other is TorchElastic, a library that makes it easier for developers to build fault-tolerant training jobs on Kubernetes clusters, including AWS’s EC2 spot instances and Elastic Kubernetes Service.
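Once a model is archived and registered, TorchServe exposes inference over a simple REST endpoint. A minimal sketch, assuming a model named "my_model" has already been registered and the server is running locally on its default port:

```python
import requests

# Minimal sketch: call TorchServe's inference REST API.
# Assumes a model named "my_model" was archived (torch-model-archiver),
# registered with the server, and torchserve is running on localhost:8080.
url = "http://localhost:8080/predictions/my_model"

with open("kitten.jpg", "rb") as f:   # placeholder input file
    response = requests.post(url, data=f)

print(response.status_code, response.json())
```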

In many ways, the two companies are taking what they have learned from running their own machine learning systems at scale and are putting this into the project. For AWS, that’s mostly SageMaker, the company’s machine learning platform, but as Bratin Saha, AWS VP and GM for Machine Learning Services, told me, the work on PyTorch was mostly motivated by requests from the community. And while there are obviously other model servers like TensorFlow Serving and the Multi Model Server available today, Saha argues that it would be hard to optimize those for PyTorch.

“If we tried to take some other model server, we would not be able to quote optimize it as much, as well as create it within the nuances of how PyTorch developers like to see this,” he said. AWS has lots of experience in running its own model servers for SageMaker that can handle multiple frameworks, but the community was asking for a model server that was tailored toward how they work. That also meant adapting the server’s API to what PyTorch developers expect from their framework of choice, for example.

As Saha told me, the server that AWS and Facebook are now launching as open source is similar to what AWS is using internally. “It’s quite close,” he said. “We actually started with what we had internally for one of our model servers and then put it out to the community, worked closely with Facebook, to iterate and get feedback — and then modified it so it’s quite close.”

Bill Jia, Facebook’s VP of AI Infrastructure, also told me he’s very happy about how his team and the community have pushed PyTorch forward in recent years. “If you look at the entire industry community — a large number of researchers and enterprise users are using AWS,” he said. “And then we figured out if we can collaborate with AWS and push PyTorch together, then Facebook and AWS can get a lot of benefits, but more so, all the users can get a lot of benefits from PyTorch. That’s our reason for why we wanted to collaborate with AWS.”

As for TorchElastic, the focus here is on allowing developers to create training systems that can work on large distributed Kubernetes clusters where you might want to use cheaper spot instances. Those are preemptible, though, so your system has to be able to handle that, while traditionally, machine learning training frameworks often expect a system where the number of instances stays the same throughout the process. That, too, is something AWS originally built for SageMaker. There, it’s fully managed by AWS, though, so developers never have to think about it. For developers who want more control over their dynamic training systems or to stay very close to the metal, TorchElastic now allows them to recreate this experience on their own Kubernetes clusters.
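TorchElastic’s launcher handles that orchestration itself, but the idea rests on the familiar checkpoint-and-resume pattern sketched below in plain PyTorch (this is not the TorchElastic API; the model and data are toy placeholders):

```python
import os
import torch
from torch import nn, optim

# Toy sketch of the checkpoint-and-resume pattern that elastic training on
# preemptible (spot) instances relies on: if a worker is reclaimed and the
# job restarts, training resumes from the last saved step instead of zero.
# Plain PyTorch, not the TorchElastic API; model and data are placeholders.
CKPT = "checkpoint.pt"

model = nn.Linear(10, 1)
opt = optim.SGD(model.parameters(), lr=0.01)
start_step = 0

if os.path.exists(CKPT):                      # resume after a restart
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    start_step = state["step"] + 1

for step in range(start_step, 100):
    x, y = torch.randn(32, 10), torch.randn(32, 1)   # stand-in for a data loader
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

    if step % 10 == 0:                        # checkpoint periodically
        torch.save({"model": model.state_dict(),
                    "opt": opt.state_dict(),
                    "step": step}, CKPT)
```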

AWS has a bit of a reputation when it comes to open source and its engagement with the open-source community. In this case, though, it’s nice to see AWS lead the way to bring some of its own work on building model servers, for example, to the PyTorch community. In the machine learning ecosystem, that’s very much expected, and Saha stressed that AWS has long engaged with the community as one of the main contributors to MXNet and through its contributions to projects like Jupyter, TensorFlow and libraries like NumPy.
