Sep
05
2021
--

The time Animoto almost brought AWS to its knees

Today, Amazon Web Services is a mainstay in the cloud infrastructure services market, a $60 billion juggernaut of a business. But in 2008, it was still new, working to keep its head above water and handle growing demand for its cloud servers. In fact, 15 years ago last week, the company launched Amazon EC2 in beta. From that point forward, AWS offered startups effectively unlimited compute power, a primary selling point at the time.

EC2 was one of the first real attempts to sell elastic computing at scale — that is, server resources that would scale up as you needed them and go away when you didn’t. As Jeff Bezos said in an early sales presentation to startups back in 2008, “you want to be prepared for lightning to strike, […] because if you’re not that will really generate a big regret. If lightning strikes, and you weren’t ready for it, that’s kind of hard to live with. At the same time you don’t want to prepare your physical infrastructure, to kind of hubris levels either in case that lightning doesn’t strike. So, [AWS] kind of helps with that tough situation.”

An early test of that value proposition occurred when one of its startup customers, Animoto, scaled from 25,000 to 250,000 users over four days in 2008, shortly after launching the company’s Facebook app at South by Southwest.

At the time, Animoto was an app aimed at consumers that allowed users to upload photos and turn them into a video with a backing music track. While that product may sound tame today, it was state of the art back in those days, and it used up a fair amount of computing resources to build each video. It was an early representation of not only Web 2.0 user-generated content, but also the marriage of mobile computing with the cloud, something we take for granted today.

For Animoto, launched in 2006, choosing AWS was a risky proposition, but the company found that trying to run its own infrastructure was even more of a gamble because of the dynamic nature of the demand for its service. Spinning up its own servers would have involved huge capital expenditures. Animoto initially went that route, building its own infrastructure before attracting initial funding, before turning its attention to AWS, explained Brad Jefferson, the company’s co-founder and CEO.

“We started building our own servers, thinking that we had to prove out the concept with something. And as we started to do that and got more traction from a proof-of-concept perspective and started to let certain people use the product, we took a step back, and were like, well it’s easy to prepare for failure, but what we need [is] to prepare for success,” Jefferson told me.

Going with AWS may seem like an easy decision knowing what we know today, but in 2007 the company was really putting its fate in the hands of a mostly unproven concept.

“It’s pretty interesting just to see how far AWS has gone and EC2 has come, but back then it really was a gamble. I mean we were talking to an e-commerce company [about running our infrastructure]. And they’re trying to convince us that they’re going to have these servers and it’s going to be fully dynamic and so it was pretty [risky]. Now in hindsight, it seems obvious but it was a risk for a company like us to bet on them back then,” Jefferson told me.

Animoto had to not only trust that AWS could do what it claimed, but also had to spend six months rearchitecting its software to run on Amazon’s cloud. But as Jefferson crunched the numbers, the choice made sense. At the time, Animoto’s business model was free for a 30-second video, $5 for a longer clip or $30 for a year. As he tried to model the level of resources his company would need to make that model work, it got really difficult, so he and his co-founders decided to bet on AWS and hope it worked when and if a surge of usage arrived.

That test came the following year at South by Southwest when the company launched a Facebook app, which led to a surge in demand, in turn pushing the limits of AWS’s capabilities at the time. A couple of weeks after the startup launched its new app, interest exploded and Amazon was left scrambling to find the appropriate resources to keep Animoto up and running.

Dave Brown, who today is Amazon’s VP of EC2 and was an engineer on the team back in 2008, said that “every [Animoto] video would initiate, utilize and terminate a separate EC2 instance. For the prior month they had been using between 50 and 100 instances [per day]. On Tuesday their usage peaked at around 400, Wednesday it was 900, and then 3,400 instances as of Friday morning.” Animoto was able to keep up with the surge of demand, and AWS was able to provide the necessary resources to do so. Its usage eventually peaked at 5,000 instances before it settled back down, proving in the process that elastic computing could actually work.

At that point though, Jefferson said his company wasn’t merely trusting EC2’s marketing. It was on the phone regularly with AWS executives making sure their service wouldn’t collapse under this increasing demand. “And the biggest thing was, can you get us more servers, we need more servers. To their credit, I don’t know how they did it — if they took away processing power from their own website or others — but they were able to get us where we needed to be. And then we were able to get through that spike and then sort of things naturally calmed down,” he said.

The story of keeping Animoto online became a main selling point for the company, and Amazon was actually the first company to invest in the startup besides friends and family. It raised a total of $30 million along the way, with its last funding coming in 2011. Today, the company is more of a B2B operation, helping marketing departments easily create videos.

While Jefferson didn’t discuss specifics concerning costs, he pointed out that the price of trying to maintain servers that would sit dormant much of the time was not a tenable approach for his company. Cloud computing turned out to be the perfect model and Jefferson says that his company is still an AWS customer to this day.

While the goal of cloud computing has always been to provide as much computing as you need on demand whenever you need it, this particular set of circumstances put that notion to the test in a big way.

Today the idea of having trouble generating 3,400 instances seems quaint, especially when you consider that Amazon launches 60 million instances every day now, but back then it was a huge challenge, one that helped show startups that the idea of elastic computing was more than theory.

Aug
31
2021
--

Extra Crunch roundup: Toast and Freshworks S-1s, pre-pitch tips, flexible funding lessons

The digital transformation currently sweeping society has likely reached your favorite local restaurant.

Since 2013, Boston-based Toast has offered bars and eateries a software platform that lets them manage orders, payments and deliveries.

Over the last year, its customers have processed more than $38 billion in gross payment volume, so Alex Wilhelm analyzed the company’s S-1 for The Exchange with great interest.

“Toast was last valued at just under $5 billion when it last raised, per Crunchbase data,” he writes. “And folks are saying that it could be worth $20 billion in its debut. Does that square with the numbers?”


Full Extra Crunch articles are only available to members.
Use discount code ECFriday to save 20% off a one- or two-year subscription.


Airbnb, DoorDash and Coinbase each debuted at past Y Combinator Demo Days; as of this writing, they employ a combined 10,000 people.

Today and tomorrow, TechCrunch reporters will cover the proceedings at YC’s Summer 2021 Demo Day. In addition to writing up founder pitches, they’ll also rank their favorites.

Even remotely, I can feel a palpable sense of excitement radiating from our team — anything can happen at YC Demo Day, so sign up for Extra Crunch to follow the action.

Thanks very much for reading; I hope you have an excellent week.

Walter Thompson
Senior Editor, TechCrunch
@yourprotagonist

How Amazon EC2 grew from a notion into a foundational element of cloud computing

Image Credits: Ron Miller/TechCrunch

In August 2006, AWS activated its EC2 cloud-based virtual computer, a milestone in the cloud infrastructure giant’s development.

“You really can’t overstate what Amazon was able to accomplish,” writes enterprise reporter Ron Miller.

In the 15 years since, EC2 has enabled clients of any size to test and run their own applications on AWS’ virtual machines.

To learn more about a fundamental technological shift that “would help fuel a whole generation of startups,” Ron interviewed EC2 VP Dave Brown, who built and led the Amazon EC2 Frontend team.

3 ways to become a better manager in the work-from-home era


Image Credits: Jasmin Merdan / Getty Images

Most managers agree that OKRs foster transparency and accountability, but running a team effectively has different challenges when workers are attending all-hands meetings from their kitchen tables.

Instead of just discussing key metrics before board meetings or performance reviews, make them part of the day-to-day culture, recommends Jeremy Epstein, Gtmhub’s CMO.

“Strengthen your team by creating authentic workplace transparency using numbers as a universal language and providing meaning behind your team’s work.”

The pre-pitch: 7 ways to build relationships with VCs


Image Credits: Andrii Yalanskyi / Getty Images

Many founders must overcome a few emotional hurdles before they’re comfortable pitching a potential investor face-to-face.

To alleviate that pressure, Unicorn Capital founder Evan Fisher recommends that entrepreneurs use pre-pitch meetings to build and strengthen relationships before asking for a check:

“This is the ‘we actually aren’t looking for money; we just want to be friends for now’ pitch that gets you on an investor’s radar so that when it’s time to raise your next round, they’ll be far more likely to answer the phone because they actually know who you are.”

Pre-pitches are good for more than curing the jitters: These conversations help founders get a better sense of how VCs think and sometimes lead to serendipitous outcomes.

“Investors are opportunists by necessity,” says Fisher, “so if they like the cut of your business’s jib, you never know — the FOMO might start kicking hard.”

Lessons from COVID: Flexible funding is a must for alternative lenders


Image Credits: MirageC / Getty Images

FischerJordan’s Deeba Goyal and Archita Bhandari break down the pandemic’s impact on alternative lenders, specifically what they had to do to survive the crisis, taking a look at smaller lenders including Credibly, Kabbage, Kapitus and BlueVine.

“Only those who were able to find a way through the complexities of their existing capital sources were able to maintain their performance, and the rest were left to perish or find new funding avenues,” they write.

Inside Freshworks’ IPO filing

Customer engagement software company Freshworks’ S-1 filing depicts a company that’s experiencing accelerating revenue growth, “a great sign for the health of its business,” reports Alex Wilhelm in this morning’s The Exchange.

“Most companies see their growth rates decline as they scale, as larger denominators make growth in percentage terms more difficult.”

Studying the company’s SEC filing, he found that “Freshworks isn’t a company where we need to cut it lots of slack, as we might with an adjusted EBITDA number. It is going public ready for Big Kid metrics.”

Aug
31
2021
--

Octane banks $2M for flexible billing software

Software billing startup Octane announced Tuesday that it raised $2 million on a post-money valuation of $10 million to advance its pay-as-you-go billing software.

Akash Khanolkar and his co-founders met a decade ago at Carnegie Mellon University and then went their separate ways. In Khanolkar’s case, he ran a cloud consulting business and saw firsthand how fast companies like Datadog and Snowflake were coming to market and how they dealt with Amazon Web Services.

He found that the common thread among those fast-growing companies was billing software built around a pay-as-you-go business model rather than traditional flat-rate plans, Khanolkar told TechCrunch.

However, he explained that monitoring consumption makes billing complicated: companies now have to track how customers use the software, sometimes down to the second, in order to bill correctly each month.

Seeing the shift toward consumption-based billing, the co-founders came back together in June 2020 to create Octane, a metered billing system that helps vendors create a plan, monitor usage and charge in a similar way to Snowflake and AWS, Khanolkar said.

“We are API-driven, and you as a vendor will send us usage data, and on our end, we store it and then do real-time aggregations so at the end of the month, you can accordingly bill customers,” Khanolkar said. “We have seen contention between engineering and product. Engineers are there to create core plans, so we built a no-code experience for product teams to be able to create new price plans and then perform changes, like adding coupons.”
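
The flow Khanolkar describes, vendors pushing raw usage events to an API that aggregates them for end-of-month invoicing, can be sketched in a few lines of Python. This is an illustrative toy, not Octane’s actual API; the class, method names and rate here are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime

class UsageMeter:
    """Toy metered-billing aggregator: ingest usage events, bill monthly."""

    def __init__(self, price_per_unit: float):
        self.price_per_unit = price_per_unit
        # (customer, "YYYY-MM") -> total units consumed that month
        self.totals = defaultdict(float)

    def record(self, customer: str, units: float, at: datetime) -> None:
        """In a real system, the vendor would POST an event like this to the billing API."""
        self.totals[(customer, at.strftime("%Y-%m"))] += units

    def invoice(self, customer: str, month: str) -> float:
        """End-of-month bill: aggregated usage times the unit price."""
        return self.totals[(customer, month)] * self.price_per_unit

meter = UsageMeter(price_per_unit=0.02)
meter.record("acme", 1500, datetime(2021, 8, 3))
meter.record("acme", 2500, datetime(2021, 8, 20))
print(round(meter.invoice("acme", "2021-08"), 2))  # 4,000 units at $0.02 -> 80.0
```

A production system would also need idempotent event ingestion and real-time aggregation at scale rather than a single in-memory dict, which is where much of the engineering effort Khanolkar describes goes.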

Within the global cloud billing market, which is expected to reach $6.5 billion by 2025, Octane has a set of competitors, like Chargebee and Zuora, that Khanolkar said have tackled the subscription management side and found success over the past several years. Now, he argues, a usage- and consumption-based world is coming, and a whole new set of software businesses, like Octane, is emerging to serve it.

The new round of funding was led by Basis Set Ventures and included Dropbox co-founder Arash Ferdowsi, GitHub CTO Jason Warner, Fortress CTO Assunta Gaglione, Scale AI CRO Chetan Chaudhary, former Twilio executive Evan Cummack, Esteban Reyes, Abstraction Capital and Script Capital.

“With the rise of product-led growth and usage-based pricing models, usage-based billing is a critical and foundational piece of infrastructure that has been simply missing,” said Chang Xu, partner at Basis Set Ventures, via email. “At the same time, it’s something that every department cares about as it’s your revenue. Many later-stage companies we talk to that have built this in-house talk about the ongoing maintenance costs and wishes that there is a vendor they can outsource it to.”

“We are super impressed with the Octane team and their dedication to building a best-in-class and robust usage-based billing solution. They’ve validated this opportunity by talking to lots of engineering teams so they can solve for all the edge cases, which is important in something as mission critical as billing. We are convinced that Octane will become an inevitable part of the tech infrastructure.”

The new funding will go primarily toward hiring engineers, as well as product, marketing and sales staff. Octane currently has seven employees, and Khanolkar expects to be around 10 by the end of the year.

The company is working with a wide range of companies, primarily focused on infrastructure and the depth gauge industries. Octane is also seeing some unique use cases emerge, like a construction company using the usage meter to track the hours an employee works and electric charging companies using the meter for similar purposes.

“We didn’t envision construction guys using it, but in theory, it could be used by any company that tracks time — even legal,” Khanolkar added.

He declined to speak about the company’s revenue, but did say it now had two to three years of runway.

Up next, the company plans to roll out new features, like price experimentation based on usage, to help customers make better decisions on how to price their software, another problem Khanolkar sees emerging. It will build ways for customers to try different plans against usage data to validate which one works best.

“We are still in the early innings of consumption-based models, but we see more end users opting to go with an enterprise that wants to let them try out the software and then pay as they go,” he added.

Aug
28
2021
--

How Amazon EC2 grew from a notion into a foundational element of cloud computing

Fifteen years ago this week, on August 25, 2006, AWS turned on the very first beta instance of EC2, its cloud-based virtual computers. Today cloud computing, and more specifically infrastructure as a service, is a staple of how businesses use computing, but at that moment it wasn’t a well-known or widely understood concept.

The EC in EC2 stands for Elastic Compute, and that name was chosen deliberately. The idea was to provide as much compute power as you needed to do a job, then shut it down when you no longer needed it — making it flexible like an elastic band. The launch of EC2 in beta was preceded by the beta release of S3 storage six months earlier, and both services marked the starting point in AWS’ cloud infrastructure journey.
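
The economics behind that elasticity are easy to illustrate with a toy comparison. The hourly rate and usage pattern below are made up for illustration; they are not real EC2 prices.

```python
HOURLY_RATE = 0.10  # hypothetical $/instance-hour, not real EC2 pricing

def fixed_cost(peak_instances: int, hours: int) -> float:
    """Own-hardware model: provision for the peak, pay whether it's used or not."""
    return peak_instances * hours * HOURLY_RATE

def elastic_cost(usage_by_hour: list[int]) -> float:
    """Elastic model: pay only for instance-hours actually consumed."""
    return sum(usage_by_hour) * HOURLY_RATE

# A bursty week: a baseline of 10 instances with one 12-hour spike to 500.
usage = [10] * 156 + [500] * 12
print(round(fixed_cost(500, len(usage)), 2))  # provisioned for the spike: 8400.0
print(round(elastic_cost(usage), 2))          # pay-per-use: 756.0
```

The spikier the traffic, the wider that gap gets, which is why bursty workloads were the canonical early cloud use case.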

You really can’t overstate what Amazon was able to accomplish with these moves. It was able to anticipate an entirely different way of computing and create a market and a substantial side business in the process. It took vision to recognize what was coming and the courage to forge ahead and invest the resources necessary to make it happen, something that every business could learn from.

The AWS origin story is complex, but it was about bringing the IT power of the Amazon business to others. Amazon at the time was not the business it is today, but it was still rather substantial and still had to deal with massive fluctuations in traffic, such as on Black Friday, when its website would be flooded with visitors for a short but sustained period. While the goal of an e-commerce site, and indeed every business, is attracting as many customers as possible, keeping the site up under such stress takes some doing, and Amazon was learning how to do that well.

Those lessons and a desire to bring the company’s internal development processes under control would eventually lead to what we know today as Amazon Web Services, and that side business would help fuel a whole generation of startups. We spoke to Dave Brown, who is VP of EC2 today, and who helped build the first versions of the tech, to find out how this technological shift went down.

Sometimes you get a great notion

The genesis of the idea behind AWS dates to around 2000, when the company began looking at creating a set of services to simplify how it produced software internally. Eventually, it developed a set of foundational services — compute, storage and database — that every developer could tap into.

But the idea of selling that set of services really began to take shape at an executive offsite at Jeff Bezos’ house in 2003. A 2016 TechCrunch article on the origins of AWS described how that started to come together:

As the team worked, Jassy recalled, they realized they had also become quite good at running infrastructure services like compute, storage and database (due to those previously articulated internal requirements). What’s more, they had become highly skilled at running reliable, scalable, cost-effective data centers out of need. As a low-margin business like Amazon, they had to be as lean and efficient as possible.

They realized that those skills and abilities could translate into a side business that would eventually become AWS. It would take a while to put these initial ideas into action, but by December 2004, the company had opened an engineering office in South Africa to begin building what would become EC2. As Brown explains it, Amazon was looking to expand outside of Seattle at the time, and Chris Pinkham, who was a director in those days, hailed from South Africa and wanted to return home.

Jul
29
2021
--

4 key areas SaaS startups must address to scale infrastructure for the enterprise

Startups and SMBs are usually the first to adopt many SaaS products. But as these customers grow in size and complexity — and as you rope in larger organizations — scaling your infrastructure for the enterprise becomes critical for success.

Below are four tips on how to advance your company’s infrastructure to support and grow with your largest customers.

Address your customers’ security and reliability needs

If you’re building SaaS, odds are you’re holding very important customer data. Regardless of what you build, that makes you a target for attacks aimed at your customers. While security is important for all customers, the stakes certainly get higher as they grow larger.

Given the stakes, it’s paramount to build infrastructure, products and processes that address your customers’ growing security and reliability needs. That includes the ethical and moral obligation you have to make sure your systems and practices meet and exceed any claim you make about security and reliability to your customers.

Here are security and reliability requirements large customers typically ask for:

Formal SLAs around uptime: If you’re building SaaS, customers expect it to be available all the time. Large customers using your software for mission-critical applications will expect to see formal SLAs in contracts committing to 99.9% uptime or higher. As you build infrastructure and product layers, you need to be confident in your uptime and be able to measure uptime on a per-customer basis so you know whether you’re meeting your contractual obligations.

While it’s hard to prioritize asks from your largest customers, you’ll find that their collective feedback will pull your product roadmap in a specific direction.

Real-time status of your platform: Most larger customers will expect to see your platform’s historical uptime and have real-time visibility into events and incidents as they happen. As you mature and specialize, creating this visibility for customers also drives more collaboration between your customer operations and infrastructure teams. This collaboration is valuable to invest in, as it provides insight into how customers are experiencing a particular degradation in your service and allows you to communicate back what you’ve found so far and what your ETA is.

Backups: As your customers grow, be prepared for expectations around backups — not just in terms of how long it takes to recover the whole application, but also around backup periodicity, location of your backups and data retention (e.g., are you holding on to the data too long?). If you’re building your backup strategy, thinking about future flexibility around backup management will help you stay ahead of these asks.
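
The arithmetic behind an uptime SLA is worth internalizing before you sign one. Here is a quick sketch; the SLA levels and downtime figures are illustrative examples, not numbers from any particular contract:

```python
def allowed_downtime_minutes(sla: float, days: int = 30) -> float:
    """Downtime budget implied by an uptime SLA over a billing period."""
    return (1 - sla) * days * 24 * 60

def measured_uptime(total_minutes: int, downtime_minutes: float) -> float:
    """Per-customer uptime over a period, for checking against the contract."""
    return (total_minutes - downtime_minutes) / total_minutes

print(round(allowed_downtime_minutes(0.999), 1))   # "three nines": 43.2 min/month
print(round(allowed_downtime_minutes(0.9999), 1))  # "four nines": 4.3 min/month

# Per-customer check against a contractual 99.9% SLA:
month_minutes = 30 * 24 * 60
assert measured_uptime(month_minutes, 30) >= 0.999      # 30 min down: within SLA
assert not measured_uptime(month_minutes, 60) >= 0.999  # 60 min down: a breach
```

Note how each additional "nine" shrinks the downtime budget by a factor of ten, which is why the per-customer measurement mentioned above matters: a single hour-long incident can breach a 99.9% monthly commitment on its own.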

Jun
22
2021
--

Vantage raises $4M to help businesses understand their AWS costs

Vantage, a service that helps businesses analyze and reduce their AWS costs, today announced that it has raised a $4 million seed round led by Andreessen Horowitz. A number of angel investors, including Brianne Kimmel, Julia Lipton, Stephanie Friedman, Calvin French Owen, Ben and Moisey Uretsky, Mitch Wainer and Justin Gage, also participated in this round.

Vantage started out with a focus on making the AWS console a bit easier to use — and helping businesses figure out what they are spending their cloud infrastructure budgets on in the process. But as Vantage co-founder and CEO Ben Schaechter told me, it was the cost transparency features that really caught on with users.

“We were advertising ourselves as being an alternative AWS console with a focus on developer experience and cost transparency,” he said. “What was interesting is — even in the early days of early access before the formal GA launch in January — I would say more than 95% of the feedback that we were getting from customers was entirely around the cost features that we had in Vantage.”

Image Credits: Vantage

Like any good startup, the Vantage team looked at this and decided to double down on these features and highlight them in its marketing, though it kept the existing AWS Console-related tools as well. The reason the other tools didn’t quite take off, Schaechter believes, is that AWS users have increasingly become accustomed to using infrastructure-as-code for their own automated provisioning. And with that, they spend a lot less time in the AWS Console anyway.

“But one consistent thing — across the board — was that people were having a really, really hard time 12 times a year, where they would get a shocking AWS bill and had to figure out what happened. What Vantage is doing today is providing a lot of value on the transparency front there,” he said.

Over the course of the last few months, the team added a number of new features to its cost transparency tools, including machine learning-driven predictions (both on the overall account level and service level) and the ability to share reports across teams.

Image Credits: Vantage

While Vantage expects to add support for other clouds in the future, likely starting with Azure and then GCP, that’s actually not what the team is focused on right now. Instead, Schaechter noted, the team plans to add support for bringing in data from third-party cloud services instead.

“The number one line item for companies tends to be AWS, GCP, Azure,” he said. “But then, after that, it’s Datadog, Cloudflare, Sumo Logic, things along those lines. Right now, there’s no way to see a P&L or an ROI from a cloud usage-based perspective. Vantage can be the tool where that’s showing you, essentially, all of your cloud costs in one space.”

That is likely the vision the investors bought into, as well, and even though Vantage is now going up against enterprise tools like Apptio’s Cloudability and VMware’s CloudHealth, Schaechter doesn’t seem to be all that worried about the competition. He argues that these are tools that were born in a time when AWS had only a handful of services and only a few ways of interacting with those. He believes that Vantage, as a modern self-service platform, will have quite a few advantages over these older services.

“You can get up and running in a few clicks. You don’t have to talk to a sales team. We’re helping a large number of startups at this stage all the way up to the enterprise, whereas Cloudability and CloudHealth are, in my mind, kind of antiquated enterprise offerings. No startup is choosing to use those at this point, as far as I know,” he said.

The team, which until now mostly consisted of Schaechter and his co-founder and CTO Brooke McKim, bootstrapped the company up to this point. Now they plan to use the new capital to build out the team (the company is actively hiring right now), both on the development and go-to-market sides.

The company offers a free starter plan for businesses that track up to $2,500 in monthly AWS cost, with paid plans starting at $30 per month for those who need to track larger accounts.

Dec
15
2020
--

Twitter taps AWS for its latest foray into the public cloud

Twitter has a lot going on, and it’s not always easy to manage that kind of scale on your own. Today, Amazon announced that Twitter has signed a multi-year agreement with AWS to run its real-time timelines. It’s a major win for Amazon’s cloud arm.

While the companies have worked together in some capacity for over a decade, this marks the first time that Twitter is tapping AWS to help run its core timelines.

“This expansion onto AWS marks the first time that Twitter is leveraging the public cloud to scale their real-time service. Twitter will rely on the breadth and depth of AWS, including capabilities in compute, containers, storage and security, to reliably deliver the real-time service with the lowest latency, while continuing to develop and deploy new features to improve how people use Twitter,” the company explained in the announcement.

Parag Agrawal, chief technology officer at Twitter, sees this as a way to expand and improve the company’s real-time offerings by taking advantage of AWS’s network of data centers to deliver content closer to the user. “The collaboration with AWS will improve performance for people who use Twitter by enabling us to serve Tweets from data centers closer to our customers at the same time as we leverage the Arm-based architecture of AWS Graviton2 instances. In addition to helping us scale our infrastructure, this work with AWS enables us to ship features faster as we apply AWS’s diverse and growing portfolio of services,” Agrawal said in a statement.

It’s worth noting that Twitter also has a relationship with Google Cloud. In 2018, it announced it was moving its Hadoop clusters to GCP.

This announcement could be considered a case of the rich getting richer as AWS is the leader in the cloud infrastructure market by far, with around 33% market share. Microsoft is in second with around 18% and Google is in third with 9%, according to Synergy Research. In its most recent earnings report, Amazon reported $11.6 billion in AWS revenue, putting it on a run rate of over $46 billion.

Dec
08
2020
--

AWS expands on SageMaker capabilities with end-to-end features for machine learning

Nearly three years after it was first launched, Amazon Web Services’ SageMaker platform has gotten a significant upgrade in the form of new features that make it easier for developers to automate and scale each step of building machine learning capabilities, the company said.

As machine learning moves into the mainstream, business units across organizations will find applications for automation, and AWS is trying to make the development of those bespoke applications easier for its customers.

“One of the best parts of having such a widely adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables,” said AWS vice president of machine learning, Swami Sivasubramanian. “Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug and run custom machine learning models with greater visibility, explainability and automation at scale.”

Already companies like 3M, ADP, AstraZeneca, Avis, Bayer, Capital One, Cerner, Domino’s Pizza, Fidelity Investments, Lenovo, Lyft, T-Mobile and Thomson Reuters are using SageMaker tools in their own operations, according to AWS.

The company’s new products include Amazon SageMaker Data Wrangler, which the company said provides a way to normalize data from disparate sources so the data is consistently easy to use. Data Wrangler can also ease the process of grouping disparate data sources into features to highlight certain types of data. The Data Wrangler tool contains more than 300 built-in data transformers that can help customers normalize, transform and combine features without having to write any code.
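
To give a sense of what one of those built-in transformers does, here is a minimal min-max normalization in plain Python. This illustrates the general technique, not Data Wrangler’s actual implementation:

```python
def min_max_normalize(values: list[float]) -> list[float]:
    """Rescale a numeric feature to [0, 1], a common built-in transform."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant feature: no spread to rescale
    return [(v - lo) / (hi - lo) for v in values]

# Two sources report the same metric on different scales; after
# normalization they become directly comparable.
print(min_max_normalize([10, 20, 40]))     # [0.0, 0.333..., 1.0]
print(min_max_normalize([0.1, 0.2, 0.4]))  # roughly the same shape
```

The value of a tool like Data Wrangler is doing hundreds of variations of this kind of transform, consistently and without custom code.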

Amazon also unveiled the Feature Store, which allows customers to create repositories that make it easier to store, update, retrieve and share machine learning features for training and inference.

Another new tool that Amazon Web Services touted was Pipelines, its workflow management and automation toolkit. The Pipelines tech is designed to provide orchestration and automation features not dissimilar from traditional programming. Using Pipelines, developers can define each step of an end-to-end machine learning workflow, the company said in a statement. Developers can use the tools to re-run an end-to-end workflow from SageMaker Studio using the same settings to get the same model every time, or they can re-run the workflow with new data to update their models.

To address the longstanding issues with data bias in artificial intelligence and machine learning models, Amazon launched SageMaker Clarify. First announced today, this tool allegedly provides bias detection across the machine learning workflow, so developers can build with an eye toward better transparency on how models were set up. There are open-source tools that can do these tests, Amazon acknowledged, but the tools are manual and require a lot of lifting from developers, according to the company.

Other products designed to simplify the machine learning application development process include SageMaker Debugger, which enables developers to train models faster by monitoring system resource utilization and alerting developers to potential bottlenecks; Distributed Training, which makes it possible to train large, complex, deep learning models faster than current approaches by automatically splitting data across multiple GPUs to accelerate training times; and SageMaker Edge Manager, a machine learning model management tool for edge devices, which allows developers to optimize, secure, monitor and manage models deployed on fleets of edge devices.
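The data-parallel idea behind Distributed Training can be sketched in a few lines: shard one batch across N devices, compute per-shard results, then average them. All names here are illustrative stand-ins, not an AWS API, and real implementations average gradients across GPUs rather than the toy values used below.

```python
def shard(batch, num_devices):
    """Split a batch into near-equal contiguous shards, one per device."""
    base, rem = divmod(len(batch), num_devices)
    shards, start = [], 0
    for i in range(num_devices):
        size = base + (1 if i < rem else 0)  # spread the remainder evenly
        shards.append(batch[start:start + size])
        start += size
    return shards

def average_gradients(per_device_grads):
    """Combine per-device results, as an all-reduce step would."""
    return sum(per_device_grads) / len(per_device_grads)

batch = [0.1, 0.4, 0.3, 0.2, 0.5, 0.1]
per_device = [sum(s) / len(s) for s in shard(batch, 3)]  # stand-in "gradients"
combined = average_gradients(per_device)
```

The hard part the managed service handles is doing this splitting and combining automatically and efficiently across many physical GPUs.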

Last but not least, Amazon unveiled SageMaker JumpStart, which provides developers with a searchable interface to find algorithms and sample notebooks so they can get started on their machine learning journey. The company said it would give developers new to machine learning the option to select several pre-built machine learning solutions and deploy them into SageMaker environments.

Dec
03
2020
--

AWS expands startup assistance program

Last year, AWS launched the APN Global Startup Program, which is sort of AWS’s answer to an incubator for mid to late-stage startups deeply involved with AWS technology. This year, the company wants to expand that offering, and today it announced some updates to the program at the Partner keynote at AWS re:Invent.

While startups technically have to pay a $2,500 fee if they are accepted to the program, AWS typically refunds that fee, says Doug Yeum, head of the Global Partner Organization at AWS — and they get a lot of benefits for being part of the program.

“While the APN has a $2,500 annual program fee, startups that are accepted into the invite-only APN Global Startup Program get that fee back, as well as free access to substantial additional resources both in terms of funding as well as exclusive program partner managers and co-sell specialists resources,” Yeum told TechCrunch.

And those benefits are pretty substantial, including access to a new “white glove program” that lets them work with a program manager who has direct knowledge of AWS and experience working with startups. In addition, participants get access to an ISV program that lets them work more directly with independent software vendors to increase sales, as well as data exchange services to move third-party data into the AWS cloud.

What’s more, they can apply to the new AI/ML Acceleration program. As AWS describes it, “This includes up to $5,000 AWS credits to fund experiments on AWS services, enabling startups to explore AWS AI/ML tools that offer the best fit for them at low risk.”

Finally, they get their first five offerings listed in the AWS Marketplace with the normal listing fees offset by AWS. Some participants will also get access to AWS sales teams to help use the power of the large company to drive a startup’s sales.

While you can apply to the program, the company also recruits individual startups that catch its attention. “We also proactively invite mid to late-stage startups built on AWS that, based on market signals, are showing traction and offer interesting use cases for our mutual enterprise customers,” Yeum explained.

Among the companies currently involved in the program are HashiCorp, Logz.io and Snapdocs. Interested startups can apply on the APN Global Startup website.

Dec
01
2020
--

AWS updates its edge computing solutions with new hardware and Local Zones

AWS today closed out its first re:Invent keynote with a focus on edge computing. The company launched two smaller appliances for its Outposts service, which originally brought AWS as a managed service and appliance right into its customers’ existing data centers in the form of a large rack. Now, the company is launching these smaller versions so that its users can also deploy them in their stores or office locations. These appliances are fully managed by AWS and offer 64 cores of compute, 128GB of memory and 4TB of local NVMe storage.

In addition, the company expanded its set of Local Zones, which are basically small extensions of existing AWS regions that are more expensive to use but offer low-latency access in metro areas. This service launched in Los Angeles in 2019 and starting today, it’s also available in preview in Boston, Houston and Miami. Soon, it’ll expand to Atlanta, Chicago, Dallas, Denver, Kansas City, Las Vegas, Minneapolis, New York, Philadelphia, Phoenix, Portland and Seattle. Google, it’s worth noting, is doing something similar with its Mobile Edge Cloud.

The general idea here — and that’s not dissimilar from what Google, Microsoft and others are now doing — is to bring AWS to the edge and to do so in a variety of form factors.

As AWS CEO Andy Jassy rightly noted, AWS always believed that the vast majority of companies, “in the fullness of time” (Jassy’s favorite phrase from this keynote), would move to the cloud. Because of this, AWS focused on cloud services over hybrid capabilities early on. He argues that AWS watched others try and fail to build their hybrid offerings, in large part because what customers really wanted was to use the same control plane on all edge nodes and in the cloud. None of the existing solutions from other vendors got much traction for that reason, Jassy argues (though AWS’s competitors would surely deny this).

The first result of that was VMware Cloud on AWS, which allowed customers to use the same VMware software and tools on AWS they were already familiar with. But at the end of the day, that was really about moving on-premises services to the cloud.

With Outposts, AWS launched a fully managed edge solution that can run AWS infrastructure in its customers’ data centers. It’s been an interesting journey for AWS, but the fact that the company closed out its keynote with this focus on hybrid — no matter how it wants to define it — shows that it now understands that there is clearly a need for this kind of service. The AWS way is to extend AWS into the edge — and I think most of its competitors will agree with that. Microsoft tried this early on with Azure Stack and really didn’t get a lot of traction, as far as I’m aware, but it has since retooled its efforts around Azure Arc. Google, meanwhile, is betting big on Anthos.
