Jun 10, 2019

AWS is now making Amazon Personalize available to all customers

Amazon Personalize, first announced during AWS re:Invent last November, is now available to all Amazon Web Services customers. The API enables developers to add custom machine learning models to their apps, including ones for personalized product recommendations, search results and direct marketing, even if they don’t have machine learning experience.
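For developers, access is through the standard AWS SDKs rather than anything ML-specific. As a rough illustration (not from the announcement itself), here is a minimal boto3 sketch for fetching recommendations, assuming a campaign has already been trained and deployed; the campaign ARN and user ID are placeholders.

```python
import boto3

# Runtime client for querying a deployed Personalize campaign.
# The campaign ARN is a placeholder; a real one is returned after
# training a solution and creating a campaign.
personalize_rt = boto3.client("personalize-runtime", region_name="us-east-1")

response = personalize_rt.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/demo",
    userId="user-42",   # the user to personalize for
    numResults=10,      # number of item recommendations to return
)

# Each entry carries the recommended itemId (and, for some recipes, a score).
for item in response["itemList"]:
    print(item["itemId"])
```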

The API processes data using algorithms originally created for Amazon’s own retail business, but the company says all data will be “kept completely private, owned entirely by the customer.” The service is now available to AWS users in three U.S. regions (Ohio, Northern Virginia and Oregon), two Asia Pacific regions (Tokyo and Singapore) and one European region (Ireland), with more regions to launch soon.

AWS customers who have already added Amazon Personalize to their apps include Yamaha Corporation of America, Subway, Zola and Segment. In Amazon’s press release, Yamaha Corporation of America Director of Information Technology Ishwar Bharbhari said Amazon Personalize “saves us up to 60% of the time needed to set up and tune the infrastructure and algorithms for our machine learning models when compared to building and configuring the environment on our own.”

Amazon Personalize’s pricing model charges five cents per GB of data uploaded to the service and 24 cents per training hour used to train a custom model on that data. Real-time recommendation requests are priced by volume, with discounts for higher request tiers.
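To make those numbers concrete, here is a back-of-the-envelope sketch of the two published components of a monthly bill. The usage figures are hypothetical, and the per-request charges are left out because their exact tiers weren’t broken out in the announcement.

```python
# Back-of-the-envelope Amazon Personalize cost estimate.
# Rates are from the announcement; usage figures are made-up examples.
PRICE_PER_GB = 0.05              # $ per GB of data uploaded
PRICE_PER_TRAINING_HOUR = 0.24   # $ per training hour

data_uploaded_gb = 500           # hypothetical monthly upload volume
training_hours = 100             # hypothetical model-training hours

ingest_cost = data_uploaded_gb * PRICE_PER_GB
training_cost = training_hours * PRICE_PER_TRAINING_HOUR

print(f"Ingest:   ${ingest_cost:.2f}")      # $25.00
print(f"Training: ${training_cost:.2f}")    # $24.00
print(f"Total (excluding request pricing): ${ingest_cost + training_cost:.2f}")
```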

May 09, 2019

AWS remains in firm control of the cloud infrastructure market

It has to be a bit depressing to be in the cloud infrastructure business if your name isn’t Amazon. Sure, there’s a huge, growing market, and the companies behind Amazon are growing even faster. Yet it seems no matter how fast they grow, Amazon remains a dot on the horizon.

It seems inconceivable that AWS can continue to hold sway over such a large market for so long, but as we’ve pointed out before, it has been able to maintain its position through true first-mover advantage. The other players didn’t even show up until several years after Amazon launched its first service in 2006, and they are paying the price for failing to see, as Amazon did, how computing was about to change.

They certainly see it now, whether it’s IBM, Microsoft or Google, or Tencent and Alibaba, both of which are growing fast in the China/Asia markets. All of these companies are trying to find the formula to help differentiate themselves from AWS and give them some additional market traction.

Cloud market growth

Interestingly, even though companies have begun to move to the cloud with increasing urgency, the pace of growth slowed a bit in the first quarter, to a 42 percent rate, according to data from Synergy Research. That doesn’t mean the end of this growth cycle is anywhere close, though.

Apr 25, 2019

AWS expands cloud infrastructure offerings with new AMD EPYC-powered T3a instances

Amazon is always looking for ways to increase the options it offers developers in AWS, and to that end, today it announced a bunch of new AMD EPYC-powered T3a instances. These were originally announced at the end of last year at re:Invent, AWS’s annual customer conference.

Today’s announcement makes these instances generally available. They have been designed for a specific type of burstable workload, where you might not always need a sustained amount of compute power.

“These instances deliver burstable, cost-effective performance and are a great fit for workloads that do not need high sustained compute power but experience temporary spikes in usage. You get a generous and assured baseline amount of processing power and the ability to transparently scale up to full core performance when you need more processing power, for as long as necessary,” AWS’s Jeff Barr wrote in a blog post.
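Launching one of the new instance types works like any other EC2 launch. A minimal boto3 sketch, with a placeholder AMI ID; the credit specification is the knob that governs bursting, and the “unlimited” setting lets an instance keep bursting past its earned credits for an extra charge.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single t3a.micro; the AMI ID below is a placeholder.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder for a real image
    InstanceType="t3a.micro",          # AMD EPYC-powered burstable type
    MinCount=1,
    MaxCount=1,
    # Burstable instances earn CPU credits at a baseline rate; "unlimited"
    # allows sustained bursting above baseline for an additional charge.
    CreditSpecification={"CpuCredits": "unlimited"},
)

print(response["Instances"][0]["InstanceId"])
```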

These instances are built on the AWS Nitro System, the custom hardware and hypervisor stack that Amazon has been developing for the last several years. The primary components of this system include the Nitro Card I/O Acceleration, the Nitro Security Chip and the Nitro Hypervisor.

Today’s release comes on top of the announcement last year that the company would be releasing EC2 instances powered by Arm-based AWS Graviton Processors, another option for developers looking for a solution for scale-out workloads.

It also comes on the heels of last month’s announcement that it was releasing EC2 M5 and R5 instances, which use lower-cost AMD chips. These are also built on top of the Nitro System.

The EPYC-powered instances are available starting today in seven sizes, as spot, reserved or on-demand instances. They are available in US East (Northern Virginia), US West (Oregon), Europe (Ireland), US East (Ohio) and Asia Pacific (Singapore).

Apr 24, 2019

Docker developers can now build Arm containers on their desktops

Docker and Arm today announced a major new partnership that will see the two companies collaborate in bringing improved support for the Arm platform to Docker’s tools.

The main idea here is to make it easy for Docker developers to build their applications for the Arm platform right from their x86 desktops and then deploy them to the cloud (including the Arm-based AWS EC2 A1 instances), edge and IoT devices. Developers will be able to build their containers for Arm just like they do today, without the need for any cross-compilation.

This new capability, which will work for applications written in JavaScript/Node.js, Python, Java, C++, Ruby, .NET core, Go, Rust and PHP, will become available as a tech preview next week, when Docker hosts its annual North American developer conference in San Francisco.

Typically, developers would have to build the containers they want to run on the Arm platform on an Arm-based server. With this system, which is the first result of this new partnership, Docker essentially emulates an Arm chip on the PC for building these images.
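The tech preview had no public name at the time of the announcement; the workflow it pointed toward is what Docker’s multi-platform build tooling exposes today. As a rough sketch of the idea using the Docker SDK for Python, assuming a Docker engine with QEMU/binfmt emulation configured and a hypothetical image name:

```python
import docker

# Assumes a Docker engine where QEMU/binfmt emulation is set up, so Arm
# binaries can execute on the x86 host during the build.
client = docker.from_env()

# Build the same Dockerfile for Arm; only the target platform changes.
# "myorg/myapp" is a hypothetical image name.
image, logs = client.images.build(
    path=".",                 # directory containing the Dockerfile
    tag="myorg/myapp:arm64",
    platform="linux/arm64",   # target architecture, not the host's
)

for entry in logs:
    print(entry.get("stream", ""), end="")
```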

“Overnight, the 2 million Docker developers that are out there can use the Docker commands they already know and become Arm developers,” Docker EVP of Strategic Alliances David Messina told me. “Docker, just like we’ve done many times over, has simplified and streamlined processes and made them simpler and accessible to developers. And in this case, we’re making x86 developers on their laptops Arm developers overnight.”

Given that cloud-based Arm servers like Amazon’s A1 instances are often significantly cheaper than x86 machines, users can achieve some immediate cost benefits by using this new system and running their containers on Arm.

For Docker, this partnership opens up new opportunities, especially in areas where Arm chips are already strong, including edge and IoT scenarios. Arm, similarly, is interested in strengthening its developer ecosystem by making it easier to develop for its platform. The easier it is to build apps for the platform, the more likely developers are to then run them on servers that feature chips from Arm’s partners.

“Arm’s perspective on the infrastructure really spans all the way from the endpoint, all the way through the edge to the cloud data center, because we are one of the few companies that have a presence all the way through that entire path,” Mohamed Awad, Arm’s VP of Marketing, Infrastructure Line of Business, said. “It’s that perspective that drove us to make sure that we engage Docker in a meaningful way and have a meaningful relationship with them. We are seeing compute and the infrastructure sort of transforming itself right now from the old model of centralized compute, general purpose architecture, to a more distributed and more heterogeneous compute system.”

Developers, however, Awad rightly noted, don’t want to have to deal with this complexity, yet they also increasingly need to ensure that their applications run on a wide variety of platforms and that they can move them around as needed. “For us, this is about enabling developers and freeing them from lock-in on any particular area and allowing them to choose the right compute for the right job that is the most efficient for them,” Awad said.

Messina noted that the promise of Docker has long been to decouple applications from the infrastructure on which they run. Adding Arm support simply extends this promise to an additional platform. He also stressed that the work on this was driven by the company’s enterprise customers. These are the users who have already set up their systems for cloud-native development with Docker’s tools, at least for their x86 development. Those customers are now looking at developing for their edge devices, too, and that often means developing for Arm-based devices.

Awad and Messina both stressed that developers really don’t have to learn anything new to make this work. All of the usual Docker commands will just work.

Apr 10, 2019

The right way to do AI in security

Artificial intelligence applied to information security can engender images of a benevolent Skynet, sagely analyzing more data than imaginable and making decisions at lightspeed, saving organizations from devastating attacks. In such a world, humans are barely needed to run security programs, their jobs largely automated out of existence, relegating them to a role as the button-pusher on particularly critical changes proposed by the otherwise omnipotent AI.

Such a vision is still in the realm of science fiction. AI in information security is more like an eager, callow puppy attempting to learn new tricks – minus the disappointment written on their faces when they consistently fail. No one’s job is in danger of being replaced by security AI; if anything, a larger staff is required to ensure security AI stays firmly leashed.

Arguably, AI’s highest use case currently is to add futuristic sheen to traditional security tools, rebranding timeworn approaches as trailblazing sorcery that will revolutionize enterprise cybersecurity as we know it. The current hype cycle for AI appears to be the roaring, ferocious crest at the end of a decade that began with bubbly excitement around the promise of “big data” in information security.

But what lies beneath the marketing gloss and quixotic lust for an AI revolution in security? How did AI ascend to supplant the lustrous zest around machine learning (“ML”) that dominated headlines in recent years? Where is there true potential to enrich information security strategy for the better – and where is it simply an entrancing distraction from more useful goals? And, naturally, how will attackers plot to circumvent security AI to continue their nefarious schemes?

How did AI grow out of this stony rubbish?

The year AI debuted as the “It Girl” in information security was 2017. The year prior, MIT completed its study showing that “human-in-the-loop” AI outperformed both AI and humans individually in attack detection. Likewise, DARPA conducted the Cyber Grand Challenge, a competition testing AI systems’ offensive and defensive capabilities. Until this point, security AI was imprisoned in the contrived halls of academia and government. Yet the history of two vendors exhibits how enthusiasm surrounding security AI was driven more by growth marketing than user needs.

Apr 05, 2019

On balance, the cloud has been a huge boon to startups

Today’s startups have a distinct advantage when it comes to launching a company because of the public cloud. You don’t have to build infrastructure or worry about what happens when you scale too quickly. The cloud vendors take care of all that for you.

But last month, when Pinterest announced its IPO, the company’s cloud spend raised eyebrows. You see, the company is spending $750 million a year on cloud services, specifically with AWS. When your business is primarily focused on photos and video and needs to scale on a regular basis, that bill is going to be high.

That price tag prompted Erica Joy, a Microsoft engineer, to publish this tweet and start a little internal debate here at TechCrunch. Startups, after all, have a dog in this fight, and it’s worth exploring whether the cloud is helping feed the startup ecosystem or sending bills soaring, as it has for Pinterest.

For starters, it’s worth pointing out that Ms. Joy works for Microsoft, which just happens to be a primary competitor of Amazon’s in the cloud business. Regardless of her personal feelings on the matter, I’m sure Microsoft would be more than happy to take over that $750 million bill from Amazon. It’s a nice chunk of business, but all that aside, do startups benefit from having access to cloud vendors?

Apr 02, 2019

Pixeom raises $15M for its software-defined edge computing platform

Pixeom, a startup that offers a software-defined edge computing platform to enterprises, today announced that it has raised a $15 million funding round from Intel Capital, National Grid Partners and previous investor Samsung Catalyst Fund. The company plans to use the new funding to expand its go-to-market capacity and invest in product development.

If the Pixeom name sounds familiar, that may be because you remember it as a Raspberry Pi-based personal cloud platform. Indeed, that’s the service the company first launched back in 2014, though it quickly pivoted to an enterprise model. As Pixeom CEO Sam Nagar told me, that pivot came about after a conversation with Samsung about adopting its product for that company’s needs; it was also hard to find venture funding for the original idea.

The original Pixeom device allowed users to set up their own personal cloud storage and other applications at home. While there is surely a market for these devices, especially among privacy-conscious tech enthusiasts, it’s not massive, especially as users became more comfortable with storing their data in the cloud. “One of the major drivers [for the pivot] was that it was actually very difficult to get VC funding in an industry where the market trends were all skewing towards the cloud,” Nagar told me.

At the time of its launch, Pixeom also based its technology on OpenStack, the massive open-source project that helps enterprises manage their own data centers, which isn’t exactly known as a service that can easily be run on a single machine, let alone a low-powered one. Today, Pixeom uses containers to ship and manage its software on the edge.

What sets Pixeom apart from other edge computing platforms is that it can run on commodity hardware. There’s no need to buy a specific hardware configuration to run the software, unlike Microsoft’s Azure Stack or similar services. That makes it significantly more affordable to get started and allows potential customers to reuse some of their existing hardware investments.

Pixeom brands this capability as “software-defined edge computing” and there is clearly a market for this kind of service. While the company hasn’t made a lot of waves in the press, more than a dozen Fortune 500 companies now use its services. With that, the company now has revenues in the double-digit millions and its software manages more than a million devices worldwide.

As is so often the case in the enterprise software world, these clients don’t want to be named, but Nagar tells me they include one of the world’s largest fast food chains, for example, which uses the Pixeom platform in its stores.

On the software side, Pixeom is relatively cloud agnostic. One nifty feature of the platform is that it is API-compatible with Google Cloud Platform, AWS and Azure and offers an extensive subset of those platforms’ core storage and compute services, including a set of machine learning tools. Pixeom’s implementation may be different, but for an app, the edge endpoint on a Pixeom machine reacts the same way as its equivalent endpoint on AWS, for example.
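In practice, that kind of API compatibility means existing SDK calls can simply be redirected at the edge. A hypothetical boto3 sketch, in which the gateway URL and credentials are entirely made up; the point is that the application code itself doesn’t change.

```python
import boto3

# Point the standard AWS SDK at a local, S3-compatible edge endpoint
# instead of AWS itself. URL and credentials are illustrative placeholders.
s3_edge = boto3.client(
    "s3",
    endpoint_url="http://pixeom-gateway.local:9000",  # hypothetical gateway
    aws_access_key_id="EDGE_KEY",
    aws_secret_access_key="EDGE_SECRET",
)

# The same calls an app would make against AWS S3 work against the
# API-compatible edge endpoint.
s3_edge.put_object(Bucket="sensor-data", Key="reading-001.json",
                   Body=b'{"temp": 21.5}')
obj = s3_edge.get_object(Bucket="sensor-data", Key="reading-001.json")
print(obj["Body"].read())
```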

Until now, Pixeom mostly financed its expansion, and the salaries of its more than 90 employees, from its revenue. It only took a small funding round (together with a Kickstarter campaign) when it first launched the original device. Technically, the new round is an extension of that early one, so depending on how you want to look at it, this is either a very large seed round or a Series A round.

Mar 28, 2019

Vizion.ai launches its managed Elasticsearch service

Setting up Elasticsearch, the open-source system that many companies large and small use to power their distributed search and analytics engines, isn’t the hardest thing. What is very hard, though, is to provision the right amount of resources to run the service, especially when your users’ demand comes in spikes, without overpaying for unused capacity. Vizion.ai’s new Elasticsearch Service does away with all of this by essentially offering Elasticsearch as a service and only charging its customers for the infrastructure they use.

Vizion.ai’s service automatically scales up and down as needed. It’s a managed service, delivered as a SaaS platform, that can support deployments on both private and public clouds, with full API compatibility with the standard Elastic stack, which typically includes tools like Kibana for visualizing data, Beats for sending data to the service and Logstash for transforming incoming data and setting up data pipelines. Users can easily create several stacks for testing and development, too.
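Because of that API compatibility, existing client code should carry over unchanged. A minimal sketch with the official Elasticsearch Python client, against a hypothetical Vizion.ai endpoint and placeholder credentials:

```python
from elasticsearch import Elasticsearch

# Endpoint and credentials are hypothetical placeholders; a real
# deployment would supply these from the Vizion.ai console.
es = Elasticsearch(["https://demo.vizion.ai:9200"],
                   http_auth=("user", "secret"))

# Index a document and run a full-text query, exactly as against any
# other Elasticsearch cluster.
es.index(index="app-logs", body={"level": "error", "msg": "disk full"})
es.indices.refresh(index="app-logs")

hits = es.search(index="app-logs",
                 body={"query": {"match": {"msg": "disk"}}})
print(hits["hits"]["total"])
```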


“When you go into the AWS Elasticsearch service, you’re going to be looking at dozens or hundreds of permutations for trying to build your own cluster,” Vizion.ai’s VP and GM Geoff Tudor told me. “Which instance size? How many instances? Do I want geographical redundancy? What’s my networking? What’s my security? And if you choose wrong, then that’s going to impact the overall performance. […] We do balancing dynamically behind that infrastructure layer.” To do this, the service looks at the utilization patterns of a given user and then allocates resources to optimize for the specific use case.

What Vizion.ai has done here is take some of the work from its parent company, Panzura, a multi-cloud storage service for enterprises that holds plenty of patents around data caching, and apply it to this new Elasticsearch service.

There are obviously other companies that offer commercial Elasticsearch platforms already. Tudor acknowledges this, but argues that his company’s platform is different. With other products, he argues, you have to decide on the size of your block storage for your metadata upfront, for example, and you typically want SSDs for better performance, which can quickly get expensive. Thanks to Panzura’s IP, Vizion.ai is able to bring down the cost by caching recent data on SSDs and keeping the rest in cheaper object storage pools.
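The article doesn’t detail Panzura’s caching design, but the general technique (keep a bounded hot set on fast local storage, fall back to a cheaper object store on a miss) can be sketched in a few lines. Everything here is illustrative, not Vizion.ai’s actual implementation.

```python
from collections import OrderedDict

class TieredStore:
    """Illustrative LRU cache over a slow backing store.

    Hot entries live in a bounded fast tier (standing in for SSD);
    everything else is fetched from, and written through to, the
    cheaper cold tier (standing in for object storage).
    """

    def __init__(self, cold_tier, capacity=1024):
        self.cold = cold_tier        # any dict-like store stand-in
        self.hot = OrderedDict()     # LRU order: oldest first
        self.capacity = capacity

    def get(self, key):
        if key in self.hot:          # cache hit: refresh recency
            self.hot.move_to_end(key)
            return self.hot[key]
        value = self.cold[key]       # miss: fetch from cold storage
        self._admit(key, value)
        return value

    def put(self, key, value):
        self.cold[key] = value       # write through to the cold tier
        self._admit(key, value)

    def _admit(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        if len(self.hot) > self.capacity:
            self.hot.popitem(last=False)  # evict least recently used

# Usage: a plain dict stands in for the object storage pool.
store = TieredStore(cold_tier={}, capacity=2)
store.put("a", 1); store.put("b", 2); store.put("c", 3)
print(store.get("a"))  # refetched from cold tier after eviction
```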

He also noted that the company is positioning the overall Vizion.ai service, with the Elasticsearch service as one of the earliest components, as a platform for running AI and ML workloads. Support for TensorFlow, PredictionIO (which plays nicely with Elasticsearch) and other tools is also in the works. “We want to make this an easy serverless ML/AI consumption in a multi-cloud fashion, where not only can you leverage the compute, but you can also have your storage of record at a very cost-effective price point.”

Feb 20, 2019

New conflict evidence surfaces in JEDI cloud contract procurement process

For months, the drama has been steady in the Pentagon’s decade-long, $10 billion JEDI cloud contract procurement process. This week the plot thickened when the DOD reported that it has found new evidence of a possible conflict of interest, and has reopened its internal investigation into the matter.

“DOD can confirm that new information not previously provided to DOD has emerged related to potential conflicts of interest. As a result of this new information, DOD is continuing to investigate these potential conflicts,” Elissa Smith, a Department of Defense spokesperson, told TechCrunch.

It’s not clear what this new information is about, but The Wall Street Journal reported this week that senior federal judge Eric Bruggink of the U.S. Court of Federal Claims ordered that the lawsuit filed by Oracle in December would be put on hold to allow the DOD to investigate further.

From the start of the DOD RFP process, there have been complaints that the process itself was designed to favor Amazon, and that there were possible conflicts of interest on the part of DOD personnel. The DOD’s position throughout has been that the process is open and that its investigation found no basis for the conflict charges. Something forced the department to rethink that position this week.

Oracle in particular has been a vocal critic of the process. Even before the RFP was officially opened, it was claiming that the process unfairly favored Amazon. In the court case, it made the conflict part clearer, claiming that an ex-Amazon employee named Deap Ubhi had influence over the process, a charge that Amazon denied when it joined the case to defend itself. Four weeks ago something changed when a single line in a court filing suggested that Ubhi’s involvement may have been more problematic than the DOD previously believed.

At the time, I wrote:

In the document, filed with the court on Wednesday, the government’s legal representatives sought to outline its legal arguments in the case. The line that attracted so much attention stated, “Now that Amazon has submitted a proposal, the contracting officer is considering whether Amazon’s re-hiring Mr. Ubhi creates an OCI that cannot be avoided, mitigated, or neutralized.” OCI stands for Organizational Conflict of Interest in DoD lingo.

And Pentagon spokesperson Heather Babb told TechCrunch:

During his employment with DDS, Mr. Deap Ubhi recused himself from work related to the JEDI contract. DOD has investigated this issue, and we have determined that Mr. Ubhi complied with all necessary laws and regulations.

Whether the new evidence relates to Ubhi’s rehiring by Amazon is not clear at the moment, but the DOD has clearly found something it wants to explore in this case, and that has been enough to put the Oracle lawsuit on hold.

Oracle’s court case is the latest in a series of actions designed to protest the entire JEDI procurement process. The Washington Post reported last spring that Oracle co-CEO Safra Catz complained directly to the president. The company later filed a formal complaint with the Government Accountability Office (GAO), which it lost in November when the GAO’s review found no evidence of conflict. It finally made a federal case out of it when it filed suit in federal court in December, accusing the government of an unfair procurement process and a conflict on the part of Ubhi.

The cloud deal itself is what is at the root of this spectacle. It’s a 10-year contract worth up to $10 billion to handle the DOD’s cloud business, and it’s a winner-take-all proposition. There are three out clauses, which means it might never reach that number of years or dollars, but it is lucrative enough, and could provide inroads to other government contracts, that every cloud company wants to win it.

The RFP process closed in October and the final decision on vendor selection is supposed to happen in April. It is unclear whether this latest development will delay that decision.

Feb 14, 2019

Peltarion raises $20M for its AI platform

Peltarion, a Swedish startup founded by former execs from companies like Spotify, Skype, King, TrueCaller and Google, today announced that it has raised a $20 million Series A funding round led by Euclidean Capital, the family office for hedge fund billionaire James Simons. Previous investors FAM and EQT Ventures also participated, and this round brings the company’s total funding to $35 million.

There is obviously no dearth of AI platforms these days, but Peltarion focuses on what it calls “operational AI.” The service offers an end-to-end platform that lets you do everything from pre-processing your data to building models and putting them into production. All of this runs in the cloud, and developers get access to a graphical user interface for building and testing their models. This, the company stresses, ensures that Peltarion’s users don’t have to deal with any of the low-level hardware or software and can instead focus on building their models.

“The speed at which AI systems can be built and deployed on the operational platform is orders of magnitude faster compared to the industry standard tools such as TensorFlow and require far fewer people and decreases the level of technical expertise needed,” Luka Crnkovic-Friis, Peltarion’s CEO and co-founder, tells me. “All this results in more organizations being able to operationalize AI and focusing on solving problems and creating change.”
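For context on that comparison, the layer of work such platforms abstract away looks something like the following in TensorFlow’s Keras API: defining, compiling and training even a toy model by hand, before any of the surrounding data plumbing and deployment. The snippet is purely illustrative and has nothing to do with Peltarion’s own code.

```python
import numpy as np
import tensorflow as tf

# A toy binary classifier written by hand in TensorFlow/Keras: the kind
# of model definition and training configuration that GUI-driven
# platforms aim to hide from the user.
x = np.random.rand(256, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)

print(model.evaluate(x, y, verbose=0))  # [loss, accuracy]
```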

In a world where businesses have a plethora of choices, though, why use Peltarion over more established players? “Almost all of our clients are worried about lock-in to any single cloud provider,” Crnkovic-Friis said. “They tend to be fine using storage and compute as they are relatively similar across all the providers and moving to another cloud provider is possible. Equally, they are very wary of the higher-level services that AWS, GCP, Azure, and others provide as it means a complete lock-in.”

Peltarion, of course, argues that its platform doesn’t lock in its users and that other platforms take far more AI expertise to produce commercially viable AI services. The company rightly notes that, outside of the tech giants, most companies still struggle with how to use AI at scale. “They are stuck on the starting blocks, held back by two primary barriers to progress: immature patchwork technology and skills shortage,” said Crnkovic-Friis.

The company will use the new funding to expand its development team and its teams working with its community and partners. It’ll also use the new funding for growth initiatives in the U.S. and other markets.
