Feb 20, 2019

Google’s hybrid cloud platform is now in beta

Last July, at its Cloud Next conference, Google announced the Cloud Services Platform, its first real foray into bringing its own cloud services into the enterprise data center as a managed service. Today, the Cloud Services Platform (CSP) is launching into beta.

It’s important to note that the CSP isn’t — at least for the time being — Google’s way of bringing all of its cloud-based developer services to the on-premises data center. In other words, this is a very different project from something like Microsoft’s Azure Stack. Instead, the focus is on the Google Kubernetes Engine, which allows enterprises to run their applications both in their own data centers and on virtually any cloud platform that supports containers.

As Google Cloud engineering director Chen Goldberg told me, the idea here is to help enterprises innovate and modernize. “Clearly, everybody is very excited about cloud computing, on-demand compute and managed services, but customers have recognized that the move is not that easy,” she said, noting that the vast majority of enterprises are adopting a hybrid approach. And while containers are obviously still a very new technology, she feels good about this bet because most enterprises are already adopting containers and Kubernetes — and they are doing so at exactly the same time as they are adopting cloud, and especially hybrid clouds.

It’s important to note that CSP is a managed platform. Google handles all of the heavy lifting like upgrades and security patches. And for enterprises that need an easy way to install some of the most popular applications, the platform also supports Kubernetes applications from the GCP Marketplace.

As for the tech itself, Goldberg stressed that this isn’t just about Kubernetes. The service also uses Istio, for example, the increasingly popular service mesh that makes it easier for enterprises to secure and control the flow of traffic and API calls between their applications.
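
To make that concrete, here is a minimal sketch, in Python with the Kubernetes client, of the kind of traffic rule Istio enables: a VirtualService that splits traffic between two versions of a service. The service name, namespace and weights are hypothetical, and a DestinationRule defining the v1/v2 subsets is assumed to exist.

```python
# Hypothetical sketch: apply an Istio VirtualService that routes 90% of
# traffic to v1 of a service and 10% to v2 (a canary-style split).
# Service name, namespace and weights are illustrative assumptions, and a
# DestinationRule defining the "v1" and "v2" subsets is assumed to exist.
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "orders", "namespace": "default"},
    "spec": {
        "hosts": ["orders"],
        "http": [{
            "route": [
                {"destination": {"host": "orders", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "orders", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```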

With today’s release, Google is also launching its new CSP Config Management tool to help users create multi-cluster policies and set up and enforce access controls, resource quotas and more. CSP also integrates with Google’s Stackdriver Monitoring service and continuous delivery platforms.
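
As a rough illustration of the kind of multi-cluster policy CSP Config Management centralizes, the sketch below pushes the same resource quota to several clusters with the Kubernetes Python client. The cluster contexts, namespace and limits are assumptions for illustration, and the real tool manages such policies centrally rather than through imperative calls like this.

```python
# Rough sketch only: apply one ResourceQuota to multiple clusters by
# kubeconfig context. The contexts, namespace and limits are illustrative
# assumptions; the namespace "team-a" is assumed to exist in each cluster.
from kubernetes import client, config

CLUSTER_CONTEXTS = ["on-prem-cluster", "gke-us-central1"]  # hypothetical contexts

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-quota", namespace="team-a"),
    spec=client.V1ResourceQuotaSpec(
        hard={"requests.cpu": "20", "requests.memory": "64Gi", "pods": "100"}
    ),
)

for context in CLUSTER_CONTEXTS:
    # Build a client bound to this cluster's context and apply the quota.
    api_client = config.new_client_from_config(context=context)
    core_v1 = client.CoreV1Api(api_client)
    core_v1.create_namespaced_resource_quota(namespace="team-a", body=quota)
    print(f"Applied quota to {context}")
```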

“On-prem is not easy,” Goldberg said, and given that this is the first time the company is really supporting software in a data center that is not its own, that’s probably an understatement. But Google also decided that it didn’t want to force users into a specific set of hardware specifications like Azure Stack does, for example. Instead, CSP sits on top of VMware’s vSphere server virtualization platform, which most enterprises already use in their data centers anyway. That surely simplifies things, given that this is a very well-understood platform.

Feb 20, 2019

New conflict evidence surfaces in JEDI cloud contract procurement process

For months, the drama has been steady in the Pentagon’s decade-long, $10 billion JEDI cloud contract procurement process. This week the plot thickened when the DOD reported that it has found new evidence of a possible conflict of interest, and has reopened its internal investigation into the matter.

“DOD can confirm that new information not previously provided to DOD has emerged related to potential conflicts of interest. As a result of this new information, DOD is continuing to investigate these potential conflicts,” Elissa Smith, Department of Defense spokesperson told TechCrunch.

It’s not clear what this new information is about, but The Wall Street Journal reported this week that senior federal judge Eric Bruggink of the U.S. Court of Federal Claims ordered that the lawsuit filed by Oracle in December would be put on hold to allow the DOD to investigate further.

From the start of the DOD RFP process, there have been complaints that the process itself was designed to favor Amazon, and that there were possible conflicts of interest on the part of DOD personnel. The DOD’s position throughout has been that it is an open process and that an investigation found no basis for the conflict charges. Something forced the department to rethink that position this week.

Oracle in particular has been a vocal critic of the process. Even before the RFP was officially opened, it was claiming that the process unfairly favored Amazon. In the court case, it made the conflict part clearer, claiming that an ex-Amazon employee named Deap Ubhi had influence over the process, a charge that Amazon denied when it joined the case to defend itself. Four weeks ago something changed when a single line in a court filing suggested that Ubhi’s involvement may have been more problematic than the DOD previously believed.

At the time, I wrote:

In the document, filed with the court on Wednesday, the government’s legal representatives sought to outline its legal arguments in the case. The line that attracted so much attention stated, “Now that Amazon has submitted a proposal, the contracting officer is considering whether Amazon’s re-hiring Mr. Ubhi creates an OCI that cannot be avoided, mitigated, or neutralized.” OCI stands for Organizational Conflict of Interest in DoD lingo.

And Pentagon spokesperson Heather Babb told TechCrunch:

During his employment with DDS, Mr. Deap Ubhi recused himself from work related to the JEDI contract. DOD has investigated this issue, and we have determined that Mr. Ubhi complied with all necessary laws and regulations.

Whether the new evidence the DOD has found refers to Ubhi’s rehiring by Amazon is not clear at the moment, but the department clearly has something new it wants to explore in this case, and that has been enough to put the Oracle lawsuit on hold.

Oracle’s court case is the latest in a series of actions designed to protest the entire JEDI procurement process. The Washington Post reported last spring that co-CEO Safra Catz complained directly to the president. The company later filed a formal complaint with the Government Accountability Office (GAO), which it lost in November when the department’s investigation found no evidence of conflict. It finally made a federal case out of it when it filed suit in federal court in December, accusing the government of an unfair procurement process and a conflict on the part of Ubhi.

The cloud deal itself is what is at the root of this spectacle. It’s a 10-year contract worth up to $10 billion to handle the DOD’s cloud business — and it’s a winner-take-all proposition. There are three out clauses, which means it might never reach that number of years or dollars, but it is lucrative enough, and could possibly provide inroads for other government contracts, that every cloud company wants to win this.

The RFP process closed in October and the final decision on vendor selection is supposed to happen in April. It is unclear whether this latest development will delay that decision.

Feb 20, 2019

Arm expands its push into the cloud and edge with the Neoverse N1 and E1

For the longest time, Arm was basically synonymous with chip designs for smartphones and very low-end devices. But more recently, the company launched solutions for laptops, cars, high-powered IoT devices and even servers. Today, ahead of MWC 2019, the company is officially launching two new products for cloud and edge applications, the Neoverse N1 and E1. Arm unveiled the Neoverse brand a few months ago, but it’s only now that it is taking concrete form with the launch of these new products.

“We’ve always been anticipating that this market is going to shift as we move more towards this world of lots of really smart devices out at the endpoint — moving beyond even just what smartphones are capable of doing,” Drew Henry, Arm’s SVP and GM for Infrastructure, told me in an interview ahead of today’s announcement. “And when you start anticipating that, you realize that those devices out of those endpoints are going to start creating an awful lot of data and need an awful lot of compute to support that.”

To address these two problems, Arm decided to launch two products: one that focuses on compute speed and one that is all about throughput, especially in the context of 5G.

ARM NEOVERSE N1

The Neoverse N1 platform is meant for infrastructure-class solutions that focus on raw compute speed. The chips should perform significantly better than previous Arm CPU generations meant for the data center, and the company says that it saw speedups of 2.5x for Nginx and Memcached, for example. Chip manufacturers can optimize the 7nm platform for their needs, with core counts that can reach up to 128 cores (or as few as 4).

“This technology platform is designed for a lot of compute power that you could either put in the data center or stick out at the edge,” said Henry. “It’s very configurable for our customers so they can design how big or small they want those devices to be.”

The E1 is also a 7nm platform, but with a stronger focus on edge computing use cases where you also need some compute power to maybe filter out data as it is generated, but where the focus is on moving that data quickly and efficiently. “The E1 is very highly efficient in terms of its ability to be able to move data through it while doing the right amount of compute as you move that data through,” explained Henry, who also stressed that the company made the decision to launch these two different platforms based on customer feedback.

There’s no point in launching these platforms without software support, though. A few years ago, that would have been a challenge because few commercial vendors supported their data center products on the Arm architecture. Today, many of the biggest open-source and proprietary projects and distributions run on Arm chips, including Red Hat Enterprise Linux, Ubuntu, Suse, VMware, MySQL, OpenStack, Docker, Microsoft .Net, DOK and OPNFV. “We have lots of support across the space,” said Henry. “And then as you go down to that tier of languages and libraries and compilers, that’s a very large investment area for us at Arm. One of our largest investments in engineering is in software and working with the software communities.”

And as Henry noted, AWS also recently launched its Arm-based servers — and that surely gave the industry a lot more confidence in the platform, given that the biggest cloud supplier is now backing it, too.

Feb 20, 2019

Why Daimler moved its big data platform to the cloud

Like virtually every big enterprise company, a few years ago, the German auto giant Daimler decided to invest in its own on-premises data centers. And while those aren’t going away anytime soon, the company today announced that it has successfully moved its on-premises big data platform to Microsoft’s Azure cloud. This new platform, which the company calls eXtollo, is Daimler’s first major service to run outside of its own data centers, though it’ll probably not be the last.

As Guido Vetter, who heads Daimler’s corporate center of excellence for advanced analytics and big data, told me, the company started getting interested in big data about five years ago. “We invested in technology — the classical way, on-premise — and got a couple of people on it. And we were investigating what we could do with data because data is transforming our whole business as well,” he said.

By 2016, the size of the organization had grown to the point where a more formal structure was needed to enable the company to handle its data at a global scale. At the time, the buzz phrase was “data lakes” and the company started building its own in order to build out its analytics capacities.

Electric lineup, Daimler AG

“Sooner or later, we hit the limits as it’s not our core business to run these big environments,” Vetter said. “Flexibility and scalability are what you need for AI and advanced analytics and our whole operations are not set up for that. Our backend operations are set up for keeping a plant running and keeping everything safe and secure.” But in this new world of enterprise IT, companies need to be able to be flexible and experiment — and, if necessary, throw out failed experiments quickly.

So about a year and a half ago, Vetter’s team started the eXtollo project to bring all the company’s activities around advanced analytics, big data and artificial intelligence into the Azure Cloud, and just over two weeks ago, the team shut down its last on-premises servers after slowly turning on its solutions in Microsoft’s data centers in Europe, the U.S. and Asia. All in all, the actual transition between the on-premises data centers and the Azure cloud took about nine months. That may not seem fast, but for an enterprise project like this, that’s about as fast as it gets (and for a while, it fed all new data into both its on-premises data lake and Azure).

If you work for a startup, then all of this probably doesn’t seem like a big deal, but for a more traditional enterprise like Daimler, even just giving up control over the physical hardware where your data resides was a major culture change and something that took quite a bit of convincing. In the end, the solution came down to encryption.

“We needed the means to secure the data in the Microsoft data center with our own means that ensure that only we have access to the raw data and work with the data,” explained Vetter. In the end, the company decided to use the Azure Key Vault to manage and rotate its encryption keys. Indeed, Vetter noted that knowing that the company had full control over its own data was what allowed this project to move forward.
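
For a sense of what that key management looks like in practice, here is a minimal Python sketch using the azure-keyvault-keys SDK. The vault URL and key name are placeholders, and this is not a description of Daimler’s actual configuration, which isn’t public.

```python
# Minimal sketch (not Daimler's actual setup): create and "rotate" a
# customer-managed RSA key in Azure Key Vault. The vault URL and key name
# are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

vault_url = "https://example-vault.vault.azure.net"  # placeholder vault
key_client = KeyClient(vault_url=vault_url, credential=DefaultAzureCredential())

# Create a customer-managed RSA key.
key = key_client.create_rsa_key("data-lake-key", size=2048)
print(key.name, key.properties.version)

# Rotate by creating a new version under the same name; services that
# reference the key by name pick up the latest version.
new_version = key_client.create_rsa_key("data-lake-key", size=2048)
print(new_version.properties.version)
```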

Vetter tells me the company obviously looked at Microsoft’s competitors as well, but he noted that his team didn’t find a compelling offer from other vendors in terms of functionality and the security features that it needed.

Today, Daimler’s big data unit uses tools like HDInsight and Azure Databricks, which cover more than 90 percent of the company’s current use cases. In the future, Vetter also wants to make it easier for less experienced users to use self-service tools to launch AI and analytics services.
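
As a hedged illustration of the kind of workload that runs on Azure Databricks, here is a short PySpark sketch that reads vehicle telemetry from storage and aggregates error codes; the storage path and column names are invented for the example and are not Daimler’s actual schema.

```python
# Illustrative PySpark sketch only: the storage path and column names are
# assumptions made up for the example.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("error-code-summary").getOrCreate()

# Read telemetry written as Parquet to Azure Data Lake Storage (placeholder path).
telemetry = spark.read.parquet(
    "abfss://telemetry@exampleaccount.dfs.core.windows.net/vehicles/"
)

# Count the most common error codes per model.
error_summary = (
    telemetry
    .where(F.col("error_code").isNotNull())
    .groupBy("model", "error_code")
    .count()
    .orderBy(F.desc("count"))
)

error_summary.show(20)
```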

While cost is often a factor that counts against the cloud, because renting server capacity isn’t cheap, Vetter argues that this move will actually save the company money and that storage costs, especially, are going to be cheaper in the cloud than in its on-premises data center (and chances are that Daimler, given its size and prestige as a customer, isn’t exactly paying the same rack rate that others are paying for the Azure services).

As with so many big data AI projects, predictions are the focus of much of what Daimler is doing. That may mean looking at a car’s data and error code and helping the technician diagnose an issue or doing predictive maintenance on a commercial vehicle. Interestingly, the company isn’t currently bringing to the cloud any of its own IoT data from its plants. That’s all managed in the company’s on-premises data centers because it wants to avoid the risk of having to shut down a plant because its tools lost the connection to a data center, for example.

Feb 19, 2019

Google acquires cloud migration platform Alooma

Google today announced its intention to acquire Alooma, a company that allows enterprises to combine all of their data sources into services like Google’s BigQuery, Amazon’s Redshift, Snowflake and Azure. The promise of Alooma is that it handles the data pipelines and manages them for its users. In addition to this data integration service, though, Alooma also helps with migrating to the cloud, cleaning up this data and then using it for AI and machine learning use cases.
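
To give a sense of the destination side of such a pipeline, here is a minimal sketch that batch-loads records from Cloud Storage into BigQuery with Google’s Python client. This is not Alooma’s own API; the bucket, table and job settings are placeholder assumptions.

```python
# Sketch of the destination side of a data pipeline (not Alooma's API):
# batch-load newline-delimited JSON from Cloud Storage into BigQuery.
# Bucket, table and settings are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # let BigQuery infer the schema for this sketch
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/events/2019-02-19/*.json",  # placeholder source URI
    "example_project.analytics.events",              # placeholder destination table
    job_config=job_config,
)
load_job.result()  # wait for the load to finish
print(f"Loaded {load_job.output_rows} rows")
```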

“Here at Google Cloud, we’re committed to helping enterprise customers easily and securely migrate their data to our platform,” Google VP of engineering Amit Ganesh and Google Cloud Platform director of product management Dominic Preuss write today. “The addition of Alooma, subject to closing conditions, is a natural fit that allows us to offer customers a streamlined, automated migration experience to Google Cloud, and give them access to our full range of database services, from managed open source database offerings to solutions like Cloud Spanner and Cloud Bigtable.”

Before the acquisition, Alooma had raised about $15 million, including an $11.2 million Series A round led by Lightspeed Venture Partners and Sequoia Capital in early 2016. The two companies did not disclose the price of the acquisition, but chances are we are talking about a modest price, given how much Alooma had previously raised.

Neither Google nor Alooma said much about what will happen to the existing products and customers — and whether it will continue to support migrations to Google’s competitors. We’ve reached out to Google and will update this post once we hear more.

Update: Here is Google’s statement about the future of the platform:

For now, it’s business as usual for Alooma and Google Cloud as we await regulatory approvals and complete the deal. After close, the team will be joining us in our Tel Aviv and Sunnyvale offices, and we will be leveraging the Alooma technology and team to provide our Google Cloud customers with a great data migration service in the future.

Regarding supporting competitors, yes, the existing Alooma product will continue to support other cloud providers. We will only be accepting new customers that are migrating data to Google Cloud Platform, but existing customers will continue to have access to other cloud providers.

So going forward, Alooma will not accept any new customers who want to migrate data to any competitors. That’s not necessarily surprising, and it’s good to see that Google will continue to support Alooma’s existing users. Those who use Alooma in combination with AWS, Azure and other non-Google services will likely start looking for other solutions soon, though, as this also means that Google isn’t likely to develop the service for them beyond its current state.

Alooma’s co-founders do stress, though, that “the journey is not over. Alooma has always aimed to provide the simplest and most efficient path toward standardizing enterprise data from every source and transforming it into actionable intelligence,” they write. “Joining Google Cloud will bring us one step closer to delivering a full self-service database migration experience bolstered by the power of their cloud technology, including analytics, security, AI, and machine learning.”

Feb 19, 2019

Redis Labs raises a $60M Series E round

Redis Labs, a startup that offers commercial services around the Redis in-memory data store (and which counts Redis creator and lead developer Salvatore Sanfilippo among its employees), today announced that it has raised a $60 million Series E funding round led by private equity firm Francisco Partners.

The firm didn’t participate in any of Redis Labs’ previous rounds, but existing investors Goldman Sachs Private Capital Investing, Bain Capital Ventures, Viola Ventures and Dell Technologies Capital all participated in this round.

In total, Redis Labs has now raised $146 million and the company plans to use the new funding to accelerate its go-to-market strategy and continue to invest in the Redis community and product development.

Current Redis Labs users include the likes of American Express, Staples, Microsoft, Mastercard and Atlassian. In total, the company now has more than 8,500 customers. Because it’s pretty flexible, these customers use the service as a database, cache and message broker, depending on their needs. The company’s flagship product is Redis Enterprise, which extends the open-source Redis platform with additional tools and services for enterprises. The company offers managed cloud services that give businesses the choice of hosting on public clouds like AWS, GCP and Azure or on their own private clouds, in addition to traditional software downloads and licenses for self-managed installs.
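
As a quick illustration of that flexibility, here is a minimal redis-py sketch that uses a single Redis instance as both a cache and a message broker; the host, keys and channel are placeholders.

```python
# Minimal redis-py sketch: one Redis instance used as a cache and as a
# message broker. Host, key names and channel are placeholder assumptions.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Cache: store a computed value with a 60-second expiry, then read it back.
r.set("user:42:profile", json.dumps({"name": "Ada", "plan": "pro"}), ex=60)
cached = r.get("user:42:profile")
print(json.loads(cached))

# Message broker: publish an event; a separate subscriber process would call
# r.pubsub().subscribe("orders") and read messages from that channel.
r.publish("orders", json.dumps({"order_id": 1001, "status": "created"}))
```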

Redis Labs CEO Ofer Bengal told me the company isn’t cash positive yet. He also noted that the company didn’t need to raise this round but that he decided to do so in order to accelerate growth. “In this competitive environment, you have to spend a lot and push hard on product development,” he said.

Bengal stressed that Francisco Partners has a reputation for taking companies forward and that the logical next step for Redis Labs would be an IPO. “We think that we have a very unique opportunity to build a very large company that deserves an IPO,” he said.

Part of this new competitive environment also involves competitors that use other companies’ open-source projects to build their own products without contributing back. Redis Labs was one of the first of a number of open-source companies that decided to offer its newest releases under a new license that still allows developers to modify the code but that forces competitors that want to essentially resell it to buy a commercial license. Ofer specifically notes AWS in this context. It’s worth noting that this isn’t about the Redis database itself but about the additional modules that Redis Labs built. Redis Enterprise itself is closed-source.

“When we came out with this new license, there were many different views,” he acknowledged. “Some people condemned that. But after the initial noise calmed down — and especially after some other companies came out with a similar concept — the community now understands that the original concept of open source has to be fixed because it isn’t suitable anymore to the modern era where cloud companies use their monopoly power to adopt any successful open source project without contributing anything to it.”

Feb 14, 2019

Zendesk just hired three former Microsoft, Salesforce and Adobe execs

Today, Zendesk announced it has hired three new executives — Elisabeth Zornes, former general manager of global support for Microsoft Office, as Zendesk’s first chief customer officer; former Adobe executive Colleen Berube as chief information officer and former Salesforce executive Shawna Wolverton as senior vice president, product.

The company emphasized that the hirings were about expanding the executive suite and bringing in top people to help the company grow and move into larger enterprise organizations.

From left to right: Shawna Wolverton, Colleen Berube and Elisabeth Zornes

Zornes comes to Zendesk with 20 years of experience including time at Microsoft working in a variety of roles around Microsoft Office. She says that what attracted her to Zendesk was its focus on the customer.

“When I look at businesses today, no matter what size, what type or what geography, they can agree on one thing: customer experience is the rocket fuel to drive success. Zendesk has positioned itself as a technology company that empowers companies of all kinds to drive a new level of success by focusing on their customer experience, and helping them to be at the forefront of that was a very intriguing opportunity for me,” Zornes told TechCrunch.

New CIO Berube, who comes with two decades of experience, also sees her new job as a chance to have an impact on customer experience and help companies that are trying to transform into digital organizations. “Customer experience is the linchpin for all organizations to succeed in the digital age. My background is broad, having shepherded many different types of companies through digital transformations, and developing and running modern IT organizations,” she said.

Her boss, CEO and co-founder Mikkel Svane, sees someone who can help continue to grow the company and develop the product. “We looked specifically for a CIO with a modern mindset who understands the challenges of large organizations trying to keep up with customer expectations today,” Svane told TechCrunch.

As for senior VP of product Wolverton, she comes with 15 years of experience, including a stint as head of product at Salesforce. She said that coming to Zendesk was about having an impact on a modern SaaS product. “The opportunity to build a modern, public, cloud-native CRM platform with Sunshine was a large part of my decision to join,” she said.

The three leaders have already joined the organization — Wolverton and Berube joined last month and Zornes started just this week.

Feb 14, 2019

Peltarion raises $20M for its AI platform

Peltarion, a Swedish startup founded by former execs from companies like Spotify, Skype, King, TrueCaller and Google, today announced that it has raised a $20 million Series A funding round led by Euclidean Capital, the family office for hedge fund billionaire James Simons. Previous investors FAM and EQT Ventures also participated, and this round brings the company’s total funding to $35 million.

There is obviously no dearth of AI platforms these days. Peltarion focuses on what it calls “operational AI.” The service offers an end-to-end platform that lets you do everything from pre-processing your data to building models and putting them into production. All of this runs in the cloud, and developers get access to a graphical user interface for building and testing their models. All of this, the company stresses, ensures that Peltarion’s users don’t have to deal with any of the low-level hardware or software and can instead focus on building their models.

“The speed at which AI systems can be built and deployed on the operational platform is orders of magnitude faster compared to the industry standard tools such as TensorFlow and require far fewer people and decreases the level of technical expertise needed,” Luka Crnkovic-Friis, Peltarion’s CEO and co-founder, tells me. “All this results in more organizations being able to operationalize AI and focusing on solving problems and creating change.”

In a world where businesses have a plethora of choices, though, why use Peltarion over more established players? “Almost all of our clients are worried about lock-in to any single cloud provider,” Crnkovic-Friis said. “They tend to be fine using storage and compute as they are relatively similar across all the providers and moving to another cloud provider is possible. Equally, they are very wary of the higher-level services that AWS, GCP, Azure, and others provide as it means a complete lock-in.”

Peltarion, of course, argues that its platform doesn’t lock in its users and that other platforms take far more AI expertise to produce commercially viable AI services. The company rightly notes that, outside of the tech giants, most companies still struggle with how to use AI at scale. “They are stuck on the starting blocks, held back by two primary barriers to progress: immature patchwork technology and skills shortage,” said Crnkovic-Friis.

The company will use the new funding to expand its development team and its teams working with its community and partners. It’ll also use the new funding for growth initiatives in the U.S. and other markets.

Feb 14, 2019

AWS announces new bare metal instances for companies who want more cloud control

When you think about Infrastructure as a Service, you typically pay for a virtual machine that resides in a multi-tenant environment. That means it’s using a set of shared resources. For many companies that approach is fine, but when a customer wants more control, they may prefer a single-tenant system where they control the entire set of hardware resources. This approach is also known as “bare metal” in the industry, and today AWS announced five new bare metal instances.

You end up paying more for this kind of service because you are getting more control over the processor, storage and other resources on your own dedicated underlying server. This is part of the range of products that all cloud vendors offer. You can have a vanilla virtual machine, with very little control over the hardware, or you can go with bare metal and get much finer-grained control over the underlying hardware, something that companies require if they are going to move certain workloads to the cloud.

As AWS describes it in the blog post announcing these new instances, these are for highly specific use cases. “Bare metal instances allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads not supported in virtual environments, and licensing-restricted Tier 1 business critical applications,” the company explained.

The five new products, called m5.metal, m5d.metal, r5.metal, r5d.metal and z1d.metal (catchy names there, Amazon), offer a variety of resources:

Chart courtesy of Amazon

These new offerings are available starting today as on-demand, reserved or spot instances, depending on your requirements.
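
For reference, here is a hedged boto3 sketch of launching one of the new instances on demand; the AMI ID, key pair and subnet are placeholder assumptions, not values from the announcement.

```python
# Sketch: launch an m5.metal instance on demand with boto3. The AMI ID,
# key pair and subnet below are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="m5.metal",
    MinCount=1,
    MaxCount=1,
    KeyName="example-keypair",             # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",   # placeholder subnet
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "bare-metal-test"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```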

Feb 12, 2019

Google and IBM still trying desperately to move cloud market-share needle

When it comes to the cloud market, there are few known knowns. For instance, we know that AWS is the market leader with around 32 percent of market share. We know Microsoft is far back in second place with around 14 percent, the only other company in double digits. We also know that IBM and Google are wallowing in third or fourth place, depending on whose numbers you look at, stuck in single digits. The market keeps expanding, but these two major companies never seem to get a much bigger piece of the pie.

Neither company is satisfied with that, of course; Google so much so that it moved on from Diane Greene at the end of last year, bringing in Oracle veteran Thomas Kurian to lead the division out of the doldrums. Meanwhile, IBM made an even bigger splash, plucking Red Hat from the market for $34 billion in October.

This week, the two companies made some more noise, letting the cloud market know that they are not ceding the market to anyone. For IBM, which is holding its big IBM Think conference this week in San Francisco, it involved opening up Watson to competitor clouds. For a company like IBM, this was a huge move, akin to when Microsoft started building apps for iOS. It was an acknowledgement that working across platforms matters, and that if you want to gain market share, you had better start thinking outside the box.

While becoming cross-platform compatible isn’t exactly a radical notion in general, it most certainly is for a company like IBM, which if it had its druthers and a bit more market share, would probably have been content to maintain the status quo. But if the majority of your customers are pursuing a multi-cloud strategy, it might be a good idea for you to jump on the bandwagon — and that’s precisely what IBM has done by opening up access to Watson across clouds in this fashion.

Clearly, buying Red Hat was about a hybrid cloud play, and if IBM is serious about that approach (and for $34 billion, it had better be), it would have to walk the walk, not just talk the talk. As IBM Watson CTO and chief architect Ruchir Puri told my colleague Frederic Lardinois about the move, “It’s in these hybrid environments, they’ve got multiple cloud implementations, they have data in their private cloud as well. They have been struggling because the providers of AI have been trying to lock them into a particular implementation that is not suitable to this hybrid cloud environment.” This plays right into the Red Hat strategy, and I’m betting you’ll see more of this approach in other parts of the product line from IBM this year. (Google also acknowledged this when it announced a hybrid strategy of its own last year.)

Meanwhile, Thomas Kurian had his coming-out party at the Goldman Sachs Technology and Internet Conference in San Francisco earlier today. Bloomberg reports that he announced a plan to increase the number of salespeople and train them to understand specific verticals, ripping a page straight from the playbook of his former employer, Oracle.

He suggested that his company would be more aggressive in pursuing traditional enterprise customers, although I’m sure his predecessor, Diane Greene, wasn’t exactly sitting around counting on inbound marketing interest to grow sales. In fact, rumor had it that she wanted to pursue government contracts much more aggressively than the company was willing to do. Now it’s up to Kurian to grow sales. Of course, given that Google doesn’t report cloud revenue, it’s hard to know what growth would look like, but perhaps if it has more success it will be more forthcoming.

As Bloomberg’s Shira Ovide tweeted today, it’s one thing to turn to the tried and true enterprise playbook, but that doesn’t mean that executing on that approach is going to be simple, or that Google will be successful in the end.

These two companies desperately want to alter their cloud fortunes, which have been fairly dismal to this point. The moves announced today are clearly part of a broader strategy to move the market share needle, but whether they can, or whether the market positions hardened long ago, remains to be seen.
